AI Careers: A Pascal’s Wager
Key Points
- The AI‑job debate (pessimists vs. optimists) is less important than treating the future as a “Pascal’s wager”: you should act as if any outcome is possible.
- Regardless of whether entry‑level roles disappear or expand, the single career imperative is to become better at solving high‑quality, complex problems.
- Strong agency and problem‑solving skills prepare you for both scenarios—whether you’ll manage fleets of AI agents or work in large enterprise environments where AI’s impact is limited.
- Engineering serves as a proxy for the broader tech ecosystem, so trends in engineering jobs will ripple through product, design, marketing, and other roles.
- Rather than debating predictions, focus on actionable steps that build deep problem‑solving ability, which remains valuable no matter how AI reshapes the job market.
Sections
- [00:00:00] Career Strategy Amid AI Uncertainty - The speaker likens choosing an AI‑focused career to Pascal’s wager, asserting that whether jobs vanish or multiply, the key to thriving is building strong agency by learning to solve high‑quality problems.
- [00:03:09] Cultivating High‑Agency Meta Skills - The speaker argues that mastering transferable meta‑skills—problem recognition, solution design, resource marshalling, execution, and integration—provides valuable agency now and safeguards careers against uncertain future job markets.
- [00:07:12] Beyond Code – Embracing Human Wisdom - The speaker argues that career success now depends less on technical showcases and more on taking agency and cultivating human skills—emotional clarity, discernment, and relational ability—as the economy shifts from pure knowledge to a wisdom‑focused, AI‑augmented landscape.
Source: https://www.youtube.com/watch?v=XqwfFbuZF-0 (Duration: 00:11:13)
Full Transcript
We need to talk about AI and jobs. And
no, I am not interested in the debate
between the pessimists who say jobs are
going away and the optimists who say
jobs are going to stay. I actually want
to take a different angle. I want to
suggest that the way forward is clear
for you and me regardless of which side
you take. This is like the Pascal's
wager of tech careers. Fundamentally,
the idea behind Pascal's wager is that
you kind of need to live your life a
certain way regardless of what you
believe. I think that's sort of the
insight we need to take for AI right
now. If you are a pessimist, if you
agree with Dario Amodei's take this week
that half of entry-level jobs are going
to go away, okay, that's what you
believe. If you're an optimist and you
believe, along with, say, Gergely Orosz,
that entry-level jobs may
actually scale, there's some evidence of
that. He's talked to folks at GitHub,
talked to folks at Shopify because
entry-level roles represent culture
change and people coming in who are
better at AI, etc. Great. That's what
you
believe. My point is this. Regardless of
which side you take on that bet, you
have a single problem to solve in your
career. You have to figure out how to
get better at solving high quality
problems because at the end of the day
if you have strong agency as a career
trait and you can solve high-quality
problems, you are ready whether you live
in Dario's world and you need to manage
fleets of agents, or whether you live in
Gergely's world and you have more
entry-level roles and you're working in
enterprise environments where, you know,
Cursor makes a marginal difference and
the code bases are just too large for AI,
and even though IQ has scaled, context
windows haven't scaled. Memory handling
hasn't scaled. And we still have to
have a lot of senior engineering work.
And by the way, I do regard engineering
work as a proxy for a lot of other
work. If you have to have a lot of
engineers at an enterprise, you have to
have a lot of other jobs that just go
with that: comms jobs, marketing jobs,
customer success jobs, product jobs,
designer jobs. And so, in a sense,
engineering is the core of tech. And if
the bet on engineering goes one way or
the other, the rest of the tech market
will follow and we should probably be
more honest about that. Now, are there
going to be differences here and there?
Yes. So, that's the first thing: I want
to be really honest that we live in a
world where I think we should talk about
this as a Pascal's wager problem. In other
words, we should behave as if we need to
prepare to solve high-quality problems
regardless of which way we think the
world is going to go. And let's be
honest, it kind of is a belief at this
point. You can point to evidence either
way. You can argue about which way it's
going. I'm not really interested in
having that debate here. And I think
that people who dive too deep on that
debate are missing the actionable steps
you can take to actually start to answer
the AI question in practice. What I
would call the agency principle.
Learning how to do problem recognition
well. Learning how to do solution design
well, learning how to marshal resources,
learning how to execute, learning how to
integrate. These are all things that you
can do regardless of whether you're in
engineering or other tasks. They're meta
skills and you need them. I've talked
about other meta-skills in the past, but
one of the things that keeps coming
through for me is this idea of solving
problems with high agency isn't going
away. And I think it's really really
important to recognize that and not
pretend that high agency has no value in
the future. And if you come back to me
and you say, "Well, it doesn't because
it's all going to go away
anyway." That's fine. But again, go back
to Pascal's wager. Imagine that you're
right. Would you have wanted to spend
the time between now and whenever you
believe that dark future will arrive
doing nothing and complaining about
it? Or would you rather prepare for a
world that you have some agency
over? I think regardless of what you
believe, that's the more interesting
place to be. It's also the less risky
place for your career because either
way, having more agency doesn't hurt
you. And waiting if you're wrong, if
Dario is incorrect, if Gergely is right,
waiting and doing nothing and saying the
world is going to end and jobs are going
to be over profoundly hurts your
career. I also want to be honest about
the fact that we need to talk more about
in-person skills
because interviews are beginning to
shift back in person. Work is beginning
to shift back in person. And that's very
deliberate, because people want to
hire you first for your problem-solving
skills, and then they need to check
that you know how to use AI, but they're
not hiring someone who can just read
answers off of
ChatGPT. Look, I'm sure if I were given the
appropriate time to prep with o3, I
could read off answers for a LeetCode
interview tomorrow. I don't think that
that makes me a particularly qualified
engineer in a lot of different places
and that's fine. The point is this. We
need to talk about emotional clarity,
discernment in a world drowning in data
and options, how you find signal, the
ability to craft connection with people.
You are getting flown into interviews
more and more these days. You are going
to be expected to be human because that
is the only guarantee people have that
you're not an AI. And that gets back to
how companies now are actually answering
this vexed question around signal versus
noise in the candidate
pipeline. And this is why I am not
telling you that everybody should go out
and vibe code a website and stick the
code on GitHub. Is that an answer for
some people? Sure. But the problem is if
you really ask yourself what you're
proving there, it is another way of
showing that you can solve problems that
is a step removed from the resume which
is traditionally where you did that. And
the reason why the resume is useless is
because chat GPT has essentially made
every resume perfect. And so in a world
where every resume is perfect, it offers
no signal. And in a world where
everybody vibe codes something and
sticks it on GitHub, it also offers
no signal. Now, I do still think there's
some signal there because it is harder
to replicate working code, even if
you're vibe coding, than it is to build
a resume. You can get a perfect resume
out of ChatGPT in 2 minutes. You cannot
get a perfect vibe coded website that
functions, that draws users in 2
minutes. And so there is still some
signal. I am certainly not one to say
that you should not learn to vibe code
or you should not learn to build. I'm a
big fan of that. I teach a class on
that. Not what I'm saying. But I am
calling out that the keys to employment
long-term are not these specific skills.
It's not the ability to use Lovable per
se. It's not the fact that you have a
GitHub repo with a Lovable-coded project
per se. It is the fact that you are
taking agency over your career and
showing you can solve problems across a
wide range of tools, across a wide range
of problem sets, coupled with the
human skills that enable you to function
effectively as a human in the workplace.
And by the way, this human skill thing
is not just something I'm making up. Joe
Hudson published a piece on Every on
Thursday talking about the idea that we
are moving from a knowledge economy to a
wisdom economy. It sounds maybe a little
bit clickbaity, but I get the idea.
Fundamentally, if ChatGPT is good at
knowing facts, maybe we have to go back
200 centuries and talk about this idea
of elders and wisdom and humans gaining
wisdom and that becomes something that
is useful to us as humans in an AI
economy. Interesting thesis and I think
regardless of what you think about it,
the advice to look at human skills is
helpful because that is where workplaces
are starting to go. And by the way, if
you get better at emotional clarity, if
you get better at discernment in a world
drowning in data, if you get better at
crafting connections with people, that
does translate digitally as well. You
don't lose out. It's another Pascal's
Wager situation. Getting better at it is
always
good. So, where does all of this leave
us? I feel like what I want you to
take
away is the concept that you don't have
to pick a belief structure or a side
about the future of AI in order to take
steps that you know are going to be
positive for you and for your career.
You can work on those emotional
people skills. You can work on problem
solving, high agency, proactivity in
what you do, which sounds like a
buzzword, but I promise you, if you've seen
it, you know it's a big deal when you
can find someone who has it. High agency
people are incredible. They run through
walls, and it's not because they
overwork. It's because they know how to
run around obstacles.
And so my call to action to you
is in a world where people will try to
make you afraid a lot, be the person who
is willing to take action for your
career and not the person who buys the
fear because I think that is very high
risk. It's high risk for you personally.
Dario Amodei can say that, and if he is
wrong, he still makes billions of
dollars. But if he is wrong and people
believe him, the people who spiraled and
went into a fear cycle and didn't
prepare for their careers will be
profoundly damaged over the long term.
Their career prospects are
affected. And I am not saying that he
didn't have good intentions. He's
calling explicitly for larger efforts
beyond a private company. I get why he
did it. But the risk is real because
what I see in practice is that
statements like Dario made on Wednesday
and Thursday of this
week don't provoke attention from the
government which he's asking for. They
provoke attention from the media. They
provoke attention on TikTok, and most of
it is a shark feeding frenzy of
fear, and there's not a lot of productive
discussion there. So this is my
response. I don't need you to believe
that the jobs will be better and it will
be an amazing future. Maybe that's a
step too far for you. But you can
believe that working on these skill sets
is going to have value because even if
you disagree and you're like a total
pessimist, it's still the rational
choice. It's still the correct bet for
you for your career selfishly. So that's
my little soapbox. Hopping off my
little soapbox. Cheers.