# AGI, Job Loss, and Paradoxes

**Source:** [https://www.youtube.com/watch?v=053O3UkfC3k](https://www.youtube.com/watch?v=053O3UkfC3k)
**Duration:** 00:11:54

## Key Points

- The speaker defines artificial general intelligence (AGI) as an AI system that can perform virtually all economically valuable work, noting that current chatbots are far from this level.
- While many fear that ubiquitous AGI will cause total job loss and push societies toward universal basic income or token‑ownership models, the speaker argues this panic overlooks the nuanced ways AI will affect different occupations.
- Existing studies on AGI’s economic impact are criticized for treating the technology as a single, interchangeable variable, ignoring the “ragged edge” where AI performance varies across job families.
- To properly anticipate AGI’s consequences, the speaker highlights the need to incorporate economic concepts like Jevons Paradox (increased efficiency leading to higher overall consumption) and a second, less‑known idea, Moravec’s Paradox.
- Recognizing these paradoxes suggests that greater AI productivity may actually expand demand for certain services rather than simply eliminate work, challenging the assumption of inevitable mass unemployment.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=053O3UkfC3k&t=0s) **Untitled Section**

## Full Transcript
we are going to do an entire post on job
loss and fears of job loss with
artificial general intelligence buckle
up your seat belts this is going to be
very in-depth so first I'm going to
start with a definition of artificial
general intelligence that OpenAI is
using I think it's
useful roughly speaking if an AI system
is widely deployed and is capable of
doing almost all valuable work that
humans do we should call it an artificial
general intelligence by the way that
includes yard work it includes car
repair it includes the physical services
that humans do that are economically
valuable that AI is not close to
touching
today by that definition there is no way
the chatbot that is on your laptop is
close to artificial general intelligence
so we will start
there what happens in a world where
artificial general intelligence truly
becomes ubiquitous and we could even
give ourselves an easier bar what
happens if it can't do physical work but
knowledge work it just becomes really
good at doing most economically valuable
work I have seen the studies I have seen
doomers on YouTube I have seen panicked
people in the TikTok comments on my
TikTok channel they all essentially say
the same thing if we were to get
artificial general
intelligence we would all lose our jobs
there would be no economically valuable
work left to be done there would be
widespread unemployment and the only way
to prevent massive societal disruption
would be Universal basic income or
perhaps tokenomics where you are all
sort of tiny investors in the AI that is
doing the actual work of
society I
disagree I have disagreed for a long
time but I don't think I've stated it as
plainly as I am here and the reason I'm
stating it really clearly is because I
am tired of seeing study after study
that treats the most crucial technology
we are likely to see in our lifetime as
if it were a
commodity as if it were something that
is a single variable to plug into a
study AGI the variable you plug it into
studies and then you can like work your
math equation and you can see what
happens that is not how actual
artificial intelligence rollout is
going to go we know it's going to be a
ragged Edge we know that AI That's
relevant for particular job families is
going to look very different we know all
of that already why don't our studies
cover that and I'm not done yet that is
just one initial critique of where the
studies are falling down but there are
two much more foundational pieces that
the studies aren't taking account of
that we desperately need to fully
understand they are Jevons Paradox and
Moravec's Paradox and we are going to talk
about both because they are crucial for
understanding this moment Jevons was an
economist thinker in the 19th century he
was writing about coal Coal at the time
was very valuable and if coal became
more abundant the thinking went that
demand would stay flat like you have
more coal there's only so much you can
do with coal you can burn it but like
how much use do you really have probably
like it will become worth
less Jevons observed that that wasn't
true as the abundance of the commodity
increased demand for that commodity rose
that is a foundational insight into how
the way humans interact with technology
works I want you to think about the
internet
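The coal observation can be made concrete with a toy price-elasticity calculation. This is a minimal sketch, not a real economic model: the efficiency gain and elasticity numbers below are hypothetical, chosen only to show when the Jevons pattern kicks in.

```python
# Jevons Paradox, sketched: when efficiency rises, the effective price of
# the service a resource provides falls. If demand for that service is
# elastic (elasticity > 1), total resource use goes UP, not down.

def resource_use(efficiency_gain: float, elasticity: float) -> float:
    """Relative resource consumption after an efficiency improvement.

    efficiency_gain: 2.0 means each unit of resource now delivers twice
                     the service, so the effective price of service halves.
    elasticity:      price elasticity of demand for the service, using a
                     constant-elasticity demand curve (a simplification).
    """
    effective_price = 1.0 / efficiency_gain            # service gets cheaper
    service_demand = effective_price ** (-elasticity)  # demand responds to price
    return service_demand / efficiency_gain            # resource needed to meet it

# Inelastic demand: doubling efficiency cuts total resource use.
print(resource_use(2.0, 0.5))  # ~0.71, consumption falls
# Elastic demand: doubling efficiency RAISES total resource use.
print(resource_use(2.0, 1.5))  # ~1.41, the Jevons Paradox
```

The crossover sits at an elasticity of 1: below it, efficiency saves the resource; above it, the cheaper service pulls total consumption up, which is the pattern the transcript describes for coal, the internet, and renewable energy.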
we had I kid you not an original
application for the internet that was
watching a coffee maker to see if it had
coffee in it you can look it up have we
found other things to do with the
internet since then we have we are
really really good at finding new
utility when supply of something useful
grows another example right now
currently is renewable energy we are
producing so much renewable energy
everybody's projections keep breaking it
looks like a vertical line it is
absolutely insane how much renewable
energy is being produced every year
nobody can get the projections correct
everybody keeps predicting a tapering
off and it's just not happening because
we keep finding more use for renewable
energy it's Jevons
Paradox but I have not yet seen any
studies that actually take account of
Jevons Paradox when something like
intelligence becomes cheaper and more
abundant and you would think that would
be a relevant thing to talk about
because
anecdotally I am seeing it happen all
over the place this is not just for me
by the way this is what I've observed
working with dozens of people over the
last few years hundreds of people
speaking to many
people when we use chatbots at work in
our personal lives they are not one for
one replacing
people they are actually doing work that
would not get done otherwise we are
living out Jevons Paradox I give my
chatbot things to do that would just not
get done otherwise like there nobody
would do it it just wouldn't
happen and yet we don't take account of
that when we do our studies on the
projections for the future of artificial
general intelligence like it makes no
sense like we need to fully load in the
idea of Jevons Paradox to really
understand what the future looks like
and it's one big reason why I'm more
bullish on the future of jobs than a lot
of the other people talking about AI
right
now Paradox number two Moravec's Paradox this
one specifically about computer systems
in AI a few decades ago Moravec observed
that it is very very easy to teach a
machine to do things that humans find
difficult and very very hard to teach a
machine to do things that humans find easy a
few examples chess I find it hard to
play chess even though I really enjoy it
I still remember as a kid when a machine
named Deep Blue from IBM beat Garry
Kasparov I was so
impressed it was relatively easy for the
machine and we've since built machines
that are even better at chess and now
they've solved go or almost solved go I
don't know the point is these tasks that
humans find really hard are things that
machines find easy and you can find
numerous other examples like
that on the other side humans find it
relatively easy to walk most of us most
humans find it relatively easy to catch
a ball these are things that machines
find really really hard and I'll go into
knowledge work because you might say
well this is all physical stuff right
like what are we doing with knowledge
work humans find negotiating politics
and stakeholder Management in the
internal people dynamics of a business
relatively easy now some of us are
really good at it some of us are okay at
it some of us we kind of know are
not great at it but we know it exists
and our Baseline level of fluency far
exceeds what you could do with a machine
and it's not that hard for us we don't
really think about it a lot we are
basically taking the social dynamics and
cues we learn as small children in
family and social situations and we're
applying them in the organization
outside the scope of this YouTube the
point is we find it easy the machine
finds it hard it's Moravec's
Paradox so think about it right now
is ChatGPT better than me at
remembering every single fact about
product management yes it is and I've
been doing product management a long
time it's unquestionably better than me
at remembering all of the
facts but would I hire ChatGPT to
replace me
no because all of the other things that
go into more of
Paradox it can't do it's not close to
doing it can't have a complex
conversation about timing another
conversation relative to a particular
event that's happening in the calendar
so we can maximize team dynamics it can't
have a conversation about how to best
align the constraints we face in a sales
environment with value propositions and
deal rooms and figure out what that
means for what we build next it can't
even agentically prototype without a
lot of help right now will it get better
at some of that it
will but my point stands Moravec's
Paradox means that things that we find
easy including in the work environment
are hard for systems to learn and if
they're hard to learn they're hard to
learn well they're hard to learn in an
AGI sense in the sense that it would
cover most economically useful work
because that requires a very high degree
of
completeness so yeah I think Moravec's
Paradox matters for AI I think Jevons
Paradox matters for AI I have yet to see
any projections of the future of the
labor market take these really seriously
and I'm not saying I have all of the
answers I'm just saying if this is a
transformative technology it deserves to
be studied carefully and specifically
and not just as one more variable thrown
out in a YouTube video for clicks
and I'm tired of that and I'm tired of
studies that project off of flawed and
incomplete assumptions I think we can do
better I think this technology is
important enough that it deserves a
better
look and so if you are worried about AI
and
jobs if you are in an economist role
this is hopefully some fodder for
thought if you are worried about AI and
jobs and you're just a white collar
professional maybe this is encouraging
but I also hope it's something that you
reply to that you share that you save
because from what I've seen this
question that I am spending 10 minutes
talking about is the question everyone
has under their breath and no one really
wants to talk about except by saying
Doom and Gloom it deserves a wider
conversation we deserve to have a more
nuanced conversation about what
assumptions hold true as AI systems grow
in capability
unless you think that I'm the only
one who thinks about this I'm going to
call out something that's a small hint
that OpenAI themselves may be thinking
a little bit
more thoughtfully about how long they
have to keep employing people they have
recently changed their policy this was
in the summer around how they handle
stock grants for people who choose to
leave the company and long story short
they are making it easier for people who
leave to continue to get value based on
their tenure and the equity that's
vested in other words they are expecting
a future for a long while to come where
people will be employed at OpenAI for a
period and then leave that is a normal
employment
pattern they are not expecting the end
of all things and if they're not
expecting it why are we
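As a closing illustration of the single-variable critique, here is a toy calculation contrasting an aggregate-AGI model with a ragged-edge, per-job-family model. Every number (the employment shares, capability scores, and Jevons-style demand multipliers) is hypothetical; the point is only that the two modeling choices give different answers, not that either figure is a real labor-market projection.

```python
# Toy contrast: AI capability as one aggregate number vs. a "ragged edge"
# that differs by job family. All figures below are invented for illustration.
# Each entry: (employment share, AI capability on that family's tasks,
#              Jevons-style multiplier for extra demand once work gets cheaper)
job_families = {
    "routine knowledge work":        (0.30, 0.8, 1.6),
    "negotiation/stakeholder work":  (0.25, 0.2, 1.1),
    "physical services":             (0.30, 0.1, 1.0),
    "creative problem solving":      (0.15, 0.4, 1.4),
}

# Naive aggregate model: average capability applied uniformly to all work.
naive_displacement = sum(share * cap for share, cap, _ in job_families.values())

# Ragged-edge model: displacement computed per family, partly offset by
# induced demand (the Jevons effect absorbing freed capacity).
ragged_displacement = sum(
    share * cap / jevons
    for share, cap, jevons in job_families.values()
)

print(f"naive aggregate displacement: {naive_displacement:.0%}")
print(f"ragged-edge displacement:     {ragged_displacement:.0%}")
```

With these made-up inputs the aggregate model predicts noticeably more displacement than the per-family model, which is the transcript's complaint in miniature: how you slice the capability variable changes the headline number.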