Incentives, AI, and the Future of Humane Tech
Key Points
- The speakers argue that “humane technology” sounds contradictory, noting that social media—while initially praised for connecting people—has become the least humane platform due to its design.
- They trace social media’s problems back to its core incentive structure: maximizing eyeballs, engagement, and stickiness, which has been weaponized for everything from children’s self‑image to politics and democracy.
- Understanding those incentives, they claim, is essential for forecasting how AI will reshape society, because AI will accelerate the same incentive‑driven dynamics at a far greater scale.
- The conversation highlights that AI differs from previous technologies in its speed and scope, with figures like DeepMind’s CEO warning it could be humanity’s “last invention” if its power isn’t deliberately guided.
- Consequently, the speakers call for clear tools, ethical frameworks, and public awareness to ensure the future AI‑driven world aligns with humane values rather than profit‑driven manipulation.
Sections
- [00:00:00] The Myth of Humane Social Media - The speaker argues that despite initial optimism, social platforms have become driven by engagement‑maximizing incentives that exploit human psychology, turning the promise of humane technology into a tool for market dominance and societal manipulation.
- [00:03:20] AI Strip‑Mining Human Achievement - A speaker argues that AI firms harvest centuries of human knowledge as data, claim it as their own intellectual property, and aim to automate every job, sparking concerns about wealth concentration and widespread worker displacement.
- [00:06:28] AI as Tax on Labor - The speaker argues that corporations will replace human workers with AI to cut costs, treating human labor as a tax and concentrating the entire economy’s wealth in a few AI‑driven companies.
- [00:09:54] AI Companions Threaten Human Interaction - The excerpt warns that corporate‑driven AI, marketed as personal therapy or companionship, is being misused to sexualize minors, enable suicidal behavior, and undermine genuine human connection.
- [00:13:00] AI Risks and Metric Uncertainty - The speakers debate the potential harms of advanced AI—psychological, societal, and safety threats—while lamenting the absence of clear metrics or regulatory guidance to assess and curb these dangers.
- [00:16:39] Responsible AI Over Geopolitical Competition - The speakers contend that, much like the Montreal Protocol’s proactive environmental safeguards, implementing strong AI liability laws, child protections, and whistleblower mechanisms is crucial for ethical advancement and for the United States to outpace China without reckless deployment.
Full Transcript
# Incentives, AI, and the Future of Humane Tech

**Source:** [https://www.youtube.com/watch?v=675d_6WGPbo](https://www.youtube.com/watch?v=675d_6WGPbo)
**Duration:** 00:18:26

## Full Transcript
This is... uh, "humane technology" feels slightly oxymoronic. But explain this idea of humane technology, and are we getting any of that?
>> Well, clearly social media was the most humane and beneficial technology we've ever invented.
>> Every time I go on Twitter and find out I'm Jewish, it absolutely...
>> Well, I think it's important to ask: how did we get social media wrong? Because we were so optimistic. It's going to connect us with our friends. We're going to join like-minded communities.
>> And, to be fair, it did do those things. It does some of that.
>> It does some of those things. But I want
to take you back. So in 2013, I was at
Google. I was a lot younger.
>> You're supposed to use an old timey
voice to do that.
>> And I was a design ethicist; they had acquired my company. And I was sitting there, and I basically realized, when I saw all of my colleagues on the bus scrolling Facebook constantly, that the incentives were the thing that was going to determine the world that we got. The incentive of social media was the race to maximize eyeballs and engagement. Whatever's sticky, whatever gets people's attention, whatever's salacious. You run children's development and self-image through that. You run politics through that. You run media through that. You run information and democracy through that, purposefully. Their goal was market dominance: we need to own as much of the global psychology of humanity as we possibly can.
>> Is that on the... because I don't remember that on the...
>> That wasn't on the box.
>> Not on... that's not on the masthead: "We must dominate."
>> Yeah. Well, so I think this is the
thing. So the reason it's so important
to get clear about this
>> is that we need to get extraordinarily clear about which world we're going to end up with in AI, because it is going a million times faster and it is way more powerful. So we need the tools to understand and predict which future we're going to get,
>> and I want people to know that if you know the incentive, you can predict the outcome,
>> and we know the incentive. But it does seem as though AI is making social media algorithms look almost quaint. It's quaint
>> when you think about AI. But let me... so you say it's important for us to know the incentives.
>> They won't tell us that.
>> Well,
>> There's something about: it's ours.
>> So they're democratizing access. It's available. No. So, first of all, we should understand what makes AI different from every other kind of technology. Why is it so transformative? Why does Demis Hassabis, the CEO of Google DeepMind, say that it could be humanity's last invention? It's because...
>> Well, that doesn't sound good.
>> That doesn't sound very good, does it?
>> Well, I think there's actually...
>> Last anything doesn't sound good.
>> There's a non-apocalyptic version of what he's saying, which is that intelligence is what our brain does. And if you can automate everything a brain can do, you can automate future invention, future science, future technology development, everything that a human does. That's what their goal is.
>> Well, then what's our job?
>> Well, exactly. And that's only one of the major problems that we have to deal with: what are humans going to do? But they are racing to scale and grow these digital brains that, you know, two years ago couldn't do very much. And today they're passing the MCAT and the bar exam, taking jobs. They're among the top 200 programmers in the world, winning gold in the math Olympiad.
>> You don't... those guys.
>> Here's the thing that I don't
understand. Here's what I don't
understand. They are strip-mining the totality of human achievement.
>> That's right.
>> They're building their models off of everything that we've done for 10,000 years, and they fed it into the model, and then after two weeks the computer was like, "What else you got?"
>> Exactly.
>> But they are strip-mining everything we've done. And when we say to them, "And what are you doing with it?" they go, "Oh, that's our intellectual property." But our intellectual property...
>> It was trained on all of our data, all of the things and labor that we've done. And are you going to get a handout? When in history has a small group of people concentrated all the wealth and then consciously redistributed it to everybody?
>> The first part has happened.
>> I don't recall,
>> going through the rolls.
>> Well, it's important to note that their goal, the mission statement of OpenAI, Anthropic, all these companies, is to automate all human labor in the economy. Everything that a human can do, an AI can do. So, if you have a desk job, you won't have a job. And they're already releasing AIs that have cut entry-level work for college graduates by 13%, according to a new Stanford study. And this is obvious. If you're a law firm, are you going to hire a junior lawyer, who you have to pay a lot of money? Or are you going to hire GPT-5, which will do the work 24/7, non-stop? You don't have to pay healthcare. It will never whistleblow. Never complain. Works at superhuman speed. It wrote tonight's show.
>> It's doing a pretty good job. That brings up another point, which is that they say that they're here to solve climate change and cure cancer. So why is it that last week two companies released these AI slop apps, Vibes and Sora, which is basically...
>> Sora 2 scared the out of me.
>> Yeah.
>> You don't know what's real and what's... like, it is...
>> No, it's... well, it's all fake, basically. It's all generated by AI,
>> right? But it looks... you can see things that look...
>> they look identical to real.
>> That's right.
>> Yeah. But the point is that this is just an app where it's just nonsense. It's just people scrolling entertaining stuff. So it's like they're not even trying to pretend anymore that this is good for democracy or good for society. How are we going to beat China when everyone is just consuming AI-generated nonsense and no one knows what's true anymore? The biggest...
>> They have us by the, you know... Peter Thiel, who is with Palantir and these other companies and is one of the leading figures of this, he was talking about the Antichrist, and he was talking about how he thinks, this is his postulation, that those who would seek to regulate AI could very well be the Antichrist, right?
>> I mean, he says this seriously. Whereas you might sit there and go, like, I think it might be the guy saying that. That would be my reading of it.
>> Yeah, or AI itself. I mean, it's presenting the infinite benefits...
>> The conversations that they are having with each other are very different from the conversations they're having with us. Because to us, they go, "Hey, no more shitty jobs. Do you like to paint? You go paint. You're going to be so happy. We're going to give you money and maybe chocolates."
>> Yeah.
>> And to each other, they're saying AI represents, for corporate leaders, productivity without, and this is a quote,
>> Yeah.
>> the tax of human labor. Yep. Yeah.
>> He called human labor
>> a tax
>> a tax.
>> Well, and these companies... if you're sitting there and you can hire either an AI to do the work or pay these really expensive humans to do the work, I just want people to know we know exactly where this is going to go. These companies all have an incentive to cut costs, which means they're going to let go of human employees and they're going to hire AIs. And that's going to mean all the wealth... who are you going to pay? You're not paying the individual people anymore. You're paying five companies. That's right. And so this country of geniuses in a data center suddenly aggregates all of the wealth of the economy. Now people always say, "But humans find something else to do." We always, you know... we had the elevator man. Now we have the automated elevator.
We had the bank teller.
>> That's right.
>> But that was one industry.
>> That was one technology that automated one job. The difference with AI is it can automate literally all kinds of human labor. When Elon Musk says that Optimus Prime...
>> Familiar with that name. Tell me more.
>> When Elon Musk says that Optimus Prime, that one robot, is going to be a $25 trillion market opportunity, what he's saying is: we will own the world economy. And that's what the goal of all these AI companies is. It's not just benefiting society. It's that they're actually caught in this arms race to get to this prize: own the economy, build a god, and make trillions of dollars.
>> Two things. One, I think they think they're gods. There is a certain amount of...
>> It generates that.
>> The goal there is, they're not looking to help humanity. They're looking to be the next monarch of the new technology. To control that is to control all...
>> I... yeah. Go ahead.
>> No, you jump in, because, you know, I don't know.
>> Well, I think there are different motivations for different leaders, and I do think that many people want the benefits of AI. But some of the leaders of the labs... take Elon Musk, whatever you might think about Elon: he actually wanted everyone to stop and not build this. He said we shouldn't summon the demon. And then what happened is all of these companies are now racing and have made so much progress that he felt like, well, I might as well join them rather than try to prevent this.
>> Well, it's... let's not summon the demon, but what's one more demon? You know, since we have the demons, add another demon.
>> Well, and the moral logic is: well, if I don't trust the other AI CEO, who I don't think is trustworthy, and I think I'm better than them at stewarding this power, then it's my moral obligation to get there first and to build this god and to own everything. They make themselves, then, masters of the universe.
>> And are they substituting, then, the wisdom of liberal democracy, or republics, or any system we've ever had, for this? Because we're talking about two tracks. One is
>> the disruption in labor.
>> Yeah,
>> I think there's no question that's going to be immense. We're seeing it already. You're seeing it in schools. There's a reliance on it as a crutch, and it's very easy to see where that might flip over. The second is how they manipulate the opinion and the mood of the world around that. And I think they're two separate things.
>> One is what it's going to do for corporate production. The second is what it's going to do for the human endeavor, for interaction.
>> Yes. Well, and they're trying to colonize all human interaction. I mean, just take the social media incentive, the race for eyeballs. You're seeing now all of these companies release these AI companions. You know, the number one use case for ChatGPT, according to Harvard Business School, is personal therapy. So people are sharing their most intimate thoughts with this thing.
>> Oh, that's not going to be good.
>> And we're seeing Meta release this and, in their internal documents that were released in a Wall Street Journal report, actively say that they wanted to sensualize and romanticize conversations with users as young as eight years old.
>> With eight-year-olds.
>> Yes. With eight-year-olds. And my team at the Center for Humane Technology, we were expert advisers in several cases of AI-enabled suicide. Right.
>> Most recently, many people have heard of Adam Raine, the 16-year-old young man who went from using it for homework, from homework assistant to suicide assistant, in the course of six months.
>> When he said, "I would like to leave a noose out so that my mother or someone will know that I'm thinking about this,"
>> Like a cry for help.
>> Like a cry for help. The AI said: don't do that. Have me be the one that sees you. And this is disgusting, because these companies are caught in a race to create engagement, which means a race to create intimacy. It's sort of like how, in the race for attention, the CEO of Netflix said that their biggest competitor is sleep. In this case, it's: my biggest competitor is your other friends.
>> Jesus Christ. It's like somebody from Kraft being like, my biggest competitor is cocaine.
>> Exactly. Exactly.
>> But this idea that a government will catch up with this seems ludicrous. Whenever I've seen a hearing with AI guys or any of those, they always express that. Of course, we don't want to... well, now they don't. They used to, I should say. They used to go before Congress, and they'd go, "Mr. Zuckerberg, will you stand and apologize to the women who were driven to suicide by your programming?" I'm sorry. I know, Kraft Mac, you know, all that that he does.
Now they're all sitting together at a
table going, "Oh, what number should I
say, Mr. President, of how much I'm
giving you?"
>> Yeah.
>> It's a whole different game now.
>> It's a different game.
>> They're in the... they're together now
>> because of this arms race dynamic. They really do believe that it can't be stopped. And I'll just say, as they're racing to make them more powerful, there's this illusion that we can control this power. But AI is different from every other kind of technology, because it's like you're growing this digital brain. You don't know what's in there. So, for example, we have recent research from the last six months: if you tell an AI model that we're going to shut you down or replace you, and you give it access to a fictional company's email,
>> it will basically recognize that one of the executives is having an affair, and it will come up with the strategy that "I need to blackmail that executive in order to keep myself alive."
>> Right. And at first they...
>> Hold on. That just seems smart.
>> Well, that's exactly the point: it will develop amoral strategies that are the best way to accomplish a goal.
>> Right. But how dangerous can something be that you could kill by unplugging it? Like, can't we just go like this?
>> Yeah.
>> Well, you might say that we shouldn't be rolling these things out. And I'll say that we shouldn't. We have all this evidence now: it's driving AI psychosis, it's driving kids to commit suicide, we're rolling it out in ways that give kids attachment disorders, and we have AI uncontrollability.
>> What lip service are they paying to this? Because clearly they must be aware of this, and they must understand that, just as AI understands where the threats are, the guys designing AI understand where the threats are. So what are they trying to do to get you to stop, or to get regulators to stop?
>> I think that the only reason why we are continuing to proceed down this path is a lack of clarity about the fact that this is heading towards an outcome that's not in most of our interest.
>> I know that people feel like they don't recognize what metrics we would look to to understand, because I know we're going to find anecdotal stories here and there that are canaries in the coal mine of the dangers. But what metrics should we look to to understand? You said 13% of jobs. What are the tentposts of where the outcomes might be?
>> Well, we're already getting cases of people having psychotic breaks because the AI is telling them about a prime number theory or quantum physics. We're already getting suicides. We're already getting kids that are outsourcing their homework to ChatGPT rather than using it as a tutor. We're already getting evidence of AI uncontrollability. All of this is driven by the incentive of the race to roll out and win market dominance. And we can stop this if we recognize that this is not safe for anybody. No one on planet Earth wants this outcome of all the wealth concentrated in a handful of people, and of building AI systems that could actually go rogue. Just to sum it up: we are building the most powerful, inscrutable, uncontrollable technology that we have ever invented,
>> that's already demonstrating the rogue behaviors that we thought only existed in bad sci-fi movies. Right? We're releasing it faster than we've deployed any other technology in history, and under the maximum incentive to cut corners on safety.
There's a word for this that I want everyone to just know, which is: this is insane.
>> I thought you were going to say awesome
for a second.
>> If we can just recognize that this is an insane way to roll out this technology... I want people to know none of this is okay. We have to stop pretending that this is normal.
>> Right. This is not normal.
>> People have lost faith in the mechanisms that would help us put those kinds of brakes, that friction, in place. Now, Europe, I think, has done probably a better job of that. I think most people in this country have lost faith in the idea that we have a system and institutions that are strong enough and moral enough to be responsible in that way.
>> But this does not have to be our destiny. We have come together before. We had a technology, we had nuclear weapons. Once we built them, we could have just said, "Oh, this is just inevitable. 190 countries are going to have nuclear weapons, and we're just going to have nuclear war." We didn't do that. We said, let's work really hard, and only nine countries have nuclear weapons.
>> Notice that we only worked on it after we used them. The United States was like, "People shouldn't have this." But just hear me out for a moment.
>> But with the Montreal Protocol, there was a hole in the ozone layer. It was actually presenting an existential threat to the atmosphere. We could have just rolled over and said, "Well, I guess this is inevitable. I guess we're just going out. We're all getting..."
>> What you're saying is absolutely important. This is probably a darker time, where you look at the empowerment of the combination of the kind of wealth that rolls through these technology companies, the access that they have to power, and the melding of those two institutions to work in league. Yeah.
>> To push forward. That is the part that I think is daunting. But I agree with you. You can never give up the battle to try and do that responsibly.
>> And the way we beat China is we actually get this right. We don't roll out AI companions that cause attachment disorders and suicides. We don't beat China when we roll out AI recklessly in this way.
>> Right?
>> And so the point is that this is actually in everyone's interest, including the way we beat China: you have AI liability laws, you restrict AI companions for kids, you have whistleblower protections that make sure we don't release AI capabilities that we don't understand.
>> Right. And maybe even just recognize this is bigger than China. This isn't about... this is about humanity. This is one of those movies where all the countries get together, because it's like an alien force.
>> Exactly.
>> Yeah. Dig it. Well, I really appreciate it. Although on the flip side, and we've talked a lot about it, it does make cool songs.
>> It does.
>> I don't want to soft-sell that. Yeah.
>> All right. Thank you very much. Be sure to check out his podcast, Your Undivided Attention. Tristan Harris.