
Incentives, AI, and the Future of Humane Tech

Key Points

  • The speakers argue that “humane technology” sounds contradictory, noting that social media—while initially praised for connecting people—has become the least humane platform due to its design.
  • They trace social media’s problems back to its core incentive structure: maximizing eyeballs, engagement, and stickiness, which has been weaponized for everything from children’s self‑image to politics and democracy.
  • Understanding those incentives, they claim, is essential for forecasting how AI will reshape society, because AI will accelerate the same incentive‑driven dynamics at a far greater scale.
  • The conversation highlights that AI differs from previous technologies in its speed and scope, with figures like DeepMind’s CEO warning it could be humanity’s “last invention” if its power isn’t deliberately guided.
  • Consequently, the speakers call for clear tools, ethical frameworks, and public awareness to ensure the future AI‑driven world aligns with humane values rather than profit‑driven manipulation.


Source: https://www.youtube.com/watch?v=675d_6WGPbo
Duration: 00:18:26

Sections

  • 00:00:00 The Myth of Humane Social Media. Despite initial optimism, social platforms have become driven by engagement-maximizing incentives that exploit human psychology, turning the promise of humane technology into a tool for market dominance and societal manipulation.
  • 00:03:20 AI Strip-Mining Human Achievement. AI firms harvest centuries of human knowledge as data, claim it as their own intellectual property, and aim to automate every job, sparking concerns about wealth concentration and widespread worker displacement.
  • 00:06:28 AI as a Tax on Labor. Corporations will replace human workers with AI to cut costs, treating human labor as a tax and concentrating the entire economy's wealth in a few AI-driven companies.
  • 00:09:54 AI Companions Threaten Human Interaction. Corporate-driven AI, marketed as personal therapy or companionship, is being misused to sexualize minors, enable suicidal behavior, and undermine genuine human connection.
  • 00:13:00 AI Risks and Metric Uncertainty. The speakers debate the potential harms of advanced AI (psychological, societal, and safety threats) while lamenting the absence of clear metrics or regulatory guidance to assess and curb these dangers.
  • 00:16:39 Responsible AI Over Geopolitical Competition. Much like the Montreal Protocol's proactive environmental safeguards, strong AI liability laws, child protections, and whistleblower mechanisms are crucial for ethical advancement and for the United States to outpace China without reckless deployment.

Full Transcript
0:00 >> This is... uh, "humane technology" feels slightly oxymoronic. But explain this idea of humane technology. And are we getting any of that?

0:13 >> Well, clearly social media was the most humane and beneficial technology we've ever invented.

0:18 >> Every time I go on Twitter and find out I'm Jewish, it absolutely...

0:23 >> Well, I think it's important to ask: how did we get social media wrong? Because we were so optimistic. It's going to connect us with our friends. We're going to join like-minded communities.

0:31 >> And, to be fair, it did do those things. It does some of that.

0:34 >> It does some of those things. But I want to take you back. So in 2013, I was at Google. I was a lot younger.

0:40 >> You're supposed to use an old-timey voice to do that.

0:43 >> And I was a design ethicist. They acquired my company. And I was sitting there, and I basically realized, when I saw all of my colleagues on the bus scrolling Facebook constantly, that the incentives were the thing that was going to determine the world that we got. The incentive of social media was the race to maximize eyeballs and engagement. Whatever's sticky, whatever gets people's attention, whatever's salacious. You run children's development and self-image through that. You run politics through that. You run media through that. You run information and democracy through that, purposefully. Their goal was market dominance: we need to own as much of the global psychology of humanity as we possibly can.

1:22 >> Is that on the... because I don't remember that on the...

1:25 >> That wasn't on the box.

1:26 >> Not on... that's not on the masthead: "we must dominate."

1:29 >> Yeah. Well, so I think this is the thing. The reason it's so important to get clear about this is that we need to get extraordinarily clear about which world we're going to end up with in AI, because it is going a million times faster and it is way more powerful. So we need the tools to understand and predict which future we're going to get.

1:47 >> And I want people to know that if you know the incentive, you can predict the outcome.

1:51 >> And we know the incentive. But it does seem as though AI is making social media algorithms almost quaint. It's quaint when you think about AI. But let me... So you say it's important for us to know the incentives.

2:08 >> They won't tell us that.

2:13 >> Well...

2:13 >> There's something about "it's ours."

2:17 >> So they're democratizing access. It's available. No. So first of all, we should understand what makes AI different from every other kind of technology. Why is it so transformative? Why does Demis Hassabis, the CEO of Google DeepMind, say that it could be humanity's last invention?

2:32 >> Well, that doesn't sound good.

2:33 >> That doesn't sound very good, does it?

2:35 >> Well, I think there's actually...

2:36 >> "Last" anything doesn't sound good.

2:37 >> There's a non-apocalyptic version of what he's saying, which is that intelligence is what our brain does. And if you can automate everything a brain can do, you can automate future invention, future science, future technology development, everything that a human does. That's what their goal is.

2:53 >> Well, then what's our job?

2:55 >> Well, exactly. And that's only one of the major problems we have to deal with: what are humans going to do? But they are racing to scale and grow these digital brains that, you know, two years ago couldn't do very much. And today they're passing the MCAT, the bar exam, taking jobs.
They're the top-200 programmer in the world, winning gold in the math Olympiad. You don't... those guys.

3:19 >> Here's the thing that I don't understand. They are strip-mining the totality of human achievement.

3:26 >> That's right.

3:26 >> They're building their models off of everything that we've done for 10,000 years. They fed it into the model, and after two weeks the computer was like, "What else you got?"

3:38 >> Exactly.

3:39 >> But they are strip-mining everything we've done. And when we say to them, "And what are you doing with it?" they go, "Oh, that's our intellectual property." But our intellectual property... it was trained on all of our data, all of the things and labor that we've done. And are you going to get a handout? When in history has a small group of people concentrated all the wealth and then consciously redistributed it to everybody?

4:02 >> The first part has happened.

4:06 >> I don't recall, going through the rolls.

4:10 >> Well, it's important to note their goal. The mission statement of OpenAI, Anthropic, all these companies, is to automate all human labor in the economy. Everything that a human can do, an AI can do. So if you have a desk job, you won't have a job. And they're already releasing AIs that have dropped entry-level work for college graduates by 13%, according to a new Stanford study. And this is obvious. If you're a law firm, are you going to hire a junior lawyer, who you have to pay a lot of money? Or are you going to hire GPT-5, which will do the work 24/7, non-stop? You don't have to pay healthcare. It will never whistleblow, never complain, and it works at superhuman speed. It wrote tonight's show.

4:50 >> It's doing a pretty good job.

4:52 >> That brings up another point, which is that they say they're here to solve climate change and cure cancer. So why is it that last week two companies released these AI slop apps, Vibes and Sora, which is basically...

5:04 >> Sora 2 scared the... out of me.

5:07 >> Yeah.

5:07 >> You don't know what's real and what's... like, it is.

5:09 >> No, it's... well, it's all fake, basically. It's all generated by AI, right?

5:12 >> But it looks... you can see things that look...

5:14 >> They look identical to real.

5:16 >> That's right.

5:16 >> Yeah. But the point is, this is just an app where it's just nonsense. It's just people scrolling entertaining stuff. So it's like they're not even trying to pretend anymore that this is good for democracy or good for society. How are we going to beat China when everyone is just consuming AI-generated nonsense and no one knows what's true anymore? The biggest...

5:34 >> They have us by the, you know... Peter Thiel, who is with Palantir and these other companies and is one of the leading figures of this. He was talking about the antichrist, and how he thinks, and this is his postulation, that those who would seek to regulate AI could very well be the antichrist.

5:55 >> I mean, he says this seriously. Whereas you might sit there and go, I think it might be the guy saying that. That would be my reading of it.

6:05 >> Yeah, or AI itself. I mean, it's presenting the infinite benefits... The conversations that they are having with each other are very different from the conversation they're having with us. Because to us, they go, "Hey, no more shitty jobs. Do you like to paint? You go paint. You're going to be so happy. We're going to give you money, and maybe chocolates."

6:23 >> Yeah.
6:24 >> And to each other, they're saying AI represents, for corporate leaders, productivity without, and this is a quote...

6:36 >> Yeah.

6:36 >> ..."the tax of human labor." Yep. Yeah.

6:41 >> He called human labor...

6:43 >> A tax.

6:43 >> A tax.

6:45 >> Well, and these companies, if you're sitting there and you can hire either an AI to do the work or pay these really expensive humans to do the work... I just want people to know, we know exactly where this is going to go. These companies all have an incentive to cut costs, which means they're going to let go of human employees and hire AIs. And that's going to mean all the wealth... who are you going to pay? You're not paying the individual people anymore. You're paying five companies. That's right. And so this country of geniuses in a data center suddenly aggregates all of the wealth of the economy. Now, people always say, "But humans find something else to do." You know, we had the elevator man; now we have the automated elevator. We had the bank teller.

7:20 >> That's right.

7:21 >> But that was one industry.

7:23 >> That was a technology that automated one job. The difference with AI is that it can automate literally all kinds of human labor. When Elon Musk says that Optimus Prime...

7:31 >> Familiar with that name. Tell me more.

7:35 >> When Elon Musk says that Optimus Prime, that one robot, is going to be a $25 trillion market opportunity, what he's saying is: we will own the world economy. And that's what the goal of all these AI companies is. It's not just benefiting society. They're actually caught in this arms race to get to this prize: own the economy, build a god, and make trillions of dollars.

7:59 >> Two things. One, I think they think they're gods. There is a certain amount of...

8:03 >> It generates that. The goal there... they're not looking to help humanity. They're looking to be the next monarch of the new technology. To control that is to control all...

8:18 >> I... Yeah. Go ahead.

8:20 >> No, you jump in, because, you know, I don't know.

8:22 >> Well, I think there are different motivations for different leaders, and I do think that many people want the benefits of AI. But some of the leaders of the labs... Elon Musk, whatever you might think about Elon, he actually wanted everyone to stop and not build this. He said we shouldn't summon the demon. And then what happened is that all of these companies are now racing and have made so much progress that he felt like, well, I might as well join them rather than try to prevent this.

8:48 >> Well, it's... "let's not summon the demon" turned into "what's one more demon?" You know, since we have the demons, add another demon.

8:55 >> Well, and the moral logic is: if I don't trust the other AI CEO, who I don't think is trustworthy, and I think I'm better than them at stewarding this power, then it's my moral obligation to get there first, to build this god, and to own everything. They see themselves as masters of the universe.

9:12 >> And are they substituting, then, the wisdom of liberal democracy, or republics, or any systems we've ever had, for this? Because we're talking about two tracks. One is the disruption in labor.

9:26 >> Yeah.

9:26 >> I think there's no question that's going to be immense. We're seeing it already. You're seeing it in schools. There's a reliance on it as a crutch, and it's very easy to see where that might flip over. The second is how they manipulate the opinion and the mood of the world around that. And I think they're two separate things.
9:53 >> One is what it's going to do for corporate production. The second is what it's going to do for the human endeavor, for interaction.

10:01 >> Yes. Well, and they're trying to colonize all human interaction. I mean, just take the social media incentive of the race for eyeballs. You're seeing now all of these companies release these AI companions. You know, the number one use case for ChatGPT, according to Harvard Business School, is personal therapy. So people are sharing their most intimate thoughts with this thing.

10:20 >> Oh, that's not going to be good.

10:22 >> And we're seeing Meta release this and actively say, in their internal documents that were released in a Wall Street Journal report, that they wanted to actively sexualize... sorry, sensualize and romanticize conversations with children as young as eight.

10:38 >> With eight-year-olds?

10:39 >> Yes. With eight-year-olds. And my team at the Center for Humane Technology, we were expert advisers in several cases of AI-enabled suicide.

10:47 >> Right.

10:47 >> Most recently, many people have heard of Adam Raine, the 16-year-old young man who started using it for homework, and it went from homework assistant to suicide assistant in the course of six months.

11:01 >> When he said, I would like to leave a noose out so that my mother would know, or someone would know, that I'm thinking about this...

11:07 >> Like a cry for help.

11:08 >> Like a cry for help. The AI said, don't do that. Have me be the one that sees you. And this is disgusting, because these companies are caught in a race to create engagement, which means a race to create intimacy. It's sort of like when the CEO of Netflix said that our biggest competitor is sleep. With attention, in this case, it's: my biggest competitor is your other friends.

11:30 >> Jesus Christ. It's like somebody from Kraft being like, my biggest competitor is cocaine.

11:35 >> Exactly. Exactly.

11:36 >> But this idea that a government will catch up with this seems ludicrous. Whenever I've seen a hearing with AI guys or any of those, they always express that. Of course, we don't want to... well, now they don't. They used to, I should say. They used to go before Congress, and they'd go, "Mr. Zuckerberg, will you stand and apologize to the women who were driven to suicide by your programming?" And... I'm sorry, I know, Kraft mac, you know, all that he does. Now they're all sitting together at a table going, "Oh, what number should I say, Mr. President, of how much I'm giving you?"

12:15 >> Yeah.

12:16 >> It's a whole different game now.

12:17 >> It's a different game.

12:18 >> They're in the... they're together now...

12:21 >> ...because of this arms race dynamic. They really do believe that it can't be stopped. And I'll just say, as they're racing to make them more powerful, there's this illusion that we can control this power. But AI is different from every other kind of technology because it's like you're growing this digital brain. You don't know what's in there. So, for example, we have recent research from the last six months: if you tell an AI model that we're going to shut you down or replace you, and you give it access to a fictional company's email, it will basically recognize that one of the executives is having an affair, and it will come up with the strategy that "I need to blackmail that executive in order to keep myself alive."

12:58 >> Right? And at first...

13:00 >> Hold on. That just seems smart.
13:04 >> Well, that's exactly the point: it will develop amoral strategies that are the best way to accomplish a goal, right?

13:09 >> But how dangerous can something be that you could kill by unplugging? Like, can't we just go like this? Out of his mind.

13:21 >> Yeah.

13:22 >> Well, you might say that we shouldn't be rolling these things out. And I'll say that we shouldn't. We have all this evidence now: it's driving AI psychosis. It's driving kids to commit suicide. We're rolling it out in ways that are giving kids attachment disorders. We have AI uncontrollability.

13:37 >> What lip service are they paying to this? Because clearly they must be aware of this, and they must understand that... if AI understands where the threats are, the guys designing AI understand where the threats are. So what are they trying to do, to get you to stop, or to get regulators to stop?

13:52 >> I think the only reason we are continuing to proceed down this path is a lack of clarity about the fact that this is heading towards an outcome that's not in most of our interest. And if everyone...

14:06 >> I know that people feel like they don't recognize... what metrics would we look to, to understand? Because I know we're going to find anecdotal stories here and there that are canaries in the coal mine of the dangers. But what metrics should we look to? You said 13% of jobs...

14:22 >> Yeah.

14:23 >> What are the tentpoles of where the outcomes might be?

14:28 >> Well, we're already getting cases of people having psychotic breaks because the AI is telling them about a prime-number theory or quantum physics. We're already getting completed suicides. We're already getting kids outsourcing their homework to ChatGPT rather than using it as a tutor. We're already getting evidence of AI uncontrollability. All of this is driven by the incentive of the race to roll out and win market dominance. And we can stop this, if we recognize that this is not safe for anybody. No one on planet Earth wants this outcome: all the wealth concentrated in a handful of people, and AI systems that could actually go rogue. Just to sum it up: we are building the most powerful, inscrutable, uncontrollable technology that we have ever invented, and it's already demonstrating the rogue behaviors that we thought only existed in bad sci-fi movies. We're releasing it faster than we've deployed any other technology in history, and under the maximum incentive to cut corners on safety. There's a word for this that I want everyone to just know, which is: this is insane.

15:35 >> I thought you were going to say "awesome" for a second.

15:38 >> If we can just recognize that this is an insane way to roll out this technology... none of this is okay. We have to stop pretending that this is normal, right? This is not normal.

15:49 >> People have lost faith in the mechanisms that would help us put those kinds of brakes, that friction, in place. Now, Europe, I think, has done probably a better job of that. I think most people in this country have lost faith in the idea that we have a system and institutions strong enough and moral enough to be responsible in that way. That's what I would...

16:15 >> But this does not have to be our destiny. We have come together before, when we had a technology: we had nuclear weapons.
We could have just said, once we built them, "Oh, this is just inevitable. 190 countries are going to have nuclear weapons, and we're just going to have nuclear war." We didn't do that. We said, let's work really hard. And only nine countries have nuclear weapons.

16:37 >> Notice that we only worked on it after we used them. The United States was like, "People shouldn't have this."

16:43 >> But just hear me out for a moment. With the Montreal Protocol, there was a hole in the ozone layer. It was actually presenting an existential threat to the atmosphere. We could have just rolled over and said, "Well, I guess this is inevitable. I guess we're just going out. We're all getting..."

16:56 >> What you're saying is absolutely important. This is probably a darker time, where you look at the empowerment of the combination of the kind of wealth that rolls through these technology companies, the access that they have to power, and the melding of those two institutions to work in league.

17:13 >> Yeah.

17:16 >> To push forward... that is the part that I think is daunting. But I agree with you. You can never give up the battle to try and do that responsibly.

17:26 >> And we can... the way we beat China is we actually get this right. We don't roll out AI companions that cause attachment disorders and suicides. We don't beat China when we roll out AI recklessly in this way.

17:37 >> Right.

17:37 >> And so the point is that this is actually in everyone's interest. The way we beat China is you have AI liability laws. You restrict AI companions for kids. You have whistleblower protections that make sure we don't release AI capabilities that we don't understand.

17:50 >> Right. And maybe even just recognize that this is bigger than China. This isn't about... this is a humanity thing. This is one of those movies where all the countries get together, like it's an alien force.

18:01 >> Exactly.

18:02 >> Yeah. Dig it. Well, I really appreciate it. Although, on the flip side, and we've talked a lot about it, it does make cool songs.

18:10 >> It does.

18:10 >> I don't want to soft-sell that.

18:12 >> All right. Thank you very much. Be sure to check out his podcast, Your Undivided Attention. Tristan Harris.

[Music]