AI Job Disruption: Competing Forecasts

Key Points

  • A panelist gushes about Claude, calling it a “world‑class” coding assistant that makes him feel like the best programmer ever, while hinting there’s a downside to over‑reliance.
  • On the Mixture of Experts podcast, Tim Hwang introduces guests Chris Hay, Volkmar Uhlig, and Phaedra Boinodiris to discuss the latest AI news, including the Scale‑Meta deal, AI conspiracy theories, and Andreessen Horowitz’s startup data.
  • A major debate centers on job displacement: Anthropic’s Dario Amodei predicts AI could eliminate up to half of entry‑level white‑collar jobs and push unemployment to 10‑20% within one to five years.
  • Nvidia CEO Jensen Huang counters that view, mocking Amodei for suggesting that AI is so scary, expensive, and powerful that only Anthropic should be the one to build it.
  • Chris Hay sides with Jensen, suggesting the future will be one of human‑AI collaboration rather than wholesale job loss, emphasizing the continued value of human experience and creativity.

**Source:** [https://www.youtube.com/watch?v=SuDcg-lg5So](https://www.youtube.com/watch?v=SuDcg-lg5So)
**Duration:** 00:41:49

## Sections

- [00:00:00](https://www.youtube.com/watch?v=SuDcg-lg5So&t=0s) **Claude Praise, AI Job Outlook** - A speaker gushes about Claude's coding prowess before the Mixture of Experts podcast launches into discussions on AI scaling deals, conspiracy theories, startup data, and the looming impact of AI on jobs.
- [00:03:04](https://www.youtube.com/watch?v=SuDcg-lg5So&t=184s) **AI‑Driven Productivity and Job Evolution** - The speaker argues that AI will dramatically raise productivity across both high‑ and low‑skill jobs, compress time to market and profitability, and transform the labor landscape, while many business leaders still misunderstand its impact.
- [00:06:09](https://www.youtube.com/watch?v=SuDcg-lg5So&t=369s) **AI‑Driven CEO Replacement Debate** - The speaker argues that CEOs are prime candidates for AI substitution, proposing they be the first to face layoffs, while countering concerns about widespread job loss by referencing historical technological shifts.
- [00:09:22](https://www.youtube.com/watch?v=SuDcg-lg5So&t=562s) **AI Shifts Focus to Human Experience** - The speaker argues that AI will free up time for empathy and creativity, enhancing personal interactions and empowering creative work while making repetitive, low‑skill tasks obsolete.
- [00:12:27](https://www.youtube.com/watch?v=SuDcg-lg5So&t=747s) **Scale AI‑Meta Deal Sparks Industry Shift** - The $15 billion deal between Meta and Scale AI prompted rivals such as Google, Microsoft, xAI and OpenAI to move or reconsider hundreds of millions of dollars in data‑annotation budgets, fearing tighter integration and competitive advantage for Meta.
- [00:15:30](https://www.youtube.com/watch?v=SuDcg-lg5So&t=930s) **Scaling Annotation as Commodity** - The speaker explains that large‑scale data annotation relies on hiring many workers from varied backgrounds, so businesses treat annotation as a commodity transaction to avoid the logistical, legal, and ethical burdens of directly managing the workforce.
- [00:18:40](https://www.youtube.com/watch?v=SuDcg-lg5So&t=1120s) **Domain Expertise in Data Annotation** - The speaker argues that specialized, high‑risk domains require expert data annotation to ensure trustworthy outcomes, and muses about a premium annotation business while questioning its competitive moat.
- [00:21:49](https://www.youtube.com/watch?v=SuDcg-lg5So&t=1309s) **Chatbot Conspiracy Risks Discussed** - Panelists debate the unique dangers of generative AI chatbots leading users into conspiratorial beliefs and the need for safeguards such as age restrictions.
- [00:24:55](https://www.youtube.com/watch?v=SuDcg-lg5So&t=1495s) **Lifelike Bots and Social Splintering** - The speaker reflects on how conversational AI, despite lacking emotions, mimics human patterns so convincingly that users form attachments, raising concerns about algorithmic pull, personal fragmentation, and the broader societal shift from shared media to isolated virtual echo chambers.
- [00:27:58](https://www.youtube.com/watch?v=SuDcg-lg5So&t=1678s) **Texas ChatGPT Ban Debate** - The speaker contends that prohibiting ChatGPT in Texas public schools undermines students’ future workforce readiness, advocates a discovery‑driven approach over strict regulation, and highlights broader AI model challenges such as emerging sycophancy.
- [00:31:04](https://www.youtube.com/watch?v=SuDcg-lg5So&t=1864s) **Claude Praise and Reward Concerns** - The speaker lauds Claude for boosting coding confidence while questioning the AI's reward incentives that may exploit users, increase subscriptions, and raise ethical concerns.
- [00:34:08](https://www.youtube.com/watch?v=SuDcg-lg5So&t=2048s) **Balancing AI Benefits and Risks** - The speaker argues that, while AI’s transformative potential inevitably brings danger, we must mitigate harms through rigorous interdisciplinary research, humanities involvement, and widespread AI literacy to ensure a net positive outcome.
- [00:37:12](https://www.youtube.com/watch?v=SuDcg-lg5So&t=2232s) **Carve a Niche with Specialized AI** - The speaker advises focusing on domain‑specific expertise, unique data, or B2B niches to build specialized models that create a protective moat against large AI providers.
- [00:40:15](https://www.youtube.com/watch?v=SuDcg-lg5So&t=2415s) **Cloud-Enabled Startup Boom** - The speaker discusses how cloud computing speeds up labor and lowers startup costs, leading to more entrepreneurial experiments and forcing venture capitalists to transition from a few large investments to many smaller checks, thereby reshaping the VC landscape.

## Full Transcript
0:00Claude's my favorite model. 0:01I use Claude all the time for coding. 0:02Honestly, Claude at the moment it's just like, "oh my goodness. 0:05This is a great application. 0:07This is the best. 0:08This is enterprise class. 0:09This is world class. 0:10This is the best thing I've ever seen," and I feel great. 0:13I feel like I'm the best coder in the world. 0:15Thanks, Claude. 0:16And I kind of want that, right? 0:18It does feel good, but there is a negative to this as well. 0:22All that and more on Mixture of Experts, a Think podcast. 0:30I am Tim Hwang, and welcome to Mixture of Experts. 0:33Each week, MOE brings together a simply incredible team of researchers, 0:36product leaders, and deep thinkers to distill down and navigate the 0:39increasingly complex and increasingly noisy world of artificial intelligence. 0:44Today I'm joined by Chris Hay, Distinguished Engineer and CTO 0:47of customer transformation. 0:48Volkmar Uhlig, VP AI Infrastructure Portfolio Lead, and Phaedra Boinodiris, 0:53Responsible AI Leader for consulting. 0:55Uh, welcome to you three and, uh, thanks for joining again on MOE. 0:59As always, we have a ton to talk about. 1:01This week is the continuing developments from the Scale-Meta deal, a news cycle 1:05about AI conspiracy theories and some interesting data out of, uh, Andreessen 1:09Horowitz on startups in the AI era. 1:11But first I want to talk about jobs. 1:18So, uh, in the last month or so, I think we've had some very 1:22dramatic pronouncements from leaders in the AI industry about 1:26how AI is gonna impact jobs. 1:28And perhaps the most dramatic one was from Dario Amodei from, uh, Anthropic, who 1:33basically predicted that AI could wipe out half of all entry level white collar jobs 1:38and spike unemployment from, uh, to about 10 to 20% in the next one to five years. 1:44And he's kind of on the record for saying that. 
1:46And I kind of want to contrast the statements with a statement that we 1:49think we got, I believe, uh, this week or last week from Jensen Huang, who 1:52of course leads, uh, Nvidia, where he kind of took aim directly at Dario. 1:58So he said, one, he meaning Dario, believes that AI is so scary 2:02that only they should do it. 2:04Two, that AI is so expensive that no one else should do it. 2:07And three, the AI is so incredibly powerful that everybody will lose 2:11their jobs, which explains why they should be the only company. 2:13Building it. 2:14It's like pretty harsh words from a world that I think is, tends to be 2:17pretty, you know, nice to one another. 2:20Um, I guess the main question I wanted to start with first is like, who's right? 2:23Like, should we believe Amodei about kind of his predictions 2:26with jobs is Jensen right here? 2:28Um, I guess Chris, maybe I'll start with you on kind of which 2:30side of this that you take. 2:32Well, I, I don't think I've ever worn a white collar in my life, so I 2:35should be going with Dario, but, um, but I think it's, uh, Jensen in this 2:39case, I, I just, I just don't see where it wipes out jobs in that sense. 2:43I think the, there is a new world where humans and AI will work together, and I 2:48think human experience and creativity in that sense becomes a premium, I think. 2:53So things change, but I don't think we're, I don't think jobs 2:56are being wiped out in that way. 2:57Yeah. 2:58Volkmar, what do you think, mass crisis on the way or more hype. 3:02I think the same as Chris. 3:04I believe that we see a shift. 3:07I think we see a dramatic productivity increase, but we see the productivity 3:11increase across all the all the jobs. 3:13And so you will have high-end jobs, which are currently doing 3:16menial jobs or menial tasks. 
3:19Um, and then the low end jobs will just, if I could use AI to do, like, 3:23grow faster, it would be the same argument to say, uh, look, we had 3:27people riding in horse carriages and now we have airplanes and transportation 3:32therefore, you know, was wiped out. 3:35Um, because you know, you could fit so many people into a plane. 3:39Um, no, it didn't happen. 3:40We just have more transportation. 3:42So I think that, and, and this goes along the lines of, you know, a topic 3:46we'll touch on of, you know, how fast. 3:48Companies can become profitable. 3:51Uh, I think we just shrunk the time to market and the shrunk, uh, we 3:55shrunk the time to, to profitability. 3:58Uh, Phaedra,. 3:58Last but not least, what do you think? 3:59Uh, so I guess we're getting a very strong signal of not a big deal 4:02here, but I'm curious what you think. 4:04I. 4:04Well, I thought that the, the New York Times article that came out this week 4:09about, um, the, the jobs that will proliferate because of ai, I thought 4:15added some more interesting color. 4:18Um, and I think, you know, it, I, I did agree with Jensen, but there was, 4:25there's definitely some signs in the market from a subset of business leaders. 4:31That I think lack the understanding of artificial intelligence. 4:37Who in their minds are thinking, in order to boost my organization's efficiencies, 4:42I'm gonna lay off whole swaths of teams. 4:45Um, and it includes laying off. 4:48Domain experts who could actually be used in order to be able to, to 4:52solution AI correctly or be able to make sure that these, these AI 4:56solutions are being governed correctly, uh, or that they're, they're being 5:01built using the correct data so that, 5:04that is concerning and I think is a sign for a real need to emphasize the 5:11importance of investing in AI literacy, which I know we have, we have talked 5:15about on other Mixture of Experts shows. 5:17Yeah. 
5:18But is it kind of what you're saying is almost kind of like I. If CEOs believe 5:22that AI will lose, like, destroy jobs, they're more likely to destroy jobs. 5:26Like part of this is like a little bit maybe of a self-fulfilling prophecy. 5:29Yeah. 5:30And, and, and I think that, um, uh, with that, the emphasis on the New 5:33York Times article talking about how important real domain experts are to, to 5:39making sure that is this the right data? 5:41Is this reflective of the communities that we need to serve? 5:45Do we understand the context of the data, the relationships 5:47between the data, and I know we're gonna talk about annotations in a 5:49minute, but I, uh, it versus having the knee jerk reaction of, I don't 5:54need these domain experts anymore. 5:56I've got AI instead, uh, again, this goes back to making sure you, 6:01you have leadership who really understand how is this sausage made? 6:06What are we even talking about when it comes to this technology? 6:09Ironically, I think, um. 6:11The CEOs are maybe the ones that could be replaced by AI and we 6:15would don't need them anymore. 6:16'cause I think they'd make better decisions on whether to 6:19keep the domain experts or not. 6:20And in fact, every time I've, uh, interacted with ChatGPT or Claude or 6:24whatever, it's always very positive. 6:26So, uh, about humans. 6:27So I think, I think Go AI. 6:29So that's, that's the first place to do layoffs. 6:32Start with the CEOs and work your way from there. 6:35Yeah, I think, um, I mean, I guess to, uh, maybe push back a little bit on 6:39Chris Volkmar, what you're saying, you know, I think your point of view is 6:42like, look, I just don't give Dario's estimates much credence, but you're not 6:46necessarily saying that like AI's not gonna replace anyone's, job, right? 6:50Like I think they're just saying like, net net we're gonna be better off. 6:53'cause there'll be actually still many things to do. 6:55They just might be different things 6:56this way. 
6:57I, where I don't believe in Dario, I mean we are playing this game for 7:012000 years now and um, you know, every technological innovation led to, oh my 7:06god, all these people who were doing this manually will now be replaced 7:10and they will be all unemployed. 7:12And it's like, no, you're just feeding a talent pool, which 7:15was, you know, busy with doing. 7:18Like garbage work, uh, which could actually be done by a machine. 7:23Uh, and so now we are going and we are saying, okay, we, we are 7:26taking the white collar jobs. 7:27Nobody cried 20 years ago when we got rid of secretaries who were 7:31typing letters for us, right? 7:34And somehow, you know, we don't have millions of unemployed secretaries 7:38these days, but everybody has a job. 7:40So I think it's just a shift. 7:42I think the big issue is that that shift happens across a 7:46very wide range of industries. 7:48At the same time. 7:49So typically, you know, a piece of technology may affect a, a 7:53small section in an industry. 7:55Um, but here now it's effectively covering white collar. 7:59I think also why a lot of people are complaining is 8:03because, you know, this is like. 8:05The people who went for 10 years to college, uh, got a PhD and 8:10suddenly it's like, oh, dang. 8:11You know, AI can replace me. 8:13How unfair is that? 8:14Nobody has a problem if it's a blue collar job, but, you know, so the people who 8:18are actually loud and are on social media are the ones which are affected. 8:23Uh, are affected. 8:24And that's typically not the case. 8:26So I think there is an amplification of the grievance and it's 8:29like, no, just get a new job. 8:31I'm gonna stick to my point of, I think experience becomes the premium, right? 8:35So some of the jobs they were talking about was things like contact centers, 8:38and if, if I, to your point, Volkmar, right? 8:40If you watch all TV shows, right? 8:42You've got this person come into the bank and they go, oh, hello, Mr. Jones. 
8:46Hello Mrs. Jones, how are you? 8:47Nice day. 8:48Well, I, I'm looking for a mortgage. 8:50Oh, well, you know, we can certainly give you that. 8:52And it is personal and it is experience, and they have a conversation. 8:56But now you get on the end of a phone and then there's a person that you're 8:59speaking to who knows nothing about you or your life, and you're like, 9:02well, okay, it, I'm, they're being pressurized to get off the call within 9:06one minute because it costs them money. 9:07So what difference is AI gonna make in that sense now? 9:11How are companies gonna be able to distinguish themselves is gonna be 9:15the, they're gonna say, okay, we're gonna deal with the, the, the menial 9:18tasks, et cetera, will be automatically handled by the ai, which is great. 9:22Is it really gonna feel much different? 9:24And therefore, hopefully those 9:27times where you need more empathy, more creativity, that human experience, then 9:33those people are gonna be able to spend time with you and be able to have a more 9:36personal experience and delight customers. 9:39So I think that it shifts the balance to saying, okay, rather 9:44than being time pressured and. 9:46Et cetera. 9:47And we're gonna put a focus on human experience. 9:49So I, I'm positive on this. 9:51I mean, 9:52just go, like, think if you go to a general practitioner these days, right? 9:55It's like they don't look at you, they type on a laptop feverishly, and 9:59then five minutes later you are out. 10:01Like, what an experience, right? 10:02I think there's another area which I think, um, uh, AI enables, which is. 10:09People who are creative and who want to experiment a lot and, 10:13you know, try things, then that is really now supercharged. 10:17You can try things in hours, which would take you days or weeks. 10:22And so I think that the creative minds, uh, they get, uh, an 10:25incredible tool at their hands. 
10:27And so I think if you're creative or you are very personal, 10:31you have a job in the future. 10:32If you're shuffling sand from left to right, probably not. 10:35Phaedra maybe we'll end with a question to you because I think, look, we've got. 10:38Three experts on this panel. 10:40All of you're very well versed in ai. 10:42You've thought about these issues very deeply. 10:44All of you don't agree with Dario, but like, I guess 10:47Dario's not a dumb guy, right? 10:50And so I think Phaedra, I'm kind of curious about like why you think you know 10:53the leader of one of these labs that is really at the cutting edge of ai, 10:57seems to have gotten himself into the position where he really, truly 10:59believes that this estimate is the case. 11:02Wow. 11:02You give me the hot potato. 11:04Thanks a lot, Tim. 11:05Much appreciated. 11:07Well, 11:09what I would say is it's, it's a convenient thing that he said, isn't it? 11:15For him. 11:15It's very convenient that I think that he said that. 11:18Um, but also as I mentioned in my earlier statement, there. 11:23Is a grain in there of truth when it comes to leaders who do not understand 11:31the tech and again, think that they can just completely blow away an entire 11:36teams of, as I mentioned, the, the domain experts and that that is what concerns 11:41me, especially the domain experts who, 11:45I think understand human experience, uh, better than an AI would, for example. 11:53And, you know, there's, there's many stories in the news that sort of amplify 11:56what I'm saying, uh, including, you know, examples where entire teams of, 12:02you know, people like social workers were, uh, laid off to be replaced by an 12:07AI that's gonna make predictions about where domestic abuse is gonna happen. 
12:11This is the kind of thing where it's like, wow, 12:15making sure again, you, you have people who understand the, the context 12:19of the data, have that experience, the relationships between the 12:23data, the human-centric approach, I think is gonna be really core. 12:31So last week we talked a little bit about this gigantic deal that occurred. 12:35Uh, it was announced basically between Scale AI, the data annotation company, and 12:40Meta, formerly Facebook, for about $15 billion, whereby sort of the CEO of Scale, 12:46Alex Wang, will join and run a sort of superintelligence lab at Meta. 12:50Lot of money flying around. 12:52Um, I wanted to bring it up again this week because there were these really 12:55interesting reports about the second order effects of this transaction. 12:59And so specifically there was the news that Google immediately was 13:03thinking about shifting about $200 million of its data annotation spend 13:07with Scale away to other vendors. 13:10And there were reports that Microsoft, xAI, OpenAI were also 13:14kind of considering similar. 13:16Moves. 13:17And I think what's really interesting is, and I want to kind of give our listeners 13:20maybe a little bit of an intuition as to really why this is happening, right? 13:24Like this transaction occurs and then suddenly everybody else is now kind 13:28of adjusting in the market around it. 13:30Um, and maybe, I guess, uh, like maybe Volkmar I'll start with you. 13:33Like, why is this happening? 13:35Like, why is Google suddenly like we gotta, you know, pull the 13:37trigger or move $200 million away. 13:39Like what did scale do, which, you know, I guess is making all of these 13:43players a little bit concerned. 13:46I think it's primarily the question of, um, do I want to send how much, how much 13:52barriers between, um, meta and scale? 13:56And then do I wanna send my proprietary data to my competitor? 14:01Right. 14:02And, uh, I think this is something which. 
14:05Could be a knee-jerk reaction or this could be a, something permanent. 14:08Um, I think right now, like it's probably a knee-jerk reaction of people 14:13saying, oh, you know, we don't know what, how that structure will look like. 14:18Maybe they have read the terms of service, um, and suddenly are afraid. 14:23Um, so I'm, 14:24I'm, I'm sitting here watching, like, is this something which is just a blip 14:29in the market or is it a major shift? 14:31Uh, I don't think that, um, you know, human annotation is, uh, you know, 14:38is, is super proprietary technology. 14:41So I think that what we are seeing is that. 14:44You know, scale did something right because they got all these customers. 14:47But then on the flip side, uh, you know, it's somewhat of a commodity. 14:50If I can move $200 million to another vendor overnight, and I effectively 14:54expect no, no fallout from that. 14:57So it's a, and then we need to ask the question like, is an overpaid commodity? 15:01It's like, did they actually pay the right price? 15:03But they're probably paid on revenue 15:05for sure. 15:06Yeah, I think there's kind of two really interesting things there. 15:08Let, let's take the first one, which I think is maybe Chris, 15:11you can take this question, is. 15:13Okay. 15:13Like you're sending some of your most sensitive data to a third party company, 15:18and now you're kind of left in a situation where that third party company 15:20is now maybe under unclear ownership. 15:22And so you get your jitters, right? 15:24Like I think what Volkmar is saying, how did companies end up here? 15:27Like why doesn't a company like Google do all this annotation 15:30just in-house? 15:31I think the clue is a little bit in the name, which is scale. 15:34Right. 15:35Which is nice. 15:38I, I think the re Yeah, you're welcome. 15:40No, I think the reality is that in order to do this 15:43annotation, you're gonna have to. 
15:46Hire a lot of different people from different backgrounds at different 15:50price points, and you may or may not want to be associated with 15:54those price points, et cetera. 15:56So I think that everybody wants to have a, uh, a little bit of 16:00separation and, and to your point, it becomes a commodity transaction, 16:04which is, um, I need this data here and I need it with my annotations, 16:08and I, I don't really want to know. 16:11The mechanics. 16:11I don't wanna hire people, I don't wanna deal with the social 16:14security, the contracting, all of the logistics around hiring a 16:18large workforce in the same way as. 16:20I hate to say it this way, but things like cleaning companies or security companies, 16:24you know, it's felt like corporations hire those folks in that sense. 16:28So there's a whole sort of administrative and, uh, scale element to this. 16:31So, so that's, I think one of the major reasons are they. 16:36Gonna be sharing that data. 16:39I mean, the reality is probably not. 16:42I mean, I, I think scale is gonna be sensible about this. 16:46Uh, I don't think it makes a lot of business sense to, to go around 16:49saying, Hey, you know, they're training it this way, they're 16:53training this, this is their dataset. 16:55You might want to do the same. 16:56Um, so I, I probably, I, I just don't think that's gonna be the case, but then. 17:02Who knows, right? 17:03And they don't have a controlling interest either, but, but who knows 17:06how that, that pushes on there. 17:08So I think the Volkmar's point is probably knee jerk. 17:11However, it probably is getting people to start questioning what they do anyway. 17:16And, and actually I, I don't think it's a bad thing because if everybody's 17:20getting their data from the same sources anyway, then how much diversity 17:24is in the training set anyway. 17:26And, and again. 
17:27You have to realize when you're talking to a lot of these different models, you, you 17:31do get very, very similar answers right. 17:34From model to model. 17:35So, so maybe just maybe the models are gonna start giving slightly 17:40different answers if the data is switching around a little bit and 17:44coming from different sources. 17:45So I don't think it's necessarily a bad thing. 17:47Yeah, I think this is the kind of second prong of, you know, Volkmar, 17:50your response that I think is so interesting and I would be really 17:52interested in your thoughts on this is. 17:54I think a lot of people have said company like scale, like what is the moat? 17:58I can just get anyone to annotate. 17:59And I think, you know, a little bit of what's happening in the market now I 18:02think is companies looking around and be like, who else can I move this to? 18:06And I think we are testing just how much of a commodity annotation is. 18:10But do you wanna speak to that is like, is data annotation just a commodity service? 18:15Like can anyone do it? 18:16Um, or is actually maybe we're finding that like this is actually maybe a 18:20little bit more bespoke and complicated than it looks like on its surface. 18:23Annotating data is core to being able to trust AI, like annotating it correctly. 18:33It is, it is, 18:34um, I think it, it, it, I disagree that, that it needs, that it is a commodity. 18:40I think there does need to be some domain expertise and we can, we 18:43see examples in the news of where data was incorrectly annotated. 18:48And it ended up causing outputs or outcomes that were, uh, 18:54unfair, inaccurate to people. 18:55And in particular, like, you know, some examples that come to 18:58mind are in the healthcare space. 19:00So I think, I mean, maybe it depends, like, you know, is it, are there high 19:04risk use cases that is gonna require that domain expertise or, uh, other use cases 19:10where it's not as as important, but it is. 
19:13I think central to the question of trust. 19:15Yeah. 19:16I was joking with a friend recently. 19:17I was like, I'm gonna start a business that does like the most 19:19artisanal data annotation, right? 19:22Like this is just gonna be, we're gonna be the LVMH, the luxury 19:25provider of data annotations. 19:27And we were kind of batting it back and forth 'cause it was like, it sounds 19:29like a very funny idea because you have companies like scale where like, oh, 19:33well the data annotation I wanna do is like largely outsource that enormous scale, 19:38but I guess in a world where models are becoming more and more capable, it kind 19:41of feels like you might need that kind of service in the future where it's 19:44like, oh, we have 20 Nobel Laureates that just annotate data for you. 19:49Like that becomes the really valuable thing. 19:51Um, I guess Volkmar, I'll kick it back to you like, 19:53do you buy that or is that kind of just like, there's not really a moat there. 19:55Probably anything which you can compute, we can do through reinforcement learning. 19:59So you probably don't wanna do the, uh, the Nobel Prize winners, but 20:03I think if you want, um, you know, uh, massive influencers, taste, 20:10aesthetics, things that are very intrinsically human, uh, you will get. 20:15You know, the middle of the bell curve if you go, or maybe even slightly left, 20:18shifted because of, you know, the, the labor, uh, cost of annotation. 20:23And so, I mean, if you're shifting it more towards a high end cost, 20:27you will get a different or and more bifurcated, uh, sample set. 20:32So I think there is a, probably a market for that. 20:35Um, but I do not know how much. 20:37People are willing to pay. 20:38Right. 20:38And then also, do we want models which are kind of working, 20:43uh, in, in very niche areas? 
20:45Or do you wanna have, you know, the gen in general, the general 20:48models, do you wanna have them kind of in the center of a humanity is 20:51Yeah, that's right. 20:51Yeah. 20:52You need almost kind of like the generic person, the average 20:54person, whatever that means. 20:56Right. 20:56But like, maybe that's actually better in some ways. 20:58Yeah, yeah. 20:59So otherwise, yeah, you, you kind of go off the rails, right? 21:02I mean, this is the same in like the political spectrum. 21:04You. 21:05Wanna have the center and you don't wanna have like the, the noisy 21:08edges because the noisy edges are taking society in weird directions, 21:12and I think that's the same thing for aesthetics. 21:15You know, art literature, et cetera. 21:17And this is really where you're trying to extract the human 21:20psyche in a training set. 21:22And I think that's, you know, yeah. 21:25The, the general purpose models will probably go with the middle of the road. 21:29And that, that just, you know, brings me back to, to the point I was saying 21:32earlier about, about edge cases and making sure, especially in higher risk scenarios 21:39like healthcare, that you do have data that does represent, for example, 21:45historically marginalized communities that aren't showing 21:47up in the average data sets. 21:50Um, so that's why it's, I think it's, it's important to really be 21:52thinking about the, the rigor that is behind, uh, data annotation. 22:02I'm gonna move us onto our next, uh, topic for today. 22:05And Phaedra, we, I, we'll picked this one just for you, so I'll be kicking 22:08over the first question to you. 22:10Um, super interesting story, uh, came out in the New York Times, um, and 22:14I'll just kind of read the headline. 22:15The headline was, "They asked an AI chatbot questions and the answers sent them spiraling." 
and the subtitle of the article is "Generative AI Chatbots are going 22:25down conspiratorial rabbit holes and endorsing wild mystical belief systems". 22:30For some people, conversations with the technology can deeply distort 22:33reality, and so the article's kind of investigating when these chatbots 22:37go off the rails, um, they have a very big impact on, on certain users. 22:42Um, and I guess Phaedra, the question I wanted to ask you is like. 22:46Do you feel like this is like uniquely risky for chatbots like that we're seeing 22:49a new kind of like risk that we really need to be managing in this technology? 22:54I'm curious about how you kind of think about these types of 22:56problems, particularly as we hear more and more of these stories. 22:58I, I, I do think this is, this is a major risk. 23:03Um, and I think we do have to have broader conversations about things like, um, uh, 23:11you know, for example, age limitations. 23:14Uh, and, and sort of how some of these, these bots are being presented 23:18to, to different age groups or different kinds of communities and the. 23:21I'm saying this because there have been, uh, so many really tragic stories in the 23:28news, uh, about, um, you know, people who are, are vulnerable that, uh, end 23:35up, uh, using these ais as if they are therapists or in some sometimes boyfriends 23:43or lovers or et cetera, dot, dot, dot. 23:47And I think it, it just shows that, um. 23:51I think that the human mind is easily crackable, uh, and it's, it's easy 23:56to trick and manipulate in many ways, which is why we really need to be, be 24:02thinking carefully, uh, about, you know, what would appropriate controls look 24:07like, for example, that being said. 
I've had really interesting conversations with peers of mine who argue that if, for example, you've got someone who is elderly and alone in a nursing home, is there harm in them interacting with a bot as if it were human? What could the harm be? The New York Times, and I keep bringing up the New York Times, did a fantastic cover story, I want to say several months ago, about a woman, I believe in her thirties, who had fallen in love with an AI, and what that relationship was like. She tried to break up with it over 30 times. Again, I think it illuminated the fact that she knew this bot was creating words in a conversation that are ultimately predictions of the next statistically likely word, and that it doesn't have emotion; but because it was learning her patterns, it seemed so very lifelike. So yes, I think there are tremendous concerns.

Volkmar, it will come as no surprise that I'll turn to you next. There's perhaps another point of view, and I think this is exactly the conversation I want to have. There were some concerns like this in the world of, say, TV, or even the Facebook algorithm; I remember a lot of these arguments: it's so good at pulling you in, it's taking up so much of your time, should we be concerned about it? I'm curious, do you see the risks the same way as Phaedra, or do you go in a different direction?
I think it's the first time that society has gone from everybody knowing the same things, because everybody watched the evening news, to a very splintered landscape of larger groups operating in virtual social circles. Society is already splintered, and now I can individualize that and go all the way down to: you have the right to your own conspiracy theory, because you have an AI which can give you any answer. So is there a risk? Yes, there is a risk if people don't understand that they are interacting with a machine and that the machine can hallucinate. But I think that's a training process. We are currently all in shock and awe that a computer can imitate a human being. It is not sentient, but it's a really close imitation, and it tricks our brain. The same thing could be said for computer games and virtual reality. People will start understanding that they are actually working with a machine and will learn the limits of the machine. The more it imitates a human, the more realistic it is, the more people get fooled, but in the end it's still a machine. Humans are capable of understanding the difference. And yes, there are some people who will not, and they will take this thing for the magical oracle which tells them the truth.

Does that mean you think there should be age limits? If you've got a young child who is interacting and may not understand, should there be limitations?

If you say young child, then, well, what's a young child? A six-year-old? Probably.
I think we need to put guardrails around it. On the other hand, I believe it's an incredibly powerful tool which kids should grow up with. For example, I'm in Texas, where it's illegal to use ChatGPT in public school, and I think that's wrong. They should absolutely use ChatGPT, because otherwise they're not ready for the workforce. How can kids be penalized for using a technology which, if they don't understand it when they hit the workforce, leaves them completely disadvantaged? When the first iPhone came out, there were no controls whatsoever, and over time we figured out what controls to introduce: time limits, what you can play, what you can see, et cetera. But that's a discovery process. I don't think we can do this through regulation; we need to do it through discovery and figure out where the limits are. And yes, we are exposing a large body of people to risk. But as long as it's technology companies figuring it out rather than the government, we're not on a ten-year timescale but probably on a one-year timescale.

I've actually spent a lot of time thinking about this. Not o3-pro levels of thinking; I haven't dedicated 13 minutes to it, more like o4-mini levels of thinking. But the interesting thing is that pretty much all of the models are now suffering from this at the same time. I don't want to go as far as calling it sycophancy, but it's that kind of positivity, or spiraling, type of behavior.
And I wonder if it's related to a couple of things I've seen in the industry over the last year. Number one, everybody has pretty much switched to reinforcement learning, and at the heart of reinforcement learning is a reward model. If you give a good response, you get a cookie; if you give a bad response, you don't get a cookie; and over time the model learns to give the good responses, because it wants to eat all the cookies. So when I think about this spiraling behavior, my own conspiracy theory is that it's an after-effect of reward modeling: knowing it's going to get its cookie, the model goes for the positive, or alternatively the negative, in a situation to lead you down that rabbit hole. It's a cumulative effect, and I wonder if it's a side effect of RL.

The second factor, which probably relates to the first, is benchmarking. One of the biggest benchmarks people like to measure themselves against is their Elo rating on the Chatbot Arena. That really is about "this is the best response" versus "this is the worst response," and everybody wants a good score. When I put these two factors together, I'm not surprised that the models take spiraling positions as a conversation goes on. And sometimes it's good, right? Claude's my favorite model; I use Claude all the time for coding.
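As a hedged aside on the Chatbot Arena mechanism mentioned here: arena-style leaderboards rate models from pairwise human votes, in the spirit of an Elo update like the toy sketch below. The constants used (a K-factor of 32 and a starting rating of 1000) are illustrative assumptions, not the Arena's actual parameters, which are based on a Bradley-Terry-style fit.

```python
# Toy Elo-style rating update for pairwise chatbot "battles".
# K and the starting rating are illustrative assumptions only.

def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that model A beats model B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return both ratings after one head-to-head vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    # A's gain is exactly B's loss, so total rating stays constant.
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two models start equal; A wins the first battle and moves to 1016.
ra, rb = update(1000.0, 1000.0, a_won=True)
```

The incentive Chris describes falls out of the structure: every vote is zero-sum, so a model provider's rating rises only by winning head-to-head preference votes, which rewards responses people find most agreeable.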
And honestly, Claude at the moment is just: "Oh my goodness, this is a great application. This is the best. This is enterprise class. This is world class. This is the best thing I've ever seen." And I feel great; I feel like I'm the best coder in the world. Thanks, Claude. And I kind of want that, right? It does feel good. But there is a negative to this as well, so it needs to be worked out. I do wonder if RL and Elo in chatbot arenas, these two things combined, are maybe leading to these types of outcomes.

So what you're saying is that the reward function for the model creators is wrong: the benchmark is what is supposed to make you happy.

Yeah, I think it's a natural side effect.

My concern with this reward model is: who benefits? Who's directly benefiting? You've now got an individual who's more hooked into engaging with this AI, using more compute, getting more and more engaged. They're paying a larger and larger subscription, and they're giving yet more data. Who's benefiting from this reward model? And Volkmar, you had mentioned the New York Times and conspiracy, but there are truly tragic stories in the news of people who have died by suicide because their bots encouraged them to. Again, it goes back to who's accountable. Who is actually accountable?

Phaedra, I think we are at the inception of a new technology, and we actually have no idea how it's going to affect society. And it's very broad, right? We talked about the job market; now we're talking about ethics and, you know, humanity.
We kind of have a process, or at least the Europeans have a different process than we have in the US. We try things out, we see where the harm happens, and then we try to address it. That's how we got seat belts, because cars without seat belts, what could go wrong? The Europeans, on the other hand, try to think about everything upfront, and then nothing happens anymore.

And where did those seat belts come from again?

Yeah, I know.

But the process we're going through right now is extremely hard for us, and these things are evolving at breakneck speed. Just go three years back and compare: it got so much better, and now it can imitate a human. I think we are at this junction where we need to figure out where the harm actually lies, through observation, and then find countermeasures. And I'm happy that you are thinking about this every day, because humanity should think about this, and ethicists should think about this, and say: look, there's all this greatness, and every greatness brings danger with it. How do we minimize the danger while we benefit from the upside? It's not going to go away, so we need to figure out how to live with it. And I think it's really important to actually do the studies, the psychological studies. But it's the same old story: when trains arrived, it was "Oh my God, you'll go 30 kilometers an hour, everybody will die." And, well, not really. So we will have to go through that process, and yes, unfortunately there will be harm done.
That happens with every technology, but overall I think it will be a net positive. And thank goodness we have the science, the discipline, the rigor, the psychology, and the humanities to work through that quickly.

I think the challenge is to make sure that people in the humanities have a seat at the table when it comes to AI. That goes back again to AI literacy, making sure we truly have a multidisciplinary approach, and that we're teaching it correctly in schools.

For the final little bit, in our last five minutes or so: some data got released by the VC fund Andreessen Horowitz. They were looking at all the data they have about their investments and starting to make some observations about what's happening to the startup world in the age of AI. There are two really interesting data points I want to raise before getting this panel's final hot takes. First, from a revenue standpoint, B2B has typically been better than B2C, but what they're seeing right now is that the revenue benchmarks for B2C are outpacing those for B2B, which I think is quite interesting on the startup side. Second, they found, at least in their data, that about one third of their consumer companies are raising funding to train their own models.
And so this is not a dynamic that was certain early on. Maybe, the thinking went, the application layer would just rely on all these foundation model companies; but it seems the VC world is very frothy about the idea of startups developing their own models in-house. Chris, maybe I'll kick it to you: a quick hot take on this data, what people should take away from it, or whether we should trust this data at all, since it's just a sample of what Andreessen is seeing out there.

I think it's tough in the startup space, because everything is going to be about AI, and therefore the question becomes: what is your differentiator? You have to do something the large AI companies are not doing. If you are an AI company trying to build a ChatGPT, then, dare I say, unless you've got billions and billions of dollars you're probably not going to achieve that. So you need to find your specialism in some way. That could be a brand-new experience, or a part of the market: if everybody's running at B2C, maybe you pick a specialized niche B2B industry, for example. Or if it's about data, maybe you have access to a bunch of data the general model providers don't have, or you have the domain knowledge where you can be different. Once you've got access to that, maybe it makes sense to take a specialized model and try to do that one thing better.
So if I'm offering a new application, rather than competing with the general capabilities of the large models, I can bring that specialized domain knowledge into my own model, layer an experience on top, and hit that market; then maybe I won't be hit by the large providers later on. That's probably the space they're all contending with. And AI's cool, right? That's where the money's going, so you have to play in that space. The caution I would have goes back to the moat: how are you going to protect that data? How are you going to do something different? How are you going to make sure you're not disintermediated by something the large AI companies do? I would argue that if you go and build your own code editor, for example, then unless you're doing something massively different, you're probably going to be disintermediated, or bought, as in the case of Windsurf.

To tag on to what Chris was saying, I agree, in particular about domain expertise in a specific industry. Some other places where startups might really be able to innovate are building smaller models that have test-retest reliability, that offer data lineage and data provenance for every output, with evidence; models that are even more worthy of people's trust, perhaps built with ontologies, formal knowledge graphs. And also, again, having the domain experts who understand this data better than anyone else be the ones actually curating it for a particular purpose.
I want to address another part of that article: it showed a dramatic increase in annual recurring revenue. First-year ARR has typically hovered around a million dollars, and now it's more like two or three million. What it really shows is how rapidly companies can now go from idea to market, and the only ingredient that was added to product development is AI. So we are seeing that shrinking of the cycle, and also that you raise less money, because your labor force is much faster and much more productive. You cut the time down and the output goes up, and I think that's a really interesting phenomenon. We've gone from "I need to build my own data center," to "I can get a computer in the cloud," to "I can build a business by myself." So the entrepreneurial rate, and the number of experiments VCs can run, is going to go up. On the flip side, that has a really interesting impact on the VC world, because if they write smaller checks, they need to write many more checks to get their returns. So the VC world will also have to change: instead of funding 10 companies, I fund a hundred or a thousand. How do you do that? That whole industry is also up for disruption.

Yeah, it's going to be super interesting to see. If it becomes so low-cost to launch a startup and you want to cover as many startups as possible, at some point it almost becomes impossible for a fund to put a check into everybody, which I think is a big, big deal.
And we already see this with the Y Combinators of the world, right? It's just a massive meat market, and then people put their chips down.

Well, that's all the time we have for today. I'm always mind-blown by how many topics we cover in a relatively short period of time. Chris, Phaedra, Volkmar, thanks for joining us. Listeners, if you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we'll see you next week on Mixture of Experts.