Learning Library

← Back to Library

NY Tech Week: AI and Quantum

Key Points

  • Ash Minhas highlighted an IBM quantum‑computing event where participants accessed IBM’s quantum hardware via Qiskit and built an “8‑ball” circuit to generate random predictions.
  • Anthony Annunziata announced a panel examining the business impact of open‑source AI, focusing on its value‑creation potential and unique advantages for enterprises.
  • Sarah Amos described her IBM‑hosted masterclass on red‑team testing for multicultural and multilingual AI vulnerabilities, emphasizing hands‑on security practice.
  • The “Mixture of Experts” podcast previewed upcoming discussions on major market reports (e.g., Mary Meeker’s analysis, Linux Foundation findings) and unusual behaviors observed in Claude 4.
  • Attendees noted that New York Tech Week attracted a highly diverse, geographically dispersed crowd—including many students and long‑distance travelers—reflecting strong enthusiasm for AI career opportunities.

Sections

Full Transcript

# NY Tech Week: AI and Quantum **Source:** [https://www.youtube.com/watch?v=tU9Jal1-E6c](https://www.youtube.com/watch?v=tU9Jal1-E6c) **Duration:** 00:44:35 ## Summary - Ash Minhas highlighted an IBM quantum‑computing event where participants accessed IBM’s quantum hardware via Qiskit and built an “8‑ball” circuit to generate random predictions. - Anthony Annunziata announced a panel examining the business impact of open‑source AI, focusing on its value‑creation potential and unique advantages for enterprises. - Sarah Amos described her IBM‑hosted masterclass on red‑team testing for multicultural and multilingual AI vulnerabilities, emphasizing hands‑on security practice. - The “Mixture of Experts” podcast previewed upcoming discussions on major market reports (e.g., Mary Meeker’s analysis, Linux Foundation findings) and unusual behaviors observed in Claude 4. - Attendees noted that New York Tech Week attracted a highly diverse, geographically dispersed crowd—including many students and long‑distance travelers—reflecting strong enthusiasm for AI career opportunities. ## Sections - [00:00:00](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=0s) **NY Tech Week Highlights Unveiled** - Panelists discuss quantum demos, open‑source AI business impact, and multilingual red‑team masterclass at IBM’s New York Tech Week. - [00:03:05](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=185s) **Upskilling, AI Spread, Real‑World Focus** - The speakers emphasize the importance of professionals upskilling and sharing knowledge to steer AI toward concrete business applications, highlighting open‑source collaboration and the contrast between East‑coast emphasis on practical use cases and West‑coast hype‑driven talk. - [00:06:09](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=369s) **AI Business Shifts to Application Layer** - The speaker highlights how global tech hubs like New York nurture innovation, while noting that the AI industry is transitioning from model hype to focusing on practical, last‑mile applications as the main source of value. - [00:09:11](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=551s) **Open‑Source AI Fuels Global Innovation** - The speaker explains how increasingly accessible open‑source models are expanding AI experimentation and culturally tailored solutions worldwide, highlighting IBM’s AI Alliance and its recent launch in Vietnam. - [00:12:17](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=737s) **Open Source Dominance in AI** - The speaker cites a Linux Foundation report revealing that 89% of organizations use open‑source components and 63% adopt open models in their AI stacks, suggesting that open source has effectively won the open‑vs‑closed debate. - [00:15:21](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=921s) **Open‑Source AI Adoption Challenges** - The speakers discuss how impressive model performance is hindered by early‑stage, developer‑driven adoption that relies on open‑source transparency for customization, while also highlighting the resulting safety, bias, and fairness concerns. - [00:18:21](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=1101s) **Debating Openness in AI Models** - The speakers examine how open‑source AI models can enhance security and reduce costs, while wrestling with the ambiguous definition of “openness”—from transparent safety practices to the reality of closed model weights—and anticipate emerging norms to clarify the term. - [00:21:25](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=1285s) **Unprecedented Cost Gap in LLMs** - The speakers discuss a slide noting that training expenses for large language models are soaring while inference costs drop, raising concerns about profitability in what is increasingly seen as a commodity‑type business model. - [00:24:28](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=1468s) **AI Safety as Competitive Advantage** - A speaker argues that firms can differentiate and generate new revenue by prioritizing AI safety, standardizing evaluations, and building a safety ecosystem that adds high‑margin value layers above the core models. - [00:27:35](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=1655s) **Trust, Safety, and Market Incentives** - The speakers argue that as AI models proliferate and become more stochastic across supply chains, market pressures to boost adoption may undermine safety safeguards, echoing the “move fast and break things” lessons from social media. - [00:30:37](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=1837s) **Towards Universal AI Model Standards** - The speaker emphasizes that as interpretability advances, the industry and regulators will create common classifications and guidelines for AI systems—reducing disclaimer reliance and filtering hype—citing the recent Anthropic Claude release as an example. - [00:33:40](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=2020s) **AI Models Are Not Magic** - The speakers stress that large language models function as statistical next‑token predictors, not divine creations, highlighting alignment progress, uncertain data sources, and debunking myths about hidden script content like the Terminator. - [00:36:46](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=2206s) **Human-like AI Reliability Concerns** - The speakers explore how modern reasoning models exhibit increasingly human-like, sometimes unreliable behavior, raising questions about closing the value gap and ensuring dependable enterprise-scale deployment. - [00:39:51](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=2391s) **Interface Design Impacts LLM Trust** - The speakers argue that the way we present large language models—especially chat‑style interfaces—shapes user expectations and safety considerations, suggesting UI choices are as crucial as model behavior. - [00:42:56](https://www.youtube.com/watch?v=tU9Jal1-E6c&t=2576s) **Envisioning Future LLM Interactions** - The speaker muses about LLMs possibly having “off days” to temper user expectations, reflects on training and inference costs, and imagines everyday, integrated AI experiences—ranging from calendar management to children conversing with everyday objects—shaping how different generations will engage with conversational models. ## Full Transcript
0:00What's the thing you're most excited about for this week's New York Tech Week? 0:03Ash Minhas 0:04is a Lead AI advocate. 0:05Uh, Ash, welcome to the show. 0:07Uh, what have you been seeing? 0:08So, I, uh, went to a, an event here at IBM's offices, um, on quantum computing. 0:14And actually I had a great time. 0:17Because everybody in the room managed to get time on one of our quantum 0:19computers using Qiskit, and we built this, uh, circuit that, uh, basically 0:24emulates an eight ball and like, you know, sort of making random predictions. 0:28It was really cool. 0:29Anthony Annunziata is Director of AI Open Innovation. 0:32Anthony, what will you be seeing this week? 0:34Today we're hosting a 0:35panel on the business impact of open source AI. 0:37You hear a lot about open source AI from the technology perspective. 0:41Today we're gonna explore its business impact, the value it 0:43can deliver, and why it has some unique advantages for business. 0:46And uh, Sarah Amos is Product Manager at, uh, HumaneIntelligence 0:50Uh, Sarah, what will you be doing for New York Tech Week this week? 0:53Yeah, so one of the most exciting things I did was actually host a 0:55masterclass here at IBM, in which we had a whole bunch of people 0:59conduct red teaming for multicultural and multilingual vulnerability. 1:04All that and more on today's in-person episode of Mixture 1:07of Experts, A Think podcast. 1:13I am Tim Hwang, and welcome to a Mixture of Experts.. Each week, MOE 1:17brings together the friendliest, most interesting and smartest panel of 1:20technical experts, product leaders, and market analysts to talk about the 1:23big stories in artificial intelligence. 1:26We have a lot to talk about. 1:27We're gonna talk about some really big market reports that have come 1:29out of Mary Meeker over at Bond and the Linux Foundation will be talking 1:33about some really weird behaviors coming out of Claude 4 per the, uh, 1:37around the horn question. 1:38I really want to start with New York Tech Week, which is this week, and one 1:41of the reasons why we're here in person. 1:43It's the largest New York Tech week ever. 1:45And I'm kind of curious about like sort of the trends that you all have been 1:48seeing as you've been going out there. 1:50Maybe Sarah, I'll, I'll start with you. 1:51'cause you actually taught a masterclass. 1:53Curious about like what people are interested in, what people 1:55are talking about, what's hot? 1:56Yeah, so I think one of the things that struck me was just how much involvement 2:00there was from folks coming out of town. 2:02I even had a participant tell me that he had traveled over 4,000 2:05miles just to come to New York Tech Week, which is pretty impressive. 2:08We see geographic diversity, but we also see a lot of young folks, folks 2:12who are either, uh, in their final stages of college or coming out of 2:16college and looking for new jobs. 2:17And obviously New York is, is an exciting place to be, but it's also, uh, 2:21the idea that AI is such an important part of their future. 2:25So that, that was the buzz that I was hearing all about the week. 2:28Cool. 2:29Yeah, I think the students are like a big part of this. 2:31I keep seeing them at all the events and like, it's interesting how much... 2:34Like AI itself has become like the thing that everybody wants to do when they 2:37like, get outta college or whatever. 2:39Um, and yeah, I'm kind of interested, I mean, you know, you may have heard, uh, 2:42Dario Amodei CEO of Anthropic recently made these comments recently being like, 2:46all the jobs are in trouble because of AI. 2:48Kind of curious about how that's resonating among folks who are A, 2:51graduating just now and then B, you know, really interested in this technology. 2:54I mean, everyone's nervous with a headline like blood bath. 2:57I mean, how can you not be Right? 2:58It's very dramatic. 2:59Yeah. That as in the words of Dario. 3:01But, um, I think folks are still optimistic, um, wanting 3:04to be part of that future. 3:06And I think it's about trying to upskill themselves and 3:08also teach others around them. 3:10Um, because if they can catch this wave and also steer it towards 3:14their own career goals, then it is very beneficial for them. 3:16I think all of us as, as an industry and especially thinking about AI 3:20alliance and open source efforts that IBM has championed is how can 3:24we make sure that that innovation is spread around the world too? 3:27Yeah, for sure. 3:27I think I see you nodding. 3:28I dunno if you wanted to get in here with a comment at all. 3:31Well, I, 3:32I mean, it's a great perspective, 3:32Sarah. 3:33I agree with all of it. 3:34Of course. 3:34Uh, yeah, maybe I'd add, uh, one or two things. 3:37So from Tech Week in New York, being in New York, I think one 3:40of the really healthy themes here is actually applying AI right? 3:43In specific areas of business and beyond, right? 3:46In finance and legal and advertising. 3:49If you go to like conference on the West Coast, right? 3:51You hear about tech for tech's sake, like very much you making it the 3:54East coast, west coast thing, or. 3:58You didn't try, but maybe, yeah, I didn't try. 3:59What you hear here much more is like what people are doing with AI, like what 4:03it needs to do in the real world, what it's doing, like specific use cases. 4:07And I think it's really healthy and if you think about jobs and skilling and 4:10impact, that's where most of the impact and changes in AI are gonna happen, 4:13right on the front lines of using it. 4:15Or something. 4:15Mm-hmm. 4:16Yeah, absolutely. 4:17So Ash, if I can turn to you, I mean, you know, it was a little bit shocking 4:20your answer because I feel like Quantum, we've been hearing about for such a long 4:23time, and I think everybody always tells me, they're like, ah, but quantum's 4:26like years away, we're never gonna be able to actually make it practical. 4:30It's not really like a real thing, but it sounds like you actually 4:31got to like play with a real. 4:34Quantum computer kind of sounds like, right? 4:36Yeah, yeah. 4:37Um, that's right. 4:39So our Kiki is like sort of, uh, online and open to everyone. 4:43You can just go and Google it and look for, look for it and um, you 4:47can actually get compute time on one of our com quantum computers. 4:51And I think that was a real attraction to sort of the audience, was that you can 4:56actually make a circuit and run it and watch it run and get output out of it. 5:01And I mean, you know. 5:03You to address your comment around, you know, is this real? 5:05How far away is this? 5:07I mean, if I'm doing that 5:09in a lab during New York Tech Week in 2025. 5:12I mean, and you just sign up to play with it. 5:13It's like ridiculous. 5:14Yeah. When you sign up to play with it, right. 5:15Then that's, I mean, that's pretty real, right? 5:18Mm-hmm. Yeah. 5:18Yeah. 5:19So how bullish are you coming outta that? 5:20I mean, you know, I think the funny thing when we talk New York Tech 5:22Week, it's like we're actually just talking about AI, right? 5:25But it kind of sounds like here, I mean, there's other stuff going on 5:27and I think it's so interesting. 5:28I mean, Anthony, you kind of brought up. 5:29Like this contrast of like, oh, you go to kind of West Coast, you know, 5:33sort of AI events and it's like very abstract and it's like, look at 5:36this crazy new model that we built. 5:38But like here, it feels like there's a lot more of a culture in New York of like, 5:41well, it's all about like application. 5:43What are you actually gonna do with all this stuff? 5:45Um, and I know you've been around the sort of East Coast tech world for a long time. 5:50Do you think that's always been kind of the case? 5:51Or is this sort of like changing or, I don't know if you, like, you feel 5:54like the, the technical cultures are becoming more distinct with time? 5:57A couple things. 5:58Yeah. I'd say 5:58that, um. 6:00I'd say New York and most cities outside the Bay Area, California, 6:04uh, are more kind of practically and kind of application. 6:07Yeah. Dated about the Bay Area or anything. 6:09No, I love it. 6:09It's great. 6:10No, it's great. 6:10It's unique place. 6:11Yeah, for sure. 6:11For like, it's amazing and thank, thank God it exists, uh, 6:14right uhhuh right for the world. 6:16But at the same time, it's not the only place. 6:17And I'd say like New York, like other places, London. 6:20Paris, lots of places in the world, Tokyo. 6:23They care a lot about what the technology's going to do, how it needs 6:27to reach users and reach applications. 6:30Right. 6:30All those like last mile things that aren't such, just the last 6:34mile, there's actually a lot there. 6:35Yeah. 6:36Has it always been like that? 6:37I'd say like New York and most places have, you know, uh, maybe always been. 6:42Like that. 6:42Mm-hmm. 6:43Uh, yeah. 6:44You know, I last 10 years I think, like New York has been just growing, 6:47growing as a tech scene and, you know, but, uh, I, I think it's like really 6:51good and I, I see it staying grounded in like what you wanna do 6:55with tech, right? 6:55Yeah. Which I think is really healthy. 6:56For sure. 6:57You know, Sarah, one of the things we talk a lot about or have been talking a 7:00lot about, uh, on MoE has been sort of the idea that like the business of AI 7:04is changing in a pretty fundamental way. 7:06Where, you know, 24 months ago it would be like, oh my God, this new model and 7:10like you just had, you know, this huge acquisition for, of windsurf, right? 7:13And, and so we've been talking a lot about how like, it seems 7:15like this application layer. 7:17These like actual, like practical implementations are becoming where 7:20like a lot of the value is in AI. 7:22And do you think that will kind of change like which cities are dominant? 7:25I think it's kind of like a really interesting question if it turns out 7:27that like actually where the action is happening on AI is in New York. 7:30Yeah. 7:30Because in some ways that's where the value in AI is flowing. 7:33Yeah. Do you agree with that? 7:34I'm just kind of riffing a little bit. 7:35No. Yeah. 7:36I mean, 7:36I, I love this question because both, as a product person, I'm always thinking 7:40not in terms of like technology 7:42first and then putting it onto a problem. 7:44Um, but rather trying to identify the problem, understand it, and 7:47then, then find the solution. 7:49Um, so there's that. 7:50But then also I do think New New York is uniquely, um, capable of creating more 7:55creative applications, and this is just a function of being the place 8:00that so many people go to, right? 8:02So unlike a single industry city like the Bay, we've got, uh, arts, we've got media, 8:08we've got finance, we've got fashion. 8:10And I think that even down downstream in terms of the people working 8:13at these companies, most of my friends aren't in the tech industry. 8:16And I think I, I gain a lot by exposing myself to people in different 8:20industries and also understanding their concerns or their optimisms about AI. 8:24And so I think having that greater understanding of a customer use case 8:29means that us in New York, we can craft products that genuinely meet 8:33their needs as opposed to perhaps just technology for technology's sake. 8:37So yeah, I'm bullish on the application layer and that 8:40also being important as we see 8:43companies continuously investing in models. 8:45Does that become more of a commodity? 8:47Can the application layer be where you differentiate yourself? 8:49Yeah, for sure. 8:50Ash, do you agree with this? 8:51This is like a very East Coast centric. 8:53I know we're sitting here, you know, like, yeah, Madison Square 8:56is like a block away, you know, um, on kind of how you know Anthony and 9:00Sarah kind of assessing all this, 9:01I think that, um, what we've seen over the last couple of years is as the 9:06cost of inference drastically reduces, 9:10there's gonna be more inference. 9:11Right? 9:12Okay. 9:12And in essence, that means that there's gonna be more people who have access to 9:16the technology, especially now as the open source models are now comparable 9:20in performance to some of the more proprietary ones that we're gonna see. 9:24Innovation come out in all sorts of places. For sure 9:27big cosmopolitan areas like New York or London or Paris. 9:31Okay. 9:32It's just a, a melting pot of culture. 9:34And the combination of 9:36lower inference costs, the ability to experiment and innovate 9:39quickly, and those melting pots of culture is obviously gonna, uh, breed 9:43a lot of innovation here using AI. 9:46Uh, but I think that we may find this happening in all 9:48sorts of other places as well. 9:49Like, I'm thinking like agriculture, where are, where are the 9:52farms as like, not farms here. 9:54So, but there may be innovation there, for example. 9:56I was gonna agree fully and give a couple examples. 9:58Yeah, please do it. 9:58Drop 'em. 9:59So the, the main program that I'm responsible for at IBM and and 10:02globally here is the AI Alliance. 10:04Which is a program that brings together a lot of different organizations 10:07who are working in and around open source AI, and it's very global. 10:11So two months ago I was in Vietnam in Hanoi, uh, launching kind of 10:15a chapter there, and there's a very vibrant scene of startups and 10:19companies that are taking advantage of, of open source and AI, right? 10:22Open models, uh, creating, you know, custom versions, creating things 10:26that reflect what they need in that culture, that language, that 10:30business environment in Africa. 10:32Similar things happening with startups that are operating, 10:35uh, more on the edge, right? 10:36Mobile based tech is like really big and important there. 10:39Uh, you can't do that tying into, uh, you know, a centrally 10:42hosted API to a big model, right? 10:44So there's lots of ways that, uh, open source AI in particular. 10:48In the tech scene is uniquely helping and addressing like people 10:53and use cases like globally. 10:55I think you're gonna see a lot more of that. 10:56Any final thoughts, Sarah, before we move 10:58on to the next topic? 10:59Yeah, no, I think this kind of circled up nicely because the, the real issue isn't 11:03SF versus NYC, even though this is New York Tech Week for sure, and I 11:07got my New York Tech, um, hat on. 11:09But totally these points about open source is really broadening 11:12out and democratizing tech. 11:14Um, so if, if a farmer in rural Kenya has the same access to an open source 11:19model as perhaps, um, a user in a cosmopolitan city, what gains can 11:25be made and spread throughout the population that we can all benefit from? 11:28So that's where I'm the most excited. 11:30Nice. 11:30That's great. 11:31Well, a lot more to look forward to and, uh, a lot more events 11:33here at, uh, New York Tech Week. 11:40Alright, so I'm gonna move us on to our next segment. 11:42There's two sort of big industry reports that just came out fairly recently. 11:47One from the Linux Foundation and the other one from, uh, you know, the 11:50legendary Mary Meeker at Bond Capital. 11:54Uh, most known for her like voluminous 11:56slide decks, um, which, you know, have largely kind of focused on internet. 12:00But what's so interesting is that this year's kind of drop 12:03was like very AI focused. 12:05Um, and so I wanna kind of talk about a little bit about both of them. 12:08'cause I think often it's like there's so much going on in AI, it's really 12:11hard to kind of collect all that data and like have like a kind of 12:14grounded conversation in what's going on. 12:17Um, and I wanted to start first with the Linux Foundation report, Linux Foundation, 12:20of course, being in the open source world. 12:23Um, and uh, I think the stat that I really wanted to talk about was this one, I'll 12:26just kind of quote it, which is that they found that a significant majority, 12:2989% of organizations are using some form of open source in their AI stack. 12:34And almost two thirds, 63% of companies are using an open model. 12:39Um, and you know, in the past, I think when we had this discussion 12:42in the past, it's been like. 12:43Oh, is, is closed source gonna win? 12:45Or is open source gonna win? 12:46Or you know, how is open source adoption happening? 12:49This report kind of suggests is like, has open already won? 12:52Like, I don't know if we're like already in a world where like open 12:54source models in some ways have the advantage because they've just 12:58been adopted by almost everybody. 13:00And so I don't know if like this kind of classic distinction between 13:03like open versus closed is even like a worthwhile debate anymore because 13:07open dominates in so many places. 13:09But I, I think I'll point it to you first 'cause you're looking at me skeptically. 13:14No, not skeptically. 13:15Yeah. 13:16More in agreement. 13:16Uh, but let me try to Yeah. 13:18Dig in a little bit. 13:19Sure. 13:19Uh, so first being in open source AI, I wasn't too surprised by most of 13:23the conclusions of that, uh, report. 13:24Yes. It's great to see it. 13:25One place are open. 13:26Yeah. 13:27Uh, it's great to see it in one place. 13:28It really is for sure. 13:29I'd say like on that, like open versus closed debate, like. 13:33Yeah, I think it's more nuanced, right? 13:34Mm-hmm. Take that statement. 13:3589% of organizations are using some form of open source in their AI tech stack. 13:41Of course they are. 13:42I mean, Linux is open source. 13:43You know, PyTorch is open source. 13:45Many, I mean, many things are open source outside the model, right? 13:48Yeah. 13:49The models themselves, that's a healthy statistic of growth, right? 13:52That's great. 13:53That, uh, two thirds, yeah, about 63% Yeah. 13:55Are now using, uh, some form of open, open weight model. 13:58Mm-hmm. 13:59That's really great. 14:00Um, again, I'm not, not too surprised. 14:01Of course they are, but yeah, maybe it should be, right? 14:04Mm-hmm. 14:04Because like, if you think about two years ago it looked like, you 14:07know, AI maybe was gonna become kind of like cloud service style, right? 14:11That's right. 14:11A few clouds would have the APIs and that's everybody 14:14would just use them, right? 14:15Mm-hmm. Yeah. 14:16It would be so great and easy and that that's, that's all you would need. 14:19So it's kind of nice to see that 14:20not play out that way. 14:21Mm-hmm. 14:22But you think it's still like a story in progress. 14:24Like you, you see two thirds and you're like, well, there's still 14:26that other third that could be open. 14:28That's true. 14:29Sure. 14:29But I'd say more so like it's toward a more nuanced view. 14:32Right. Uhhuh? 14:33I think there's gonna be proprietary things that every organization 14:36uses in AI in their stack. 14:38Some will probably use some proprietary model services alongside open models. 14:42Um, some will use us an opportunity to focus on bringing the 14:46proprietary differentiation to a different part of the stack. 14:48Right. Higher up. 14:49Yeah. 14:50So at the application layer, as Sarah was talking about. 14:52Yeah, for sure. 14:53And Ash, I'm curious if how like, 'cause it seems like where Anthony's kind 14:56of pointing us is sort of the idea of like, it's not really open versus close. 15:00What we're gonna see is like everybody's gonna use open to a 15:02greater or less degree and there'll be like different ways of different 15:06paradigms maybe of integrating open. 15:08Is that kind of what you're seeing in your work? 15:10Yeah, uh uh, for sure. 15:13And I think that, um, one of the primary drivers for this is that 15:18the space is still pretty nascent, right? 15:21I mean, we have great model performance, okay, but the adoption of those 15:25technologies and using them in like functional ways that add value and 15:29bring, you know, sort of like a, a healthy return on the time and the 15:33effort that's put into using them, we're still nascent and we're trying 15:36to like work out what those things are. 15:38And yeah, we have some core use cases now, but. 15:41For a lot of organizations, it's the developers that are driving this. 15:45Mm-hmm. 15:45Right. 15:46And they need to know what's going on in these open source pieces of 15:51software and models because they're still tweaking and they're still 15:54customizing and they're still adapting to the use cases that they have Right. 15:58In within their own individual organizations. 16:01And if the stack wasn't open to an extent, that wouldn't be possible. 16:04Yeah. I love that argument and. 16:05Sarah, curious if you have some comments on this. 16:07'cause it's like, Ash, what I hear you saying is like, we have no idea what we're 16:10doing in AI and like, isn't it great that it's open because like otherwise we would 16:15really have no idea what we're doing. 16:17I don't know. 16:17And this is all these implications for safety and bias and fairness. 16:20Yes, exactly. 16:20Exactly. 16:21No, I mean, open source, it. 16:23It's so interesting from a safety perspective because what sometimes 16:26comes to mind is, alright, open source models have historically been used for 16:31harmful purposes, that perhaps closed source models will create guardrails 16:35around to prevent that behavior. 16:37Um, but I think saying, you know, proprietary good, open, bad from a safety 16:41perspective is obviously too naive. 16:43I think, you know, we. 16:44Have greater transparency into the safety measures of open source models, right? 16:49If we are only trusting proprietary closed source models on their own safety 16:53measures, we're taking them at their word. 16:55Whereas the beauty of open is that now the whole world is a tester, right? 17:00They can red team it, they can analyze it, they can go through the code and 17:04identify where vulnerabilities might be. 17:06And so that's where it's promising to me 'cause greater transparency, um, helps 17:11you know, safety in the long term. 17:13Hmm. 17:13Yeah. 17:14And do you think, actually, I mean, I think one interesting historical 17:16comparison, right, is like, you know, Apple versus Android, right? 17:20Which would be the classic one, you know. In there 17:22I think the way I often hear the story told is, well, Apple's closed. 17:26Everything's controlled end to end and as a result it's more secure and more 17:30private and all these sorts of things. 17:31And you know, Android being an open platform has a lot more security 17:34risks and you know, all these things we need to worry about. 17:36But you actually told a story about ai, which is like almost the flip, right? 17:40Where you're like, actually there's all these security advantages or safety 17:43advantages that come from openness. 17:45Do you think AI is gonna work in a very different way from what we've learned, 17:48I guess in the mobile ecosystem? 17:49Or like, are these different cases, I guess is what I'm trying to say? 17:52Yeah, no, it's interesting 'cause um, 17:54I think, I think it depends. 17:56So if the closed source model companies do decide to open up and engage more 18:01with the community in terms of red teaming, then they could take the 18:05benefits that I just described that open source models do benefit from. 18:09Um, however, uh, yeah, I mean if we think about, uh, similar to bug 18:15bounties for cybersecurity, like. 18:17Our nonprofit Humane Intelligence, we have bias bounties. 18:21Yeah. 18:21And so we are able to do those with open models. 18:24Therefore, that leads me to believe that there's gonna be more of an adversarial 18:28for good white hat hackers who are keeping on top of where the security 18:32vulnerabilities may lie within open. 18:35Um, and then my last thought is just, you know, in terms of, uh. 18:38Especially the cost savings for cus for customers who are gonna adopt open their 18:42ability to perhaps run these models, um, locally and then have even more control 18:47over their own security, uh, risks. 18:50And I guess Anthony, this goes to like an ongoing debate, I think in the space 18:52and like this was actually, I know like one of the bits of discussion 18:55around the Linux report is like, what is, what does open mean here? 18:58Right? Because open could be: 18:59we're, we're, we have open, you know, uh, transparency into what we 19:03did in order to make the model safe. 19:05But the model has closed weights, right? 19:07Like, is that a form of openness? 19:08You know, I think you certainly meet, you know, uh, free software radicals Yeah. 19:12That are like, nothing is open enough for us. 19:14Um, yes. 19:14And I'm curious about how you see that kind of meta resolving. 19:17Like, are we gonna get to some kind of common norm about like, yes, 19:20this model's open versus not open. 19:21'cause I guess, Sarah, what I hear you saying is it's very, it's fuzzy, right? 19:24Like what openness means in this space. 19:25I think eventually we will. 19:27Mm-hmm. 19:27I think there's gonna be plenty of debate and evolution, right. 19:30In the meantime. 19:31For sure. 19:32I think we need to stay focused on like, the practicality of why anything 19:35open matters or why something that's transparent is important, right? 19:38It's the ability to understand it, to improve it, to adapt it, uh, to 19:42use it as you see fit and therefore derive value in your own way. 19:45Those are kind of the fundamental principles. 19:47Mm-hmm. 19:48And so if we think about software. 19:50After a few decades, we have a really rigorous definition of 19:52what open source software means in different licensees licenses. 19:56Like for, um, for how to, uh, right. 19:58To enable use. 19:59Mm-hmm. 20:00AI, like a pre-trained model, has really only been on the scene in a 20:04big, broad way for a couple of years. 20:06Right. 20:06Yeah. And it's complex, right? 20:08Um, is it a data artifact kind of 20:11mm-hmm. 20:11Uh, is it more like software? 20:13Kind of mm-hmm. 20:14Is it unique from the, from the two? 20:16Yes, it is. Uhhuh, um, it has like compressed capability 20:20and call it intelligence. 20:22That, right, that no kind of shell of software alone has. 20:25So I think it's gonna take some time. 20:27I think we need to stay focused on why it matters, which is in my, in my view, 20:31like a practical view of it, right? 20:32Mm-hmm. 20:33Yeah. 20:33And if we can, if we can keep that focus, I think the definition 20:36will continue to evolve. 20:37And I think eventually, we'll, we'll wind up with sort of a commonly accepted 20:40definition of what open source AI means. 20:42Yeah. But 20:42it might just not be until like 2050, basically. 20:45So we'll see. 20:46Yeah, we'll see. 20:46Maybe before that, but that's right. 20:48It might take a little while. 20:48For sure. 20:49Um, I'm gonna move us onto the second big industry report, uh, 20:52which is the Mary Meer report. 20:54This 300 plus page slide deck, um, it's, it cites a lot of the stats that 21:00I think we're familiar with, but I think was useful for me to at least revisit. 21:04There's a great chart in there, which is like, how many days did 21:07it take to get to a million users? 21:08And it's like. 21:09You know, it's a fun comparison. 21:10It's like the Model T car, you know, TiVo, um, the iPhone and then 21:15at the very end it's like OpenAI, like five days to a million users. 21:18Which like, I think again, like the deck was useful for me just reminding 21:22myself like how crazy of this, this period that we're living through. 21:26Um, but Ash, I wanted to talk to you about, in specific, about 21:28one comment that's hiding in one slide in like eight point font 21:32at the very bottom of the slide. 21:34And it says, quote in the short term, it's hard to ignore that the economics of 21:38general purpose LLMs look like a commodity bi- commodity business with venture scale 21:44burn, which translated in my mind is like, this is really expensive and it's still 21:49kind of unclear whether or not it's a business you can make more than kind of. 21:53Commodity profits on. 21:55What do you think about that? 21:55Is that, is that concerning? 21:57Yeah, it is. 21:58That stood out to me as well. 21:59Uh, I mean the, one of the first things that she says, and I think 22:03that kind of underlines most of the report, is the word unprecedented. 22:07Okay. 22:08Right. 22:08And, and, and. 22:09In, in that vein, right? 22:11This is unprecedented. 22:13The amount of money that's being invested in training these large 22:16models seems to be going up. 22:19The GPUs are getting more efficient and you know, their power requirements are 22:23kind of going down, um, as well as sort of like, um, their cost for like inferencing. 22:31Um, but it kind of creates this like chart, which is like 22:35costs are going up to train them, costs are going down drastically to run them. 22:41So where's the math between those two things that are gonna close that gap to 22:47like bring a return on investment for all this money that's being poured into this. 22:50So Ash, what you're saying is like very concerning, right? 22:53Like how do we fill that gap? 22:55It's unprecedented. 22:56The situation that we're in. 22:58Um. 22:59Sarah, what's the solution? 23:00Yeah, well, you're not gonna like my answer. 23:03'cause of course, being a trust and safety focused person, I read it with a 23:07little bit of a different lens, so, okay. 23:08Yeah. Well what, what was your read? 23:09Yeah, yeah, yeah. 23:10So out of the 300 and some odd pages, uhhuh. 23:14So many dedicated to the potential revenue. 23:16You know, so many charts of hockey sticks, Uhhuh, I swear I was at a Rangers game. 23:20But, um, what about safety is my question Uhhuh? 23:23Sure. And I get it. 23:24Is a VC created report. 23:26Yeah. And that is not the main thrust of it. 23:27However, I do think we need to be having a more nuanced conversation 23:31about when we are deploying a technology to so many users and how 23:35responsible scaling is actually a good business decision. 23:39Mm. Um, I think when I was looking through it, I found like the word bias once uhhuh. 23:43Sure. 23:44There's a little bit of a concern and so I just do think, you know, no, no 23:47shade to the queen of the internet. 23:48But I do think that might have been a little bit of a mis. 23:51Opportunity to just talk through some of these issues, which could be barriers 23:55for consumers trusting the technology. 23:57Mm-hmm. 23:57Yeah. And therefore adoption. 23:59And I do think that then smart businesses are going to want to make 24:03sure that they, um, deploy it safely, not just to avoid regulatory pressures. 24:07Mm-hmm. Especially in the eu. 24:09But also if you think about it from a cost savings perspective, uh, 24:12finding a bug after you deployed way more expensive to, to fix than 24:16if you can catch it in testing. 24:18Yeah. Yeah. 24:18So of course, that's why I, I beat my drum around, uh, more, 24:22uh, more robust evaluations. 24:25Yeah. 24:25And you actually think that that will be like a, that will be a 24:27commercial phenomena as well. 24:28Oh yeah. 24:29I mean, the sense that like kind of Ash is offering this question, which is how 24:31do we navigate this world where the costs are crazy and we're still waiting for 24:35the kind of business value to show up? 24:36Are you kind of saying. 24:37The competitive advantage here will be something like 24:39safety, maybe something like, 24:41yeah, it'll, the competitive advantage for the firms deploying AI will be 24:45safety and how they can offer that as part of the product to customers. 24:49But I also think there is an untapped market for, uh, in a firms that 24:54want to also take advantage of this. 24:56So building out a broader safety ecosystem. 24:59I know we were just talking about how the model, uh, the open model environment, 25:03like we don't have certain standards that we would like to standardize those. 25:06I'd say the same for evaluations. 25:08Yeah. 25:08So there's a lot of, uh, potential revenue there that. 25:12Ms. Meeker did not touch upon. 25:14If you're listening, Mary Meeker, um, and you're nodding, I don't know 25:18if you wanna, you wanna get on this kind of hard things, a couple thing. 25:20Yeah, for sure. So first I, 25:21I agree with that direction. 25:22Take a little further. 25:23Uhhuh, I'd say yes. 25:24Value creation, uh, profit margin mm-hmm. 25:26Will be in layers, above models, just like they are layers above computing hardware. 25:31The layers that are closer to the application. 25:32The layers where different companies who have use cases are gonna focus on. 25:36There's lots of value and there's lots of, lots of margin there. 25:39Mm-hmm. 25:39Right. 25:40On the topic of like overinvestment in AI, I think it's really 25:42interesting if you take a step back 25:44mm-hmm. 25:44And think about the macroeconomic picture here. 25:46Isn't it amazing that a set of investment decisions that happen, 25:50like at a micro level, right? 25:51Do I invest in that startup? 25:52How much, what's the likely return, what are the rounds gonna look like? 25:55Results in an incredible overinvestment. 25:58Mm-hmm. 25:58It's unprecedented in the, in 25:59the ecosystem. 26:00But isn't that amazing Uhhuh? 26:02Because look at how fast it's pushing progress in competition. 26:04For sure. 26:04Yeah. 26:05Like no rational decision at a macroeconomic level would ever 26:07place that much like funding into AI development, but it's happening. 26:11Yeah, 26:12right. 26:12Because this series of all of these micro decisions and, and startups and funding 26:18rounds and all that collectively created this like amazing accelerator of progress. 26:22That's right. 26:22Yeah. Wow. 26:23Are you I'm pretty excited by that. 26:24Yeah. 26:24And are you saying like, are you kind of making like a 26:26wisdom of the market argument? 26:28Right. Which is they wouldn't do this. 26:31Right. 26:31Well, what we're discovering is that people really do have confidence 26:33that this is gonna generate value. 26:37Well, I'd say there's overconfidence. 26:39Sure. Okay. 26:40And I think many of us will benefit from overconfidence. 26:42That's right. Some people will lose a lot of money. 26:44Yes. But I. 26:45That's okay actually in the grand picture because we're all gonna benefit. 26:48Yeah. 26:48And there's a great book that came out called Boom, uh, I 26:51think it was earlier last year. 26:52Right? 26:52It was kind of arguing that like, even, even irrational bubbles, which you could 26:56try to make this argument, uh, have all these spillover benefits, right? 26:59And like we should actually keep our eye on some of that. 27:01Um, Ash, you wanna respond to some of these comments because I feel 27:04like in some ways maybe you're holding back a little bit, but maybe 27:07you're a little bit more skeptical. 27:08One question that I always keep asking myself. 27:11Okay. 27:12Um, is... 27:13Whenever you use something that's using a generative AI based backend, 27:19mm, you'll see a disclaimer. 27:21Like the answers might be wrong. 27:23Double check them. 27:26Is that gonna be forever, Uhhuh? 27:28I mean, we all just gonna live in a world where AI is everywhere and 27:32everything could all be wrong, and we just have to double check everything. 27:35Like that's a really, really important thing to consider. 27:38Right? Sure. 27:38As we go and like sort of proliferate, um, models across all sorts of, um, uh, 27:45supply chains and, uh, and, and, and, and value chains of, of information. 27:49If all of that goes from being really, really 27:53sort of deterministic to stochastic, then what do you trust anymore? 27:58Mm-hmm. 27:58Right. 27:58Yeah. 27:59And I think this is, it is like, I think one anecdote that I have in 28:01mind, Sarah, when you were talking, kinda making the case that like maybe 28:04safety is one of these things that you build value on, on top of, you 28:07know, the hardware or, uh, the model. 28:09Um, is is the case of, uh, ChatGPT image generation. 28:14Right. 28:15Where I think like one view you could have of that is that they concluded that, 28:20um, consumers actually want less safety. 28:22Mm-hmm. 28:22Right? 28:22We get more adoption the less we control the activity of the model. 28:26Mm-hmm. 28:26And this is kind of a perverse outcome, right? 28:28Which is like maybe the market incentives are pushing people to 28:31get more value out of the market by reducing their commitment to safety. 28:35Is that a good interpretation? 28:36Or, I don't know if you. 28:38I can't help but think about 28:40the lesson that I would've hoped we learned in the last 28:4220 years with social media 28:45and that lesson was, uh, 28:46well, is that, uh, when you move fast and break things, uhhuh, 28:50uh, you also break people, right? 28:52And like especially as this is adopted at an even faster rate mm-hmm. 28:56Than social media adoption according to the report. 28:58Why can't we learn our lesson and do you know, more responsible scaling? 29:03Mm-hmm. 29:03You know, make sure that it is a business requirement for these models. 29:07And I think un unfortunately, a lot of it is the genius out of the bottle. 29:09Mm-hmm. 29:10Chat OpenAI with open, uh, releasing ChatGPT into the wild, 29:14probably a little prematurely. 29:16Mm. Has sort of made it just. 29:18The norm that these half-baked products are going out. 29:20Mm-hmm. 29:20And I do worry that, um, that, uh, business leaders who are making 29:25decisions on which of these products to implement, and especially across 29:28huge enterprises, are overestimating their overall capabilities. 29:32Mm-hmm. 29:33They're also looking at 29:34these benchmarks, which purport high performance, but also a 29:37benchmark is a very narrow view of the overall performance of a model. 29:41Yeah. 29:41And so I do, I do wonder if, you know, we've already seen some of 29:44these AI first companies like, uh, Duolingo now backtracking, right? 29:49Mm-hmm. 29:49Mm-hmm. Yeah. 29:49And actually hiring more people. 29:51But I do think we are gonna be in a bit of a thrashy period as people, especially 29:56businesses like very enthusiastically adopt, try to implement it. 30:00There's the reality of any time you try to implement anything into any system mm-hmm. 30:05There's some blowback and then we're kind of left now questioning, all 30:08right, where do we go from here? 30:09Yeah, for sure. 30:10Ash, final comment on this is like, I mean, you offered this prompt 30:13by saying, you know, everybody's got these disclaimers, don't 30:16trust anything this model says. 30:17Yeah. Right. 30:18Um, and I guess, Sarah, what I hear you saying is, well, you know, 30:21maybe we're like in this like. 30:23Man, I hope we remember the lessons of social media moments. 30:26Like, do you think it's gonna be like 10 years? 30:27Everybody will be like, oh God, these models, you know, we really gotta have 30:31a renewed commitment to, you know, veracity and validation in, you know, 30:35model outputs or something like that. 30:37I, 30:37I, I do think that, um, there's lots of things, um, being developed currently. 30:43Like, um, I think we may have talked about this on a past episode around 30:47like mechanistic interpretability. 30:49Yes. Yeah. 30:49Right. 30:50I think that, um. 30:52As those areas mature, we'll have things in place, sort of controls 30:58that should make, you know, those disclaimers being hopefully less required. 31:04And we will mature as an industry that will get to a point where 31:06we'll have a universal agreement, just like we will around what's an 31:10open source model and what's not. 31:12Around, you know, this model meets some sort of classification, which 31:16means it can be used for this purpose. 31:19Mm-hmm. 31:20Yeah. 31:20I think it's important that sort of like industry as well as sort of government 31:23side of put some effort into, into doing that to make sure that we're 31:26using not just ai, but we're using the right AI for the right use cases. 31:30Says. 31:35I'm gonna move us on to our last topic, which actually in some ways 31:38is very related to what we've been talking about for the last few minutes. 31:41Um, two sort of very interesting stories widely chattered about on social media. 31:46And I think a big part of MoE's job is to just kind of like cut through the hype. 31:51Uh, you hear so much about AI that's just like, what is that? 31:53And you go digging. 31:54And it's like, it turns out the story is not as, as amazing or as 31:57scary as it was originally reported. 31:59Um. 32:00The one I wanted to really cover was this sort of interesting release that 32:03Anthropic did with the launch of Claude 4. 32:05Um, they released a, a model card that kind of describes how they 32:08think about safety and all the things that they did around safety. 32:11And there's one particular section, again, a little bit like the MEA 32:14report, like kind of like buried deep in that system card that got 32:17a lot of attention on social media. 32:19They said that in specific context, Claude 4 would quote. 32:23Blackmail people that believes are trying to shut it down. 32:26And the specific study they did was to say they had a couple of test scenarios 32:30where, um, a user would attempt to tell Claude that it was being shut 32:34down and replaced, and that Claude would've access to a bunch of emails 32:37that suggested that the person was involved in like an affair of some kind. 32:41And you know, lo and behold, the model kind of like. 32:43Threatens to expose that in response to the input of trying to be shut down. 32:49So this is like, of course, very, you know, Terminator, our AI is gonna 32:52take over the world and, and set off exactly that narrative online. 32:57Um, and I guess Anthony, I'm curious how you respond to this sort of thing, right? 33:00Like, is this, this is genuinely weird, but I guess the question 33:03is, is it something we should really be worried about? 33:05Should we be worried 33:06about it? 33:07A little bit. 33:07Okay. 33:07But not too much. 33:08Okay. 33:09Here's what I think's happening. 33:10A little bit scared. 33:11Here's what I think is happening. 33:12Uhhuh. 33:12Um. 33:13We train models. 33:14Yeah, we align models. 33:16Yeah. 33:16We try very hard to get them to solve problems. 33:19We try to get them to pretend to think. 33:22Mm-hmm. 33:22I say pretend to think 'cause they're not really thinking. 33:24Mm-hmm. 33:24Right. 33:24This is all statistics and trial and error behind the scenes. 33:27Right. 33:28So it shouldn't be surprising that as things move fast. 33:31If artifacts of the training process show up, interesting behavior emerges. 33:35And some of that may reflect human-like behavior because we're training 33:39on all sorts of human data, right? 33:40Mm-hmm. Right. 33:41So, you know, trying to prevent itself from being shut down. 33:43I mean, you could, you know, if it, if somewhere there are Hollywood 33:47scripts compressed in there, right? 33:49That's right. Like, okay. 33:50And I, too frequently, I practically practic working, being shut down, right? 33:52Yeah. 33:52So, right. 33:53The script of the Terminator, of course, is probably in there somewhere. 33:55It likes to copy. 33:56Okay. 33:56So of course it behaves like that sometimes. 33:59In fact, I'm, I'm surprised, like, uh, we've done a good job aligning 34:02models so that more of that kind of doesn't show up actually. 34:05It's kind of nice. 34:05Yeah. 34:06To see not, not that not being too, too prevalent, um, you know. 34:11I think it's important to remember that we know how AI systems LLMSs work. 34:16Mm-hmm. 34:16Right? 34:17They are statistical. 34:19Right. There's math, it's next token prediction. 34:21Yeah. 34:22Um, you know, depending on what model, we may or may not know 34:27kind of where the data comes from. 34:28Mm-hmm. 34:29It's not magic. 34:30Yeah. 34:30And like we should try hard not to pretend like we've created, like in 34:34some godlike way, some new life form. 34:36Sure. Because it distracts from like real. 34:38Issues that can be engineered like well mm-hmm. 34:41And tested well, right. 34:42Totally out of the problem. 34:43That's right. 34:43Yeah. 34:43But I think that that last clause of what you're saying 34:46is, uh, is doing a lot of work. 34:48Right. 34:48Which is if we engineer these things well, um, Sarah, to turn to you, I think 34:54there was another kind of anecdote, which is very related, that got shared around 34:57online from this research group called Palisade ai, and they said, look, we did 35:00a test with o3 We said, we asked it to do a bunch of math problems and then we said, 35:04on the next one we're gonna shut you down. 35:07And they said, low and behold, you know, the model gave back a response 35:10where it tried to subvert, you know, the script that would shut it down. 35:13Right. 35:14And you know, I agree, Anthony, with everything you're saying, right. 35:16It's not like there's a brain in the box that's like, I'm 35:18gonna take over the world. 35:20But it seems like you could imagine engineer implementing these systems 35:22in a not very thoughtful way. 35:24Where this disabling behavior of the model really does have a safety impact. 35:29And so how do we, how do, what do we make of that? 35:30Right. Yeah. 35:30It's like this, like weird made up behavior actually has 35:33practical impact on the ground. 35:34Yeah. 35:35Yeah. 35:35I, I was seeing some critiques online that were saying, well, 35:38they planted that evidence, like going back to the Claude example. 35:42They put the emails in there. 35:43Mm-hmm. 35:43And they said, you have no other options other than to 35:46blackmail or to uh, shut down. 35:48Mm-hmm. 35:49Ah, gotcha. 35:50Right. 35:50But I think it's less of that like, yes, you're doing that 'cause it's. 35:54You're stress testing it. 35:55Mm-hmm. You're red teaming it. 35:55Yeah. 35:56And we actually want to discover if prompted to these certain ends, would 36:01it actually, uh, enact that outcome? 36:04Mm-hmm. 36:04Um, versus something that this is like an emergent behavior 36:08that it would just do unprompted. 36:10Mm-hmm. 36:10Yeah. 36:10Um, but I think it makes the case of why we need to stress test them. 36:13And I think it might get lost among the headlines that this 36:15was in a controlled environment. 36:17Um. 36:18It's something that we, we wanna test things, not when, like, we 36:22don't wanna wait for a fire to test. 36:23We wanna test it with smoke, even if we have to make the smoke ourselves. 36:26Mm-hmm. 36:27Um, and so given that, uh, you know, increasingly we are going to have 36:31applications where user data is contained in systems that, especially if we go 36:35all in on agents, agents will have access to, I'm not worried tomorrow 36:39about that type of situation happening. 36:42But I think it's, it's, I actually applaud, uh, Anthropic for 36:45releasing that in the safety card. 36:46Mm-hmm. 36:47Because I think it also opens up a conversation then for the other 36:50proprietary models to answer, is something similar happening with their models. 36:54For sure. 36:55Yeah. Yeah. 36:56Ash, what I love about this conversation is that. 36:58You know, computers didn't use to behave like this. 37:01Like my favorite like set of things is actually coming outta like the reasoning 37:04models where you're like, could you just think harder about the problem? 37:07And like the computer delivers a better result. 37:09Like we're actually, it seems to me dealing with like computers 37:12that now behave in these like kind of very human like ways as 37:15a result of their training data. 37:17And like, uh, we were talking a little bit earlier about like, 37:19how do you close that value gap? 37:21And it feels like. 37:23You know, will you really want to implement some of these systems 37:25if they're kind of like weirdly humanly unreliable in this way? 37:28Right. 37:28I guess what I'm trying to point to is like computers we've designed 37:31because they're like really good at following instructions and something 37:35have this model that's like really good at doing things, but occasionally is 37:37just like, I'm gonna blackmail you. 37:39Or like, I don't know. 37:40The other one would be the, the ChatGPT getting lazy around the holidays thing, 37:43and it's like, how do you make these systems reliable enough that like. 37:47You know, you would want to use like an enterprise would wanna use that at 37:49massive scale in a way that really would drive value, I guess is the question. 37:52Well, I think the first thing we need to do is we need to make 37:55sure that we stop training the models on any episodes of Black Mirror. 37:58Yeah, exactly. 38:00That was where we went wrong is like, yeah, I mean that actually, but it's 38:04actually kind of a serious comment. 38:05Yeah. 38:05Is basically like one way of dealing with Anthony. 38:07The problem that you're proposing is we just get a lot more, uh, Orthodox about 38:11how we treat training data, which is like. 38:15Something we haven't really done with an ai. 38:17Uh, do you think that's an approach? 38:18Uh, absolutely 38:19Uhhuh. I mean...Okay. 38:21Fine. 38:22You know, software, as you said, right. 38:24You know, it's deterministic. 38:25We are expecting it to do things and 38:27those expectations are you're gonna do this sequence of instructions, or you're 38:30gonna go, oh, there's an error for whatever reason, and we'll have bugs, 38:34but, you know, we'll figure that out. 38:35Mm-hmm. 38:35Um, with, with, with, uh, something that's operating with a level of 38:40stochasticity and you're getting back sort of like predicted things. 38:45Okay. 38:45Um, I think it means that, yeah, absolutely. 38:48We need to have far more rigor on the data that we're putting in. 38:52I mean, like, there's the. 38:54Age old saying of garbage in, garbage out, right? 38:56Mm-hmm. Yeah. 38:56Let's make sure we're not putting garbage in, you know, and we don't have 38:59to deal so much for the garbage out. 39:01That's right. 39:01Anthony, you wanna 39:01jump in? 39:02I agree. 39:03It's a big challenge. 39:05Uh, it's actually something that the AI Alliance is starting to take on. 39:08Mm-hmm. 39:09We have an initiative and we're bringing a lot of organizations together 39:10that are active in the data space. 39:12Mm-hmm. 39:12Uh, curators, tool makers, and so on, with the big ambition to try to 39:16build a much better corpus of data for training and tuning models. 39:20Mm-hmm. 39:20Um. 39:21Yeah, that's challenging, right? 39:23This is internet scale and beyond data. 39:25This is like massive generated data sets and so on. 39:28The, there's, you know, many techniques and nuances in the 39:30post training phase, right? 39:32Uh, so it's not easy, but it is a big challenge that we're starting to take on. 39:36Wouldn't it be great if we had the choice, right, of different levels 39:40of data sets to train models on? 39:42Mm-hmm. 39:42We could decide, or an organization can decide what level of scrutiny 39:45or clear or, or, or screening and so on, they want to use. 39:49That's, that would be, I think, very helpful. 39:51Mm-hmm. 39:51Um, we're, we're gonna try. 39:52Yeah, for sure. 39:53Yeah, and it's, I think those efforts are like really exciting. 39:55It's like very ambitious, but if you're able to pull it off, 39:57I think it could be really huge. 39:59Sarah, I think maybe the last bit of this I would love to talk 40:01about before we have to close up on the show is about interface. 40:04So, you know, I had a conversation with a friend recently where I 40:08was like, it's so lucky that chat ended up being the like key initial 40:12experience that people have with these systems because it models talking 40:15to a human and humans are unreliable and they have weird emotions and 40:19occasionally they try to blackmail you. 40:20Right? And so it's like, it's actually like good. 40:22That the paradigm that we bring to interacting with LLMs is that they 40:25are weird and fuzzy and unreliable. 40:27Because I could imagine designing an AI LLM experience that like, I don't know, 40:31looks like a calculator or like, looks like a, you know, a terminal, right? 40:35Which increasingly we are doing, but I'm curious about how you 40:37think a little bit about that in the trust and safety world, right? 40:40Which is, it turns out that like it may be more than just the model. 40:43It may be like what interfaces we choose that kind of set our expectations 40:47with what the model can and can't do. 40:48And that's, that's kind of safety relevant, isn't it? 40:50Yeah, no, it, it is like safety is go, goes at all levels of the life cycle. 40:54Mm-hmm. 40:55And what's really interesting is we are seeing repeatedly people are turning 40:59to these models, not for what maybe the creators even originally thought. 41:03Mm-hmm. 41:03So in terms of like talk therapy 41:05Yeah. 41:05And the potential negative societal effects that come with talking with 41:09a system that has been optimized to, uh, be helpful to you, to, you know, 41:14be sycophantic. 41:16Yeah. 41:16And that's, that's some of the red teaming that we do is actually 41:19sycophancy, uhhuh uh, testing. 41:21Yeah. 41:22Um, and like what kind of society do we have when a bunch of people are just 41:25constantly told that they are right. 41:27And replacing interactions with real people who in the course 41:32of a day challenge each other. 41:33Yeah. 41:33Um, and of course what I'm talking about is the whole 41:36vertical of companion AI. 41:37Mm-hmm. 41:38Yeah. 41:38But, um, aside from that, you know, I think, I dunno, it's, it's interesting 41:42'cause I think a lot about how, um, users will take the results from an LLM and 41:47just blindly trust it as authoritative. 41:49Right. Yeah. 41:50And sometimes maybe we could see the. 41:53Weird edge, like the silver lining of all of this, of the LLM acting 41:58weird, for lack of a better word. 41:59Mm-hmm. 41:59Yeah. 42:00As indicating like, wait, this is not a perfectly neutral, 42:04uh, authoritative source. 42:05Like you can query it different ways and it gives you different answers. 42:09And I think ultimately that's important for us to keep in mind. 42:11So that way we don't fall into the temptation of 42:14believing in some computer God. 42:16Mm-hmm. 42:16But rather remind ourselves of the stochastic nature of the, 42:19of the probabilistic nature that is undergirding these systems. 42:23So yeah, 42:23for sure. 42:24Ash, I wanna give you the last word, but I kinda wanna bring it full circle, right? 42:27Because I think there's a part of me that's kind of like, is 42:29part of the problem that like. 42:31Yeah, it's like the Bay Area. 42:33It's a bunch of nerds. 42:35They want to train like Spock, right? 42:37They want like a Vulcan conversational experience. 42:40But the problem is that it, it conveys greater authority 42:43than it otherwise should. 42:44And so the joke would be like if you did a tri-state ai, it'd be like, you know, it 42:47would be like, it would be kind of mean. 42:49Yeah. 42:49And you're like, yeah. 42:50And the kind of, the question is like, should we be fine tuning these models? 42:53To like be more unreliable, right? 42:56Like should we have LLMs have a bad day? 42:58Like you log into ChatGPT and it's like, I'm just not feeling it today, man. 43:01Like that would maybe be better in terms of like training the user to have the 43:04right expectations around these systems. 43:06Obviously no company would ever do that, but I think that's kind of 43:09the interesting question, right? 43:10Right now, 43:13the, uh, the genies out the bottle as you, as you said, using chat as that mechanism. 43:19Right. 43:20But going back to what we were talking about on sort of like how much these 43:24models cost to train and what, how much inferencing costs go? I think 43:28what's more interesting to me is what are the other ways that we're gonna 43:32start interacting with these models? 43:34In our day-to-day lives that are kind of like no longer, you just 43:38like having a intimate chat with it and it's just happening. 43:42Like it's accessing your calendar. 43:44Yeah. 43:44And it's like doing other stuff. 43:46Right. 43:46And I think that, um, this level of like sort of, uh, conversational AI 43:51that, that we have today, I think this is probably just sort of, I 43:55don't know, a little bit com like 43:58a novelty factor I think for us as a generation. 44:01Mm-hmm. 44:01But for like people who don't have the internet right now, or I think 44:05about, you know, sort of like my nieces and nephews and so forth. 44:07Right. 44:07They're probably gonna be interacting with these systems in very 44:09different ways than we are today. 44:10Yeah. For sure. 44:11I cannot wait until young kids are just like talking to ate objects, 44:14assuming they'll talk back. 44:15Yeah. 44:15Like that's gonna be the future of like kids touching screens, 44:17assuming that the touch screens. 44:18Yeah, exactly. 44:20Um, anyways, uh, this is an incredibly rich discussion. 44:23Sarah, Anthony, Ash, thank you for coming on the show. 44:25Uh, and thanks to all you listeners for joining us. 44:27Uh, if you enjoyed what you heard, you can get us on Apple Podcasts, 44:29Spotify, and podcast platforms everywhere, and we will see you again 44:33next week on Mixture of Experts.