
Prompting Scores and Claude 4 Insights

Key Points

  • The hosts ask guests to rate their own prompting skills, with Kate rating herself an 8, while Chris and Aaron dodge the question, highlighting the playful uncertainty around prompt‑engineering expertise.
  • The episode of “Mixture of Experts” focuses on recent AI news, including high‑profile collaborations like Rick Rubin with Anthropic, Jony Ive with OpenAI, and Microsoft’s new “agent factory” concept.
  • A major discussion centers on the leaked Claude 4 system prompt, noting that Anthropic’s unusually long and publicly annotated prompt serves as both a practical guide and a benchmark for modern prompting practices.
  • Chris observes that Anthropic’s transparency—publishing most of the prompt despite some redacted parts—effectively educates users on how to craft effective system prompts, underscoring a shift toward openness in AI model behavior control.

**Source:** [https://www.youtube.com/watch?v=e_B91C2vILc](https://www.youtube.com/watch?v=e_B91C2vILc)
**Duration:** 00:43:50

## Sections

- [00:00:00](https://www.youtube.com/watch?v=e_B91C2vILc&t=0s) **Prompting Self‑Ratings on the Podcast** - In the opening of the “Mixture of Experts” podcast, host Tim Hwang humorously asks a panel of AI professionals to rate their own prompt‑engineering skill on a 1‑to‑10 scale, underscoring the playful uncertainty about what truly constitutes expertise in prompting large language models.
- [00:03:04](https://www.youtube.com/watch?v=e_B91C2vILc&t=184s) **Cross-Model Prompting Insights** - The speaker highlights how system prompts from Claude 3.5 improve even unrelated Llama models, stresses the need to balance specificity with model autonomy, and discusses embedding safety “red‑flag” awareness into prompts.
- [00:06:10](https://www.youtube.com/watch?v=e_B91C2vILc&t=370s) **Debating Release of AI Prompts** - The speaker weighs the trade‑offs of publishing LLM system prompts—transparency and proof of capability versus security risks and the need for user expertise.
- [00:09:14](https://www.youtube.com/watch?v=e_B91C2vILc&t=554s) **Evaluating System Prompt Control** - The speakers discuss how system prompts—like directives to insert a “thinking block” after function calls—aim to steer model behavior, questioning the extent of their actual impact, the difficulty of thorough testing, and the need for academic access to validate these prompts.
- [00:12:19](https://www.youtube.com/watch?v=e_B91C2vILc&t=739s) **Leaked System Prompts and Future Exploits** - The speaker warns about the dangers of leaked system prompts and obfuscation tactics that can covertly shape model behavior, and notes a new Anthropic collaboration with legendary music producer Rick Rubin.
- [00:15:30](https://www.youtube.com/watch?v=e_B91C2vILc&t=930s) **Balancing Artistic Prototyping with Robust Engineering** - The speaker contrasts using creative “vibe coding” for rapid prototypes with the necessity of solid engineering for scalable, critical‑infrastructure applications, while advocating for diverse, artistic approaches to coding.
- [00:18:32](https://www.youtube.com/watch?v=e_B91C2vILc&t=1112s) **Balancing Technical Rigor and Creative Freedom** - The speaker argues that while architectural schematics and engineered processes are essential, preserving a creative, exploratory “vibe coding” mindset—much like music production—is crucial for innovative design.
- [00:21:36](https://www.youtube.com/watch?v=e_B91C2vILc&t=1296s) **Bridging Vibe Coding and Engineering** - The speaker likens vibe coding to collaborative invention, emphasizing the need to translate the creative, interdisciplinary brainstorming process into concrete, scalable engineering implementations.
- [00:24:39](https://www.youtube.com/watch?v=e_B91C2vILc&t=1479s) **Designing the Future of AI** - The speaker argues that beyond simple collaboration, shaping AI’s multimodal, on‑device future will require AI and design firms—led by visionaries like Jony Ive—to reimagine interaction paradigms, form factors, and agent behavior.
- [00:27:42](https://www.youtube.com/watch?v=e_B91C2vILc&t=1662s) **High‑Stakes AI Talent Deal** - A speaker critiques a $6.5 billion investment in a tiny AI firm, emphasizing the speculative $118 million‑per‑employee cost and the race to launch mass‑produced AI companions that could pressure Apple’s emerging intelligence platform.
- [00:30:50](https://www.youtube.com/watch?v=e_B91C2vILc&t=1850s) **Ecosystem Trust vs Data Tradeoff** - The speaker argues that while users accept sharing data for utility, they gravitate toward trusted, integrated ecosystems like Apple’s, and OpenAI must embed its services within such cohesive platforms rather than remain isolated.
- [00:33:52](https://www.youtube.com/watch?v=e_B91C2vILc&t=2032s) **Enterprise AI Agents: Democratization and Competition** - The speakers discuss how AI agents are becoming commodified, with multiple vendors offering prebuilt agents, and why Microsoft is focusing on training and customizable agents to meet enterprise demand while avoiding vendor lock‑in.
- [00:37:02](https://www.youtube.com/watch?v=e_B91C2vILc&t=2222s) **Azure‑Powered Supercomputer Sparks AI Agent Talk** - The speaker highlights Azure’s cloud‑based supercomputer ranking on Top500, emphasizing Microsoft’s compute strength and AI tools as a foundation for a burgeoning AI‑agent market.
- [00:40:10](https://www.youtube.com/watch?v=e_B91C2vILc&t=2410s) **AI Agent Marketplaces & Vibe Coding** - The speaker envisions standardized AI agent marketplaces that let specialized, interoperable agents “vibe code” tasks, turning the concept from a toy into a production‑ready factory model, exemplified by the newly released aLoRA technique.
- [00:43:15](https://www.youtube.com/watch?v=e_B91C2vILc&t=2595s) **Podcast Closing and Platform Plug** - The hosts wrap up the episode, thank the guests and listeners, and promote where to find the show on major podcast platforms while previewing next week’s “Mixture of Experts” IBM episode.

## Full Transcript
0:00How good are you as a prompter on a scale from 1 to 10, with 1 being totally 0:05amateur and 10 being world class? 0:07Kate Soule is a Director of Technical Product Management for Granite. 0:10Kate, welcome back to the show. 0:11Prompting. 0:12How are you at it? 0:12Prompting is never something I wanna be known for, but I do think I'm pretty 0:16good at it, so maybe like a, an 8. 0:18Okay, cool. 0:19Nice. 0:20Chris Hay, Distinguished Engineer, CTO, Customer Transformation. 0:23Chris, welcome to the show, uh, your prompting score as a Large 0:26Language Model. 0:27I could not possibly answer that question. 0:30Got it. 0:30And last but not least is Aaron Baughman, IBM Fellow and Master Inventor. 0:34Aaron, uh, your prompting skill please. 0:36Does prompt engineering really exist? 0:38Yeah, I'm not quite sure. 0:39I always ask LLMs to produce a prompt for me. 0:42Okay. 0:42Everybody's fighting the question. 0:44All that and more on today's Mixture of Experts, a Think podcast. 0:53I am Tim Hwang, and welcome to Mixture of Experts. 0:55Each week, MoE brings together the sharpest team of researchers, engineers, 0:58and product leaders you'll find anywhere in the world of podcasting 1:02to discuss and debate the biggest news in artificial intelligence. 1:05As always, there's a ton to talk about. 1:07We're gonna talk about Rick 1:08Rubin's collaboration with Anthropic. 1:10Jony Ive with OpenAI, uh, Microsoft's new agent factory theory. 1:14Um, but first I really wanted to start by talking about the Claude 4 system prompt. 1:18So you may have gotten our emergency episode where we did a quick review of the 1:23release of Claude 4 um, and true to form. 1:26Uh, pretty soon afterwards, the system prompts leaked. 1:28I think that's kind of just almost standard practice now. 1:31It was pretty interesting. 
1:32Simon Willison, uh, did a super interesting blog post where he sort 1:35of annotated the system prompt and I think in general I wanted to kind of get 1:38this group together 'cause we haven't talked about prompting in some time, but 1:41it's also just an interesting document as kind of like a state of the art 1:45on where prompting is at the moment. 1:47Chris, I'll start with you. 1:48Curious if there's anything that kind of stuck out to you reading this 1:50prompt that you felt was different or really kind of indicated where. 1:54You know, kind of the state of the practice was in prompting. 1:56I always find the Claude system prompts super interesting because one, they're 2:02very transparent about it as well. 2:04They publish it. 2:04I mean, there is some stuff they don't publish, but, uh, they're 2:07pretty transparent about that. 2:09But it's long. I mean, this is not a short system prompt, right? 2:13So if you think question, how good are we at system prompting? 2:17This thing is pages and pages long. 2:19So Anthropic are really giving you an education themselves on how to system 2:24prompt, pro, uh, how to prompt properly. 2:26So I think it's pretty good. 2:28There are 2:29a few things that I think is super interesting about it. 2:33Um, the first one is probably just, just simple things like 2:37guidance on how it wants to answer. 2:39It's like, um, you know, if it's a short thing, please just answer there. 2:43Don't use artifacts in this case, et cetera. 2:45Um, there's a lot of guidance perspective and then how to 2:49deal with personality as well. 2:52And you know, if it's a sensitive topic, blah, blah, blah. 2:55But I think the thing that probably. 2:56Makes me laugh the most is how it always talks to Claude in the third party. 3:01You know, Claude, you should do this, Claude, you should do 3:03that, Claude, you should do this. 
3:04And I, you know, and, and per AI is gonna have an existential crisis already, you 3:09know, thinking in, in a third party form. 3:11But, um, but I, I think it is worthwhile everybody checking out that system, though, 3:16because you, you can learn a lot from it. 3:19And then I, I remember last year. 3:21Um, when Claude 3, it just came out, I think it was the, called 3.5 Models 3:25At that point, one of the videos I once did was I took the Claude 3:303.5 system prompts and then I put them on top of the Llama models. 3:33And, and I'm gonna be honest, even though those system prompts were 3:36designed for the Claude models. 3:39They actually improved the, the Llama models as well. 3:41So I, I honestly, I think it's something everybody should really 3:45read up on that will help them. 3:46Yeah, for sure. 3:47So there's a lot there. 3:48And Kate, maybe I'll turn to you. 3:49I mean, taking Chris's first point, I thought one of the most interesting 3:52things in the prompt was the degree to which it really feels like in prompting 3:56we're trying to figure out how much we need to specify versus like leave 4:00up to the knowledge of the model. 4:01So there's is an interesting quote where it's like, Claude should 4:04be cognizant of red flags in the person's message and avoid responding 4:07in ways that could be harmful. 4:09And part of Simon's annotation is like, it just has a notion of what red flags are. 4:13Um, and curious about like how you think about that. 4:15I know Chris is saying that these prompts are very long, but it almost 4:18kind of presages a world where, you know, we can increasingly sort 4:21of rely on model knowledge and keep prompts almost sort of short. 4:24Um, but curious how you think, think about that. 4:26Yeah. 4:27You know, I think what surprised me most was just how much of the Claude, you know, 4:33experience they're leaving up to a single prompt versus breaking some of these 4:38things down into more granular steps. 
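Chris's cross-model experiment above (dropping a Claude system prompt on top of Llama models) comes down to swapping the system message in a standard chat payload. A minimal sketch, assuming an OpenAI-style message format; the prompt text is an illustrative placeholder, not taken from the episode:

```python
# Sketch: reusing one system prompt across different chat models.
# The prompt text below is a placeholder, not Claude's actual system prompt.
CLAUDE_STYLE_PROMPT = (
    "You answer concisely, avoid harmful content, and say when "
    "you are unsure rather than guessing."
)

def build_messages(system_prompt, user_msg):
    """Assemble an OpenAI-style chat payload; only the system prompt
    stays fixed when the same test is rerun against another model."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_msg},
    ]

# The identical payload could then be sent to a Claude endpoint and,
# unchanged, to a Llama endpoint to compare behavior.
payload = build_messages(CLAUDE_STYLE_PROMPT, "Summarize this article.")
```

In Chris's test the notable result was that a prompt written for one model family still improved another, so the only variable worth changing between runs is the model identifier.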
4:39So you mentioned red flags and you're saying, you know, all right, Claude, pretty 4:43please don't include, you know, don't respond to red flags. 4:46Whatever red flags might be. 4:48And you could easily envision in a different experience that 4:51Anthropic could have built where first there's a step where there's 4:54literal screening by a model whose only job is to screen for red flags 4:59or any other risks or harm and biases. 5:01And they might still be doing this behind the scenes, but, um, you 5:05know, I, I think where I see a lot. 5:07of the world starting to move and where I'd ex would've expected Anthropic to 5:11go a little bit more with Claude 4 and they didn't, is dividing this up into more 5:15steps, running more inferences and leaving less to kind of a really long essay that 5:21you have to maintain and, you know, do basically like security on a prayer. 5:25Like pretty, pretty, please will you, you know, not respond to harmful 5:28content, not respond to messages instead have more verifiable checks 5:32and balances that you can, uh, kind of 5:35articulate via software and more programmatic functions 5:38that you're checking. 5:39Yeah, for sure. 5:40Um, and Aaron, I guess to take the other side of kind of Chris's response 5:43to that first question, you know, you opened up with a, the, the, around the 5:47horn question by saying like, I don't actually really do much prompting at all. 5:50You know, um, and I think Chris is, you know, kind of almost taking the 5:54view that's like, it's good for us to kind of like read and understand 5:56what's going on here, but I don't know if you'd say this is maybe a 5:59little bit too aggressive, but like. 6:01Is it worth it for us to kind of study these prompts, uh, as 6:03someone who just kind of like gets models to generate them for him? 6:06Yeah, I mean, I mean, you know, there's, there's sort of two 6:09schools of thoughts here, right? 
6:10And, you know, should these prompts, you know, be released or 6:13not, you know, and if they're not released, then they're potentially 6:16gonna be leaked anyways, you know? 6:17And. 6:18I think one of the schools of thought is we should release the prompts, you know? 6:22Um, because it's, it's proof, you know, that AI can be incredibly smart, but 6:27it can still completely misunderstand the assignment or what you're telling 6:30it to do, unless you understand like that manual of how to use the LLM. 6:34Right. 6:35But on the other hand, maybe you don't wanna release the prompts 6:37because, you know, AI could be like this new intern, right? 6:40Where it's eager, unpredictable, but somehow it's already 6:43running the company, right? 6:44And so we have to be very careful about releasing too much. 6:47Um, and then, and the leaking part of this, right? 6:50From, from what I saw, it looked like that Anthropic did 6:53release some of the system prompts. 6:55But what was really leaked were the tools part, you know, which could 6:59be very, uh, dangerous, you know? 7:01And so. 7:02And so the, the notion that, you know, do people need to read, you know, these 7:05manuals to understand how to use LLMs, um, from the expert level, you know, you 7:10know, when you ask, are you one to 10? 7:12If you're like an 8 to 10, you know, then I think it's good to study it, right? 7:16Um, if you're down on the lower end, one, three - 7:18maybe not, but I do think that, um, you know, whether 7:24or not these prompts, you know, are gonna be released or not, right? 7:27Um, is sort of up in the air, you know, of, you know, should it or should it not. 7:31And there are a lot of inherent risk, right? 7:33About, uh, exposing, you know, these, uh, prompts. 7:37But there's also benefits. 7:39I think it's not a bad thing though. 7:40I mean, to sort of come back to Aaron's point and case for a second, right? 7:43It's like. 7:44It's more of a handbook and a guide for the model. 
7:47The model's gonna learn loads and loads of things over time, and it's 7:49gonna be put in different situations. 7:51But like us as humans, we're in different situations. 7:54How I act at a party is gonna be different to how I act on this podcast, right? 7:59So before we came onto this podcast, our wonderful producer 8:01was like, Tim, make your bed. Chris, 8:03sit up straight. 8:03You know, put your camera down, da da da, da. 8:05Here is the guide. 8:06For how you should behave in this scenario. 8:09And that's different in other scenarios. 8:11So I think it's okay for them to say, you know what? 8:14You are, you're not in an enterprise setting at the moment. 8:17And remember, the Claude model is gonna be doing enterprisey stuff. 8:20Maybe it's gonna be doing research, et cetera. 8:22You are now acting as a general chatbot. 8:24You're ask, you're answering general queries and that means. 8:28Average human beings don't want to hear you waffling on about, 8:31you know, life, et cetera. 8:32And what you think of this book, it wants it in a couple of paragraphs and, and it 8:36doesn't want you hallucinating things. 8:37It wants you to go and use the web tool and go, come back with the answers. 8:40So I, I think it's okay to have that in a system prompt to, to 8:44guide, like a handbook of how it should behave in that case. 8:47And because that's how we deal with things as well, right? 8:50In different scenarios, we have different guides of how we should behave. 8:54And I think one of the most interesting things here, and it goes 8:56back to what Kate and I were just talking, a moment ago about is like. 9:00You know, originally I think the idea of these prompts was 9:02to specify in detail, right? 9:04Like what you wanted the model to do. 9:06Um, and I always remember the, the joke I had with a friend was like, 9:09are we just rebuilding programming? 9:10Where you're like, you have to just like say really specifically 9:12what you want the computer to do. 
9:14But you know, there's another quote that I had written down here. 9:16So one of the elements in the prompt is if thinking mode is interleaved or 9:20auto, then after function results, you should strongly consider outputting a 9:24thinking block, which is like kind of this very funny thing where you're like, 9:26okay, now the model has thinking mode. 9:28But rather than saying like, under these specific conditions, engage it, it's just 9:32like you should strongly consider it. 9:33Right. 9:34And it, it's, it's sort of interesting on like the degree 9:36to which these prompts like. 9:38Are actually giving us control over what the models are doing or 9:41versus, or versus us just like, kind of like giving it vague rules. 9:44I don't know if, Kate, you wanna respond there? 9:45Well, 9:45I, I mean like maybe that's what might look like control. 9:49I think the other thing is like, how much have we really tested? 9:52And if you don't release system prompts, it's really hard for the 9:55academic community to do research and to validate some of this. 9:58But how thoroughly have we really tested if. 10:00Every single line of that system prompt actually has the intended effect. 10:05What is the degradation in performance and how often the model produces 10:09thinking if that line is there or is not. 10:12I see prompts all the time where, you know, people write them based 10:16off of like one weird edge case, and so they add a line and that 10:20one weird edge case disappears. 10:22But do they really impact the model behavior as a whole across, 10:26you know, everything that you're trying to impact and study? 10:29So I think there's also some degree of wishful thinking with system prompts 10:33where the model's been trained for a lot of these behaviors already, like when 10:38to do thinking and all sorts of stuff. 10:40Um, so. 
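Kate's question about whether every single line of a system prompt actually has its intended effect suggests a simple leave-one-line-out ablation. A hypothetical sketch; the `evaluate` callback here stands in for whatever benchmark you would run against the live model, and is not a real API:

```python
# Hypothetical leave-one-line-out ablation for a system prompt.
# `evaluate` is an assumed user-supplied callback: prompt text -> score.

def ablate_prompt_lines(system_prompt, evaluate):
    """Score the full prompt, then re-score it with each line removed.
    A positive delta means the line helped on the benchmark."""
    lines = [ln for ln in system_prompt.splitlines() if ln.strip()]
    baseline = evaluate("\n".join(lines))
    deltas = {}
    for i, line in enumerate(lines):
        reduced = "\n".join(lines[:i] + lines[i + 1:])
        deltas[line] = baseline - evaluate(reduced)
    return baseline, deltas
```

Run against a broad eval set, this makes the "one weird edge case" lines Kate mentions visible: their deltas hover near zero on everything except the edge case they were written for.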
10:41You know, I think we're trying to nudge and steer, but it also makes it seem 10:45like, oh, well if I told the model X, Y, and Z, then X, Y, and Z will happen. 10:50'cause I gave it this nice little playbook and I think it gives us a 10:52false degree of, uh, security that that is actually gonna be followed. 10:56I, I think a lot of these system prompts are probably way too long. 11:00If you actually want something that really is like, there should almost be standards 11:04of, is this system prompt, uh, certified to impact this type of behavior and, 11:09and the degree that it, it specifies. 11:12I do think there's a balance, right? 11:13That um, you know, you know, going back to tool calling, function, calling right? 11:17Is um. 11:19That there's a, there's a huge inherent risk, I think, of leaking 11:22those types of prompts, right? 11:24Because depending upon the, the use case, for example, on the extreme, if you're 11:27doing like, um, robotic surgery, right? 11:30And somebody could have a tool call, right? 11:32And hack the tool call and bypass different types of, um. 11:37Uh, different types of, let's say, uh, refinements, right? 11:40They could do jailbreaking, refinements, uh, bypass content moderation, um, force 11:47different types of searching, right? 11:48Which, which could have catastrophic, you know, impacts on the patient, right? 11:52So those types of, um, I think system prompts could be obfuscated. 11:57Uh, they could be encrypted within fragments such that they're not 12:02there, you know, to be used. 12:03Right. 12:04Um, you know, because I don't think some behavior 12:07should be, um, enabled. 12:09Right? 12:09Um, and, uh, released like if you're filing taxes or if you 12:13are sending an email, right? 12:15Uh, not you, but the LLM or genAI doing that. 12:19Right. I certainly wouldn't want it to "Oops. 
12:21Um, sorry Kate, I sent an email on your behalf, you know, uh, 12:24because I hacked in, you know, this certain tool call or function call". 12:28Right? 12:29Um, you know, so I ghosted her, right? 12:31So I mean, I mean, so those, those types of more extreme 12:34right exploitations I think just need to be carefully thought of. 12:37And I thought Anthropic was taking that into account by not 12:40releasing some of those, um, system 12:43prompting elements within their original sort of manual, right? 12:47But then they were leaked anyways, right? 12:48So, so there's always, you know, this, that, that, uh, risk and balance that I 12:51think we all need to just think about. 12:53Yeah, for sure. 12:54And I think, I don't know, the, the layers of obfuscation here I 12:57think will get very interesting. 12:58'cause at the end of the day, it's just, it's just tokens, right? 13:01And so you can imagine constructing a prompt where a human reads it and is 13:04like, oh, well these are the rules that guide the system, but actually like, 13:08you know, impose certain other kinds of not written behavior, uh, on the model, 13:12which I think will be like a really interesting, you know, next development 13:16if it hasn't already happened, right? 13:17Uh, because all these companies know that the system prompt's just gonna get leaked 13:20within hours of the model coming out. 13:27So I'm gonna move us on to our next segment. 13:29Um, really interesting collaboration dropped between, uh, legendary music 13:33producer Rick Rubin and Anthropic. 13:36Um, they dropped this kind of document, uh, on thewayofcode.com 13:41and what it appears to be is a rewrite of the Tao Te Ching. 13:44Um, but. 13:45About vibe coding. 
13:47Um, and, uh, this is like both a very funny kind of collaboration in some 13:50ways and made me think a little bit about this kind of famous interview 13:54that Rick Rubin did with 60 Minutes, where he said, I have no technical 13:58capability and I know nothing about music. 14:00Um, and, uh, he took a lot of, you know, criticism for this, um, being, you know, 14:06the legendary music producer that he is. 14:09But I kind of love this because it sort of asks the question for vibe coding about 14:13like just how far vibe coding will go. 14:16Um, and whether or not in the future we really will have Rick Rubin like 14:20producers for code, um, in the same way that we have for music where it's really 14:24unclear like what Rick Rubin's skill is. 14:27He just appears to be really good at getting number one hits. 14:30I don't know. 14:31Maybe Aaron, I'll throw it to you first is like, do you feel like 14:33in the future of vibe coding. 14:35We'll see people with zero technical ability be able to do incredible 14:39things with computers just given where things are going with code gen. 14:42Yeah, I mean, I mean there there, there's a, again, a continuum here, right? 14:46And you know, as, as a, as an engineer and scientist, right? 14:49I, I do believe that the mind gets into like these different patterns and 14:52constructs pathways as one develops 14:54and codes and builds. 14:55Right? 14:55And you can think of it as like a flow state, right? 14:57And then if someone just walks into your office, right, when you're in the middle 15:01of it, it's sort of like your flow state collapses and you gotta start all over 15:04and rebuild those constructs, right? 15:05And so that, that to me is 15:07kind of like this vibe coding. 15:09Um, but I think the way that Rick, um, you know, is approaching this is 15:12it's more of an art form or like this cultural phenomenon, you know, where, 15:17you know, you know, I did visit his, his was it way of code site, right? 
15:20And looked, um, and it looked like you could go in and actually personalize 15:25some of like the graphics and such, you know, that, that he already sort of 15:28seeded with a vibe coding element, right? 15:30So. 15:31So, so I think in short, you know, if you are building like a production, um, app 15:36application that needs to be at scale, I think pairing vibe coding with good 15:40engineering, you know, is very important. 15:42But if you're just doing it for, um, you know, a prototype to build an 15:46experience that doesn't have to be so precise, maybe this kind of style of 15:50vibe coding, you know, is the way to go. 15:53Okay. Any responses to this? 15:54Um, I know it's always this kind of push pull, right? 15:57I mean, I, I think Aaron's response has a lot in there is where it's like. 16:00Well, this is good, but like we might really need real 16:03engineering at some point. 16:05Um, but curious about what you thought reading through, uh, the way of code. 16:09Yeah. 16:09You know, I, I think in many ways, you know, coding can be viewed 16:13reasonably so as an art form. 16:15It's creating in kind of the act of creation. 16:17I think, uh, it is inherently artistic and creative. 16:22And so from that perspective, I think there is something 16:25interesting about how do we unlock 16:29future developers who don't have the same backgrounds, who bring different 16:32experiences to find new ways to solve some of the thorny challenging problems. 16:36And I think that's kind of the spirit that, uh, Rick is coming 16:39from that, that I've seen. 16:40But I also, you know, think if we talk about 16:44critical infrastructure and you know what the world runs on. 16:47Like, you know, there's a big difference between art and, uh, you know, mainframe 16:51systems that run, you know, all of the financial transactions in the world. 16:55And, you know, there's different degrees of reliability and, 16:57and trust in everything else. 16:59So, you know. 
17:00I think it's important to make sure that there's kind of a balanced approach 17:03at the end of the day. 17:04It's not saying the world is going to be vibe coding and only vibe coding, but how 17:09do we use this rather as a tool to engage more with the community, uh, with people 17:13who come from less traditional backgrounds that traditionally don't know how to code. 17:17But could bring really new, unusual and powerful ideas that could be, 17:22if you know, are being gonna be implemented in some sort of critical 17:25capacity, implemented maybe with more, uh, knowledgeable traditional means. 17:30Yeah, I love that. 17:30How basically, you know, maybe in an earlier era kind of computer code, 17:34you sort of couldn't approach in an artistic manner, but we're now living 17:38in a world where like almost the boundaries of that are a little expanded. 17:41Uh, and so you can approach it as if you were. 17:43You know, sort of a music producer or just kind of like vibing with it. 17:47Chris responses. 17:48I saw you went off mute here. 17:49Yeah, I love it actually. 17:51'cause I, I do think programming is, is an art form. 17:55I, I know we want it to be a science, but I do think it is art. 17:58So, and I, and I do love the idea of 18:01exploration and being able just to kind of figure things out. 18:04So I don't think we always need to take an engineering approach. 18:07And, and, and again, I, if I think of architecture, I, I don't mean 18:11computer architecture like we do. 18:12I'm meaning as in people with pencils and beards and flip flops 18:15and things like that, right? 18:16I don't, I don't know. 18:17Um, but, but. 18:19You know, if somebody came in and went, I want to design a new house, and you 18:23know, and then they, they start and sort of drew, drew a picture and now 18:26there's your new house, you'd be like, huh, should I give that to the builder? 18:30And they'd be like, sure. 18:32And you'd be like, I don't think that, how is this gonna work? 
18:34And. 18:35And, but that's fine. 18:36But if, if you then got the technical schematic architect who just 18:41build that, then you know they're gonna be following the process. 18:44Uh, you know, this, joist needs to connect to this. 18:46I don't know any about building terms. 18:48Joists think is the only one I know. 18:50And that's like a thing that houses have. 18:53Exactly. 18:54And then I know roof and stuff, and then there'll be, but, but 18:57where's the creativity that's not gonna create you, you know? 18:59Um. 19:01Uh, the Guggenheim or something like that. 19:03There you go. 19:04I was trying to think of something that was a fancy building. 19:06It's an art thing. 19:06Yes. 19:07Yeah, exactly. 19:08So, so and so I think you've gotta have that mix, and I think 19:12that's the, it's almost the same as like music production, right? 19:15So in Rick Rubin's case, right? 19:17It's, it's just like. 19:18I think vibe coding allows you to break things down into their individual 19:22elements and then recompose them, right? 19:24And then I think that's okay to then take that to an engineered state, 19:27but I, I think that whole process of creativity is a good thing. 19:31So I'm, I'm a big fan of vibe coding because you can test out ideas really 19:36quickly and explore it, and then you can go and engineer the parts that you 19:40need to engineer and, and get a little bit more, uh, process oriented about it. 19:45But, but. 19:45Why kill the creativity? 19:47So I love it. 19:48I'm a huge vibe coder. 19:50Uh, and I love the collaboration. 19:52Kate, uh, this makes me think a little bit about like how vibe 19:55coding is gonna evolve within an organization or within an enterprise. 19:59You know, in all the companies I've worked for, there's always been like a little 20:02bit of like, uh, a class system, right? 20:05Between like the designers and the engineers and the designers were like, 20:09here's a mockup that you should build. 
20:10And the engineers are like, we have to build it. 20:12And like, ah, like all these people with their crazy designs. 20:15And it kind of feels like what Vibe coding is gonna allow. 20:18Like what will change is that like designers can suddenly 20:20build workable prototypes. 20:22And so like there's a whole degree to which, like this allows a, a group 20:26of people within a company to kind of like seize the means of production in 20:30a way that I think might be like deeply disruptive to kind of like the, the 20:34natural state of affairs that has kind of presided, uh, over these companies and. 20:39That feels like it's gonna be really interesting to watch. 20:41Yeah, I don't know. 20:42I think it can go both ways though, because I think designers or whoever 20:46is, is trying to test the waters and who always say, oh, go build this. 20:51You know, it should be easy. 20:53Just put this button over there and then you'll be fine. 20:55And that button should do all these other things, by the way, and oh, 20:58it also needs to be compliant and X, Y, Z. And so they'll try it and. 21:02Undoubtedly it will fail if they just kind of vibe, code and throw it out 21:05into the world, uh, when it hits real production and, and learn some pretty 21:09nasty lessons that it's actually really complicated and there's a lot 21:12of important work that, you know, developers are doing behind the scenes. 21:16So, you know, I, I think it's just gonna probably be really important 21:20though as a communication tool, uh, to help better articulate 21:24vision, to help better explain what. 21:26What you're looking for or what the target goal is to help iterate faster on proof of 21:31concepts and you know, experiment faster. 21:33So I think it definitely will disrupt from those perspectives. 21:36Yeah. 21:37And it actually occurs to me as you're talking that like the 21:39annoyance will work both ways, right? 
Because suddenly engineers can be like, oh, I generated this picture 21:44of the website I wanted you to create. 21:46It's like everybody's gonna be Aaron in like in everybody else's business. 21:49It seems like Aaron. 21:51Yeah, I mean, I mean, I mean this, this whole notion of vibe coding to me 21:54is very similar to inventing, right? 21:56Because it's, you know, you get lots of people together, um, and 22:00you need different perspectives. 22:01You need the artfulness of creating novelty, but you also need the 22:05engineering to make sure it's implementable and, and it can be used 22:08in some kind of embodiment, right? 22:10And vibe coding, to me, is very similar, where you get, you know, 22:13the creatives together, you know, you know, it's, it's a blur. 22:16It becomes more of a blur where the scientists and 22:18creative now become one, right? 22:20Because you're vibing to. 22:22Sort of do like this vibe science or vibe engineering, you know, to have 22:25these alternative, um, hypothesis. 22:28You know, it, it's, it's like exploring different branches very quickly. 22:32Um, and then when you need to get into an embodiment, then you build right? 22:35And, and then, and then implement, you know, so, so. 22:39So I think some of the white space here would be how do we connect 22:42vibe coding, um, to the actual build implementation, right? 22:46And deployment of something that's practical, that's usable, that 22:50can handle high scale and load. 22:52Um, some, some of the really hard challenges, right, that 22:54we face every day, right? 22:56So that, so, so I'm pretty excited about that area, which I think is 23:00just beginning to emerge a bit. 23:07So for a third segment, we're actually gonna do another design 23:10and AI story in some ways. 23:12Um, the biggest business story of really the last week or two in AI has been this 23:17enormous $6 billion plus acquisition of Jony Ive's secretive startup io.
Um, and Jony Ive, if you don't know, was most famously the chief architect 23:29of the iPhone and kind of like the 23:30sort of design mind behind Apple during kind of a whole era of its history. 23:36Um, and the announcement is that Jony Ive himself, is gonna go collaborate 23:40with OpenAI on hardware, uh, through a design collective, um, that he owns. 23:45And so this is a, a huge transaction, right? 23:48Billions of dollars. 23:50Um, and you know, I guess Chris maybe to turn it to you like. 23:53Is it worth it? 23:55There's not even a product here, uh, and they're putting $6 billion down. 23:58How do you think about why OpenAI would do this and, and if it really 24:02is gonna pay out for them in the end, 24:03I hope for $6 billion, he does more than collaborate for them. 24:07That that seems a huge bill for collaborations. 24:10You know what I mean? 24:11I'm collaborating with you guys just now, and I, I'm not paying $6 billion. 24:15Sorry about that, Chris. 24:16Yeah, so I would be more worried if they paid $6 billion and Jony Ive went: 24:20"You can have my company, but I'm outta here, you're hearing nothing from me." 24:24You'd be like, what am I buying at this point? 24:27You know? 24:27So I, I, I think the whole thing, I, I mean, Jony Ive's incredible. 24:31I really do think, and, and therefore you're buying his talent, 24:37you're buying his brand, et cetera. 24:39So I, I, I do think it's gonna 24:41go beyond collaboration, and I think it's really gonna be about 24:44shaping the ideas that form what the future of AI is gonna look like. 24:50Because if we, if we actually truly think about where we are, we're now 24:54in this sort of multimodal world. 24:56We've got. 24:58AI is becoming cheaper, you know, being able to run on device. 25:02So there's new form factors that need to be discovered to, you know, 25:07to have AI in the right place. 25:09All right. How do I want to interact in that world? 25:11How is the world of agents?
25:13Yeah, I said it. 25:14How is the world of agents gonna behave? 25:17How does the future of web look like for that? 25:19How does the future of mobile devices? 25:21I think there's a lot of things to, to really work out and discover and, 25:26does that mean that how we interact today is gonna change 25:29and, and I think it will change. 25:30So actually. 25:32Being able to bring together AI companies and design companies 25:36together to go and figure out what that future looks like and experiment. 25:39I, I, I really think that is a smart move and, and have somebody like 25:43Jony Ive, who has, who's, uh, been through those transformations before. 25:48Um. 25:49I think it's a very sensible thing. 25:51Um, so I think it's an exciting collaboration and I kind of look 25:54forward to what this kind of next wave of experience design 25:58for AI is gonna look like. 26:00Yeah. 26:00And Kate actually, I mean, so I mean, to give them a little more credit, 26:03like this is more than just like a vibe acquisition in some ways. 26:07Uh, I was like curious. 26:08Uh, so there has been some details kind of leaked or rumored about 26:12what it is that they're working on. 26:13And as far as we can tell, it's a kind of like AI device with no screens. 26:18That's kind of their, their pitch. 26:20Um, and uh, that's 26:22pretty interesting, right? 26:23We've really built, you know, a whole digital paradigm on screens and so the 26:28idea we'd, that we'd go completely no screen in the future thanks to AI is, 26:31is pretty surprising, don't you think? 26:33Yeah, I think it's very surprising, but you know, I think it also kind of gives 26:38vibes of some of the AI companion type things we've seen, like, and nobody wants 26:43just basically to be accused of making a tamagotchi where you've got a tiny 26:46little screen companion that you know that you have to feed, otherwise it dies. 
26:50So, you know, I, I think they're probably gonna lean into some sort 26:53of this like, life assistant route doesn't need eyes, you know, or if, 26:58if it doesn't need eyes, it doesn't need a screen to communicate with you. 27:01Right? 27:02We've got better tools now, um, that they're working on, but it'll be 27:05interesting to see what they come up with. 27:07You know, I, I've struggled to see that they won't be some sort of like. 27:10Phone app experience as well that connects to whatever 27:14device they're also working on. 27:16Yeah. 27:16It's hard to untether from that completely. 27:18Um, yeah. 27:19Aaron, how do you size it up? 27:20I mean, so the most obvious precedent for something like this is the, 27:23the humane pin, which I think we're talking about a year ago, right? 27:26Which is like a screenless device that you wear that's always on, that is kind 27:30of like an AI assistant in your life. 27:32Um, and. 27:33One point of view is like no one wants that and that's why it didn't work. 27:37There's another point of view, which is the technology kind of wasn't 27:40there, and we might finally be there. 27:42I don't know if a year later is enough time, but obviously things 27:45are changing very quickly in AI. 27:47Yeah, I mean, I mean, I'm a bit stuck on, you know, that, that this is one of the 27:51largest deals for 55 employees, right? 27:54That's what it is. 27:54That at least that we know of, and if I do the math right, 27:57that's what about 118 million-ish 28:01per employee that you're paying for. 28:03I, I mean, yeah, that, that's, you know, pretty good. 28:06It's a high stakes bet on this, on this talent, basically because, 28:09uh, the valuation right, is very speculative because I don't think 28:12that this company has created a user base or any devices at all. 28:17Right? 28:17So it's, it's basically a high stakes bet on bet on design talent, right? 28:21For these 55 employees, right? 
But if it goes right, um, to creating these AI companions, you know, so. 28:28I saw that, that, uh, Sam Altman, they wanted to release what, a hundred 28:31million of these AI companions, right? 28:32I, I mean, roughly about that, you know, and if they can pull it off, I mean, 28:36they can sell these very cheaply to get back, you know, their 6.5, you know, um, 28:43what billion dollar, uh, investment here. 28:46Right. 28:46But, but, but I mean, yeah, but again, you know, I, I just wanna 28:50see something tangible, right. 28:51Very quickly. 28:52Um, and. 28:54And I think that they can pull it off. 28:56Right? 28:56Right. 28:56I think their mission is in the right place and, and I would just say, you 29:00know, Apple, you know, watch out, you know, Apple Intelligence, you know, 29:02you need to get that going quickly. 29:04Right? 29:05Because I think if OpenAI you know, works with Ive here, then you 29:09know, then these AI companions could really, you know, be a nice bet to 29:15understand what's happening in one's life without having a screen perhaps, or, 29:19you know, maybe you're going to extend to an already existing screen, right? 29:24That's already there. 29:24But, but these different form factors, I think it's gonna be 29:27really, really interesting and, and combining, um, these cutting edge. 29:31AI experiences, right. 29:32Is is gonna be fascinating to watch as the field emerges. 29:36One thing though, Aaron, that I think Apple does well, obviously, 29:39they definitely need to catch up. 29:40I think you're right, but as we talk about why, in the past, assistants have failed. 29:44And what I think OpenAI will struggle with is still this 29:47notion of privacy and trust 29:49with data. 29:49Like, I think another reason why the AI virtual assistant companions, I mean, 29:53there was plenty of things that were done for on edge, you know, type learning.
29:57But it is just still this like shadiness factor of like, what are, why are my 30:02life's now being recorded and being beamed up to, you know, some machine and AI 30:07intelligence and I, I don't know that. 30:09OpenAI is best suited to crack that. 30:11So it'll be interesting to see if the new design team can help and think 30:14through new ways to design for trust. 30:16I think that's something Apple does have as a better starting position 30:21if they can figure out, you know, some of their Apple intelligence 30:23work, what they're doing there. 30:24Yeah, for sure. 30:25Yeah. 30:26I think the, the paradigm shift that will, that is implied for OpenAI to 30:30get this right, I think is really hard. 30:31Um, 'cause I think it's more than just devices, it's more consumer trust, and 30:35how do you ensure that from a technical standpoint, and it's like a whole 30:37nother way of thinking about this stuff. 30:39I don't know, I think we overthink trust sometimes. 30:42You know, I, I mean, I know we want trust, et cetera, but it's a trade, isn't it? 30:47It is like, here is the functionality that I'm gonna get, how. 30:51Better is my life gonna be, think of the hundreds of million of people 30:55are using ChatGPT every day, right? 30:57And, and everybody knows that you're giving away your data, 31:00but you know what it is. 31:02You're getting utility from that. 31:04So everybody's kind of prepared to make that payment or not. 31:07And some things you're not gonna make that payment for and 31:10say, okay, I don't trust that. 31:11And, and you'll lean into something and say, wow, you know, the Apple approach in 31:15this case is gonna, is gonna be important. 31:17And you might lean into that direction. 31:19But I think everybody sort of. 31:21Takes that utility and we understand that we're given a bunch of data away. 31:26Um, personally I find it very unlikely I'm gonna give up my iPhone. 31:31I love my iPhone, I love my iPad. 
31:33I, everything is connected, all of my movies and, and this 31:36thing doesn't have a screen. 31:38What am I gonna play my movie on? 31:39So, I, I just, I, you know, I, I, I think there is a whole point 31:44about ecosystem things don't exist within islands, actually. 31:47And the thing that Apple does very well. 31:49Is they have a very good ecosystem of platforms and devices, right? 31:53Where everything connects well, and therefore, if they're making a 31:57move into that space, and I think they'll do very well, is you have to 32:01bring the ecosystem along with you. 32:02Because actually, back to the point about that pin thing, right? 32:06That didn't connect into anything, so it was, it sat on an island. 32:10So I think that's really gonna be the problem OpenAI has to think about is. 32:14Is, what ecosystem are you gonna plug into? 32:17And guess what, the, the only two choices in this case are Apple and Google. 32:21So, you know, um, so you, you gotta start figuring this out because, 32:26because if you can't plug in into that ecosystem, you're gonna have a problem. 32:33Alright, so Chris already beat me to it by saying the word agent, but we'd be remiss 32:37if we didn't do a story about agents. 32:40Uh, so I'm gonna close up today with our last segment. 32:43Um, super interesting Verge interview that popped up with Jay Parikh, who was the 32:47former, uh, engineering lead over at Meta. 32:51And is now over at, uh, Microsoft working on all things agents for them. 32:55Um, and we haven't heard from Jay in a little while. 32:57I think we talked about him on the show when he first joined Microsoft. 33:00And so, you know, I thought it'd be useful to kind of check back in on what he's 33:03been working on and to talk a little bit about Microsoft's strategy in the space. 
Um, I think the most interesting part of the interview, he had this quote, he said, 33:10I want our platform, meaning the Microsoft platform for any enterprise or any 33:14organization to be able to be the thing they turn into their own agent factory. 33:18So the idea is like. 33:19Whatever you're building, you're gonna be able to turn it into 33:22an agent using Microsoft tools. 33:24Um, and we've talked about this, this came up on last week's episode 33:27as well, which is that, you know, it's a little bit of a joke. 33:30The agent means everything. 33:32And I think one way of thinking about what these companies are doing is 33:34that they're all battling for like. 33:36What an agent even is. 33:38So you know, for Google I/O everybody was saying, oh well their 33:41version of agent is like search. 33:43It's like not that surprising. 33:44'cause they're a search company and I guess Microsoft is kind of articulating 33:47a new vision or their own vision, if you will, on how agents should work, 33:51which is very much kind of like I. 33:53You know, not really a platform, but like every enterprise being 33:56its own manufacturing kind of facility for agents, I guess. 34:01Kate, maybe I'll turn to you as like, it, it's assumes a world 34:04where these things become really commodified and really democratized. 34:08Um, do you see that happening, right, like soon? 34:10Is that a realistic way to think about where the market is going? 34:12Yeah, I mean, I think we see also a lot of other industry players that are putting 34:18some pressure on Microsoft to do similar. 34:20Like we've got, uh, Agentforce from Salesforce, you know, 34:25all everyone's coming up with a suite of pre-canned agents. 34:28watsonx announced a bunch of agents at Think just this past conference. 34:32And so I think Microsoft's trying, just to better speak the language 34:36of what all of our enterprise users and customers have been trained 34:39to speak, which is, I need agents.
34:41I need agents now, everything I can build can be built as an agent and 34:45trying to make sure that they're hyper targeted towards this 34:49kind of modality for how people are trying to build and starting to build. 34:53And I think it is very much being democratized as we start 34:57to see a lot of performances for useful enterprise tasks converge. 35:00Any model can do a lot of the things that, um, drive, you know, 80% 35:04of the value for these companies. 35:06So. 35:07The ability to build your own, to swap out parts, to customize, I think is gonna 35:12be critical as people continue to look to how to avoid getting locked into just 35:17kind of one, one endpoint and, you know, ultimately continuing to innovate within 35:21their own four walls of their company and how to use their data to to create value. 35:26Aaron, there's almost a question here. 35:27I think about like, almost like the ceiling on commodified agents. 35:32Uh, we talk a lot about, I think, on this show about like how complex it is to 35:35like orchestrate agents to work properly. 35:38You know, you need like the right protocols and you need tasks to 35:40be done in the right way, and it needs to be fine tuned and evals. 35:44The skepticism I've always had is like, well, it just seems like not 35:46every enterprise just has people who know how to do that outta the box. 35:50But I guess Kate's, I don't know Kate, I don't wanna put words in your mouth, 35:53but you're almost kind of arguing that there's enough kind of common tasks 35:56that like the sort of out of the box agent will be something that like most 36:00enterprises be able to, to play with. 36:02How do you think that market's gonna evolve? 36:03It sort of feels like it's like gonna go in two directions almost over time. 36:07You know, whenever I think about agents, the first. 36:10Thought that pops in my mind is James Bond oh oh seven. 36:13Right. He's the ultimate agent. 36:14Right. 
And, and we need to watch out for double agents and make sure that we 36:17can ensure that they don't go rogue. 36:19Right. 36:20Um, and I, I was, you know, looking at this and you know what this 36:25Agent Factory has, you know, it's, it's like it has this a service. 36:28It, it uses agent identity and governance. 36:31You know, where I can provide identification for each of the agents. 36:35Such that you can't go get a fake ID and, you know, uh, do maybe 36:40doppelgang, you know, another agent to go do something else, right? 36:43Um, you know, it's got observability management, low code, no code tools, 36:47but, um, but I mean, you know, I think everybody in industry is trying 36:51to do, you know, get in the game, AI agents, what they should be. 36:55Uh, but I think for Microsoft, one of the biggest differentiators that I see, 36:59um, I happen to look, uh, two weeks ago. 37:02You know, I look every now and then at the, it's called top500.org. 37:05It's, it's this website that tells you the, the fastest 37:08supercomputers in the world. 37:09And, um, I was curious, was cloud on there? 37:12Right. 37:12And I think it was the number four, the fourth ranked one was called Eagle. 37:16Right. And it runs and built on, on Azure. 37:19Right? 37:19So it's a cloud-based, um, super supercomputer, which, you know, I 37:23didn't think I would see that right. 37:25Happen so quickly that Wow. 37:27Okay. 37:27So this isn't like a, like a Blue Gene, you know, a, a 37:30particular piece of hardware. 37:32Right. 37:33Um, so, so to me the compute power that, um, Azure, that, that Microsoft 37:38has on Azure, I think really can give them, you know, a nice opportunity here. 37:42They have data sources. 37:44They can integrate with Windows. 37:45They already have what Azure AI Copilot 37:48pieces they can expand into, into consumer markets, you 37:51know, with like Windows Copilot.
37:53So I think they have sort of the bread and butter elements, right, to 37:56make this AI agent factory happen. 37:59It's just hopefully they can, uh, release some of these features to map 38:03to their vision of how they're gonna do it so we can avoid these double agents. 38:07Yeah, for sure. 38:07And Chris, it looks like you're about to jump in. 38:09I mean, if I can kind of, maybe. 38:10Prompt you with a question. 38:12You know, we've talked a lot about like who's gonna win in the agent market. 38:16There's almost a part of me that kind of thinks about Aaron's comment and is like. 38:20Maybe actually over time the agent markets is gonna divide up, right? 38:23That it'll just turn out that like if you have a task that really requires 38:26search, you'll be using Google's agents, but you may not really need 38:29all those capabilities and so you'll like, maybe you really are more married 38:32to the, the Azure infrastructure and so you use Microsoft, like it may not 38:36be winner take all in this market. 38:38I don't know if that was what you're gonna address, but. 38:40I, I don't think it will be winner take all, and I'm, I'm, I'm happy to kind of 38:44say that because one of the big thing that's really happening in the market 38:47at the moment is the commoditization. 38:49So if we really think about what's going on here, I. All of the major 38:54providers have hooked onto model context protocol, MCP, as the, um, 39:00standard for remote tool calling. 39:03And I think that's a good thing, right? 39:04Because we're gonna move into this, uh, this world where we want to 39:10be built on composition. 39:11So if everybody's at least standardizing on tools, then there can be a 39:14marketplace of tools, and it also means the models can be trained to 39:17work with those tools very well. 39:19And therefore, if you want to shift to a different agent for 39:22whatever reason, then guess what? 39:24You can bring your tools along with. 39:25That. 
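Chris's point is that MCP gives every provider a common, machine-readable way to describe and call tools, so a tool built once can travel between models and agents. The sketch below is not the actual MCP protocol or SDK, just a minimal Python registry illustrating the idea; all names (`tool`, `get_weather`, the schema shape) are made up for the example:

```python
import json
from typing import Callable

# Minimal tool registry sketch: each tool is published as a JSON-schema-like
# description (the part MCP-style protocols standardize) plus a callable.
TOOLS: dict[str, dict] = {}

def tool(name: str, description: str, parameters: dict) -> Callable:
    """Decorator that registers a function under a standardized description."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = {
            "description": description,
            "parameters": parameters,  # parameter spec the model sees
            "fn": fn,                  # implementation the model never sees
        }
        return fn
    return register

@tool("get_weather", "Look up the weather for a city.",
      {"type": "object", "properties": {"city": {"type": "string"}}})
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed result for the sketch

def list_tools() -> str:
    """What a model (or a different agent) would discover: names and schemas only."""
    return json.dumps({name: {k: v for k, v in t.items() if k != "fn"}
                       for name, t in TOOLS.items()})

def call_tool(name: str, arguments: dict):
    """Dispatch a model-issued tool call by name."""
    return TOOLS[name]["fn"](**arguments)

result = call_tool("get_weather", {"city": "Berlin"})
```

Because the caller only ever sees the JSON descriptions, swapping the agent or the model underneath leaves the tool marketplace intact, which is the interoperability Chris is describing.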
And then for, and I think in the factory context, this makes sense as well, right? 39:31So from a factory perspective, I'm gonna want to build something, right? 39:35But actually maybe 80% of the tools already exist and maybe 39:3980% of the agents that can work with those tools already exist. 39:43And I really need to do the 20%. 39:45Whereas in the previous world, I would've had to do all of that. 39:48So I, I think that becomes important. 39:51And then with things like 39:52A2A and ACP. 39:53So having agent protocols where you can have a standardized way of having agents 39:58be able to talk to each other, and again, whether it's Salesforce, Microsoft, 40:01et cetera, they're all landing on protocols, sort of for that interoperability. 40:06So I think that moves us into a marketplace again. 40:10So I think as soon as you start to get in this world of 40:13marketplaces and you have this. 40:15Area of standardization, then I hope that that means we get away from this 40:20winner takes all market and then folks can specialize on the things that they're 40:23really good at and their differentiation. 40:25Their differentiation, the, the, the good news and probably the bad news 40:29at the same time is actually, I think this brings us back into the discussion 40:34we had at the beginning, which is about vibe coding, because actually. 40:39If I've got the engineering of agents that do tasks really well and I've got tools 40:43that do things really well and models have done that, and then agents know how to 40:47talk to each other and we all know how to talk to models, et cetera, then actually 40:51vibe coding becomes quite interesting. 40:55I. In the world of factories because then I can sort of vibe up what I want 40:59and then I can hand it across to some agents who are gonna do a productionized 41:03version and use productionized tools and it completes that circle. 41:06So. 41:06So I know we were talking about vibe coding being a toy, but actually I.
I 41:11want you to think about that factory model for a second that kind of Microsoft's, 41:14uh, discussing and, and I don't, I, I think those two worlds blend over time. 41:19I can also envision a world where we have these AI agent skills, 41:23marketplaces, you know, so, you know, if, if we use these new approaches. 41:27Um, so, so we just released what, what's called aLoRA, I think it's 41:31activated low rank adaptation, where, um, you have like these weights that 41:36go, that can influence the attention. 41:39So, so your weight matrices that project and create, whether it's your, your 41:43keys, your queries, your values, right? 41:46Um, but they can be fine tuned to what kind of skill you would like, right? 41:51And then you save those weights and you can dynamically on the fly 41:55import that skill so that now that same model of which the, the 41:59same model topology of which you created your LoRA weights now 42:02has a different behavior, right? 42:03So, so this decentralization of skills is there, and you could vibe, 42:08do, do some vibe, skill, right? 42:10To create what kind of skill vibes with you and then put it up on 42:13a marketplace, right, to a share with your friends or, you know, um. 42:17Or create these emergent skills, but, but I think that might be where it's going. 42:21Um, and then last thing, I could talk about this for a while, but model 42:24distillations could play in that as well. 42:27Okay. I'll let you have the last word here. 42:28I know you on the first conversation, were maybe the most strong on 42:32look, you're not gonna use vibe coding to like build a bridge. 42:35I think Chris is maybe ending on a note of optimism. 42:37It's like maybe agents are the, the bridge that gets you there. 42:40Um, do you, do you buy that story or are you still a little bit skeptical about, 42:44you know, how far Rick Rubin can get. 42:45Uh, 42:46I think humans are gonna have to be in the loop more than 42:49just in the vibe coding step.
So I completely agree. 42:52I think vibe coding to create something, kicking it over to an 42:56agent to iterate, build it out a little more detail, like all fair 43:00game and is gonna be pretty exciting. 43:01But I'm not ready to totally just kick out the, the human in the loop part of 43:07the process there, where they start at the beginning and then, uh, you, you, 43:10you just see what bridge pops out on the other end and walk across it, uh, blindly. 43:15Seems like a fine bridge. 43:17All my agents are telling me it's the best. 43:20Yes, every agent agrees. 43:22Yeah, exactly. 43:23All right, well that's all the time that we have for today. 43:26Kate, Chris, Aaron, always great to have you on the show and, um, thanks 43:29all your listeners for joining us. 43:30If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, 43:33and podcast platforms everywhere. 43:35And we will see you next week on Mixture of Experts. 43:38IBM. 43:48Great job, everyone.