
# ChatGPT Usage, AI Economics, Expert Insights

**Source:** [https://www.youtube.com/watch?v=1PlyV-pf9_M](https://www.youtube.com/watch?v=1PlyV-pf9_M)
**Duration:** 00:46:50

## Summary

- The "Mixture of Experts" podcast, hosted by Tim Hwang, brings together AI innovators (including IBM fellows and master inventors) to dissect the week's most significant AI research and news.
- The episode's agenda covers a range of cutting-edge work: the NBER study on how people actually use ChatGPT, the latest Anthropic Economic Index, DeepMind's research on agent economies, the AlterEgo demos, and Meta's newest wearable technology.
- In the news segment, the hosts note that Alphabet's market value recently crossed the $3 trillion mark, the WTO predicts AI could increase global trade value by nearly 40% by 2040, and researchers have taught dogs and parrots to interact with touch-screen devices.
- The discussion then pivots to the NBER paper "How People Use ChatGPT," highlighted as a rigorous, economics-style analysis that systematically maps real-world ChatGPT usage patterns.
- Lauren McHugh shares a personal connection to the study: her former professor David Deming is a co-author, giving her insider insight into the paper's methodology and its implications for understanding AI adoption.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=0s) **Is Humanity on AI Autopilot?** - The hosts and expert panel debate whether society has ceded control to AI, while reviewing recent research papers, demos, and industry headlines on the Mixture of Experts podcast.
- [00:03:57](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=237s) **AI's Economic Impact Through Embedded Search** - The speakers contend that generative AI will generate far greater economic value by being integrated via APIs into everyday services as a search-enhancing tool, rather than through standalone chat applications like ChatGPT.
- [00:07:48](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=468s) **Misaligned Expectations of LLM Use** - The speakers highlight how real-world interactions with large language models diverge sharply from popular assumptions: coding accounts for only about 4% of chats, while therapy, relationship advice, and gaming each make up less than 2%, underscoring a gap between anticipated and actual user behavior.
- [00:14:17](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=857s) **From Search to Predictive Feeds** - The speakers argue that emphasizing traditional keyword search is shortsighted, advocating for AI-driven agents that proactively surface relevant information, while warning that such automated feeds may replicate the engagement-centric biases of current social media.
- [00:17:22](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=1042s) **Anthropic AI Usage Density Insight** - The speaker highlights the significance of a normalized "usage density" metric for gauging real-world AI utility, noting its correlation with high-income economies while also emphasizing emerging adoption in lower-income regions through remote learning and accessible tools.
- [00:20:34](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=1234s) **Global Disparities in AI Chatbot Use** - The speakers analyze how nations such as Singapore, Canada, and India vary in adoption rates and purposes, like coding versus general queries, for chatbots like Claude and ChatGPT, highlighting cost barriers that influence usage patterns.
- [00:25:12](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=1512s) **Geography, AI Adoption, and Agent Economies** - The speaker reflects on a hopeful study suggesting that location need not dictate AI opportunity and proposes grassroots initiatives to raise AI usage in underserved areas, then pivots to DeepMind's speculative "Agent Economies" paper, which envisions future economies populated by interacting AI agents and highlights the novel risks this scenario may introduce.
- [00:28:49](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=1729s) **Challenges of Emerging Agent Economies** - The speakers examine how autonomous AI agents communicate and act across layers, the risk of intent loss and unintended outcomes, and the necessity of steering such agent-driven markets, citing algorithmic trading in finance as a cautionary example.
- [00:32:46](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=1966s) **Rise of AI Agent Companies** - The speaker is optimistic about a forthcoming era of agent-driven firms, highlighting projects such as MetaGPT, a full-stack software startup run by agents, and AI Scientist, a multi-agent system that independently conducts scientific research and publishes results.
- [00:35:54](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=2154s) **Invisible AI Prototype and Meta Glasses** - The speaker describes a low-profile AI demo that operates without visible wearables, likening it to an "invisible" AI companion, and then references Meta's recent event unveiling next-generation Ray-Ban-style smart glasses.
- [00:40:13](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=2413s) **Challenges of Thought-Controlled Messaging** - The speakers explore how ostensibly simple AI chatbot interfaces become unexpectedly complex with near-telepathic wearables, stressing the need for an approval step to ensure only intended thoughts are transmitted.
- [00:43:18](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=2598s) **Wearable AI: Seeking Real Use** - The speakers debate how to pinpoint everyday, valuable applications for AI-powered wearables beyond novelty, citing smartwatches' fitness tracking success and proposing speech-impediment assistance as a high-impact, empathy-driven use case.
- [00:46:25](https://www.youtube.com/watch?v=1PlyV-pf9_M&t=2785s) **Podcast Closing and Promotion** - The hosts wrap up the episode, thank guests and listeners, and promote the show's availability on major podcast platforms.

## Full Transcript
Whenever I hear people's hypothesis and I read the paper, I ask myself this question, right? Is the human race officially on autopilot? Because first we use ChatGPT, you know, for help, but then we used it for everything. All that and more on today's Mixture of Experts.

[Music]

I'm Tim Hwang and welcome to Mixture of Experts. Each week, we bring together a panel of the innovators who are pushing the frontiers of technology to discuss, debate, and analyze our way through the week's news in artificial intelligence. Today, I'm joined by a great crew. We've got Aaron Baughman, IBM fellow and master inventor, Lauren McHugh, program director, AI open innovation, and joining us, I believe, for the very first time, is Martin Keen, master inventor. This is going to be a great episode today. We're going to be covering a lot of interesting research. We'll talk about a great paper out of NBER called "How People Use ChatGPT," the latest edition of the Anthropic Economic Index, and a paper out of DeepMind on agent economies. We're also going to cover a pretty interesting set of demos called AlterEgo, and also talk about the recent Meta wearable. But first, as always, we've got the news and headlines from Aili. So Aili, over to you.

>> Hey everyone, I'm Aili McConnon, a tech news editor for IBM Think. I'm here with a few AI headlines you might have missed this week. Google's parent company, Alphabet, joined the $3 trillion club. Yes, that's trillion with a T. Only three other companies besides Alphabet could brag about having a market cap over $3 trillion. According to a new report from the World Trade Organization, AI could boost the value of trade in goods and services by nearly 40% by 2040. Are you ready for the animal internet?
Scientists from the University of Glasgow have taught dogs and parrots to interact with touchscreens. The parrots learned how to use their tongues to play music, and the dogs use their paws to call their friends. Want to dive deeper into some of these topics? Subscribe to the Think newsletter linked in the show notes. And now back to the episode.

>> So for the first segment I really wanted to talk about a really interesting paper that came out of NBER, which is in some ways kind of like the gold standard for working papers coming out of the field of economics. It's a paper entitled "How People Use ChatGPT," and in many ways it's a very straightforward paper that breaks down, I think for the very first time in a professional, academically grounded way, how people are using ChatGPT. And I guess, Lauren, we're very fortunate to have you on the show, because I believe you mentioned that David Deming, one of the authors on the paper, actually was your old professor.

>> That's right.

>> And so I guess, kicking off, maybe I'll give you the kind of opening remarks on this one. Curious what you thought about the paper, and if there are particular things that stood out to you in terms of the trends, anything unexpected, or did this really confirm your biases in terms of how people are using this technology?

>> Yeah, I mean, I think, knowing a little bit about the research behind this, what I appreciate is what it takes to actually try to create a taxonomy around how people are using ChatGPT. And they did that. They had a classification for the different kinds of tasks that people are using it for.
And I think what stood out to me was that the number one task was gathering information, which is search. You know, ChatGPT is really "search GPT." And then what that means, because the point of this was to understand the economic impact: is it really that, at least for now, the main use case is more of like a search engine 2.0 technology, versus, I think, the probable hypothesis going in, which was that this is a net new, category-making technology? So I think it's interesting to see how that will evolve. But I also think that looking at the economic impact of gen AI through a standalone consumer tool like ChatGPT itself is really limiting, when I think the probably bigger economic impact is where that technology, via an API, gets embedded into other software that we use every day. So, like, when I can go to Amazon and search, you know, what are great toys for a 5-year-old, or I can go to the New York Times website and search, what's the latest update on XYZ bill getting passed in the Senate. To me, those are probably, or I would argue are going to be, the bigger economic impacts than people using gen AI through a standalone chat interface like ChatGPT.

>> Yeah. And actually, Martin, we'd love to pull you into this discussion, because, Lauren, similar to you, I read it and I was like, wait a minute. We've been sold this multi-purpose, general-use technology. It's just search, you know. And I guess, Martin, I don't know, is that in some ways kind of a disappointing outcome?
It sort of feels like, Lauren, the argument you're trying to make is, well, it's still early and all these other types of use cases are maturing. Is there a possibility here, Martin, where it just turns out the main use of AI is search?

>> Well, that's certainly what this report indicates, is how many people are using it for search. What I found interesting, as somebody who really works in education, was that the main use case in education, 50% of the use for work in education, was for writing, which I think in some respects is not terribly surprising, because we see the internet full of AI slop everywhere, right? But it kind of is surprising, because large language models today generally are just not very good writers. They have a certain writing style, and that writing style is fine and totally legitimate, but it's the same writing style, and no matter how much you prompt it, it always wants to go back to the same way of phrasing things. So initially I saw that and I thought, my goodness, that is a scary thought, that educators...

>> Kind of a grim outcome, I guess. Yeah, just using this for writing.

>> But when you look closer, actually two-thirds of the things that are classified with this writing taxonomy are not creating new content, but are working on existing content. So editing or critiquing or translating. Two-thirds of the use were for those three categories.
Which makes, I think, a lot more sense, because to me that is where the power of this comes in: if you give it your own sample of writing, if I'm trying to explain something to a particular audience and I need to put it in terms they will understand, a large language model is pretty good at taking that and acting as a reviewer and as an editor. That's certainly how I use it, so I was comforted to see that a lot of people in education are doing the same thing.

>> Yeah, it's the same way I use it as well: largely as an editor and a critiquer. Though it's interesting that pure text generation is the really big thing. And I guess, I don't know, how do you feel about, well, I think Lauren put out a very interesting hypothesis, and I've used this example before on the show, but I'm going to use it again just because I love it. Basically, when they first invented the PC, you look at all the ads from the early days of the PC and they're like, "Oh, well, I don't know, you could maybe use it to store recipes, right?" And it took a while for us to eventually come up with the idea that there's a thing called the spreadsheet, and oh, okay, all right, then everything changes for the PC in terms of what you use it for. Do you think there's a similar thing here, where basically we shouldn't be surprised, because the chat interface might only be good for a couple of things, really, and so we're still waiting for almost different ways of interacting with this technology to start to see different results?

>> Yeah. And think about what some of those uses we might think would be, like coding, for example.
But this report showed something like 4% of messages are about coding, which is, you know, much less than you would think. And when you look at the benchmarks and so forth, they're very much coding focused. Other things that surprised me in this report: people talk about using this as a therapist. Well, something like less than 2% of messages were actually about relationships and personal reflection. And then games and role playing and that kind of thing, that was a tiny percentage, less than half a percent. So the use cases that people initially thought, let's use large language models for, are not necessarily playing out here in what people are actually doing with these models and these chatbots.

>> That's right. And I think, you know, Aaron, just to maybe turn and get your voice into this conversation. I think one of the greatest ironies that I've always thought has happened with LLMs is you have very left-brain technology generating very wordy, almost feelings-based text on the other side. And, you know, I had a similar reaction to Martin reading the paper, which was almost, have I just been living in a bubble for all this time? Because, you know, if you sampled the last 10 or 20 episodes, I think we talk about the codegen applications like every other week. It's constantly in our discussion. A lot of the high-profile use cases that people are excited about, you know, Claude Code and so on, that's what people want to talk about. But I think what we're finding here is that that's a total bubble. If you're interested in mass adoption of this technology, coding is not the thing.
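The percentages traded in this exchange (coding around 4%, relationships under 2%, games under half a percent) are shares of classified messages. As a toy illustration of that kind of tally, with made-up category counts rather than the NBER paper's actual data or classification pipeline:

```python
from collections import Counter

# Illustrative message labels only; the real study classifies millions of
# conversations with a task taxonomy. Counts here are chosen to echo the
# rough shares mentioned in the episode.
labels = (
    ["information gathering"] * 50
    + ["writing"] * 25
    + ["other"] * 18
    + ["coding"] * 4
    + ["relationships / personal reflection"] * 2
    + ["games / role play"] * 1
)

counts = Counter(labels)
total = sum(counts.values())

# Share of each category as a percentage of all classified messages.
shares = {cat: round(100 * n / total, 1) for cat, n in counts.items()}

for cat, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cat:<40s} {pct:>5.1f}%")
```

The point of the normalization is just that a category can dominate the public conversation (coding) while being a small slice of actual usage.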
So I guess, Aaron, maybe to bring that to a question, not just rant at you: do you think technology companies, foundation model providers, should be totally changing what they're focusing on? Because it sure seems like, you talk to Anthropic, they're spending a lot of time on this code application, but it accounts for such a tiny slice of what people are doing.

>> Yeah. Whenever I hear people's hypothesis and I read the paper, I ask myself this question, right? Is the human race officially on autopilot? Because first we use ChatGPT, you know, for help, but then we use it for everything.

>> Uh-huh.

>> And in this paper, you know, it's gone from niche to everyday, where it was a tool for the tech-savvy users, but now it's moving to consumer tech just like the internet and smartphones once did. And what really stood out to me was the number of people that are using these tools: about 10% of the world's population, which, depending on what source you look at, could be about 800 million people. But the early adopters were professionals. That's where it started. And that's flipped now, where 70% of usage is from what we call non-workers, right? And if you look in that paper at the worker profile of what they do, you know, we discussed some of that here already: worker profiles use it mostly for writing, for example, and computer programming. It's really about helping knowledge workers find information in these knowledge-intensive jobs. Whereas for the non-work profile, it's really where AI is being embedded into everyday life, where you want to create images, art, video, multimodal, to help with patterns of life where you can rewrite this content.
And what's neat, when you put both of those two groups together, seeking information, practical guidance, and writing, according to this paper, that's about 80% of all the tasks and topics that these models are used for, right? And I think in the future, where we're going, which is alluded to in this paper, is that these are not just assistants anymore but agents, like in sports, you know, where you would have an assistant that does your task, what you tell your assistant to do, whereas your agent is sort of constantly in the background working on your behalf. And, you know, we're seeing these solopreneurs pop up, where a singular individual has a lot of foresight with all these tools, right, and it gives these small teams access to expert-level tools. But the last point is, even though we have access, the access gap is closing, although it still seems as though in those countries that have the highest GDP, the people still have more accessibility to these tools. It still doesn't mean that there's an equal playing field about how to use these tools as an agent.

>> Yeah, definitely. I do want to get to that, because that'll definitely be part of the Anthropic paper, which we'll talk about in the second segment. Before I move on to that, though, I guess maybe a final question, maybe Lauren to you, about kind of business strategy here. And it's, I think, related to, Aaron, what you're saying about the future being agents, and the use cases looking pretty different as we get agents to come online.
It feels like, in the near term, if we don't figure out agents, does Google kind of end up winning this game? Like, you know, one of the things we've been tracking of late is, if you asked me a year ago I would have said, ah, Google's out of the game, they're so far behind, they haven't released anything good. But in pretty quick succession they really seem to be catching up very, very quickly. And I think this report made me think a little bit about, well, it turns out the majority use case is still search. Does that mean that the incumbent search company ultimately kind of triumphs in this game? Like, most people still go to Google for search, and so very naturally, if you have an AI product around search, the two will kind of go together, right? And so I guess, how much do you think this weighs in Google's favor in terms of winning these early innings of the AI game?

>> I mean, I really think the fact that search is the number one task, well, information gathering, they call it, is because there's still a long way to go in the imagination gap of what we could use generative AI for. It's not because that is the singular best application for it. So I really think that, from a business strategy perspective, investing in the actual work, I mean, this is really product strategy and product design work, to figure out what are the problems besides search that can actually be solved with gen AI, you know, doing the market research, doing the user research, prototyping solutions, that work has actually been surprisingly limited. And there's just now getting to be this wave of entrepreneurs who are taking that forward.
Like, there were 36, I think, AI unicorns this year alone. You know, unicorns have more than a billion-dollar valuation. That's crazy. You know, I don't know what the number was for last year or for 2023, but I'm sure it was not that.

>> Seems like a lot.

>> Yeah. So I really think that it's not that search will go forward as the most dominant use case. I mean, it might, but I think that that isn't where we should focus. We should really focus on all the other things that it could do, which might look more like a long tail, but if we invest in figuring out, using creative intelligence, what those things are, and then test them and eventually build and scale them, that makes more sense.

>> Yeah. To me, you know, search implies that it's human-driven, that a human has to go in and enter a keyword and search. Whereas I hope in the future data will find you, you know, where we all become magnets for the data, where we don't have to actively search, but we have these agents going off, already predicting what we want to see, searching for us, and then providing us, you know, what we're looking for. And hopefully Google, you know, will be on the forefront of some of that.

>> Yeah, it's a funny kind of future where you wake up and you're like, "Oh yeah, this is everything I wanted. I didn't even realize that this is what I wanted."

>> That sounds an awful lot like the social media feed we have today. And I'm not sure that it's necessarily delivering the things that maybe we should be getting, even though it thinks that that's the thing that's going to get the most engagement.
So yeah, it will be interesting to think of a world where we're not just trying to send you stuff to maximize the engagement of the stuff that we send you, but it's more personalized to the fact that we think there is some utility in you receiving this information and that you would benefit from it.

>> I'm going to move us on to our second topic, which I think, luckily, is pretty related to a lot of the things that we're touching on. So, you know, not to be outclassed, Anthropic also has released a major review of how people are using AI technologies. This is, I believe, the second edition of their Anthropic Economic Index. The basic intuition is that they have a lot of people using Claude now, and what they want to do is basically get a better sense of how people are adopting and using AI in the field, using the data that they have from operating a platform like Claude. And it's super interesting, and I think there's a lot to go through, but the main thing I want to focus on, which is new for this edition of the Anthropic Economic Index, is that they have started to expand their analyses beyond just the United States. And again, I think in the spirit of getting out of our bubble, like, not everybody uses AI for coding, I think it's also really useful for us to think about the international scene and how it's adopting AI.
And so, Aaron, I kind of want to go back and maybe kick off with the point that you had raised, which is, you were talking a little bit about the fact that what Anthropic finds is that there's this relationship between wealthier countries and adoption of Claude, and there's this very specific sort of income distribution difference in what they're seeing in the data. I guess, should we, Aaron, be worried about kind of an AI gap, where it actually just turns out that wealthy countries adopt this technology, they get all the benefits, and countries that are maybe relatively poorer don't adopt that technology and are left behind?

>> Yeah, I think it's important to find the signal in the noise, because I don't think it's all about, you know, wealthiness or GDP. What I really liked was the Anthropic AI usage index that they introduced, which looked at what they called usage density, and it's this normalized measure: when they adjust for, let's say, working age, the smaller, tech-advanced countries lead in usage per person, right? So, going back to some of Martin's point, it's about the utility, you know, what are people actually getting out of using these tools, right? Is it useful, is it actionable, right? And then, how much are they using it? And there certainly is a correlation, right, in the paper, that high income means more usage density, but there were some corners where, potentially, there are people who are maybe not in high-income countries but they're learning to work with AI, right? So they're eventually getting through it. What they called directive automation of tasks was one place where people are becoming much more
familiar with the tools, and a lot of this is working out because of these remote technologies. You know, you can go on the internet and take a remote class. You can even watch Mixture of Experts, right, to learn how all of this technology works, and then begin to go pick up a tool and try to use it, and they're very accessible, which is really nice as well, right? So, you know, businesses are tending to trust more of these tools, and that, I think, is beginning to spill over into individuals adopting this. But it doesn't mean AI adoption is uniform, because it's certainly not, and I do think that we need to be careful, right, about widening this AI gap.

>> Yeah, Martin, I guess on this education point, you know, it's so funny, because I think, when I used to work at an AI startup, the thing we always used to say is, well, it turns out that with LLMs, you can have a natural language conversation with them. And so this is like the most seamless, easy-to-adopt interface, because it's just conversation. And you don't have to learn to program it. You don't have to read some handbook to learn how to use the interface. You just talk. But it seems here, and I think what Aaron's kind of pointing at is, yeah, there actually is still this learning curve, even though it is conversation. And so has AI turned out to be harder to learn how to use effectively than we thought?

>> Yeah. When you see people selling 100-page prompting guides online, that makes you think maybe, maybe this...

>> Wait, are we just back to where we started?

>> Right. That's right. Suddenly we need a big manual just to be able to talk to this chatbot.
I mean, it's certainly been shown that prompting is still a big part of this: understanding intent massively affects the outcome of the model. I thought it was really interesting that they had lined up all of the countries that were part of this study, and they did correlate so closely to GDP: the higher the GDP, the higher the percentage of people in that country who were actually using the model and maybe getting more utility out of it. But whenever they do that, it makes me interested to see what doesn't correlate with that straight line. And there are a couple of countries like that. Singapore and Canada significantly over-indexed on that. So something like four times the number of people in Singapore were using, in this case actually Claude, than you would expect considering the GDP of that country. So it kind of begs the question, well, what sort of utility are these people getting out of it that other countries are not? And then, when you look at India: we mentioned earlier that there's so much talk about using these tools for codegen, and actually it wasn't a big part of ChatGPT usage. But in India, half of all use of Claude was for coding tasks. So you can see again that people in different countries are getting different utility out of these different chatbots.

>> Lauren, I think there's another way at this conversation that we've been having for the last few minutes, which is like, Tim, you're being a huge dummy, right?
It's kind of no surprise that rich countries adopt Claude more, because you can spend $200 a month on Claude, right? It's actually something you have to pay for, and some of the rates to get the better versions of the model are indeed extremely expensive. And I guess I want to ask you, as someone who works in open innovation and thinks a lot about open source and how all those pieces fit together: do you think this map looks very different for open source? Is there this huge dark matter where it's like, yeah, no surprise they don't use Claude because they don't want to pay for it; they're using open source alternatives that are free and much better? Is that one way of reading this data? Do you buy that?
>> I think if you look at developer populations in countries, I would guess that amongst developers only, GDP has much less of an impact, because if you are already trained as a developer, then you have access to all the tools in the world. You can get models, you can get inference engines, you can get evaluation frameworks, tuning frameworks, whatever you like. But I think the problem is that developers are a really small percentage of the population in some countries versus much bigger in others. And that's fundamentally an access issue and an economic issue. And I think this whole thing played out pretty much the same way with social media. Ten or fifteen years ago, we were having the same debate: social media is so widely adopted in higher-income countries, and there's such low adoption in lower-income countries.
The stakes were a lot lower, because social media has more entertainment value than productivity and labor value. But I think what's most important now is what happens next, because when that happened with social media, I was actually living in East Africa at the time, and Facebook made a very bold move, which was to create something called Internet.org and make these tools available completely for free, working together with the telcos and the internet providers. I could see how that was received on the ground, and it was a very mixed bag. There of course were people who were very excited, and even today in Kenya, if you run out of data on your cell phone plan, you can still access Facebook, because it's been negotiated to be free, and that's because the goal is to make sure there is access and adoption. But the other cohort says that essentially creates a gatekeeper mechanism, and the big problem with that particular project was that it was a selection of certain websites and tools, like Facebook, Wikipedia and a few others. In fact, in India, within a year of it being released, it was banned by the government. They actually said it's better to have no free internet if it's going to be curated by someone else with no sense of net neutrality, potentially creating information monopolies that the population is not deciding on. So I think this report helps bring awareness to the issue that right now, adoption and access are not equally distributed.
I think what happens next needs to account for how to do this with dignity for the populations it's meant to serve, not using charity as a cover for actually just creating market dominance in these emerging markets. I think that's the most important thing to keep in mind: now that this access gap has been identified, what do we actually do about it, and not repeat history?
>> Yeah. This paper left me with some hope right at the end, where I was reading it and interpreted it this way: geography isn't necessarily destiny, you know, but where you live does matter, and there are things you can do. If you do live in a low-usage region, you could learn remotely, you could create your own AI exposure, you could find niche industries that could adopt AI. There are different areas where you can help to close that gap. So it could be a grassroots movement to increase the density of usage in these geographies that look like there's just no hope, you know, but there is.
>> I'm going to move us on to our third topic of the day. An interesting paper, moving a little bit away from these kinds of economic studies to a paper that DeepMind released on agent economies. It's a fun, slightly speculative paper, though I think we can debate how speculative it is. The paper specifically looks at the idea that, look, in the future we're going to have all of these AI agents in the economy, and in many domains these AI agents may be interacting with one another.
What the paper does is simply point out that this will be weird and new, that we will need to figure out what to do with it, and indeed that it introduces a number of new risks. And this actually connects to something we talked about on the last episode. Just to recap for everybody here and also our listeners: we talked a little bit about the phenomenon, particularly in job hiring and recruiting, where people are starting to use LLMs to submit applications, and then we're starting to see HR teams and people teams use AI to try to filter through those applications. There's a little bit of an algorithmic war going on there, which ultimately hasn't been great for anyone. There's an Atlantic article literally entitled something to the effect of "the job market is hell." So, Martin, I'll kick it over to you: should we be worried about agent economies? It does feel like in many of the cases I could name, the minute we have automation on both sides of a market, things get a little bit out of control, and not always in the best way.
>> Yeah, we see this in education as well: somebody has written some sort of article to explain something in AI, then somebody has used a large language model to summarize it, and then they're using a large language model to create quiz questions based on the summary that was based on the LLM-generated article.
Yeah, we're just, yeah. It's such an interesting thought: how powerful a particular agent can be, and what happens if you take that thing that has so much utility and connect it to another thing that's just as powerful, another agent. How does that communication work? From the plumbing point of view, I've been looking recently at how you integrate two agents together, using things like the agent-to-agent protocol, the A2A protocol, which is an open source project that's now part of the Linux Foundation but originally came from Google, and just seeing how these agents can basically be wrapped so that they can talk to other agents and discover each other and so forth. The analogy I heard was, oh, it's kind of like making a Lego brick out of an agent. The number of times in my IT career that I've heard we are going to take something and wrap it, and it's going to be like a Lego brick we can plug into anything else: we've been there with SOA, with CORBA, with microservices.
Yeah, here we are again. That helps with discovery, it helps with communication and so forth. But I think this is a whole new scale. This is not a microservice that does one thing, that writes a field to a database. We're talking about agents communicating with other agents, asking them to do things, and then the amount of processing and complexity in that agent to perform that thing.
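The wrap-and-discover pattern described here can be sketched in a few lines of plain Python. This is a hedged illustration only: the real A2A protocol exchanges JSON "agent cards" over HTTP, and the names below (`AgentCard`, `Registry`, the lambda skill handlers) are invented for this sketch, not the actual A2A SDK.

```python
from dataclasses import dataclass

# Toy sketch of agent-to-agent wiring: each agent publishes a card
# advertising its skills, a registry lets agents discover each other,
# and requests are plain structured messages. Illustrative only; the
# real A2A protocol serves JSON agent cards over HTTP.

@dataclass
class AgentCard:
    name: str
    skills: list

class Agent:
    def __init__(self, name, skills, handler):
        self.card = AgentCard(name, skills)
        self.handler = handler  # callable that performs the skill

    def handle(self, skill, payload):
        # Refuse requests outside the advertised skill set.
        if skill not in self.card.skills:
            raise ValueError(f"{self.card.name} cannot do {skill}")
        return self.handler(skill, payload)

class Registry:
    """Toy discovery service: find an agent by an advertised skill."""
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def find(self, skill):
        for a in self.agents:
            if skill in a.card.skills:
                return a
        return None

# Two "wrapped" agents with stubbed logic standing in for real models.
registry = Registry()
registry.register(Agent("summarizer", ["summarize"],
                        lambda s, p: p["text"][:20] + "..."))
registry.register(Agent("translator", ["translate"],
                        lambda s, p: p["text"].upper()))

# A third party discovers the summarizer by skill and delegates a task.
agent = registry.find("summarize")
result = agent.handle("summarize",
                      {"text": "Agents discover and call each other via published cards."})
print(result)  # -> Agents discover and ...
```

The point of the Lego-brick analogy is visible in the shape of the code: the caller never sees the agent's internals, only its card and a uniform `handle` call, which is exactly where the "how do we know what it will actually do?" question below comes from.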
How do we know that the thing it's going to do is really what we were asking from the original agent? And as we go down the line, the meaning could get lost along the way. So yeah, it's really interesting to think about how this agent economy would work, what the first use cases for it would be, and all of the potential unintended consequences.
>> Yeah, for sure. Aaron, are you hopeful here? The paper ends on kind of a positive note, because I think the authors are trying to offer a way forward from a research standpoint, and they pitch this idea: look, we need to figure out how these economies can be made steerable, and if we can steer these markets in the right direction, we can make sure they behave properly. And I guess my skepticism on the paper is: one of the most arguably agentified markets is the financial markets, right? People do algorithmic trading all the time, and a huge amount of stock market volume is algorithmic trading. The stock market has proven to be a really hard thing to steer. We're certainly maybe better at it than we used to be, but as far as I understand, when the market gets into crisis mode, the best solution we have is to literally hit what is called a circuit breaker: we stop the market for a period of time and start it again. So do we feel steerable markets is a promising frame, or are we just going to do what we do with financial markets, which is turn it off and turn it back on again and hope the system keeps working properly?
>> Well, I'm waiting for agents to unionize, right?
And demand some sort of profit from this, and even demand nap breaks, right?
>> But with that, this reminds me of a field. Back in college, I was very interested in evolutionary computing, so I studied artificial life, which is really the study of how man-made systems can exhibit behaviors characteristic of living systems. Now these agents are becoming very similar to that, but the focus and the enabler around modern-day agents is AI, which is more of a top-down, logic-driven piece. And to your question about whether these agents will be able to solve the problems, for example, in the stock markets of tomorrow and of today: well, I think you don't have to be the most powerful agent, you just need to be the most needed agent in order to help solve some of these problems. And it comes down to whether we can set the right distribution of fairness, credit assignment, and the right kind of incentive for an agent to self-evolve so that it can better solve some of these problems. But you're still going to see, I think, traditional machine learning embedded into these agents along with gen AI, and there's going to be some intertwining between the two.
For example, LLMs can use the outputs of, let's say, decision trees, support vector machines and so on, but it also goes in the opposite direction: traditional machine learning can use the outputs of LLMs, and they work together. So I think that combination is going to help solve and create this scalable coordination amongst all of these different agents.
>> So ultimately I think that is promising. I could see that as a way of approaching some of these problems. Maybe, Lauren, I'll give you the last word here before we move on to our final topic for the day. Agent economies: are you optimistic? Is this something you're excited to see?
>> I think I'm optimistic about what I would see as the next phase, which would be agent companies. So we'd have companies of agents before we get to economies. And of course, I look to what's happening in open source communities to see if that's realistic. I'm generally the skeptical person on my team, but on this one there are two projects I've seen that were jaw-dropping. One is MetaGPT, which claims to be a software startup of agents. There are agents to do your product market research and competitive landscapes, agents to define the requirements to hand to the engineering team, and then the engineering team is a team of agents that does the codegen, and then it gets deployed. So it's an end-to-end software company. That's super cool. It's a super popular project worth checking out.
The other, which is mind-boggling to me, is AI Scientist, a team of agents that can do its own scientific experiment, come up with a publication, and could even try to get it published. AI Scientist, again, is a popular project; it has something like 10,000 stars. You give it a prompt, like, what's going to be a more efficient way to use LLMs, and it will come up with a hypothesis, design the experiment, create or get the data, and run the experiments. These are usually experiments about LLMs themselves, so it's a little meta. They actually got one of these AI-generated experiments plus papers accepted into ICLR. They worked with the organizing committee and told them, we're going to submit some AI-generated papers, to keep this ethical; they gave them the heads-up, and one of the three papers they submitted got accepted. So, agent economies: honestly, I can't quite wrap my mind around it, because first let's make agents work better, and then I think the next step from there would be agent companies that create the agent economies.
>> I'm just wondering: is this just a giant echo chamber? When it's coming up with a hypothesis, then coming up with its solution, and then peer reviewing itself, is it just going to say yes to everything?
>> Yeah, and I do like Aaron's hypothesis that we'll see a lot of other phenomena emerge here, where agents will try to unionize, or the AI scientists will argue over who gets credit on the paper, or the AI engineering team is always complaining about what the AI product team is putting together. I think we're about to get there.
That's the future of the AI economy.
>> AI bickering.
>> Yeah, AI bickering. You've seen AI cooperation; get ready for AI bickering.
>> All right. So I'm going to get us to our final topic of the day, and in some ways I want to tee this up as a tale of two wearables. There was a demo released a few weeks back from a startup by the name of AlterEgo. We'll put the link in the notes, I'm sure, and you should check it out online; you can search it up. It was a fascinating demo. Basically, it was just a guy sitting there, and he was able to do a bunch of stuff with AI, but there was no visible interface. There weren't glasses, there wasn't a wearable, there wasn't a pendant he had to wear; it was almost just a little behind-the-ear device, effectively. And it was really impressive as a demo. I think they caveated it in an appropriate way; they said, hey, this is still a prototype and this is what we're working on, but there's a company doing this right now. And it's almost like invisible AI: the idea that in the future you'll carry an AI companion around, but there won't really be a device. It'll just be a small, unobtrusive thing that uses computer vision, language models and generative AI to assist you throughout the day.
Split-screen that, I think, to yesterday: I believe Meta did a huge event where they demoed a bunch of Meta AI, and their big announcement was this, I believe, $700 to $800 wearable with Ray-Ban, which is the next generation of their glasses.
They had a bunch of flops in their live demo, but overall the reviews have been very, very positive that this might be the glasses that finally get the AI integration to work. And they showed off some cool stuff, like, in the future it'll display a translation, sort of captions, while you're talking to someone, so you could do live translation. You can pull up notes while you're talking to someone. So anyway, these are two really interesting visions of what the wearable AI future might look like. Maybe, Aaron, I'll kick it over to you: is one of these visions more compelling to you than the other? Do you believe you're going to have a transparent screen in your glasses, and that's how you're going to interact with AI? Or do you believe in this fully invisible version, where it's just an audio voice you interact with?
>> Yeah, I mean, there's no free lunch, you know. It always depends on what environment you're working in as to what solution works out best. And this reminds me of something: I worked in biometrics for quite some time, about 10 years, before doing what I'm currently doing. There are these biometrics called passthoughts, where if you could measure what you're thinking, then you could authenticate yourself and get access. This was back in 2005, and they would use EEGs to measure brain waves.
Whereas this is using what are called EMGs, looking at the neuromuscular facial and throat activity that's happening. It's trying to deduce and infer what you're going to say based on those activations. So you actually have to really think about it, of course, and then intrinsically your muscles have to respond without actually projecting sound. Whereas there are other non-invasive approaches where you don't have to do that. And if I look back at my biomedical engineering work, there are things like fMRIs, and there's transcranial magnetic stimulation, where you can turn off portions of the brain, and those are somewhat remote. So there's this whole spectrum of how you do ubiquitous computing: do you wear it, what kind of devices are those, and so on. And as always, I think it's going to be a combination. What I did like about the Ray-Ban is that it used a consumer device that people already use and need, sunglasses, and then attached some kind of tech around that. So if you can pick an object someone already looks at, that has an affordance, where you know what to do with it, and then you add in AI, because AI has become invisible, then I think it becomes very powerful. So the less physically intrusive, the better. And I think this AlterEgo is getting there, in a very interesting way. What I would like to see is more technical work and research published, because I did search, and I was only able to find a paper from 2018 about this work, but I was looking for more information.
How big is the library, the vocabulary, and so on. There's just a lot of questions I had.
>> Yeah, definitely. And Martin, this goes to something we were talking about a few minutes ago. Interestingly, I was joking a little bit about how ChatGPT-style chatbots were supposed to be the easiest interface, and it turns out there's actually a lot to learn. And a little bit of what Aaron is saying is that while theoretically it's better to just think, I'm sending a text message, and the AI does it for you, that's actually almost more difficult than glasses and a little thing that's kind of like a mouse in my hands, right?
>> Yeah. I was watching that demo, and it looked like these guys were really having to concentrate very hard to make this near-telepathic wearable actually send a message. And it's supposed to only pass intentional thought, but I'm thinking: is it really only going to pass the things I want it to? I really hope there's an approval button before it sends the message, because I'm answering your question now, Tim, and I'm thinking about how to answer it, but I'm also looking at Lauren and thinking, what's that picture frame behind her? I wonder what's in there.
>> What am I having for dinner?
>> I didn't want that included in the message.
>> So yeah, it was a very impressive piece of tech, but I always like to look at the disconnect between the engineering, which is, hey, let's see if we can create a wearable that uses some kind of brainwave analysis, versus what the marketing department has to do, which is create this video to sell the product. And you look at the use cases they had in the video. One of them is: I want to talk to my friend over here; my friend's in the room, but perhaps it's too noisy. So that is the use case they decided on: if it's too noisy to talk to your friend, you need a wearable that goes over your ear and uses telepathy. Could we not just use our phones and send a text message, or write it down on a piece of paper? And I think back to when the Apple Watch was released. The idea was, could you take all of the tech from a phone and basically put it into a tiny little watch, and then, great, if you can do that, how would we use it? One of the things from the Apple keynote when that came out was these heart tapbacks, where it would measure your pulse and send it to somebody else, because that used the heart rate monitor and the haptic engine that were in there. That is not a problem anybody was trying to solve, nobody's going to go out and buy a watch for that, and it very quickly disappeared. So it's interesting to look at the use cases now. The Meta Ray-Ban Display that was announced, I mean, that does look, as Aaron said, like something you might already be wearing anyway.
So it's not such a problem to have to bring it with you and put it on; you might have it regardless. But even then, when you look at the use cases, they had things like a person out about town who looked at a building and asked, what sort of architecture is this? And the AI gave them an answer. Well, yeah, that's kind of cool, but am I going to need to know every day what kind of architecture I'm looking at?
>> Four or five times a day,
>> Right? So finding the daily use for these technologies, I think, is the real thing. With the watch, it turned out the use was fitness tracking and notifications, and that's basically 90% of all uses. So it does make me wonder how some of these AI-powered devices, these things you wear that are basically screens, will actually end up being used.
>> Yeah. I thought there was a bit of a miss on the use case. One of the better use cases I could quickly think of is the ability to help people who have speech impediments or can't speak, right? Because I looked, and that's about 5 to 10% of the global population. That's as many people as use these gen AI tools: about 400 to 800 million people. That's a big market they would have already had.
And it shows a nice use case that's helpful, and it can really help create a sense of empathy between people and the product. I would like to see that come out of this: how can we help humanity, rather than this just being neat, interesting tech?
>> Lauren, I'll give you the final word of today's episode. Telepathy: would you pay for it? But more generally, I'd love your thoughts on this space. It feels like we're getting skepticism on both sides: regardless of whether it's telepathy or the glasses, what I've heard from Martin and Aaron is, well, we're still waiting on the good use case here. Do wearables have a future with AI for now, or is it still very speculative from your point of view?
>> I think these two have two very different purposes. I think the glasses are more about making a more convenient or usable interface: take the technology we have and just make it easier to bring up in the interface you use with it. What's interesting about AlterEgo is that it creates a new communication plane. We have vocalized language, we have body language, we have facial expressions; this sits in between, where you want to say something to someone but not let the whole room hear it. Do we really need a new communication plane? I feel like only in pretty select circumstances, like everyone's been saying. I do think it would sometimes be convenient if you're in an in-person meeting or at a party and you want to say, let's get out of here.
>> I want to get out of here. Let's go.
Or, you know, hey Aaron, would you be ready to talk about the XYZ case study in this meeting? And I don't want to ask that out loud because it's a distraction. So there are use cases. I'm not sure they'll be worth the cost of these technologies just yet. But like Aaron, I think the most compelling use case is this: it seems like when they were researching this at MIT, one of the main use cases they focused on was people with MS and other dystrophies, to actually help them communicate. That seems hands down truly invaluable, worth all of the research it takes. And then if it can be extended to other, more social conveniences, that would be cool. Sure.
>> That's a great note to end on. That's all the time we have for today. Lauren, Aaron, always great to have you on the show. And Martin, hope to have you back sometime. And thanks to all you listeners. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify and podcast platforms everywhere. And we'll see you next week on Mixture of Experts.
[Music]