Your Brain on ChatGPT
Key Points
- The differing driving styles of robotaxi companies (Zoox, Waymo, etc.) raise questions about how humans should be trained to respond to a heterogeneous autonomous‑vehicle ecosystem.
- “Mixture of Experts” introduces its weekly AI deep‑dive format, featuring guests Gabe Goodhart, Kaoutar El Maghraoui, and Ann Funai.
- A recent MIT study titled “Your Brain on ChatGPT” used brain‑scanning techniques to explore how large language models affect cognition.
- Panelists report mixed personal effects: LLMs make some tasks feel easier and boost confidence (e.g., coding), while others feel they diminish understanding and make writing feel “dumber.”
- The consensus is that large language models themselves are neutral; their impact on intelligence depends on how users choose to engage with them.
Sections
- Debating Human Baseline for Robotaxis - The hosts examine the difficulty of using a human driving baseline to train autonomous taxis amid varied company behaviors, then introduce the Mixture of Experts podcast episode that will explore the MIT paper “Your Brain on ChatGPT.”
- AI Assistance Reduces Brain Activity - A recent study found that when participants wrote essays without AI help, their neural connectivity and alpha/beta network engagement dropped, prompting debate about whether relying on AI parallels past fears that new technologies diminish human cognition.
- LLMs Boost Cognitive Engagement - The speaker explains how using large language models as coding assistants keeps their brain actively in the “hot zone,” accelerates exploration of unfamiliar problems, and creates a heightened sense of intelligence and engagement.
- Cognitive Atrophy from AI Automation - The speaker warns that, similar to how industrial machines reduced physical strength, reliance on AI tools may erode deep thinking unless they are used to augment rather than replace human cognition.
- Cynical Optimist on AI - The speaker shares a “cynical optimist” stance, viewing AI as a means to offload uninteresting tasks so they can dive deeper into personal passions, and then cues a forthcoming story from the San Francisco Chronicle.
- Balancing Perfection and Human Compatibility - The speaker argues that autonomous vehicle AI must intentionally forego strict algorithmic perfection to mimic human driving habits and social norms, creating a paradox where less‑perfect behavior can actually enhance safety and trustworthiness.
- Geo‑Specific Prompting for Autonomous Vehicles - The speaker proposes using dynamic system prompts, similar to chatbots’ zero‑shot adaptation, to tailor autonomous vehicle behavior to local driving cultures, thereby enhancing driver comfort and overcoming the “uncanny valley.”
- Adapting Autonomous Driving Models - The speaker discusses Waymo’s rapid ride‑volume growth, questions whether human driving should remain the training benchmark amid diverse proprietary robotaxi behaviors, and argues for on‑device, continuously‑updating models.
- High-Profile GenAI NBA Ad - The speakers discuss Kalshi's groundbreaking generative‑AI commercial aired during the NBA Finals, highlighting its rarity in premium brand advertising and its broader market implications.
- AI Ads Threaten Likeness Rights - The speakers warn that as generative AI proliferates in advertising, unintended use of real individuals’ faces could spark legal battles over likeness ownership and compensation.
- AI‑Generated Ads: Personalization vs Shared Culture - A participant asks whether generative AI will push advertising toward ultra‑targeted, individual experiences or preserve widely‑shared, culturally iconic ads like Super Bowl spots.
- Balancing AI Ads with Creativity - The speakers debate the flood of hyper‑personalized AI advertising, argue that human creativity is essential to keep ads effective, and use the moment to promote their new “Transformers” podcast.
Full Transcript
# Your Brain on ChatGPT

**Source:** [https://www.youtube.com/watch?v=brrJRoqjVY4](https://www.youtube.com/watch?v=brrJRoqjVY4)
**Duration:** 00:36:04

## Sections

- [00:00:00](https://www.youtube.com/watch?v=brrJRoqjVY4&t=0s) **Debating Human Baseline for Robotaxis** - The hosts examine the difficulty of using a human driving baseline to train autonomous taxis amid varied company behaviors, then introduce the Mixture of Experts podcast episode that will explore the MIT paper “Your Brain on ChatGPT.”
- [00:03:06](https://www.youtube.com/watch?v=brrJRoqjVY4&t=186s) **AI Assistance Reduces Brain Activity** - A recent study found that when participants wrote essays without AI help, their neural connectivity and alpha/beta network engagement dropped, prompting debate about whether relying on AI parallels past fears that new technologies diminish human cognition.
- [00:06:08](https://www.youtube.com/watch?v=brrJRoqjVY4&t=368s) **LLMs Boost Cognitive Engagement** - The speaker explains how using large language models as coding assistants keeps their brain actively in the “hot zone,” accelerates exploration of unfamiliar problems, and creates a heightened sense of intelligence and engagement.
- [00:09:17](https://www.youtube.com/watch?v=brrJRoqjVY4&t=557s) **Cognitive Atrophy from AI Automation** - The speaker warns that, similar to how industrial machines reduced physical strength, reliance on AI tools may erode deep thinking unless they are used to augment rather than replace human cognition.
- [00:12:21](https://www.youtube.com/watch?v=brrJRoqjVY4&t=741s) **Cynical Optimist on AI** - The speaker shares a “cynical optimist” stance, viewing AI as a means to offload uninteresting tasks so they can dive deeper into personal passions, and then cues a forthcoming story from the San Francisco Chronicle.
- [00:15:29](https://www.youtube.com/watch?v=brrJRoqjVY4&t=929s) **Balancing Perfection and Human Compatibility** - The speaker argues that autonomous vehicle AI must intentionally forego strict algorithmic perfection to mimic human driving habits and social norms, creating a paradox where less‑perfect behavior can actually enhance safety and trustworthiness.
- [00:18:37](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1117s) **Geo‑Specific Prompting for Autonomous Vehicles** - The speaker proposes using dynamic system prompts, similar to chatbots’ zero‑shot adaptation, to tailor autonomous vehicle behavior to local driving cultures, thereby enhancing driver comfort and overcoming the “uncanny valley.”
- [00:21:44](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1304s) **Adapting Autonomous Driving Models** - The speaker discusses Waymo’s rapid ride‑volume growth, questions whether human driving should remain the training benchmark amid diverse proprietary robotaxi behaviors, and argues for on‑device, continuously‑updating models.
- [00:24:51](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1491s) **High-Profile GenAI NBA Ad** - The speakers discuss Kalshi's groundbreaking generative‑AI commercial aired during the NBA Finals, highlighting its rarity in premium brand advertising and its broader market implications.
- [00:27:54](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1674s) **AI Ads Threaten Likeness Rights** - The speakers warn that as generative AI proliferates in advertising, unintended use of real individuals’ faces could spark legal battles over likeness ownership and compensation.
- [00:30:57](https://www.youtube.com/watch?v=brrJRoqjVY4&t=1857s) **AI‑Generated Ads: Personalization vs Shared Culture** - A participant asks whether generative AI will push advertising toward ultra‑targeted, individual experiences or preserve widely‑shared, culturally iconic ads like Super Bowl spots.
- [00:34:06](https://www.youtube.com/watch?v=brrJRoqjVY4&t=2046s) **Balancing AI Ads with Creativity** - The speakers debate the flood of hyper‑personalized AI advertising, argue that human creativity is essential to keep ads effective, and use the moment to promote their new “Transformers” podcast.

## Full Transcript
Is the human baseline for driving
what they should be trained on going forward?
Because, I mean, if the Robotaxi acts one way, Zoox acts
another way, and Waymo is acting a third way, I mean, are we like
they're expecting a human response from every other vehicle.
They have a known response from other vehicles in their network.
but then now you've got this whole other set of variables like how
how do you even train against that?
All that and more on today's Mixture of Experts.
I'm Tim Hwang, and welcome to Mixture of Experts.
Each week, MoE brings together the loveliest team of researchers, product
leaders, and deep thinkers to distill down and navigate the high-speed
and ever more complex landscape of artificial intelligence.
Today, I'm joined by Gabe Goodhart, Chief Architect, AI Open Innovation;
Kaoutar El Maghraoui, Principal Research Scientist and Manager
for Hybrid Cloud Platform.
And joining us for the very first time is Ann Funai, CIO
and VP for Business Platform Transformation.
We have an action packed episode today.
But first, let's talk about "Your Brain on ChatGPT".
So, I really want to cover
this really interesting paper that came out of MIT.
A number of researchers published a paper that's literally
called "Your Brain on ChatGPT".
And it's a pretty fun paper.
But first, I kind of want to start with around the horn question, which is simply,
do you feel smarter or dumber in the age of LLMs?
Gabe, maybe I'll start with you.
How do you feel about this? Sure.
If I'm doing something I already feel smart at, like, writing code.
I feel smarter. It's awesome.
If I'm doing something I feel really dumb at, like,
writing for other people to read.
I feel actually a lot dumber
because I don't actually comprehend what I'm getting.
That's a great answer.
Ann, what do you think?
Actually, I generally,
I almost feel like
it's a neutral, like it's a validation of maybe some of my insecurities.
So I know I'm terrible, like words are hard at that.
And the LLMs like, the AIs, like they write the emails.
I am terrible at writing.
I almost feel like, So I have this validation of things.
I'm like, not smart at, but also it frees up
brain space for the stuff that I am intrigued by.
And it yeah, that I do enjoy pursuing. So.
That's great. We have such a nuanced panel.
It's just like people are going to be, like, smarter, dumber.
But, Kaoutar. What do you think?
How do you feel about all this?
Yeah.
I don't think I feel anything here, but I think maybe the question here
isn't whether LLMs make us smarter or dumber, but whether we choose to engage
with them in ways that sharpen or soften our minds.
So it's like, really how you engage with these LLMs.
Well, we'll get into all of this in this discussion.
So let me just kind of set up the paper a little bit.
Would love kind of your all's responses.
So this is a fun paper.
They're basically using brain scanning technology.
And what they did is they kind of divided
their research participants into a couple of cohorts.
And they said, okay, we're going to have you all do
a series of tasks where you write an essay.
And then for the people who basically, like, used LLMs to do this, they have a cohort
they call LLM-to-brain, where they say, okay, and on this next task,
what you're going to have to do is write the essay just by hand,
with no AI assistance,
and kind of what they claim is, and I'll just quote them directly.
"LLM-to-brain participants showed weaker neural connectivity
and under-engagement of alpha and beta networks."
So, to put that kind of in more human-readable language, the idea is
their brains were actually less active while
they were accomplishing this task, shifting from an LLM-based
assistive kind of scenario to one where they had to do it all by themselves.
And so I guess, Ann, I know you're you're on the show for the first time,
maybe I'll turn it to you for your hot take is like,
how much do we take from this? Is like,
I mean, I know there are a lot of hot takes on the internet that were like,
"it's killing our minds", but do you read it that way?
So, you know, I. It's actually.
I'll just say I'm not surprised by the take, because I.
I would say, I think everyone I think the world is trying to figure out
how to use AI in the best and most advantageous way possible.
But what I actually it reminded me of is I almost feel like
it's like cycles of human-
computer evolution, like like
when tablets and phones
became ubiquitous, it's like, oh, "it's killing our mind.
I can look everything up instantaneously."
And I mean, if you even go way back, like if you think, you know,
historians with books, it's like, "it'll ruin the world for the next generation.
They're going to ruin it."
And like, you know, it's like, I mean, you can go back through like,
you know, you know, Renaissance authors like and reading that
and it's like I kind of almost put that in the context of like, yeah.
So yes, real science behind it with the brain activity.
But is this just another, like, "it'll
ruin the world, AI is going to make us all dumber."
And it's and at the end of the day, it is what we make of it.
Like we can take the
I don't know, maybe maybe this is too much of a Gen-X reference.
We can take the Idiocracy take. - I get that reference.
Yeah.
We're just going to get stupider and just let it become our brains.
But, I'm.
I'm stealing an analogy someone else used, like.
But if we become like Tony Stark with the Iron Man
suit on and let it, you know, be an amplification
of our brain power and an education tool.
That's goodness. Right?
That is I mean, that is real, real goodness
and actually should be pushing our brain power further.
I think if we use it properly.
Yeah. For sure.
Yeah. I was talking with a friend when I read this paper.
I was like, you imagine the first person to invent a book,
and they're basically like, oh well, now people won't have to memorize anything anymore.
You know, it's like so bad for us to have all these books.
It's funny.
We just, under the CFO, a leadership team of us
went through the IBM archives, and they were showing the original, like,
accounting books they have at the IBM archives.
And it's like, well, our, our, our accountants, that's a hard set of words to
say, are our accountants dumber
because they have spreadsheets now or technology?
And the answer is no.
I think it's detail, nuance.
You know, you can really dig into problems in a different way.
Yeah. For sure.
Gabe, I want to bring you into this conversation
because I think you had such an interesting response, to the kind of hot
take around-the-horn question, where, you know,
my hope was that people would be like, oh, I feel dumber, or I feel smarter.
But you were like, it depends on the task: for things
I'm good at, I feel more engaged,
and what you said was, for the things
I feel less good at, I feel differently.
How do you think that kind of applies to some of the results here?
Yeah. So.
Yeah, I definitely teased it in that.
But that was really my read of this.
Is that, the one thing you didn't mention in the intro
is they kind of did the inverse as well as the LLM to brain,
they did the brain to LLM,
and that the brain to LLM group actually showed really good engagement.
And I think the, the way I have found myself
using LLMs is primarily as a coding assistant,
but where I am completely in control of the code,
and what I use for them is to accelerate my ability
to explore an area that I don't have prepped and ready to go.
In that context, I am still very actively engaged in the act of creation,
and that's a brain space in which my intelligence is
moving faster. - The brain's like firing.
Yeah. Exactly, so if the LLM can,
like, remove the time
that my brain had to swap out and go figure out the right Google search,
that keeps my brain in the hot zone longer and better and it builds faster.
So in that case, I feel way smarter.
Where I feel like it makes me dumber is when I'm trying to get it
to replace something that I don't like to do,
and I'm not very good at doing to begin with.
So I occasionally write blog articles, and if I get in the right Zen,
I can actually sit down and write, you know, expository writing.
But it's not my sweet spot.
And so I could try to come up with a prompt, slam it, on an LLM,
get some text out and skim it more in consumer
mode and critic mode rather than creator mode.
My brain never hits that hot zone.
I never hit that place where I'm actually really thinking
and framing and coming up with the right connections for it.
And in that case,
the thing I get at the end, yes, it took me a fraction of the time
it would have taken me to get it in the first place, but I don't feel
the same sort of ownership and the same level of deep
engagement with what I just created, and I think in that context.
So I think that's one thing that
I found really interesting about this study was sort of that difference between,
these two different ways of stimulating, either you're already deep in
with your brain and you're using the LLM to boost it,
or you're just starting with the other LLM doing it for you,
and then you're trying to apply your brain to what the LLM already did.
And I think those are really different ways of using LLMs.
Yeah. I'd love to bring these two comments together.
And Kaoutar kind of bring you into this conversation.
You know, I'm old enough to remember,
like the discourse around like, graphing calculators.
Right.
And it was basically like I remember
the basically the, the teacher always being like, "well, it's important
to understand how you do, I don't know, like a graph, a function.
Before you, before you do it automatically on your calculator."
And I think, Gabe, what you're pointing out is exactly that, right,
it's basically the brain-to-LLM versus LLM-to-brain distinction.
And so I guess Kaoutar, I know you said like you kind of don't feel any way
about this, but, wondering like, you know, how new is this in some sense?
Like, do you think this is just like, now LLMs are kind of repeating,
I guess what we've kind of already gone through with stuff
like, say, like a graphing calculator or something like that.
Yeah.
Actually, I like to think of it also as, you know, a
parallel, like, kind of
mirroring the historical effects of industrial automation,
you know, as machines relieved humans of physical labor,
physical strength and endurance kind of declined,
for, for the majority of us.
And unless you really, you know, work really hard,
you know, those muscles and exercise and things like that, you know,
if you look at the majority of the people back then, most of us, you know,
most of the people were stronger because they had to do a lot of physical labor.
But now we rely more on cars
and on, you know, these machines to clean our houses, to do these things.
Our muscles evolved to be weaker.
And I worry a little bit, you know, are we, you know,
getting into these cognitive automation risks or a similar atrophy here.
Not for our muscles but for our minds.
You know, just as cars made us walk less, AI systems
could make us think less deeply.
So we're not just outsourcing tasks, we're externalizing cognition.
And I think that's why this paper is kind of a crucial wake-up call here
regarding the uncritical adoption
of these AI tools for complex cognitive tasks.
So I think it depends how you engage.
You know, when I said, you know,
how you engage with these tools, so are you going to really, you know,
over-rely on them for this deep thinking without really engaging your brain,
or do you want to use them, like Gabe mentioned,
you know, to augment you for tasks that you're really good at.
So I think it depends.
And I think here, you know, if you're looking
at the concept of this cognitive depth also that, you know, is mentioned here,
it's, you know, particularly compelling, suggesting, you know, this subtle
but profound long-term impact on how our brains function.
So I think for individuals, you know, especially in educational
professional settings, I find the takeaway isn't,
you know, to abandon AI, but to cultivate, you know, this cognitive resilience.
So meaning, you know, using AI strategically for brainstorming,
fact-checking, summarizing, boosting your performance, but consciously
engaging in, you know, deep thinking,
analysis, or original sentences ourselves.
So it's more about, you know, how do you treat these AI tools
to augment, not to replace our fundamental cognitive process.
So it's like, how do we find that critical balance?
Yeah. That's a great point.
And I think,
and maybe I'll kick it to you because we could go much longer
and I need to move on to our other stories.
But I mean, in responding to what Kaoutar is saying is,
is there a view here, like, I just to play skeptic for a moment,
it's like, well, it's all well and good to tell people that they need
to, you know, use their critical faculties with this technology.
But like, people are lazy, right?
Like we can't expect people to do that. Yeah.
And so like, I don't know I think like is it,
is it hoping against hope that people are going to kind of like
use this technology in a way that looks a little bit more like brain
to LLM versus LLM to brain, you know,
- to use the language of the paper. No, and I,
exactly, and my hope would be the
brain to LLM, you know the comment I made about.
You know, it's how we learn to use it.
My hope is that we shifted to that.
And I absolutely agree that humans are lazy.
You know, again, myself included. I use examples like, read my email,
like, I put in the words, make it usable.
But you know, like you, I, I joke that I'm an optimist,
but I'm a cynical optimist because I could see every way
it could go wrong before you actually get to the most optimistic outcome.
And I mean, where I would put my
my hope and optimism in this case is, you know, at the end of the day,
we're still human beings that have things that interest us and drive us.
So you know, I, I, I love
I will go read technology papers, I'll play with things, toys, whatever,
you know, and that's always going to be what drives me.
And an AI is actually going to help me go further and deeper, I think in that.
And, you know, it could be the same with someone who's a doctor, a lawyer, like,
I don't know, maybe retail shopping
changes and marketing like my hope
and optimism would be that it makes you lazy in the tasks
that don't drive the things that interest you. - Right.
It's. It's optimally lazy. Yeah. Yeah.
It's optimally, yeah. I love that!
That's perfect. It's optimally lazy. Yeah.
Yeah.
That's great.
Well, much more to talk about.
We'll be paying attention to this story.
I'm sure there's gonna be a lot more to come on this kind of research.
But I want to move us to our next topic.
Super interesting
story came out of the SF Chronicle.
It's kind of like the local metro paper in San Francisco.
And in San Francisco, I don't know if this is,
some of our listeners will be in cities where,
you know, these robot taxis are rolling out.
Autonomous driving is a thing where you can just call a robotaxi.
It will take you to your location.
And these are all run by, Waymo right now, which is,
part of the kind of Alphabet Google kind of network of companies.
And the article is really fascinating because it focuses on the idea that
now that they've seen such great success with the Waymos,
the Waymos are now actively driving
a little bit more 'aggressively.'
And one of the great examples they give is that now, you know,
the Waymo's will do this like little rolling start, you know, where
it's about to go through an intersection.
And, like, much like a human, would you kind of,
like, loosen up on the brake and it's kind of a signal
to the rest of the road, like I'm getting in here.
And I think this is, like, so fascinating.
And at least what the Waymo folks say in the article is that, like,
it turns out that having a robotaxi that's like a lot more brisk
and a lot more decisive and like, dare I say it kind of like a jerk
a little bit on the road actually makes things safer.
Which I think is just like such a funny sort of outcome.
And so, Kaoutar, I would love to bring you in on this is like,
how should we think about this?
Because normally, I think in the chat bot world, we've tended to make our AIs
like very, you know, like very catering.
But this is almost like a, an example, I think, where it turns out
that we're getting better results from having AIs that are, like,
much more assertive when they interact with humans.
And so, just curious, you have any hot takes about that?
Yeah. It's very interesting.
I found it really fascinating how Waymo is now prioritizing human-like
driving behavior to better integrate into real-world urban environments.
So I think what this is saying is safety
doesn't just mean rule following and being very strict with these rules.
It also means fitting into a human-centered system
and overly cautious, you know, automatic, you know, driving
machines or AVs can be disruptive, as, you know, they've seen on the roads.
So this shift kind of reflects this delicate trade-off
between, you know, algorithmic perfection but also social compatibility.
So it seems like we're entering this
"uncanny valley" of behaviors, cars that are smart enough
to mimic our bad habits.
And what does it say here when AI becomes
more trustworthy by being less perfect?
So it's a very interesting kind of paradox here.
We need less perfection here to really fit, you know, these social norms and,
and that kind of translates into safer systems because,
you know, these cars have to act in human environments.
So they kind of have to adapt to how we're being.
So and this is kind of also critical, you know, highlighting here
a critical challenge in designing these AI systems, especially for the future,
that operate in complex and unpredictable real-world environments.
So how human-like should they be?
And that's the key question here.
So and I think what Waymo's approach here is suggesting is
that really strict adherence to rules might not be the safest.
And that, that is very interesting.
Gabe. I, I lived in San Francisco for many years.
And then spent about, a few years in LA.
And I remember when I moved to LA, I was like, these drivers are unhinged.
Like, basically like the, the culture of driving is just, like,
aggressive in a way that is, like, not very familiar.
Having driven around San Francisco for like, you know, close to a decade.
I guess one of the interesting questions and building off of what Kaoutar just said
is that it kind of suggests that for some of these systems,
we're almost going to have to, like, localize them to cultural practices.
Like, is that the right way of thinking about it?
It's, like, very different from how we typically think about rolling out these systems,
I think.
Well, the analogy that immediately jumped to my mind reading this article
was the shift from pre-GenAI chatbots to GenAI chatbots.
If you think about how we built chatbots before Transformers,
we built them by crafting a very deep decision tree
and trying to figure out at every point in that tree
where the person was trying to go down that tree
and then taking them down the right, the right leg of it.
And if any of you, all of us, I'm sure, have walked through one of those trees
on the phone or on a chat with somebody's customer assistance.
And it's really clunky.
It's like I'm trying to reverse engineer the tree in my head.
I'm trying to figure out, okay, I know I'm on the wrong branch.
I want to move up a node and get back down this other one.
And it's maddening. Right.
And I think, a rules-based vehicle is very analogous.
Right.
It's really trying to make sure it's following exactly the right structured
path in its trajectory of possible
actions at every given point in time.
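[Editor's note: the decision-tree pattern Gabe describes can be sketched in a few lines. This is a hypothetical illustration, with all node names and prompts invented, not any real product's implementation.]

```python
# A minimal sketch of a pre-GenAI decision-tree chatbot: every user turn
# moves down one fixed branch, and there is no graceful way to back up a node.
TREE = {
    "root":     {"prompt": "Billing or tech support?", "next": {"billing": "billing", "tech": "tech"}},
    "billing":  {"prompt": "Refund or invoice?", "next": {"refund": "refund", "invoice": "invoice"}},
    "tech":     {"prompt": "Hardware or software?", "next": {"hardware": "hardware", "software": "software"}},
    "refund":   {"prompt": "Connecting you to refunds.", "next": {}},
    "invoice":  {"prompt": "Connecting you to invoicing.", "next": {}},
    "hardware": {"prompt": "Connecting you to hardware support.", "next": {}},
    "software": {"prompt": "Connecting you to software support.", "next": {}},
}

def walk(tree: dict, choices: list[str], start: str = "root") -> tuple[str, str]:
    """Follow the user's choices down the tree; an unrecognized input
    simply dead-ends at the current node (the maddening part)."""
    node = start
    for choice in choices:
        nxt = tree[node]["next"].get(choice)
        if nxt is None:
            break
        node = nxt
    return node, tree[node]["prompt"]

print(walk(TREE, ["billing", "refund"]))  # ('refund', 'Connecting you to refunds.')
```

A generative model replaces this fixed traversal with open-ended next-action selection, which is the contrast Gabe draws with rules-based vehicles.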
And I would love to unbox what Waymo is doing here,
because my guess, frankly, is that they're starting to apply
a much more free-form decision-making space, akin to generate me
the next token, generate me the next thing that needs to happen.
So I wouldn't be surprised if they've got a reinforcement
learning transformer sitting on top of whatever their rule system is.
Now, that's
actually got a much wider space of possible next actions,
and they're generating the stuff on the fly.
So to me,
when you say localize to different geos, that's a different system prompt, right?
Like it's basically, I need to basically,
you know, zero-shot learn my car
with a whole bunch of examples of the crazy LA drivers.
Right?
So in some ways too, if we start applying this more flexible way
of adapting the behavior to the environment, it may actually,
you know, just as the article suggested,
make the vehicle fit in a whole lot better.
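[Editor's note: Gabe's "localization is a different system prompt" idea could be sketched as below. All region names and prompt text are invented for illustration; this is speculation about the approach, not how Waymo actually works.]

```python
# Hypothetical: one base driving policy, conditioned on a per-region style
# prompt, analogous to swapping a chatbot's system prompt per deployment.
GEO_STYLE_PROMPTS = {
    "san_francisco": "Yield readily; expect dense pedestrian and cyclist traffic.",
    "los_angeles": "Commit to merges early and drive assertively; neighbors are aggressive.",
}

DEFAULT_STYLE = "Drive conservatively and follow posted rules strictly."

def build_policy_prompt(region: str) -> str:
    """Compose the base policy instruction with a region-specific driving style."""
    style = GEO_STYLE_PROMPTS.get(region, DEFAULT_STYLE)
    return f"You are an autonomous driving policy. Local driving style: {style}"

print(build_policy_prompt("los_angeles"))
```

The design appeal is that the base model stays fixed while the behavioral conditioning varies per geography, which is the "zero-shot" adaptation Gabe is gesturing at.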
And I think, honestly, that's one of the things going back
to that analogy to chatbots that really powered the AI explosion
is that all of a sudden, the consumption experience
jumped over that "uncanny valley" that you mentioned, Kaoutar,
that said, you know, now I feel like I'm talking to a real entity on the other end.
I'm not reverse engineering in my head.
I feel comfortable in this space in exactly the same way that drivers
would now feel comfortable in a space where they're mixed with
other humans and autonomous vehicles,
because those autonomous vehicles fit their mental perception.
Yeah.
I love the idea that somewhere hidden in, like, Waymo's
cloud, there's a prompt that's like you're a San Francisco driver,
you know, 25 to 35, and, like, you live in the Mission District
in San Francisco. And I think, do you want to take us on, like,
a little bit of a journey into, like, where this all goes in some sense?
Because I think, like,
what's really fun about this is like, this is a multi-round game now, right?
Imagine a future where there's multiple car companies operating
autonomous vehicles and like one of them is like I can actually get my consumer
location faster if my car is like a little bit of a jerk.
And so Ann, I'm really interested in kind of thinking about
like how this evolves, but I'll just kind of give you that prompt.
I'm curious about how you think about that. Yeah.
And it's actually
It is actually funny, this was not planned at all, we can attest that; that's actually exactly where my head went to
Great
And as an aside, by the way, I'm in Austin, Texas,
and the San Francisco LA analogy is Austin and Houston, like,
Houston is, like, Houston is a whole other, whole other game.
There are entire social media feeds associated with the ridiculous
Houston driving.
But it is funny.
You, you know, targeted me that way because what we have going in
Austin is interesting.
We have Waymo, we have Zoox.
And we now as of this week, have the Tesla robotaxis.
So, I read, the irony is, right before
we, you know, got this article to look at to discuss,
I was coming back from a trip to the airport.
My partner and I are in the car and he's like, we're both like commenting and
looking "that Waymo is driving like a maniac."
Like it was like it was actually going above the speed limit.
It was doing a little bit of zoom. - Oh, no.
And it was actually a little bit
like "Uh, maybe I won't drive" - Yeah.
Maybe, it's not there yet...
And, you know, he's a tech person too.
So it kind of led us into this
weird conversation of like, okay, where is this going?
What's doing that?
How is that learning?
And then saw this article and I thought like, gosh, what happens?
Because as there are more of the autonomous vehicles on the road,
they are trained on human behavior, not the behavior of each other.
And, there was actually another piece I had seen right around the same time,
I think it was, a New York Times article talking about how the like,
the evolution of Waymo,
that in the first six months of 2025, they've already done double the rides
they did in all of 2024, and I think 2024 was five
x 2023, which is obviously due in part to expansion.
But what it really got me thinking is:
Is the human baseline for driving what they should be trained on going forward?
Because, I mean, if the robotaxi acts one way, Zoox acts
another way, and Waymo is acting a third way, I mean,
they're expecting a human response from every other vehicle.
They have a known response from other vehicles in their network if you will,
Right.
but then now you've got this whole other set of variables
like, how do you even train against that?
Because let's be honest, they're all going to have proprietary systems.
They're all going to learn; they're all going to have
a proprietary way of doing it.
So I actually think in five years,
or maybe less than five years, I guess, looking at how fast
it's doubling. Yeah.
I think it's going to be very, very important to adapt.
You know, like, how do we have these trained
models adapt, you know, on the fly?
And I see this as maybe we need more tiny models,
or more capable models, you know, local on device,
that can make decisions and, you know, retrain and
fine-tune things on the fly, in real time,
especially since, you know, the driving will change depending on
where you are. Like you said, in San Francisco versus, say, Morocco,
the driving is way different, much more aggressive.
You know, yeah.
You would win "aggressive" in Morocco, for sure.
Oh my God, you know, I can't drive there myself.
So I can imagine, like, a car trained, you know, here in the US;
you know, putting that in Morocco, it needs to adapt completely,
you know, to different, much more aggressive behavior.
So I think we need more of that going forward.
You know.
So that we don't just rely on these statically trained models,
these models have to adapt constantly.
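The idea of models that keep adapting rather than staying statically trained can be sketched in miniature. This is a toy illustration only, not anything from a real autonomous-vehicle stack: a tiny linear model keeps nudging its single weight as new observations stream in, so it drifts toward the local driving norm instead of freezing at its original training.

```python
# Toy sketch of on-the-fly adaptation: a one-weight linear model that
# keeps updating from streaming observations instead of staying static.
# All names and numbers are illustrative assumptions, not a real AV system.

def online_update(weight, observations, lr=0.1):
    """Stream (x, y) pairs and take one gradient step per observation
    on the squared error of the prediction y_hat = weight * x."""
    for x, y in observations:
        pred = weight * x
        # Gradient of (pred - y)^2 with respect to weight is 2*(pred - y)*x.
        weight -= lr * 2 * (pred - y) * x
    return weight

# A model "trained" for one norm (weight = 1.0) adapts toward a more
# aggressive local norm (data consistent with weight = 2.0) as it streams in.
stream = [(1.0, 2.0)] * 50
adapted = online_update(1.0, stream)  # converges close to 2.0
```

The same shape of loop, with a real model and real sensor data, is what "retrain and fine-tune on the fly" would mean in practice.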
I could even see,
what could be interesting is that a lot of open source consortiums
have started because of similar problems.
Like, this is,
you want to have your proprietary piece as a company,
but you recognize there's an area where you have to have common
understanding, common knowledge, common engagement, like maybe it is
okay, we admit we're going to go use an open source piece,
and that is how we all train in the same way
so we're not all crashing into each other, but then they put
proprietary pieces on top of that for their business model.
Yeah.
I think the handshake will be very interesting, because, yeah,
Think about different brands of autonomous vehicle.
You know, it's like, your car's computer vision model is like,
oh, but that's a Tesla robotaxi, you know; like, we've got to,
you know, navigate around it in a way that's different from a Waymo.
And I feel like the easier way is if there's just some technical handshake
that says, hey, you know, I'm just signaling to everybody on the road
that I'm from this company and have these attributes.
So that'll be very, very interesting to see.
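The "technical handshake" being described here could be as simple as a small self-describing broadcast. This is a hypothetical sketch, not any real vehicle-to-vehicle standard; the field names, company name, and attributes are all assumptions for illustration.

```python
# Hypothetical sketch of a road handshake: each vehicle broadcasts a small
# message identifying its operator and driving profile, so other vehicles
# know what behavior to expect. Fields and values are made up for this example.
import json

def make_handshake(company, vehicle_id, attributes):
    """Serialize a broadcast message as JSON."""
    return json.dumps({
        "company": company,
        "vehicle_id": vehicle_id,
        # e.g. {"style": "cautious", "max_speed_mph": 35}
        "attributes": attributes,
    })

def read_handshake(message):
    """Parse a received broadcast back into a dict."""
    return json.loads(message)

msg = make_handshake("ExampleAV", "veh-001", {"style": "cautious"})
info = read_handshake(msg)  # info["company"] == "ExampleAV"
```

In practice a real protocol would need authentication and a shared schema, which is exactly where the open source consortium idea from the conversation would come in.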
Well, great.
I'm going to move us on to our next topic.
I am, by admission, not really a sports guy.
But I was roped into watching the NBA finals, which were great.
I think I'm now a basketball guy.
And, I caught this really interesting ad that,
it turns out, was, like, widely talked about in the ad industry.
There's a prediction market company called Kalshi.
And they did this completely surreal,
mind-bending ad that played during game three of the NBA finals.
A lot of crazy scenes.
And I remember looking at it being like, this really looks like GenAI.
And lo and behold, it
came out later that, of course, this is like a GenAI ad, and,
I think it's one of the most high profile end-to-end
GenAI ads that we've really kind of seen happen in the media.
And I wanted to bring it up because in the past,
I think we've talked often about generative
AI for ads as kind of something that we see for, like,
you know, kind of more bargain-bin ad inventory, right?
Like the kinds of things you encounter online.
But this is high prestige,
often what marketing people call brand advertising; this is, like,
you know, an ad that you'd see in the New York Times, right?
It's like a little bit like that.
And so, I guess, Gabe, I'll bring you in.
I'm kind of curious about, like, how we should sort of read this; that, like,
basically, the use of these technologies
is so good now that, you know, a big company like Kalshi will say,
we're going to spend a huge amount of money and then, like,
use this technology to generate an ad for this really high-profile event.
It's a signal of some kind, I think. Right.
Yeah.
I mean, I think I have three different reactions to it.
Sure. We need all the takes. Bring them.
One on the technical front,
Sure.
one on the consumer front, and then one on the skeptic front.
So, on the technical front,
one thing that I thought was really compelling was that
I watched the ad and
there were very few of the sort of GenAI
blemishes you might expect now.
They did a good job of making it a fast-paced ad,
so your eye isn't going to pick up on the fact that one random person
in the crowd has six fingers or something like that.
But, you know, they did a really good job;
the actual quality of what was generated was really good.
And I think, you know, that merged with some clever expertise of how
to cut the ad together, you know, really produced a good looking ad.
It didn't smack of, you know,
some duct tape here to hide all the gorp.
Right.
From a consumer standpoint, you know, like,
it's a good ad, and if it, you know, lowered the denominator
on the cost of creating it for the company making it, cool.
That sounds like, you know, a good optimization
for, you know, the industry, for the world.
I think my biggest take, though, is on the
skeptic-and-worrier front, which is,
who were those humans in the video?
Now, obviously they were not recorded humans, but
we all know that GenAI models are based on a whole boatload of training data.
So as this becomes more ubiquitous,
what are the odds that somebody's face,
somebody who did not give their permission to be in an advertisement for
Company X, shows up on screen,
with absolutely no way of validating whether that's happening or not?
Right.
It's a huge gulf between whatever training data went into the model
and the actual faces and images and body
representations and all of that, that pops out on the screen.
And, you know, right now it's a needle in a haystack, right?
Like, do you think anyone in the background
scenes of any of those ads is going to happen to be a person who watches it and says,
"hey, wait a minute, that's me.
I'm going to sue your pants off?" No.
But as the number of ads that are created with GenAI balloons,
it's going to happen, right?
The odds are going to shake out that somebody is going to suddenly realize
that their face is popping up in ads that they have nothing to do with,
and they're getting no compensation for.
So it's a different but related element to
the copyright issues around authors' books,
you know, snippets popping up if they're sufficiently popular.
It's, I think, going to go down that same rabbit hole of ownership
of likeness, ownership of content, where the content in this case
is your actual, you know, persona in visual space.
Yeah. Ann, maybe I'll turn to you.
Like, I think, Gabe, you're raising a really good point.
And I think one of the things
I really want to investigate is, like how mainstream this becomes, right?
Like, how much of this is kind of a one off novelty?
Everybody's, like, surprised that I can do this, but, like,
I have a friend who's in the ad industry who's like,
I just don't think it's a very good ad, you know?
But then I think on top of that, like you layer on
everything that you're talking about, which is, well, there's also
all these other risks that come with using this technology.
Do people want to take on that risk when they do these kind of ads?
I guess, Ann, to kind of, like, put it in sharper terms.
It's like, you know, in 3 or 4 years, do we feel like every ad for game
three of the NBA finals is going to be AI generated?
Like how how far do you think this is going to go?
Yeah, no, well,
before I answer, plus one to everything Gabe said. I mean, there's
so many things that can go in any direction there.
You know, what I kind of went back to when looking through that article was,
you know, at the end of the day, marketing is still a data-driven exercise, right?
Like, to be a marketer, it's data-driven, and
it's not so much about,
are we going to have more AI ads, but what are the outcomes
businesses are trying to drive through an advertisement?
Right.
And is it just awareness? Like, you feel like,
hey, people haven't been paying attention to us, or,
you know, our awareness is going down, our revenues are dropping.
We need to do something flashy that just gets our name out there
and gets people looking at us again.
Or, like, are we trying to, you know, sell a specific product?
But again, it goes back to, what is your goal with the ad,
and what outcome are you trying to drive?
So I would say that's where there's a little, you know, TBD on there.
But in the same vein, let's go back to maybe that first conversation
about brain-to-LLM and LLM-to-brain: you may have a clear outcome
you're trying to drive, a clear vision of what you're trying to do.
But the AI may be able to create the advertisement faster
and better than a human could, which is, you know, I would say that's the brain to
LLM versus the LLM to brain diminishment we were talking about before.
So again, I would lean on,
you know, marketing is always going to be outcome driven.
It's going to be a flashy thing.
But I think, again, the direction of the flashy thing
used for the right purpose, I think could get really interesting.
Yeah, I think that's right.
Kaoutar, I think, one final bit.
I think you'd be well positioned to talk a little bit about is, you know,
I think in the past when I've heard this discussion
about AI-generated ads, it's been very much like, "oh, in the future,
everybody is going to have their own custom ad", right?
We can use generative AI to basically create, like your favorite movie star
telling you you should use Kalshi or whatever as a service.
This is kind of an interesting place where actually generative AI is being used
for everybody to see the same ad. And I'm curious,
if you want to kind of talk a little bit about that: like, do you think, you know,
it's more likely that people will want the ultra targeted stuff,
which is a little bit building on the theme
that Ann was talking a little bit about, or, you know, is there something
really fundamental to advertising, which is no matter how it's created,
we still kind of want it to be like a like a shared culture in some ways.
Like I think about those Super Bowl ads that like, you know,
became kind of cultural movements in their own right.
It sounds like,
you know, maybe this is actually kind
of even preserved in a world of generative AI.
Yeah, that's a very interesting point.
Of course, personalization, I think, is an important aspect here.
Some people would like that.
Some people don't, because they want the shared
kind of advertising, to see what everybody's seeing.
So it'll be interesting to see.
And I think in the world of generative AI, it's really possible
because there is so much data that they're collecting on each one of us.
So, you know, if they can generate, you know, this generic ad, they
might as well generate these personalized ads based on, you know,
your historical preferences and data, what you've purchased,
and things like that.
So I think we will see both of those.
And it's just interesting, you know,
when I was looking at this new ad, to look at the statistics behind it.
They had something like 300 to 400
generated results and around 15 usable clips.
You know, the cost was $2,000, which is about
95% cheaper than traditional production.
And it took 2 to 4 days
using one creator for the full ad,
with an estimate of like 18 million views,
which is really huge, you know, in about 48 hours. So.
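Taking the figures quoted above at face value, a quick back-of-envelope check shows what they imply; the numbers below are just the ones from the conversation, not independently verified.

```python
# Back-of-envelope check on the quoted Kalshi ad statistics.
# If $2,000 is roughly 95% cheaper than traditional production,
# the implied traditional cost is 2000 / (1 - 0.95) = $40,000.
genai_cost = 2_000
savings = 0.95
implied_traditional_cost = genai_cost / (1 - savings)  # 40000.0

# Yield of usable clips from generated candidates, taking the
# "~400 generations, ~15 usable clips" figures at face value.
usable_rate = 15 / 400  # 0.0375, i.e. about a 4% hit rate
```

The interesting implication is the low hit rate: the cost win comes from generation being cheap enough that throwing away 96% of the output is fine.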
So what is this telling us?
You know, of course, more of these marketers
and companies will use these tools to create these ads.
But what's the implications of this?
I think if we really think about this more deeply,
AI here isn't replacing the creatives;
it's fragmenting the creative task stack.
And, so the bottleneck is no longer in the production side,
but more in the ideation and the originality of the ad.
So, I mean, yes, we can generate all of these things, but,
you know, like Gabe mentioned, maybe faces, you know,
will randomly be picked up
that could be any one of us.
And then there's the question of how creative these ads are.
So I think what Kalshi here is highlighting is both the promise
and also the peril.
So, democratizing content creation, you know,
here at an industrial speed, but also the risk
of having this homogenized, hyper-targeted media.
And we could soon be flooded with, you know,
highly personalized ads, but are they going to move us?
Are we going to find them creative?
Or, like, the wow factor,
is it going to be there?
So that is the key question here.
Can generative AI do that?
Or maybe there are, you know, some additional things
that we have to bring to the table with human creativity
that are really going to make it or break it for the viewers.
Yeah. I hope they get it right.
I mean, I think otherwise, it's a pretty dark future of, like,
just being flooded with, like, very slop ads that you just don't like. So.
Yeah, exactly.
Well, that's all the time that we have for today.
I want to end with two special notes.
Ann, I know this is your first time on the show.
If people want to find you, keep up with your work.
Where should they go?
So, funnily enough,
if you enjoy podcasts,
we started a podcast called Transformers.
And, you know, our goal really is to show people across industries,
spanning technical to non-technical roles,
really what it takes to transform,
you know, a company, a business; you know, open source, closed source,
fintech, big tech.
So, you know, come find me over there.
We have a lot of fun
with a lot of really interesting guests from a lot of really fun places.
And hopefully the conversation is entertaining.
Yeah. For sure. It's really good.
You should subscribe, listeners.
And then finally, I want to take a personal moment to thank our producers,
Hans Buetow, Mike Rugnetta and Michael Simonelli.
They've basically been fearlessly working behind the scenes
essentially ever since MoE got started, like, a year ago,
and we owe a huge amount of the success of this show to them.
This is their last show that they are working on with us here at MoE,
so we will miss you guys.
Thanks for all you listeners.
If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify,
and podcast platforms everywhere, and we will see you next week on Mixture
of Experts.