AI Agents, Study Mode, and History
Key Points
- The panel floated the idea of “Aeneas,” an AI system that would rummage through historical records to surface surprising parallels, suggesting that cheaper compute could trigger a rapid expansion of accessible knowledge.
- Recent announcements like ChatGPT’s “study mode” aim to make AI a learning partner rather than a shortcut, responding to fears that reliance on generative tools dulls mental effort.
- The show’s experts debated a range of AI‑driven topics—including autonomous agents, using AI to explore ancient history, and the latest findings on the financial impact of data breaches.
- When asked about personal study habits, the guests highlighted diverse methods—from straight reading to active‑recall techniques—underscoring the ongoing relevance of human learning strategies in an AI‑rich world.
Sections
- [00:00:00](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=0s) AI, History, and Study Mode - In this episode intro, host Tim Hwang outlines a discussion with AI experts about a proposed “Aeneas” system for scanning historical records, the impact of lower costs on a knowledge explosion, AI agents, applications to ancient history, data‑breach expenses, and the newly released ChatGPT study mode.
- [00:03:08](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=188s) Expanding LLM Roles Beyond Answers - The discussion highlights how new “study” or learning modes let LLMs serve as tutors, editors, or creative partners, countering the cynical view that people only want quick answers.
- [00:07:48](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=468s) Balancing AI Agency and Human Control - The speaker emphasizes limiting AI autonomy while adapting it to human needs, referencing a recent article on WhatsApp‑based AI use in rural Colombian schools where reading levels have declined.
- [00:11:38](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=698s) Integrating AI into Youth Learning - The speaker advocates age‑specific AI tools, balanced screen time, and proactive teaching of ethical, interactive prompting skills, noting a broader shift toward tool‑based problem solving in education and hiring.
- [00:14:46](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=886s) Adaptive Interfaces Over Fixed UX - The discussion highlights how AI-powered agents can transform static, designer‑driven user flows into intelligent, personalized interfaces that remember user behavior and adapt in real time.
- [00:18:05](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=1085s) Personalized Interfaces: Excitement and Anxiety - The speakers discuss how future, highly customized digital environments could boost convenience and agent-driven assistance, yet also risk user confusion and privacy concerns due to unfamiliar layouts and excessive data knowledge.
- [00:22:29](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=1349s) Agentic Tooling and Content Concentration - The speaker compares the promise of highly customizable, agent‑driven interfaces to the early‑2000s “long‑tail” blog hype, questioning whether the result will be a proliferation of niche experiences or a continued concentration of attention on a few dominant creators, while noting associated risks and the need for balance.
- [00:25:47](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=1547s) From UI Patterns to Neuromorphic Interfaces - The speakers explore how foundational design patterns are emerging for AI interfaces, highlight the potential of neuromorphic brain‑computer and biofeedback technologies for accessibility, and suggest a forthcoming shift from human‑centered to group‑centered AI research.
- [00:30:06](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=1806s) AI for Cultural Preservation - The speaker argues that AI should be used to recover and connect lost human thought, positioning it as a tool for digital archaeology and equitable humanities research rather than profit‑driven productivity.
- [00:34:12](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=2052s) AI's Cognitive Dividend Across Disciplines - The speakers argue that large language models deliver a productivity boost—particularly evident in coding—allowing underfunded fields such as archaeology and ancient history to achieve outsized progress despite limited resources.
- [00:37:18](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=2238s) AI-Driven Virtual Unrolling of Scrolls - The speakers discuss how AI and high‑resolution 3D scanning overcome labor and physical constraints to digitally unroll fragile carbonized scrolls, turning a research bottleneck into new opportunities.
- [00:40:49](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=2449s) AI Governance Gaps Accelerate Risks - The speakers highlight that despite the intellectual‑property value of AI models and data, 63% of organizations lack adequate AI governance, exposing basic security hygiene failures and turning traditionally months‑long breaches into rapid, AI‑enabled threats.
- [00:44:08](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=2648s) AI-Driven Security Cost Decline - The speaker emphasizes that growing AI use in cybersecurity is maturing, delivering substantial breach‑cost savings and freeing analysts to focus on prevention, indicating an optimistic trend toward lower data‑breach impacts.
- [00:47:13](https://www.youtube.com/watch?v=EB3rWZPz0gk&t=2833s) Starting AI Governance with Data Lineage - The speaker advises newcomers to begin AI governance by examining data—especially tracing the lineage and access controls of unstructured data stored in vector databases—to ensure clear accountability and inform model behavior.
Full Transcript
Source: https://www.youtube.com/watch?v=EB3rWZPz0gk
Duration: 00:48:47
What if we created a system that they
call Aeneas to go and scan the historical
record to surface interesting parallels
for people to look into?
>> If the cost goes down, I think we will
see an explosion of knowledge.
>> It wasn't trying to do the same job that
the uh that the human would have been
doing.
>> Some of the things that I know the study
showed they were physically impossible
to do.
>> All that and more on today's mixture of
experts.
[Music]
I'm Tim Hwang and welcome to Mixture of
Experts. Each week, MoE brings together
a group of brilliant minds to explain,
distill, and hottake our way through the
truly bewildering wave of news each week
in artificial intelligence. Today, I'm
joined by a stellar veteran crew. We've
got Kush Varshney, IBM Fellow, AI
governance; Volkmar Uhlig, VP, AI
infrastructure portfolio lead; and
Kaoutar El Maghraoui, principal research
scientist and manager for the hybrid
cloud platform. We have a packed episode
today. We're going to talk about agents,
of course. We're going to talk about
using AI for ancient history. And we
have a special segment with Suja Viswesan
on the latest cost of a data breach
report. But first, I want to talk about
study mode.
All right. So, just to quickly introduce
this topic, uh ChatGPT announced a new
feature uh just this past week uh called
ChatGPT study mode. And basically what
it is is a feature where you click it
and it's sort of an interactive learning
experience. It asks questions. It sort
of challenges you. And I wanted to bring
it up because uh if you recall,
listeners of the show will recall a few
months ago I think we covered this
report out of the MIT Media Lab that
scanned people's brains. And it kind of
took over at least my social media
because it was all about how you know
ChatGPT is making us stupid, right? Like
if you if you literally use AI to write
essays, your brain is firing less. And I
think this is sort of a really funny and
interesting story in part because uh
this is like an explicit attempt by
ChatGPT to not have that happen. But I just
first want to do maybe our quick round
the horn question like usual and that
round the horn question is I'm curious
about your favorite study method. Do you
use flashcards, outlines, practice
tests? Maybe Kush, I'll start with you
if you've got a preferred study method.
>> Yeah. Um I just read the book. Um I
think that's the uh the easiest.
>> That's great. Uh, Kaoutar, what's what's
your method?
>> Uh, I usually use active recall. You
know, I read and try to close and try to
repeat to myself.
>> That's a good one. I usually use that
one as well. And Volkmar, how about you?
>> I'm on Kush's side. I'm just reading the
book.
>> Just reading the book.
>> Luckily, I don't need to remember things
anymore that are relevant.
>> Yeah, exactly. I had to kind of think
back for this question. I was like,
standardized tests. How did I go about
doing that? And
>> yeah, I think active recall was what I
tried to do. Um, all right. Well, so
what I want to do is uh uh kind of talk
a little bit about this idea because I
think it's so interesting because you
know I think people are always like,
"Oh, AI is making us dumb. AI is making
us smart, but like I think so much of
this is like how you actually design uh
the systems." And so Kush, I'm curious,
have you had a chance to play around a
little bit with study mode?
>> Uh not yet, but um yeah, actually this
isn't exactly new, I would say, because
Claude had their sort of learning mode
come out in April. Um and uh other
people have been kind of playing around
with this. So um yeah, I mean I think
it's a good thing to have this like
other kind of role that the LLM is
taking. Um because I mean a content
generator isn't the only thing, an
assistant isn't the only thing you can
think about a muse, an editor, um
devil's advocate. I mean all sorts of
different roles that an LLM is going to
want to take. And it's not going to
happen naturally because of all of the
RLHF that's been done on these things.
Um, so yeah, I mean I think uh having
these additional roles is a is a very
good thing.
>> I'm curious, you know, so I was talking
to a friend about all this and you know,
he was basically like a cynic about this
whole feature. He was like, "Look,
people just want the answer. No, no
one's really going to use study mode.
This is just kind of like a marketing
thing they're using to kind of like push
back against the narrative that AIs are
making us dumb." Do you do you buy that
argument? I don't know if you're a cynic
like he is.
>> No, I have two kids. So, um
>> I think it's actually great because I
think the people learn differently. Um
some people just need, you know, YouTube
videos or want frontal teaching.
Other people want to be quizzed and so I
think it's just in the in the
repertoire. Um it's almost a tutor,
someone who watches you, you know, and
then if you don't or if you make
mistakes and you know, the system would
actually adapt to your behavior. And so
it's almost like Khan Academy um you
know pushed into an LLM. So I think it's
a it's a great um extension to you know
the the portfolio. So if you give kids
the answer they will not learn. And I
think that quizzing method is actually
pretty good.
>> Yeah for sure. And I think Volkmar you
mentioned Khan Academy. I think this is
like where I want to go with this
discussion because I think you know we
can debate the feature and how effective
it is and what it's useful for. But I
mean, Kaoutar, there's been a lot of
discussion at least among kind of my
circles about how far all of this AI
stuff goes in terms of education. Um and
you know I feel like these types of
features kind of point the way to the
idea of like well how much learning is
just eventually going to be like
completely doable through AI right like
and which raises really big questions
about kind of traditional schooling. Um
I I know you always hesitate to do like
you know grand forecasts but where do
you think this stuff goes in a few
years? I mean do we feel like we're
going to eventually have a technology
which is kind of competitive to I don't
know what you might get from a
traditional education.
>> Yeah that's a very interesting question.
Um it seems to me that you know already
right now many you know teachers and
students are already using AI. So I
think whether we want it or not. So I
think this uh new mode that they have uh
I I feel it's similar to like a a
cognitive gym. So uh because this study
mode uh it's kind of a step towards a
design philosophy that could be kind of
uh a gym versus like a crutch because
right now typically how students or how
many people use you know the LLMs is as
a cognitive crutch. So it does the work
for you to make your life easier. But
the the idea of this cognitive gym is
basically it's designed to make you do
the work but with some expert guidance
and support and kind of make you
stronger in the process. So, so I feel
like the future of of truly valuable AI
in education and professional
development isn't about providing
answers faster, but it's about building
those systems that are experts, socratic
partners uh that should know when to
give you the hint, when to ask a probing
question and when to force you to
struggle a bit. So, it definitely if
this is done right, it's it's going to
be a huge uh kind of way to redesign the
whole educational system. uh but I think
this this whole idea of uh the tutor
it's a much harder design
challenge than just building a better
answer engine so uh how do we really
shift you know the metrics from time to
answer to depth of user understanding
how do you evaluate these systems and uh
so it's going to be interesting to watch
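The “cognitive gym” Kaoutar describes can be approximated on top of any chat model with a carefully framed system prompt. The sketch below is a hypothetical illustration of that design philosophy; the prompt wording and `build_study_session` helper are my own assumptions, not OpenAI's actual study-mode implementation.

```python
# Hypothetical sketch of a "study mode" layered over a generic chat model.
# The prompt text and message format are illustrative assumptions only.

STUDY_MODE_PROMPT = (
    "You are a Socratic tutor, not an answer engine. "
    "Never give the final answer outright. Instead: "
    "1) ask what the student already knows, "
    "2) offer a hint sized to their last response, "
    "3) ask one probing follow-up question, "
    "4) only confirm an answer the student has produced themselves."
)

def build_study_session(question, history=None):
    """Assemble the message list for one tutoring turn.

    The system prompt steers the model toward guidance rather than direct
    answers; prior turns are kept so hints can adapt to the student.
    """
    messages = [{"role": "system", "content": STUDY_MODE_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_study_session("Why does a carbonized scroll resist physical unrolling?")
print(msgs[0]["role"], len(msgs))  # system 2
```

In this framing, the "gym versus crutch" choice lives entirely in the system prompt: the same underlying model either does the work for the student or makes the student do it with guidance.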
>> yeah that's right and I think it's worth
getting into I mean the ethics of this
and how you go about doing it I think
are pretty interesting I guess at least
from what openai said in their blog post
they said you know we consulted with all
these expert educators and we kind of
have some magic working in the
background to like make sort of study
mode work. I guess to Volkmar's point,
Kush, I don't know if you have an
opinion on this, like ideally you would
eventually want AI to be able to kind of
just like go multimodal for whatever
teaching method it thinks is going to be
the most effective, right? I don't know
if you think we can get there, but yeah.
>> Yeah. I mean, I think the biggest thing
I mean, what Volkmar brought up, what
Kaoutar is bringing up as well is um uh
like just having a little bit more kind
of control over this, right? Um, so you
don't want the AI to have the agency.
Um, because there's a finite amount. Um,
like the AI can have some agency, the
human can have some agency and, uh, you
just need to to balance it. So, um, it's
not just doing everything for you. It's
not taking control over the the entire
sort of situation. And, um, I mean, the
next topic we'll get to is kind of uh
along the same lines. It's like uh uh
how do we um kind of have these uh
things adapt to us, make sure that uh it
it's doing things in in the ways that uh
that make sense for us. Um and yeah, I
just wanted to bring up one article that
actually came out yesterday. Um it's in
this um uh maybe lesser known uh sort of
news outlet called Rest of World. Um so
it's about kind of technology and in the
rest of the world. Um and it goes
through and talks about I mean how like
teaching is happening in uh like rural
Colombia. Um and uh it's making the
point that uh yeah I mean let's say um
open AI or anthropic are doing these
things but those are like closed
proprietary sort of things and um in
these schools in rural Colombia um
people are using the AI that's
integrated with WhatsApp um and uh it's
just like the level of reading has gone
down in a whole in a year um like all
these sort of things. So, uh it's great
that uh I mean we're seeing this but uh
how are we going to bring it down to uh
uh to a broader population because uh
just just keeping it uh enclosed within
ChatGPT is is not going to be the the
thing for for for most of us.
>> Yeah, I think it's a really hard
problem. Um, Volkmar, before we
move to the next topic, I don't know if
you want to offer some parenting advice.
So, I've got a few kids at home. Uh, and
I think we've been traditionally quite
cynical about uh any screen time
whatsoever, right? So, we've been like
very much like keep them away from the
screens, but they're kind of reaching an
age where I'm like, should I give them
access to like chat bots? Like, so I
don't know how you approach that with
your kids. Like, for our listeners,
would you recommend?
>> Yeah. My So, my kids are a bit older.
Um, they're teenagers. Uh, and you know,
the screen-time train left the station
like a couple years back,
>> long time ago. Um so what I'm seeing is
in the education system um is that if
you look at a traditional public school
system um there is effect a fight
against AI which is primarily because I
think the uh the public education system
is not willing to adjust the curriculum.
Um we are looking across the board and
we are also looking at for example
schools which completely integrated AI
and the curriculum. So there are I'm in
Austin there are a couple of schools
here which said you know you you do
studying for two hours solely AI based
no teacher and then everything else is
project work um where you know kids are
doing crazy stuff like there's a kid
which built a BMX bike park in the city
of Austin needs to raise funding build
it etc right so I think we are at a
point where we finally have the
opportunity to explore new education
methods I think that's the primary thing
um, because so far for
hundreds of years it has been teachers
standing in the front and a bunch of
kids need to listen and suddenly we can
actually adjust it and and adjusted also
to the learning speed of the kids and I
think this is my kids are really bored
in school and so you know they're kind
of sitting and waiting uh until class
catches up and I think with AI we
suddenly have the ability to either go
deeper or to you know just let them
sprint ahead and I think you know you
will at the edges of the bell curve you
will just get so more happiness.
>> I think if I might add to this, Tim, I
also have kids um that range from a
9-year-old to a 16-year-old and one in
college. So, um it's important I think
to introduce the right tools at the
right age. So like for maybe kids one to
six uh kid-friendly tools like Khan
Academy Kids or something like that but
middle school and high school you know
things like you know Quizlet AI or AI
tutor where it's more like an
interactive not just a passive learning
so you promote active learning where you
teach the the children how to interact
with the AI ask follow-up questions
learn how to prompt things you know more
in intelligent manner and then but also
give them times when there is no screen
time. So I think it should be a
combination of you know no AI so they
you force you know their cognitive
capabilities to build but also slowly
introduce the AI because that's going to
be their world. They're going to be using
these things whether we want it or not.
So they need to learn how to use it
ethically in safe in safe ways and more
you know in a uh interactive not a
passive manner. This goes along with what
came out yesterday that Meta is changing
the interviewing process. And so they
actually ask people who interview that
they they can actively use AI tools to
solve the interview questions, right?
And so I think it shows the shift from
you know are you individually capable of
actually solving the problem or are you
able to use tools to solve the problem
and so I think we are outside of the
realm where you know you don't use AI
the expectation is you do and so we are
we need to test whether you are you know
correctly using the tools not uh if you
are able to solve the problem without
the tool.
>> Yeah for sure. Yeah, there's all these
kind of interesting like processes and
systems in society that kind of assume
no AI and it's very interesting like
everybody's like either it's like fight
or adapt and I think eventually
everybody will have to adapt but it's
kind of like how long they want to sort
of put this off and what pain they're
willing to go through for that. That's
really interesting. I hadn't heard that
that Meta was doing it. Um Kush, any
final thoughts on this topic before we
move on?
>> We we just need this to be more
inclusive. I mean uh because uh the the
best and the brightest are going to like
have these sort of things at their
hands, but uh how do we make sure that
uh that everyone does? We don't want
this just to be a luxury good.
>> Yeah, absolutely.
>> All right, I'm going to move us on to
our next topic. This is a submission
from Vulmar, so I'll uh I'll tee it up
to you um as the first commenter on it.
Um interesting tweet came out from a
gentleman by the name of Greg Eisenberg.
He's an entrepreneur, runs a few
companies, and it's kind of a a very
nice little sort of Twitter or now X
essay. Um, and he kind of starts it by
saying there's a quiet shift happening
in how we design software. We're moving
from UX to AX, agentic experience, is
what he says. Um, and originally I was
like, another agent thing. I don't want
to look at it. But then the more I read,
the more interesting it became. And I
think, uh, I'll just kind of quickly
sort of sum it up. I think his argument
was, you know, back in the old days, we
would design applications, interfaces
with sort of this idea that like the
flow of that interface was fixed, right?
And essentially that the interface was
dumb. You would start from zero and then
you maybe do a little bit of
customization, but a lot of it would be
like how do you guide the the user
through kind of this like very uniform
um sort of sort of experience of the of
an application or or a website. And so
his kind of argument is well with agents
now we're living in a world where um
these interfaces can become a lot more
intelligent. They can retain a lot
more information about how we interact
with it. Uh they can retain a lot more
information in terms of like uh what
what we do or have done on that site.
And so I think his vision is a future
where these interfaces become a lot more
um malleable, a lot more adaptable. Um,
and and that really kind of changing the
way we've traditionally done UX, which
is to think about these very sort of
fixed flows. Um, and so I guess Volkmar,
you you you submitted this one. I don't
know if you want to talk a little bit
about why you thought this was
interesting and where you think, you
know, our listeners should focus on
here. Yeah, I think I mean the the core
change here is to rethink how we are
interfacing with the machine and you
know as you just pointed out it this was
the work of a designer and the designer
had to kind of figure out what are the
most common flows and I think AI is
really interesting uh in two ways. So
one is uh usually you implement 90% of
the flows and 10% is exception handling.
And so if you look from an enterprise
perspective, the 10% is your labor cost,
right? Because you have all these people
sitting around which are dealing with
the cases which haven't been
implemented. And so now you you suddenly
can go and say well I'm fundamentally
changing the interface it's dialogue
based or even you know even if it is uh
an application where you have you
interact with the screen the screen
elements could be um could be generated
directly by AI and you know you have an
adaptive user interface um because kind
of the AI thinks ahead what what's the
next step and it gives you the choices
and maybe you don't want to type you
know my next choice is this but maybe
you have a bunch of icons you click on
or so. Um the other thing is if the user
interface goes away from uh
pre-programmed flows, the speed of new
user interfaces will be incredible
because now you just describe what your
fundamental business problem is you're
trying to solve or you know consumer
problem and then the AI can just fill in
the gap. So I think now on the flip side
the the uh the implication is how do you
prompt because the AI could get lost in
the weeds, right? So I think it will be
a new skill set how do I put you know
guard rails around the model and the
possible questions and answers it gives
um what I'm I can't wait for is that I
don't need to reenter my name and my
credit card and my billing address
because there is absolutely no reason
that I need to do this every single time
Um and uh so the the the system can
incorporate you know information that
already collected about us and that's
kind of like you know when you go to the
bakery on the corner and they know
already what you want and so I think
that's that's something we now can
actually create in the digital realm and
it's you know I think so user interfaces
will become much more personal which is
nice you know it's it's adapted to you
and not to the mass.
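Volkmar's adaptive interface idea can be sketched as an agent that proposes the next screen from user context rather than following a pre-programmed flow. In the sketch below, `suggest_next_actions` is a stand-in heuristic where a real system would call a model; all names and the context fields are invented for illustration.

```python
# Illustrative sketch of an agent-driven, adaptive UI: instead of a fixed
# flow, the "agent" (a heuristic stand-in for a model call) predicts the
# user's likely next steps and reuses information already on file.

from dataclasses import dataclass, field

@dataclass
class UserContext:
    name: str
    saved_billing: bool = False          # billing details already collected
    recent_actions: list = field(default_factory=list)

def suggest_next_actions(ctx: UserContext) -> list:
    """Return UI choices ranked by the user's recent behavior."""
    choices = []
    if "added_to_cart" in ctx.recent_actions:
        # Skip the billing form entirely when details are already stored,
        # i.e. no re-entering name, card, and address every single time.
        choices.append("one_click_checkout" if ctx.saved_billing else "enter_billing")
    if "browsed_help" in ctx.recent_actions:
        choices.append("open_support_chat")
    choices.append("continue_browsing")  # always offer an exit ramp
    return choices

ctx = UserContext("Ada", saved_billing=True, recent_actions=["added_to_cart"])
print(suggest_next_actions(ctx))  # ['one_click_checkout', 'continue_browsing']
```

The returned choices would be rendered as buttons or icons, giving the "bakery on the corner" experience: the interface anticipates the request instead of walking every user through the same uniform flow.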
>> Yeah I think this world starts to look
very interesting and different. I I was
reminded of recently my like you know my
wife was in the other room and she said
oh could you pick up my phone and I need
to text a friend and I picked it up and
I was like this home screen makes
absolutely no sense to me. It's like all
the apps are in different places all
different kinds of configuration. I I
mean I think one really interesting
world is that in the future you might
sit down at like someone else's computer
and just realize that like wait this is
what is what even what even application
am I using here because like it would be
so customized to their use. Um, I don't
know, Kaoutar, if you had kind of
responses to this, if you're excited
about this world of customization or if
you you have, at least in my case, like
a little bit of hesitance that it might
actually be really baffling, I think,
because everything's going to be very
customized in a way that at least I
didn't grow up with, right?
>> Yeah. I think uh I have mixed feelings
here. So, I think, you know, part of me
is like this is going to be exciting
like this super customization that's
going to make life easier. And you know
I talk to you know these agents and they
know me so well. So I don't have to
explain myself every time like Volkmar
said you know reenter all my history so
they know everything I want but then at
this you know the flip side of things is
that too bad you know when they know too
much uh you know what's the implication
what's the security implications like
you're saying here when we interact with
others or when we try to you know uh you
know look at other people's uh you know
phones or experiences we're going to be
lost. So it's like your world is going to
be super customized that you're going to
be lost you know in other words. Uh but
I I see you know this is a big shift
that we're noticing. So this move from
UX to AX it is one of the profound
shifts in human computer interactions
you know since I think the graphical
user interface. So it seems to me we're
moving in from a world where operators
where we are operators of tools to where
we are managers of agents. And uh so
it's not about you know having like
these uh menus and buttons and so on.
It's more about having these
conversations and uh and it's uh it's
you know you know for for decades we had
you know this invisible UI that was so
intuitive the user didn't have to think
but in the agentic world I think it's
the opposite. the agent's reasoning must
be really transparent. You know, like if
I'm asking an agent to spend about uh
you know thousands of dollars of my
money, I don't want magic here. I really
want a clear plan and the ability to
approve it and the trust. So the core
design challenge here I think is moving
from you know how do I arrange pixels on
a screen to more about how do I design a
relationship between the human and the
agent and a relationship that is built
on trust uh which requires you know uh
several things like competence does it
do the job well transparency can I
understand what it's doing and control
can I intervene and correct you know
when things are going wrong and so so
that is a big shift that we're noticing
right now in uh human computer
interaction.
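The competence, transparency, and control requirements described above can be sketched as a small approval gate: the agent surfaces a structured plan before acting, and anything over a spending threshold waits for explicit human sign-off. This is a minimal illustration; the class names, the threshold, and the callback are invented, not part of any real agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    """A plan the agent must surface before acting (transparency)."""
    description: str
    estimated_cost_usd: float
    steps: list = field(default_factory=list)

class ApprovalGate:
    """Require explicit human approval above a spending limit (control)."""

    def __init__(self, auto_approve_limit_usd: float = 50.0):
        self.auto_approve_limit_usd = auto_approve_limit_usd

    def review(self, action: ProposedAction, human_approves=None) -> bool:
        # Small, low-risk actions proceed on their own.
        if action.estimated_cost_usd <= self.auto_approve_limit_usd:
            return True
        # Larger ones require a human in the loop; with none, do nothing.
        if human_approves is None:
            return False
        return human_approves(action)

gate = ApprovalGate(auto_approve_limit_usd=50.0)
coffee = ProposedAction("Order coffee beans", 20.0)
flight = ProposedAction("Book SFO to JFK flight", 3000.0,
                        ["search fares", "select seat", "pay"])

print(gate.review(coffee))                                 # True
print(gate.review(flight))                                 # False: no human
print(gate.review(flight, human_approves=lambda a: True))  # True: approved
```

The point is the shape of the relationship: the plan is inspectable before any money moves, and the human can intervene at exactly the step where trust runs out.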
>> Kaoutar summarized it quite well. This is a new paradigm of interaction, and something my team has been pursuing quite a bit recently is this concept of mutual theory of mind. It's pretty much what we've already heard: when two people are interacting, if I know what the other person is thinking and they know what I'm thinking, we can just work better together. And this goes to second, third, and fourth orders. Some of you might have seen the movie The Princess Bride a long time ago; there's a whole scene about a guy reasoning through what the other person is thinking, back and forth. That's exactly a way to make things more productive in a relational sort of way. I think all these things from human relationships are going to come into it. It's not just going to be the AI that adjusts to us, but us adjusting to the AI. We're going to be introducing ourselves to AIs; AIs are going to be introducing themselves to us. We'll be thinking about what level of conversation to have, and whether there are particular information-processing styles to appeal to, in both directions. I think that's going to be such a nice change for us. It will allow so much more tinkerability, so we can make these things authentic for what we need. But yeah, the risks are there, as Kaoutar and Volkmar both talked about. So it's about balancing all of that.
>> Yeah,
for sure. There's one hypothesis I had thinking through this post that I thought was fun, because we can almost draw an analogy. The rhetoric of the 2000s was, "Oh, we're about to live in a world where everybody can have a blog. There are going to be blogs on every possible topic you could think of." At the time it was called the long tail, right? The most popular shows will become less popular, and there will be lots and lots of micro-shows everywhere. And I think one reflection from that era was, well, that didn't actually quite happen. It turns out that on YouTube there's still a small group that gets a huge amount of attention, because people share a lot of interests, and there are usually breakout behaviors. And I'm curious, Volkmar, if you think that's maybe one outcome for all this: you set up all this agentic tooling on your interfaces, you allow the interfaces to drift and customize every way you want, but it turns out humans are very similar, so you actually end up with a lot of interfaces that are really quite similar for the vast majority of users, and then there's this very long tail of truly bizarre interfaces. Do you think that's maybe one outcome we end up with?
>> I think, if you look at the web, there were tons of different versions, and now it's all standardized. It all looks the same, primarily so that you can go from one web page to another and your cognitive load to go from one to the other is low, right? So I think we will have some interfaces that become more dominant, and then everybody will just follow. But I think this is exactly why we need to go through an experimentation phase, which is why I'm really excited about this. Finally, after 40 years, it's not a mouse and an input field; we can actually rethink the user experience. The last major shift in user experience was the iPhone, where the input device changed from a mouse to a finger, but otherwise not much has happened. This morning I drove in, and Tesla now has Grok in the car, so I actually use it quite often; I used to use it on my phone. You just have conversations: "Okay, I found this paper, can you summarize it for me?" So I'm actually having a dialogue with the car now, which is kind of silly, but it's a 25-minute drive, so I can actually do stuff while driving, right? So I think we will get user interfaces where there is an expectation of how a flow looks, if you book an airline ticket or if you buy something, but you may branch off. And it knows stuff about you, so I think it will cut out steps you don't want. A new way of interfacing with the machine will emerge. I think we are currently in the experimentation phase, but it's nice that there's actually something new.
>> Yeah, I agree. It's a breath of fresh air. I met someone recently who is the person who allegedly influenced the pull-to-refresh interface on your phone. She was like, "Yep, that was me." And I was like, "That's really crazy"; obviously, someone had to come up with that. And it's interesting to think that we're in a very similar place for AI now, where all of those tropes, or design patterns, need to be built out.
>> Yeah. What also excites me a lot are those neuromorphic interfaces, where your brain, or your eye, not just your voice, is doing the interfacing, as part of brain-computer interfaces, prosthetics, and biofeedback devices. There's a whole world of new things that will emerge, and some of them might be very useful, especially for people with disabilities. I hope that's going to open up a lot of things that they can't do today. So that is really exciting.
>> Yeah, I was just reading a paper yesterday from some folks in Korea. They're thinking about what the next iteration of interaction research is. We've gotten to a point where we have a definition of human-centered AI, but the next thing is maybe group-centered AI, because you can rely on cognitive psychology and those sorts of things at the individual level, but now you're going to have these systems in teams, mixed human-and-AI teams, and really it becomes a social psychology question. What are the new considerations? How do you make sure everyone's voice is heard? All of those things are going to be part of it, so I think that's another angle for exciting research to come.
>> Yeah, for sure. I was just thinking, when Volkmar was talking earlier, there is kind of a funny world where everybody has their own different interface in the future, because we can do that now, but agents still need to talk to agents. So something like MCP will be the standard, and the remaining standardized part of the web will be only what agents can see, while everything else will be very customized when it gets rendered into a human-readable state. I think the complexity of that will be very interesting to navigate.
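For a concrete sense of what that standardized agent-to-agent layer looks like: MCP messages ride on a JSON-RPC 2.0 envelope. The sketch below shows only that envelope shape; the tool name and its arguments are invented examples, not part of the protocol itself.

```python
import json

# Illustrative agent-to-agent tool invocation in the JSON-RPC 2.0 envelope
# that MCP uses. "book_flight" and its arguments are invented; the point is
# the standardized shape that any agent can parse.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "book_flight",
        "arguments": {"origin": "SFO", "destination": "JFK"},
    },
}

wire = json.dumps(request)   # what actually crosses the agent boundary
decoded = json.loads(wire)
print(decoded["method"], decoded["params"]["name"])
```

Whatever custom interface each human sees, this machine-readable layer stays uniform, which is exactly the split between "what agents can see" and the customized human-readable surface described above.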
All right, I'm going to move us on to our last topic. A super fun paper came out; if you're like me and enjoy reading papers in Nature, this is one for you. It's called "Contextualizing ancient texts with generative neural networks." It's a fun paper which basically says: look, a lot of the work of historians, particularly when they analyze ancient texts, is looking for what are known as parallels. They're looking for texts that have shared phrasing, or a very similar function, or shared cultural settings. The idea is you do research by putting these next to one another and trying to make inferences between them. Well, this was written in this place at this time, and that was written in that place at that time, but they share these commonalities, so we can say, hey, maybe they had a trade relationship, their languages are similar and have some relation to one another. And so this group of researchers said, well, pattern matching, looking for parallels, is something that generative AI seems to do really, really well. So what if we created a system, which they call Aeneas, to go and scan the historical record and surface interesting parallels for people to look into? And the results are pretty interesting: of the candidate parallels that were identified, historians found that these were useful research starting points in 90% of the cases of what Aeneas was surfacing. So I think this is a really fun story, and Kaoutar, I'll throw it to you. Normally we say, okay, there's AI in the commercial space, what's happening in B2B, what's happening in B2C. We also say AI is really going to be good for research, and when we say research we usually mean pharma or math or the other topics we've talked about. But this is an application of generative AI to a pretty different domain of research, and the results here seem pretty impressive.
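As a toy version of the parallel-finding task described here (not the actual Aeneas system, which uses learned neural representations of Latin inscriptions), you can rank a corpus by similarity to a query passage. A simple bag-of-words cosine similarity stands in for a real embedding model, and the inscriptions below are invented:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_parallels(query: str, corpus: dict, k: int = 3) -> list:
    # Rank every text in the corpus by similarity to the query passage.
    q = embed(query)
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

corpus = {
    "inscription_a": "the council decreed honors for the benefactor of the city",
    "inscription_b": "a list of grain prices in the market",
    "inscription_c": "the assembly decreed honors for the benefactor",
}
print(find_parallels("decreed honors for the benefactor", corpus, k=2))
# -> ['inscription_c', 'inscription_a']
```

The real system does the same ranking over thousands of inscriptions with far richer representations; the historian's job then starts from the top of the ranked list rather than from scratch.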
Yeah, I really love this direction, because I feel like we're in a world saturated with conversations about AI for profit, productivity, or power. This is a story about AI for posterity. I think this is AI at its absolute best: not just replacing human labor, but recovering lost human thought. This project is a powerful counter-narrative to all the fears surrounding AI. It really shows that AI can be a tool for connection, not just for optimization. It connects us to the voices of philosophers from thousands of years ago, voices we thought were permanently silenced or lost. And I think it also provides a blueprint for the future of the humanities: the combination of advanced imaging and pattern recognition. Here AI is creating a new field of digital archaeology, where we can apply the same techniques to faded manuscripts, damaged artworks, and so on. But one of the key questions we need to pose is: how do we ensure that these incredibly powerful, often expensive AI tools are made accessible to researchers in the humanities? They're often in less-funded departments, and this could unlock a lot of historical or cultural mysteries. That could be the next challenge for AI.
>> Volkmar, one of the things I was thinking about a little, and this goes to Kaoutar's point about accessibility, but also the technology as a whole: imagine the landscape of all the possible research problems humanity could work on, and now you're going to apply AI to them. The discussion has often assumed, oh well, it's going to accelerate the hard sciences first, right? Materials science is going to accelerate very quickly, or finding new proteins is going to accelerate really quickly. But it may also be that this is a fundamentally lumpy thing. Imagine a world where this parallels task is something AI has been able to do for some time. There's almost a world where, even before we get all of the super-accelerated science, we have this explosion in historical knowledge first. What I'm trying to point out is that AI is going to have all these weird effects on which parts of our knowledge move quicker than others, right? And those outcomes are very interesting to me.
>> Yeah, I agree. I think we are scraping at AGI here, in the end, right? If you look at this, you can just let this thing run for a while, dig around and form hypotheses, and then based on those hypotheses form further hypotheses. And this is really where machines matter, because we have only so many humans in archaeology, and as Kaoutar just said, there's limited funding in those areas. But suddenly you can fund this with just energy, right? At the moment it's just energy and a bunch of GPUs. We can accelerate our knowledge about historic things: a few people can make massive discoveries, because they suddenly have a huge body of, in this case, Nvidia chips behind them. And suddenly you can get to new knowledge, and there's always this chain: the moment something new is invented, you know what follows next. So I think there's now suddenly the possibility to go and explore crazy ideas which we have not been willing to fund, because you need to go and apply for grants and so on. I think that will open up a lot of understanding of the past, also in domains where we traditionally don't consider it valuable. If the cost goes down, I think we will see an explosion of knowledge.
>> Yeah, that's super interesting. Volkmar, it sounds like you're arguing that there's a cognitive dividend, basically, that applies to these fields, because usually it would be really hard to get the grant funding and have people put a huge amount of money into this. But it feels like, in a world of open source, or even with less sophisticated models, you might be able to get very, very far, and so there's this benefit to all these fields that might not otherwise get funding to really push that research ahead.
>> You can do the same thing. What we are seeing in code generation, right, is a 10:1 ratio: I can now have one good engineer who uses a large language model and produces the code of ten people. Why is that true in coding and not for all the other disciplines? Coding, I think, is just a very profitable business, so that's where it gets applied first, because there's high competition for that skill set. But of course it will be deployed in every skill set, and I think we will get similar productivity gains. It's just that the people who are building models are programmers, so they make their own tools first; of course it will roll into the other disciplines.
>> Yeah, for sure. Kush, I'll ask the obvious question, right? Some people listening to this will say all the archaeologists are out of a job, all the ancient historians are out of a job.
>> Yeah.
>> Do you buy that?
>> Yeah, I don't think so. And the fact that there was synergy shown in this particular paper is actually unique: in something like 95% of cases with human-AI teams, you actually don't get any overall benefit from the combination. Here, because the task itself was the contextualization, it wasn't trying to do the same job the human would have been doing, but freeing them up to do other things as part of their workflow. I think that's part of the story here. The other interesting thing about this one is that Latin, which is what this is all about, has a lot of resources. Human historians and human archaeologists have been studying it for a very long time, so there was a large corpus to draw on. But when you look at lower-resourced ancient languages, a great example is the Indus Valley civilization, whose script is still undeciphered, because there's such a paucity of content from it. Just earlier this year, a million-dollar prize was announced asking: can you use AI to decipher the Indus Valley script? That's going to be a completely different sort of endeavor, because it's not using all of the accumulated human scholarship we have for these things, but discovering something completely new, completely different.
>> Yeah, for sure. And that's an important subtlety, which I think Volkmar's comment earlier, and Kush, what you're picking up on, gets at: we almost have to think about where the bottlenecks are in the research. Here, a lot of it is just that you don't have enough grad students to throw at the data set to try to find these parallels, and so you're unlocking all of these opportunities just by effectively enhancing that labor, or substituting for it. So,
>> But I think, additionally, some of the things the study showed were physically impossible to do. For example, these scrolls were buried and carbonized into fragile solid lumps; physically unrolling them would destroy them. So for centuries the content was a mystery. But now, the technology involves 3D scanning, using high-resolution CT scans to create detailed 3D maps, then using AI to train a computer vision model to detect the subtle differences in density and texture that correspond to the ancient ink, and then doing a virtual unrolling: the software can unroll these detected ink patterns. These things were physically impossible for researchers to do. The AI here is unlocking great things: researchers are able to read entire passages of lost text that was physically out of reach. So it's not about AI replacing the historians; it's really a massive collaboration between the computer scientists, the physicists, and the experts in ancient texts. AI is augmenting the human expertise, allowing these scholars to do something that was physically impossible before. And that is pretty powerful.
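A heavily simplified sketch of the ink-detection step just described: in the real pipeline, a trained computer vision model flags subtle density and texture differences in high-resolution CT volumes. Here we just threshold a tiny synthetic density grid; the numbers and the threshold are invented for illustration only.

```python
# Toy stand-in for ink detection in a CT slice of a carbonized scroll.
# Assumption (for illustration): voxels where ink sits read slightly denser
# than the surrounding papyrus, so a simple threshold separates them. The
# real system learns far subtler density/texture cues with a vision model.
density = [
    [0.10, 0.12, 0.11, 0.10],
    [0.10, 0.55, 0.58, 0.11],   # slightly denser voxels where ink sits
    [0.11, 0.12, 0.57, 0.10],
]

INK_THRESHOLD = 0.5  # invented value for this toy grid

ink_mask = [[1 if v > INK_THRESHOLD else 0 for v in row] for row in density]
for row in ink_mask:
    print(row)
```

The virtual-unrolling step then maps masks like this, computed slice by slice through the rolled-up scroll, onto a flattened surface so the recovered ink traces read as text.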
>> Yeah, that's really cool. And it makes me think it would be really cool to put together a little engineering corps that just goes from field to field looking for these areas, to break down those kinds of bottlenecks, because I assume once you start looking, they're kind of everywhere in these domains, right? The traditional problem is, how do you drag engineers to work on these types of things? But I would love to; it's such a cool topic.
All right, I'm going to close us up there. That's all the time we have for today, because we're going to move on to this final segment with Suja on the Cost of a Data Breach report. But as always, Kaoutar, Kush, Volkmar, thanks for joining us, and we'll see you soon on Mixture of Experts.
>> Thank you.
>> Thank you.
>> Thank you.
>> So I'm thrilled to have Suja Viswesan joining us today. She's the vice president of security and runtime products, and she's joining us for the very first time. So, Suja, welcome to the show.
>> Thank you. I'm very excited.
>> Yeah, definitely. We wanted to have you on the show because we actually covered the Cost of a Data Breach report in 2024, and I understand that you were pretty deeply involved this year in helping put the 2025 report together. So I'd love to dive into the data, but I wanted to start with one number that really stuck out at me: it turns out that 97% of organizations in this study that reported an AI-related breach said they lacked proper AI access controls. This is really wild, right? Because in some ways, some of this IP around AI is among the most valuable things that companies own and hold on to. So I'd love to hear a little bit more about that number. Are you shocked by it? It certainly stood out to me.
>> Look, it is shocking and also kind of expected, because the challenges an organization typically has are all multiplied by AI. It's like what one of my colleagues used to say about COVID: you have vaccines to help you, but you still have to wash your hands and keep things clean. The basic hygiene that isn't there gets exposed and exploited very much in this AI era. That is why I said it's not surprising, but at the same time it is surprising that the other number is that 63% of organizations don't have AI governance policies in place that would let them address these kinds of problems.
>> That's right. And is that the right way of thinking about it: some of the problem here comes from companies already not getting the basics right, and now what we're finding is that this age of AI is really accelerating those issues in a big way?
>> Exactly. Exactly. A breach that took six months, or sometimes two or three months, to pull off can now, with AI in bad actors' hands, expose all these problems much faster.
>> Yeah. And so we're really seeing this. I remember when we were talking a few years ago, it was like, oh well, pretty soon you're going to have AI-enhanced attackers, right? The space is going to be a lot more dangerous than it used to be, in some sense. Do you want to give our listeners some of the flavor of the kinds of attacks we're starting to see? Because I think when people hear "AI-enhanced attacks," they still have a very fuzzy vision of what that looks like. In practice, what are we talking about? Is it things like phishing? I'm just curious what it looks like.
>> One part of it is definitely phishing. It used to take 14 days or so to craft a phishing message, and we had security controls in play: look at the language, check the grammar, all that basic stuff we used to rely on is changing. Now it takes minutes or seconds to craft a very, very personalized, diligent phishing attack. So it has upped the game for the attackers, and the defenders need to work on different protocols for how to detect this. The second part is that, at the end of the day, we have a lot of data challenges. You talked about the IP, the crown jewels of a company: they are exposed through AI, and if proper governance and controls are not in place, they can be easily exploited. That is why this becomes a two-sided problem. And yes, we also have AI on the defender side to help. That is why, if you look at the numbers, the cost of a data breach is going down: the defense is upping its game, but at the same time the offense is upping its game too. So it's like a race, if you will.
>> Yeah, that's right. Now, I did want to ask a little bit, because we talked about this last year: any big trends that stood out to you about how the trajectory of this is evolving? It sounds like you just mentioned one of them, which is some good news: AI defense is starting to mature, so the end result is these breaches are less expensive. Anything else you think people should be paying attention to?
>> Extensive use of AI in security is actually giving customers a lot of cost savings, right? We saw in the data about $1.9 million in savings for organizations able to use AI to protect themselves. That is a big thing on the positive side. Yes, it is a scary world, it always has been, but we are also seeing a lot of positives in how you adopt AI to defend and increase your protection. The other part is there's a lot of work on the needle-in-a-haystack problem. With AI you can elevate your security analysts' lives, because now they are able to spend their time actually preventing the attacks, instead of doing the hard manual labor that we can now offload to the AI.
>> Yeah, that's right. And so are you ultimately optimistic? Do you feel like next year, when we're sitting here talking about the Cost of a Data Breach 2026, the cost impact of these breaches is going to continue to decline? Are things becoming more defense-dominant over time, or do you feel like we're bottoming out, and next year we're going to see that the numbers are about the same as they are this year?
>> I think the cost will go down but the
volume might go up.
>> Okay.
>> Right, in a different way.
>> So what is that, more breaches but less cost per breach?
>> Yes, because of what I said earlier, where you have some existing hygiene issues. That is why your data security posture, for both structured and unstructured data, your identity and access, your secrets posture, and your encryption posture all need to level up in order for you to reduce the attack surface.
>> And I'm kind of curious, and again, I only had a quick chance to read this report because it just got released recently: are you finding that there are differences between enterprises? Do certain types of enterprises have a really big data governance problem, versus other types where the security analysts aren't enhanced enough using AI? I'm curious what the landscape looks like on the defense side.
>> See, for critical infrastructure companies and enterprises, it is important because they already had regulators and regulations; financial and health, for example, were all very highly regulated, and that is why for them it becomes really important. For them it's a one-step function. Those who weren't highly regulated now find it very critical for their business, because of ransomware and data breaches: if they lose trust with their customers, it's very difficult to build it back up. So the non-regulated industries, which didn't have to care before because they weren't regulated, now have to care because of security issues. That is where the difference comes in between the regulated and the unregulated industries.
>> Well, maybe one last question for you that I'd be curious about. We have a lot of listeners who are probably reading this report, and they might be saying, actually, I don't have an AI governance policy or anything like that. Do you have a recommendation for how people can get started? If they say, okay, this is a real problem, what can I do, what should I be doing tomorrow?
>> The first thing is: look at your data, which is easier said than done. How is your structured and unstructured data handled? I do believe that for structured data we have very clear regulation and governance policies. It's the unstructured side, because that is what a lot of AI is trained on to help increase productivity, where we need to look at the lineage. Does your governance and access policy travel along with this unstructured data all the way in? Looking at that helps prevent some of these problems. I would start with that: do I have lineage and clear accountability for unstructured data when it gets sharded and put into a vector database? Do I know where it came from and who can have access? That will then inform how the model is going to behave. That is where I would start, as I think about every enterprise.
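The lineage idea described here can be sketched as metadata that travels with each shard into the vector store, so the access policy is enforced before retrieved text ever reaches the model. The field names and roles below are invented illustrations, not any particular product's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    """A shard of an unstructured document, carrying its lineage with it."""
    text: str
    source_doc: str        # where it came from
    owner: str             # accountable party
    allowed_roles: tuple   # who may retrieve it

def make_chunk(text, source_doc, owner, allowed_roles):
    return Chunk(text, source_doc, owner, tuple(allowed_roles))

def retrieve(chunks, query_role):
    # Access policy travels with the data: filter BEFORE the model sees it.
    return [c for c in chunks if query_role in c.allowed_roles]

chunks = [
    make_chunk("Q3 revenue draft...", "finance/q3.docx", "cfo-office",
               ["finance"]),
    make_chunk("Public FAQ text...", "web/faq.md", "comms",
               ["finance", "support"]),
]

print([c.source_doc for c in retrieve(chunks, "support")])
# -> ['web/faq.md']
```

Because every chunk records its source document and owner, you can answer "where did this come from, and who can have access?" at retrieval time rather than trying to reconstruct it after a breach.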
>> That's great. Well, Suja, thanks for
coming on the show and hopefully we'll
have you back next year.
>> Thank you. Thanks, Tim.
>> Thanks to all you listeners for joining
us. Uh if you enjoyed what you heard,
you can get us on Apple Podcasts,
Spotify, and podcast platforms
everywhere. And we'll see you next week
on Mixture of Experts.
[Music]