# AI-Powered Browsers with Ben Goodger

**Source:** [https://www.youtube.com/watch?v=8tfqsGDCCb4](https://www.youtube.com/watch?v=8tfqsGDCCb4)
**Duration:** 00:24:45

## Key Points

- Ben Goodger, a veteran of Netscape, Mozilla, and Google Chrome, now leads engineering for OpenAI’s AI‑powered Atlas browser.
- Atlas is designed to look like a familiar traditional browser while embedding ChatGPT‑style assistance at its core, making the web experience more intuitive and intelligent.
- Over the past 18 months, Ben and his team have clarified that the product should blend advanced AI capabilities with user‑friendly design, focusing on security and practical workflow improvements.
- The recent Mac release of Atlas marks the first public step toward “AI‑first” browsing, showcasing how conversational AI can transform how people search, interact, and get work done online.
- The conversation also touched on broader implications for the future of browsers, including new security considerations and the evolving nature of digital work in an AI‑enhanced environment.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=0s) **AI‑Powered Browsers with Ben Goodger** - Nate interviews OpenAI’s Ben Goodger, former Netscape and Chrome engineer, about the history of web browsers and the future of AI‑driven browsing technology.
- [00:04:43](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=283s) **Rapid Code Understanding and Prototyping** - The speaker outlines how tools like Codex/Atlas help engineers quickly grasp large codebases, prototype ideas to assess viability, and directly implement features, exemplified by a conversational review of a GitHub repository.
- [00:08:08](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=488s) **Agentic Chromium Architecture for Speed** - The speaker explains their approach to building a browser that blends familiar UI with innovative, agent‑driven features by running Chromium as an out‑of‑process service to achieve rapid startup and secure input synthesis.
- [00:11:41](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=701s) **Surprising Post-Launch Agent Use Cases** - The speaker highlights unexpected personal and work applications of the new AI agent—including automated online shopping comparisons and rapid generation of Google Forms—showcasing its ability to streamline tasks that were previously cumbersome.
- [00:15:08](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=908s) **From Boxes to Conversational Interfaces** - The speaker reflects on how the early web transformed software delivery from physical boxes to instant clicks, and how modern LLMs further simplify interaction by letting users speak natural language to resolve everyday ambiguities, making technology feel like a helpful friend.
- [00:18:32](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=1112s) **Role Blurring: Engineers as Product Designers** - The speakers discuss using LLMs to synthesize information across tabs and how their team’s engineers now act as product engineers—owning feature design, user research, and feedback analysis—reflecting the merging of traditional roles.
- [00:22:47](https://www.youtube.com/watch?v=8tfqsGDCCb4&t=1367s) **Atlas Launch: Voice and Chat Memory** - The speakers discuss integrating voice interaction and persistent chat memory into Atlas to make browsing more conversational and personalized, emphasizing how these features differentiate the product.

## Full Transcript
This is a good one, guys. I got to sit
down with OpenAI's Ben Goodger, the
lead engineer building the Atlas browser
for OpenAI. We had a really wide-ranging
conversation. We talked about the
history of browsers. Ben, of course, has
been involved in building Chrome, been
involved in building Netscape Navigator
for those of you with gray hairs, and
now he's taking the lead on building the
AI powered browsers of the future at
OpenAI. So, I had a lot of fun. We
talked about what the future looks like.
We talked about security implications.
We talked about the way we get work done
and how that's changing. It was a super
wide-ranging conversation. I put a cut
up here uh on YouTube and if you want
the full cut, of course, you can go over
to Substack. Uh enjoy and uh yeah, let's
talk about AI powered browsers. Hey all,
I'm Nate and I have a special guest with
me today. Uh Ben, why don't you
introduce yourself?
>> Hi, I'm Ben and I am the head of engineering for ChatGPT Atlas here at OpenAI.
And how did you get to be head of engineering at ChatGPT Atlas?
>> I've always been interested in the web,
web browsers, uh going back to I guess
like the mid to late 1990s when the web
was first developing. Uh I was like a
hobbyist web developer building sites
just for fun. Uh and early in my career I got involved with Mozilla, which was an open source project, made contributions to that, and ended up getting hired by Netscape. I grew up in New Zealand and moved up to Silicon Valley for a period of time. Got to see Netscape in its final days, which was an interesting experience. But I ended up moving on to Mozilla, where I helped work on the first version of Firefox. Uh and then moved to Google. I was at Google for nearly 20 years. Helped build the Chrome browser there. And then almost 18 months ago I came over to OpenAI. I was very interested in exploring what the web would look like if you see it through the eyes of maybe having an assistant like ChatGPT really at the core of the browsing experience. Uh and so since then we have been trying to build that, and we shipped our Atlas product for Mac a couple weeks ago
which is really exciting. I am really
curious to hear, you know, you've shared
a bit about what made it compelling for
you to come to OpenAI, the sort of the
piece of having intelligence on the web.
What is it that you felt like you
learned or that crystallized for you
along that journey over the last 18
months that gave you a sense of clarity
about what you wanted Atlas to be and
where you wanted to take Atlas? I think
that, you know, the Atlas product, you
know, at face value, it kind of
resembles a traditional browser. And I
think that that's important because
everyone knows what a browser is.
Everyone uses a browser on a, you know, pretty frequent basis. So, I think
there was an aspect of we need to build
something that people can understand.
Um, even as we try and bring really
advanced capabilities into it. Uh and
then the other thing that I've learned
is that the pace of development of the tech that is happening here at OpenAI is just so incredibly fast that even some of the limitations that you might see one month won't be there the next month. So, as we've built features like agent, seeing that come together, seeing it get much faster and much more accurate at clicking on things and doing stuff for
you on the web. So, you know, I'm an optimistic person by nature, but I'm even more optimistic about where this tech is going and what types of product experiences that will enable. Uh what is it about your working
style or the team's working style that
has shifted over the last 18 months as
you've been working on building this
browser? Now you know for me when I
joined uh you know I'm the team manager
but you know also like when I was the
first person here working on this I was
also writing you know code doing
prototypes that type of thing uh I used
chat tbt early on extensively to help me
learn new programming languages and like
really get up and running again like but
then as we had more and more engineers
join the team and then we've also
launched products like codeex we've gone
on the evolution of codec especially
codec cli I would say in the past couple
of months even codeex is like really
changed the way in which we work and uh
what we see is that people that are
using codeex are just like so much more
productive and I think there's an aspect
to codeex which is you know it allows
people who don't code that much to do a
little bit of coding but it also helps
very experienced engineers get way more
done and I think that's what I'm really
excited about uh because a really
experienced engineer can kind of steer
codecs and and and be like just
monstrously productive
>> Is that a situation where you see engineering productivity patterns starting to shift toward multi-threadedness?
>> There are a few ways we use it. One is to sort of understand how
software that exists works and another
is to um prototype a new idea to see if
it is you know sort of the juice is
worth the squeeze uh if you like. Uh and
then lastly it's to actually get the work done, for implementing the new feature. Uh, we work in these large code bases, and sometimes, actually usually, the documentation isn't where it needs to be, or the documentation is out of date, that type of thing. You know, Codex or other similar tools can read much more quickly than you can and give you like a pretty good answer very fast. So I
have an idea for the product and I think
well maybe this is like super
interesting and I could spend a bunch of
time on it, I can throw together a prototype in Codex and decide, hey, this
doesn't quite work the way I want it to
uh and so maybe I'll I'll choose
something else uh to focus on and then
lastly you know some of our most
experienced, most productive engineers
are just using it to build features uh
in the drive. You know, either
refactoring code or just building like
the front-end code itself or you know,
any aspect of the feature, really. I think that gets at a use case I actually had for Atlas today. Um, which was
really interesting to me. Uh, I was
looking at a GitHub repo and I was
running through and I was like, I want
to get a sense of what's in this GitHub
repo really quickly. What if I just have
Atlas look at it and I put the assistant
up on the side and I just have a
conversation with Atlas about the repo.
And what I found was there was this sort of magic that happened when I was in Atlas. There's sort of a magic to code comprehension I'm finding with the way ChatGPT touches and plays with code. It was able to like
click through, take control of the
screen, look at all of the different
files in the repo, and it came up with
some really, really thoughtful questions
that enabled me to get much more
fingertippy with the code much more
quickly. It was one of those magic
moments for me. So, I think the browser,
it's it's so simple, it almost sounds
silly, but like reducing the friction
for a user to access some of this magic,
it can feel like making the magic
available like in the first place. You
know, I've always been able to take a
web page, maybe you could print it as a PDF, or you could copy and paste it into ChatGPT, but
that's just like a few extra steps. And
when you can just bring this up, you
know, in situ and ask the question
directly, it's it's like it, you know,
suddenly it's it's there. One of my
favorite use cases has been shopping.
You know, I could do a lot of online
shopping when I'm not working.
Sometimes I'd be looking at a product
and I can ask the sidebar, is this sort
of the best price that I can find on
this thing? And, you know, when paired with our search agent, it will go off and browse the web and find out if that really is the best price or if there's a better deal on it somewhere. I had one case where it hit really well. I was
looking at a pair of shoes uh and it
found the shoes available from a
different site for about $60 less. Wow.
>> And that was one of those like really
kind of wow moments.
>> Yeah. There's something around habit
formation where when you get that
dopamine hit of wow, this is really
easy. this is something that I didn't
realize I could do. What were some of
the things that your team really had to
wrestle with around trade-offs and
decisioning to make that browser come to
life? One is the product design itself
and then the other is the technical
infrastructure and all the magic that
went into it. So with the product um we
wanted to design something that was
really useful but also felt familiar.
Like I said everyone kind of knows what
a browser is. Um, we had, you know, lots
of debates about how we should design
features like basic aspects of the
browser. I think there was a lot of room
for innovation in those areas, but it's
also a bit of a double-edged sword. So,
we've tried to find a balance between
something that feels familiar and yet still has some improvements for folks that are, you know, looking to
find more efficient ways to get stuff
done. And then on the technology side,
uh this is where you know we spent a lot
of time both on the more traditional
browser infrastructure building on
chromium as well as how we built some of
the more cutting edge features like
agent. We wanted to build a product that feels very fluid and fast, and we also wanted to build a very cutting-edge kind of product user experience, and the standard way in which a lot of Chromium browsers are built just doesn't make that super easy. So
we built a unique way of holding Chromium. It's almost sort of agentic in form. We run Chromium as an out-of-process service, so that when you start Atlas, you're not actually blocked on Chromium starting up. The browser can start very, very fast, and Chromium just takes however long it takes to start up.
And then when you run a feature like
agent, it is doing things like
synthesizing input events to click on
things. um we're able to do that in a
very robust and secure way. So there's a
bunch of stuff uh like that that we've
done. We did a technical blog post recently that covered this in some more detail. But yeah, we're pretty excited about that. Yeah, I recommend, if folks listening have not
read the technical blog post, if you're
at all technically minded, it's a super
interesting read. I'm I'm sort of
curious given your experience and sort
of all the different browsers you've
worked on, to what extent does Atlas
feel like a
fully solved problem, feel like a
partially solved problem? Are there
pieces of it that you're really excited
to dig your teeth into next? Uh, yeah, I think this is really the first step in a long journey. Like when I talk
to people about this, you know, if I'm
talking about browser history, I would
say this is like the Netscape 1.0 moment
for this new era of like Agentic
browsers. Um we're excited to get the agent feature out there. Uh but it's
also very much sort of a research
preview. We are discovering use cases
for it. Uh we're also really enthusiastic to hear how other people are using it, and we expect to make a lot of improvements, both to the experience and to the sort of accuracy and speed. Most folks
haven't seen this type of functionality
in a browser before, and so we think it's really important to bring people along with this thing. You're very used to clicking on
things yourself. So having a tool that
can do that on your behalf is both exciting and sometimes maybe a little intimidating. So we want to
be very clear about how the product
works. So the first time you use this we
give you sort of this nice disclosure that tells you what its capabilities are and we give you some
options too. You can choose, for example, if you want the agent to run with your logged-in sites, just like it really was browsing as you; you can make that choice. Or you could choose to have it run logged out, and then you have to be very explicit about what sites you want it to log into, and you go and log into those yourself. And so there's choices like that that help people, whatever their comfort level is, apply that to their use of the product.
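The logged-in versus logged-out choice Ben describes can be pictured as a small policy object. This is a hypothetical sketch, not Atlas code; names like `AgentBrowsingPolicy` and the example sites are invented for illustration.

```python
# Hypothetical sketch of the user choice Ben describes: the agent either
# browses with your logged-in sessions, or runs logged out and may only
# use credentials on sites you explicitly signed into for it.
# These names are illustrative, not from Atlas.
from dataclasses import dataclass, field


@dataclass
class AgentBrowsingPolicy:
    use_logged_in_sessions: bool = False            # "browse as you"
    allowed_authenticated_sites: set = field(default_factory=set)

    def may_use_credentials_on(self, site: str) -> bool:
        """True if the agent may act with the user's login on `site`."""
        if self.use_logged_in_sessions:
            return True
        # Logged-out mode: only sites the user explicitly logged into.
        return site in self.allowed_authenticated_sites


# Logged out by default; the user opts specific sites in.
policy = AgentBrowsingPolicy()
policy.allowed_authenticated_sites.add("mail.example.com")
print(policy.may_use_credentials_on("mail.example.com"))   # True
print(policy.may_use_credentials_on("shop.example.com"))   # False
```

Whichever mode is chosen, the supervision controls discussed later in the conversation (watching sensitive tabs, the stop button) would sit on top of a check like this.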
>> Yeah that makes a lot of sense. Is there
a use case that you heard from the wild
post launch that was most surprising or
interesting to you? Well, there's stuff that I think is, yeah, super cool. Like I said, I was like an online shopping fiend and I'm like, "Wow." Like, I used
to have to go and have a different tab
where I'd go and search for those things
and then go and try them back one by
one. And so, the fact that you could
just kind of set this and forget it and
then it would try them all and then pick
the best one, that was pretty cool. Um, yeah. And then, not a personal use case but more of a work use case: I obviously had been diving through a lot of feedback from our launch the other week. So I
wanted to come up with like a quick
survey. Uh and so I had this discussion with ChatGPT about some good questions. Uh and then I asked the agent to go off and make a Google Form, so I could do the survey. And you know I've used Google Forms a bunch. It
tends to be a bit finicky, the different
types of question formats and other
stuff, but Agent just figured it all out
for me. And you know, a few minutes
later, I came back, I had this form, I
could just publish it right away. I can
think of a lot of government websites
that are very frustrating to use that
feel like they were built in the 1990s
that for me fall into that category. Um,
yeah. So then, if we switch modes a little bit to the security side of things: how do you think about security with LLMs in the browser specifically? There are divergent opinions, ranging from "this is an impossible solve" to "this is tractable" to "we can make progress in this area." How do you think about it as a problem space?
>> Yeah, I think this is interesting for agent, which is like a net new capability, and the technology is evolving. We expect to do a lot of work on it over the next while. Um I talked before about the onboarding and some of the choices you have about what sort of site access it has, if it's authenticated or not. There's a few other mitigations that we have in place: if it's going to do something sensitive on your behalf, like being in your email, it's going to want
you to watch it. Now the analogy I give for this is I have a car that has this sort of auto-drive functionality. I'll be there on the highway and I can turn on the cruise control and
it will you know even take the wheel and
steer it a little bit for me. But in return what it wants is that I
keep my eyes on the road uh and it has
this little camera in the dashboard
somewhere that will shut it
off if I'm not paying attention. And so
similarly in in Atlas, if you were
having the agent do work in one of these
sensitive contexts like your email, uh
it wants you to be on that tab paying
attention. If you switch away, it's
going to stop. And then another thing
that we have is, like, if you've ever been in a machine shop and worked with a big lathe or something else,
there's usually like a big red button
somewhere that if it starts doing
something you don't want to do, it's
very clear. You hit the red button and
it stops. And so the agent has that,
too. Uh and so if you see it doing
something that you don't want it to do,
then you just hit stop. Um these are good tools to have, and I think they will help people get confidence that you're the one that's still in control of how it works. Yeah, I
think that makes a lot of sense. I think that this is a moment where it's new again, and we have the opportunity to revisit a lot of these sort of foundational primitives. I think
that sort of brings me to an interesting
question. It feels to me like there's an
opportunity for shifting the browser
experience further. But I'm curious: if you turn on the high beams, to use the car metaphor, what does that look like for you?
>> One of the things I thought was special
about the early web was the fact if you
go back to that time in the 1990s, you
think about how people got software, it
was to go to a store, you drive to a
store, you buy a box of a product with
some shrink wrap, and you take the discs
out and you install it. Uh, and then you
think about the web where you just click
on stuff and it comes up on your screen.
Like that was pretty magic. And then
that aspect of using the web where you
just go from site to site. What resonated with me was that it felt kind of like how my mind worked. Now if you fast forward to LLMs today, it is even more accessible: I can just talk to this thing and this thing will figure out kind of what to do. That's maybe like an idea of what the future
looks like where instead of having to
dig through a bunch of settings menus,
you can just tell the system what you
want from it and it will just figure out
how to do it. Uh, and then if I sort of
extrapolate from there, I actually think
like a lot of what people struggle with in their day-to-day lives is ambiguity. Um, like there's this thing that I want, but I don't quite know how to do it. When I first started using ChatGPT, it was kind of like a friend. It would say, "Oh, if
you want to do this, you should, you
know, think about taking one of these
three steps." For something that, you
know, is about betterment of yourself,
maybe, you know, you should go do those
things. But for a lot of things that are
more tactical, like it would be great if
your agent could just go off and do that
thing for you and report back on the
status of it. And so, I think, you know, maybe there's some version of the future where that happens. But even though I say that, I
also think that people will continue to
browse the web themselves because
there's stuff that as humans we want to
do. We want to be entertained. We want
to create things. Yeah, that's a really sort of rich area to dive into.
It felt like there were sort of two big
buckets that browser work falls into.
There's the delight bucket where you're
you're trying to learn, you're trying to
be curious, you're you want to be
surprised and then there's the oh gosh,
I don't want to do this and I would
prefer to avoid it and could someone
please take care of that part for me?
Mhm.
>> And as I was reflecting,
it's pretty easy to make the case from
an agent perspective that the not fun
part is something ideally in a perfect
world you'd want the agent to just go
and take care of for you. But the fun
part to imagine is how could an agent
also enrich that delightful side of
things.
>> Yeah, totally. Like I definitely think that trying to reduce toil is something that we definitely want to support. Browsers have been evolving for 25 to 30 years. What does it look like to take the next step in that process, and what is the direction, the trajectory, that is changing? And I find that super
fascinating because I think we are at an
inflection moment. Um, you know, when I
think about the early web, you know, a
lot of the focus back then was just, you
know, helping people understand those links that are out there and clicking on links and going from place
to place. We could go and find like
collections of links that people had an
opinion about if they were good or not.
As the web scaled that stopped working
and then you got search and then search
was transformative because it helped you
find like the little piece of
information or the website that you
wanted to go to. People started building
these rich apps and then as you point
out, yeah, inflection point, now we're in this new stage. We've made the way in which we can interact with this
technology just radically more human and
we can scale up the capabilities of the
platform and the platform will be able
to do things on your behalf. I think
this is really the third phase uh of the
web.
>> Yeah, it's super exciting.
>> There's some UX that we're exploring around it in a few different ways, but there are some compelling use cases. I think I ran into this before launch when I was trying to synthesize a single document from a bunch of different sources that were opened in tabs, and it wouldn't work. I don't think it's even a fixed number of tabs; it's like I wanted n tabs, basically, subject to context window size. Um but yeah,
>> We talked about ways of working and how the team is using Codex a little bit.
One of the things I've been hearing
really consistently from small
companies, large companies, individuals
is that it feels like roles are blurring
as we lean into LLMs more and more and
more. And I'm curious if you look at the
roles on your team and how they're
evolving as you guys work, you know.
Yeah. So um every engineer on our team is a product engineer, basically. This is someone that
like really thinks about the breadth of
the experience. Every engineer on
the team is empowered to to sort of own
the design and development of a feature.
And so all of our engineers are talking
to users uh reading feedback you know
figuring out how to integrate it into a
future update. Um we have a bunch of tools. I'm not super familiar with like all
of the tools that we use, but I know
that we use a bunch of, you know, LLMs
basically to help us sift through
feedback, get common themes, you know,
that sort of stuff. So, that will help us understand what the top pain points are that people are talking about at scale, you know, through our user support forums and stuff like that.
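The feedback-triage loop Ben sketches, using models to sift raw feedback into common themes, might look roughly like this. The classifier here is a trivial keyword stub standing in for an LLM call, and all theme names and feedback strings are invented for illustration.

```python
# Hypothetical sketch of LLM-assisted feedback triage: bucket raw user
# feedback into themes and surface the most common pain points. A real
# pipeline would call a model to classify each item; a keyword matcher
# stands in here so the example is self-contained.
from collections import Counter

THEME_KEYWORDS = {
    "performance": ["slow", "lag", "freeze"],
    "agent": ["agent", "automate"],
    "import": ["bookmark", "import", "migrate"],
}


def classify(feedback: str) -> str:
    """Assign a theme to one feedback item (stub for an LLM classifier)."""
    text = feedback.lower()
    for theme, words in THEME_KEYWORDS.items():
        if any(w in text for w in words):
            return theme
    return "other"


def top_pain_points(items: list, n: int = 2) -> list:
    """Count themes across all feedback and return the n most common."""
    counts = Counter(classify(item) for item in items)
    return [theme for theme, _ in counts.most_common(n)]


feedback = [
    "Startup feels slow on my laptop",
    "The agent automated my shopping list, amazing",
    "Pages freeze when I have many tabs",
    "Couldn't import bookmarks from Chrome",
]
print(top_pain_points(feedback))  # top themes by frequency
```

The interesting design choice is the split: cheap aggregation (counting) stays in ordinary code, while only the fuzzy classification step is delegated to a model.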
>> Yeah. No, it makes sense. If we pivot a little bit sort of back to the future: "I would like to see this breakthrough happen," or "I would like to see this technical challenge solved, and then we will unlock this new experience." What pops out to you as significant milestones in the next, call it, 18 months to two years, where you're really excited to see something unlock as we get to a particular capability?
I think people will get more used to this, more accustomed to this functionality, and so they will seek to do more with it. That is on the customer side, and then on the product side we will make that sort of magic more real and help you figure out the opportunities to leverage that magic. Um the capabilities
will continue to increase at a
breathtaking pace. And so what I want to
end up in is a place where
uh you can really give this tool fairly
ambiguous complex tasks and it will
break it down and figure out how to make
progress on your behalf. I used the word
toil before. You know, how do we get rid of some of that annoyance and make it just sort of more reliable, simple, trustworthy? These are things that we want to do.
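The shape Ben is pointing at, an agent that takes a fairly ambiguous goal, breaks it down, and makes progress on your behalf while the user keeps the big-red-button stop control from earlier in the conversation, can be sketched as follows. The planner is a stub; in a real system a model would produce and execute the steps, and every name here is illustrative.

```python
# Toy sketch of "give it an ambiguous task and it breaks it down",
# combined with a user-controlled stop switch. Not Atlas code: plan()
# is a stub where a real agent would ask a model to decompose the goal.
import threading


def plan(goal: str) -> list:
    """Stub planner: an LLM would decompose the goal into concrete steps."""
    return [f"research: {goal}", f"draft: {goal}", f"report: {goal}"]


def run_agent(goal: str, stop: threading.Event) -> list:
    """Execute planned steps until done or until the user hits stop."""
    completed = []
    for step in plan(goal):
        if stop.is_set():          # the user's "big red button"
            break
        completed.append(step)     # stand-in for actually doing the step
    return completed


stop = threading.Event()
print(run_agent("compare prices for running shoes", stop))
```

Checking the stop event between steps, rather than only at the start, is what makes the red button meaningful mid-task.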
>> One of the things we haven't talked about that I'd be curious for your take on: so much of our browsing happens on the phone. Are you guys thinking in terms of mobile? Yeah, that's another
request that's coming in a lot. Uh we're
trying to figure out the best way uh to
bring this functionality to mobile. I
think one of the observations that we
have is that the way people interact
with the web is a bit different on
mobile. Um, so on the desktop platform,
you know, the browser is like an
embedded operating system. You know, all
of your favorite apps for the most part
are in the browser. Whereas on mobile,
the mobile operating system itself is
kind of like the browser. And so people
tend to have, you know, relationships
directly with specific apps. Uh, but
then, you know, for the web, there's a
couple of different use cases. One is I
want to go and read a specific website
uh that I don't have an app for. So then
the browser form factor makes a lot
of sense. So I think we're just we're at
a stage where we're figuring out like
how we want to make browsing work uh on
on the mobile device given these
different use cases.
>> The way mobile works implies dramatically different usage patterns, and what it looks like from a small-screen perspective to have the chat assistant there alongside the browsing experience. These are super interesting challenges to get into.
>> Yeah, there's also a bunch of interesting stuff. I think voice is a very compelling modality, where if you load a page you can ask follow-up questions and have all of that work. Uh I think we just need to figure out the right way to build that.
>> And you know I would be remiss if I didn't dive into one of the more interesting features that you've launched with Atlas, and that is that you guys bring in the ChatGPT memory from previous ChatGPT conversations, and that is part of the browse experience. Uh,
I'd be curious for like the product
decisioning there and then how you how
you think about that as an asset to the
browsing experience and what that looks
like. Well, the chat memory feature of ChatGPT is like an incredibly powerful one. Uh, and it means that as you move from chat to chat you don't always have to start from zero; in some sense it makes it feel like it knows you a bit better in that way.
And that's a really interesting way in
which the browser becomes more useful
the more you use it.
>> One thing I always love to do as we sort
of bring this conversation to a little
bit of a close. If you could say one
thing that you feel like ah people
didn't quite get that piece of the
launch, I'd love to try and say it again
and emphasize it. What would that be for
you for Atlas?
>> So for Atlas, I think that it is a familiar tool but with this amazing new set of capabilities.
>> Uh and so I encourage you to go and try
it out for a whole bunch of different
things. I would say even challenge yourself to ask more questions of it, even if you were thinking, oh, I don't need to ask that; just try it out and see what it does. I would say this is the beginning of a journey for us to build this type of app. We push a new build every week, and so as we hear more feedback from you, as you try it out and do a lot of different interesting things, we will make it better and better.
>> Well, thank you, Ben. Thank you for coming and chatting.
>> Yeah, it's been great.
>> Awesome, I appreciate it.