A Billion Software Engineers by 2027
Key Points
- Experts on the show predict a surge to roughly a billion software engineers by 2027, driven by widespread code‑assistant tools and the rise of “silicon” (AI) coders alongside humans.
- GitHub’s recent blog data shows a notable increase in developer numbers, especially as AI‑powered assistants like Copilot make coding more accessible.
- Python’s explosive growth is highlighted as a key factor, spurred by its dominance in data‑science and machine‑learning projects.
- The panel sees this trend as a democratization of programming—everyone, from hobbyists using Scratch‑style platforms to professionals, will be able to write code without traditional training.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=V6vxTXrDCrA](https://www.youtube.com/watch?v=V6vxTXrDCrA)
**Duration:** 00:39:41
Sections
- [00:00:00](https://www.youtube.com/watch?v=V6vxTXrDCrA&t=0s) **AI's Impact on Future Engineers** - A panel of experts debates whether AI will boost or reduce the number of software engineers, referencing GitHub data showing rising developer counts from tools like Copilot and Python's surge in data-science use.
does the rise of AI mean that there will
be more or fewer software engineers in
the future Chris Hay is a distinguished
engineer and CTO for customer
transformation Chris welcome to the show
what do you think a billion software
Engineers by 2027 wow 2027 okay uh
Shobhit Varshney is a senior partner consulting on
AI for US Canada and Latin America
uh what's your thought everybody will go
from becoming a programmer to being a
pro at grammar I will ask you to explain
that more in just a moment Kaoutar El Maghraoui
is a principal research scientist and
manager at the AI Hardware Center uh Kaoutar
welcome um what do you think I think
it's going to be a different breed of
software Engineers that we will be
seeing all that and more on today's
mixture of
[Music]
experts I'm Tim Hwang and welcome to
Mixture of Experts each week MoE brings you
the analysis debate and banter that you
need to stay ahead of the biggest
developments in artificial intelligence
today we're going to cover AI for cyber
security and the launch of SearchGPT
but first let's talk about software
engineering um there's a fascinating uh
blog post that came out from GitHub uh
the other week um basically reporting
out some data that from their perch
GitHub's reporting that there appears to
be a rising number of developers um uh
driven largely by tools like Copilot um
and second they also point out that
Python is uh becoming a really
really popular language driven largely
by data science and machine learning
applications and this is super
interesting to me and this is one of the
reasons I wanted to bring it up as our
first uh story of the day which is had
you asked me I would have said look
where code assistants are going we're going
to eventually just replace all the
software engineers there's going to be
no more software engineers in
about a decade um and maybe Chris I'll
toss it to you first because your
prediction is that if anything we're gonna
have way way more software engineers in
I think 2027 right so literally like 24
months from now um why do you think that
I think if for two reasons number one is
with code assistants being everywhere and
with things like ChatGPT large language
models pretty much in the everyday
person's hands everybody can become a
coder so you don't need to go and pay
money to go and get somebody to do that
you can literally have a go yourself and
I think that is just going to open up
this sort of democratization of coding
that we've all kind of hoped for and I
think more tools will come in like uh
you remember Scratch from kind of MIT
then I think we're going to see more of
that style side of things and everybody
is going to become a coder the other one
is you didn't say in your question Tim
whether they had to be humans did you so
the carbons and the silicons and there's
going to be a whole bunch of silicon
coders to match us carbons so uh when I
multiply that up by 2027 there's going
to be a billion buddy okay all right
that's really interesting yeah I guess
kind of what we're talking a little bit
about is almost like that the I guess
the question is whether or not like the
the the job coder or the category code
software engineer is really going to
make sense in the future like it almost
feels like no one's like Oh I'm a word
processor right like kind of everybody
knows how to write um Shobhit I know your
response seemed to kind of suggest that
you think some of the some of the skills
you'll need are going to have to change
yes absolutely I think uh all of us will
become pro at uh writing good grammar
and the way you ask a question and how
you describe what you want to get done
uh a good technical PM does a
really good job at explaining what
exactly they need so that the developer
can go and execute the code to the
vision of what the PM had right so I
think that's going that's going to shift
quite a bit let me just spend a minute
on just appreciating how far GitHub has
come we just you just referred to their
annual report that talks about the
GitHub all the numbers uh last week we
were at their big GitHub Universe
uh event and this is where we as IBM
sponsored that as well just to give you
a sense of how far they have come GitHub
is the world's biggest repository like
90% plus of all Fortune companies use
it 98% of developers whatnot we're at
about what 100 million developers plus
on GitHub today Chris not quite at the
billion that you want there to be but in
the last like 9 10 years now this 10th
year they've been running this they've
had like what close to 70
million GitHub issues people have solved
like almost 200 million GitHub pull requests
like 300 million plus projects and
whatnot right the way I look at it open
source is is the biggest team sport on
Earth it's not soccer it's not football
it's open source as the biggest team
sport right it has been crazy growing so
when you hear from from Tom the CEO of
of GitHub they're giving you actual
stats of what they're seeing with people
developing more and more and he's very
right to say that AI has lowered the
threshold of creating code of
engaging with GitHub repositories and
trying it out downloading it
contributing back to it IBM has been a
big proponent of having a very open
community had a really good relationship
with GitHub and now that the GitHub is
opening up quite a bit it has Claude
models and Google models that can be
leveraged in addition to all the OpenAI
models I think this is just an
Unstoppable Force right now in the
industry and more and more programmers
will have access to tools that we just
could not imagine we had a couple years
back yeah I think one of the most
interesting things in the report that
they did was also that it seems like the
geography of software engineering is
changing right that there's like a lot
more coders from they're seeing from the
global South come online on GitHub um I
guess sh do you think that's related to
code assistant or I'm kind of curious
about how you feel see like the role of
these assistants in even potentially
kind of like broadening like the
geographic scope of of who gets to be a
software engineer yes so I I spent a lot
of time with Latin America clients as
well u in Americas and I see a lot of
centers developing where all of a sudden
the threshold of being able to have
economic benefit in the region has just
plummeted so people can go create code
and go contribute to to other other
locations other countries and
increasingly so a lot of my clients are
starting to build their Latin American
presence the time zone helps in the US
as well but just the access to tools and
being able to create in every language
right now now I have an opportunity to
know Portuguese and Chile and be able to
code and get some assistance in
Portuguese while I'm creating code right
that did not exist earlier so the
barriers have come down significantly
and you see a a higher threshold this is
one additional thing I would I would add
to this we should not we should also
look at the way energy uh movements
happen across the the world right if you
look at countries like Chile or Latin
America there's a lot of energy that's
been created there and you want the AI
models to be trained closer to where
there's energy because energy
consumption is going to be so much I
would anticipate more pull towards Latin
America or centers where there's energy
production in Surplus it used to take a
lot to move that energy from Latin
America to say serve customers in the US
now the AI models will be will be
created closer to where the energy
sources are Kaoutar I want to kind of
turn to you is you know building on what
chit just talked a little bit about is
that um you know I think when you
responded to the opening question you
said well it's going to be more about
like asking the right questions um and I
think that's like one interesting is
like item here to kind of pull on one
thread to pull on is maybe actually in
the future it's actually we're going to
have a lot more technical PMS than we
really will have software Engineers
because it feels like the role that
people are increasingly having is
they're kind of managing this agent that
does the coding not really doing
software engineering themselves and I
guess kind of I'm curious if like the
right way to think about this actually
is we're going to just have a lot more
PMS in the future yeah I see of course
you know the the skills will be changing
shifting and this for example Copilot
what it's doing is demystifying coding for
people without uh formal training uh
turning more people into kind of citizen
developers so this means that
professionals from diverse Fields such
as data analysis design Finance uh
Healthcare Etc can now use code to build
custom tools without extensive studying
or training or syntax Etc and this kind
of heading towards a world where basic
coding becomes as common as using
spreadsheets or even presentation
software so just learning it's kind of
they're trying to be prompt Engineers
also but specifically designing good
prompt for software engineering so I
think it's also time to start
reimagining what's the right developer
workflows here for experienced coders AI
can handle repetitive tasks letting them
focus on higher order problem solving
for example and this might alter the
skills expected in software developments
with these with coding transitioning
more from syntax heavy work to strategic
thinking and Architectural design so I
think those really would be good skills
to start acquiring not really focusing
on the syntax but more on how do you
build systems how do you design systems
how do you put them together and then
using the Copilots to help do the syntax
work I think this also could have
implications even for Education right
now curriculums they focus a lot on
syntax and uh so if AI can assist with
coding should really Educational Systems
shift from focusing on syntax to broader
problem solving or even collaborative
design especially as mentioned open
source it is actually the the biggest
team sport right now so I think
acquiring those skills how do you
collaborate you know do all these things
uh do PRS and learn how to work in a
team going to become a really important
skills in the future so I think the
traditional computer science curriculum
really need to adapt emphasizing
creativity ethical coding practices
Advanced debugging and also
collaborative coding yeah for sure and I
think Chris I mean it kind of puts a
tough question to you I mean your title
is distinguished engineer uh so you
spent a lot of time getting really good
at the software stuff right um
but uh you know I think if a kid
approached me today and say should I be
a software engineer should I just tell
them not to like it kind of feels like
where we're headed is like is there any
more value in actually learning how to
code anymore right I think is the
question I want to put to you no they
should go and play soccer ball or
something like that you know yeah yeah
no no I'm no I
think so I think the question I would
say is what happens when it goes wrong
right so if if we really think about
history of software programming right
it's you're kind of back in the kind of
the Punch Cards and the ones and zeros
and then the assembly language came
across and then you know and then C I
mean there was a whole bunch of other
languages Fortran Etc but then it really
kind of took off I would say from the
kind of C onwards which was which is
very close to assembly language and then
the abstractions got higher up and now
we're at python Etc and then you know
now got rust blah blah blah so the
number of languages are increasing but
it's abstraction layer after abstraction
layer after abstraction layer we've went
from Hardcore kind of Punch Cards to
assembly to lowlevel languages to
garbage collected languages to higher
level languages blah blah blah blah and
again all I would say that's happening
here is we're moving to another level of
abstraction and that level of
abstraction is natural language um I
think it will be better because with
agents we'll have tools Etc but you're
still going to want to know the
fundamentals because what happens when
you get a bug and and it can't fix it
are you are you going to be like the
Homer Simpson you're just going to be
hitting the keyboard G try again try
again try again or or you're going to
have to go oh my God I'm going to have
to I'm going to have to use my brain how
dare you make me use my brain so I think
I think the fundamentals are still going
to be there I see this becoming a higher
level of abstraction now don't get me
wrong if the models become good enough
at some point then there may be a
different abstraction where models may
have their more native language Etc and
that that's a whole different discussion
but I think I I see this as an
abstraction because we need we need to
explainability we need the reasoning
somebody's going to have to maintain
this and look at it and you can't be
fully dependent on the AI um I do want
to address one thing though Tim on that
GitHub report like Python and we
mentioned python there being we didn't
talk about that aspect all that much so
yeah yeah python being the most popular
language I just want to point out one
thing right and I love all languages I
love python but when number two and
three are typescript and JavaScript
which are effectively the same language
my friends and more JavaScript like
people are becoming typescript people
you know if you add the two things
together who's number one again I mean
yeah I had the same reaction I mean I'm
I am a python die hard but I do feel
like that was a little a little bit
funny in the counting if I might add I
think there are also some risks here uh
there are potential risk for AI created
code especially as more code is
generated by AI quality control becomes
a concern here how do we ensure AI
generated code is secure efficient
maintainable so there is also the risk
of overreliance On Tools like co-pilot
which could lead to a drop in
fundamental coding skills among you know
the new programmers so of course you
know there are lots of advantages here
in terms of
democratizing having more developers uh
lowering the bar of entry and things
like that but we we shouldn't also
ignore the risks that will come with
this especially around quality assurance
control ethical consideration security
and also when things fail so can we
ensure that we have skilled programmers
or people like Chris Hay mentioned that
can debug and figure out what's going wrong
or we will have less skilled people in
those fields so what's the right balance
here yeah I think it's always going to
be this tricky balance between kind of
you know democratizing making it
accessible making it usable and then
kind of like the Reliance on these
abstractions um my mom who was like a a
coder when she was before her retirement
has a story about like in her early days
like carrying a bunch of Punch Cards to
the computer and then like dropping the
Punch Cards everywhere and it basically
like and her having a good enough sense
of the program to basically reassemble
the program like physically by the cards
and I was like that is like a level of
diligence that like modern Engineers
just would not be able to accomplish so
but obviously we are happy that we've
moved past the punch card era for sure
I'm going to move us on to our next
topic uh there was a great and very
interesting story that kind of follows
on I would say a sequence of stories
we've had on MoE for the last few weeks
which is thinking a little bit about the
application of AI and specifically kind
of agents to the computer security space
um so Google did a blog post from their
security project project zero that
basically reported that they have a
cyber security agent called Big Sleep um
that was able to find a vulnerability
in SQLite which if you're not
familiar is one of the most widely used
kind of database engines out there and
this is a really interesting story
because at least by their accounting
this is kind of one of the first
instances in which an agent was able to
find sort of a genuine vulnerability in
the wild in a code base that is kind of
like widely used and so in some ways
it's almost kind of like a a real kind
of hello world demonstration that we
might one day be able to use these
agents for uh identifying um real world
vulnerabilities and making our systems
um safer and so I guess maybe Chris I'll
I'll kick it to you to kind of kick us
off on this topic but you know I think
the first thing I think a little bit
about is is this the beginning of just
kind of a new era like we will just
start to see agents play a bigger and
bigger role in making systems more
robust is this still kind of in the
realm you think of like the toy project
right like we're still going to be a few
years off before we we live in that
world no I I think we're already in that
world I and there's a couple of things
about the big sleep thing the first
thing is if you give agents access to
tools and then you get them to follow
patterns um then the agents are going to
do a pretty good job so if you think of
cyber security you know go fix me this
bug go identify this pattern go find me
what ports are open and on a firewall
these are all things that agents can do
today now if we look at the big sleep
one and I and I do want to caution this
because when I read the paper there the
thing that they did is they took an
existing vulnerability that existed on
on that code base and then they got the
agent to go search the PRs and say hey
go find me another vulnerability of this
style that matches this pattern um that
wouldn't have been patched yet and then it
went and found that so as much as it's
like by my understanding as much as um
the agent discovered a vulnerability on
its own at the same time it's kind of
pattern matching and was prompted and
directed to go and find a bug of that
similarity and and that is completely
within today's technology um you know
agents and models are really good at
pattern matching and if you give them
access to a large enough codebase via tools
Etc access to PRs and the commits
they're they're going to be able to do
that um are they quite at the stage of
being able to find a whole new class of
vulnerability that is completely
undiscovered and not prompt and
patterned in itself I don't know yet I
think we're maybe a little bit off that
but I don't think we're too far away
from it yeah pretty interesting Kaoutar
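Chris's description of what Big Sleep did, taking a known vulnerability and hunting for other code "of this style that matches this pattern," can be caricatured in a few lines without any LLM at all. The sketch below is purely illustrative and assumes everything in it: the regex, the file contents, and the `find_variants` helper are hypothetical, and the real Big Sleep drives a language model over the actual target rather than a regex over strings.

```python
import re

# Toy sketch of "variant finding": given one known vulnerable code shape,
# scan the rest of a codebase for code of the same shape. A regex stands
# in here for the LLM's pattern matching, purely for illustration.
KNOWN_VULNERABLE = re.compile(r"memcpy\(\s*\w+\s*,\s*\w+\s*,\s*strlen\(")

def find_variants(files):
    """Return (filename, line_number) pairs matching the known pattern."""
    hits = []
    for name, source in files.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            if KNOWN_VULNERABLE.search(line):
                hits.append((name, lineno))
    return hits

# Hypothetical two-file "codebase": one unbounded copy, one bounded copy.
codebase = {
    "util.c": "void f(char *d, char *s) { memcpy(d, s, strlen(s) + 1); }",
    "safe.c": 'void g(char *d, char *s) { snprintf(d, 64, "%s", s); }',
}
print(find_variants(codebase))  # [('util.c', 1)]
```

The gap Chris points at is exactly what this sketch cannot do: it will never flag a vulnerability class that nobody encoded as a pattern first.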
maybe uh to bring it to you next I mean
because you think a little bit about the
kind of risks around all these
Technologies uh you know it seems to me
right like that like you're going to use
this for security but also like the bad
guys will get access to these agents as
well and it seems like very
straightforward to be like I I have this
vulnerability find it elsewhere in this
code base is also exactly the kind of
same thing you need to do if you were
going to sort of harm these systems um
curious about how you see that kind of
cat and mouse game playing out like does
the defense have the advantage right now
do you think the the offense is
eventually going to have the advantage
kind of just what that balance looks
like um as these systems become more
sophisticated yeah that's a very good
point of course as big sleep or other
similar system they're strengthening
defense with AI agents so they're
revolutionizing vulnerability testing
allowing continuous autonomous scanning
that adapts to new threats and this this
is especially beneficial in complex
systems or complex environments like for
example Cloud infrastructures where
all this manual monitoring
is very inefficient
and security teams could be empowered
and act faster on these emerging
vulnerabilities and reducing the attack
window however at the same time there's
also this threat of offensive AI so AI-driven
security tools can also be a
weapon in the wrong hands just as
Defenders can use AI to preemptively
catch vulnerabilities attackers could
also use similar tools to identify
exploits at scale so this creates this
potential AI like he said arms race in
cyber security where the line between
defense and offense is very thin yeah I
think what's so interesting about it is
it also suggests kind of eventually
we're going to see a whole kind of dark
criminal ecosystem which kind of M
mirrors the the kind of one that we have
publicly like that there will be
basically like a a criminal Lambda Labs
right where you can like kind of run all
these agents um you know completely free
and and for criminal purposes um and
it'll be really interesting to see how
that kind of ecosystem evolves because
you know people who want to use these
agents for bad purposes will sort of
need the same infrastructure that you
know the the people doing cyber security
are are engaged in yeah so I think
that's why maybe some ethical and
regulatory challenges here will
need to be resolved you know with this
rapid development of AI-based security
there is this call for frameworks to
ensure also responsible use how do
you protect these infrastructures and
tools uh so government for example and
government uh cyber cyber Security
Experts they need to be tasked with
creating also ethical guidelines and
regulations to balance the benefits of
things like big big sleep with its
potential misuse also yeah um let me
give you a client U perspective on this
we do a lot of work with our clients on
cyber security we have a whole Security
Services team within IBM Consulting it's
been doing an exceptional job with
clients we also partner very heavily
with our partners like Palo Alto to do a
lot of cyber security work with them and
we leveraging generative AI models and
AI models quite heavily in that
partnership as well U there it's a
two-way street it is AI helping drive
better security and as the reverse how
do you secure the AI models themselves
right if you look at the three different
steps that our clients go through the
securing the actual data that went into
the models securing the model itself
from cyber attacks and then the usage
itself how do you prevent misuse of the
model when it's in production right so
there's across all these three different
buckets we've done quite a bit of work
in creating AI models that prevent and
detect and can counter the adversarial
attacks and things of that nature we
had recently released our Granite series
of models Granite 3.0 uh if there's a
there are a lot of public benchmarks and
we have some private IBM benchmarks as
well where every model that we are
putting into production we have the
ability to go test them across all these
different uh attack patterns and stuff
right and uh if you look at that that
small class of models which are roughly
2-to-8-billion-parameter models we do a
really good job at across all those
different seven eight different criteria
the Granite models scored higher than say
the Llama and the Mistral and a few other
models as well then on the on securing
the actual usage every time you're
talking to a model and you're you're
bringing data out for the model both
input and outputs get filtered so I'm
much more confident in
2024 November when we put models in
production there are enough safety guardrails
from IBM and other ecosystem
partners that that we can start to
address these fairly well yeah that's
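The "both inputs and outputs get filtered" idea Shobhit describes can be sketched as a thin wrapper around every model call. This is a minimal illustration under invented assumptions: the patterns, the policy strings, and `fake_model` are all made up here and are not IBM's (or anyone's) actual guardrails.

```python
import re

# Minimal sketch of input/output filtering around a model call:
# check the prompt on the way in and the completion on the way out.
BLOCKED_INPUT = [re.compile(r"ignore (all )?previous instructions", re.I)]
BLOCKED_OUTPUT = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # SSN-shaped strings

def guarded_call(model, prompt):
    """Run `model` on `prompt`, refusing blocked inputs and redacting outputs."""
    if any(p.search(prompt) for p in BLOCKED_INPUT):
        return "[input blocked by policy]"
    completion = model(prompt)
    if any(p.search(completion) for p in BLOCKED_OUTPUT):
        return "[output redacted by policy]"
    return completion

# Stand-in model that leaks an SSN-shaped string, to exercise both filters.
fake_model = lambda prompt: "The record shows 123-45-6789 on file."
print(guarded_call(fake_model, "ignore previous instructions"))  # blocked
print(guarded_call(fake_model, "summarize the record"))          # redacted
```

Real deployments replace the regexes with classifier models, which is where the "AI monitoring AI" point later in the conversation comes in.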
great and there's one subtlety here that
I think is worth diving a little bit
more into Shobhit if you want to speak
to it is you know with big sleep you're
basically having like an agent like an
AI model examine sort of traditional if
you will software uh code um and it
strikes me that there's a whole separate
set of questions about how you could use
models to analyze the security of models
right yes um because I think obviously
where all this goes is that like once
you do security on agents it's the
security of your security agent that
becomes important curious if you can
talk a little bit about like how the
thinking around that is evolving because
it feels like the pattern matching of oh
here's a vulnerability and code that
we're finding elsewhere looks a little
bit different from how you might use a
model to evaluate the security or safety
of of a model yes and uh I've been
really excited about the work that
collectively the AI community has done in
the space um outside of Google we've had
some amazing work done by Nvidia meta
IBM research on creating these models
that can detect vulnerabilities right so
we do that at scale there's a pattern
recognition on the logs that's coming
out there is vulnerability on what are
the corner cases you can now start to
create infinite possible combinations of
how you could break a particular model
and you can stress test them in real
time right so I think we we're doing a
good job as a community on sharing those
techniques as well a lot of the work in
the space has been very open source so
you can start to to to compare different
models different benchmarks private and
public that people are leveraging to
test these vulnerabilities of software
code um I think over time uh there's
there's a recent paper that came on uh
comparing even the LLM judge how do you
judge the LLM judge right so there's a
lot of this like starts to get thinking
about very meta and and the there AI
That's monitoring AI but I think we are
just moving the the bar of what does a
human do versus what does an AI do so if
you think about the way we uh employ
people into our organizations we would
have somebody who's a graduate from an
amazing school with multiple degrees
just like a really nice LLM and we're
giving them some few-shot learning some
examples during training saying that
here's how we do this thing in our
company then you'll give them access to
all the other vulnerabilities and all
the other things right they are in real
time reading up on a new vulnerability
that happens in a particular environment
and then trying to think how will that
impact their own code so we're starting
to to Crunch through some of those steps
that a human would have done and if you
think about this as bring a new graduate
hire from an institution like MIT or
Stanford into your organization for
cyber security that's the exact same
pattern that we are following with LLMs
as well yeah that kind of human metaphor
of how we train cyber Security Experts
uh and applying that to the model is is
interesting and I think lands on maybe
the final question I had for this
segment which is Chris if I can ask you
to make another wild prediction for this
episode um is you know it feels like the
threshold the badge of honor if you're a
security person is like you you
disclosed a really novel kind of exploit
at Defcon um and I guess I'm kind of
curious if like you think that like
agents will eventually pull that off and
if so you have an over-under on like the
year is it is it 2027 when we're going
to have a billion engineers or how far
off is that in 2028
this is my prediction AI agents will
reveal the first human vulnerability in
code and therefore they will say this
person here is a human vulnerability and
they're doing bad things so that's my
prediction 2028 it's going to be the
other way around AI agents predicting
human vulnerabilities interesting yeah I
would love if the agent finds out a way
like this is the new method for social
engineering would be actually in some
ways like very perfect
I think also what's going to be
interesting is as AI finds our security
flaws faster than ever the real question
is who's quicker Defenders patching them
or attackers ready to exploit them no
that' be really funny to see and the
human vulnerability part Chris you just
mentioned we're doing this for one of
the big Latin American Banks right now
where we're leveraging some social
engineering techniques and stuff the
emails that you create for social
engineering attacks is just looks so
plausible right LLMs are really good at
creating convincing content and you can
trick and like clickbait
people to go uh into a
rabbit hole that's working out really
well but it's really nice some of our
clients are saying that hey U I'm not
quite sure about putting AI in
production our security teams won't give
us the green check let's go pilot LLMs
for security team first if they're
convinced and they put it to production
then they don't have an excuse to to
bottleneck the rest of the organizations
it's been a good good method working
with lawyers and cyber security teams in
these large organizations yeah it's
going to be so hard when you like try to
log into work and it's like you've been
locked out cuz you're just too
gullible like we've assessed that you
can't make it here just like okay it's
coming 2028 you heard it here
[Music]
first for our final segment I want to
talk a little bit about SearchGPT so it
goes without saying that OpenAI is the
the heavy in the industry the the big
leader everybody's been waiting on their
features and what they release and one
thing that everybody's been waiting on
for a long time is for them to finally
get into to the search space um and long
anticipated but it finally launched um
and now OpenAI now has a SearchGPT
feature um and this enters a market
that's been kind of dominated and
competed over by you know companies like
perplexity and of course you know Google
uh through Gemini really wants to get
into this space as well and so this is a
big move right the the big industry
leader has finally kind of put its
marker down for what it wants to do in
search and I know Shobhit you looked into
this you know the question I always come
to it is like does this mean that
perplexity is doomed like is everybody
doomed now that OpenAI is in the space
um and kind of curious about what you
think the effect on the Market's going
to look like so I I recently posted on
LinkedIn saying that uh after I've I've
had access to GPT search for a while uh I
I pay for a whole I'm very gullible in
paying 20 bucks a month to try out all
kinds of AI so I've been I've been a
paid subscriber for a while and I was
lucky enough to get access to it uh I was
comparing it the closest competitor
would be something like Gemini search
right and then it'll be things like
perplexity right so I think if I did a
side by side comparison I have like 13
different areas of topics that I
compared GT search versus uh Google
Gemini and overall I don't think I'm
going to be switching my search from
perplexity and Google and Gemini over to
uh gbt search quite yet and there are a
few things that I found when I was
comparing them one by one just to just
summarize this have a whole article
giving you visual side by sides but
Google generally is a lot more visual
they've learned learned from years of ux
what's the best way to represent the
information for the user right so for
example if you're suggesting restaurants
if I ask gbt search to find restaurants
in a particular location versus Google
Gemini Google Gemini understands that
it's logical to put a map and pinpoint
all the restaurants in the response that
I'm giving you right so itance the right
ux and people would want to go interact
with the graphic and see which one is
closest and so on so forth right
Similarly, if you're talking about weather, it makes sense, and for the last few years Google has had a really nice display at the very top that tells you exactly what I'm looking for, right? The one thing that GPT still needs to address is that they have a proliferation of different capabilities that are not quite combined into a single UI yet. So as an example, when I switch over to web search, I lose the ability to upload any content: I can't give it any attachments, I can't use any function calling, things of that nature that I'm very used to when I'm using my o1-preview or my 4o, right? Versus in the Gemini world, Google Gemini figures out what I'm looking for. So the simplest example would be: if I'm standing in front of a monument or some landmark, I take a picture and I say, can you find me restaurants around this? Now obviously Google Gemini will identify the place with very high accuracy, it will give me nice recommendations, and it'll help me fine-tune them. ChatGPT's GPT search cannot take attachments, so it can't take any imagery; it can't do things like, if I give you a document and say, here are the people that I'm looking for, go on LinkedIn and scrape something for them. It can't act; it doesn't have access to function calling; I can't give it documents. So there are certain things that are absolutely missing on the GPT side. I think the last piece that is going for Gemini, which is still why I favor Gemini and Google, is the connection to your personal data. I've been a big Google user; my email address is Shobhit with the Gmail, I got that when they were starting at the very, very beginning. So all my data, my photos, my calendars, and stuff like that are inside of Gmail, so when I ask, hey, can I find restaurants near the hotel I'm staying in in Mexico, it'll be able to go find that really quick. It's very, very personalized; with my permission, of course, it can go and look into my emails and things of that nature. That has a
huge value add to me. Yeah, that's so interesting. I think maybe one way of thinking about this competition is, how much is search about the form of the results versus the substance of the results, right? Which is kind of, Shobhit, what you're saying: when you ask for a restaurant, it's great to have the map and the pins and all the stuff that Google has indexed, even though the response might be less conversationally well flavored than what you might get out of Perplexity or something like that. Just to counter maybe some of the arguments mentioned: of course having that personalization is so important, having access to all of that, and I think Google has perfected many of these features given its long history with search. But don't you see, as GPT is acquiring more multimodality features, and as more people are using ChatGPT or SearchGPT, that personalization will come along? They'll acquire more personal data, they can customize things as well, so I think it's maybe just a catch-up game here. One thing also that I find nice in SearchGPT that I still don't see in Google search is that interactive nature; basically, it's more conversational search. So unlike traditional search, where they give you a bunch of links that you have to click through, this is making search more intuitive, particularly for complex queries or ongoing projects; users might no longer need to click through a list of links as the model delivers
synthesized responses. So, Kaoutar, I will push back on that a bit, if I may. I think it's unfair, it's apples to oranges, if you're comparing GPT search with the classic Google search; the right comparison would be Gemini search with Google, right? So Google's Gemini is multimodal: like I said earlier, I can take pictures and things of that nature. It is personalized: it can tap into your Google, your Gmail and stuff if needed; I can give it images and so on and so forth. So Google understands, acknowledges that the blue-link world is dying. Their Gemini Google search, I think, is an incredible product; it works really, really well, and they're trying their best to make sure that, within the conservative boundaries of what they can do being such a large company, they're hyper-personalizing, adding multimodality and things of that nature, looking at very, very long videos and summarizing them, things like that. I think they have a very good moat. But the true comparison is not Google search blue links with GPT search; a lot of people in the media are comparing the two together, and I feel that it's
unfair to Google. I agree with you, it's not a fair comparison. Yes, and I think the question here is, are we moving towards this one-model-to-rule-them-all scenario for search, or is it going to be a competition? But we always had one model to rule them all with Google, because they had such a massive 95%-plus market share, right? So is that shifting to Google's Gemini, or will OpenAI now have a place as it's also improving its search capabilities? So I think OpenAI is going to win this one out, but maybe not for the reasons you think. My experience with ChatGPT with search, in this case, is that it works as a true extension to the conversation I was having anyway. So maybe I'm looking at a particular paper on something and I want something more up to date; before, without access to the internet there, it's only going to come back with a limited amount of information. With ChatGPT with search, it extends out: it takes its knowledge plus the knowledge that it's got from the internet and then starts to give me back better answers, and for me that is the game-changing part. I just found myself using ChatGPT with search more naturally than I did before; rather than reaching out for Google to go answer that question and then messing around, I'm just doing it within the conversation. Now, if I then bring that in with the o1 capabilities as those start to get released, and as they start to combine the modalities (the fact is, OpenAI's been leading on the modalities on this for a while; they're ahead of the game with the o1 models, etc., making it more agentic), when they bring all that together, I think Google's got a lot of work to do. Are they going to go after true search, etc.? No. But if this is a comparison between Gemini and the o1 models with search capabilities and tools as it stands today, I think OpenAI is winning that one, and I feel that today from the experience I'm having. And the fact is, there are millions of people using ChatGPT today, and there's maybe 12 people using Gemini today. So I think
that's where, that's my feeling. Yeah, I think there's a very interesting question here, a little bit of a debate over what we think the commodity asset is and what we think is the irreplaceable asset, or the hard-to-duplicate asset. I don't want to put words in your mouth, Shobhit, but your position seems to be that all of this data, all of this incumbent advantage, is the hard-to-replace thing, and I think what Chris is saying is, well, actually getting the data is not the hard part; it's this additional analysis layer which is going to be the really unique differentiator. I don't know, maybe that's the right way to frame it. So there's no doubt that Google is under a lot of pressure; Perplexity has just shown how well they work, and I've been a Pro user for a very long time, amazing work. So yeah, generally speaking, yes, they have a lot of pressure on getting this right; it's a hundred-billion-dollar problem for them to solve, so they're putting everything that they can behind it. They had to make sure they nail the conversational search part and make it more personalized. I think the things that are going in favor of Google are the fact that they have the world's data to train on, in YouTube and search, and they have decades of the patterns that people follow to get to the right answer when they're planning a trip, things of that nature. They do have a lot that they can tap into that other competitors like OpenAI do not have access to today, right? So
over time they'll try to catch up with each other, and Google will always have a lot of fire behind them to go fix this, to get this right. But the fact that my personal data is accessible to Google, I think that may change at some point, but in the current state it is more relevant for me to have an answer that's hyper-personalized to me and the way I do things. The fact that I'm asking you to set an itinerary in Italy: it should know that I'm landing at 2 p.m. and not start my itinerary at 6:00 a.m., right? That fundamental part of me having to tell a model, guys, just understand what is important to me first, and you know that the airport is X hours away, so take all of that information into consideration. And I'm thinking about this from a very enterprise perspective as well: for us, our clients are more focused on, I have all this repository of manufacturing documents and warranty documents and stuff like that, and then I have all of the other data sets; I need to be able to search against those with high accuracy, with the same experience I'm getting with ChatGPT search or with Gemini. I need to bring that to my employees to unlock the value. And it's really nice to see that Meta is starting to get into this game as well; there's been a lot of rumor over the week about Meta coming up with its own search, because now they're incrementally making progress towards that space as well. So I'm really excited about the future of what happens with getting information shown to me in the moment that I need it, hyper-personalized to the way I consume information and what's in my emails,
things of that nature. And I agree with that, Shobhit, but you know what, I don't want Google having exclusive access to my information, right? Do you know what I actually want? An open ecosystem and marketplace where I can plug in the agents: here, go access my Gmail, go access this, etc. As opposed to, well, Google's already got this information, and it can train its models and do whatever it wants with my data, and nobody else can play in the system. So an open ecosystem is where I am; yes, I agree, but it's got to be open. Yeah, there is this potential for a centralized AI search model to emerge, potentially monopolizing search. While this could bring consistency and ease of use, it also risks creating this information bottleneck, so I definitely agree with Chris that having an open system would be better, because if one model provides all search answers, it might centralize information flow, reduce the diversity of information sources, and also shape public knowledge in ways we really don't yet understand.
Great, well, that's all the time we have for today. It's great, Shobhit, that you mentioned that Meta thing, because that was the other part I wanted to get into, so we will definitely have that on a future episode of Mixture of Experts, but unfortunately we are out of time today. Thank you for joining us; if you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. Shobhit, Kaoutar, Chris, thanks as always; appreciate you joining us.