Developers Debate AI's Real Intelligence
Key Points
- Developers see generative AI more as a helpful “librarian” that retrieves and assembles information rather than a truly intelligent system.
- JJ emphasizes that current AI lacks logic or reasoning, operating like predictive‑text by selecting the next most likely word from large datasets.
- Because it can query natural‑language inputs against a company’s documentation, AI is best used as a specialized tool for accessing and summarizing existing knowledge bases.
- The term “artificial intelligence” is considered overloaded and misleading; calling it “AI” creates confusion about its actual capabilities.
- While the broader conversation hinted at concerns about AI taking jobs, the focus remains on positioning AI as an augmenting tool rather than a replacement for developers.
Sections
- Developers Debate AI’s True Role - In a conversation with IBM’s developer advocate JJ Asghar, the discussion reveals that many developers see generative AI as a useful, but limited “librarian‑type” tool rather than a truly intelligent system, and they grapple with its impact on their work rather than fearing it will replace them.
- Navigating Generative AI Libraries - The speaker likens generative AI to an overwhelming library, explaining how to choose and evaluate the ever‑growing array of models while emphasizing the importance of sovereign, locally‑run solutions like IBM Granite to keep proprietary data secure.
- Open Source vs Closed Source - The speaker explains open‑source development, contrasts it with proprietary “cathedral” models, and argues that multiple eyes improve security, citing recent incidents.
- Developers Prioritize AI Ethics & Transparency - The speaker stresses that developers value ethical governance and transparent AI models—exemplified by the Granite initiative—while reassuring listeners that AI will not replace their jobs.
- Developers: Embrace AI Beyond Kubernetes - After thanking a non‑technical audience member, the speaker urges modern developers to prioritize learning and thoughtfully integrating AI tools—rather than merely slapping AI onto existing cloud‑native solutions.
# Developers Debate AI's Real Intelligence

**Source:** [https://www.youtube.com/watch?v=MmgdcWA3bcs](https://www.youtube.com/watch?v=MmgdcWA3bcs)
**Duration:** 00:15:22

## Timestamped Sections

- [00:00:00](https://www.youtube.com/watch?v=MmgdcWA3bcs&t=0s) **Developers Debate AI's True Role**
- [00:03:17](https://www.youtube.com/watch?v=MmgdcWA3bcs&t=197s) **Navigating Generative AI Libraries**
- [00:06:27](https://www.youtube.com/watch?v=MmgdcWA3bcs&t=387s) **Open Source vs Closed Source**
- [00:09:31](https://www.youtube.com/watch?v=MmgdcWA3bcs&t=571s) **Developers Prioritize AI Ethics & Transparency**
- [00:12:34](https://www.youtube.com/watch?v=MmgdcWA3bcs&t=754s) **Developers: Embrace AI Beyond Kubernetes**

## Full Transcript
It seems like folks treat technology development as if there is an easy button.
You know, just press it and it's all good.
So I have to wonder, how do the people who are actually doing the work
developing these programs themselves, how do they feel about AI?
Or, the $10 million question, do developers of AI
worry about AI taking their jobs?
You know we want to know.
So my guest today is JJ Asghar, developer advocate at IBM.
And he’s going to take us inside.
JJ, welcome. Hey, thank you so much for having me.
First up, you're coming from the developer POV.
So can you tell us from where you stand or where you sit, how are developers
thinking about generative AI as a new development tool?
The core problem with AI, from a developer's standpoint,
is that it's not very smart.
It's really not, and you'll notice quickly that I will not use
the term artificial intelligence, because it's not intelligent.
AI is an overloaded term now
that has caused confusion in the market.
There are certain tools for certain jobs.
And the best part about AI is it is one of the best librarians
you'll ever have in your life.
Hold on.
Is that a controversial take right there, JJ? You know, as somebody who lives
and breathes this every single day, I'm trying to tell you the truth here.
Now, I want to spend a little bit of time on that then,
because you really made a point
to, you know, take away the intelligence part of it.
Why can this not be considered intelligent?
Oh, that's a philosophical question there my friend.
But in short, what it is, is a program
that is looking for the best possible answer to what you are asking.
There's no logic inside of it.
There's no reasoning inside of it.
What it does is look in a database, and it says: you're looking for apples.
Okay, cool.
Well, these words are really close to the word apple.
So maybe you're asking about Granny Smith apples.
Maybe you're asking about red apples. Maybe you're asking about green apples.
So it generates an answer to that question,
adding those words to the sentence one at a time.
If you’ve ever noticed as you use generative AI,
it comes out word by word, because it's actually looking for that next word.
It doesn't figure everything out and then dump it out.
It's like predictive text on your phone, if that makes sense.
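JJ's predictive-text analogy can be sketched in a few lines of Python. Below is a toy bigram model, purely illustrative (real generative models use neural networks trained over tokens, not raw word counts), that generates text one word at a time by always picking the statistically most likely next word:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "large dataset" the model learned from.
corpus = "the cat sat on the mat the cat ate the apple the dog sat on the rug".split()

# Count which word tends to follow each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=5):
    """Emit text word by word, always taking the most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat"
```

Like the phone keyboard JJ mentions, it never plans a whole answer; each word is chosen only from what came before.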
No, that makes complete sense.
So then I'm wondering from your perspective as a developer, then
how do you best go about using Gen AI to its maximum capacity?
That's a great question.
Again, it's another tool, right, where every single company
out there, every single person, frankly, has a bunch of documentation, right?
They have information that they need to store in,
what some people call the second brain.
Right. You know,
you take notes, or keep those notebooks people fill with notes and whatever.
If you look at generative AI from that lens, where it is now a thing
that you can query in natural human language or natural language
processing, to ask it questions about those documents.
Hence the librarian. That becomes really powerful
because now a company can have all of its documents
and then instead of going to the HR representative to talk about,
you know, your insurance policies from 1963 or,
I don't know, whatever number you're thinking, right?
But, instead of asking those questions and having them go look for this,
now you have this thing that already knows about all that,
all that data, or at least gets really close to that data.
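The "librarian" workflow described above, asking questions of a company's documents in plain language, can be illustrated with a deliberately crude retriever. This sketch just scores documents by word overlap with the question (production systems use embeddings and a generative model to summarize the retrieved text, but the retrieval idea is the same; the documents below are hypothetical):

```python
def best_document(question, documents):
    """Return the document sharing the most words with the question --
    a crude stand-in for the retrieval step behind the 'librarian'."""
    query_words = set(question.lower().split())

    def overlap(doc):
        return len(query_words & set(doc.lower().split()))

    return max(documents, key=overlap)

# Hypothetical company documents.
docs = [
    "Insurance policy: dental coverage starts after 90 days",
    "Travel policy: book flights through the approved portal",
    "Holiday schedule: offices close the last week of December",
]
print(best_document("when does my dental insurance coverage start", docs))
# -> "Insurance policy: dental coverage starts after 90 days"
```

Instead of asking the HR representative, the question is matched directly against the documents themselves.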
Well, then, since you started this, I'm going to go back and forth with you
with this as a library metaphor, because I love a good library.
But a library can also be super duper intimidating, like my university.
You know,
we had two levels
and levels in the stacks, and so much information is housed within there.
So if we're looking at gen AI in that way, there are just constantly
and ever expanding options that are out there.
How does one even begin to navigate and evaluate these different tools
that are there?
JJ, how do you know which book to pull first?
Yeah, great. Wonderful question.
One of the biggest challenges is that every single model, which is
the brain, right, of the generative AI,
is designed and programmed in a different way.
One of the best parts is that there are generative
AI models out there, just like something called Granite from IBM
that you can run locally inside your own data center
or inside your own country, which allows you to have sovereign AI.
One of the biggest problems is that, as a company,
you don't want to send your data out to the San Francisco Bay Area
and have them crunch the numbers and come back, right?
You're sending it across the internet.
Would you send your secret sauce across the internet?
No, that's a horrible idea, right?
Yeah.
So that’s
the power of having these different models and different ways they’re designed.
And the foundational model from IBM called Granite, it's a model
that is designed to be able
to run in your own data center, and then you can train it, or what
we call fine-tuning, to give it more skills and more abilities.
Well then hold on.
So you mentioned Granite.
What you’re talking about, it sounds as though Granite is open source, right? Yes.
You can actually get the paper. Believe it or not, in my browser right now
I have a link to the actual paper right above your head, which is really funny,
because I read it all the time.
It is. It's a math paper, so it is actually kind of hard to read,
to be honest with you.
It really is there, though.
But yes, you can actually see exactly what IBM used to build the data set.
So if we're using the analogy that, a model is a program,
think of the data set as the source code, okay.
That's not 100% true, right.
People are going to pick at me because I said that.
But if you're trying to keep that analogy in your head to understand
the power of this: data sets are the source code for the models.
That's actually what builds the models and gives them the initial knowledge.
As you know, I didn't say intelligence.
Initial knowledge of what it has to understand.
And then you put your knowledge or what we call fine tuning on top of it,
of your company's PDFs or documents or whatever inside of it.
Gotcha.
Well, sidebar, you're going to start having me say, AK instead of AI.
Now you're messing with me, JJ. I love it.
Now, I love that you
broke down Granite for us there, but in general, can you let me know
a little bit more about why open source could be considered as a beneficial thing?
And is open source always the ideal? So
you, my friend are a philosopher deep down inside, I'm starting to get this.
I'm actually an open source engineer right.
So what does that mean?
That means most, if not all, of my code is out in the public,
right where you can actually see the tooling
and the work that I'm doing where you can literally just find me
on the internet and be like, oh, this is what JJ is working on now.
Right.
That is the core of open source, the model from an essay
Eric Raymond wrote called The Cathedral and the Bazaar.
There's a cathedral
where it's very top down mandated, which is closed source software.
And then there's the bazaar, which is like,
you know, like a marketplace where everybody's kind of working back and forth
and you leverage all these engineers across the planet to make stuff.
So what does that mean?
Open source allows you to have multiple eyes on problems looking for stuff, right?
There are certain security issues
that have recently happened and hit the news.
One was a closed source system that caused a lot of people
a lot of problems traveling.
And then there's another one, on the open source side,
that was actually even worse, but was caught before any major issue happened.
And it was because some nerd out there couldn't
actually access their server as quickly as they usually could,
which was really, really interesting.
So we had one that took down travel, which was a closed source system,
and then we had another one who was just one nerd
who was like, I couldn't log into my server fast enough.
Oh, there's a backdoor in OpenSSH.
This isn't good.
And then he found the backdoor, which got a CVE, and he figured it out,
and then put it out to the world and fixed it before anything happened.
Okay. See, hold on.
That's actually really counterintuitive to me, because I would have thought
that the closed system would have been safer than open.
Because when I think of open, I think, okay, people can just come on in here
like, bad actors can come and do their thing and mess around with it.
But you're saying that in this case, the open source system,
because it was able to draw upon experiences from people
who weren't just inside of it, actually ended up being a stronger force.
Exactly.
It's 100% that because you have so many more eyes, so much more experience, right?
I mean, the whole story of why you need different people in
the room is that you need
diversity and the ability for people to come with different viewpoints.
What is open source?
But the way the nerds do it is a true diversity,
where you have people who've been in the military, you have people
who failed out of university,
you have people who didn't go to university,
all looking at the problem in different ways, and they all resolve it.
And there's a handshake agreement
inside those rooms that allows you to say, okay, this is a good patch.
Let's go ahead and submit this so this fixes the problem.
So you mentioned this a little bit before too, JJ,
I believe you said the word transparency.
So if possible,
I want to time travel a little bit, go back there, and dig into it,
because, you know, transparency, ethics, governance, these are huge questions
when it comes to AI, or AK in your situation.
So what really matters to developers when we're thinking about those big questions,
when we are thinking about ethics and data transparency and governance?
So, frankly, as a developer, somebody who gets to play around
in the plumbing, not the porcelain but the plumbing of the world, right.
The ethics and the governance of it
are insanely important to me, because I need to know
the thing that I'm working on. Frankly, like, I'm a human.
I like people.
I don’t want to kill people.
Right? Like, that's not something I want to do.
What a relief. Yes. Yeah. Yeah, exactly.
But, you know, if we take AI the wrong way, it can really hurt society, right?
It really can.
And having that governance, having that transparency in it, we can be the rebels.
And that's what we're doing here with the Granite model and the transparency.
So we're giving you an opportunity to actually see into
how these models are made, so you can make good choices
for your business and hopefully society as a whole.
Let me take you back then to this idea that there are legitimate concerns
for you to have when it comes to AI, especially as a developer.
So I'm going to get really personal with you for a second.
Are you and your fellow developers, are you concerned about AI taking your jobs?
No, not at all.
Not at all.
There's some great stories around using AI to build
software, and people are like, oh, well, why don't you just get the AI to do it?
People don't realize until much later on in their career,
or they don't teach you this
in university if you go down the computer science and engineering space:
they assume that engineering is math.
And a lot of like, you know, sitting there
thinking abstractly to figure out problems.
But believe it or not, software engineering as a whole
is actually knowledge work, right?
It's actually artistic also,
where you have to think about problems and unique ways to solve them.
And back to the intelligence statement earlier,
AI doesn't have intelligence, doesn't have logic to figure it out.
It can regurgitate code that it knows about.
But if I took my business and asked it to create something for it,
and then something went pear-shaped inside of it, I would have to have
an army of engineers to unwind what it did to make it happen.
And this goes back to the analogy of the librarian, where there are
some code completion systems out there, including watsonx Code Assistant,
which is from IBM, that allows you to use it as a reference, right?
where you can ask it and it'll give you suggestions,
like an if/then statement to put in, or stuff like that.
As a whole, you would never ask it to build you a piece of software.
You'd use it as a pair programmer, a programmer sitting beside you
so you're like, I'm trying to do this, and you write it out as a sentence,
and then it gives you a suggestion, and then you look at that suggestion
and then you edit it to actually do what you're looking for.
Right.
It gives you kind of a framework, if you will, or a straw man of the problem
that you're trying to resolve and then come out with that.
Does that make sense?
No, that does. That does make sense.
And thank you for breaking it down in that way.
I just have to give you additional props right now because as someone who's
not a developer, you're actually making this make sense to me.
And I just appreciate you for doing that.
But now I want to give you a chance to actually speak directly to some of the
developers that may be listening, that hopefully are listening to this right now.
If you can encourage developers to do one thing as they move on
with evaluating tools and building solutions, what would that one thing be?
As a developer, and hopefully it's a modern day
developer looking at me right now,
you probably spent some time in the cloud native ecosystem, right,
where we use this thing called Kubernetes, and we're trying to do all these, like,
these VM-to-Kubernetes-pod conversions and all that jazz.
We thought that was hard
and we thought we were going to make a lot of money doing that,
because that was going to be the next generation.
Well, turns out, AI is two generations ahead of that and it's even harder.
So what you've got to do
is you've got to go learn this stuff, and this stuff is confusing as all hell.
I'm not going to lie.
And it is a completely different way of looking at it.
But it's not just PhDs and Jupyter notebooks anymore.
There's actual tooling to get something useful out of it,
but you're going to have to talk to your bosses to understand that
as much as the VCs of the world want you to just slap AI on the side
of your company or whatever to say that you're doing it, there's a lot more there,
and you will quickly realize that there's a lot to learn, and the best thing to do
is start from ground zero and learn what a token is.
And as soon as you understand what a token is,
then find out the next thing you need to learn.
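For anyone following JJ's advice to start by learning what a token is: tokens are usually subword pieces, built by repeatedly fusing the most frequent adjacent pair of symbols. This is a minimal sketch of that merge loop, the core idea behind byte-pair-encoding (BPE) tokenizers (illustrative only, not a production tokenizer):

```python
from collections import Counter

def most_common_pair(tokens):
    """The adjacent pair that occurs most often in the token sequence."""
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge(tokens, pair):
    """Replace every occurrence of `pair` with a single fused token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from single characters and fuse the most frequent pair, four times.
tokens = list("low lower lowest")
for _ in range(4):
    tokens = merge(tokens, most_common_pair(tokens))
print(tokens)  # -> ['low', ' lowe', 'r', ' lowe', 's', 't']
```

Notice the learned tokens aren't words: frequent fragments get fused together, which is why models see text as sequences of token IDs rather than whole words.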
I want to invite you real quick to let me know,
is there anything that you would love to share that I didn't ask you about today?
Oh, actually, yes. Back to the open source story.
So we talked about the Granite model and we talked about how all that works.
Well, there's another open source project
out there called InstructLab, which came out of IBM Research
and has been donated to Red Hat, which runs it now.
It is basically that fine tuning narrative that we were talking about
by putting your company's knowledge, or your own knowledge, on top of something
like Granite to be able to do something.
It's in its infancy of a project, but we really do need developers
to come into our space to start helping us in here,
because the more people we have there, the more transparency we show,
and the more ability we have to do the things that I was talking about earlier.
It all boils down to what we're trying to do inside of InstructLab.
And there’s enough to learn here that will teach you
the AI ecosystem as you’re going down this path.
So you’ll be able to understand the value of this space.
Developers, you hear that, right?
You've now got your mission.
You've got your charge. JJ needs you.
Well look, JJ, thank you so much.
This episode has been hugely informative.
And again, if you are a developer who's been listening, first off,
thank you for being here.
But I know that you're also going to walk away with some great intel.
So once again, appreciate you, JJ.
And that's it for today's episode.
But y'all, please stay tuned for more because you know that it's on the way.
We'll see you then.