Engineers Harness LLMs for Coding
Key Points
- Engineers are leveraging LLMs to instantly comprehend API schemas and endpoint behavior without manually consulting documentation.
- LLMs can automatically diff code versions, highlighting changed lines and often explaining the underlying functionality.
- Large code‑base maintenance tasks such as trimming unnecessary code and refactoring thousands of lines are being delegated to LLMs for efficiency gains.
- Routine but undesirable activities like writing documentation or generating boiler‑plate code are offloaded to LLMs, freeing developers to focus on higher‑value problems.
- The overarching theme is that LLMs serve as a cognitive aid, handling boring or cognitively demanding steps so engineers can concentrate on creative and complex aspects of software development.
**Source:** [https://www.youtube.com/watch?v=juG3FUPyrVQ](https://www.youtube.com/watch?v=juG3FUPyrVQ)
**Duration:** 00:09:09
Sections
- [00:00:00](https://www.youtube.com/watch?v=juG3FUPyrVQ&t=0s) **Engineers Harness LLMs for Coding** - The speaker reviews recent blog posts that showcase ten practical ways engineers employ large language models—such as API comprehension, code diffing, trimming codebases, and refactoring—to simplify boring or difficult programming tasks.
Full Transcript
Are you really curious where LLMs are actually getting used by engineers? So am I, and that's why I was so pleased to see a couple of blog posts come out in the last few days talking about how engineers are actually using AI in everyday coding life to do tasks that were boring or difficult. I'm going to go through ten of them, call out at the end where I think the overall theme is, and link the posts underneath, because I think both Eric and Nicholas did a great job articulating how they have used LLMs to make their lives easier and then sharing that work so others can learn from it. So much of LLM learning is about sharing what you know so others can build on it, so I appreciate both of them.

Now, the ten things they called out — and it's not exactly ten; they called out more than that, and I just picked the ones I've seen elsewhere as well.

One is really about how you understand an API without having to go look it up — API reading. If you go in and say, "I need to know what this schema is, I need to know how this endpoint works," it's super easy to get that from an LLM, particularly if it's a really well-known API.

Number two: diffing code. You can pull the code down, ask what the differences are between lines of code, and the LLM is going to see that, call out the changed lines, and, as a bonus, probably tell you how the code works.
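The mechanical half of that diffing step can be reproduced without an LLM at all — the model's value-add is the explanation layered on top of the diff. A minimal sketch in Python, where the two code versions are invented for illustration:

```python
import difflib

# Two hypothetical versions of the same function (illustrative only).
old = """def total(items):
    t = 0
    for i in items:
        t += i
    return t
""".splitlines(keepends=True)

new = """def total(items):
    return sum(items)
""".splitlines(keepends=True)

# unified_diff marks removed lines with '-' and added lines with '+';
# an LLM would typically be handed this output and asked to explain it.
diff = list(difflib.unified_diff(old, new, fromfile="v1.py", tofile="v2.py"))
print("".join(diff))
```

In practice you would paste the diff (or the two raw versions) into the model and ask "what changed, and why does it still work?"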
Number three: trimming code bases. Three and four are related — trimming code bases and refactoring code are both about efficiency. If you're looking just to trim things down, the LLM can handle the shortening for you; if you're looking to refactor, the LLM can do that too. Refactoring is a slightly higher-level task, because you're trying to make multiple pieces of code work together, work together efficiently, and drop extraneous code. The engineers report using an LLM for both of those, including refactoring several thousand lines of code. So it's not just going to take a hundred lines of JavaScript that you could bash out yourself and make slightly more efficient — it can actually do quite a bit more than that.
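As a toy illustration of the trimming-and-refactoring idea (made-up code, not from either blog post), this is the kind of mechanical rewrite being delegated: collapsing verbose logic and dropping dead code while preserving behavior.

```python
# Before: verbose version with an unused helper and redundant branching.
def _unused_helper(x):  # dead code an LLM would flag for removal
    return x * 2

def classify_before(n):
    if n < 0:
        result = "negative"
    else:
        if n == 0:
            result = "zero"
        else:
            result = "positive"
    return result

# After: the trimmed, refactored equivalent — same behavior, less code.
def classify_after(n):
    if n < 0:
        return "negative"
    return "zero" if n == 0 else "positive"

# The key check when an LLM refactors for you: behavior is unchanged.
assert all(classify_before(n) == classify_after(n) for n in range(-3, 4))
```

The final assertion is the important habit: whatever the model rewrites, verify the old and new versions agree before trusting the result.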
I also want to call out five and six together, because fundamentally they're about taking stuff you don't want to do, or that your brain has a hard time doing, and making it easier. There's a bunch of work that's effectively categorized under boring tasks engineers don't enjoy, like writing documentation. There's also the blank-page problem, where you don't quite know how to start on a problem and you just want to get something out there so you can begin thinking about the problem in code. Both of those are challenges if your brain isn't ready to write the code right that second — if you need to get warmed up and into the problem — or if you're focused on a really knotty problem and don't want extraneous tasks to distract you. In those situations, just give the thing you don't want to do to the LLM, which is what the engineers recommend doing here, and certainly what I've seen in other places as well.

You treat the LLM the way you would treat an assistant you're paying to take some of the load off your plate. That's not super surprising, because we see the same thing with non-engineers — it's the same work motion — but it's interesting to see it extend into the code space as well.
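For the documentation case, the handoff is usually just "here is my code, write the docs." A sketch of assembling that kind of prompt — the wording and the sample function are my own, and the actual LLM client call is deliberately left out since it depends on your provider:

```python
# Source of a hypothetical undocumented function, pasted in as a string.
FUNC_SRC = '''def merge_sorted(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]
'''

def doc_prompt(source: str) -> str:
    # Bundle the code into a prompt; the resulting string is what you
    # would send to whichever LLM API you use.
    return ("Write a concise docstring and a short usage example for "
            "this Python function:\n\n" + source)

prompt = doc_prompt(FUNC_SRC)
```

The same pattern covers the blank-page problem: swap the instruction for "write a first-draft skeleton of X" and let the model produce something you can react to.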
All right — you can also use an LLM if you're trying to understand a new problem. Typically you'd Google for it, but it's faster to just get a clear explanation from an LLM. You can say, "I need to get deeper on Python in this area," or "I need to learn about curl," or Rust, or Perl, or whatever it is, and you can get exactly the primer you need to work in the application the way you want to. That's far more efficient than trying to understand something by inferring from Google results: you skip the inference and get the explanation.

And what I find is that even though we all talk about large language models as hallucinators, humans typically tolerate a degree of inaccuracy. Everyone who has been on the internet for a while knows that not all links are valid and not everything on Reddit is true. So if an LLM is coming back to you with a primer, I think we already have a built-in tolerance for noise — for "it's approximately right" — in our information diet. If it's mostly right, we tend to say, "okay, fine, I can move forward with this," and we tolerate whatever happens afterward by asking questions and debugging. That's definitely a use non-engineers have too, and engineers adopting it reinforces the strength LLMs have for coding.

Next: building an app. If you want to understand how powerful these can be, engineers are reporting they can build entire apps — maybe not super big apps, but entire apps — in zero-shot or one-shot fashion, basically sending a single prompt, maybe two, and getting the whole app done. Is that something massive? Are you building a B2B SaaS business off of it? Not necessarily. But the use for software is changing: because the cost of making it is going down, we can use software for more things, and there's a lot of utility to be unlocked in just writing a good prompt and getting the app out cleanly in one go. So a lot of it is about prompting — understanding what you need to ask the AI to do, and then trusting it to ask you questions and drive the show once it starts writing code. I've had a lot of success playing around with Claude recently myself, getting it into a space where I'm encouraging it to ask me questions for clarity around requirements, versus me asking it questions, which is a lot more labor-intensive. So that one resonated for me.
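That "let the model drive" prompting style can be made concrete. A hedged sketch — the wording is mine, not from the posts, and the call to an actual LLM API is omitted since it varies by provider:

```python
def one_shot_app_prompt(app_description: str) -> str:
    # A single prompt that (a) states the goal and (b) invites the model
    # to ask clarifying questions before writing code — the inversion
    # that is less labor-intensive than interrogating the model yourself.
    return (
        f"I want you to build this app: {app_description}\n"
        "Before writing any code, ask me up to five clarifying questions "
        "about requirements, then propose a plan, then implement it in "
        "one pass."
    )

prompt = one_shot_app_prompt("a script that auto-schedules focus blocks "
                             "on my calendar")
```

The follow-up turns — answering the model's questions — replace most of the back-and-forth you would otherwise spend steering it line by line.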
And then the last two I wanted to call out. One of them is writing throwaway code. I talked about the cheaper cost of software — it's gotten so cheap now that you can literally write throwaway code. I wrote ten-some lines of JavaScript just to mess around with auto-scheduling in my calendar. It took me about ten minutes between meetings, and it was not a big deal. I don't know if I'll use it again; it was useful for that moment, and whether I ever go back into Google Apps Script and mess with it again, I kind of don't care — and I kind of don't care how stable it was, because it was so cheap to write that I was simply done with it. If I'd had to solve that problem before LLMs, I would have had to really labor over it. I'm not a programmer by trade, I'm not super fluent in JavaScript, and I would really have had to dig in to make it work — it would not have been worth my time.

This is changing the time value of all of our work, and the coding piece is really interesting because it means coding is becoming something non-coders can do. Engineers are still far, far better at the architecture, the elegance, the clarity of engineering thinking — that's not changing, and it's still needed — but the grunt work of generating code is something everyone suddenly has access to, and we're seeing that come through in a lot of appetite for cheap, throwaway code.
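For flavor, here is what cheap throwaway code tends to look like — a toy sketch of my own in Python (not the speaker's Apps Script, which wasn't shared): find the free gaps between booked meetings in a workday, with times as minutes from midnight.

```python
# Throwaway sketch: find free gaps (in minutes-from-midnight) between
# booked meetings. The meeting times below are invented for illustration.
def free_gaps(meetings, day_start=9 * 60, day_end=17 * 60):
    gaps, cursor = [], day_start
    for start, end in sorted(meetings):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        gaps.append((cursor, day_end))
    return gaps

meetings = [(9 * 60 + 30, 10 * 60), (13 * 60, 14 * 60)]
print(free_gaps(meetings))
# [(540, 570), (600, 780), (840, 1020)]
```

The point is not the code itself but its disposability: it solves one afternoon's problem and can be deleted without regret.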
All right, and the last one: it's really tough sometimes to know how to break a larger problem down. That's something that has been covered to great effect in product management, where we talk about how you break out requirements — there's a whole startup around that now, ChatPRD. But if you look at it from an engineering perspective, breaking out technical requirements, it turns out LLMs can do that too.
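The decomposition step also maps naturally onto a prompt. Another hedged sketch — the template wording is mine, and the feature and constraints are hypothetical; neither post specifies a format:

```python
def breakdown_prompt(feature: str, constraints: list[str]) -> str:
    # Ask the LLM to turn one large feature into ordered technical tasks,
    # the engineering analogue of breaking out product requirements.
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return (
        "Break this feature into small, ordered engineering tasks, each "
        "independently testable:\n\n"
        f"Feature: {feature}\n"
        f"Constraints:\n{bullet_list}"
    )

prompt = breakdown_prompt(
    "user-facing audit log",  # hypothetical feature
    ["Postgres backend", "no schema migrations during business hours"],
)
```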
One of the things I take away, as I look through Eric's post and Nicholas's post, is that a lot of these motions are common to a lot of knowledge work; they're just being executed in code. They're not actually different, except that you're working in code all day instead of text all day. And I think that gets at some of the underlying strength of an LLM: it's a next-token predictor, it works on code and it works on text, and if you happen to work in code, it works pretty well there. So if I had to pull out a theme, I would say — as someone who's a technical product person — this feels more like the work I already do with LLMs, which was a little surprising to me, than like a foreign language or some totally different use case.

Fundamentally, people are using large language models for boring stuff, for stuff they don't want to do, to lighten the cognitive load around really challenging tasks — frankly, just to save themselves time. That's a common use case, and it's really good to see specific examples of it. I'm definitely going to link both of those blog posts below so you can check them out. I think it's important to start socializing good use cases that are actually practical and usable for LLMs, especially in the coding space, where there's so much assumption around "the LLM is going to take jobs" or "the LLM is useless." Let's just talk about what it does and worry about the implications later. I hope this was fun, and I hope it was helpful.