Six AI-Powered Coding Work Patterns
Key Points
- The speaker critiques the “hack‑centric” view of AI‑assisted development as brittle and emphasizes the need for more stable, repeatable approaches.
- By analyzing practices across industry leaders—founders, indie hackers, and product heads—they identified six proven work patterns that serve as reliable foundations despite the rapid churn of new tools and prompts.
- The first pattern, **codebase mapping and onboarding**, uses AI to generate summaries, graphs, and PRs that accelerate understanding of existing codebases, even for non‑engineers.
- Real‑world examples illustrate this pattern: Claire Vo leverages Devin for repo analysis and initial PR generation with ~80% success, then refines outputs with Cursor; CJ Zafir similarly integrates PRDs and planning into Cursor for seamless edits.
- The article also offers an extensive review of AI coding tools, positioning the six patterns as “hidden stable elements” that developers can rely on while mixing and matching the ever‑evolving toolset.
Sections
- Stable Work Patterns Amid AI Tools - The speaker critiques fragile AI hacks and proposes focusing on six proven work patterns that unify diverse tools, offering a comprehensive guide to stable development practices.
- AI-Powered Code Mapping Tools Overview - The speaker outlines various AI utilities—such as Repo Prompt onboard files, Cursor rules, Claude Code, Windsurf's Cascade, and Aider—used for repository context extraction and onboarding, emphasizing pattern recognition over a single “best” solution.
- Importance of Planning in AI Development - The speakers stress that thorough planning using tools like Cursor Composer prevents verbose errors, high‑load throttling, and model refusals, enabling reliable execution and easy rollback.
- AI Debugging: Tools, Limits, Tips - The speaker outlines how AI can streamline bug detection and fixing—with examples from industry leaders—while emphasizing the need for clear error traces, organized code, and human oversight to mitigate tool limitations and regression risks.
- AI Coding Consistency and Context - The speaker stresses the need for human sign‑off, clear rule files (e.g., Markdown or .cursorrules), and robust context‑engineering practices to enforce consistent, drift‑free code generation, citing tools like .cursorrules, CLAUDE.md, the Model Context Protocol, Cascade, and Claude sub‑agents for multi‑agent workflows.
- AI Prompting Empowers Non‑Tech Builders - The speaker argues that using AI prompts to write code makes building applications accessible to anyone, dispelling the myth that technical expertise is required.
Full Transcript
# Six AI-Powered Coding Work Patterns

**Source:** [https://www.youtube.com/watch?v=Z0wb0y5BVIY](https://www.youtube.com/watch?v=Z0wb0y5BVIY)
**Duration:** 00:22:28

## Sections
- [00:00:00](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=0s) **Stable Work Patterns Amid AI Tools**
- [00:03:20](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=200s) **AI-Powered Code Mapping Tools Overview**
- [00:06:38](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=398s) **Importance of Planning in AI Development**
- [00:11:20](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=680s) **AI Debugging: Tools, Limits, Tips**
- [00:14:26](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=866s) **AI Coding Consistency and Context**
- [00:19:24](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=1164s) **AI Prompting Empowers Non‑Tech Builders**

## Full Transcript
You know, one of the challenges with
artificial intelligence and development
or building in code is that everyone is
going for this is my particular hack or
this is my gimmick that I use, this is
the tool set. And it all feels really
brittle. It feels like if you don't use
the tool and if you don't use the prompt
and if you don't build exactly that
thing, it's not going to work. I wanted
to do something a little bit different.
I wanted to go through and look at work
patterns that I see being practiced
across multiple industry leaders. Now
these people are not all founders. Some
of them are indie hackers. Some of them
are product leaders. They exemplify how
rich and how varied the opportunity is
right now for people who want to start
building in software. They also
exemplify how many different tools you
can use. I include lots of tool reviews
in this article. You will not be short
of those if that's your thing. It's
probably the most comprehensive look at
industry coding tools with AI anywhere
on the web. That being said, I think the
heart of this approach is really the six
proven work patterns that I've been able
to uncover and the examples I can give
you of how different tools can be
stitched together to create those work
patterns. I view those work patterns as
the hidden stable elements in an
otherwise endlessly changing sea of new
tools, new patterns of prompting, new
leaders that come along and give you new
hacks, new applications. Every time I
turn around, there's a new thing in AI,
right? And so what I wanted to get to
were these patterns that were battle
tested that you could go back to and bet
on. So with that in mind, let's look at
all six and look at examples from
leaders along the way. Number one,
codebase mapping and onboarding. You
might not think of that as a development
pattern, but it really is. You can use
AI to quickly understand existing code
bases. You can generate maps or
summaries or graphs for onboarding or
legacy dives. This is especially useful
if you have an existing codebase
obviously. And if you want to bring
someone in on your team quickly, you can
treat AI output as sort of a starting
point for further refinement. In this
context, this can accelerate onboarding
very rapidly for new builders. It can extract high-level context really quickly, and it's very useful for people who are non-engineers to get to know a
codebase. I've actually written some
prompts that are designed to help you
get to know a particular coding pattern
that you're using. Very similar, very
similar process. Here are some examples
from actual leaders that use codebase
mapping and onboarding. Claire Vo uses Devin for initial repo analysis and then refactors production codebases based on Devin's assessment. She'll use Devin to generate PRs or tests with roughly an 80% success rate on the first try, she says, and then she can chain into Cursor for edits down the road. CJ Zafir loads PRDs and plans into Cursor, specifically Cursor's .cursorrules file, to establish persistent context, and then uses Gemini 2.5 to scan a large codebase. Eric Provencher primes Claude Code with Repo Prompt's onboard file for context and then enables structured XML edits from there. See
these get very tactical and specific and
by themselves, you would be like, I'm really, you know, overwhelmed. A Repo Prompt onboard file? What is that? Cursor's .cursorrules file? What is that? You can look those up, and I give you more information in the report to dig into it. It's not that you are going to be lost looking things up. It's that you will be lost if you don't see the common pattern. And that's what I want to give you. Claire is doing initial assessment with Devin. CJ is loading PRD plans into Cursor rules. Eric is loading Claude Code with the Repo Prompt onboard file.
They're giving it the context it needs
to map. Gergely Orosz uses Claude Code and the Codex command line for session context on larger projects. Melvin uses Windsurf's Cascade for auto-context on large codebases. You see all the different tools you can use for this.
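Under all of these tools, the move is the same: hand the model a compact map of the repo. As a rough illustration of what that map can look like, not any of these products' actual implementation, here is a minimal Python sketch that prints a paste-able file tree for an AI session:

```python
import os

def repo_map(root: str, max_depth: int = 2) -> str:
    """Build a compact, paste-able file tree of a repository."""
    lines = []
    root = os.path.abspath(root)
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip hidden directories and dependency folders to keep the map small.
        dirnames[:] = [d for d in sorted(dirnames)
                       if not d.startswith(".") and d != "node_modules"]
        depth = dirpath[len(root):].count(os.sep)
        if depth >= max_depth:
            dirnames[:] = []  # don't descend further
            continue
        indent = "  " * depth
        lines.append(f"{indent}{os.path.basename(dirpath) or root}/")
        for name in sorted(filenames):
            lines.append(f"{indent}  {name}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Paste this output at the top of your AI session as onboarding context.
    print(repo_map("."))
```

Start with a small codebase, paste the output at the top of your session, and refine from there.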
Simon Willison uses a Claude Code onboard file as well, but he uses it for GitHub Actions context. You can mix and match
these tools a lot of different ways. And
so the goal isn't for me to give you the
single golden best way, because someone's going to come along. GPT-5 will come along and it will change the way we think about what is best on a particular tool basis. It will not change the fact that we will use AI for codebase mapping and onboarding. That's going to stay, and that's why I call it out. So what are some tools that get mentioned a lot here? Claude Code I've mentioned, along with Devin, Cursor, and Windsurf. I also want to call out Aider. Aider is helpful here as
well. The principles to pull out of this first mapping and onboarding piece: point the AI at a repo, prompt it for summaries or graphs, and then refine
from there. I would start with a small
codebase. I would update context files
regularly. And for teams, you can share
those AI generated docs almost like
documentation for collaborative
onboarding. Let's jump to pattern two.
Planning first development. I actually
teach this one a lot when I am teaching
my Maven course. You want to use AI as
an architect to outline plans,
functions, logic, edge cases before you
generate code. Then you approve and
refine and proceed. You can actually
simulate pseudo code as a way of getting
there. So you can have Claude code up a
React artifact and it can be pseudo code
that helps you to understand what you
want. This prevents tangential outputs.
It ensures coherence. It ensures
maintainability and conveniently all the
work you did doubles as documentation.
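Before going to the examples, here is what that architect-style prompt can look like. This is my own minimal sketch of a planning-first prompt, not a quote from any of the leaders mentioned:

```text
Act as a software architect. Do not write code yet.

1. Restate what I am asking you to build, in your own words.
2. Outline the functions, data structures, logic, and edge cases involved.
3. Propose a step-by-step build plan in small, independently testable chunks.
4. Stop and wait for my approval before generating any code.
```

The approved plan can live in the repo as a file, which is how it doubles as documentation.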
We'll go back to these leaders to see
how they're doing it. CJ Zafir asks Cursor for approaches and plans, generates an actions list, builds in chunks, and uses o3-mini to develop, I kid you not, 40-step plans. Dan Shipper sets up opponent processors in Claude Code for parallel sub-agents with opposing goals and then synthesizes outputs on long runs. Again, it's a planning action. Eric Provencher delegates planning to o3 via chat, and then Claude applies the edits. Gergely uses plan mode in Claude Code and the Codex command line for roadmap sub-agents for parallel tasks. I want you to think
about this as these leaders have got
tools they feel good about working with
and then they're going through these
workflow motions in a way that makes
sense to them. You too will probably
have tools you prefer. And even if your
tools are not the same ones, maybe you're not using Claude Code, you're using something else, you can still go through the same process. You can plan for Lovable just like you plan for Claude Code. Peter Yang demos three agents in Claude Code with custom commands like "think ultra hard" for quality plans.
Riley Brown uses Cursor Composer for planning in diagram phases. And he finds that if you're not planning, outputs can be verbose and wrong. And that's actually something that we see across the board. Dan Shipper notes that you can have high-load Claude throttling issues, which you can get if you're not planning well. And you can get model refusals if things get pushed too far. So Claude can just absolutely refuse if it runs out of context. CJ Zafir notes that Windsurf sometimes will break after initial steps. So essentially the
pitfalls are reminding us of the
importance of planning because if you
have something that's somewhat brittle
like that, you'd better have a solid
plan so that you can roll back to the
plan and continue to follow it. The
people I know who are able to build successful applications put their 80/20 effort into planning first and then execution, because they can always go back to the plan side. So the principles are: prompt for a breakdown, look for something like a sketch solution design, something that actually gets you a full picture of what you're trying to solve. Approve that plan before you code, and then use whatever tools you need to use to actually plan out the layout of complex features. You can do it in Claude Code, as you can see. Some people are doing it in Cursor. Some people are doing it in Windsurf. You can probably do a version of it in Lovable as well.
And then, wherever you can, you want to go back to the plan over time. You need to have habits of work that push you back to the plan. Okay. Pattern number three.
This one is related to tools. It is not
going anywhere. You know what the
fastest tool to $100 million is? It's
not cursor anymore. It's lovable. The
vibe coding tool. It is such a big deal
that Microsoft launched their own
copycat version for GitHub called Spark.
I think natural-language-driven coding, or vibe coding, is in a sense its own pattern. I wanted to give it its own airtime. You prompt in natural language
for code generation. You iterate for
refinements. It's ideal for prototypes,
for scripting, for exploration. And you
can honestly build real applications. I know people who have built a CRM for small services businesses off of Lovable. I know people who have built small applications focused on crypto monitoring off of Lovable. The
strength is speed. You can get through
things very very rapidly. Absolutely
zero setup in many tools and non-coders
are not blocked. The only thing blocking
you if you are a non-coder increasingly
is the clarity of your intent. If you
are clear about what you want, you can
make it. So Riley Brown uses Cursor for a 100% AI workflow and then uses Replit to go quickly from idea to deploy. Riley's demonstrated a CRM, similar to what I was talking about, in one prompt, and so he can quickly game out the UI. Melvin Vivas prompts Windsurf for deploys and switches to Gemini for the UI. Similar thing: Peter Yang types app descriptions in Claude Code and asks agents to build them. You'll notice even
though it's called vibe coding and I
talk about Lovable, these leaders are not just confining themselves to prompt-driven tools like Lovable or Bolt or GitHub Spark or what have you. They're using Cursor for this. They're using Claude for this. You can vibe code in these tools. CJ Zafir prompts Cursor for tweaks and v0 for UI. CJ wrestles with the idea that if you have ambiguous prompts, you are aiming the code off base. And CJ is not the only one. With Peter, Melvin, Riley, and others mentioned, I have seen cases where if you don't prompt with intention when you're using natural language for prompting, you end up steering your codebase awry. One of the
biggest challenges with natural language
driven development is that you have to
interpret an ambiguous human phrase into
very unambiguous code. The fact that it
kind of works is a miracle. And it's
getting much better. Lovable actually launched Agent, I think just last weekend. And really, the focus of Agent is to help you burn fewer tokens on Lovable by making surgical fixes and improving the accuracy of code editing and updates, because Lovable is aware that you need the option to refine and iterate as you see what the system initially infers about your human
language intent. So, the application principles for vibe coding: you've got to describe very clearly. You have to review for security and style. You should start small and iterate. And you need to pair it with planning. I emphasize that so much. I said it above. Pair it
with planning. If you want to read more,
I have a whole article on vibe coding
that's separate. I think it's called The Vibe Coding Bible. And it will help you
get deep into it. I think it's a
discipline that everybody would benefit
from playing around with given the
strength of these tools. Let's move to
pattern four. AI augmented debugging.
Bug, bug, bug, bug, bug. I hear bugs so
many times. You want to pull AI into
debugging. You want AI to help you analyze errors, to suggest fixes, to loop until resolved, to automate fix-run cycles and tests as much as you can. So, examples from leaders that have tackled this: Claire Vo uses Devin for debugging with Datadog and generates tests with human review. Riley Brown uses Cursor's terminal access for API setups and then fixes them via diffs. Simon Willison reviews commits file by file in Claude Code. And if you think about what
this takes, you have to recognize that
it's not always going to work well. I
have seen cases where you just pound on
that bug over and over again and you
just don't get anywhere. You need to recognize that any fix may introduce regressions. Reducing that was actually the goal of the new Lovable Agent build. Logical bugs may need humans. So if Windsurf is just stopping mid-session, which Zafir had happen, humans may need to step in to figure out what's going on. And Devin in particular, not to pick on the tool, but Devin may underperform in messy repos. So keeping your code organized and neat is one of those hidden success stories for builds.
So from a principles perspective, if you're laying this back out, you want to be responsible for making sure that your error traces are very clearly presented to the AI. One of the hidden things with these tools is they can't see the localhost previews, or the previews on your screen, that you sometimes see if you're prompting in Lovable. They don't know. You have to paste the error traces that you're getting clearly into the model. You should be able to prompt for
a clear root-cause assessment. I have a prompt for that that I can share: a clear "dig in, find the root cause, and then come back with a proposed fix." And then you need to make
sure that you are fixing cautiously.
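The speaker's exact root-cause prompt isn't shown in the video, so here is a sketch in the same spirit (my wording, not his):

```text
Here is the full error trace, pasted verbatim:
[paste error trace]

Do not propose a fix yet. First dig in and find the root cause:
which file, which function, and why the error occurs.
Then come back with (a) your root-cause assessment and
(b) one proposed fix, listing exactly which files it would touch.
```

Asking for the touched files up front is what makes the cautious-fix step possible.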
Make sure you know what files are being
touched. I would recommend sandboxing if
it's a real production build so that you
can see it working in the sandbox before
you deploy to production. Okay, let's
jump to pattern five. AI assisted code
reviews and refactors. Imagine AI as a pre-pull-request reviewer. Prompt it for feedback, automatically refactor, etc. So, Claire Vo chains Devin to ChatPRD for PRs and Cursor for surgical changes. It enables Devin to act as initial reviewer. Gergely uses Cursor and Windsurf for inline edits during rollouts. Simon Willison commits file by file after reviewing Claude Code. So
some of the pitfalls here are that if you trust blindly, if you do not actually check what the system is doing, you can get nasty regressions. Cursor can edit outside scope, per Zafir, but it's not just Cursor. I have seen report after report: sometimes Lovable does this, sometimes Claude Code will do this. Complexity can lead to confusion, and so you can get these larger beyond-scope edits. Now, the
principle to call out here is that you
want to prompt for a review, and you want to prompt very specifically for the
constraints and guardrails around that
review. If you wanted to review a
particular part of the code, say it. Say
what it is. Constrain it as much as you
can to avoid that overediting problem.
Make sure you have humans for final sign
off. And make sure that you have clear
rules on how you want the resulting code
to look and work and any dependencies
that it's related to. Pattern six,
context engineering and consistency
enforcement. Yes, a lot of words there,
but we'll get there. You want to maintain AI-readable files, like a CLAUDE.md file or a .cursorrules file, with clear guidelines that prepend to prompts for on-target outputs. This is when I talk about maintaining your house style. This is what I mean. It's
going to reduce drift. It will reduce
hallucinations. It reinforces best
practices across your codebase and it
compounds benefits. CJ Zafir uses .cursorrules and Cursor for this. Eric Provencher uses CLAUDE.md in the repo. Gergely uses the Model Context Protocol to handle context limits. Melvin Vivas uses Cascade for auto-context. One of the
things that you see here is that I am
combining both context engineering and
consistency because I think they're
related. If you have consistent rules in
a markdown file, you can more reliably
go through model context protocol to go
get context and to surgically get
context and not over get context. It's
really important that you have root
files that have clear principles and
examples. I also want to call out that
just a couple of days ago, Claude Code sub-agents launched. Sub-agents enable multi-agent workflows, like opponent processors or parallel task agents, and this allows you to build prompt-based setups that are quite complex. Now, if you can get into responsible manual orchestration of these sub-agents, and give them separate rules and give them separate tasks, you can do incredible, incredible things. You can have one agent that's expanding your PRD. You can have another agent architecting. You can have a third agent building, etc. It is really, really important, though, to follow these principles as you apply it, because sub-agents essentially just accelerate you.
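To illustrate the opponent-processor idea in miniature, here's a toy Python sketch. The `ask` function is a stub of my own standing in for a real model or sub-agent call; it is not how Claude Code wires sub-agents internally:

```python
def ask(role: str, task: str) -> str:
    """Stub for a model/sub-agent call. In real use this would hit an LLM
    with a role-specific system prompt; here it returns canned text."""
    canned = {
        "planner": f"PLAN for {task}: 1) sketch schema 2) build API 3) add UI",
        "critic": f"CRITIQUE: the plan for {task} is missing tests and auth",
    }
    return canned[role]

def opponent_process(task: str, rounds: int = 1) -> str:
    """Run a planner agent and a critic agent with opposing goals,
    then synthesize their outputs into one result."""
    transcript = []
    for _ in range(rounds):
        plan = ask("planner", task)     # one agent expands the plan
        critique = ask("critic", task)  # the opposing agent attacks it
        transcript.append((plan, critique))
    # Synthesis step: in a real workflow a third agent (or you) merges these.
    plan, critique = transcript[-1]
    return f"{plan}\nRevised after critique:\n{critique}"

if __name__ == "__main__":
    print(opponent_process("a small CRM"))
```

Swap the stub for real model calls and you have the skeleton of a proposer-critic loop; the synthesis step is where you, or a third agent, reconcile the two outputs.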
They don't actually give you different
capabilities. They just enable you to go
faster if you're applying these
principles successfully. And so I want
to go through these six again and make
sure you really understand them. I don't want you to be confused by the tool mentions. I want you to think about the principles. So, codebase mapping and onboarding: the durable principle here is that you are pointing the AI at the repo, you are prompting for summaries of the repo, and you're refining manually.
Planning first development. You are
prompting to understand what you want to
build first. You're developing a plan
first. You're approving the plan before
coding. It's really, really critical.
Vibe coding or natural language driven
development. It doesn't have to be in a
vibe coding tool like Lovable, although
of course for many people that's a great
spot. You must describe clearly what you
want. It goes back to that planning
piece. You have to review it for
security, for style. You should probably
start small if you haven't done it
before and be willing to iterate
thoughtfully. Sometimes in parallel: I will sometimes build with Bolt and with Lovable and with Replit in parallel to see what gets me the farthest. Pattern
four, AI augmented debugging and
testing. You want to make sure that you
are pasting your actual error traces and
communicating clearly what's wrong. That
you are being very specific and
prescriptive about how the system will
get to root cause about the kind of
suggested fix you will accept in line
with your house rules. And you should
apply fixes cautiously, especially in
production databases. Pattern five, AI
assisted code reviews. You want to be in
a place where you can prompt for review
of the code and then constrain the
review to just the space you want looked
at. Keep in mind, humans will have to do
a second pass, but you can use a tool like Devin to go and get a lot done
quickly. I think Claire's a great
example there. Finally, pattern number
six, context engineering. It is really
important to look at context engineering
as an opportunity to reduce drift,
reduce hallucinations through two basic
principles. One is maintaining AI-readable files, and the other is being clear about your prompts for on-target outputs. And so when Gergely uses the Model Context Protocol, it's a combination of having clear rules and then having clear prompts that tell the Model Context Protocol where to go. So, these are the six
patterns and I'm going to dive deeper
into them. I'll talk about each of the
leaders in the article. I'll talk about
all of the different pitfalls that we've
seen come across from those tools. No
tool is perfect. But the thing I want to
emphasize is that this overall review of
workflow patterns is durable. When GPT-5 comes out, maybe later this week,
you are not going to lose your way
because you can slot it into these
durable patterns. It's something you can
hang on to in a world that's changing
very very very fast. I want to close
with a question that I get a lot. Why
should I care? Why should I care?
And I want to tell you that prompting
for development or using code to develop
with AI is one of the easiest and most
efficient ways I have ever seen at
helping people understand what AI can
actually do because it's so clear. The
prompt runs or it doesn't run. And so
even if you don't plan to ever be a
builder, I encourage you to think about
exploring Lovable, exploring a simple
tool that lets you play around with
developing code to express your idea.
The most powerful thing I have shared with many people in the last year is that the old-era fears, that they were not technical enough, that they could not be their own technical founder, that they could not be their own builder, are not true anymore. You
can be your own technical founder. You
can be your own builder for any idea you
want to create. Now, I have yet to see
people really not be able to do
something because they didn't have the
knowledge. When you know how to ask AI
to teach you, which is something I've
written about, and when you know how to
apply these six work patterns in ways
that enable you to build with AI, you are in a position to make the dreams of what you want to make come true. Now,
I'm not saying and I don't believe in a
future where all of us will only build
our own apps and we won't ever buy apps
from each other. I don't think that's
true. Cooking has been around for a long
time. We have kitchens, we cook, but we
still go out to restaurants. We still DoorDash. In the same way, we're still
going to buy software. But I think
knowing how to cook and knowing how to
build are equivalently useful skills.
And actually, knowing how to code is not
more difficult than knowing how to cook
now. It's it's become much simpler
thanks to artificial intelligence. And
so if you are listening to this and you
have never tried building, this is my
plea to you. I want you to not be left
behind. I want you to be able to try.
And that is why I have taken the time to break out these tools and these leaders' examples into discrete,
specific, durable patterns. And if you
are building, the durable patterns are
pretty helpful, too. Because one of the
things I hear from people who build is
that it's hard to keep up. Everything
changes. These durable patterns aren't
going to change. They may have new tools
that slot in, but the patterns are still
going to be there. For example, vibe
coding will still be there tomorrow.
It's not going to go anywhere. And so,
look at these as the six underlying work
patterns of the AI development
revolution and figure out where you want
to level up your own work so that it is
more effective. Now, maybe you're
really, really good at vibe coding.
Maybe what you need to learn about more
is planning. Maybe what you need to
learn about more is review. There's
things we can all grow in. I personally think I can get better at test-driven development. I think I can get better at telling the AI to run unit tests as I build. That's an area of
growth for me. Everyone has their area
of growth. My goal with this is just to
lay out the patterns so that you can
jump on them and find them useful. I
hope this was helpful. I hope this
demystified some of the chaos and some
of everything that's changing with AI
right now in development.