
Six AI-Powered Coding Work Patterns

Key Points

  • The speaker critiques the “hack‑centric” view of AI‑assisted development as brittle and emphasizes the need for more stable, repeatable approaches.
  • By analyzing practices across industry leaders—founders, indie hackers, and product heads—they identified six proven work patterns that serve as reliable foundations despite the rapid churn of new tools and prompts.
  • The first pattern, **codebase mapping and onboarding**, uses AI to generate summaries, graphs, and PRs that accelerate understanding of existing codebases, even for non‑engineers.
  • Real‑world examples illustrate this pattern: Claire Vo leverages Devin for repo analysis and initial PR generation with ~80% success, then refines outputs with Cursor; CJ Zafir similarly integrates PRDs and planning into Cursor for seamless edits.
  • The article also offers an extensive review of AI coding tools, positioning the six patterns as “hidden stable elements” that developers can rely on while mixing and matching the ever‑evolving toolset.


**Source:** [https://www.youtube.com/watch?v=Z0wb0y5BVIY](https://www.youtube.com/watch?v=Z0wb0y5BVIY)
**Duration:** 00:22:28

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=0s) **Stable Work Patterns Amid AI Tools** - The speaker critiques fragile AI hacks and proposes focusing on six proven work patterns that unify diverse tools, offering a comprehensive guide to stable development practices.
- [00:03:20](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=200s) **AI-Powered Code Mapping Tools Overview** - The speaker outlines various AI utilities (such as Repo Prompt /onboard files, Cursor rules, Claude Code, Windsurf Cascade, and Aider) used for repository context extraction and onboarding, emphasizing pattern recognition over a single "best" solution.
- [00:06:38](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=398s) **Importance of Planning in AI Development** - The speaker stresses that thorough planning, using tools like Cursor Composer, prevents verbose errors, high-load throttling, and model refusals, enabling reliable execution and easy rollback.
- [00:11:20](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=680s) **AI Debugging: Tools, Limits, Tips** - The speaker outlines how AI can streamline bug detection and fixing, with examples from industry leaders, while emphasizing the need for clear error traces, organized code, and human oversight to mitigate tool limitations and regression risks.
- [00:14:26](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=866s) **AI Coding Consistency and Context** - The speaker stresses the need for human sign-off, clear rule files (e.g., CLAUDE.md or .cursor/rules), and robust context-engineering practices to enforce consistent, drift-free code generation, citing tools like Model Context Protocol, Cascade, and Claude sub-agents for multi-agent workflows.
- [00:19:24](https://www.youtube.com/watch?v=Z0wb0y5BVIY&t=1164s) **AI Prompting Empowers Non-Tech Builders** - The speaker argues that using AI prompts to write code makes building applications accessible to anyone, dispelling the myth that technical expertise is required.

## Full Transcript
You know, one of the challenges with artificial intelligence and development, or building in code, is that everyone is going for "this is my particular hack," "this is my gimmick that I use," "this is the tool set." And it all feels really brittle. It feels like if you don't use the tool, and if you don't use the prompt, and if you don't build exactly that thing, it's not going to work. I wanted to do something a little bit different. I wanted to go through and look at work patterns that I see being practiced across multiple industry leaders. Now, these people are not all founders. Some of them are indie hackers. Some of them are product leaders. They exemplify how rich and how varied the opportunity is right now for people who want to start building in software. They also exemplify how many different tools you can use. I include lots of tool reviews in this article; you will not be short of those if that's your thing. It's probably the most comprehensive look at industry coding tools with AI anywhere on the web.

That being said, I think the heart of this approach is really the six proven work patterns that I've been able to uncover, and the examples I can give you of how different tools can be stitched together to create those work patterns. I view those work patterns as the hidden stable elements in an otherwise endlessly changing sea of new tools, new patterns of prompting, new leaders that come along and give you new hacks, new applications. Every time I turn around, there's a new thing in AI, right? And so what I wanted to get to were these patterns that were battle-tested, that you could go back to and bet on. So with that in mind, let's look at all six, and look at examples from leaders along the way. Number one: codebase mapping and onboarding.
You might not think of that as a development pattern, but it really is. You can use AI to quickly understand existing codebases. You can generate maps or summaries or graphs for onboarding or legacy dives. This is especially useful if you have an existing codebase, obviously, and if you want to bring someone in on your team quickly, you can treat AI output as a starting point for further refinement. In this context, this can accelerate onboarding very rapidly for new builders. It can extract high-level context really quickly, and it's very useful for people who are non-engineers to get to know a codebase. I've actually written some prompts that are designed to help you get to know a particular coding pattern that you're using. Very similar process.

Here are some examples from actual leaders who use codebase mapping and onboarding. Claire Vo uses Devin for initial repo analysis, and then she refactors in production codebases based on Devin's assessment. She'll use Devin to generate PRs or tests, with roughly an 80% success rate on the first try, she says, and then she can transition into Cursor for edits down the road. CJ Zafir loads PRDs and plans into Cursor (specifically Cursor's .cursor/rules file) to establish persistent context, and then uses Gemini 2.5 to scan a large codebase. Eric Provencher primes Claude Code with Repo Prompt's /onboard file for context, and then enables structured XML edits from there. See, these get very tactical and specific, and by themselves you would be like, "I'm really, you know, overwhelmed. Repo Prompt's /onboard file? What is that? Cursor's .cursor/rules file? What is that?" You can look those up, and I give you more information in the report to dig into it.
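To make the mapping idea concrete, here is a hypothetical onboarding prompt of the kind you could point at a repo in Claude Code, Cursor, or Aider. The wording and the `ONBOARDING.md` filename are my own sketch, not any leader's actual prompt:

```markdown
You have full access to this repository. Produce an onboarding map for a new contributor:

1. A one-paragraph summary of what this project does and who it is for.
2. An annotated directory tree: each top-level folder and its role, one line each.
3. The five most important modules, with a short description of how they interact.
4. Any conventions you can infer (naming, error handling, testing layout).
5. Where a new contributor should start reading, and why.

Keep the whole map under 500 words so it can live in the repo as ONBOARDING.md.
```

Saving the output as a shared file in the repo is one way to treat the AI-generated map as documentation for collaborative onboarding.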
It's not that you are going to be lost looking things up. It's that you will be lost if you don't see the common pattern. And that's what I want to give you. Claire is doing initial assessment with Devin. CJ is loading PRD plans into Cursor rules. Eric is loading Claude Code with Repo Prompt's /onboard file. They're giving it the context it needs to map. Gergely Orosz uses Claude Code's and Codex's command line for session context on larger projects. Melvin uses Windsurf's Cascade for auto-context on large databases. You see all the different tools you can use for this. Simon Willison uses a Claude Code /onboard file as well, but he uses it for GitHub Actions context. You can mix and match these tools a lot of different ways. And so the goal isn't for me to give you the single golden best way, because someone's going to come along, GPT-5 will come along, and it will change the way we think about what is best on a particular-tool basis. It will not change the fact that we will use AI for codebase mapping and onboarding. That's going to stay, and that's why I call it out.

So what are some tools that get mentioned a lot here? Claude Code; I've mentioned Devin, Cursor, Windsurf. I also want to call out Aider; Aider is helpful here as well. The principles to pull out of this first mapping-and-onboarding piece: point the AI at a repo. You want to prompt it for summaries or graphs and then refine from there. I would start with a small codebase. I would update context files regularly. And for teams, you can share those AI-generated docs, almost like documentation, for collaborative onboarding.

Let's jump to pattern two: planning-first development. I actually teach this one a lot when I am teaching my Maven course.
You want to use AI as an architect to outline plans, functions, logic, edge cases before you generate code. Then you approve and refine and proceed. You can actually simulate pseudocode as a way of getting there. So you can have Claude code up a React artifact, and it can be pseudocode that helps you understand what you want. This prevents tangential outputs. It ensures coherence. It ensures maintainability. And conveniently, all the work you did doubles as documentation.

We'll go back to these leaders to see how they're doing it. CJ Zafir asks Cursor for approaches and plans, generates action lists, builds in chunks, and uses o3-mini to develop, I kid you not, 40-step plans. Dan Shipper sets up opponent processors in Claude Code (parallel sub-agents with opposing goals) and then synthesizes outputs on long runs. Again, it's a planning action. Eric Provencher delegates planning to o3 via chat, and then Claude applies edits. Gergely uses plan mode in Claude Code and the Codex command line for roadmap sub-agents for parallel tasks. I want you to think about this as: these leaders have tools they feel good about working with, and then they're going through these workflow motions in a way that makes sense to them. You too will probably have tools you prefer. And even if your tools are not the same ones (maybe you're not using Claude Code, you're using something else), you can still go through the same process. You can plan for Lovable just like you plan for Claude Code. Peter Yang demos three agents in Claude Code with custom commands like "think ultra hard" for quality plans. Riley Brown uses Cursor Composer for planning in diagram phases. And he finds that if you're not planning, outputs can be verbose and wrong. And that's actually something that we see across the board.
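A planning-first prompt in this spirit might look like the following sketch. The wording and the example feature name are invented for illustration, in the vein of the approaches described above:

```markdown
Before writing any code, act as the architect for this feature: "weekly activity email digest".

1. Restate the goal and list the assumptions you are making.
2. Break the work into numbered steps small enough to build and verify one at a time.
3. For each step, name the files you expect to touch and the edge cases to handle.
4. Flag anything ambiguous as an open question instead of guessing.

Stop after the plan. Do not generate code until I explicitly approve the plan.
```

The final line is the important habit: the approval gate is what turns the plan into something you can roll back to later.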
Dan Shipper notes that you can have high-load Claude throttling issues, which you can get if you're not planning well. And you can get model refusals if things get pushed too far; Claude can just absolutely refuse if it runs out of context. CJ Zafir notes that Windsurf sometimes will break after initial steps. So essentially, the pitfalls are reminding us of the importance of planning, because if you have something that's somewhat brittle like that, you'd better have a solid plan so that you can roll back to the plan and continue to follow it. The people I know who are able to build successful applications put their 80/20 effort into planning first and then execution, because they can always go back to the plan side. So the principles are: prompt for a breakdown. Look for a sketched solution design, something that actually gets you a full picture of what you're trying to solve. Approve that plan before you code. And then use whatever tools you need to use to actually plan out the layout of complex features. You can do it in Claude Code, as you can see. Some people are doing it in Cursor. Some people are doing it in Windsurf. You can probably do a version of it in Lovable as well. And then, wherever you can, go back to the plan over time. You need to have habits of work that push you back to the plan.

Okay, pattern number three. This one is related to tools, and it is not going anywhere. You know what the fastest tool to $100 million is? It's not Cursor anymore. It's Lovable, the vibe-coding tool. It is such a big deal that Microsoft launched their own copycat version for GitHub called Spark. I think natural-language-driven coding, or vibe coding, is in a sense its own pattern. I wanted to give it its own airtime.
You prompt in natural language for code generation. You iterate for refinements. It's ideal for prototypes, for scripting, for exploration. And you can honestly build real applications. I know people who have built a CRM for small services businesses off of Lovable. I know people who have built small applications focused on crypto monitoring off of Lovable. The strength is speed. You can get through things very, very rapidly. Absolutely zero setup in many tools, and non-coders are not blocked. The only thing blocking you, if you are a non-coder, increasingly is the clarity of your intent. If you are clear about what you want, you can make it.

So Riley Brown uses Cursor for a 100% AI workflow and then uses Replit to go quickly from idea to deploy. Riley's demonstrated a CRM, similar to what I was talking about, in one prompt, and so he can quickly game out the UI. Melvin Vivas prompts Windsurf for deploys and switches to Gemini for the UI. Similar thing: Peter Yang types app descriptions into Claude Code and asks agents to build them. You'll notice, even though it's called vibe coding and I talk about Lovable, these leaders are not just confining themselves to prompt-driven tools like Lovable or Bolt or GitHub Spark or what have you. They're using Cursor for this. They're using Claude for this. You can vibe code in these tools. CJ Zafir prompts Cursor for tweaks and v0 for UI. CJ wrestles with the idea that if you have ambiguous prompts, you are aiming the code off base. And CJ is not the only one: Peter, Melvin, Riley, and others have mentioned it. I have seen cases where, if you don't prompt with intention when you're using natural language for prompting, you end up steering your codebase awry.
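The "clarity of intent" point is easiest to see side by side. Here is a hypothetical example of an ambiguous prompt versus an intentional one (the feature and component names are invented for illustration):

```markdown
Ambiguous (aims the code off base):
"Make the dashboard better."

Intentional (steers the codebase where you want it):
"On the dashboard page only: add a date-range filter (default: last 30 days)
above the revenue chart. Reuse the existing dropdown component and styles.
Do not change any other page, and do not touch the API layer."
```

The second version states scope, defaults, and explicit boundaries, which is exactly the intentional prompting these leaders describe.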
One of the biggest challenges with natural-language-driven development is that you have to interpret an ambiguous human phrase into very unambiguous code. The fact that it kind of works is a miracle. And it's getting much better. Lovable actually launched Agent, I think, last weekend. And really, the focus of Agent is to help you burn fewer tokens on Lovable by making surgical fixes and improving the accuracy of code editing and updates, because Lovable is aware that you need the option to refine and iterate as you see what the system initially infers about your human-language intent. So, application principles for vibe coding: you've got to describe very clearly. You have to review for security and style. You should start small and iterate. And you need to pair it with planning. I emphasize that so much; I said it above. Pair it with planning. If you want to read more, I have a whole separate article on vibe coding, I think it's called the Vibe Coding Bible, and it will help you get deep into it. I think it's a discipline that everybody would benefit from playing around with, given the strength of these tools.

Let's move to pattern four: AI-augmented debugging. Bug, bug, bug, bug, bug; I hear "bugs" so many times. You want to pull AI into debugging. You want AI to help you analyze errors, to suggest fixes, to loop until resolved, to automate fix-run cycles and tests as much as you can. So, examples from leaders who have tackled this: Claire Vo uses Devin for debugging with Datadog and generates tests with human review. Riley Brown uses Cursor's terminal access for API setups and then fixes them via diffs. Simon Willison reviews commits file by file in Claude Code.
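The speaker mentions having a root-cause prompt he can share. Here is a hypothetical sketch of that kind of prompt, paired with a pasted error trace (the trace shown is invented for the example):

```markdown
Here is the full error trace, pasted verbatim below the line. Do not guess beyond it.

1. Identify the root cause, not just the symptom. Explain the chain from
   trigger to failure in plain language.
2. List the files involved and the one you believe is at fault.
3. Propose ONE minimal fix. Name every file it would touch.
4. List any regressions the fix could plausibly introduce.

Do not apply the fix yet. Wait for my approval.
---
TypeError: Cannot read properties of undefined (reading 'map')
    at OrdersTable (src/components/OrdersTable.tsx:24)
```

Pasting the trace verbatim matters because, as noted below, these tools cannot see the previews or console on your screen.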
And if you think about what this takes, you have to recognize that it's not always going to work well. I have seen cases where you just pound on that bug over and over again and you just don't get anywhere. You need to recognize that any fix may introduce regressions; reducing that was actually the goal of the new Lovable Agent build. Logical bugs may need humans. So if Windsurf is just stopping mid-session, which Zafir had happen, humans may need to step in to figure out what's going on. And Devin in particular (not to pick on the tool) may underperform in messy repos. So keeping your code organized and neat is one of those hidden success stories for builds.

So from a principles perspective, you want to be responsible for making sure that your error traces are very clearly presented to the AI. One of the hidden things with these tools is that they can't see the localhost previews, the previews on your screen that you sometimes see if you're prompting in Lovable. They don't know. You have to paste the error traces that you're getting clearly into the model. You should be able to prompt for a clear root-cause assessment. I have a prompt for that that I can share: a clear "dig in, find the root cause, and then come back with a proposed fix." And then you need to make sure that you are fixing cautiously. Make sure you know what files are being touched. I would recommend sandboxing if it's a real production build, so that you can see it working in the sandbox before you deploy to production.

Okay, let's jump to pattern five: AI-assisted code reviews and refactors. Imagine AI as a pre-pull-request reviewer. Prompt it for feedback, automatically refactor, et cetera.
So, Claire Vo chains Devin to ChatPRD for PRs, and Cursor for surgical changes. It enables Devin to act as an initial reviewer. Gergely uses Cursor and Windsurf for inline edits during rollouts. Simon Willison commits file by file after reviewing Claude Code's output. So some of the pitfalls here: if you trust blindly, if you do not actually check what the system is doing, you can get nasty regressions. Cursor can edit outside scope, per Zafir, but it's not just Cursor. I have seen report after report: sometimes Lovable does this, sometimes Claude Code will do this. Complexity can lead to confusion, and so you can get these larger, beyond-scope edits. Now, the principle to call out here is that you want to prompt for a review, and you want to prompt very specifically for the constraints and guardrails around that review. If you want it to review a particular part of the code, say it. Say what it is. Constrain it as much as you can to avoid that overediting problem. Make sure you have humans for final sign-off. And make sure that you have clear rules on how you want the resulting code to look and work, and any dependencies that it's related to.

Pattern six: context engineering and consistency enforcement. Yes, a lot of words there, but we'll get there. You want to maintain AI-readable files (like a CLAUDE.md markdown file or a .cursor/rules file) with clear guidelines that prepend to prompts for on-target outputs. This is what I mean when I talk about maintaining your house style. It's going to reduce drift. It will reduce hallucinations. It reinforces best practices across your codebase, and it compounds benefits. CJ Zafir uses .cursor/rules and Cursor for this. Eric Provencher uses CLAUDE.md in the repo.
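As an illustration of what such a house-style file can contain, here is a minimal hypothetical sketch of a CLAUDE.md (a .cursor/rules file would carry the same kind of content); the stack and rules are invented for the example:

```markdown
# CLAUDE.md: project rules (hypothetical example)

## Stack
- TypeScript, React, Node 20, Postgres via Prisma

## House style
- Functional components only; no class components.
- Validate all user input at the API boundary; never trust the client.
- Errors: throw a typed AppError, never a bare string.

## Boundaries
- Never edit files under /migrations or /vendor.
- Ask before adding a new dependency.
```

Because this file is prepended to every prompt, the rules compound: each generation starts from the same guardrails instead of drifting.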
Gergely uses Model Context Protocol to handle context limits. Melvin Vivas uses Cascade for auto-context. One of the things that you see here is that I am combining both context engineering and consistency, because I think they're related. If you have consistent rules in a markdown file, you can more reliably go through Model Context Protocol to get context, to surgically get context and not over-get context. It's really important that you have root files with clear principles and examples. I also want to call out that just a couple of days ago, Claude Code sub-agents launched. Sub-agents enable multi-agent workflows, like opponent processors or parallel task agents, and this allows you to build prompt-based setups that are quite complex. Now, if you can get into responsible manual orchestration of these sub-agents, and give them separate rules and separate tasks, you can do incredible, incredible things. You can have one agent that's expanding your PRD. You can have another agent architecting. You can have a third agent building, et cetera. It is really, really important, though, to follow these principles as you apply it, because sub-agents essentially just accelerate you. They don't actually give you different capabilities. They just enable you to go faster if you're applying these principles successfully.

And so I want to go through these six again and make sure you really understand them. I don't want you to be confused by the tool mentions. I want you to think about the principles. So, codebase mapping and onboarding: the durable principle here is that you are pointing the AI at the repo, you are prompting for summaries of the repo, and you're refining manually. Planning-first development:
You are prompting to understand what you want to build first. You're developing a plan first. You're approving the plan before coding. It's really, really critical. Vibe coding, or natural-language-driven development: it doesn't have to be in a vibe-coding tool like Lovable, although of course for many people that's a great spot. You must describe clearly what you want; it goes back to that planning piece. You have to review it for security, for style. You should probably start small if you haven't done it before, and be willing to iterate thoughtfully, sometimes in parallel. I will sometimes build with Bolt and with Lovable and with Replit in parallel to see what gets me the farthest. Pattern four, AI-augmented debugging and testing: you want to make sure that you are pasting your actual error traces and communicating clearly what's wrong; that you are being very specific and prescriptive about how the system will get to root cause and about the kind of suggested fix you will accept, in line with your house rules. And you should apply fixes cautiously, especially in production databases. Pattern five, AI-assisted code reviews: you want to be in a place where you can prompt for a review of the code and then constrain the review to just the space you want looked at. Keep in mind, humans will have to do a second pass, but you can use a tool like Devin to go and get a lot done quickly. I think Claire's a great example there. Finally, pattern number six, context engineering: it is really important to look at context engineering as an opportunity to reduce drift and reduce hallucinations through two basic principles. One is maintaining AI-readable files, and the other is being clear about your prompts for on-target outputs.
And so when Gergely uses Model Context Protocol, it's a combination of having clear rules and then having clear prompts that tell Model Context Protocol where to go. So, these are the six patterns, and I'm going to dive deeper into them. I'll talk about each of the leaders in the article. I'll talk about all of the different pitfalls that we've seen come across from those tools. No tool is perfect. But the thing I want to emphasize is that this overall review of workflow patterns is durable. When GPT-5 comes out (maybe later this week), you are not going to lose your way, because you can slot it into these durable patterns. It's something you can hang on to in a world that's changing very, very, very fast.

I want to close with a question that I get a lot: why should I care? Why should I care? And I want to tell you that prompting for development (using code to develop with AI) is one of the easiest and most efficient ways I have ever seen of helping people understand what AI can actually do, because it's so clear: the prompt runs or it doesn't run. And so even if you don't plan to ever be a builder, I encourage you to think about exploring Lovable, exploring a simple tool that lets you play around with developing code to express your idea. The most powerful thing I have shared with many people in the last year is that the old era's fears, that they were not technical enough, that they could not be their own technical founder, that they could not be their own builder, are not true anymore. You can be your own technical founder. You can be your own builder for any idea you want to create. Now, I have yet to see people really not be able to do something because they didn't have the knowledge.
When you know how to ask AI to teach you, which is something I've written about, and when you know how to apply these six work patterns in ways that enable you to build with AI, you are in a position to make the dreams of what you want to make come true. Now, I'm not saying, and I don't believe in, a future where all of us will only build our own apps and we won't ever buy apps from each other. I don't think that's true. Cooking has been around for a long time. We have kitchens, we cook, but we still go out to restaurants. We still DoorDash. In the same way, we're still going to buy software. But I think knowing how to cook and knowing how to build are equivalently useful skills. And actually, knowing how to code is not more difficult than knowing how to cook now. It's become much simpler thanks to artificial intelligence. And so if you are listening to this and you have never tried building, this is my plea to you. I want you to not be left behind. I want you to be able to try. And that is why I have taken the time to break out these tools and these leaders' examples into discrete, specific, durable patterns. And if you are building, the durable patterns are pretty helpful, too. Because one of the things I hear from people who build is that it's hard to keep up. Everything changes. These durable patterns aren't going to change. They may have new tools that slot in, but the patterns are still going to be there. For example, vibe coding will still be there tomorrow. It's not going to go anywhere. And so, look at these as the six underlying work patterns of the AI development revolution, and figure out where you want to level up your own work so that it is more effective. Now, maybe you're really, really good at vibe coding.
Maybe what you need to learn more about is planning. Maybe what you need to learn more about is review. There are things we can all grow in. I personally think I can get better at test-driven development. I think I can get better at telling the AI to run unit tests as I build. That's an area of growth for me. Everyone has their area of growth. My goal with this is just to lay out the patterns so that you can jump on them and find them useful. I hope this was helpful. I hope this demystified some of the chaos and some of everything that's changing with AI right now in development.