
Nine Overlooked Lessons for AI Builders

Key Points

  • Building AI‑driven products is challenging because each prompt is essentially a piece of the final system, and many developers overlook recurring pitfalls throughout the journey from chat interfaces to fully integrated apps.
  • Chat models are “weakly intelligent”: they lack direct access to a user’s data environment, making them useful as rapid task starters but insufficient for high‑precision, end‑to‑end workflows.
  • This weakness creates a strategic split: precise, enterprise‑grade AI solutions are needed for complex tasks, while casual‑use AI tools must fight against the dominance of the weak‑intelligence layer that already satisfies most everyday needs.
  • The sticky, low‑bar nature of weak AI mirrors how Instagram rode the mobile phone wave, suggesting that despite its limitations, weakly intelligent chat can become a pervasive consumer platform and reshape how both builders and ordinary users engage with AI.


# Nine Overlooked Lessons for AI Builders

**Source:** [https://www.youtube.com/watch?v=bjcDgqKgvho](https://www.youtube.com/watch?v=bjcDgqKgvho)
**Duration:** 00:25:26

## Sections

- [00:00:00](https://www.youtube.com/watch?v=bjcDgqKgvho&t=0s) **Overlooked Lessons in AI Building** - The speaker outlines nine common misconceptions about using chat AI—highlighting its weak intelligence, data isolation, and non‑specialized nature—to guide users from casual conversation to building robust AI‑powered applications.
- [00:04:45](https://www.youtube.com/watch?v=bjcDgqKgvho&t=285s) **Rethinking Development Vocabulary for AI** - The speaker argues that while AI accelerates coding, developers remain constrained by legacy, data‑centric design patterns and fragmented UI flows, necessitating a new building vocabulary that treats whole conversations as core user experiences.
- [00:08:34](https://www.youtube.com/watch?v=bjcDgqKgvho&t=514s) **Planning Is the Real AI Leverage** - The speaker argues that people underestimate AI because they ignore the outsized value of intentional, well‑planned work—enhanced by AI tools—where thorough planning, not the technology alone, yields the greatest impact.
- [00:12:21](https://www.youtube.com/watch?v=bjcDgqKgvho&t=741s) **Talent-Driven Coding Tool Preference** - The speaker argues that developers will select AI coding assistants (Claude, Cursor, Lovable) based on their skill level and existing talent, leading to brand loyalties akin to the 1990s Mac‑Windows rivalry rather than purely on technical capabilities.
- [00:15:50](https://www.youtube.com/watch?v=bjcDgqKgvho&t=950s) **Data Middleware Bottleneck** - The speaker explains how corporate lock‑ins and lack of a data‑middleware layer restrict AI access to enterprise data, creating costly token consumption and hindering AI's effectiveness.
- [00:20:33](https://www.youtube.com/watch?v=bjcDgqKgvho&t=1233s) **Standardizing Work for AI Readability** - The speaker explains that the growing value of AI is driving people to adopt uniform, tokenizable templates so machines can process their work, a shift that will blur the line between AI‑generated and human‑crafted output until a new professional standard emerges.
- [00:24:03](https://www.youtube.com/watch?v=bjcDgqKgvho&t=1443s) **Key Factors for Effective AI Agents** - The speaker outlines nine critical considerations—including precise token‑depth control, aligned incentives, privacy‑driven data middleware delays, under‑invested distribution experience, and the rise of tokenizable templates—that shape intentional and successful AI agent deployment.

## Full Transcript
Building with AI is hard. Every time we prompt, we're building something, even if it's as simple as a conversation with AI. These are the nine most overlooked lessons that I have seen come up again and again and again as I have coached people through the journey of getting from the ChatGPT interface to actually vibe coding and building apps with AI. I've done it for dozens and dozens and dozens of people. I've done it at company scale. This is what pops out to me that people don't understand, and that I wish they would. And if you think to yourself, "I'm not a vibe coder, Nate. Why would I care?" You care because this shapes the entire strategic landscape, and you care because this also shapes the way you engage with chat today, even if you're just a casual chat interface user.

Number one: chat is not a specialized tool. And that's a very interesting problem. I have seen claim after claim after claim that the chat interface won't stay: it's heavy on UX, it's not intuitive, how can it get 800 million users for ChatGPT? I have come to a deeper understanding. Chat is dangerous, and it's a problem, because it is a weakly intelligent layer. And you will tell me, "Well, it depends on the model, Nate. I have a great model. Why is it weakly intelligent?" It is weakly intelligent because the true intelligence of the system depends on the data inputs, and most chat models are strikingly isolated from the data environment you operate in day to day. I will stick to it: it is weakly intelligent. It is good enough to be a task starter; you can get going quickly, but it's not ultimately good enough to finish the job. That is a lot of the promise of AI agents: that they will be good enough to integrate with the data layer. We have not seen that really transpire yet in 2025.
So what does this mean for builders and users? First, this incentivizes serious AI builders that require precision, because if you're doing a big task that requires precision, you cannot do it well in a chat model. It's not good enough. On the other hand, this disincentivizes AI tools that fall into the casual-use category. If you are a casual-use tool builder, if you are interested in that category, it is tougher, because the weakly intelligent layer will eat you alive. People are habitually addicted to weakly intelligent AI. It is good enough for most things. Weakly intelligent AI task-saturates for most people. And so I've been wondering, and a lot of people have been wondering: is there an Instagram, a wide consumer success story that should live off the top of a new underlying technology? Insta lived off the mobile phone. When the iPhone took off, Insta was the runaway success. I still remember that. Maybe ChatGPT is the Instagram of this era. It is showing that kind of consumer stickiness, and it is making me think that this kind of weakly intelligent AI is surprisingly sticky for casual application. I wish people understood this better, because it actually suggests a lot of upside for people building serious tools: the weakly intelligent AI is just good enough to get you started, and I've seen this happen over and over again. Anyone working seriously with AI does not finish the work in ChatGPT, in Claude, in whatever tool you're using. They may start there, but they're moving elsewhere to get the job done if they're real craftspeople. That's number one on chat as a tool.

Number two: one-turn versus multi-turn conversation. Almost everyone I encounter who is getting started on a building journey, or getting started with AI, thinks in one-turn conversations.
You know what? That's really natural, because AI is reinforcement learned, and RL is built for one-turn conversations. The AI itself is architected for that. But the value is in multi-turn. That's where you focus, that's where you find, that's where you refine, that's where you do real intellectual work. And so if you think about it, a good prompt is not necessarily designed to get you a one-shot answer for the whole thing. Most people using AI seriously think about it as: I have an anchor prompt at the top that shapes the parameters of this thread, and then the thread itself is what exposes the intelligence I need to do interesting work: the conversation, the back and forth, the refinement. But casual users don't understand that, and we don't understand that as builders, because we tend to build and assume these systems are one-turn systems. One of the biggest gaps in AI today is that we aren't building for conversations. We're building for chats. We should be thinking about how you surface intelligence out of conversations and how you make whole conversations a fundamental unit of the user experience, because increasingly it is. But we're stuck searching in the sidebar for these chats, and it's just really painful. It's really awful.

Number three: we need a new building vocabulary if we are going to build successfully. Most people who start to build don't get stopped by AI. They get stopped by 2000s- and 2010s-era (and maybe earlier) building systems. If you think about it, the fundamental way we've coded has not evolved that much.
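The anchor-prompt, multi-turn pattern from lesson two can be sketched in a few lines. This is a hedged illustration, not any particular vendor's API: `call_model` is a stand-in for whatever chat-completion call you actually use, and the role/content message format simply mirrors the common chat convention.

```python
def call_model(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion call; returns a dummy reply."""
    return f"(model reply to: {messages[-1]['content']!r})"

class Conversation:
    """A conversation as a first-class object, not a series of one-shot prompts."""

    def __init__(self, anchor: str):
        # The anchor prompt at the top shapes the parameters of the whole thread.
        self.messages = [{"role": "system", "content": anchor}]

    def turn(self, user_text: str) -> str:
        # Every turn re-sends the full thread, so earlier refinements
        # keep shaping later answers: the value lives in the multi-turn.
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

convo = Conversation(anchor="You are reviewing a pricing model. Be precise.")
convo.turn("Draft the revenue assumptions.")
convo.turn("Now stress-test assumption 2.")  # refinement, not a fresh one-shot
print(len(convo.messages))  # anchor + two user/assistant pairs = 5
```

The point of the sketch is that the `Conversation` object, not any single prompt, is the unit of work: the anchor frames every later turn, and the refinement happens across turns.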
Yes, AI-enabled coding is making us able to produce that code faster, but the vocabulary of building still revolves around having a database, being able to transmit data securely to and from the database, and doing operations with business logic against the database. None of that has changed. And so when people want to vibe code and they run into trouble, it is almost always not the AI. It is almost always that they are struggling with how to build a data-driven application of some sort. Maybe it's the transactions piece that messes them up. Maybe it's login. Maybe it's some custom integration they need. And what that suggests to me is that our vocabulary of building is due for a change. And there is opportunity on the table for people who can figure that out. With cloud as an example, we still needed to know files and file structures and code, but we could care a lot less about where all of that lived, because cloud made it possible for it to live anywhere on the planet. You can put that code anywhere you want and you'll be good to go. Now we need the same kind of conversion to occur around how we take the intelligence we develop in conversational threads and get it into a build environment that abstracts away from the fundamentals that are tripping up lots and lots of would-be builders. It's like we are introducing people to a kitchen, and the first thing we're doing is saying: this is how a refrigerator works; architecturally, you need to understand a heat pump to make this work. Or: this is how your burner processes natural gas, or how your electric induction burner inducts. If you have to know that to cook, you're not really getting it done. Now, tools like Lovable have done a great job trying to abstract that, but it is not seamless yet.
We need tools that will take conversational intelligence, which I talked about earlier in this video, and make it the central unit of understanding. I want to suggest something slightly spicy: I think the conversation is due to take the place of the file. Yes, we may have files underneath. I'm not saying we won't have an underlying substrate that's files, because that's a surprisingly sticky workspace. But systems need to be enabled that let you build over the top and hide the underlying complexity. Open the refrigerator door, there's food there, start cooking, right? That's the analogy. And that is the dream of these AI building tools. And I am here to say they are only partway there, and I think most of them would admit that they're only partway there. It is still the case that it is complex to hook up a real application. You have to understand something about data. You have to get into the wires and integrate stuff. If we really want to enable the power of AI agents to build across conversations, we have to make conversations a fundamental unit of computing. And we haven't done that yet.

Number four: AI planning and underestimation. Most people still dramatically underestimate what AI can do. And I think that's not a surprise; that's not the insight. What's interesting to me is that you can put a multiple on that underestimate if you tell them that there's more value in planning than in the AI itself. If I plan my conversation for 2x the length of time I would have allocated otherwise, if I plan it for 20 minutes instead of 10, I get far more value, because I took the time to think about the conversation I wanted to have. Now, we're all ripping off casual conversations all day. That's not what I'm talking about.
I'm talking about an intentional conversation where you're trying to understand and build really interesting intellectual work: a complex document, a piece of code, whatever it is. The planning matters, and people dramatically underestimate AI precisely because they don't understand the leverage that planning can provide. Now, AI can help with planning. It's not just me and my brain and a pencil anymore. You can use AI and prompts and things like that to help you get to clarity, but you still have to invest in the planning stage. And so when I talk about people underestimating AI: yes, they probably don't realize the power of the newest models; yes, they don't realize that there are specialized tools that enable you to do really cool stuff. All of that is true, but most people really don't understand that the leverage is in the planning if you're doing serious work. And you could say that in management theory the leverage was always in the planning, right? We were always supposed to do the planning. But with AI, power-law returns are accelerated. In other words, the things that were true before are even more true now. And so planning has more leverage, because you have more intellectual horsepower behind you when you execute. So get the planning right, or you end up in really wrong territory.

Number five: build tools are in a really interesting position right now, and I haven't had time to really dig into this and unpack it, but it's important to talk through. Fundamentally, you have three classes of building tools. You have Cursor, which is a dedicated development environment powered by AI.
You have Claude Code, which is a terminal where you can invoke Claude, and Claude will just build for you in the background; you don't really touch the files. And then you have the AI-powered prompting build tools like Lovable. Those are your three basic classes. What's interesting is that Lovable is aggressively scaling up its capability set. Lovable is tackling exactly what I'm describing here, as it enables you to write more and more complex applications and finish them in the next year or two, with the power of underlying models scaling up. Hello, GPT-5. We don't know if it will work, but maybe it will. They've certainly been hinting that it's a strong model. But the point is, the models will get better. Lovable is going to get better. Lovable is going to eat Cursor's lunch from the casual-user side, because before, casual users eventually hit a point where they had to graduate to a more complex development environment to get something done. I have seen personally how much those tools have evolved. Replit is also in that class. They've evolved to the point where you can do some serious work with those tools, and they are continuing to evolve very rapidly, which means you can do more and more serious work in vibe coding tools. At the same time, Claude Code is taking advantage of the agent layer. Claude Code is saying: you don't want to mess with files; you can just type and I'll go do the work. And because it's Claude Code, and it was developed to be a development assistant within Anthropic, it is excellent at doing exactly that. The Anthropic team built it to help them, and boy, does it show. It's a really good model.
And so Cursor is squeezed on the serious-builder side by Claude Code and on the casual-builder side by tools like Lovable. And the question becomes: is there a middle ground? Is there a middle ground for a development environment where you're going to have some AI agents, you're going to have some conversations that lead to code, and you're going to have some hand coding? That is one of the biggest and most interesting questions in tech, right? And my thesis is that this is going to be a talent-dependent heuristic. You will pick the tool across those three depending on the talent set you have. Your top-tier engineers are eventually going to move to Claude Code or something a lot like it, because they have the power of the LLM, they have the agents they can go after, and Claude Code will evolve a way to get into editing code very easily. People are already rolling their own; that's just going to become part of the fabric of the tool. Then the mid-tier engineers who still want to have some hands-on code, mid-tier talent, are going to stick with Cursor, because it feels like the development environment that is familiar to them. And the people who are aggressively getting into tech for the first time, who never went to engineering school, are going to be living and dying on Lovable. And in a sense, it will be less and less about the capabilities, and more and more about the brand affinities that these tools produce. We are going to get into a place like the Mac and Windows wars of the '90s, where people have a brand affinity for something, and they really believe in it, and they will go to the mat for it.
But if you dig underneath, it's because their life experiences line them up where it makes more sense for them to use that tool.

All right, let's move on. Number six: let's talk about token depth. People don't realize that these AI tools we use don't have a common token depth. What I mean by that is the amount of tokens they are willing to burn to get a solution done varies, is not transparent, and is something the model makers are incentivized to constrain and users are incentivized to get more of. When you hear this talk of declining quality, and I see people saying the quality of some tool is declining so often that I lose track (Cursor had it, Claude Code had it, others), what you see under the surface is that people can't measure the intelligence they're getting, but they suspect, based on the fingertip feel of what they're using, that something has changed in the model. And if you look under the surface, often what has changed is that there are fewer tokens available to be spent, because tokens aren't cheap. And so model makers are incentivized to constrain them a little bit once they get product adoption. Ironically, token depth is nonlinear. The primary value of agents is to increase token depth, because problems tend to be token fungible. Let me say that again: the primary value of agents is to increase token depth. That comes directly from a study that Anthropic conducted, which said that the primary value of multi-agent systems is to achieve additional token burn against problems. And I wish people understood that better, because at the end of the day, token depth is something that is very hard to control right now as a user.
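To make the token-depth idea concrete, here is a hedged sketch of what user-controlled depth could look like: the caller sets a token budget up front and the loop stops when it is spent, instead of the provider silently throttling. All names are illustrative; `solve_step` stands in for one real agent step, and the fixed 500-token cost per step is invented for the example.

```python
def solve_step(problem: str) -> tuple[str, int]:
    """Placeholder for one agent step; returns (partial result, tokens used)."""
    return f"worked on {problem}", 500  # invented fixed cost for illustration

def solve_with_budget(problem: str, token_budget: int) -> tuple[list[str], int]:
    """Burn tokens against the problem until the user's budget is exhausted.

    More budget means more depth; the user, not the provider, picks the depth.
    """
    spent, steps = 0, []
    while spent < token_budget:
        partial, used = solve_step(problem)
        if spent + used > token_budget:  # never overrun what the user paid for
            break
        spent += used
        steps.append(partial)
    return steps, spent

steps, spent = solve_with_budget("reconcile Q3 ledger", token_budget=2000)
print(len(steps), spent)  # 4 steps, 2000 tokens spent
```

This is the pay-as-you-go alignment the talk credits Manus with: the budget is explicit, so the tool's incentive to constrain tokens and the user's incentive to spend more are reconciled in one visible number.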
And anyone who launches something that goes beyond "think hard" as a command or a tool-click setting is going to give users an unprecedented level of control over how much they want to spend solving a problem. I actually want to call out Manus here, because Manus has done a great job of basically saying: you pay as you go. Now, there are other issues with Manus, but they've done a phenomenal job of saying, "We want our incentives to be aligned with yours." And so, if you want an excellent AI agent that solves a real problem, we can do that for you. You'll pay this much. You pay as you go. When you run out of tokens, you pay for more. Everybody's aligned. You don't have the same issues, because you can pay for the token depth you need to solve the problem. We need that kind of tooling across the entire ecosystem.

Number seven: data incentives are delaying the data middleware layer. It is a real problem, and it is becoming a worse problem. Let me explain what I mean. Salesforce locked off access to Glean. Why? Because Salesforce is incentivized to keep Slack data inside the house, so Glean, the AI tool, can't access it. That is happening everywhere. The problem is that data needs a middleware layer to actually realize the value of AI. AI needs data to work well. Remember when I talked earlier in this video about the idea that most of our chatbots are isolated? That's not an accident. That's the result of the fact that data middleware doesn't exist. Data middleware to translate large volumes of data into our AI experiences is largely missing. There are players that want to make that happen.
Lately, it's been agentic search: the idea that you can use AI agents to go and ferret out the data. That seems like a very expensive way to solve the problem. As I said earlier, agents cost token burn. You'd rather just have the data made available really easily so you can review it. But either way, data availability is more of a bottleneck than data itself. Data is being incentivized to be locked off, because boardroom after boardroom is being told: don't let your data out of the house. Everybody's putting up walls and moats around their data, and the intelligence needs that data to operate successfully. And so there is a missing middle of data-layer companies that I really want to see exist, which would enable us to have data-driven connections for our AI tooling. Without that, it is very, very difficult to get effective AI agents that operate as cross-functionally as those demos would have us dream. It is going to be difficult for ChatGPT to launch a magical AI office assistant if the data layer isn't fully integrated, if the data layer isn't fully there. Now, that's not going to stop them trying. That's not going to stop them negotiating with major players. Neither will it stop Anthropic or others who are in that race. The point is that we need a shift in data-availability incentives. People need to see that releasing, maybe not confidential information, but some structured information in a data stream, is productive for everyone. It's kind of like the HTML of the internet. You want to have a web page; sure, it has company information, but you want to have it. In the same way, you need data availability that makes it easy for AI agents and AI tooling to operate against your data, not just your graphical user interface.
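To make the missing middleware idea concrete, here is a hypothetical sketch of such a layer: a thin interface that hands agents structured records directly, instead of forcing them to burn tokens scraping a GUI or running agentic search. The `DataMiddleware` interface and the toy in-memory backend are illustrative names, not a real product's API.

```python
from typing import Protocol

class DataMiddleware(Protocol):
    """The contract an agent would code against: structured rows in, rows out."""
    def query(self, source: str, filters: dict) -> list[dict]: ...

class InMemoryMiddleware:
    """Toy backend over a dict. A real one would sit in front of Salesforce,
    Slack, or a warehouse, enforcing the owner's access policy at this layer."""

    def __init__(self, tables: dict[str, list[dict]]):
        self.tables = tables

    def query(self, source: str, filters: dict) -> list[dict]:
        rows = self.tables.get(source, [])
        # Return only rows matching every filter; no GUI scraping involved.
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]

mw = InMemoryMiddleware({"crm": [
    {"account": "Acme", "stage": "closed"},
    {"account": "Globex", "stage": "open"},
]})
# The agent asks for structured rows, not screenshots of a dashboard.
print(mw.query("crm", {"stage": "open"}))  # [{'account': 'Globex', 'stage': 'open'}]
```

The design point is that policy (what leaves the house) and access (how agents read it) live in one layer, which is exactly the incentive shift the talk is asking for.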
And now we come to number eight: distribution and data user-experience design. Yes, distribution is king. We've talked about that a lot; I'm not the only one to say it. Distribution is king in the age of AI. But the way to win distribution, if you're starting from scratch, is through seamless data experiences. And this is where I see so many tool builders go wrong. They invest in the isolated experience, the CX, and they don't invest in the data integrations that go with it. That is a big issue, and it means that people will feel like that experience is more isolated. They have to port more of the data in. It's a heavier experience. They're not going to come back. This is severely underestimated in most MVPs. It's underinvested. The data is there; the question becomes how you get it. And right now, agentic search seems to be one of the ways people are going after it, because they have to solve the problem themselves in the absence of that common norm and etiquette around making appropriate data easily available to agents. I do think we need that latter piece, as I've said, but in the meantime, people are using something like agentic search to go get the data and ferret it back. Perplexity has been very vocal about doing that. So if you want distribution, win it through seamless data experiences.

Last but not least, number nine: templates and AI structure. This is an observation about the way we will be working in the future that I think is really important, because so many of our problems are token fungible, which I talked about earlier in this video. The world is going to norm to tokenizable templates.
In other words, when OpenAI released agent mode and I found that it couldn't handle my particular workflows very well, other people rightly pointed out, "Well, it handles mine fine." And you know what the common attribute was of the people who said it handled theirs fine? They had tokenizable templates. They had workflows that were fairly standard, that would have been reinforcement learned against: things like a very standard discounted cash flow sheet. Yeah, I bet agent mode can do that. It is standard. It is the middle of the distribution. It is not too hard. AI is not going to handle the 25,000-row spreadsheet that someone in your office maintains to keep all of marketing operations online. We are not close to that. What's interesting is that because AI is so valuable, we see a pull factor toward those more standard templates over time. It's not just that your manager will tell you, "Hey, we have to use the normal template so AI can read it." It's that you yourself want to use it, because you want AI to be able to read it and help you. I see this in myself. I am using more normal and standard templates where I can, because I want AI to have an easier time reading them. I think in terms of chunks of information that AI can consume. Our work is becoming tokenizable templates, because we need to make it AI-readable for a while. This is going to make it harder to distinguish between AI slop and really good work, because they're both going to start using the same template for a while. That's going to be one of the confusing things about the next year or two. That being said, we will eventually reach a professional standard where we say: this is what good work looks like.
Yes, it's AI-readable, but humans with taste also contributed, because even if you have standardized pieces of work, the taste and craft that enable someone to define a usable business model are something humans are going to be bringing to the table. The skin in the game, the sense of ownership: these are things that humans are going to be bringing to the table, but they may be doing that work through easily tokenizable templates. And so I think that our work norms and artifacts are due for a shift. And yes, if you want to go there, maybe this means there's a shift in tooling. Maybe this means that Word is ripe for disruption. But boy, have I heard that before. Some of these tools are surprisingly sticky.

So there you have it: nine different principles that I have seen come up over and over and over again as I look at the journey people take from individual contributor chatting with ChatGPT or Copilot or whatever you have, to "I'm using it seriously as a workflow tool; maybe I'm vibe coding and building, or maybe I'm using it to create complex documents, but it's a big workflow." Those are the things that most people don't recognize and have trouble articulating, that I wish we knew and talked about more. I'll review them briefly here. One: chat is not a specialized tool. Chat is weakly intelligent, and that produces really interesting incentives for builders, and we should be aware of it as users. Two: people think in one-turn conversations. They should think in multi-turn conversations. Three: we need a new build vocabulary. We should think about conversations as a fundamental unit of computing, not files. Number four: AI planning and underestimation.
Most people dramatically underestimate the leverage they get from planning, because there's a power law in execution with AI, and you will go dramatically off the rails, or dramatically on the rails, depending on whether you get it right. Number five: tool wars. The build tools are converging on the middle. Lovable is getting better. Claude Code is getting better. Cursor is stuck in the middle. I think we are headed toward an OS-style brand war, like the 1990s between Windows and Mac users, where you will have an affinity, but it's actually your own life experiences that shape it, and we should be aware as users of where we may fit best. Number six: hidden layers and data incentives. Token depth is nonlinear. The primary value of an agent is to increase token depth; problems are token fungible. And so what we need to get to is control settings that enable us to set token depth more precisely and reliably, and agents that enable us to accelerate that token depth to solve hard problems. I gave Manus as an example of at least aligned incentives, because model makers are not always aligned with users. Number seven: data incentives are delaying the data middleware layer. Privacy incentives in particular, and the concern about leaking data, are keeping a data middleware layer from existing that we need for agents to succeed. Number eight: distribution and data user experience. Yes, distribution is king, but if you're not building for that data integration, it won't feel seamless enough even if your AI is intelligent. And that is a severely underinvested experience; that integrations piece is underinvested in most MVPs. And number nine: templates and AI structure. The world is norming to tokenizable templates because AI can eat them.
Which means we have this weird period coming where it's going to be harder to distinguish AI slop from really good work. But eventually our work patterns will get more structured, and we'll be able to start to see: this is an example of an artifact that has AI influence, but that also has human craft and taste over the top. So there you go. The nine things. I hope they're helpful. I hope they made you think. I hope they make you use AI with more intention. Cheers.