
Breaking Legacy Walls for AI Agents

Key Points

  • Enterprise AI agents often falter because, even with memory, they lack the “primitives” — shared, reliable building blocks that let humans and agents collaborate without heroic effort.
  • Most organizations still operate on legacy, opaque workflows (hidden drafts, permission walls, tribal knowledge) that prevent agents from moving beyond drafting or summarizing tasks.
  • These entrenched 20th‑century work patterns create a wall where agents can generate plausible content but cannot actually ship or execute it within the existing environment.
  • A case study of Cursor’s migration from a headless CMS back to raw code and markdown illustrates how re‑architecting workflows for AI‑native tooling can break the bottleneck and enable agents to be truly productive across non‑technical teams.

# Breaking Legacy Walls for AI Agents

**Source:** [https://www.youtube.com/watch?v=4Bg0Q1enwS4](https://www.youtube.com/watch?v=4Bg0Q1enwS4)
**Duration:** 00:23:24

## Sections

- [00:00:00](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=0s) **Enterprise Agents Hit Legacy Walls** - The speaker argues that even with memory, AI agents falter because companies haven't adopted shared primitives and remain trapped in opaque, outdated workflows that stop agents from moving beyond drafting and summarizing tasks.
- [00:03:12](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=192s) **Abstraction Cost in AI Coding** - The speaker shows how AI agents rapidly built a website for minimal cost, but the introduction of a CMS abstraction forced manual UI interactions, illustrating that added layers raise expense and diminish the efficiency gains of AI-driven coding.
- [00:07:34](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=454s) **Democratizing Code with Visible Workflows** - The speaker argues that exposing raw workflow code lets non-technical users and AI agents directly edit and audit processes, turning technical competence into a default organizational posture rather than a specialized department.
- [00:11:20](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=680s) **Code Wins as Organizational Strategy** - The speaker asserts that "code wins" shifts from an engineering brag to a strategic framework that extends legibility, investment, and leverage across an organization by encoding work into artifacts (code, tests, logs) so AI agents can autonomously fast-track execution and collaboration.
- [00:14:53](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=893s) **Work Primitives Driving Operational Change** - The speaker explains how essential work primitives (system of record, gates, checks, rollbacks, and traceability) form the foundation for reliable software changes, and how Lee's CMS migration and Anthropic's agent harness embed these primitives into artifact-based workflows to make operational change legible and trustworthy.
- [00:18:28](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=1108s) **Turning Teams Artifact-Native for AI** - The speaker urges enterprises to shift from GUI-based tasks to "code-like" artifact workflows, training non-engineers to make their work legible to AI agents so the agents can act as operators and dramatically increase organizational velocity.
- [00:21:34](https://www.youtube.com/watch?v=4Bg0Q1enwS4&t=1294s) **Primitive Fluency Drives AI Advantage** - The speaker argues that an organization's competitive edge in the AI era comes from teaching code-like (primitive) fluency to all employees so work can be recorded in agent-readable form, enabling simpler, faster, and safer delivery rather than merely copying tools or processes.

## Full Transcript
In last week's executive briefing, I argued that most enterprise agents are sophisticated amnesiacs. The models are capable, but without domain memory, explicit goals, progress tracking, and operating procedures, multi-session work just turns into a lot of thrash, and you don't get very far. This week is the uncomfortable follow-on. Even if you solve for memory, most companies still won't get agent leverage, because they haven't taught the organization to work in primitives. Not prompting, not tooling, but primitives: the shared building blocks that let humans and agents reliably ship work without heroics.

The real failure mode that I want to talk about this week is that AI agents run into walls even with memory, because the work that you have is usually stuck in 20th-century work patterns. Most agent deployments are going to stall at a similar place, where the agent can write a plausible draft, summarize a meeting, generate options, and propose plans, and it just can't get farther. The wall is your operating environment.

In most Fortune 500 companies, and even most small and medium businesses, important work lives inside opaque workflows. Let me give you some examples. Maybe it's behind the click paths in business software: behind an admin portal, a ticketing tool, a CMS screen, a dashboard. Maybe it's trapped in hidden state somewhere: it's in draft mode, it's in an unpublished version, it's in permission rules that aren't visible until you hit them. Maybe it's encoded as tribal knowledge: "Ask Sarah." "Actually, finance owns that." "You know what? We don't touch that system." An agent cannot reliably operate inside that environment. It cannot advise. It cannot draft. And most important, it cannot ship with you.
So you can't accelerate. The company that buys agents is actually buying a conversation layered on top of those same bottlenecks.

Now I want to give you a case study around how you can unlock this and think differently. Lee Robinson is a longtime builder and writer. He was previously at Vercel, and he now works at Cursor teaching about AI. Cursor is obviously one of the breakout companies in AI-assisted coding; the product is an IDE designed around AI agents. If anybody is AI native, you'd think it would be Cursor, and in this case, that's true. Just a few days ago, Lee published an essay that reads like a weekend project, but we should treat it as more of a strategic signal. I think it's a fantastic case study in how we can make AI agents native in non-tech organizations.

Lee was able to migrate cursor.com from a headless CMS, or content management system, right back to raw code and markdown. Now, for decades, we would say that was regressing capability. We would say we need marketers to have a CMS so they can reliably make changes. Well, Lee didn't think that was true anymore, and we'll get into why. He estimated it would take weeks and maybe an agency, and this is a guy at an AI-native organization, right? He estimated it would take weeks. He himself was surprised, because he finished the job in three days over a weekend, using about $260 in tokens and hundreds of agents. That is hundreds of agent pull requests and calls, I think 300-some before it was all done. The headline is not "AI agents can code," nor is it "AI agents are fast."
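To make the target state concrete: after a migration like Lee's, a page of marketing copy is just a markdown file that a human or an agent can edit, diff, and revert. Here is a minimal, hypothetical sketch; the file name, copy, and helper function are invented for illustration and are not Cursor's actual setup.

```python
# Minimal sketch of content-as-artifact: marketing copy lives in a markdown
# file, so an edit is just a text change that version control can diff and
# undo. File name and copy are invented for illustration.
from pathlib import Path

def edit_copy(page: Path, old: str, new: str) -> str:
    """Replace a phrase in a markdown artifact and return a readable diff."""
    text = page.read_text()
    if old not in text:
        raise ValueError(f"phrase not found in {page.name}: {old!r}")
    page.write_text(text.replace(old, new))
    return f"--- {page.name}\n- {old}\n+ {new}"

page = Path("pricing.md")
page.write_text("# Pricing\n\nStart free. Upgrade anytime.\n")
print(edit_copy(page, "Start free. Upgrade anytime.",
                "Free for individuals. Flat pricing for teams."))
```

In a real repository the diff, review gate, and rollback would come from git rather than from the function itself; the point is only that the state is a visible file, not hidden CMS records.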
What I noticed is that they used to be able to ask the agent to modify the website directly. Then, when they introduced a CMS in the middle, which was a change Cursor made as a way of trying to become a grown-up marketing organization, suddenly they were back to clicking through UI menus instead of just delegating work to agents. And so what Lee realized is that there is a larger lesson here with AI and coding agents: the cost of an abstraction has never been higher.

An abstraction is just a layer that hides the underlying work behind a simplified interface. We needed it for a long time, because most organizations are mostly not technical. A CMS is an abstraction, right? It hides the messy parts: the files, the structure, the deployment, the version controls. It gives humans a nice, friendly screen that says edit, page, and publish. For decades, that's been a good trade. And even AI-native organizations like Cursor think in those terms, which I guess should be encouraging to us, because if Cursor thinks that way, we can take some comfort in taking a minute to get AI native here.

But the thing that you need to realize is that the minute you want agents, all that stuff you hid becomes really expensive. That was Lee's core insight: before they actually added their CMS at Cursor, it was easier to update the site. You just tagged Cursor and changed your marketing copy, and that's as simple as you want it to be. Look, agents don't thrive in clicky environments where state is scattered across different screens, different permissions, different draft modes, hidden dependencies, and different roles. Agents thrive in environments where the underlying work is visible and editable. So the CMS stopped being a helpful tool as agents gained capability.
Instead, it became a wall between the agent and the work. And that is why this story matters. It's not really about a website. It's not about a CMS. It's not even about Cursor. It's about the new economics of complexity.

Lee's essay is unusually valuable because he doesn't just say the CMS is bad. He actually lays out the hidden tax that we don't realize we're paying by using these abstractions. If you strip away the web-dev details, the list looks like a familiar executive pattern. You have multiple identity systems: some people are in GitHub and in the CMS, and someone always needs to be added for permissions. There's operational drag. There's permissions risk. There's preview complexity, right? Draft content requires special access paths and brittle preview logic, and sharing what we're about to ship becomes a friction point in and of itself. And more moving parts are required to keep everything fast and reliable. The site wants to be simple, stable, and pre-built; the CMS introduces additional uptime dependencies and additional special modes. So there's a legitimate cost markup here that the organization pays even if you're not thinking about the agent side.

On spend, Lee shares that Cursor spent $56,000 on CMS usage since September, which is a really hefty markup for the convenience of a graphical user interface. It also introduced hidden dependencies. When pieces of the site come from network fetches, or humans can't easily answer "where does this piece of the site come from?", Lee points out that that inability to completely and clearly explain what is going on in the site introduces additional costs in time and knowledge. Maintaining hidden state in our heads is really expensive.
So this is the actual state most of us face with graphical user interfaces. This is the size of the tax that we have been paying for non-technical people to use non-technical tools that are abstractions over a technical workflow. What has changed is that it is now absolutely possible for non-technical people to drive technical agents directly against the workflow code itself. So why would you pay the tax on the abstraction anymore? Why not move back to the work surface that is legible to both humans and agents: raw code and markdown? That's what Lee did, and that move collapsed all of the complexity of Cursor's website into a single inspectable place. If you're not technical, what you need to understand is that the work moved from state that was hidden inside of a tool to artifacts that everybody, including agents, can see, review, and undo. And that's what really matters here.

So Cursor's advantage isn't really AI agents. Everyone's going to make a big deal out of the fact that Lee used 300-some agent pull requests, but that's not really it. It's that the whole company is technical by default. Because when I ask myself what is stopping other companies from doing this, the thing that I come up against over and over again is that Cursor is building a culture where technical is not a department; it's a default posture. And Lee describes this explicitly when describing user management. He says designers are developers. This is absolutely the case at OpenAI as well, and it's absolutely the case at AI-native organizations that I have run into all over the world in the last year. It can sound radical if you're in a traditional business, but at Cursor and other companies, it's really just normal. It does not mean that you don't have non-technical staff.
It means that everybody is semi-technical. And when you talk to folks like designers who are also committing code, which I've done, they're surprisingly open-minded about it. They say: you know what, I don't worry a lot about exactly where my job family ends up. I'm interested in solving problems. I bring a design mindset to solving problems. I happen to have a tool set that allows me to iterate really quickly with code, which keeps me closer to the problem space, and I love that. That is the kind of attitude that we need across the board in our organizations.

By the way, this is not just Lee's personal vibe or take. Colossus has actually written up a comprehensive profile on Cursor's go-to-market team and noted that they're surprisingly technical: they use Cursor itself for website updates, for dashboards, and for internal tooling, and they ship that work directly. They do not route everything through engineering. If you think about it, that mindset of getting as close to the code as you can, and allowing the code to establish primitives that agents can build against, is scalable. It leads to surprisingly efficient thinking, because you're thinking in terms of the core artifacts that everybody can see and work against.

And so internal debates at Cursor look like the kinds of debates we all dream about as leaders. People talk about whether something gets used, and if not, they proactively move to kill it. There's a culture of dogfooding and testing that keeps the defaults evolving as they try things. Cursor has a pre-ship ritual called a "fuzz," where everyone tries to break the release, and then fixes land fast because everyone has actually tried the new thing. This is not startup chaos.
This is not something that's impossible to copy at a larger scale. It's actually a very coherent operating model; it's just foreign to the way we've worked before, because 20th-century defaults are stuck in the age of the graphical user interface. So this model emphasizes shared agency, a shared substrate of primitives, and shared responsibility. And that brings me to the deeper thesis that we've been circling around in this conversation.

The real thesis is that "code wins" is not about engineers. It's about how you extend legibility, investment, and leverage across your entire organization. When executives hear "code wins," it's often translated as an engineering slogan that means engineers win versus design versus product versus business, because they can actually put the product out there. In the agentic era, it's different. It becomes more of a strategic law for how we operate our businesses. Work that can be expressed in code-like form gets a fast track to agents, because the entire industry is investing its best models, its best tools, its best safety mechanisms, and every evaluation discipline it can find into the code pathway.

And you can see this clearly in Anthropic's own engineering guidance on long-running agents. They describe the core long-running problem the same way: agents work in discrete sessions, and every new session begins with no memory. We talked about this last week. Their solution is not magical AI memory. It's a disciplined harness, right? Remember, we talked about an initializer agent, a coding agent, and clear artifacts for the next session. So the key thing to remember is that those artifacts are not just for agents.
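As a rough sketch of that harness pattern (the file name and JSON shape here are my assumptions for illustration, not Anthropic's actual implementation): each memoryless session reads a progress artifact, does one increment of work, and writes the artifact back for the next session.

```python
# Minimal sketch of an artifact-based session harness (assumed design, not
# Anthropic's actual code): every session is memoryless, so all continuity
# lives in a progress file on disk.
import json
from pathlib import Path

STATE = Path("progress.json")

def run_session(do_step) -> dict:
    """One discrete agent session: load state, do one step, persist state."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {"done": [], "next": "init"}
    state = do_step(state)               # the 'agent' works against explicit state...
    STATE.write_text(json.dumps(state))  # ...and leaves an artifact for the next session
    return state

def step(state: dict) -> dict:
    task = state["next"]
    state["done"].append(task)
    state["next"] = f"task-{len(state['done'])}"
    return state

run_session(step)          # session 1 starts from nothing
final = run_session(step)  # session 2 begins with no memory, only the artifact
print(final["done"])       # → ['init', 'task-1']
```

Because the progress file is a plain artifact, a human can read it, edit it, or diff it just as easily as the agent can.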
The world is being built where artifacts are already native. The world is being built now around code and repos and tests and logs and markdown files. So if a workflow resolves to artifacts plus validation and checks, it's likely that agents can participate in execution today. But if a workflow lives inside a tool's user-interface state, in graphical, clicky software that humans have to operate and have to remember how to adjust, where everything depends on human memory plus that GUI, agents at best remain advisers. This is why software is winning first. It's not because engineers are better people or more deserving. It's because software is already built around the infrastructure of legibility: version history, diffs, tests, rollbacks, and audit trails. So fundamentally, when you are looking at composable workflows, those are the primitives you build around.

And that's what Lee saw. Lee saw you didn't need the graphical user interface. You could have all the muscle and power of the software just by tagging Cursor in the command line. And why would you need to do anything else, if you actually decompose the workflow?

So I got to thinking: how do we as enterprise leaders start to teach our teams to think this way? First, I think we need to really emphasize an understanding of primitives as places where work lives. A primitive is not complicated. It's just a small, stable building block that stays useful even when tools change. There are a couple of basic primitive stacks out there that I want to emphasize. You need a clear definition of done. You need a persistent record of state, or domain memory; I talked about that. You need a process for how work progresses and how it's validated.
Without those three things, it's hard to make an agent reliable. But beyond that, you also need to understand how work gets done across the organization. Most enterprises never define this explicitly, and I think we're paying for that lack of definition. We've been paying for it through graphical user interfaces, where humans use tribal knowledge to accomplish workflows by clicking things. That's no longer needed, really. So we need to start asking ourselves harder questions. What are the work primitives that drive operational change for us? For example: Where's the thing that we change? That's the system of record. How do we see that it changed? That's a readable before/after state. How do we approve it? That's a defined gate. How do we prove that it worked? That's a check. How do we undo it? That's a rollback. How do we know who did what and why? That's traceability.

All of those questions are questions business stakeholders ask all the time when vetting software. And really, they're coding questions. Lee's CMS migration is really a story about work primitives being extended across a technically fluent company. He replaced a UI-state workflow with an artifact workflow, because artifact workflows have natural review, rollback, and traceability, and they just make sense when that's how agents are being built. Anthropic's long-running harness is the same work-primitive story. They made progress reliable by insisting on artifacts, state, and incremental steps, and they were fluent enough to build the business around that. Domain memory is not just an agent technique. It's really one of several work primitives. It's the written state of the project.
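Those five questions (system of record, before/after state, gate, check, rollback, traceability) can be collapsed into a toy model that shows how small the primitives really are. This is purely illustrative, not a real change-management system.

```python
# Toy model of the work primitives: system of record, before/after state,
# gate, check, rollback, traceability. Illustrative only.
record = {"value": "old copy"}   # system of record: the one place the thing lives
history = []                     # traceability: who changed what, and why

def change(new_value, author, reason, approved, check):
    if not approved:                              # gate: a defined approval step
        raise PermissionError("change not approved")
    before, record["value"] = record["value"], new_value
    history.append({"author": author, "reason": reason,
                    "before": before, "after": new_value})  # readable before/after
    if not check(record["value"]):                # check: objective, not "looks good"
        record["value"] = before                  # rollback: undo is always possible
        history.append({"author": author, "reason": "rollback: check failed",
                        "before": new_value, "after": before})
    return record["value"]

change("new copy", "sam", "refresh pricing page", approved=True,
       check=lambda v: len(v) > 0)
print(record["value"])   # → new copy
print(len(history))      # → 1
```

Git gives you every one of these primitives for free (commits, diffs, reviews, CI checks, reverts, logs), which is exactly why artifact workflows are agent-friendly by default.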
And so if your organization doesn't know how to write state down in canonical places, memory itself won't save you, because the work is still not legible.

So the cultural pattern that we're seeing at Cursor and OpenAI, and at Anthropic and other major studios, that enables this, is that non-technical people are learning the substrate of code. They're learning it well enough that they can commit code, well enough that they can be fluent in understanding how code drives workflows, and well enough that they can operate against the code with the help of agents. By the way, this does not mean they learn it well enough to correctly write JavaScript without help from an AI agent. I'm not saying they do. I'm not saying they ever will. I know lots of design engineers who cannot do that, but who nonetheless commit pull requests.

So the reason Cursor can kill a CMS is not because they have some bias against graphical user interfaces; don't hear that. It's that their people understand that they can operate on one simple underlying workflow substrate of code and common primitives, that that's enough, and that they can do it with agents. And the reason I'm calling this out today is that I have a strong conviction that simple wins. If you are working in a world where you could have a more complex graphical user interface or a simpler substrate that gets closer to the code, especially given the pace of AI agent change, I would opt for the simpler solution. Simple is going to win when AI agents change fast. Simple is going to win when LLMs change fast.
If you can get to a block of stable, basic work primitives, and if you can help your people build mental models of workflows and code, of how code operates to get work done through reads and writes on databases, through approvals, and so on, even if they can never write that code themselves, you are going to be in a position where you can ask them to operate agents. And that's going to give you a tremendous amount of freedom to delete abstraction layers from your business that are extremely expensive, like the CMS at Cursor.

So "code wins" becomes an operating model for the business as a whole, and it allows us to extend the premise that simplicity is what wins in the age of AI. The organization needs to train more people to think and act in artifact-based workflows, because that's where agents are strongest and that's where they're safest. And this is the piece that most enterprise leaders are not really ready to hear. If you keep your workforce in the 20th century, if you keep them graphical-user-interface native because your security department or your IT department tells you that's what's safe and only the engineers should commit code, your agents are going to be stuck as drafting assistants. You're not going to unlock the kind of velocity that you see and want from these AI-native companies. Whereas if you teach your workforce to be artifact native, your agents can become operators and collaborators with your teams. And so this has profound implications for training. It has profound implications for our security policies.
If the strategic claim is that code-like work wins because it's agentable, then the logical conclusion is that enterprise AI training must transform into something like code-concept training for non-engineers. Not "learn Python," not "become an engineer," but learn the concepts that make work legible to agents. Because the goal is not to turn everybody into programmers. The goal is to turn more of the company into people who can express work in a form that agents can safely act against.

So, things like state: what is the current status of your work, and where is that written down? The artifact: what is your system of record, the real thing we ship and maintain? Is it a document, a data set, a configuration, a product catalog, a policy? If the truth lives in hidden state in the UI, your agent can't reliably operate. The change record: can we see what changed without an argument? Software has diffs; enterprises need the equivalent. Checks: who proves this is correct? "It looks good" is not the check. A check is something objective: a test, a reconciliation, a policy rule, a validation script. Rollbacks: how do we undo what we've done? Agents increase throughput; rollback is how you keep that throughput from becoming a risk. Traceability: who changed what, when, and why? When your AI workforce grows, this stops being a compliance nicety. It becomes existential as you get lots and lots of agents.

If you can teach these kinds of ideas broadly, you create the precondition for real agent adoption, because your organization stops treating tools themselves as sacred. "Oh, I'm a Salesforce guy," right? Because everyone understands what must stay true underneath. It's not Salesforce that's the thing.
It is the workflow that's the thing. And that's why Cursor can ask "do we really need a CMS?" and actually execute the deletion. They weren't being reckless. They understood what it would replace. They understood they could replace it with a simpler substrate, with a stronger core primitive set, that both agents and humans could operate against.

So if you're a Fortune 500 exec, or if you're a small-business leader, the mistake would be to interpret this as "let's copy Cursor and put marketing into GitHub." I don't mean do that exactly. That is not the point. The point is that there is an underlying competitive advantage in the age of AI: primitive fluency diffuses power. When teams share a mental model of state, artifacts, checks, rollback, and traceability, the company gains a new superpower, because people can see that a standard tool is just a convenience layer with a measurable tax. People can propose a simplification without triggering fear, because they can articulate what stays safe, and everyone is technically fluent enough to follow. Work can be recomposed into forms agents can operate against instead of being trapped in process. That is what moving fast looks like in a mature agentic organization. It's not speed for its own sake. It's just simpler: less hidden state, fewer brittle handoffs, and more of the company able to safely ship changes.

Last week, I called out that agents fail because they don't remember, and that you can fix that with domain memory, but it's not enough. This week, I want to remind you that organizations fail because they don't write work down in agent-legible forms. You fix that by teaching primitive fluency, which is really code-concept fluency, across your entire organization.
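As one concrete illustration of the "checks" primitive discussed above: an objective check on markdown content can be a short validation script instead of "it looks good." This is a hypothetical example; the required field names are invented.

```python
# Hypothetical validation script: an objective check that a content artifact is
# shippable, replacing "it looks good." Required field names are invented.
REQUIRED_FIELDS = {"title", "owner", "last_reviewed"}

def check_page(text: str) -> list[str]:
    """Return a list of problems; an empty list means the check passes."""
    problems = []
    fields = {line.split(":")[0].strip()
              for line in text.splitlines() if ":" in line}
    for missing in sorted(REQUIRED_FIELDS - fields):
        problems.append(f"missing front-matter field: {missing}")
    if "TODO" in text:
        problems.append("unresolved TODO in content")
    return problems

page = "title: Pricing\nowner: marketing\nlast_reviewed: 2025-01-10\nAll plans include support."
print(check_page(page))  # → []
```

A script like this can run as a gate before anything ships, which is exactly the kind of check an agent can both satisfy and execute.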
Agent strategy is not really a procurement decision. It's a literacy decision. The winners won't be the companies that have agents. They'll be the companies where enough people understand the primitives that they can delete sacred workflows and, frankly, notice where they're incorrect. That will allow them to move work into more legible artifacts for agents, and that will allow them to unlock agents actually operating. And that's what unlocks 10x speed. Good luck with your primitives. Good luck with your AI agents.