
AI's Time Compression and Intent

Key Points

  • The AI revolution is “hyper‑compressing” time for humans, making us feel constantly rushed to keep up with the stream of news, new prompting techniques, and agents.
  • Unlike humans, whose perception of time is subjective and non‑linear, AI experiences time as a logical, clock‑driven metric that speeds up as compute power grows.
  • For AI, the compression isn’t about shortening tasks but about fitting more work into the same unit of time, effectively expanding what can be done per second.
  • A major current limitation is AI’s inability to maintain intent and contextual awareness over long periods, a capability humans still outperform despite our own forgetfulness.
  • Experts project that by around 2026 AI agents may be able to sustain focus on a task for a week, but the rapid scaling of raw AI intelligence is outpacing the development of long‑term intent retention.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=3fQbxz--nFk](https://www.youtube.com/watch?v=3fQbxz--nFk)
**Duration:** 00:12:41

## Sections

- [00:00:00](https://www.youtube.com/watch?v=3fQbxz--nFk&t=0s) **AI’s Logical Compression of Time** - The speaker explains how AI accelerates work by compressing time through a clock‑driven, rational perception, contrasting it with humans’ subjective, “hyper‑compressed” experience of rapid information flow.
- [00:03:36](https://www.youtube.com/watch?v=3fQbxz--nFk&t=216s) **Beyond Conversation: The Physical Turing Test** - The speaker notes that AI can already fool humans in conversational Turing tests, but the far tougher physical Turing test—requiring robots to move and interact in the world like people—is still distant, exposing the gap between sci‑fi visions and present technology.
- [00:06:41](https://www.youtube.com/watch?v=3fQbxz--nFk&t=401s) **Compressing Decades of AI Training** - The speaker explains how Nvidia’s virtual simulation environment and massive parallel compute can collapse ten years of AI model training into just a few hours, illustrating how scaling hardware expands the effective “time” available for AI work.
- [00:09:48](https://www.youtube.com/watch?v=3fQbxz--nFk&t=588s) **AI Agent Devin: Scope and Limits** - The speaker explains how to effectively manage the Devin engineering agent—assigning well‑defined tasks, verifying tools, and reviewing results—while highlighting its strengths as an intern‑level coder and its shortcomings for broader architectural decisions or extensive work due to token limits and occasional off‑track behavior.
[0:00] I want to talk today about one of the most subtle but pervasive aspects of this AI revolution: AI compresses time. And we are living through it, right? We are living through this moment where we feel like we're always trying to catch up. DM after DM, email after email I get is, "Nate, tell me how I can keep up with this. The news drops on Tuesday. The news drops on Wednesday. The news drops on Wednesday night. I can't keep up with everything. My prompting has to evolve. I have to pick up agents now." The list goes on. We feel like we're living through hyper-compressed time is what I'm getting at.

[0:39] But the thing I want to talk about today is not our experience of time. It's the AI's experience of time, because I think that's actually something we need to understand better. We experience time the way our species has always experienced time. We have a surprisingly nonrational perception of time. The older you get, the more you understand the idea that the years you've had as an adult feel shorter than the years you had as a kid. So our sense of time is wonky and tied to our species. I could go on and on; you can read up on it.

[1:16] AI doesn't have any of that. AI has a logical, clock-driven perception of time. And because of compute advances, AI's ability to do things in time continues to accelerate. And so for AI, time is compressing as well. But it doesn't feel the same way. It's compressing the work you can do in a unit of time, not compressing the time it takes to do work. I'm going to say that one more time. For humans, it feels like time is getting short because there is so much work to do. For AI, it feels like work is getting compressed because there's so much more compute, and time is therefore expanding.
[1:56] And so even though I'm not here to talk about the interior perception of time for AI (I'll leave that to the philosophy students), it is clear that we have a very different understanding of what can be done in a given unit of time, and it has real-world implications for us. As an example, we are not very far along on the idea of AI agents maintaining intent over time. It's very difficult to do this. The projection right now is that by 2026, maybe we will get to a point where an AI agent can spend a week on a task, which is a big deal. I've got to tell you, people at my work spend months on tasks. We have to maintain strategic alignment over, you know, a year's time. We have to look multiple years into the future. We need to have a much larger sense of time. And when we do tasks, we need to retain important context for longer stretches of time, too.

[2:52] Now, we have all been there. The Jira tickets do drop out of our brains. We do forget context. We are forgetful. And so it's important not to judge the AI too hard if it also forgets. But the point is that humans, generally speaking, are better at maintaining intent over time than AI agents right now. And critically, AI intelligence scaling is happening faster than intent-over-time scaling. So intelligence is going like this, right? We all talk about it all the time. It's going vertical. Great. But the ability to scale intent over time is moving like this. Not moving very fast. And the only reason those slow advances are super meaningful is because compute advances and intelligence advances continue to enable the AI to do more with that time.

[3:44] And I want to bring up a little case study that I ran across from a Sequoia talk given by Jim Fan, who works at NVIDIA.
[3:53] Now, Jim does robotics at NVIDIA, and he proposes something he calls the physical Turing test. So, if you're familiar with the Turing test, the idea is you don't know if you're talking to a human or a robot. And I'm vastly simplifying. We basically have AIs that pass that now. Like, you can literally run a classical Turing test and it will pass. And we've mostly not noticed. And that's really funny, because all of the science fiction books thought that when a robot could pass the Turing test, the whole world would change. I guess you could say we're living through an AI revolution and it's changing. I don't have my flying cars yet. I can't look out of the window of my space castle.

[4:34] That being said, the physical Turing test is a much harder bar, and we are not anywhere close to passing it. And I suspect that science fiction has sort of combined the conversational Turing test with the physical Turing test in most writing. And that's part of the disconnect right now between the future the writers of the 60s and 70s and 80s envisioned and what we actually have, because the physical Turing test requires a robot to be able to physically navigate a space like a human. And I give credit to Jim; I didn't really have words for it until he started to put it together.

[5:14] So he talks about the idea of, like, you're at a hackathon, you are cleaning up the next morning, and it's a complete mess. Your living room is a disaster. There's a pizza box here, there's a beer can there, you were on the Ballmer curve, you have disorganized couch cushions, your video game was up because you were playing video games. Everything's a mess. And what he wants to challenge us to do is imagine a world where you go off to work, you come back at 5, and everything is put back together. It's neat. The lamp is standing up.
[5:48] Now, you cannot tell if it was a robot that did it or a person that did it, because you live in a world where robots can do all of that complicated physical work without issues. And if you've seen any footage of robots lately, you know that we are not close. We are not close to a world where a robot can go over to a beer can, gently pick it up, put it in recycling, navigate the whole house to do so, dodge the dog, avoid the tennis ball that's at the bottom of the stairs, and then come back and set the pillows out and all of that. And do that for the entire room and clean it up entirely, autonomously, without direction.

[6:30] Now, intelligence is scaling really, really, really, really fast, but that kind of ability to be in physical space isn't. And that's sort of Jim's point. And this is where it comes back to time and AI. Jim suggests that part of what's interesting about the physical Turing test is we can use virtual environments, like the one Nvidia launched this year, to compress a lot of training work for AI into a small amount of actual clock time.

[7:02] And so he talks in his Sequoia conversation about the fact that at Nvidia they were able to take 10 years' worth of training in, like, ordinary time and compress it down to 2 hours. Ten years to two hours, in a special simulated environment. Because in the simulated environment, you could parallelize, you could run stuff super fast, and the chips and the processors kept up, and the LLMs kept up, and there was no reason to go as slow as you would go in the real world with training. And so they were able to take a 10-year training task and compress it to two hours.
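Ten years down to two hours works out to a speedup of roughly 40,000x. A back-of-envelope sketch of how parallel simulation gets there; the environment count and per-environment speed below are illustrative assumptions, not figures from the talk:

```python
# Back-of-envelope: how parallel simulation compresses wall-clock training time.
# Only the "10 years -> ~2 hours" claim comes from the talk; the environment
# count and real-time factor are hypothetical numbers chosen to illustrate it.

HOURS_PER_YEAR = 365 * 24

def wall_clock_hours(sim_years: float, n_envs: int, realtime_factor: float) -> float:
    """Wall-clock hours needed to accumulate `sim_years` of experience
    across `n_envs` parallel environments, each running at
    `realtime_factor` times real-world speed."""
    total_sim_hours = sim_years * HOURS_PER_YEAR
    return total_sim_hours / (n_envs * realtime_factor)

# 10 years of experience, 4,096 parallel environments, each ~10x real time
# (hypothetical settings) lands in the same ballpark as the talk's figure.
hours = wall_clock_hours(sim_years=10, n_envs=4096, realtime_factor=10)
print(round(hours, 1))  # prints 2.1
```

The point of the arithmetic: nothing about the task got shorter; the amount of simulated experience per unit of clock time got larger, which is exactly the "time expands for the AI" framing above.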
[7:44] That is the kind of thing that gets my wheels turning when I think about how AI with compute scaling is going to enable new kinds of work in shorter time spans. And so if we go back to this idea: for us, work is growing but time isn't changing, while for the AI, time may seem to be expanding, because the compute and the capacity are expanding and it can do so much more inside a given unit of time. Given a new Blackwell chip, you can do more in an hour as an LLM than you could do before with an H100, right? Just getting really physical.

[8:21] So when you think about it that way: if you have an agent working (I'm back to software now; we're leaving robotics behind, and we'll do the robotics conversation another time), and you are asking that agent at work to do a task that normally takes you three months, and the agent does very, very high-quality work but has a short ability to maintain intent over time, the intent over time may matter less, because the agent has all the tools it needs, tremendous compute and throughput, and works very, very fast. And so it may be able to get done in four hours what would take you three months.

[9:06] Now, I'm not here to say that means the end of work, because I believe in Polanyi's paradox, which is that work is more than we can speak. I don't think work can be efficiently tokenized. There's a lot more to work than our ability to describe a task. I can leave all of that aside; you can go find that on my Substack if you want to read about it. I've written about it pretty extensively. But I do think that with the right scopes, with the right autonomy of decision-making around a particular problem scope, and with the right tools, I can see a world where agents become incredibly intelligent interns.
[9:45] It's like you can manage an agent and you can give it a tough task. You have to give it the scope. You have to confirm the tooling is right. You have to validate the results. But it does an incredible amount of work in a short amount of time. We will see if that's true.

[9:58] Right now, probably the closest parallel to that is Devin. Devin's an engineering agent, and really Devin acts like an engineering intern. If you are a senior engineer and you know what you're doing and you could code your way out of a corner, Devin's great if you want someone who will pick up your P3s, your SE3s, and knock those out. Devin's great if you want someone to tackle a specific, defined task and go after it, or if you want someone to break out and go after a few pieces of work in a particular area you want to code in today and get you some pull requests that you can review at the end of the day. But if you want someone who will be your founding engineer, which some people have tried to use Devin for, it is a bad idea. Devin is not ready for that level of responsibility; Devin cannot decide or define system architectures. And people overusing it that way get frustrated.

[10:54] People also get frustrated to some extent because Devin isn't perfect. Devin will sometimes stray from the point. Devin will sometimes not be able to complete the work because Devin runs out of tokens. Other issues come up that I suspect will come up with agents generally. In fact, I see Devin as sort of a first-approximation preview of what it's going to look like to work with agents in the future. I think we're going to have running-out-of-tokens issues. I think we're going to have did-you-give-the-agent-a-clear-enough-task issues. I think we're going to have did-you-give-the-agent-too-much-responsibility issues. I think we're going to have did-you-scope-what-the-agent-was-working-on-to-the-degree-that-fits-its-intelligence issues. And we're all going to have to be figuring that out, not just engineers, within the next year or so.

[11:41] But the point is this. If you think about that time piece again as we circle back: the more compute and the smarter these models get, the more they can get done with that time. And so it may be that even if that intent over time is only a week by the end of next year, it is enough time that real, meaningful project work can get done, if we define that scope correctly. And that's just kind of weird to me, because as a human, I don't think about it that way. I don't think about it as "I can get all of this work done next year because I'm going to get that much smarter." I mean, sure, I'm going to try and read. I'm going to try and learn from AI. We all try and get smarter, but I have no illusions. I can't upgrade the CPU up here. The machine can.

[12:24] And so I think one of the most interesting things we're not talking about is that intelligence gains are related to the way we use agent intent over time. And we should probably talk about that more. Chips.