# Evolving Prompt Strategies for GPT‑5

**Source:** [https://www.youtube.com/watch?v=POLFZdG54Kw](https://www.youtube.com/watch?v=POLFZdG54Kw)
**Duration:** 00:12:12

## Sections

- [00:00:00](https://www.youtube.com/watch?v=POLFZdG54Kw&t=0s) **Preparing Prompt Strategies for GPT‑5** - The speaker asserts that, by analyzing current AI trends and the imminent multi-model landscape, we can already begin shaping our prompting approaches to effectively engage the forthcoming GPT‑5.
- [00:03:05](https://www.youtube.com/watch?v=POLFZdG54Kw&t=185s) **Maximizing LLM Context Efficiency** - The speaker explains how to exploit expanding token windows by loading extensive, relevant deterministic context to steer LLM reasoning, while emphasizing token efficiency in high-volume production prompts and highlighting the emergence of native multi-phase workflow architectures.
- [00:06:30](https://www.youtube.com/watch?v=POLFZdG54Kw&t=390s) **Forcing Decisions in LLM Prompts** - The speaker advises using precise context, constraints, and forced trade-offs to prevent AI hedging and drive critical, decisive responses, noting that explicit role cues are less crucial now.
- [00:10:25](https://www.youtube.com/watch?v=POLFZdG54Kw&t=625s) **Agile Prompting as AI Partnership** - The speaker likens LLM prompting to agile software development, stressing iterative learning, chunking, and designing a collaborative partnership architecture with AI instead of a rigid, pre-defined workflow.

## Full Transcript
It is possible to prepare for ChatGPT‑5 now. And I don't mean spiritually prepare. I don't mean have big debates about AGI, artificial general intelligence. We have plenty of those debates, and I promise you we're going to keep having them afterward. No, what I mean is that it is possible to skate toward where the puck is going, in the words of the great Wayne Gretzky. We can actually figure out, from current trends in artificial intelligence, where our prompting needs to evolve, and that's been my focus. I am trying to take the published guidelines we get from OpenAI and from other major model makers, because, spoiler alert, we don't live in what is called a singleton world. This is not a world where we are only going to have one artificial intelligence. You're going to have artificial intelligence like a fish has water. It's going to be everywhere. It's going to be local on your device. It's going to be global. It's going to be multiple model makers. And we're still evolving into that world, and it's not as clean and simple and seamless as you might want. Your refrigerator doesn't yet argue with you, which I for one am thankful for. But the principle is there. The writing is on the wall.

So when we look at GPT‑5, I think it's reasonable to say we know enough, based on where all these models are trending, that we can start to have some opinions about where prompting is going, and we can start to write to that. We can write prompts that take advantage of the best of today's models but also prepare us for ChatGPT‑5. That's what I'm thinking about. So, number one, let's look at architecture.
What are the things we can think about with ChatGPT‑5, things we know from publicly released statements, that we can then infer back into prompts that are useful even with today's models: Claude 4, Grok 3, Gemini 2.5 Pro, and the o3 and o3 Pro models? I'm going to suggest a few ideas for you.

Number one: extreme specificity is a focusing mechanism. These models are so big, and GPT‑5 will handle even more complex constraints if you master precise specifications now. If you can specify word counts, if you can specify formats, if you can specify requirements by number, if you can use XML tags in some cases, that specificity does not overwhelm these models. It actually helps you focus them in ways that are useful.

Number two: text is currency. Current models handle over 100,000 tokens, 200,000 tokens. For GPT‑5 we don't know what the published specs will be, but it is not unreasonable to think that we are headed toward a future with millions of tokens by the end of the year. Start getting into the habit of front-loading rich context. Instead of a two-sentence description, if you're an operator, if you're just chatting in the chat window, get into the habit of doing a lot of context loading: putting the documents in, putting in your full statement, your full emotions, your full thinking, your full voice statement if you're using voice. Load up that context: full situation, constraints, history. And I say full context for operators, but that's just as true if you're running production prompts, too. You want to be in a position where you can take full advantage of that context window, because these models are increasingly built to handle reasoning at the hundreds-of-thousands-of-tokens scale, and potentially up to a millions-of-tokens window shortly.
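A minimal sketch of the specificity principle above: a helper that assembles a prompt with numbered requirements, an explicit word budget, and XML-style tags delimiting each part. The tag names and the helper itself are illustrative assumptions, not any model maker's required format.

```python
# Sketch: building a highly specific prompt with numbered requirements,
# an explicit word limit, and XML-style tags to separate the sections.
# Tag names here are hypothetical conventions, not a standard.

def build_specific_prompt(task: str, context: str, requirements: list[str],
                          word_limit: int, output_format: str) -> str:
    """Return a prompt that states every constraint explicitly and by number."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(requirements, 1))
    return (
        f"<task>{task}</task>\n"
        f"<context>\n{context}\n</context>\n"
        f"<requirements>\n{numbered}\n</requirements>\n"
        f"<output_format>{output_format}, at most {word_limit} words</output_format>"
    )

prompt = build_specific_prompt(
    task="Summarize the attached incident report for executives.",
    context="(full incident report pasted here)",
    requirements=["Lead with business impact",
                  "List exactly 3 remediation steps",
                  "No jargon"],
    word_limit=150,
    output_format="A two-paragraph summary followed by a numbered list",
)
```

The point is not the helper but the habit: word counts, numbered requirements, and delimited sections give the model something concrete to satisfy.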
So, think about what you're putting in there. I had a whole video on this where I talked about context as a rudder, context shaping your journey through latent space: you provide a deterministic context that then shapes the probabilistic context your LLM agent will discover. That is still the mindset to have. I'm focusing right now on the idea that you can put a lot in the deterministic context. Now, if you are running production prompts and you're running them millions of times a day, you want to be token-efficient; you want to save as much as you can. That's a different use case. If you are trying to get a very full answer and you are on a chat screen, or you're using Claude Code or whatever it is, load up that context so Claude Code can see as much of the codebase as it can. Point it at all of the MCP servers you want to point it at. Give it a lot of context. Keep the context relevant, though. You're not trying to load it up with grandma's chicken soup recipes. You want to actually make it relevant.

Okay. So specificity is a focuser; context is currency. Number three for architecture: multi-phase workflows are becoming more and more native. They are not workarounds. We no longer have to assume that we cannot take the prompt on a journey. More and more, we can take the prompt on a journey in the course of a single prompt. Now, I will be the first to admit there are still signs that in today's models that is not as true as it could be. I think it is easier right now to take the model through multiple thinking stages than through multiple document-creation stages, where each stage produces a separate document. I expect that to go away very quickly, potentially as soon as GPT‑5 or shortly thereafter.
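For the token-efficiency point about high-volume production prompts, here is a rough sketch of packing only relevant context into a fixed token budget. The 1.3-tokens-per-word estimate is a crude assumption for illustration; a real system would use the model's actual tokenizer.

```python
# Sketch: trimming candidate context to a token budget for production
# prompts. The token estimate is a deliberate approximation; swap in
# your model's real tokenizer for accurate budgeting.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3)

def pack_context(snippets: list[str], budget: int) -> list[str]:
    """Greedily keep snippets (assumed pre-sorted by relevance) until
    the token budget is exhausted; irrelevant tail material is dropped."""
    kept, used = [], 0
    for s in snippets:
        cost = estimate_tokens(s)
        if used + cost > budget:
            continue  # skip anything that would blow the budget
        kept.append(s)
        used += cost
    return kept

snippets = [
    "error log excerpt " * 50,              # most relevant
    "deploy history " * 30,                 # relevant
    "grandma's chicken soup recipe " * 100, # irrelevant padding
]
kept = pack_context(snippets, budget=300)
```

Sorting by relevance first matters: the budget should be spent on the rudder, not the recipes.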
And so I want you to start thinking, when you're prompting, in terms of multi-phase workflows as native to AI. Ask for the whole workflow, which comes back to specificity.

Number four: structured output is a baseline. So stop asking just for thoughts. Demand scorecards. Ask for matrices. Ask for tables if you're in o3. I'm just kidding, it will give you tables anyway. Ask for phased plans. Ask for structured outputs in your document, whatever it is. The more specificity you give about the output, the more it's going to give you what you are looking for. I don't think this is necessarily new; we've been talking about this for a while, but we haven't put it in the context of GPT‑5. GPT‑5 is going to reinforce the value of those best-practice prompt architectures.

Let's look at prompt design. You need to have an interrogative principle: the best prompts expect the model to ask questions. I would assume GPT‑5 will take some of what OpenAI practiced in Deep Research and will ask questions. And as you see models work to become more proactive (Anthropic is doing this now), you're going to see more emphasis on models asking questions, sometimes whether you've asked for it or not, but it's always good to encourage it.

Number two for prompt design: build in self-evaluation loops. Every major prompt needs to have a "check your work, validate your work, go look" step, especially as models have access to a wider world, that probabilistic context I talk about. Ask them to use it. Ask them to evaluate.

Number three: force trade-offs and force prioritization. Don't let the AI hedge. Have you noticed that? I've noticed that even with o3 Pro, if you give it two choices, a lot of the time, if you give it room, it will come back with a compromise decision. Don't let it hedge. Make it choose.
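The multi-phase and self-evaluation ideas above can be sketched as an explicit pipeline. `call_model` here is a hypothetical stub standing in for any chat-completion API call, so the phase structure is runnable without network access; the phase wording is illustrative.

```python
# Sketch: a multi-phase workflow with a built-in self-evaluation pass.
# call_model is a stub; a real implementation would call your
# provider's API once per phase.

def call_model(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"[model response to: {prompt[:40]}...]"

def phased_run(task: str) -> dict:
    """Run draft -> self-critique -> revision as explicit phases,
    feeding each phase's output into the next phase's prompt."""
    draft = call_model(f"Phase 1. Draft a response to: {task}")
    critique = call_model(
        f"Phase 2. Check your work. List concrete flaws in this draft:\n{draft}"
    )
    final = call_model(
        f"Phase 3. Revise the draft to fix every listed flaw.\n"
        f"Draft:\n{draft}\nFlaws:\n{critique}"
    )
    return {"draft": draft, "critique": critique, "final": final}

result = phased_run("Propose a rollout plan for the new billing system.")
```

The same three phases can also live inside a single prompt ("first draft, then critique your draft, then revise"); the pipeline form simply makes each stage inspectable.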
Make it rank. Make it cut. This is a skill: you're basically teaching the large language model to think critically, teaching it that you value thinking critically. Push it.

And the last principle: remember that some of those old formulas still work. If you give it context, if you give it constraints, if you give it a goal, the more precise you are about all of that, the more you're going to be able to scale. People ask about role. Look, it can be helpful for tone; it was more helpful in earlier models. The point is not whether you specify the role or not. The point is whether you're able to paint the picture clearly for the model of what it needs to do, so that it can go do it. It's that precision of context that matters more than the exact words now, because the LLMs are so big that they're able to understand variance in wording. A magic word is not a magic word the way it was in 2023. And I think that's where a lot of the 2025 debate around role has come in: role was popularized as a magic word, "pretend you're a brilliant marketer." We can get into latent space other ways. We can get into latent space much more specifically by talking about the marketing outputs we want. The large language models have grown up, and we can grow up too.

So as you think this through, I want you to take away some meta-lessons. One: prompts are thinking tools. They are thinking tools that you give to a thinking machine. They're not really delegation tools. They structure your thought as much as the AI's. ChatGPT‑5 should not replace your thinking; it should amplify it. That's something I emphasize over and over on this channel.

Number two: good prompts teach bidirectionally. You learn how the model thinks while you teach it how you think. This is actually symbiosis.
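One way to operationalize the no-hedging advice above is to build the forced choice into the prompt itself. A sketch with illustrative wording; nothing here is a prescribed formula.

```python
# Sketch: a prompt wrapper that forbids hedging by demanding exactly
# one choice, a ranking of the rejects, and a stated trade-off.

def forced_choice_prompt(question: str, options: list[str]) -> str:
    listed = "\n".join(f"- {o}" for o in options)
    return (
        f"{question}\n"
        f"Options:\n{listed}\n"
        "Pick exactly ONE option. Do not propose a hybrid or a compromise.\n"
        "Rank the rejected options, and state the single biggest trade-off\n"
        "you are accepting with your choice."
    )

p = forced_choice_prompt(
    "Should we rewrite the service in Rust or keep optimizing the Python version?",
    ["Rewrite in Rust", "Optimize existing Python"],
)
```

Naming the compromise as explicitly off-limits is the key move: given room, the model will usually take it.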
You are learning how to work with the model, and the model is learning how you think. Welcome to the weird new world we live in.

Number three: specificity is liberating. It's counterintuitive, but the tighter your constraints and the more specific you are about your output, the more you're going to be able to get what you want in your creative vision. I am continually astonished by how specific extraordinary image-generation prompts are for these diffusion models. I look at them and I'm like, "Wow, who came up with that specific prompt?" And it works, and it produces something that is brilliantly beautiful and creative. I was having fun producing 3D isometric views of cities, and it's a very specific prompt. So think of specificity as learning to use a fine paintbrush to paint what you want on the canvas of the LLM. It's worth it to be disciplined.

And number four: phase complex work as if you were a project manager. I am now at the point where, if I have multiple complex research patterns I want to execute and I want to keep my context windows clean, I will break and chunk those multiple research efforts into sub-outputs: four, five, six sub-outputs that in and of themselves produce 7-, 8-, 10-, 12-page documents. And then, as a synthesis step, I will pull all of those together into a larger piece, a larger research project. You have to phase that complex work. And I think one of the interesting things about GPT‑5 is that this habit of phasing may be in a little tension as GPT‑5 rolls out. I do not expect us to get to a world with GPT‑5 where we can give it a gigantic multi-phase project and know it will be completed with virtually no hallucinations and no misunderstanding as the project evolves.
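The chunk-then-synthesize habit above can be sketched as a small orchestrator. `call_model` is again a hypothetical stub; in practice each call would be a separate API request starting with a fresh, clean context window.

```python
# Sketch: phasing a large research effort like a project manager:
# run each sub-question in its own clean context, then synthesize.
# call_model is a stub for per-request model calls.

def call_model(prompt: str) -> str:
    return f"[report on: {prompt}]"

def chunked_research(sub_questions: list[str]) -> str:
    # Each sub-output is produced in isolation to keep context windows clean.
    sub_outputs = [call_model(q) for q in sub_questions]
    joined = "\n\n".join(sub_outputs)
    # Final pass: one synthesis step over all sub-outputs.
    return call_model(f"Synthesize these reports into one document:\n{joined}")

final = chunked_research([
    "Market size for on-device LLMs",
    "Hardware constraints for local inference",
    "Competitor landscape",
])
```

The synthesis call is the only step that ever sees everything, which is exactly what keeps each research chunk's window clean.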
And part of why that is, is that the LLM goes through the same learning process that we do when we are doing research. So it would be a little weird to specify everything in advance, waterfall-software style: "this is all the stuff you're going to do and nothing will change." Anyone who's built waterfall software will tell you that never works. It just doesn't. Instead, you want to get to a place where you can actually build value, see if it works, and keep going. In a sense, the prompting approach that I think works, works because agile works: we are prompting in order to learn how the model is thinking, in order to come back and prompt again. I think that process is still going to work with GPT‑5, and I think we're going to keep finding value in chunking. So thinking like a project manager, thinking about how you delegate work and chunk it, is going to matter.

Okay, I could keep going. I'm excited for GPT‑5. The thing I want to call out last of all is that we are moving from "AI might help, AI can help" to "how do I structure a partnership with AI?" We need to assume capability and focus on a partnership architecture that helps us move forward. And I talk about prompting because that's the term people learn, understand, and search for when working with AI now. But really, if you step back, I'm talking about the architecture of a partnership. I'm talking about the architecture of how our minds connect, about how we start to develop shared context. I hope that's helpful. There's tons more in the gigantic 139-page document that I put together on prompting, and you can hit the link in the Substack.
Well, it was 139 pages, and I kind of pounded my head on the desk, but it was worth it, because I was able to cross-reference it with the major model makers' prompting guides. I was able to make sure it works across different models. And I'm a geek; it was fun to write. So there you go. Cheers.