Learning Library


Meta-Prompting: Dual Strategies Revealed

Key Points

  • The way prompts are worded and structured dramatically impacts AI behavior, and mastering these details enables tailored, goal‑specific outputs.
  • By presenting two versions of the same prompt—a “hard‑mode” framework prompt and a beginner‑friendly, diagnostic‑question flow—the speaker illustrates how subtle tweaks produce different learning systems rather than single responses.
  • Logging prompts in tools like Notion allows you to attach an AI assistant (e.g., Comet) to evaluate and compare prompt effectiveness, turning AI into a self‑reviewing coach.
  • A robust prompt template typically includes a role, purpose, instructions, references, and desired output, even though the role itself may not boost factual accuracy but sets the contextual tone.
  • Prompting should be viewed as a process for driving iterative learning systems, not merely a one‑off request, and using AI to refine AI prompts accelerates mastery of the technology.


# Meta-Prompting: Dual Strategies Revealed

**Source:** [https://www.youtube.com/watch?v=2uC5WllehxY](https://www.youtube.com/watch?v=2uC5WllehxY)
**Duration:** 00:24:15

## Sections

- [00:00:00](https://www.youtube.com/watch?v=2uC5WllehxY&t=0s) **Meta Prompting: Teaching AI via Examples** - The speaker outlines an interactive session where they reveal how they refine prompts across multiple versions to achieve two goals—demonstrating prompt‑engineering techniques and teaching AI concepts—highlighting prompts as systems that drive learning.
- [00:03:20](https://www.youtube.com/watch?v=2uC5WllehxY&t=200s) **Prompt Blueprint for AI Tutor** - The speaker outlines how to define a two‑step goal—first a diagnostic quiz, then progressively harder lessons—in a prompt that transforms the assistant into a personal AI learning tutor.
- [00:07:09](https://www.youtube.com/watch?v=2uC5WllehxY&t=429s) **Custom Prompt Blueprint Workflow** - An overview of how the Prompt Coach guides users to create a tailored AI prompt by selecting strategy, effort level, and agentic mode, and either answering step‑by‑step questions or supplying all inputs up front to generate a ready‑to‑use prompt blueprint.
- [00:10:17](https://www.youtube.com/watch?v=2uC5WllehxY&t=617s) **Strategic Prompt Design Techniques** - The speaker explains how to craft sophisticated, multi‑layered prompts—using example blurbs, placeholders, and explicit role/goal structures—to guide advanced language models without exposing full prompt content.
- [00:13:22](https://www.youtube.com/watch?v=2uC5WllehxY&t=802s) **Easy‑Mode AI Tutoring Blueprint** - The speaker outlines a beginner‑friendly “easy mode” prompt that launches a personal AI tutor using single‑question diagnostics and micro‑lessons to quickly start learning AI without overwhelming the user.
- [00:16:30](https://www.youtube.com/watch?v=2uC5WllehxY&t=990s) **Active Micro-Learning Blueprint Design** - The speaker outlines a revised prompt that generates a learning blueprint using single-question micro‑lessons, active‑learning tactics, pacing controls, and a structured output format (diagnostic, concept, practice, stretch goal).
- [00:20:13](https://www.youtube.com/watch?v=2uC5WllehxY&t=1213s) **Building a Custom AI Tutor** - The speaker walks through creating and testing a prompt‑driven learning tutor, toggling an “easy mode” that uses chain‑of‑thought reasoning, and briefly explains back‑propagation in simple terms.
- [00:23:54](https://www.youtube.com/watch?v=2uC5WllehxY&t=1434s) **Managing Prompt Variations Effectively** - The speaker explains they'll share full-text prompts on Substack for easy use, demonstrate how different wordings achieve the same goal, and highlight how structure and wording influence prompt outcomes.

## Full Transcript
You know, the details of how we prompt profoundly influence AI. Most people know that, but they don't know how to shape those details so they matter. I find that whenever I do prompt content, people get really excited, but there's been a gap, something that's been missing: me talking you through multiple versions of the same prompt that I created, so that you can see how I tweak and change the details to get slightly different variants according to my goals. Today I want to give you two for the price of one. It's not just two prompts, it's also two goals. You're going to learn how I tweak prompting and the structure of a prompt through an interactive video like this, where I share all of my details on how I construct the prompt. You're also going to learn about AI itself, because you're going to see the prompt, and the prompt is a prompt to teach you AI. I know that's very meta, but we're going to get into it, and you're going to see why it works. It's going to introduce you to the idea of prompting as systems of learning and systems that drive process. I think one of the biggest misconceptions about prompting is that you prompt for just one response. You're going to see that both of these prompts are not just for one response; they actually drive systems of learning. With that in mind, let's get to it.

Okay, here we are: prompts for learning AI. Do you notice the first thing about this? I have my Comet assistant pulled up on the side, so I can chat with this prompt using AI. That's one of the advantages of logging your prompts in a place like Notion: you can actually use an AI assistant to review your prompts, and I do that here. I asked it which prompt is more helpful for beginners, and it analyzes the prompts.
It says this first one lays out a framework (purpose, instructions, reference, output), which we can see right here. That's perfectly correct. It calls out that version two focuses heavily on single questions: begin with one diagnostic question, record my answer, and then ask the next question. That's also perfectly correct. So it concludes that version two is easier for beginners, and it creates a nice little table here. It is great to use AI to help you with AI; it's one of my biggest tips. AI is a self-learning technology: the more hands-on you are with AI, the better you're going to do. Okay, let's get into today's prompts.

Version one is sort of hard mode; it's the one where you define your own AI goal. The first thing we do is take a role and give it a purpose: "You are my prompt coach. This is our shared mission." Our mission is to craft a prompt blueprint that turns the assistant into a personal AI tutor. What this section does is twofold. First, it adopts a role. People will tell you the role doesn't matter, because the role has been shown and tested not to improve factual accuracy on recall. That's true, but that is not the point of the role, and people who think it is misunderstand it. The point of the role is to help the model get into a semantic space so that the conversation flows more smoothly and the model can more easily understand where we are trying to go with the conversation. It has nothing to do with factual recall. It may have helped with factual recall in the beginning, back in 2022, but it certainly doesn't now. Next we have an outcome, or goal: our shared mission is to craft a prompt blueprint. Already you can see some of the differences between prompts.
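To make the mechanics concrete: in most chat-style APIs, a "role" line like the one above simply travels as the first (system) message of the conversation, which is exactly why it frames the semantic space without changing factual recall. A minimal sketch in Python; `build_messages` is a hypothetical helper and no particular provider's client is assumed:

```python
# Sketch only: assemble the message list a chat API would receive.
# The role line is just the first message, not an accuracy switch.

def build_messages(role_line: str, mission: str, user_turn: str) -> list[dict]:
    """Assemble a chat payload whose first message sets the semantic frame."""
    return [
        # The role primes the conversation ("prompt coach"); it shapes tone
        # and direction rather than improving factual recall.
        {"role": "system", "content": f"{role_line}\n\nShared mission: {mission}"},
        {"role": "user", "content": user_turn},
    ]

messages = build_messages(
    "You are my prompt coach.",
    "Craft a prompt blueprint that turns the assistant into a personal AI tutor.",
    "Let's begin.",
)
print(messages[0]["role"])  # system
```

The same structure holds whether the role is pasted at the top of a chat window or sent programmatically: it is context, not configuration.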
This prompt is focused very heavily on learning together, and it expects a lot from you, the user. It turns the assistant into a personal AI tutor for AI learning that (a) quizzes methodically and (b) delivers progressively harder lessons. This is the heart of what you want the model to do; it's what we would call the definition of goal. And it's quite a complex definition of goal: you basically have to get the model to understand that it needs to do two things, and in a particular order. We signify that by being clear about the overall goal, what we want the assistant to do, the semantic space it occupies, the stance it takes (whether it's interrogative or not; clearly it's interrogative here), and then what steps it takes to reach that goal at a high level. First it has to quiz methodically to diagnose my current level. I'm still using technical language here because it reinforces that we are in a place where we care about hard learning. Then it has to deliver progressively harder lessons. "Progressively" is doing a lot of work right there; it is really laboring to make it clear to the LLM that we should not start with hard mode in the beginning. Then it says we'll follow the prompt blueprint framework from "Your Prompt Is the Product." That is actually an earlier article of mine that has, I think, made it into AI land; other people are now using it and seeing some success, so we are trying it. Then we outline the sections, reinforcing the parameters: that framework has four sections, in this order: purpose, instructions, reference, output. Here is what's critical: we've laid out what we expect the model to do in that first paragraph up top, and we've explained where we're going with the prompt as a whole.
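The four-section framework just described (purpose, instructions, reference, output, in that fixed order) can be sketched as a tiny template assembler. This is an illustrative reconstruction, not the speaker's actual tooling, and the section filler text is made up:

```python
# Sketch of the four-section prompt blueprint as a template assembler.
# Section names follow the framework in the video; content is illustrative.

BLUEPRINT_SECTIONS = ("Purpose", "Instructions", "Reference", "Output")

def assemble_blueprint(sections: dict[str, str]) -> str:
    """Emit the four sections in the fixed order the framework insists on."""
    missing = [name for name in BLUEPRINT_SECTIONS if name not in sections]
    if missing:
        # Mirrors the prompt's gatekeeping: no final blueprint until every
        # section is filled.
        raise ValueError(f"Blueprint incomplete, missing: {missing}")
    return "\n\n".join(f"## {name}\n{sections[name]}" for name in BLUEPRINT_SECTIONS)

draft = assemble_blueprint({
    "Purpose": "Tutor me in AI: quiz first, then progressively harder lessons.",
    "Instructions": "One section at a time; no skipping ahead.",
    "Reference": "Draw inspiration from the sample prompts below.",
    "Output": "Markdown lesson plan.",
})
print(draft.splitlines()[0])  # ## Purpose
```

The point of sketching it this way is that the blueprint is data with a fixed shape, which is what lets the prompt coach refuse to finish until every slot is filled.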
This is all preamble. Now we're getting to where the prompt actually begins to have teeth. You can see how this is a very advanced prompt, because it has a lot of setup to get the model where it needs to go. Most prompts I see don't put this much effort into the setup, and this is part of how you get more complex prompt results. Okay: workflow rules. Now we're telling it how to use all this. We haven't even given it the purpose, instructions, reference, and output yet, and we're already telling it how to use them. We're using markdown throughout; when you see those little asterisks, they are intentional, because they help the model see emphasis. It reads them as bold. "Section by section, no skipping ahead": that's critical, because the model might be tempted. "Full question set: show me every question I must answer and provide a concrete example answer for each." What's interesting here is that this is potentially going to make the user work very, very hard, because it may display a bunch of questions at once. You'll see how we tweak that for easy mode in the next prompt. So this is definitely an example of where we've changed it and made it hard mode, because we've allowed the model to be complete. "Gatekeeping: wait until I answer all the questions. If an answer is unclear, ask a follow-up." Again, this is an example of going to hard mode, because an easy mode would understand that if you answered one, two, and three incompletely, you probably don't know four, five, and six. This one assumes you know enough that you can reasonably answer the AI. We then go to memory: "Carry my confirmed answers forward. Do not ask for them again." I don't want it to bug me again. Next, examples for reference.
"When illustrating, draw inspiration from the sample prompts below": pricing strategy, content calendar, agentic monitor, pitch deck review. Finish line: after all four sections are filled, assemble and display the final prompt blueprint in this format. And do you see what we're doing? Do you see what we just did? Think about it. This prompt coach (I was waiting for this; it's a nice surprise) exists to help you build a prompt that is custom to you and your knowledge level of AI, so that you can learn about AI the way you need to. That's why it's hard: you have to answer all of these questions, and then it has to output a prompt in the right structure that you can use for a lesson plan. It goes into the prompt blueprint. Mode: reflection, action, agentic. Those are three different options, like light switches; you have to specify them. Effort: quick, standard, deep. Again, you have to specify. You also have to specify your goal. What's interesting is that you have two ways to do this. You can either let it ask you questions piece by piece, and it will develop the goal as it asks, or you can skip ahead: answer the questions it's going to give you, but also give it something to work with here at the start. Both of those work, because prompting essentially just gives you ways to pull the model where you want it to go. And in this case, with the purpose, you know where you want it to go. Great; you don't have to make the model work for that, and it can ask other questions. Instructions, behavioral guidelines, task description, and constraints are really important. Allowed tools are really important.
Those are things you can fill out yourself if you feel like you have an opinion, or they are things the model is instructed to fill out by questioning you. So you don't have to know at the start, but you will know by the end. Reference: files, tables, numbers, external knowledge, and relevant context. It will fill that in depending on the context it has for you, or you can call out lessons. One of my favorites is to invoke Andrej Karpathy, who is strongly parameterized in the model, and say I want you to follow his lesson planning. It will do that, and it's a very easy shortcut. Expected output format: you can make it an essay back once it teaches you, it can be JSON if you're technical, and so on. And then the length instructions: you can frame these as tokens or words. I used words because I'm assuming you might want an essay, or you might want to frame it in markdown; you can also constrain it to tokens, and it will work just fine. Sample prompt references: now, isn't this interesting. I almost gave this away earlier. These are the four we referenced up above, in "Examples for reference: draw from them for inspiration." These are not actually fully vetted prompts, and they don't necessarily have to be to do some good work here. They could be. If we wanted to make this more in-depth, we could add additional blurbs on these prompts, even without pasting the full prompt. If we pasted the full prompt, there is some risk that we would hijack the model and get it to run like a pricing strategy prompt, and that is why I did not put the full prompt in here. Instead, I invoked the kind of depth I want in other examples: this is what I want from a pricing strategy perspective.
If we're running this through pricing strategy, this is what I want; from a content calendar perspective, this is what I want; you get the idea. What I'm doing there is challenging the model, across four different examples, to think about how deeply I want it to think. It then needs to read that back against the earlier part of the prompt and draw inspiration from it. It draws vibes from that, so it knows to go deep. That is a fairly sophisticated example implementation, because within the same prompt I call out the example and then reference it farther down, and I reference it with a placeholder. So if we zoom out and look at this prompt overall: we have a role at the top ("You are my prompt coach"), getting the model into semantic meaning. We have a shared mission, a shared goal, and then a way to get that goal done, in order. Again, we have constructed this very carefully so it will do A and B in order. This will tend to be followed better by a thinking model (a Gemini 2.5 Pro, a Claude Opus 4, an o3), because they can parse the instruction set. We then give it a sense of what's in the box. We say refer to "Your Prompt Is the Product," which to some extent may be in the model at this point, and which has these four framework sections. We don't assume it's in the model; we refer to it if it's helpful, and then we define what is important to us: the prompt this is outputting. Because, remember the big surprise: this is a prompt to develop a custom learning prompt for you. So it needs a purpose that matches you, instructions that match you, references that match you, and output that matches you. It needs to get there by following these workflow rules. It has to go section by section and be methodical. It has to ask the full question set.
It has to gatekeep and expect you to answer all the questions. It has to use its memory and not just re-ask. And it has to refer to these examples, which are basically placeholders for thinking deeply about various subjects. They're picked to be different subjects from what we have in the prompt, so it doesn't confuse the prompt. In other words, referencing them helps the prompt understand the depth, but the prompt understands that this is about building a prompt for AI at this point, so it's not going to get too distracted. If we wrote out the full prompt, that might be too much. Finish line: assemble and display the final prompt. We mentioned that you have options for these sections: you can fill them out now, before you paste the prompt, or you can opt to have the model fill them out as you go. And that's how it works. Let's go to prompt number two.

Let's say you're impatient. You don't want to do all of this work just to build a prompt, just to get an action-oriented plan, and so on. Instead, all you want to do is get started. Well, that's what easy mode is for. We are again going to walk through the code. We start with the same role, and this has the same purpose: our shared mission is to run a personal AI tutoring program that diagnoses my current level and delivers progressively harder lessons. Very similar, except we include the line "without overwhelming me," because we know it's a little more aimed at beginners. Again, we invoked "Your Prompt Is the Product," and we begin to talk about the constraints before we get to the blueprint. These are constraints designed to make it easier to consume: one is single-question mode, one is micro lessons. So it's not too much.
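Those two constraints, single-question mode and micro lessons, amount to a simple control loop: diagnose, teach, practice, and only raise the difficulty once the learner clears a bar (the prompt's rule is a score above 80% on the prior practice task). A hedged sketch with simulated practice scores; the helper name is made up:

```python
# Illustrative sketch of the micro-lesson escalation loop.
# Practice scores are simulated; a real tutor prompt judges them itself.

def run_micro_lessons(practice_scores: list[float], start_level: int = 1) -> int:
    """Return the difficulty level reached after the given practice scores."""
    level = start_level
    for score in practice_scores:
        # Each pass: 1) one diagnostic question  2) teach  3) practice task
        # (all elided here; only the escalation rule is modeled).
        if score > 0.80:   # escalate only past the 80% bar
            level += 1
        # Otherwise stay at the same level: "it will sit there until you
        # learn it."
    return level

print(run_micro_lessons([0.6, 0.85, 0.9, 0.7]))  # 3
```

The loop makes the design trade-off visible: easy mode never blocks on a full question set; it just refuses to escalate until the learner is ready.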
At the end of this video, I'm going to show you how each prompt looks in reality, at least for the first turn or two. The prompt blueprint then has the similar purpose, mode, effort, and goal; that hasn't changed. And it fills in stuff you would otherwise have to fill in. The purpose is minimum viable understanding. The mode defaults to agentic, so it's going to be more active with you, and you can override it anytime with a command. The effort defaults to standard; you could change that in this prompt, but it prefills it so it won't ask you. The goal is to learn AI fast via single-question diagnostics toward tougher lessons. Very simple. Quick-start diagnostic: it's a shorter version of the workflow. Begin with one question (again, we're simplifying). Record my answer. Respond with short feedback, and then ask a single question; you cannot ask more than five. Again, we're looking to simplify. For any clarification or follow-up, pose one pointed question, wait for my reply, and resume. Here is how micro lessons work: ask a diagnostic question, teach, give a task or code snippet to practice, and then offer an optional harder challenge. Escalate the difficulty only when I score more than 80% on the prior practice task. It will sit there until you learn it. Again, this is designed for folks who don't know their level and need help to learn. Then, defaults and overrides. This is exactly what we just described (effort: standard), and it adds a time horizon of 12 weeks, which is interesting. Essentially, here is what I am doing with the 12 weeks: I am not saying actually take 12 weeks, and the model won't.
I'm saying 12 weeks because that triggers a part of semantic space where the model believes this is a real course. The courses the model studied during pre-training, if they're AI courses, tend to be 12-week courses as complete courses, and I'm invoking that here. I can send "batch" to allow up to three questions at once, or come back to shorten lessons further, so this gives me controls. This is why you read your prompts. If a missing detail blocks progress, ask only one clarifying question. Retain confirmed answers; that's the same. This does still get you to a blueprint for learning, but you know what's interesting? It doesn't stop you learning along the way, whereas the earlier prompt was going to more or less delay a lot of the learning until you answered all the questions in a row. That's going to be very overwhelming for some folks who are earlier in their learning journey. "This is how you want to teach": we didn't have any of this before. Use active learning tactics, mini projects, code snippets, and thought experiments; cite authoritative sources; and use markdown. Accept pacing commands: this gives you tips on how to issue pacing commands. I can tell it to skip if I want. On "checkpoint," it should summarize my progress, which is very handy. Then I seed references, and these are all great standard references, so you don't have to go after them. And then this is the output format per lesson: when it produces the blueprint, it's supposed to follow a diagnostic, a concept, a practice, and a stretch goal. And then: begin execution now. This is how you begin. Okay, that is a very different prompt. It accomplishes the same goal, but you can see how flipping a few things at the beginning really changed it.
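Because the per-lesson output format is a fixed set of parts (diagnostic, concept, practice, stretch goal), one practical option if you request JSON output is to validate each reply against the expected keys. This sketch is illustrative, not part of the video's prompt, and the sample reply is fabricated:

```python
import json

# Sketch: validate a lesson reply against the per-lesson format.
# Key names mirror the format described in the transcript.

REQUIRED_KEYS = {"diagnostic", "concept", "practice", "stretch_goal"}

def valid_lesson(reply: str) -> bool:
    """True if the reply is a JSON object containing all four lesson parts."""
    try:
        lesson = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return isinstance(lesson, dict) and REQUIRED_KEYS <= lesson.keys()

sample_reply = json.dumps({
    "diagnostic": "What does a training data set contain?",
    "concept": "Models learn from labeled or structured examples.",
    "practice": "Label three example sentences yourself.",
    "stretch_goal": "Explain why a test set must stay unseen during training.",
})
print(valid_lesson(sample_reply))  # True
```

A check like this is most useful when you chain the tutor's output into other tooling; for reading lessons yourself, markdown is friendlier.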
It is asking only one question at a time. It is focusing on micro lessons. We are weaving that through by emphasizing single questions in multiple places. We are also filling in things that are not filled in in the prompt up above; we're saying the mode defaults to agentic. If you scroll up, you still have a mode, which I think is a helpful thing, but it doesn't say you have to pick. So we're just making it easier and giving you fewer choices so you can get started. Let's see how these prompts actually run.

Okay, let's run the first prompt: "You are my prompt coach." Remember, this is the hard version: quiz methodically and deliver progressively harder lessons. We're going to go ahead and run it. My model over here is o3, so let's see how we do. It thinks, and then it gives me a table of questions, along with example answers it will accept. I can type them in and answer, so we're going to keep this pretty simple. One, what is your overarching learning goal? I want to learn AI basics fast before ChatGPT 5 gets here. Two, let's go with action. Three, I want quick; I don't have a lot of time. So you can see it's just starting to walk you through. Now it wants to ask me all of these questions at once: interaction cadence. These are all things that will go into the prompt. One: yes, one question at a time. Two: 150 words sounds good. Three: I'm keeping it quick (I could answer at more length, but I won't); multiple choice, please. Four: tone is conversational. Five: give me hints. Six: allowed references? Yes, go get references. Okay, we're going to keep it simple. One: Karpathy as a trusted source, plus folks on that level; we're being deliberately vague. Next, personal context.
This is me making it harder for the model: I don't know a lot about AI. Existing notes: I don't have notes. Off-limits topics: none. Great. So the model is just chewing through this, and eventually it's going to find that it has everything it needs to produce the blueprint. What kind of format do we want? Base format is markdown. I'm not even going to number my answers; you can also do this. Four options. Show the correct answer with an explanation. I'm not even using good grammar; again, I'm trying to show you that you can really mess with this prompt. Recap after three questions. Great. So it thinks it's close. And there's the prompt blueprint, and now it's writing the prompt out: "Well, with that in mind, let's go ahead and write the prompt." And we have our prompt, and we're off to the races. I'm not going to do the rest of this, but you can see you built your own learning tutor system, which is really cool.

You might wonder what easy mode does. Let's see. We're going to go over to version two, copy it, and paste it in here. Here we are on easy mode. You think it's easy? How is it this easy? Watch: I answer the first question with "I honestly don't know." You can see how it's using chain of thought there to pick an easier path forward. "No worries. Back propagation is the algorithm that propagates the error signal from a network's output layer back through its hidden layers, letting each weight adjust itself to reduce future error. It's the workhorse behind neural network training." And then it asks a question. So I'm going to give it two answers. One: a training data set is like the internet or books that models read prior to completing training.
Again, keeping the answer simple. Two: tell me in simpler words how back propagation works. I didn't even spell "propagation" correctly, but the model will figure it out. So I'm immediately pushing back on the prompt, basically saying yes, but you have to make room for me to have questions along the way, and we'll see how it does. Models tend to do okay with this, usually. So it likes my answer, and then it expands on it: a training set is labeled or structured examples; it's flashcards for the lesson, not the internet, which is a great visual. Then it answers my other question: this is how back propagation works. It guesses. It measures its mistakes. It sends the blame backwards. And it tweaks the weights. That is super clear; it actually answered my concern. Then, diagnostic question three: what problem does a validation or test data set help detect when training a model? And it actually gives you a reference that it's following for this, which is handy, because I can now look it up. I won't, but I can.

Okay, you have now seen how both of these prompts work. I don't know which one you want to pick, but the idea is that you come away from this exercise understanding how small changes in the prompt itself actually shape what the prompt can do. Prompt one effectively became a full learning system for us. You can use it to construct any version of learning; it's like clay, but structured clay. You can flip those switches (agentic mode, hard mode, whatever) and get what you want. Easy mode, by contrast, fills in a lot of those choices and also imposes some extra structure, like one question at a time, that helps you just get started learning right now.
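The model's four-step summary of back propagation (guess, measure mistakes, send blame backwards, tweak the weights) can be shown in the smallest possible setting: one weight, one training pair, squared error. The numbers here are arbitrary and the helper name is made up:

```python
# Minimal back-propagation sketch: one weight, one training pair,
# squared-error loss, plain gradient descent.

def train_one_weight(w: float, x: float, target: float,
                     lr: float = 0.1, steps: int = 50) -> float:
    """Return the weight after repeatedly applying the four-step loop."""
    for _ in range(steps):
        guess = w * x           # 1) guess (forward pass)
        error = guess - target  # 2) measure the mistake
        grad = 2 * error * x    # 3) send blame backwards (chain rule on error**2)
        w -= lr * grad          # 4) tweak the weight
    return w

w = train_one_weight(w=0.0, x=2.0, target=6.0)
print(round(w, 3))  # 3.0, since 3.0 * 2.0 matches the target of 6.0
```

Real networks run the same loop over many weights and layers at once, with the chain rule routing each weight's share of the blame; nothing conceptually new is added.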
And then you saw that in hard mode you could actually build your own learning system by using the prompt to build the prompt. It's a fairly advanced technique, and a lot of people roll their eyes, but it's actually really helpful. The prompt becomes the scaffold you can use to build what you want, custom to you. I don't know where you're at in your knowledge, and I want you to get the most value possible. Part of why I picked this exercise today is that I wanted you to see how to use these prompts for things like building additional prompts (they're called meta prompts when you use them to build additional prompts), and also because I wanted you to get a concrete action out of this: you can actually learn AI with these prompts. I'll be putting these on the Substack in full text so you can see them, grab them easily, and be off to the races. You can use them for whatever you like. I hope this has been helpful. In particular, I wanted you to see how I manage differences between different kinds of prompts that accomplish the same goal, and I want you to come away with a deeper understanding of how structure and wording influence prompts.