Learning Library


Contract-First Prompting for Clear Intent

Key Points

  • Prompt failures usually stem from vague intent, as human language and individual expertise make it hard to convey precise meaning to an LLM.
  • “Contract first prompting” is proposed as a technique that establishes a clear, shared technical agreement with the LLM before it begins work.
  • Relying solely on the LLM’s clarifying questions is insufficient because it leaves the model to choose unstructured queries, leading to continued ambiguity.
  • A contract‑first prompt should explicitly state the mission, goals, and detailed requirements—mirroring how engineering teams write service contracts.
  • The method isn’t about magic wording; it’s about framing the prompt as a concise, contract‑like description that ensures the LLM fully understands the intended task.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=i4Jfl1IW-_U](https://www.youtube.com/watch?v=i4Jfl1IW-_U)
**Duration:** 00:09:28

## Sections

- [00:00:00](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=0s) **Contract-First Prompting for Clear Intent** - The speaker explains that vague intent causes most prompt failures and proposes a “contract first” prompting method—mirroring software service contracts—to establish a precise, shared understanding with LLMs before they begin work.
- [00:03:15](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=195s) **Iterative Prompt Clarification** - The speaker describes how the AI scans a vague prompt, identifies missing constraints, and asks targeted questions one at a time until it reaches high confidence, illustrated with a request for a 500‑word summary of Balkan history.
- [00:06:37](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=397s) **Prompt‑Driven PRD Clarification Process** - The speaker explains how the iterative prompting technique resolved ambiguous scope and intent for a multi‑channel live‑stream comment aggregation tool, ultimately producing a clear product‑requirements document.

## Full Transcript
Almost every prompt that fails, fails because intent wasn't clearly communicated. Human language is really, really rough on intent. And it's not just a function of a particular language, or of the fact that it's human language. It's really the fact that we as individual people bring so much domain expertise, so much passion, so much energy, so much experience to a particular subject we want to work on. We try to convey all of that in words to the LLM and say, "Please work on this with me." And I will tell you, even as someone who's relatively experienced with prompting, it is frustrating for me sometimes too; I also struggle with getting that intent across in a way the LLM understands.

I want to suggest a technique to you today that I haven't seen elsewhere and that I've had success with. It's called contract-first prompting. You might think to yourself: contracts? Are we signing things? No, that's not what I mean. I mean contracts in the sense that engineering teams use them, where they write contracts and agreements with one another about how their microservices will interact, what the service-level agreement will be, what the latency will be, all these technical specifications. In the same way, we need to get to a point where we have a very tight, technical, shared understanding with the LLM of the meaningful work we want to do together before it starts to work. That has been very difficult to do, and I am not satisfied with the usual answer here, which is to just ask the LLM to ask some clarifying questions. People report success with that; I have also had some success with it. But I want to emphasize that it is a very scattershot, unprofessional approach to actually dealing with this issue.
You are giving the LLM, which is swimming in a sea of ambiguity, free rein to pick a question that it thinks may help. You are not really giving it any parameters or structure around that question set, so that it knows it got it right and that it understood your intent. That is why asking clarifying questions can be helpful but not sufficient.

So what's better? What does contract-first prompting look like? Well, I'm so glad you asked. We're going to look at a prompt I wrote that illustrates contract-first prompting, and I am going to talk you through it. It's quite fun.

Okay, here we are. I want to call out each of the key elements. I am not a believer in claiming there are magic words in prompting. Everything here has a reason, but you will obviously be able to build these in ways that are useful to you. It's not that this is the only way to build a contract-first prompt; I want you to walk away with the intent, not the magic words.

First of all, it's always good to give an LLM a mission: your goal is to turn my rough idea into a very clear work order. I am assuming you have work that matters here. Maybe it's building educational materials, or a PowerPoint presentation, or the script for a PowerPoint presentation (I don't expect ChatGPT to do a great job building the PowerPoint itself), or maybe it's building software. Whatever it is, it's meaningful work, and the LLM will deliver the work only after both of us agree it's right, which is the critical contract piece of this. So what goes into it? Number one, we need to make sure that it understands what the gaps to goal are, the gaps to intent.
So when you initially print this thing into the chat and say go, it's first going to say, "I'm ready. What do you need?" And you're just going to write out a rambling sentence or two, because so often when we're defining work, we don't know any more than a sentence or two. That's what stops people from doing a better job prompting initially. So you just write what you have; it can be really messy. The prompt then goes into step zero here: silently scan and list every fact or constraint it still needs. Then it starts digging, asking one question at a time until it gets to 95% confidence that it can ship the correct result. The prompt gives it some examples of places to dig: purpose, audience, facts, success criteria, length, tech stack (if code), edge cases, risk tolerance, and so on. But I will tell you from experience running this prompt, that is not an exhaustive list. It will go other places.

Here's an example that's really useful. I wanted to really stretch it with a highly ambiguous human prompt, so I asked it for a 500-word summary of the history of the Balkans since 1660. Why? Because that's pretty ambiguous; a lot goes on in the Balkans after 1660. And you know what it figured out? It figured out that one of the key leverage points for writing a good 500-word summary was how it was going to handle the evolution of political entities and their naming conventions across all of that time period. It needed to figure out what kind of scope I wanted so it could cover the arc of history in a way that made sense for my work assignment.
So even though it wasn't named as a constraint, it had three or four rounds of questions for me, asking me to pull apart my intentions around how political entities would be discussed and described in this 500-word summary. And by the way, why did I pick 500 words? Because I wanted to challenge it; shorter is harder than longer here. It eventually got to something that was a really solid summary of Balkan history since 1660 in 500 words. All of that clarification was really helpful.

The echo check happens when it thinks it's close. It replies with a crisp sentence: it states the deliverable, it states something it knows it needs to include, and it states a hard constraint. That is designed to be a very easily readable summary of the work that you can engage with.

Then this prompt has what is effectively a mini program inside. You can say yes and lock it. You can edit it. You can ask for a blueprint or outline of what's going on. Or you can call out the risks and ask the LLM to define what's risky about the prompt as it stands. The prompt gives it directions for what to do in each case: how to handle "yes" to lock is intuitive, edits are intuitive, but it defines blueprint and risks so the LLM understands them. When it's building and self-testing, it gets special instructions for how to handle code, so it's responsible and reminded to review its code. That's something I could have extended into documents and so on, but code is often error-prone, so I thought it was worth it. And it gives you the option to reset.

This prompt is really short. You might wonder, how on earth does this work? Well, it turns out you don't necessarily need a long prompt to get to contract-first intent. You just need clarity around the sequence of steps.
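To make the "mini program" concrete, here is a minimal sketch of the reply dispatch the transcript describes. This is illustrative only, not the speaker's actual prompt: the command verbs (yes, edit, blueprint, risks, reset) come from the transcript, but the function name, signature, and return values are hypothetical.

```python
# Hypothetical dispatcher for the human's reply to the echo-check contract.
# Command verbs follow the transcript; everything else is an assumption.
def handle_reply(reply, contract):
    words = reply.strip().lower().split()
    verb = words[0] if words else ""
    if verb == "yes":
        return ("locked", contract)      # lock the contract and start building
    if verb == "edit":
        return ("editing", reply)        # fold the requested edit back in
    if verb == "blueprint":
        return ("blueprint", contract)   # outline the planned work first
    if verb == "risks":
        return ("risks", contract)       # list what is risky as it stands
    if verb == "reset":
        return ("reset", None)           # start over from the rough idea
    return ("digging", reply)            # anything else: keep clarifying
```

The point of the dispatch is that the human stays in control of the contract: nothing gets built until the reply is an explicit "yes".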
All we're doing is saying: one, list the gaps to goal, which I almost never see in prompts; two, dig for those gaps until you get to 95% confidence; and three, from there, offer a path forward that I can choose and control, because we're trying to write a contract together. Is this the only way to write contract-first intents? Absolutely not. Is it a really useful way to get to clarity of intent? Yes.

And I didn't just do this with history; I did it with software. I've actually been working on a software project because I'm interested in centralizing comments in live streams across multiple channels, so I've been playing around with a software idea for that. Again, it's ambiguous. How many channels? What do I include? What counts? What's an MVP? How many users? These are all things I could try to put into a heavy PRD prompt initially, but I'm not really there yet. I really want to just talk about it, and I want to talk about it in a structured way. This was really useful for that, because I could actually say: I really want you to produce a PRD for this, but I don't have the intent yet, so dig with me until we get to an agreed contract of work to produce a PRD with clean, clear intent. And it did.

Now, if you look at that and it feels really obvious to you, congratulations; that means it should exist in the world and you should try it. But I will tell you, I have done a fair bit of digging, and it is not as obvious as you would think. This is not a technique I can find other places. I'm a little surprised, because I think it's a very token-efficient way of getting to clarity of intent when we assume that humans are humans.
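The three-step sequence above can be sketched as a small loop. This is a hedged illustration under assumptions, not the speaker's implementation: `ask` stands in for any chat-completion call (hypothetical signature `ask(prompt) -> str`), `answer` stands in for the human replying to each question, and the confidence score is self-reported by the model, as in the transcript.

```python
# Illustrative sketch of the contract-first loop; all names are hypothetical.
def contract_first(rough_idea, ask, answer, target=0.95, max_rounds=10):
    # Step 1: list the gaps to goal -- every missing fact or constraint.
    gaps = ask("List every fact or constraint you still need to deliver "
               f"this correctly, one per line:\n{rough_idea}")
    answers = []
    for _ in range(max_rounds):
        # Step 2: dig one question at a time until the model reports
        # (self-assessed) confidence at or above the target.
        q = ask(f"Idea: {rough_idea}\nGaps:\n{gaps}\nAnswers so far: {answers}\n"
                "Ask the single most useful clarifying question, or reply "
                "'DONE <confidence 0-1>' if you can ship the correct result.")
        if q.startswith("DONE"):
            if float(q.split()[1]) >= target:
                break
            continue
        answers.append((q, answer(q)))
    # Step 3: the echo check -- a crisp contract the human can lock or edit.
    return ask("In one crisp sentence, state the deliverable, one thing you "
               f"must include, and one hard constraint.\nAnswers: {answers}")
```

Note the design choice the transcript emphasizes: the loop never builds anything itself; it only converges on a contract that the human then chooses to lock, edit, or reset.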
And I'm increasingly interested in prompting techniques that assume that humans are humans. We are not perfect. We do not always write the full prompt out. We do not always have the full, crisp, complete intent. In fact, mostly we don't have any of those things. What we have is a vague human idea backed by a tremendous amount of context and experience, and we need help fishing that out of our heads and getting to clarity. That is what a contract-first approach to prompting seeks to do: get to a point where the LLM deeply, fully, completely understands your intent for this piece of work, in a way where you can just converse with it, let it ask you questions, and let it dig for you.

You might think this is only for product managers, since PMs need to get clarity on intent when writing requirements, or that it's only for this or only for that. This is a very intentionally wide-ranging prompt set. It is supposed to be workable for virtually any piece of serious work where you need to define intent first. I wrote it that way on purpose, because I think our use cases for AI are really wide-ranging; any survey or white paper you see on how we use AI shows we do a lot of different kinds of serious work. But the common failure mode remains clarity of intent. That is what this is designed to fix.

So if this was fun, if you enjoyed it, great. Go run some contract-first prompts and tell me how they worked for you. Or if you already have a word for this, or you're already using it, I would love to hear about it. So often when I do these prompt videos, people say, "Oh yeah, I have a different word for this, but I've been trying it at home. I didn't know it was a thing."
That's part of why we talk about this: we learn together what the common terms of art are. So there you go: contract-first prompting.