Contract-First Prompting for Clear Intent
Key Points
- Prompt failures usually stem from vague intent, as human language and individual expertise make it hard to convey precise meaning to an LLM.
- “Contract first prompting” is proposed as a technique that establishes a clear, shared technical agreement with the LLM before it begins work.
- Relying solely on the LLM’s clarifying questions is insufficient because it leaves the model to choose unstructured queries, leading to continued ambiguity.
- A contract‑first prompt should explicitly state the mission, goals, and detailed requirements—mirroring how engineering teams write service contracts.
- The method isn’t about magic wording; it’s about framing the prompt as a concise, contract‑like description that ensures the LLM fully understands the intended task.
Sections
- Contract-First Prompting for Clear Intent - The speaker explains that vague intents cause most prompt failures and proposes a “contract first” prompting method—mirroring software service contracts—to establish a precise, shared understanding with LLMs before they begin work.
- Iterative Prompt Clarification - The speaker describes how an AI automatically scans a vague prompt, identifies missing constraints, and asks a series of targeted questions until it reaches high confidence, illustrated with a 500‑word summary request about Balkan history.
- Prompt‑Driven PRD Clarification Process - The speaker explains how an iterative AI prompting technique helped resolve ambiguous scope and intent for a multi‑channel live‑stream comment aggregation tool, ultimately producing a clear product‑requirements document.
Source: [https://www.youtube.com/watch?v=i4Jfl1IW-_U](https://www.youtube.com/watch?v=i4Jfl1IW-_U)
Duration: 00:09:28
Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=0s) Contract-First Prompting for Clear Intent
- [00:03:15](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=195s) Iterative Prompt Clarification
- [00:06:37](https://www.youtube.com/watch?v=i4Jfl1IW-_U&t=397s) Prompt‑Driven PRD Clarification Process
Full Transcript
Almost every prompt that fails fails
because intent wasn't clearly
communicated. Human language is really,
really rough on intent. And it's not
just a function of a particular
language. It's not just a function of
the fact that it's human language. It's
really the fact that we as individual
people bring so much domain expertise.
We bring so much passion. We bring so
much energy, so much experience to a
particular subject we want to work on.
and we try and convey it in these words
to the LLM and say, please work on this
with me. And I will tell you, even as
someone who's relatively experienced
with prompting, it is frustrating for me
sometimes; I also struggle with getting
that intent across in a way the LLM
understands. I want to suggest to you
today a technique I haven't seen
elsewhere that I've had success with. It's
called contract first prompting. And you
might think to yourself: contracts? Like,
are we signing things? No, that's not
what I mean. I mean contracts in the
sense that engineering teams use them
where they write contracts and
agreements with one another about how
their microservices will interact, what
the service level agreement will be,
what the latency will be, all these
technical specifications. In the same
way, we need to get to a point where we
have very tight technical shared
understanding with the LLM of the
meaningful work we want to do together
before it starts to work. And that has
been very difficult to do. And I am not
satisfied with the usual answer here,
which is just ask the LLM to ask some
clarifying questions. People report
success with that. I have also had some
success with that. But I want to
emphasize to you that that is a very
scattershot, unprofessional approach to
actually dealing with this issue. You
are giving the LLM, which is swimming in
a sea of ambiguity, free rein to pick a
question that it thinks may help. You
are not really giving it any parameters
or structure around that question set so
that it knows that it got it right and
it knows that it understood your intent.
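As a sketch of what "parameters or structure around that question set" could mean, here is one hypothetical shape (the class names and fields below are illustrative, not from the talk): each clarifying question must name the gap it closes and report the model's confidence that it now understands the intent, and digging stops only once a threshold is cleared.

```python
from dataclasses import dataclass

# Hypothetical structure for one round of clarification: instead of
# letting the model free-associate, each question must name the gap
# it targets and report overall confidence once answered.
@dataclass
class ClarifyingRound:
    gap: str           # the missing fact or constraint this question targets
    question: str      # the single question asked this round
    confidence: float  # confidence (0-1) that the intent is now understood

def ready_to_work(rounds, threshold=0.95):
    """Digging stops only when the latest round reports confidence
    at or above the threshold (the 95% bar described in the talk)."""
    return bool(rounds) and rounds[-1].confidence >= threshold

rounds = [
    ClarifyingRound("audience", "Who is this deliverable for?", 0.60),
    ClarifyingRound("scope", "Which topics are in and out of scope?", 0.80),
    ClarifyingRound("success criteria", "What makes the result 'right' for you?", 0.96),
]
print(ready_to_work(rounds))  # True: the last round cleared the 95% bar
```

The point is not this particular data structure; it is that the question set has explicit parameters and a stopping condition, so the model can know when it has actually captured your intent.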
And that is why asking clarifying
questions can be helpful but not
sufficient. So what's better? What does
contract first prompting look like?
Well, I'm so glad you asked. We're going
to actually look at a prompt I wrote
that illustrates contract first
prompting, and I am going to talk you
through it. It's quite fun. Okay, here we
are. I want to call out each of the key
elements here. I am not going to claim
to you that there are magic words in
prompting. Everything
has a reason, but obviously you are
going to be able to also build these in
ways that are useful to you. So, it's
not that this is the only way to build a
contract first prompt. I want you to
walk away with the intent, not the magic
words. First of all, it's always good to
give an LLM a mission. Your goal is
to turn my rough idea into a very clear
work order. I am assuming you have work
that matters here. So maybe it's
building educational materials or a
PowerPoint presentation or the script
for a PowerPoint presentation. I don't
expect ChatGPT to do a great job with a
PowerPoint presentation. Or maybe
it's building software. Whatever it is,
it is meaningful work, and it will deliver
the work only after both of us agree
it's right, which is the critical sort
of contract piece to this. So what goes
into this? Number one, we need to make
sure that it understands what are the
gaps to goal, what are the gaps to
intent. And so when you write and sort
of print this thing initially into the
chat and you say go, it's first going to
say, I'm ready. What do you need? And
you're just going to write out a
rambling sentence or two because so
often when we're defining work, we don't
know any more than a sentence or two.
And that's what sort of stops people
from doing a better job prompting
initially. So you just write what you
have, right? It can be really messy.
It's then going to go into sort of
number zero here and silently scan and
list every fact or constraint that it
still needs. And then it's going to
start digging. It's going to ask one
question at a time until it gets to 95%
confidence that it can ship the correct
result. And this gives it some examples
of places to dig for purpose, audience,
facts, success criteria, length, tech
stack if code, edge cases, risk
tolerance, etc. But I will tell you from
experience running this prompt, that is
not an exhaustive list. It will go other
places. So an example here that's really
useful, I wanted to really stretch it
with a highly ambiguous human prompt.
And so I asked it for a 500-word summary
of the history of the Balkans since 1660.
Why? Because that's pretty ambiguous.
There's a lot that goes on since 1660 in
the Balkans. And you know what it figured
out? It figured out that one of the key
leverage points to writing a good
500-word summary was how it was going to
handle the evolution of political
entities and their naming conventions
across all of that time period. It
needed to figure out what kind of scope
I wanted so it could cover the arc of
history in a way that made sense for my
work assignment. And so even though it
wasn't named as a constraint, it had
three or four rounds of questions for me
asking me to pull apart my intentions
around political entity discussion and
description for this 500-word summary.
And by the way, why did I
put 500 words? Because I wanted to
challenge it. Shorter is harder than
longer here. And it eventually got to
something that was a really solid
summary of Balkan history since 1660.
And all of that
clarification was really helpful. The
echo check is when it thinks it's close.
So, it replies with a crisp sentence. It
states the deliverable. It states
something that it knows it needs to
include and it states a hard constraint
that is designed to make it a very
easily readable summary of work that you
can engage with. And then this prompt
has what is effectively a mini program
inside. You can say yes and lock it. You
can edit it. You can ask for a blueprint
or outline of what's going on. Or you
can call out the risks and ask the LLM
to define what's risky about the prompt
as it stands. And it gives it
directions for what to do here. How to
handle yes-to-lock is very intuitive,
edits are intuitive, but it defines
blueprint and risks so the LLM
understands them. When it's building and
self-testing, it gives it special
instructions for how to handle code. So,
it's responsible and reminds it to
review code, which is something that I
could have extended into documents,
etc., but code is often error-prone, and
so I thought that was worth it. And
it gives you the option to reset. This
is really short. You might wonder, how
on earth does this work? Well, it turns
out you don't necessarily need a long
prompt to get to contract first intent.
You just need clarity around the
sequence of steps. All we're doing is
we're saying, one, list the gaps to
goal, which I almost never see in
prompts. Two, dig for those gaps until
you get to 95% confidence. And then from
there,
offer a path forward that I can choose
and control because we're trying to
write a contract together. Is this the
only way to write contract first
intents? Absolutely not. It's not the
only way. Is it a really useful way to
talk about getting to clarity of intent?
Yes. And I didn't just do this with
history. I did it with software. I
actually have been working on a software
project because I'm interested in
centralizing comments in live streams
across multiple channels. So I've been
playing around with a software idea for
that. And I it's it's again it's
ambiguous. How many channels? What do I
include? What counts? What's what's an
MVP? The number of users. All of these
things that I could like try and put
into a heavy PRD prompt initially, but
I'm not really there yet. I really want
to just talk about it and I want to talk
about it in a structured way. This was
really useful for that because I could
actually say: I really want you to
produce a PRD for this, but I don't have
the intent yet, so dig with me until
we get to an agreed contract of work to
produce a PRD with clean, clear intent.
And it did. Now, if you look at that and
it feels really obvious to
you, congratulations: that means that it
should exist in the world and you should
try it. But I will tell you I have done
a fair bit of digging. It is not as
obvious as you would think. This is not
a technique that I can find other
places. And I'm a little bit surprised
because I think it's a very
token-efficient way of getting to clarity of
intent when we assume that humans are
humans. And I'm increasingly interested
in prompting techniques that assume that
humans are humans. We are not perfect.
We do not always write the full prompt
out. We do not always have the full
crisp complete intent. In fact, mostly
we don't have any of those things. What
we have is a vague human idea backed by
a tremendous amount of context and
experience and we need help fishing that
out of our heads and getting to clarity.
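Pulling together the elements described earlier (the mission, the silent gap scan, one question at a time to 95% confidence, the echo check, and the yes/edit/blueprint/risks menu), a minimal contract-first prompt could be sketched roughly like this. The wording is illustrative, not the speaker's verbatim prompt:

```
Your goal is to turn my rough idea into a very clear work order.
You will deliver the work only after both of us agree it's right.

0. When I describe my idea, silently scan it and list every fact or
   constraint you still need (e.g., purpose, audience, key facts,
   success criteria, length, tech stack if code, edge cases, risk
   tolerance).
1. Dig: ask me ONE question at a time until you are 95% confident
   you can ship the correct result.
2. Echo check: when you think you're close, reply with one crisp
   sentence stating the deliverable, something you know you must
   include, and one hard constraint.
3. Then offer me a choice: YES (lock the contract), EDIT, BLUEPRINT
   (outline the plan), or RISKS (what's risky about the prompt as it
   stands). I can say RESET at any time to start over.
4. If the work is code, review and self-test it before delivering.
```

The exact wording matters less than the sequence: list the gaps to goal, dig until confident, then offer a controllable path forward.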
That is what a contract first approach
to prompting seeks to do. How can we get
to a point where the LLM deeply, fully,
completely understands your intent with
this piece of work in a way that you can
just converse with it, let
it ask you questions, and let it dig it out
for you? You might think this is only
for product managers like PMs need to
get clarity on intent when writing
requirements or it's only for this or
only for that. This is a very
intentionally wide-ranging prompt set.
It is supposed to be something that is
workable for virtually any piece of
serious work where you need to define
intent first. And I wrote it that way on
purpose because I think our use cases
for AI are really wide-ranging. Any
survey you see, any white paper you see
on how we use AI, we do a lot of
different kinds of serious work. But the
common failure mode remains clarity of
intent. That is what this is designed to
fix. So if this was fun, if you enjoyed
it, great. Go run some contract first
prompts. Tell me how they worked for
you. Or if you already have a word for
this or if you're already using this, I
would love to hear about it. So often
when I do these prompt videos, people
say, "Oh yeah, I have a different word
for this, but I've been trying it at
home. I didn't know it was a thing."
Part of why we talk about this is
that we learn together what the
common terms of art are. So there you
go. Contract first prompting.