Taste: The Secret Skill for AI Success
Key Points
- Success with AI in 2025 hinges on cultivating “taste”—the gut‑level sense of what’s right, valuable, and improvable—rather than just technical prompt‑engineering skills.
- Taste is often seen as elitist (fashion, fine dining) but it’s actually a universal, experience‑based judgment that anyone can develop and apply across domains.
- It originates from accumulated experience that forms strong, intuitive opinions, whether in hobbies like fantasy football, fashion, books, or any field you care about.
- Developing taste means deliberately exposing yourself to diverse examples, reflecting on what feels right or wrong, and sharpening that internal compass as a practical skill.
- Leveraging taste helps you navigate AI tools, teach others (including kids), and make strategic career choices by recognizing and acting on what truly resonates and adds value.
Sections
- AI Success Requires Cultivated Taste - The speaker argues that developing a refined sense of taste—traditionally seen as an elite, aesthetic skill—is essential for effectively collaborating with increasingly sophisticated AI systems and teaching this ability to others.
- Evolving Taste Amid AI Automation - The speaker explains how personal tastes and expertise evolve over a career, and how, as AI handles routine tasks, human judgment and adaptability become the valuable “taste” that remains.
- AI Embedding to Capture Work Time - The speaker argues that platforms like Claude, OpenAI, and Gemini are integrating AI directly into work workflows to monopolize users’ attention, while emphasizing that true career advancement still relies on human expertise, personal taste, and embodied experience that AI cannot replicate.
- Embedding Human Taste in AI Interactions - The speaker argues for giving language models precise, preference‑driven feedback—critiquing style, accuracy, and formatting—to let users shape outputs as AI systems become increasingly intelligent.
- Cultivating Taste in AI - The speaker urges users to apply personal judgment—“taste”—when interacting with AI tools, assessing what works, discarding what doesn’t, and treating AI as a flexible toolkit amid rapidly evolving model capabilities.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=A_Lv0Ze272g](https://www.youtube.com/watch?v=A_Lv0Ze272g) · **Duration:** 00:15:45
- [00:00:00](https://www.youtube.com/watch?v=A_Lv0Ze272g&t=0s) AI Success Requires Cultivated Taste
- [00:03:06](https://www.youtube.com/watch?v=A_Lv0Ze272g&t=186s) Evolving Taste Amid AI Automation
- [00:06:36](https://www.youtube.com/watch?v=A_Lv0Ze272g&t=396s) AI Embedding to Capture Work Time
- [00:10:03](https://www.youtube.com/watch?v=A_Lv0Ze272g&t=603s) Embedding Human Taste in AI Interactions
- [00:14:12](https://www.youtube.com/watch?v=A_Lv0Ze272g&t=852s) Cultivating Taste in AI
I think we don't talk enough about what
it really takes to be successful in 2025
with AI. And I know, right, that's me
saying that and I do prompts and this
and that. But one of the things that
hasn't really been mentioned, that I
haven't talked a ton about, but that I
see a fair bit of in actual AI
interactions, is the necessity and
importance of taste. And so when we
think of taste, like when I think of
taste, I think, oh, someone who has
fashion sense, right? Someone who goes
to the great restaurant and can order
the right sushi or the right French
food and this and that. And so people feel
like it's off-putting. It feels very
elitist. It feels like, I couldn't have
taste, right? Here I am in my little
t-shirt, like how could I have good
taste? I want to make it accessible for
you today because I think it is one of
the skills that we most need not just as
people who work with AI but as humans in
the age of AI where we have to assume
the models are getting better. The
models are getting better and better and
better and better and we have to figure
out how to work with them as they scale
in intelligence. We have to figure out
if we're parents how to teach our kids
or if we're teachers how to teach our
kids about them in ways that make sense.
Your secret weapon in all of that is
taste.
It's taste. How do you develop taste?
Look, I have seen the articles. If you
Google for AI and taste, you see piles
and piles of AI-driven crap, for lack of
a better term, right? Like just AI drivel
about the importance of taste and the
value of it. So what matters is that you
actually understand where taste comes
from inside you. how you develop and
sharpen it as a skill and then how you
apply it. And that's really what I want
to work through. I'm all about
practical. So, first, where does taste
come from inside of you? Taste is your
gut knowing best. Taste is
the sense inside of you that something
is right or something is wrong. Taste is
the sense that something could be
better. If you want to break it up even
further, taste is where you start to
accumulate enough experience in a
particular area that you begin to have
strong opinions. And this can happen
outside of work. Like there are people
who play fantasy football and they have
very strong opinions on draft order and
which players they're going to pick.
They have taste in that area. There are
people who have taste in widely known
areas where taste is considered a skill
set like clothes. I talked about
clothes. That is considered an
acceptable place for taste. And what I
want to suggest to you is that taste is
not just a skill in places where people
honor it. Taste is actually something we
all do every day. I would like to say
that I have taste in books. I do a lot
of book shopping. I do a lot of book
collecting. I do a lot of reading. Taste is
something that matters to me in that
area. Not for everybody, but for me. And
so when you're thinking about career
pathing, when you're thinking about what
you're good at, when you're thinking
about what persists in the age of AI,
think about it from a taste perspective.
Think about what are the areas where you
have experience and you can lean on
them. And those can change, by the way,
over the course of a career. If I go
back decades in my career, the things
that I thought I had taste about or had
opinions about have changed
unrecognizably since then. I have not
had taste about any one thing for the
decades I've been working. But my
tastes have evolved. I have transitioned
into new areas as I
have followed my curiosity. Why does
this matter so much now? Why are we
talking about this? Because taste is not
a new skill. I just talked about this.
I've seen this in people who were more
senior than me long before AI became a
thing. People had taste and used taste
at work. We just didn't talk about it as
such. It's a thing now because taste, we
are discovering, is what is left when AI
can do a lot of the grunt work. So, as
an example from this week, when Claude
can produce a workable discounted cash
flow sheet, when Claude can produce a
PDF that looks okay or a PowerPoint that
looks okay in like one shot with just a
little bit of context, what's left?
What's left in the work stack that we
do? And a lot of people look at this and
they start to write those like panicky
internet articles. They throw up their
hands like, "Oh my god, this is the end
of all things." No, no. Actually, this
is a chance for us to transition. And
we're very good at this. Humans are
flexible tool users. I think that's one
of the best definitions of humans I've
ever heard. We're embodied flexible tool
users. And we can flexibly use a
different part of our skill set. And
whereas before, when you were producing
workable value in the '90s and the 2000s,
a lot of the value was in time spent with
other people physically, and in work
product that you produced by typing. It
was physical creation of information, and
we were called knowledge workers. It was
very fancy; it was the future of the world
for about 10 years. Well, that's changed.
That's not where the leverage is. That's
not where the juice is anymore at work,
because I will tell you, ChatGPT-5 is
faster at typing than anybody I know.
And it's also got the option to generate
20 ideas in the time it would take me to
generate one. And because of the
internet and high bandwidth connections,
we can now do digital collaborations. We
don't have to physically be in the same
space, which by the way further enables
AI collaboration because AI is
disembodied. So if it's arriving over
the internet, it sort of feels like a
colleague, right? It pops up in Slack.
You can have AI in Slack. This is all
shifting our leverage as people. Our
skill sets, the value of them is
changing. And that's what drives a lot
of the stress is because we have assumed
if you're building your career, you
assume, like, I read this book on product
management. I'm going to get it out, I'm
going to open it, and as soon as I get it
out I'm going to know the things I need
to know. Or, I did the projects: I was
the technical project manager, and I drove
these big projects for multi-million
dollar companies, and I have the
experience of managing 200 stakeholders
in a room, this and that, right? Those are
the things that we think give us the
value and what I'm starting to realize
is that taste is a word that has teeth
that allows us to think about a wide
range of related skill sets that AI is
not seeming to take over like if you
look at the strategy for model makers
right now they are going after time
spent at work. They're going after the
work stack. So Claude wants you to be
thinking in Claude and then producing
artifacts in Excel and Word and so on.
Claude wants you to be thinking in
Claude at work and do team projects.
That was another feature they
emphasized. OpenAI wants you to be
thinking in code in Codex. Similarly
with Claude Code, etc. Gemini wants you
thinking in images in Nano Banana. And
so I think of it as they are trying to
capture more of your time the way
Facebook tried to capture more of your
time in the 2000s. TikTok tried to
capture more of your time in the 2020s.
It's like what can we do to keep you
engaged and keep you focused and in
their case it's like intelligence in the
work stack feels like a way to do that.
Fine. What that means is your value lies
in your ability to leverage those work
primitives, the AI things that can
produce for you, in ways that reflect
your taste, in ways that reflect your
expertise. And this can sometimes feel
like it's a nod to gray hairs, right?
It's a nod to the ability to have deep
expertise. I have seen people
who were just getting started in their
career who have taste. They have figured
out a particular corner that they are
passionate about. They figured out
something they have experience in and
they figured out how to have taste in
that area and they insist on it. That is
a recipe for rapid career growth whether
you're starting out or whether you're
experienced because
AI is not good at it. AI is really,
really not good at it. And part of why
it's not good at it is that AI isn't
embodied. AI doesn't develop the kind of
deep, nuanced, metabolized expertise
that we get from living in society for
decades before we become adults and go
to work. There's no substitute for that.
And so that is part of how we shape
taste as creatures and that is part of
what we bring
that AI has trouble mimicking. And so if
you've ever looked at something and you
look at the work product that AI gives
you and you're like it feels hollow. It
feels artificial. I can't put my finger
on it, but it just doesn't feel right.
That's your taste speaking. That's your
taste. Listen to it. And a lot of people
then take that and they say, "Well, I'm
going to just like throw out AI fluff
and I'm going to not do this and I'm
just going to have a pure human created
world." It's like, I don't think that's
the answer. I think the answer is having
the taste to demand useful work. And
this is important because increasingly
our jobs are going to depend on our
ability to demand useful, not perfect,
but useful work. And so think about it.
If your work is really defining what
matters and what's high taste and what's
good quality, and that's your job, can
you do it? Do you get excited about
doing that? Are you okay pushing back
and not being overly deferential with
the model? One of the things I've seen:
I'm privileged to see a lot of AI
conversations because, well, people share
them with me and I get on calls, etc. A
lot of the time people are overly
deferential with the model, by which I
mean they give the model too much
rope. They give the model a lot of
space. Now that shows up in prompting as
you're not giving the model the guidance
it needs. It shows up in building
systems where you just talk to Claude or
talk to ChatGPT or talk to Gemini and
you say design the system for me and
then you just sort of trust it. But it
also shows up in more individual
interactions where you talk to the model
and if the model gives you something,
you just assume it's probably right. And
if you're a skeptic and you see one
wrong thing, then you just throw it all
out. And I see those two sides flip
really easily. They feel related. It
feels like the same emotional response.
It's either I'm going to trust the
giving being to give me a good answer or
I'm going to throw it out. Well, what if
we just brought our taste into model
interactions? What if when we were
chatting with ChatGPT-5 or Claude or
your model of choice, we said, "I think
you can do better. I don't like this
particular piece. I like that particular
piece." Like, we can be very specific,
right? Like, I like the numbering you
used here. Your phrasing is
overdramatic. Or, I really, really don't
appreciate it when you make up numbers.
Two of these numbers are made up; the
other 16 are fine. Or, "Please tell me
when you need me to get you more
information."
Or, "Please go and do research and come
back and get the answers." We have a
palette of responses to work with, and
the choices that we make to have those
conversations, to prompt, are in many
ways driven by taste, by our willingness
to have a gut opinion, to have a sense
of like, this could be better. And I
emphasize that because I don't think we
talk enough about how it's formed. So,
I've talked about domain experience and
being sort of in your work experience
over time,
understanding all the nuances or if
you're new at work, understanding what
you're passionate about and leaning on
that for taste. And I've talked a little
bit about how it applies. But if we look
ahead, the third thing I want to call
out is that taste is partly hard right
now because models are gaining
intelligence
so quickly. And so models are getting
smarter and smarter and smarter and
smarter on a time frame of months. We're
not used to that. This is a new
experience for the species. And if
you're trying to apply taste, it feels
jarring, right? It feels like, okay, but
this new model, I have to get used to
this model, and then this other new model
comes along, and you get very tired.
The simplest way to think about it
is that we are moving into a world where
you need to depend even more on taste,
not less. And so when we were first
working with ChatGPT two years ago, I
think taste mattered less because
ChatGPT could do less of the workday.
Claude wasn't even there. You just had
fewer options to do the whole workday in
an AI. Now models have come along very
quickly. More of the workday is there,
more of our personal lives is there,
health decisions are there, all of that
stuff, and you need taste more. In other
words, you could trust your own
instincts cuz you were producing more of
the work and it was all inside your head
and like you had the meeting with
colleagues, you metabolized it the way
you usually do. You had a gut instinct.
It came from the compost pile inside
your brain, and you were like, "This is
the right way to go," and then you wrote
the thing. All of that happened in your
head. Not much of it happened in
ChatGPT. And gradually we've shifted
that. So now, more and more, the meeting
notes are in ChatGPT. You send them out,
but maybe you check them. And the project
plan might have come partly from ChatGPT,
partly from you. And now increasingly
the strategic analysis and like the way
forward doing some thinking in an AI and
some thinking not. This is why taste
matters more because the thing that that
still runs the central processing unit
for all of this is still your taste. It
is still your ability to say, no thank
you, right? Like, I don't think that's
correct. And not to say it in a binary
way where you get frustrated and throw
the model out, but to actually recognize
that in some ways the model is better. I
am fully convinced that GPT-5 Pro has
thinking abilities in specific domains
that vastly outpace me, and that is true
for most people I know. If you're von
Neumann, raise your hand; obviously
that's not true for you. Physics people
will have a good laugh over that one. But
for the most part, GPT-5 Pro is really,
really smart, and you have to take almost
a... I talk about it as talking to the
oracle. You sort of put your little prompt
together, you wait several minutes, it
comes back with a response and you have
to like interpret it, understand what it
means. You still have to have taste. I
have looked at GPT-5 Pro responses and I
have said, I see where you got that. I
think it's correct given the inputs I
gave you, but what I learned from this
interaction is that you don't know the
context I know in my head and that's why
this feels off. And so instead of saying
I should trust GPT-5 Pro because it's
smarter than me, I say, good strategic
analysis, I'm learning that I need to
give you different kinds of inputs.
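That move, replying with specific, preference-driven feedback instead of blanket trust or blanket rejection, can be made concrete. Below is a minimal sketch of my own (not tooling from the video; the helper name and the sample feedback are hypothetical) that turns gut reactions into the kind of precise follow-up message the speaker describes:

```python
# A hypothetical helper (my illustration, not the speaker's method) that
# composes a taste-driven critique for a chat model: what to keep, what to
# change, and standing rules, rather than a vague "try again".

def taste_feedback(keep: list[str], change: list[str], rules: list[str]) -> str:
    """Format specific feedback on a model's draft as a follow-up message."""
    lines = ["Here is specific feedback on your last draft:"]
    lines += [f"- Keep: {item}" for item in keep]
    lines += [f"- Change: {item}" for item in change]
    lines += [f"- Going forward: {item}" for item in rules]
    lines.append("Revise with this feedback, and ask me if you need more context.")
    return "\n".join(lines)

# Sample contents echo the kinds of critiques mentioned in the talk.
message = taste_feedback(
    keep=["the numbering you used in section 2"],
    change=["the phrasing is overdramatic; flatten the tone",
            "two of the revenue figures are made up; flag unknowns instead"],
    rules=["tell me when you need more information rather than inventing figures"],
)
print(message)
```

The helper itself is trivial; the point is the habit it encodes: naming what worked, what didn't, and what standing rules the model should follow next time.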
That's a much more nuanced response. It
doesn't denigrate GPT-5 Pro as unhelpful.
It's just understanding what the model
can do and what it can't do, and having
some taste about it. And so as models
get smarter, GPT-6 may come along before
the end of the year. You never know.
There's been some hints. You need to
lean in on taste. Lean in on the gut
feeling. Lean in on your ability to
say this bit is good, this bit is not
good. Look at all the ways that improves
your experience with AI. Look at how it
helps your prompting if you insist on
taste. Look at how it helps your
multi-turn conversations if you insist
on taste. Look at how it helps you when
you're reviewing other people's work
that may have been assisted by AI. If
you insist on taste, this doesn't give
you a license to be annoying and say no
AI. I keep calling that out. AI is like
a toolkit. You can go in and raid it for
tools and come back and keep what you
like, throw out what you don't, based
on your tastes. I hope this has been
helpful. I think this is one of the
skills that I see clickbait on, but I
don't see us having a meaningful
conversation on where it comes from, how
it works for early career folks, how it
works for more mid and senior career
folks, and above all, how we handle this
in an age when intelligence is
accelerating. And so, it feels like our
relationship is evolving as we talk
about it. I hope this has been a good
sort of starter. Drop something in the
comments. Let me know what you think.
We're all learning to do taste together.
We're flexible tool users. We're
learning this skill that suddenly has, I
would argue, 100x more value than it had
in 2000. So, good luck out there. You
also are a tastemaker.