Reframing Jobs as Trainable Skills
Key Points
- The future of work requires shifting from static job titles to a dynamic, skills‑first model, where competencies are cultivated and measured rather than assumed from a role.
- Knowledge workers currently lack systematic training—unlike athletes or musicians—so we must create practice routines that break down complex tasks into repeatable, feedback‑driven micro‑skills.
- Existing hiring and compensation tools embed the assumption that specific skills belong to specific jobs, but AI enables us to decouple skills from roles and evaluate people based on outcomes they can achieve with those abilities.
- By leveraging AI to deliver targeted, real‑time feedback on narrow, repeatable scenarios, individuals can continuously improve their recognition and response patterns, turning career development into an efficient, practice‑based process.
Sections
- Shifting From Jobs to Skills - The speaker argues that we must replace traditional job‑centric hiring and promotion systems with a skill‑focused model, using AI‑driven training to let knowledge workers develop abilities independent of specific roles.
- Core Skills for AI‑Driven Work - The speaker outlines five repeatable, practice‑oriented capabilities—judgment, orchestration, coordination, taste, and updating—as essential for professionals navigating high‑stakes, AI‑augmented environments.
- Crafting Rubrics with Trusted Feedback - The speaker stresses that before leveraging AI, teams should consult trusted colleagues to define clear, concrete criteria for key artifacts and convert that input into consistent rubrics for evaluation.
- AI-Driven Decision-Making Practice - The speaker explains how to use AI‑generated rubrics and prompts to transform film‑review style feedback into regular, repeatable drills—such as writing one‑page decision documents—to systematically improve judgment and specification skills.
- Iterative AI-Driven Team Skill Building - A manager outlines a habit loop where AI critiques are reviewed before human feedback, teams hold brief weekly practice sessions on flagged growth areas, track measurable rubric improvements, and apply the same skill set to hiring, emphasizing continuous, measurable skill development over perfection.
- Evaluating Human Skills Amid Shadow AI - The speaker highlights that most AI use goes unreported, and argues that interview and development practices should focus on live, constraint‑based conversations to surface reasoning, risk assessment, and trade‑off articulation, ensuring candidates demonstrate genuine thought processes rather than merely relying on AI shortcuts.
Full Transcript
# Reframing Jobs as Trainable Skills

**Source:** [https://www.youtube.com/watch?v=Td_q0sHm6HU](https://www.youtube.com/watch?v=Td_q0sHm6HU)
**Duration:** 00:20:42

## Section Timestamps

- [00:00:00](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=0s) **Shifting From Jobs to Skills**
- [00:04:31](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=271s) **Core Skills for AI‑Driven Work**
- [00:07:37](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=457s) **Crafting Rubrics with Trusted Feedback**
- [00:11:07](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=667s) **AI-Driven Decision-Making Practice**
- [00:14:18](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=858s) **Iterative AI-Driven Team Skill Building**
- [00:17:30](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=1050s) **Evaluating Human Skills Amid Shadow AI**

## Full Transcript
We need to move from a jobs format to a
skills format for our roles and our
career growth. And no one's ready to
talk about it. That's what this video is
all about. How do you think about your
job differently and think about it in
terms of skills you can train and
improve, preferably with the help of AI?
One of my inspirations for this post was
a 2019 blog post by Tyler Cowen where he
talked about this idea that athletes
train and musicians train, performers
train, but knowledge workers really
don't train. We don't train. I
don't shoot free throws. There's no
knowledge work equivalent. And so I
started to ask myself, what does it take
to do something like a pianist
practicing scales, but for knowledge
work? And is there a way to start to
address this in the AI age that helps us
think about skills differently than we
traditionally have? Because I got to be
honest with you, traditionally our
assumptions about skills have been so
loaded into jobs that it's literally
baked into our software. Right? If
you've ever been a hiring manager and
you've ever used a software tool for
hiring for compensation estimates, for
promotions, do you know what it starts
with? It starts with the assumption that
you need to layer specific skills into a
job post. It's as if we can't imagine a
world where skills might exist
independently of a role. And yet, that
is exactly the world we're headed
toward. We're headed toward a world
where skills are something that we
acquire because we can use them with AI
to get meaningful work done. And we
should be measured on our outcomes. We
should be measured on our ability to
drive with those skills, not necessarily
compensated just because we have job
title A or job title B, product manager
or engineer. So in that skills world,
what does practicing really look like? I
know that we talk about this physically
and I think that metaphor is helpful,
but I want to get it into the knowledge
work space because we just haven't
talked about that enough. So in the
physical world, you think of skills as
being fractal. They're
tiered, right? So, if you are trying to
practice your fluency with the piano and
you're moving your fingers and you're
playing the scales up and down, part of
that is the subskill of finger movement
in a pattern, part of it is the subskill
of how much pressure you place on the
keys, and part of it is the subskill of
the speed of movement. And
each of those can be practiced and
repeated and you can get feedback and
you can progress. For knowledge workers,
we need to find a way to get to narrow
situations with repeated specific
feedback that's designed to strengthen a
particular pattern of recognition and
response in our brain so that we get
better at our skills because otherwise
we do our whole careers as live
performance and that's an extremely
inefficient way to learn. So what does
that look like? Well, the good news is I
think we have never had a better chance
to do that than we do now in the age of
AI because AI gives us the chance to
have custom feedback on practice that we
just never would have been able to scale
otherwise. It's just that most of us
aren't doing it. It's tempting to say at this
point that knowledge workers are lazy,
but it's structural. I don't believe
that we are lazy. I believe our
environment fights against this approach
to practicing our skills in three
different ways. Number one, we live in a
world with fuzzy outcomes. In
basketball, the ball goes in or it
doesn't. You shoot the free throw and
you miss or you make it. It's a clear
signal. In product or strategy or
leadership or engineering, good can span
so many different dimensions. It can
be confusing. Like speed, like quality,
like politics, like relationships, like
risk. There's no single bit that flips
from zero to one. The other reason this
is difficult is that we get really
delayed and noisy feedback, right? You
might make a big decision in Q1 and you
might learn in Q3 at best maybe whether
it really paid off. Meanwhile, the
market may have shifted, maybe a
competitor launched something, a key
hire left. You almost never get the
clean comparison. If I had written the
spec differently, we would have avoided
X or Y event. The third issue is low
repetition. A serious musician is going
to play scales hundreds of times a week,
but how many truly consequential
decision docs do you have? How many
product specs? How many strategy docs?
How many technical architecture memos do
you write in a quarter? If each one of
these is entangled with real money and
real people, there are no low-stakes
sandboxes in traditional career pathing.
And so the default is that most of us
spend like 95 percent or more of our
"reps" on live games. We're
practicing in front of the crowd. We're
practicing literally for our careers. I
guess that's better than nothing, but
it's not the same. So the next question
I wanted to ask is this: I wasn't
satisfied with just a general challenge. I
wanted to ask myself, what are some
skills that are repeatable, practicable
that we could talk about in the age of
AI? I would argue that there are five
that keep showing up. I think number one
is judgment. How you frame decisions.
How you define your options. How you
choose when conditions are uncertain.
Number two is orchestration. How do you
turn fuzzy goals into concrete workflows
that humans and AI can execute together?
Can you bring clarity out of the
ambiguity? Number three is coordination.
How do you move groups of humans through
ambiguity without creating more chaos?
Right? You are still going to need the
skills to coordinate. And as agents get
better, you may need to learn the skill
to coordinate agents and humans. Number
four is taste. Do you have a meaningful
quality bar for your product, for
writing, for design, for strategy? Do
you have a sense of what is good? And
can you talk about it and improve it
like a skill? And number five is
updating. How do you change your mind as
evidence and context shift without
getting whipped around by the noise?
What is your heuristic? What is your
rubric? How do you think about updating
your priors? How do you think about
changing your mind in meaningful ways?
Now, none of these really live in a
LinkedIn tagline. They live in what you
write and leave behind. We could call
that artifacts, right? Judgment can show
up in your decision documents. Judgment
can show up in experiment designs. It
can show up in prioritization writeups.
Orchestration can show up in handoff
documents. It can show up in specs. It
can show up in the way you plan a
project and what that looks like.
Coordination can show up in emails. It
can show up in meeting notes. It can
show up in stakeholder maps. Taste will
show up in how your UX looks, right?
Which examples, which metaphors you're
going to pick. And your ability to
update will show up in how you evolve
your plans over time in the written
reference to rationale and what that
looks like. So the key is that these
skills, they're not adjectives. We name
them as adjectives. We associate them
with roles as adjectives, but really
when you come right down to it, they're
not. They're patterns in the things that
you produce that I produce. And once you
accept that, you stop arguing about
who's strategic in the abstract, and you
start looking at how people actually
write, how they behave, and how they
decide. This has always been the gold
standard in behavioral interviewing, but
we've really struggled to get to this
level of clarity, especially post-AI.
So, what does AI actually change? AI is
not a magic brain. I say that all the
time. AI is a tool that can read text.
It's following instructions and it can
apply a rubric consistently. This is
beautiful because it gives us a wall to
practice against. So, your first step
has nothing to do with your models if
you're serious about practicing, right?
You just want to pick one artifact that
matters for your team, like a decision
doc for a product manager, and you want
to sit down with the people whose
judgment you trust. And you just want to
ask them a really simple question. When
you say that a decision doc is good,
can you tell me what you mean,
specifically? Then just push: ask gently,
ask clearly, ask persistently, and push on
the people in your life that you trust
until you have a small concrete list,
right? Maybe it's: is the decision stated
in a sentence? Are there at least two
real options? Are the stakes and metrics
explicit? Is there a clear
recommendation? Are risks and trade-offs
surfaced? I could go on, but it's not
just for that one thing, right? That's
an example for one artifact. You need to
look at it for all of your artifacts,
the ones that are relevant to your
discipline. Whether that's architecture
docs for engineering, whether that is
call summaries for CSMs, whether that is
pipeline expectations for sales,
there's all kinds of ways to do this.
But the key is asking someone in your
life what's good. And then you turn that
into a grade, right? A rubric. You
make it clear like what good looks like.
And you set that out one to five. And
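To make that concrete, a rubric like this can live as plain data rather than prose. Here's a minimal Python sketch; the criterion names are illustrative, lifted from the decision-doc questions above, and the helper function is my own example, not anything prescribed in the talk:

```python
# A rubric as plain data: each criterion gets a 1-5 score.
# Criteria are illustrative, taken from the decision-doc questions
# above; swap in whatever your trusted reviewers actually named.
DECISION_DOC_RUBRIC = {
    "decision_stated": "Is the decision stated in a sentence?",
    "real_options": "Are there at least two real options?",
    "stakes_and_metrics": "Are the stakes and metrics explicit?",
    "clear_recommendation": "Is there a clear recommendation?",
    "risks_and_tradeoffs": "Are risks and trade-offs surfaced?",
}

def validate_scores(scores: dict) -> dict:
    """Check that a score sheet covers every criterion with a 1-5 value."""
    missing = set(DECISION_DOC_RUBRIC) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name}: score {value} is outside 1-5")
    return scores
```

Writing the rubric down as data is what later lets you apply it consistently, log scores, and compare them over time.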
then you pull three to five real
examples. And you mark them up, right?
Like you get a red pen out, right? And
you say, "This one is really good at
clarity. This one is good at risks, but
it has these weaknesses. This is the
rationale." You notice how none of this
is with the AI yet? I promise we'll get
there. But I want you to recognize that
human skills are human skills and I'm
asking you to take some human
responsibility for developing your
skills. Only then after you've red
penned a few things do you bring it to
an LLM. You give it the rubric and you
give it literally your annotated
examples. We are at a point where you
could actually use a red pen, scribble
all over the doc and it would still work
because usually handwriting recognition
is good enough now to pick it up. And
then you say in effect, "When I send you
a new doc, please score it like this.
Quote the parts you're reacting to.
Explain briefly why you gave each a
score. And please suggest edits that
would move one of these dimensions up by
a point or two points or whatever."
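As a sketch of how that instruction might be packaged, here is a small, model-agnostic Python helper that assembles the rubric and the document into one review prompt you could paste into any LLM. The wording and criterion names are illustrative assumptions, not a prescribed format:

```python
# Assemble a rubric-scoring prompt for any LLM.
# Criterion names here are illustrative examples only.
RUBRIC = {
    "clarity": "Is the decision stated in a sentence?",
    "options": "Are there at least two real options?",
    "risks": "Are risks and trade-offs surfaced?",
}

def build_review_prompt(rubric: dict, doc: str) -> str:
    """Combine the rubric and the document into one scoring prompt."""
    criteria = "\n".join(f"- {name}: {q}" for name, q in rubric.items())
    return (
        "Score the document below, 1-5 on each criterion.\n"
        "Quote the parts you are reacting to, explain briefly why you\n"
        "gave each score, and suggest edits that would move one of\n"
        "these dimensions up by a point or two.\n\n"
        f"Criteria:\n{criteria}\n\n"
        f"--- DOCUMENT ---\n{doc}"
    )
```

Because the rubric is interpolated rather than retyped, every doc of the same type gets judged against the same criteria.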
Suddenly, look what that changes. Look
what your effort to define good for your
role shifts. Instead of a manager
skimming through and thinking, "Ah, that
feels fuzzy. I don't have 15 minutes.
I'm going to turn it over." No, you get
a structured critique that can be
applied to every single doc of that
type. This one has a two on options,
right? That one is a four on clarity,
but a one on how I structured risk. This
is what I need to do to change it. And
so they give you something like a rough
consistent view of how the skill is
showing up across your real work. We've
been missing that. That is our signal.
That is the basketball going into the
basket. And yeah, you can actually log
this. You can say over a quarter, what
are the patterns I'm starting to see in
my own behavior? How are my scores
changing? And yes, you can really score
this out of five. So even though I air
quote it, you can get actual scores. And
with that, we now have something the
pre-AI world just couldn't have. When
Tyler wrote this, this wasn't possible.
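Logging those scores over a quarter can be as simple as keeping a list of score sheets. A minimal sketch (criterion names again illustrative, and the half-vs-half comparison is just one way to eyeball a trend):

```python
from statistics import mean

def score_trends(log: list) -> dict:
    """For each criterion, subtract the mean of the first half of the
    logged scores from the mean of the second half; a positive number
    means that dimension is trending up over the quarter."""
    mid = len(log) // 2
    first, second = log[:mid], log[mid:]
    return {
        criterion: round(
            mean(sheet[criterion] for sheet in second)
            - mean(sheet[criterion] for sheet in first),
            2,
        )
        for criterion in log[0]
    }

# Example: four weekly score sheets from the AI rubric pass.
quarter = [
    {"clarity": 2, "risks": 1},
    {"clarity": 3, "risks": 2},
    {"clarity": 4, "risks": 2},
    {"clarity": 4, "risks": 3},
]
# clarity trends up by 1.5 points, risks by 1.0
```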
We can effectively do film review, like
athletes, on our thinking and our writing
at scale without having to hire an army
of coaches. just with some good prompts
which I'm putting together. The next
move once you've got that is to turn the
film review into repeatable drills that
train on the patterns that you care
about. So take judgment as an example,
right? In an artifact form, judgment
often looks like, can I write a decision
document that lets a reasonable person
say yes or no without a 2-hour meeting?
With your rubric in place, you can
create a practice scale, right? A
practice exercise that looks like this.
Once a week, take a real messy
situation, a Slack thread, a super vague
request from your manager, a fuzzy idea
you had in the shower. Write a one-page
decision doc that hits at the pattern
that you've identified as good, clear
decision, options presented, stakes,
recommendations, etc. Now, run it
through the same AI rubric you use on
real docs. Compare your version to a
stronger version that the model
generates. Notice what you miss. That is
your practice. You compare it to what's
good. You get focused on a subskill
and you practice and practice and
practice every single week. You can do
this for orchestration where you define
what a good spec looks like in your
environment, explicit goal, inputs,
outputs, constraints, etc. And you can
create drills where people practice
turning fuzzy objectives into timebound
specs and timebound organizational
decisions for coordination. You can
define a pattern for your executive
updates. The important thing is to see
the chain of behavior you need to adopt
to level up. You have a skill. You
identify your recurring behavior. You
figure out how that maps to a
recognizable pattern in the artifacts
that you leave behind. Then you
establish a grade and then you start to
practice. That's what it takes to go
from being like, "Oh yeah, Tyler wrote a
good thought. I'm not really changing my
behavior." to, "Wow, I have AI. I have a
personal coach. I just need to configure
it right." And now you start to get
better. What does this look like if
you're a team lead? This is mostly just
conceptual because I'll be honest with
you, very few team leads do this, but
let's play out the operations, right?
Suppose you run a team at a midsize
company. You decide that for the next
quarter you're going to focus on a
particular artifact you want to level
up. So, you and your team define a
rubric together. It's not just an
individual. You guys together pull
example docs that are good. You see how
this is often the same set of
activities, but now we're doing it at a
team level. This is so much more
powerful, right? You then can wire up a
team LLM so that whenever someone marks
a doc as ready for review, it will run
the rubric pass. Basically, what this
does is it's like the engineers who have
Codex automatically review their PRs.
Well, now you're having Claude or
ChatGPT automatically review your docs.
Same thing, leaves comments. You can ask
any of your teammates to do two things.
Let the AI critique hit the doc before
a human review, and that's a management
decision. And then once or twice a week,
as a team, set a 10-minute timer and
practice on something that the AI keeps
flagging, for you individually, as a
growth area, and report on it. Talk
about it. Humans do better with goals
when we articulate them. And so this is
a case where the team gets stronger and
we individuals progress faster because
we're in a team environment. The goal is
not to demand perfection. The goal isn't
even to tie this to performance ratings.
It's to ask that we use small steady
habits to actually build and scale
useful skills that we will need in the
age of AI. I am such a fan of these
practical solutions because I think so
often we stop at the generic. We stop at
the vague. We don't need to do that.
Right? By the end of your quarter, you
should be able to have a conversation
with your team where you say, "Have we
improved on our rubric for this
artifact? Did the scores get higher? Are
docs getting approved with fewer
iterations? Are key decisions happening
faster and with less 'what are we
deciding?' confusion?" If these are moving
in the right direction, what you're
learning is that a practice loop changes
how your team thinks and writes. That's
the core, right? Like that's what you're
betting on. And what's interesting is
that you can use the same skill set in
interviews, right? In hiring. So most
companies are hiring for skills in a way
that is comically indirect. So we might
ask, tell me about a time you influenced
a stakeholder. We will listen to the
story. We will kind of squint and try to
infer whether they can do the work that
we need done in the next couple of
quarters. If you've already done the
work to define a pattern for a
particular artifact, there's really a
much more grounded way to evaluate
people. Give them the same game that you
play as a team and see how they'll do on
the job. So, instead of a traditional PM
interview, maybe the PM gets a short
take-home where they write or repair a
decision document based on a really
realistic prompt and then there's a live
session where you work through that doc
and you change a constraint, like
legal is going to block this or the
timeline shrinks and you see how they
think through it and adjust and then
there's a critique exercise where you
show them a deliberately mediocre AI
generated doc and ask them what's wrong
with it. So the beauty of this is that
you can use the same rubric you develop
internally and even the same AI model as
a first-pass scorer for consistency. The
point is not to let AI decide who to
hire. It's to have a shared concrete
lens on what good looks like on the work
you're actually doing. And the nice side
effect is that hiring and development
they now point at the same thing, right?
The skills you test for in candidates
are the skills you help them practice
once they're inside the door. It's not,
you know, we hired them for their
strategic thinking and they're bad at
Jira tickets. These are the skills we
tested for and these are the skills we
work on as a team. And I want to call
something out here. None of this
presumes that you cannot use AI to get
better. You are going to be using AI.
One of the things that came out is that
Anthropic has called out that, I think,
64%, something like two-thirds, of AI
usage is shadow AI usage: people not
reporting it. People aren't incentivized to report
it right now. This doesn't mean you have
to hide your AI. You can be open about using AI
and still get better at these skills
because the goal is the outcome. And so
if your interviewee is using AI, you're
going to find out real quick whether
they have a healthy relationship with AI
when they turn something in
and then you give them a constraint live
and they fumble and they can't handle
it, right? Like you're going to see
where the edges of those skill sets are.
So, the practice loops I'm describing
are designed to reinforce the kinds of
skills we humans need in the age of AI.
They're going to push people to clarify
decisions, to surface risk, to
articulate trade-offs. If someone uses
AI for a pass at that, that's great, but
you're going to catch them if they
haven't done the heavy thinking. What's
freeing about this is that you're
enabling a real evaluation through live
conversation where people talk through
their choices, talk through how they
respond, and really the interview and
the development conversations feel very
similar. And it's not about trying to
catch people cheating with AI in either
case. All you're trying to do is you're
trying to see if they have a stable
pattern of thought that remains visible
even when their ability to do tab, tab,
tab, as we say in Cursor, is gone.
Right? If they're having a conversation
and you change some dynamics and we talk
about quality and they just stumble
because they're not in front of a
screen, you're going to know. Whereas,
if you set it up and they have the
conversation and yeah, maybe AI helps
them get there faster, but they can
articulate the trade-offs and they're
able to start to point those skills in
the right direction and practice them.
That's fantastic. Now, you can measure
it. Now, I don't want to overromanticize
this. There are going to be real
limits. Rubric scores will be noisy. I
would not treat them as precise
numerical representations. I would not
treat them as a basis for promotions. I
don't want people to feel like there's a
surveillance risk where every single
document is scored. The goal is to get
better. The goal is to become useful.
And I don't want program fatigue to eat
this. So instead of like trying to start
really big, I would strongly recommend
that you start small. Pick one little
thing, a short change in habit, start to
practice and just start to feel into it
because really the goal is to get in the
habit of being athletes about our
knowledge work. How do we intentionally
name a skill, measure a skill, see what
good looks like and use the power of AI
to train and get better? If we go after
that, if we have that sort of focus and
goal, whether as an individual or a team
manager, we are going to be in good
shape and we are going to be in a
position where we can actually answer
Tyler's question because I think part of
why Tyler wrote the question he did back
in 2019 is we didn't have AI. AI
couldn't be there to coach us. It was
too expensive for most people to get
coached. Well, not anymore. Now we have
AI. AI can help each of us individually
and help our teams to actually grow in
our skill sets. And that's really
exciting to me.