
Reframing Jobs as Trainable Skills

Key Points

  • The future of work requires shifting from static job titles to a dynamic, skills‑first model, where competencies are cultivated and measured rather than assumed from a role.
  • Knowledge workers currently lack systematic training—unlike athletes or musicians—so we must create practice routines that break down complex tasks into repeatable, feedback‑driven micro‑skills.
  • Existing hiring and compensation tools embed the assumption that specific skills belong to specific jobs, but AI enables us to decouple skills from roles and evaluate people based on outcomes they can achieve with those abilities.
  • By leveraging AI to deliver targeted, real‑time feedback on narrow, repeatable scenarios, individuals can continuously improve their recognition and response patterns, turning career development into an efficient, practice‑based process.


**Source:** [https://www.youtube.com/watch?v=Td_q0sHm6HU](https://www.youtube.com/watch?v=Td_q0sHm6HU)
**Duration:** 00:20:42

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=0s) **Shifting From Jobs to Skills** - The speaker argues that we must replace traditional job‑centric hiring and promotion systems with a skill‑focused model, using AI‑driven training to let knowledge workers develop abilities independent of specific roles.
- [00:04:31](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=271s) **Core Skills for AI‑Driven Work** - The speaker outlines five repeatable, practice‑oriented capabilities (judgment, orchestration, coordination, taste, and updating) as essential for professionals navigating high‑stakes, AI‑augmented environments.
- [00:07:37](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=457s) **Crafting Rubrics with Trusted Feedback** - The speaker stresses that before leveraging AI, teams should consult trusted colleagues to define clear, concrete criteria for key artifacts and convert that input into consistent rubrics for evaluation.
- [00:11:07](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=667s) **AI-Driven Decision-Making Practice** - The speaker explains how to use AI‑generated rubrics and prompts to turn film‑review‑style feedback into regular, repeatable drills, such as writing one‑page decision documents, to systematically improve judgment and specification skills.
- [00:14:18](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=858s) **Iterative AI-Driven Team Skill Building** - A manager outlines a habit loop: AI critiques are reviewed before human feedback, teams hold brief weekly practice sessions on flagged growth areas, measurable rubric improvements are tracked, and the same skill set is applied to hiring, emphasizing continuous, measurable skill development over perfection.
- [00:17:30](https://www.youtube.com/watch?v=Td_q0sHm6HU&t=1050s) **Evaluating Human Skills Amid Shadow AI** - The speaker notes that most AI use goes unreported and argues that interview and development practices should center on live, constraint‑based conversations that surface reasoning, risk assessment, and trade‑off articulation, so candidates demonstrate genuine thought processes rather than AI shortcuts.

## Full Transcript
We need to move from a jobs format to a skills format for our roles and our career growth, and no one's ready to talk about it. That's what this video is all about: how do you think about your job differently, in terms of skills you can train and improve, preferably with the help of AI?

One of my inspirations for this post was a 2019 blog post by Tyler Cowen, where he talked about the idea that athletes train, musicians train, performers train, but knowledge workers really don't train. I don't shoot free throws; there's no knowledge-work equivalent. So I started to ask myself: what does it take to do something like a pianist practicing scales, but for knowledge work? And is there a way to address this in the AI age that helps us think about skills differently than we traditionally have?

Because I've got to be honest with you: traditionally, our assumptions about skills have been so loaded into jobs that it's literally baked into our software. If you've ever been a hiring manager and used a software tool for hiring, for compensation estimates, or for promotions, do you know what it starts with? It starts with the assumption that you need to layer specific skills into a job post. It's as if we can't imagine a world where skills might exist independently of a role. And yet that is exactly the world we're headed toward: a world where skills are something we acquire because we can use them with AI to get meaningful work done. We should be measured on our outcomes, on our ability to drive with those skills, not compensated just because we have job title A or job title B, product manager or engineer.
So in that skills world, what does practicing really look like? We talk about this in physical terms, and I think that metaphor is helpful, but I want to bring it into the knowledge-work space, because we just haven't talked about that enough. In the physical world, you think of skills as fractal; they're tiered. If you're practicing your fluency on the piano, moving your fingers and playing scales up and down, part of that is the subskill of finger movement in a pattern, part of it is the subskill of how much pressure you place on the keys, and part of it is the subskill of the speed of movement. Each of those can be practiced and repeated; you can get feedback and you can progress.

For knowledge workers, we need to find a way to get to narrow situations with repeated, specific feedback, designed to strengthen a particular pattern of recognition and response in our brain, so that we get better at our skills. Otherwise we spend our whole careers in live performance, and that's an extremely inefficient way to learn.

So what does that look like? The good news is that I think we have never had a better chance to do this than now, in the age of AI, because AI gives us the chance to have custom feedback on practice at a scale we never could have managed otherwise. It's just that most of us aren't doing it. It's tempting to say at this point that knowledge workers are lazy, but it's structural. I don't believe we're lazy; I believe our environment fights against this approach to practicing our skills in three different ways.

Number one, we live in a world with fuzzy outcomes. In basketball, the ball goes in or it doesn't. You shoot the free throw and you miss or you make it.
It's a clear signal. In product or strategy or leadership or engineering, "good" mixes so many different dimensions it can be confusing: speed, quality, politics, relationships, risk. There's no single bit that flips from zero to one.

The second reason this is difficult is that we get really delayed, noisy feedback. You might make a big decision in Q1 and learn in Q3, at best, whether it really paid off. Meanwhile, the market may have shifted, a competitor launched something, a key hire left. You almost never get the clean comparison: if I had written the spec differently, we would have avoided X or Y.

The third issue is low repetition. A serious musician will play scales hundreds of times a week, but how many truly consequential decision docs do you have? How many product specs, strategy docs, or technical architecture memos do you write in a quarter? And each one of these is entangled with real money and real people; there are no low-stakes sandboxes in traditional career pathing. So the default is that most of us spend 95 percent or more of our quote-unquote reps in live games. We're practicing in front of the crowd, practicing literally for our careers. I guess that's better than nothing, but it's not the same.

The next question I wanted to ask, because I wasn't satisfied with just a general challenge, was: what are some repeatable, practicable skills we could talk about in the age of AI? I would argue there are five that keep showing up. Number one is judgment: how you frame decisions, how you define your options, how you choose when conditions are uncertain. Number two is orchestration.
How do you turn fuzzy goals into concrete workflows that humans and AI can execute together? Can you bring clarity out of the ambiguity? Number three is coordination. How do you move groups of humans through ambiguity without creating more chaos? You are still going to need the skills to coordinate, and as agents get better, you may need to learn the skill of coordinating agents and humans together. Number four is taste. Do you have a meaningful quality bar for your product, for writing, for design, for strategy? Do you have a sense of what is good, and can you talk about it and improve it like a skill? And number five is updating. How do you change your mind as evidence and context shift, without getting whipped around by the noise? What is your heuristic? What is your rubric? How do you think about updating your priors and changing your mind in meaningful ways?

Now, none of these really live in a LinkedIn tagline. They live in what you write and leave behind; we could call those artifacts. Judgment can show up in your decision documents, in experiment designs, in prioritization writeups. Orchestration can show up in handoff documents, in specs, in the way you plan a project. Coordination can show up in emails, in meeting notes, in stakeholder maps. Taste will show up in how your UX looks, in which examples and which metaphors you pick. And your ability to update will show up in how you evolve your plans over time, in the written record of your rationale.

So the key is that these skills are not adjectives. We name them as adjectives.
We associate them with roles as adjectives, but when you come right down to it, they're not. They're patterns in the things that you produce and that I produce. And once you accept that, you stop arguing about who's "strategic" in the abstract, and you start looking at how people actually write, how they behave, and how they decide. This has always been the gold standard in behavioral interviewing, but we've really struggled to get to this level of clarity, especially post-AI.

So what does AI actually change? AI is not a magic brain; I say that all the time. AI is a tool that can read text, follow instructions, and apply a rubric consistently. This is beautiful, because it gives us a wall to practice against.

If you're serious about practicing, your first step has nothing to do with models. You just want to pick one artifact that matters for your team, like a decision doc for a product manager, and sit down with the people whose judgment you trust. Ask them a really simple question: when you say that a decision doc is good, can you tell me what you mean, specifically? Push gently, ask clearly, ask persistently, and press the people you trust until you have a small, concrete list. Maybe it's: Is the decision stated in a sentence? Are there at least two real options? Are the stakes and metrics explicit? Is there a clear recommendation? Are risks and trade-offs surfaced? I could go on, but that's an example for one artifact. You need to look at this for all of your artifacts, the ones that are relevant to your discipline.
Whether that's architecture docs for engineering, call summaries for CSMs, or pipeline expectations for sales, there are all kinds of ways to do this. The key is asking someone in your life what good looks like. Then you turn that into a grade: a rubric. You make it clear what good looks like, and you set it out on a one-to-five scale. Then you pull three to five real examples and you mark them up. Get a red pen out and say, "This one is really good at clarity. This one is good at risks, but it has these weaknesses. Here's the rationale."

Notice how none of this involves the AI yet? I promise we'll get there. But I want you to recognize that human skills are human skills, and I'm asking you to take some human responsibility for developing yours. Only then, after you've red-penned a few things, do you bring it to an LLM. You give it the rubric and, literally, your annotated examples. We're at a point where you could actually use a red pen, scribble all over the doc, and it would still work, because handwriting recognition is usually good enough now to pick it up. And then you say, in effect: "When I send you a new doc, please score it like this. Quote the parts you're reacting to. Explain briefly why you gave each score. And please suggest edits that would move one of these dimensions up by a point or two."

Suddenly, look what that changes. Look what your effort to define good for your role shifts. Instead of a manager skimming through and thinking, "Ah, that feels fuzzy. I only have 15 minutes. I'm going to turn it over," you get a structured critique that can be applied to every single doc of that type. This one has a two on options.
That one is a four on clarity, but a one on how it structured risk, and this is what I need to change. So it gives you a rough, consistent view of how the skill is showing up across your real work. We've been missing that. That is our signal; that is the basketball going into the basket.

And yes, you can actually log this. Over a quarter, you can ask: what are the patterns I'm starting to see in my own behavior? How are my scores changing? You really can score this out of five; even though I put "scores" in air quotes, you get actual numbers. And with that, we now have something the pre-AI world just couldn't have. When Tyler wrote his post, this wasn't possible. We can do the equivalent of film review, like athletes, on our thinking and our writing, at scale, without having to hire an army of coaches, just with some good prompts (which I'm putting together).

The next move, once you've got that, is to turn the film review into repeatable drills that train the patterns you care about. Take judgment as an example. In artifact form, judgment often looks like: can I write a decision document that lets a reasonable person say yes or no without a two-hour meeting? With your rubric in place, you can create a practice exercise that looks like this. Once a week, take a real, messy situation: a Slack thread, a super-vague request from your manager, a fuzzy idea you had in the shower. Write a one-page decision doc that hits the pattern you've identified as good: clear decision, options presented, stakes, recommendation, and so on. Now run it through the same AI rubric you use on real docs. Compare your version to a stronger version that the model generates. Notice what you miss.
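As a concrete illustration, the rubric-to-prompt step described above can be a few lines of code. Everything here is a hypothetical sketch: the dimension names, the scale wording, and the prompt phrasing are stand-ins for whatever your trusted colleagues actually tell you, and the resulting string would be sent to whichever model you use.

```python
# Sketch: turn a hand-built rubric plus red-penned examples into a
# reusable scoring prompt. Dimension names and wording are illustrative,
# not the speaker's exact prompt.

RUBRIC = {
    "clarity": "Is the decision stated in a single sentence?",
    "options": "Are at least two real options presented?",
    "stakes": "Are the stakes and success metrics explicit?",
    "recommendation": "Is there a clear recommendation?",
    "risks": "Are risks and trade-offs surfaced?",
}

def build_scoring_prompt(doc_text: str, annotated_examples: list[str]) -> str:
    """Assemble the instructions from the talk: score 1-5 per dimension,
    quote the text being reacted to, and suggest edits that would raise
    one dimension by a point or two."""
    criteria = "\n".join(f"- {name}: {question}" for name, question in RUBRIC.items())
    examples = "\n\n".join(annotated_examples)
    return (
        "Score the document below from 1 to 5 on each dimension:\n"
        f"{criteria}\n\n"
        "For each score, quote the part you are reacting to and explain "
        "briefly why you gave that score. Then suggest edits that would "
        "move one dimension up by a point or two.\n\n"
        f"Annotated examples of what good looks like:\n{examples}\n\n"
        f"Document to score:\n{doc_text}"
    )
```

The point of keeping the rubric as data rather than burying it in prose is that the same dimensions can later feed score logging and team review without being retyped.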
That is your practice. You compare your work to what's good, you focus on a subskill, and you practice and practice, every single week. You can do this for orchestration, where you define what a good spec looks like in your environment (explicit goal, inputs, outputs, constraints, and so on) and create drills where people practice turning fuzzy objectives into timebound specs and timebound organizational decisions. For coordination, you can define a pattern for your executive updates. The important thing is to see the chain of behavior you need to adopt to level up: you have a skill; you identify your recurring behavior; you figure out how that maps to a recognizable pattern in the artifacts you leave behind; then you establish a grade, and then you start to practice. That's what it takes to go from "Oh yeah, Tyler wrote a good thought; I'm not really changing my behavior" to "Wow, I have AI. I have a personal coach. I just need to configure it right." And now you start to get better.

What does this look like if you're a team lead? This is mostly conceptual, because I'll be honest with you, very few team leads do this, but let's play out the operations. Suppose you run a team at a midsize company. You decide that for the next quarter you're going to focus on a particular artifact you want to level up. So you and your team define a rubric together; it's not just an individual exercise. Together, you pull example docs that are good. You see how this is often the same set of activities, but now we're doing it at the team level, which is so much more powerful. You can then wire up a team LLM so that whenever someone marks a doc as ready for review, it runs the rubric pass.
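A minimal sketch of that "ready for review" trigger might look like the following. The status value, field names, and the `score_with_llm` stub are all illustrative assumptions, not a real integration:

```python
# Hypothetical sketch of a team rubric pass: when a doc is marked ready
# for review, an AI critique is attached before any human sees it.

def score_with_llm(text: str) -> str:
    # Placeholder for the real call to Claude, ChatGPT, or another model,
    # sending the team's rubric prompt along with the doc text.
    return f"[rubric critique for {len(text)}-character doc]"

def run_rubric_pass(docs: list[dict]) -> list[dict]:
    """Return the docs awaiting review, each with an AI critique attached
    so the human reviewer reads the critique alongside the doc."""
    ready = [d for d in docs if d.get("status") == "ready_for_review"]
    for doc in ready:
        doc["ai_critique"] = score_with_llm(doc["text"])
    return ready
```

In practice this would hang off whatever event your doc tool emits when a status changes; the design point is only that the critique step is automatic and precedes human review.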
Basically, this is like the engineers who have Codex automatically review their PRs; now you're having Claude or ChatGPT automatically review your docs. Same thing: it leaves comments. Then you can ask your teammates to do two things. First, let the AI critique hit the doc before any human review; that's a management decision. Second, once or twice a week, as a team, set a ten-minute timer and practice on something the AI keeps flagging for you individually as a growth area, then report on it and talk about it. Humans do better with goals when we articulate them. This is a case where the team gets stronger and we as individuals progress faster because we're in a team environment.

The goal is not to demand perfection. The goal isn't even to tie this to performance ratings. It's to use small, steady habits to actually build and scale the useful skills we will need in the age of AI. I'm such a fan of these practical solutions, because so often we stop at the generic, we stop at the vague, and we don't need to. By the end of your quarter, you should be able to have a conversation with your team where you ask: Have we improved on our rubric for this artifact? Did the scores get higher? Are docs getting approved with fewer iterations? Are key decisions happening faster, with less "what are we deciding?" confusion? If these are moving in the right direction, what you're learning is that a practice loop changes how your team thinks and writes. That's the core; that's what you're betting on.

And what's interesting is that you can use the same skill set in interviews, in hiring. Most companies are hiring for skills in a way that is comically indirect.
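Those quarter-end questions are easy to answer mechanically if the critiques are logged. A hypothetical sketch, assuming each logged entry is a dict of dimension scores kept in chronological order:

```python
# Hypothetical sketch: compare each rubric dimension's average score in
# the first half of the quarter against the second half. A positive
# delta means scores on that dimension are rising.
from statistics import mean

def dimension_trends(score_log: list[dict]) -> dict[str, float]:
    if len(score_log) < 2:
        return {}
    mid = len(score_log) // 2
    first, second = score_log[:mid], score_log[mid:]
    return {
        dim: round(mean(e[dim] for e in second) - mean(e[dim] for e in first), 2)
        for dim in score_log[0]
    }

quarter = [
    {"clarity": 2, "risks": 1},
    {"clarity": 3, "risks": 2},
    {"clarity": 4, "risks": 2},
    {"clarity": 4, "risks": 3},
]
print(dimension_trends(quarter))  # {'clarity': 1.5, 'risks': 1.0}
```

As the speaker cautions later, these numbers are noisy signals for practice, not precise metrics for ratings.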
We might ask, "Tell me about a time you influenced a stakeholder." We listen to the story, squint, and try to infer whether they can do the work we need done in the next couple of quarters. If you've already done the work to define a pattern for a particular artifact, there's a much more grounded way to evaluate people: give them the same game you play as a team and see how they'd do on the job. So instead of a traditional PM interview, maybe the PM gets a short take-home where they write or repair a decision document based on a realistic prompt. Then there's a live session where you work through that doc and change a constraint ("legal is going to block this" or "the timeline shrinks") and see how they think through it and adjust. And then there's a critique exercise where you show them a deliberately mediocre AI-generated doc and ask them what's wrong with it.

The beauty of this is that you can use the same rubric you developed internally, and even the same AI model as a first-pass scorer, for consistency. The point is not to let AI decide who to hire. It's to have a shared, concrete lens on what good looks like in the work you're actually doing. And the nice side effect is that hiring and development now point at the same thing: the skills you test for in candidates are the skills you help them practice once they're inside the door. It's not "we hired them for their strategic thinking and they're bad at Jira tickets"; these are the skills we tested for, and these are the skills we work on as a team.

And I want to call something out here. None of this presumes that you cannot use AI to get better. You are going to be using AI.
One of the things that came out is that Anthropic has called out that something like two-thirds of AI usage is shadow AI usage: people not reporting it, because people aren't incentivized to report it right now. This doesn't mean you should hide your AI. You can be open about using AI and still get better at these skills, because the goal is the outcome. And if your interviewee is using AI, you're going to find out real quick whether they have a healthy relationship with it: they turn something in, then you give them a constraint live, and they fumble and can't handle it. You're going to see where the edges of those skill sets are.

So the practice loops I'm describing are designed to reinforce the kinds of skills we humans need in the age of AI. They push people to clarify decisions, surface risk, and articulate trade-offs. If someone uses AI for a pass at that, that's great, but you're going to catch them if they haven't done the heavy thinking. What's freeing about this is that you're enabling a real evaluation through live conversation, where people talk through their choices and how they respond, and really, the interview and the development conversations feel very similar. It's not about trying to catch people cheating with AI in either case. All you're trying to do is see whether they have a stable pattern of thought that remains visible even when their ability to do "tab, tab, tab," as we say in Cursor, is gone. If they're having a conversation, you change some dynamics, you talk about quality, and they just stumble because they're not in front of a screen, you're going to know.
Whereas if you set it up and they have the conversation, and yeah, maybe AI helps them get there faster, but they can articulate the trade-offs and start to point those skills in the right direction and practice them? That's fantastic. Now you can measure it.

Now, I don't want to overromanticize this. There are going to be real limits. Rubric scores will be noisy; I would not treat them as precise numerical representations, and I would not treat them as a basis for promotions. I don't want people to feel like there's a surveillance risk where every single document is scored. The goal is to get better. The goal is to become useful. And I don't want program fatigue to eat this, so instead of trying to start really big, I would strongly recommend that you start small. Pick one little thing, a short change in habit, start to practice, and just start to feel into it.

Because really, the goal is to get in the habit of being athletes about our knowledge work. How do we intentionally name a skill, measure it, see what good looks like, and use the power of AI to train and get better? If we go after that, with that sort of focus and goal, whether as an individual or a team manager, we're going to be in good shape, and we're going to be in a position to actually answer Tyler's question. I think part of why Tyler wrote what he did back in 2019 is that we didn't have AI. AI couldn't be there to coach us, and it was too expensive for most people to get coached. Well, not anymore. Now we have AI, and AI can help each of us individually, and help our teams, to actually grow in our skill sets. And that's really exciting to me.