Managing AI Skills for Real Value
Key Points
- The rapid, unchecked adoption of AI tools—like Claude’s new “Skills” feature—can create a chaotic, unmaintained sprawl of custom solutions that add activity but no real value.
- Organizations often rush to deploy AI (custom GPTs, Zapier, N8N, etc.) to appear innovative, yet without disciplined governance these projects fade as day‑to‑day priorities take over, leaving only vague time‑saving claims.
- Effective AI integration requires leaders to develop new fluency skills that focus on maintaining, tracking, and measuring AI assets rather than simply proliferating them.
- Empirical studies show AI can boost team productivity dramatically (40‑50% up to several hundred percent), confirming that the technology’s potential is real when applied correctly.
- The core challenge for leaders is identifying whether obstacles are talent, culture, process, or tooling—and then implementing the three key principles of AI fluency to turn activity into measurable business value.
Sections
- Managing AI Skill Sprawl - The speaker warns that unchecked proliferation of AI tools and “skills” can create chaotic, low‑value activity and stresses the need for new leadership capabilities to keep AI initiatives organized and effective.
- Infrastructure Boundaries vs Gatekeeping Culture - The speaker explains that regulated‑data constraints act as a secure infrastructure boundary that fosters creative AI skill development through test cases and documentation, while warning that excessive approval gates and review boards create a restrictive, culture‑killing environment.
- Essential AI Prompting Skills - The passage outlines crucial, transferable abilities such as breaking complex problems into AI-sized chunks and deciding when to iterate versus restart prompts to achieve rapid, effective AI-driven solutions.
- Deciding When to Use AI - The speaker explains that discerning whether to begin a task with AI or handle it manually is a learnable meta‑skill involving problem decomposition, context awareness, and examples of successes and failures.
- Start Simple, Add Infrastructure - The speaker urges teams to prioritize delivering value with minimal infrastructure, adding complexity only when necessary, to foster safe experimentation, skill growth, and effective AI problem‑solving.
- Avoiding AI Infrastructure Overkill - The speaker warns against building unnecessary AI infrastructure—often driven by vanity or hype—and urges teams to prioritize delivering real value, iterating based on breakages, and evolving toward genuine fluency and multiplicative business impact.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=rlJmALoNl5g](https://www.youtube.com/watch?v=rlJmALoNl5g)
**Duration:** 00:20:13

Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=rlJmALoNl5g&t=0s) Managing AI Skill Sprawl
- [00:04:36](https://www.youtube.com/watch?v=rlJmALoNl5g&t=276s) Infrastructure Boundaries vs Gatekeeping Culture
- [00:07:43](https://www.youtube.com/watch?v=rlJmALoNl5g&t=463s) Essential AI Prompting Skills
- [00:10:49](https://www.youtube.com/watch?v=rlJmALoNl5g&t=649s) Deciding When to Use AI
- [00:14:57](https://www.youtube.com/watch?v=rlJmALoNl5g&t=897s) Start Simple, Add Infrastructure
- [00:19:12](https://www.youtube.com/watch?v=rlJmALoNl5g&t=1152s) Avoiding AI Infrastructure Overkill
You know what's under-talked about? We don't talk about what happens when everybody is using AI at work and your whole team is not building velocity, not building value, and you can't tell the difference. It's a bunch of activity. And in fact, by some measures, maybe you slowed down. That does happen, and it happens by default. And so I want
to talk a little bit about the new kinds
of skills as leaders that we need to be
building in order to make sure that
organizations take AI aboard and are
able to actually get leverage, able to
actually get value out of it. What's the
trigger for this one? Well, it's the
launch of Skills. Skills was the big
launch this week from Claude. I made a
whole video about it. And the thing with
skills is it has all of the hallmarks of
a brilliant technology that people who
don't understand it can turn into a
complete spaghetti code activity mess in
your organization. And what I mean by
that is fast forward 3 months or four
months, you now have 5,000 skills in
your organization for a team of 300 people, or, you know, pick your number, and no one's maintaining them. No one can
track where they all are. They're in
some sort of enterprise instance. Do you
use the Excel version 2, or the Excel version 3, or the Excel NATE version?
It's going to be a complete mess. And so
when I think about that, and it's not
the first time I've seen this, we have
this same problem with custom GPTs. We
have it with AI integrations like
Zapier. We have it with N8N agents.
There's this excitement that comes when these AI tools burst onto the scene: executives greenlight it, teams want to get a lot done and show they're doing the AI, and so they build a bunch of stuff. But they don't necessarily deliver value, and those things gradually fall by the wayside as the evergreen priorities keep the team busy. So long-term, three, four, five, six months out, the team is busy, they have AI, they'll report on surveys that they're saving time, but you never see that value come through anywhere. I think that there are three
key principles for building AI fluency. The organizations that do add value, that do figure out how to make AI actually work, figure these out; most organizations that don't figure them out end up in that activity bucket. Before we get into them, I do want to underline for you: the case for productivity at the team level is closed. I am 100% confident. We've seen the studies; AI can enhance productivity at the team level hugely. We're talking a range from 40 to 50% all the way up into the multiple hundreds of percent. It is
a massive breakthrough. So far, we
mostly see that super bullish
optimistic case for small startups that
are AI native. And a lot of the question we've been wrestling with as leaders has been: is that a talent issue, or a culture issue, or a process issue, or a tools issue? I want to start to break that logjam for
us. And I'm going to start with the
assumption that if you have rolled out AI and it has been enthusiastically received by at least a corner of the company, then your first question
should not be about talent and it should
not be about tools. You should be asking
yourself about fluency. And that gets a
little bit at that culture piece. So
let's dive in. What are the principles
for building AI fluency that we don't
often talk about? Number one is enabling
constraints rather than processes. And
so before we go further: "constraints" sounds like a bad word. A lot of people think constraints are not good because the word has a negative connotation. And if you're in IT and security, you think constraints are something that enables you to control, right? You can surround the risk and control it with constraints. I don't mean either of
those things. I don't think it's a bad
word and I don't think it enables you to
control and manage all risk. That's not
my point here. My point is to set
structure and boundaries that make good
work and healthy work patterns with AI
feel natural and that make bad unhealthy
work patterns with AI feel hard. We are
setting the incentives. So the goal is
not to control what people build. It's
to set the constraint so people feel
good building in healthy patterns. What
do I mean by that? One good example: every skill you build includes a test case. Another good example: skills have a named maintainer in our business. Another example, outside of skills: AI cannot access regulated data outside our virtual sandbox, where the data is secure; it's not even possible, right, you can't go and get it. In other words,
these constraints are a building block
that enables you to be as creative as
you want within that space. Skills
include a test case. Doesn't mean you
can't create a skill. It just means
think a little bit, produce a test case
that goes with the skill so we can see
how it works. Document it. Include that
markdown in the file. AI can't access
regulated data. That might seem like a
negative or controlling sort of case,
but it's not. It's just an
infrastructure boundary. It's a
business rule encoded as infrastructure.
You can do whatever you want inside.
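As a concrete illustration, constraints like "every skill ships with a test case" and "skills have a named maintainer" can be encoded as plain infrastructure, for example a small lint script run in CI, rather than as an approval gate. This is a hedged sketch: the folder layout, the `SKILL.md` and `TEST_CASE.md` file names, and the `Maintainer:` field are illustrative assumptions, not the actual Claude Skills format.

```python
# Hypothetical lint script that enforces enabling constraints as
# infrastructure: every skill folder must ship a doc, a test case,
# and a named maintainer. File names and fields are assumptions.
from pathlib import Path

REQUIRED_FILES = {"SKILL.md", "TEST_CASE.md"}  # doc + at least one test case

def lint_skill(skill_dir: Path) -> list[str]:
    """Return a list of constraint violations for one skill folder."""
    problems = []
    missing = REQUIRED_FILES - {p.name for p in skill_dir.iterdir()}
    for name in sorted(missing):
        problems.append(f"{skill_dir.name}: missing {name}")
    doc = skill_dir / "SKILL.md"
    if doc.exists() and "Maintainer:" not in doc.read_text():
        problems.append(f"{skill_dir.name}: no named maintainer")
    return problems

def lint_all(root: Path) -> list[str]:
    """Lint every skill folder under the root skills directory."""
    return [p for d in sorted(root.iterdir()) if d.is_dir()
            for p in lint_skill(d)]
```

The point is the shape: a machine check that raises the floor without putting a human review board in the loop.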
What is the opposite? What does a process-killing, negative control case look like? There are three types I want to call out
here. These are all bad examples where
you're using constraints to try and
control people and you're going to kill
culture. Number one, an example would be: AI skills require IT approval. That's gatekeeping. I see it a lot, right? Or: this AI tool requires, you know, IT approval. The more you do that, the
worse it gets. Gatekeeping creates a
gatekeeping culture and it drives out
value. Another one, you need to submit
your skills or your prompts to the
review board and the review board will
approve it. I've heard stories of people
trying to argue about whether prompts
are somehow copyrightable intellectual
property inside the company, and so they have to go through the lawyers. No. Bureaucracy will kill value too. It
kills the culture. It kills the ability
to build. That's not the constraint you
want. A third example of a constraint that you don't want: use only approved patterns for prompts, use only approved patterns for skills, only these templates. No. Again, you don't want to
kill people's creativity. So, the way to
think about it is this. Enabling
constraints raise the floor for the
team. They make it easier for the team
to move at their best. Process lowers
the ceiling. It makes it hard for the
team to excel. So, the next time you
think about a constraint, and I'm not
saying don't do it. I'm actually asking
you to have healthy constraints. Think
about finding constraints that raise the
floor. Principle number two for AI
fluency. Learn problem solving skills
that are AI-fungible. This is a huge
one. It is under discussed and I want to
get into it a little bit because I think
that most people don't understand it and think I mean, like, learn prompting. I
don't. The people getting 10x better,
they're not magically learning tools.
They're not magical prompters by
default. They aren't only learning
prompting. They aren't only learning
skills from Claude. They're learning the
judgment that transfers across AI
systems. And that is not something that
is easy to learn on the open web because
most of the videos are made for clicks
and entertainment. And hard stuff like
how to decompose complex problems, well,
that doesn't get as many views, does it?
So, let me go into the specific skills
that I have seen that start to transfer.
This would probably be a whole other
hour of video, but we're going to get a
start and give you a sense of what those
skills are, and we can get further into
it in later editions of this executive
briefing. What transfers? The skill to
decompose complex problems into AI-sized
pieces is a new skill in our world. It's
a big deal and if you have it, it's one
of those universal skills that transfers
across tools and prompts everywhere you
go. So, think about it. That helps you
with prompting. It helps you with
context engineering. It helps you to
know which tool to use. Understanding
how to decompose a problem into AI-sized
pieces means that you understand how AI
models work. You understand your
problem. You are experienced enough with
articulating problem framing that you
can break the problem into separate
chunks and then you can put the chunks
into the model. It's a very advanced
skill when I talk about it out loud. It
is one of the underlying skills that
these teams that are delivering 300%, 400%, 500% speedups have. Let me give you
another one. When to iterate versus when
to start over. That one's a little bit
easier. I think it's a little bit more
of an easy mode skill. It's learning in
any given LLM interaction. When do you
wipe the context window versus when do
you not? When do you provide course
correction versus when do you just say,
you know what, we're going to start over
with a better prompt. Let me give you
another one. How do you recognize
intuitively when AI is confident and
incorrect? That's an extremely high
value skill. It is also a very fuzzy
skill. You have to know enough about
your domain. You have to know enough
about how AI speaks and the utterances
it uses. You have to know enough about
the relationship between AI language and
AI truth claims that you can read a
particular statement and say, I've seen
several hundred or several thousand AI
statements before. This one feels more
like a false category AI statement
because it doesn't quite ring true for
my domain and because I've noticed that
when AI is hallucinating in this model, in this chat, it feels more confident precisely because it's actually doubtful, right? Something like that, as an example. I'm not entirely making that up. I caught Claude
hallucinating today in a very similar
situation. I stared at it and I said I
think there's a 60% chance this is incorrect. It doesn't smell right against my intuition of the domain, and it also feels like Claude in particular likes to get weirdly specific and weave it into prose, because Claude is a great writer. I was like, this just feels wrong, so I challenged it, and I caught it. That's an advanced skill.
That's a skill that teams that go fast
have. Another one, what kinds of context
actually matter for a problem? That's a
very advanced skill. Does quantitative
data matter here? What cut of
quantitative data? How clean does the
data need to be to be good enough? What
kinds of context will help the LLM make
the next step, but maybe not all the
steps? How do I chunk that context?
These are hard questions when I start to
talk about them, aren't they? I'll give
you a couple more and then I'll give you
some counter examples. So, here's one.
When do I use AI versus when do I do it
myself? When do I start with AI versus
when do I start with myself? When do I
start with myself and just extend into
AI right away? These are all different
questions, but they're all related.
You're trying to figure out the real
working relationship for a given task,
given intent, given domain. And that is
a skill. It's actually a learnable
skill. And that's one of the things I
want to emphasize. I've talked about
decomposing problems, iterating,
recognizing when AI is wrong, the kinds
of contexts that matter, how to use AI
or just do it yourself. Those are all
learnable skills by your team. And in my
experience, the more we are able to name
them, which is exactly what I'm doing
here, the more we are able to start a
conversation about successful cases in
our organizational context and
unsuccessful cases, things that didn't
go well. That really matters because if
teams can't see good examples of these
skills being practiced and bad ones, it
is hard to get it into their heads.
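One of these judgment calls, the iterate-versus-restart decision from earlier, can at least be named in code. This is a toy sketch, purely illustrative; `ask_model` and `looks_right` are hypothetical stand-ins for any LLM call and any acceptance check:

```python
# Toy sketch of the iterate-vs-restart decision. The real skill is the
# judgment about when to wipe context; this only names the decision point.
# Note: loops until an answer is accepted (toy code, no retry budget).

MAX_CORRECTIONS = 2  # after this many failed nudges, start over

def solve(task, ask_model, looks_right):
    history = [task]       # the "context window" we may choose to wipe
    corrections = 0
    while True:
        answer = ask_model(history)
        if looks_right(answer):
            return answer
        if corrections < MAX_CORRECTIONS:
            # iterate: course-correct within the existing context
            corrections += 1
            history.append("Not quite. Fix the issue and try again.")
        else:
            # start over: wipe the context and write a better prompt
            corrections = 0
            history = [f"{task}\n(Be precise; prior attempts failed.)"]
```

The constant and the wipe condition are the judgment; fluent teams tune them by feel rather than hard-coding them.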
These are meta skills, right? They're
advanced skills. You can't just go to
one LLM and practice it and make it
work. What isn't one of these skills?
What's a counterexample, where this is not a skill that's AI-fungible? I'll give you a real example: how to structure a specific prompt for a given language model. "This is how you hack GPT-5 with this magical prompt." No. That's not a transferable skill. "This is the exact workflow to use in N8N." No, not a transferable skill. You
need to think about: can your people get
similar results in different AI tools?
Can your people preferentially choose AI
tools not based on familiarity but based
on these kinds of meta-skills? Can someone tell you, "when I decompose my problems, I find that this particular context window and this model right now work better for this type of problem, and here is why"? If you have 10 or 20
people in an organization of 500 that
have that skill set, you will probably
make more of a difference to your
business than if you have all 500
trained on ChatGPT. I'm not kidding.
The nonlinear unlock you get from small
teams that operate like this is
enormous. Let's go to the third fluency
principle: do not over-infrastructure your AI. In fact, where you can, adopt the rule of thumb that says we don't add
AI infrastructure until our workflows
break. There's a real pattern right now, driven by vendors, in AI adoption. Teams see the power of AI.
They immediately start building
infrastructure to contain it. And so
I've seen teams go off the rails where they have barely any users and they're building custom harnesses for agent orchestration, complicated RAG systems for knowledge management of a dirty wiki, elaborate frameworks for prompt management when nobody's prompting at work, and sophisticated toolchains for AI workflows. You get the idea. Oftentimes this is premature. If you are building, don't do anything but start to build and see how far you can get. If I can tell one thing
especially to engineers, it is start
simple and add infrastructure when the
simple approach breaks. And that works
for CTOs as well, right? If you're
thinking about build versus buy and
vendor solutions, I would encourage you
to think about it as are your people in
a place where their workflows are
breaking despite current use of AI and
they need this tool to unbreak the
workflow. And there are absolutely cases like that. You can get into more complicated orchestration, more complicated memory-management scenarios that are naturally required for a given use case. If you have a complicated data
lake and you're going to need to migrate
it into a place where you have a very
strong use case for AI and you know the
traditional data lake architecture won't
work, sure, you're going to need to
build some infrastructure. I'm not
saying don't do it. I'm saying don't
start by building infrastructure.
Start by building value and then build
the infra when workflows actually break.
Build it when you really need it. So
start simple. And I think this will help so many teams like the ones I have seen go off the rails early, because the instinct to complicate as soon as you see something like AI is strong. I
don't quite know why it is. I have
sometimes wondered if it is because it
gives teams an illusion of certainty
because it's a brand new technology. But
regardless of the inner reason, just
start simple. Add infrastructure when
the simple approach breaks. That's it.
It's not that complicated. And you know,
the three principles I've spent some time outlining here, they work
together. If you enable constraints that
raise the floor, you let people
experiment safely without having to get
permission. That means people actually
can get better as they start to work
within structures that push them toward
healthy work habits. And that applies to
skills, this week's release, as much as
it applies to anything else, right?
Minimal infrastructure is going to keep
the focus on developing judgment and not
managing systems. So if you start simple, you are encouraging people to spend more time learning the art of problem solving (principle number two), the art of tackling increasingly complex problems with AI.
And that, my friends, is where the real value lies. If you look at these teams that are getting 2x improvements and ask yourself where this massive multiple is coming from, it is coming from the ability to tackle much, much harder problems with AI, like order-of-magnitude, 10x harder problems, than people who don't know these skills can tackle. And that is why I spent so much time talking about problem-solving skills with AI. It is a new class of problem solving; understanding it is a big deal, and we are not teaching it well today.
My goal with this video has been to give
you a sense of how you start to
structure your team, your organization,
your incentives so that people are
focused on that. People are focused on
how they can start to learn to solve
problems differently. And I'm going to
remind you again, this doesn't need to
be your whole organization to deliver
extraordinary value. You can get extreme
value from a group of 10 or so people
putting this together. It's a big deal.
And so my encouragement to you is pretty
simple. This week I want to suggest that
if you are a team leader, if you're a
director, if you're an executive, anyone
who has people responsibilities, there
are three questions that I think would
be really productive for you to get engaged with your team on. Number one: what are our enabling constraints? What boundaries here at work, on our team, in our org, make good AI work feel natural?
And if the answer is we don't know what
good AI work feels like, there's your
answer, right? That's where you need to
dig in, peel that onion back, and start
to figure out what good AI workflows
look like. Someone on your team, some
champion has a good example. Go find
them. Question two, how do we develop
good judgment in problem solving? Are we
just training to the tools or are we
truly building the capability to problem
solve with AI? Are we multiplying our
problem solving muscles because people
understand how to structure pieces of
work in ways that AI can use and assist
them on? It's essentially learning the
skill of playing with robots, right? You
have to learn the skill of working side
by side with a robot, of passing the
work over, letting the robot take a
turn, and then coming back. That is
literally what we're doing. It is a new
skill versus passing it to a human. It
is not the same thing. And people have
to learn the difference. And that is why
I spent so much time in this video talking through that skill set. It's a
critical one. So ask your team, how are
we doing at developing judgment? Ask
your learning and development team if
you have one. Are we just training to
the tools? The vendors will encourage
that. They want you to train to the
tools and buy more tools. Or are we
training to the skill? Are we training
to the capability? And if you're
confused, you can always ask me.
Finally, ask where are we overbuilding?
What infrastructure have we been tempted
to add on before we know we need it? Is
it a vanity project? Is it something for
the board? Is it something that we said
we'd commit to because we saw a LinkedIn
post? I've seen that done. Most of us
have. But seriously, what infrastructure
don't we really need for AI? Can't we
just focus on building the value and
find out what breaks? It's really
important to think that way. You will have things break, and you can add necessary complexity and infrastructure at that point, but then you're not wasting your effort. So there you go.
Those are my words of wisdom for you. I think it's especially important in an era when we are going to get release after release that feels a lot like Skills from Claude: democratizing, empowering, everyone loves it. Soon it
will proliferate across your business.
It may not deliver actual value. And so
this piece is all about how can we move
from activity to true fluency and
multiplicative value for teams and for
the business.