AI Destroys Hiring Signals
Key Points
- The job market’s traditional “expensive” signals—well‑crafted resumes, cover letters, and portfolios—lost their value because AI can generate high‑quality versions at zero marginal cost, turning those signals into noise (a Shannon‑entropy collapse).
- This collapse hurts both sides: candidates flood jobs with countless AI‑crafted applications, and hiring managers drown in thousands of indistinguishable submissions, while the usual advice (“post more,” “yell louder,” “build a social presence”) only adds to the cacophony.
- The pre‑2022 information equilibrium, where effort differentiated strong candidates from weak ones, is permanently broken; simply increasing the volume of signals can no longer restore meaningful hiring signals.
- To navigate this new landscape, job seekers and recruiters must abandon the old playbook and adopt fresh, concrete tools and strategies that create genuine differentiation beyond generic, effort‑free AI outputs.
Sections
- AI Destroys Hiring Signals - The speaker argues that AI-generated resumes have reduced the cost of creating hiring information to zero, eroding the traditional signals that once separated genuine candidate effort from noise and leaving both job seekers and employers without reliable guidance.
- Shifting From Credentials to Verification - The speaker argues that in an era where information is abundant, traditional resumes and certifications add noise, and proposes a move toward provable verification of skills—outlining five principles for a new talent marketplace model.
- Process Over Outcome in Hiring - The speaker argues that hiring should prioritize observable, verifiable work processes—such as live problem‑solving trials—rather than polished resumes or portfolios, which can be easily faked in the AI era.
- Adaptive LLM Competence Assessment - The speaker proposes using progressively harder LLM‑driven tests to generate reliable signals of candidate ability and help companies validate their own hiring needs, moving AI use beyond simple resume generation.
- Building Capability‑Based Job Search - The speaker argues for replacing keyword job matching with a semantic, capability‑focused search using RAG tools, highlighting how transparent verification becomes increasingly valuable amid AI‑generated resume noise.
Full Transcript
# AI Destroys Hiring Signals

**Source:** [https://www.youtube.com/watch?v=KT4v_I9zvH4](https://www.youtube.com/watch?v=KT4v_I9zvH4)
**Duration:** 00:16:11

## Sections

- [00:00:00](https://www.youtube.com/watch?v=KT4v_I9zvH4&t=0s) **AI Destroys Hiring Signals**
- [00:03:06](https://www.youtube.com/watch?v=KT4v_I9zvH4&t=186s) **Shifting From Credentials to Verification**
- [00:06:23](https://www.youtube.com/watch?v=KT4v_I9zvH4&t=383s) **Process Over Outcome in Hiring**
- [00:09:37](https://www.youtube.com/watch?v=KT4v_I9zvH4&t=577s) **Adaptive LLM Competence Assessment**
- [00:13:21](https://www.youtube.com/watch?v=KT4v_I9zvH4&t=801s) **Building Capability‑Based Job Search**

## Full Transcript
We all know that LinkedIn is dead. But
the problem is most of the advice that I
see online is still optimizing for that
dead system. I want to step back. I want
to look at the root causes of what's
going on with the AI job market
collapse. And I want to talk it through
step by step and get to a spot by the
end of this 10-minute video or so where
you actually have a clear perspective on
what's going on, a clear sense of your
actionables, the tools you have at your
disposal that are not just the standard
advice. And if you're in the hiring
chair, a clear sense of how you can
start to differentiate as you hire. So,
let's get into it. We don't have a lot
of time. Number one, the core issue here
is that signal value in the hiring market has collapsed to zero because the marginal cost of information production is now zero. In other words, the job market
used to work because signals were
expensive to produce. So, a resume took
time. A well-written resume took more
time. Cover letters took genuine
thought. I used to be able to read a
resume and I could read the effort
behind it. That cost worked because it separated signal from noise. AI has
collapsed that cost to zero. We all know
that. We live that every day. When you
can write a good resume at zero cost and
in fact pump out 10 different custom
resumes, there is no information in that
signal. The fancy term for this is Shannon entropy, and Shannon entropy is playing out in the labor markets, right? The less fancy way of saying it is that because it doesn't cost anything to produce information, that information loses its signal value in hiring, and we're all in trouble. That's
what we feel, right? What's interesting
is we mostly talk about it from the
talent side, but the truth is both sides
are drowning. A thousand applications
per job sucks for everybody. And the
problem is both sides right now tend to
give advice that creates more noise to
cut the noise. So yell louder, but
everyone's being told to yell louder.
Everyone's being told to put a portfolio
out there. Everyone's being told to
start a social media presence of some
sort. And hiring managers are being told to put out more and more job descriptions. It all adds up to
this cacophony of noise in the AI job
market. And what I want to suggest to
you is that the information equilibrium
that we used to have before 2022 is
permanently gone. It is not coming back.
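The Shannon-entropy point above can be made concrete with a toy mutual-information calculation: how many bits a polished resume tells you about candidate quality when polish is costly versus when it is free. The probabilities below are illustrative assumptions, not measurements.

```python
import math

def mutual_info(p_strong, p_polished_given_strong, p_polished_given_weak):
    """I(quality; resume signal) in bits for a two-state channel:
    candidate is strong/weak, resume looks polished/plain."""
    joint = {}
    for q, pq, cond in [("strong", p_strong, p_polished_given_strong),
                        ("weak", 1 - p_strong, p_polished_given_weak)]:
        joint[(q, "polished")] = pq * cond
        joint[(q, "plain")] = pq * (1 - cond)
    p_sig = {s: sum(v for (q, s2), v in joint.items() if s2 == s)
             for s in ("polished", "plain")}
    p_q = {"strong": p_strong, "weak": 1 - p_strong}
    return sum(v * math.log2(v / (p_q[q] * p_sig[s]))
               for (q, s), v in joint.items() if v > 0)

# Pre-2022: effort was costly, so polish tracked quality.
pre = mutual_info(0.5, 0.9, 0.2)    # roughly 0.4 bits of signal
# Post-AI: everyone can produce a polished resume at zero cost.
post = mutual_info(0.5, 0.95, 0.95)  # zero bits: polish says nothing
```

When the two conditional probabilities are equal, the resume is statistically independent of quality, and the mutual information is exactly zero: that is the "no information in that signal" claim in numbers.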
More noise does not fix this. In the past, strong candidates could afford the effort to rise above the noise and break through with genuine signal, while weak candidates struggled to put in that effort and produce quality work. Effort itself became a useful signal. LLMs have
destroyed the value of effort from good
candidates and they make it equally
cheap for everyone to produce infinite
signals. And I think we have to start by
just admitting the old game that we
played before 2022 is over and we don't
know how to play the new game yet.
That's what I'm getting to with this
video. Every solution we have is adding
to that noise. And I want to be honest
about that, right? When you optimize
your resume, when you optimize your
portfolio website, it all adds to the
noise. And so what I want to suggest
here is that what we need to do is to
move from a world where information is
cheap to produce for everybody to a
world where we start to see verification
instead of credentialing. Credentialing is what we used to do. Credentialing is what a resume is for. Credentialing is what certifications are.
Verification actually shows in a
provable way that we have the skill. And
I think that we are trying to make our
little baby steps that way when we talk
about the idea of proving work through a
portfolio. But we can go a lot farther
than that if we go back to first
principles and actually reason this
through. Let's look at what it takes
when you see verification as the heart
of a new way of thinking about jobs in
the post-AI era, when information costs nothing to produce. I
want to suggest five quick principles
for a verification world. And I want to
start to suggest to you how we could
start to build those out and game those
out even now. One of the things that has made this video difficult is
that a marketplace like the talent
marketplace is sometimes stuck in a bad
equilibrium where every single
stakeholder has an incentive to change
it, but none of us can do it by
ourselves. I want to give you tools that
work even in a difficult equilibrium
like we're in right now. And that's what
I've been really wrestling with. So, the
five principles that follow are scalable: you can apply elements of them now, and they have teeth that can move us into a better equilibrium if we all work together as a tech ecosystem. So, principle number
one, process over outcome. Outcomes are
more easily fakeable now. LLMs, as I've been saying, generate code, they generate write-ups, they generate demos.
Process patterns are closer to that
verification world. Process patterns are
hard to fake. We look for them in
interviews. The iteration cycles you
took to get something done, where you
got stuck, how you debugged some vibe
code, what you would do differently.
Effective LLM use, effective LLM building, effective LLM writing have a shape. You can iterate, you can
backtrack, you can override, but it's
much much easier to distinguish the
shape of good LLM co-work versus blind
acceptance. And so I think that one of
the things that we should start thinking
about is making our process the product
when it comes to the talent marketplace.
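One hypothetical way to make process the product is to keep a structured process log alongside a project, recording exactly the patterns named above: what each iteration tried, where you got stuck, how you debugged, and what you would do differently. The `ProcessLog` class and its fields here are invented for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessEntry:
    """One iteration cycle in a project's process log."""
    goal: str          # what this iteration tried to accomplish
    stuck_on: str      # where you got stuck
    debug_path: str    # how you worked through it
    would_change: str  # what you would do differently

@dataclass
class ProcessLog:
    project: str
    entries: list[ProcessEntry] = field(default_factory=list)

    def summary(self) -> str:
        """Render the log as a narrative of the work, not a polished outcome."""
        lines = [f"Process log: {self.project}"]
        for i, e in enumerate(self.entries, 1):
            lines.append(f"{i}. {e.goal} | stuck: {e.stuck_on} | "
                         f"debugged: {e.debug_path} | next time: {e.would_change}")
        return "\n".join(lines)

log = ProcessLog("Onboarding funnel revamp")
log.entries.append(ProcessEntry(
    goal="Cut drop-off at step 2",
    stuck_on="Misread the event data",
    debug_path="Rebuilt the funnel query from raw events",
    would_change="Validate instrumentation first",
))
```

A portfolio built from entries like these shows the iteration cycles themselves, which is the part that is hard to fake.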
This has concrete implications for your
portfolio. If you're looking at your
portfolio as an outcome, maybe you want
to look at it as a process or a story
that you're telling where you include
the debugging and the getting stuck and
what you do differently. The most
effective portfolio site I have ever
seen told a full three-year story of a
product. Every stage along the way was
honest about mistakes, showed failed
designs. It was absolutely compelling.
The process matters more than the
outcome. And you can't fake the process
the way you can fake the outcome in the
age of AI. That's number one. Number two,
we need to make verification easier, not
make signals better. Companies don't
need better candidates. Actually, most
of them have all the candidates they
need sitting in the applicant pool, as
the applicants will tell you. it's that
they can't tell who's real. So, stop
optimizing for better resumes and
shinier portfolios in that world because
the companies won't be able to tell.
Instead, start optimizing for things
that are more verifiable. How can you
show work trials where you solved a real
problem? And by the way, as a hiring manager, you should be looking at work trials. That is actually a good way to get a sense of how people work in this world, and it gives candidates something they can show. What about live problem-solving videos, where you get on with a candidate and solve a problem together? That's a great way to sort of
make this work as well. And if you're a
candidate, you don't have to wait. You
can live solve a meaningful problem. And
I've seen people do it in videos where
they get on and they say, "You know
what? I took a look at your onboarding
funnel. These are the three things I
think I'd change. This is why. This is
how I'd change it. This is how I'd test,
etc." You can just start to problem
solve. And again, you're showing that
process and you're sort of surfacing verification, because one of
the things I will tell you on the talent
side, companies want to do this, but
they by and large don't know how and
they are stuck in the existing default
circumstance. The goal of this video is
to shake up the status quo a little bit
and get people thinking differently
because I think that both sides need to
think differently to shake this
equilibrium loose. Ultimately, the
winner in a system like this isn't the
one that yells the loudest. It is the
one who makes hiring decisions the
easiest. If I could tell talent one thing: when you're looking for a role, make the hiring decision the easiest thing to make. That is the mindset to be in, more than adding to the noise. Principle
number three, we can start to use LLMs to generate verification, not just to generate text. Now, this starts to get
creative, maybe a touch speculative.
There might be a product idea here, but
I think there's something for both the
talent side of the ball and also for the
hiring side of the ball here. The point
is this. We are mostly using LLMs as noise generators in the talent marketplace. We shouldn't be. LLMs are actually really effective judges of other people's work. They're
effective evaluators. They're effective
researchers. They're creative thinkers
and they're verifiers. In other words,
these are machines that compute with
words and we are just using them to
produce lots and lots of cheap text
instead of thinking more creatively.
What could we do with this capability?
As an example, a cryptographically
signed LLM conversation shows your
prompt quality and your iteration
pattern. Now, you may not be able to
cryptographically sign it because I'm
not sure I know of a startup that does
that, but you can still right now show
your prompt quality and iteration
pattern. Again, we're going back to that
process piece, aren't we? An LLM-generated adaptive assessment finds your competence ceiling efficiently. What that means is
you can actually get the LLM to
progressively test you and ask you
harder and harder and harder questions.
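A minimal sketch of that adaptive loop, with the LLM question generation and grading abstracted behind a hypothetical `answer_fn` callback; a real system would have the LLM generate a question at each difficulty level and judge the answer.

```python
def adaptive_assessment(answer_fn, min_level=1, max_level=10, strikes=2):
    """Ramp question difficulty until the candidate misses `strikes`
    questions in a row at a level; report the highest level passed.

    `answer_fn(level) -> bool` stands in for asking one question at that
    difficulty and grading the response (the LLM call is out of scope here).
    """
    ceiling = 0
    for level in range(min_level, max_level + 1):
        misses = 0
        # Allow up to `strikes` attempts; one correct answer passes the level.
        while misses < strikes:
            if answer_fn(level):
                ceiling = level
                break
            misses += 1
        else:
            break  # failed every attempt at this level: ceiling found
    return ceiling

# Simulated candidate who reliably answers up to difficulty 6.
ceiling = adaptive_assessment(lambda level: level <= 6)
```

The transcript of such a run, questions and answers at each level, is itself a process artifact a candidate could share.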
I wrote an AI fluency assessment just a
few days ago on the Substack and it had
some of that built into it, but you can
go farther. You can actually design an
LLM competence assessment that asks
harder and harder and harder questions
as you go to eventually find where you
top out. And I think that that's
actually useful not just for hiring
managers to find signal. It's also
useful again on the process side for
talent to show what you're capable
of, right? Like if you can go through
and you can take the hardest, most
gnarly product management questions that
an LLM can throw at you and answer them
in a high-quality way after going through
15 easy, medium, increasingly difficult
ones, that says something, especially if the whole process is visible and they can see that you're not gaming the system. So I think that we are overdue
for using LLMs to create signal where
there just hasn't been any signal
whatsoever. Right? It's like we're
pouring all of this energy for AI into
making noise in a crowded, noisy
marketplace, but there are quiet spaces
where nobody's talking at all. Why
aren't we using AIs a little bit more
creatively beyond just generating
resumes, right? Beyond just generating
cover letters. All right, principle
number four, bilateral value creation.
You want to help companies to verify
themselves. I know this sounds funny if
you're talent like why do the companies
need the help? But trust me, most
companies do not know what they really
need. They don't. They're posting LLM-generated job descriptions for fuzzy
roles and they need help to clarify in
most cases. You can interview them about
the problem space, right? You can write
analyses of their challenges. You can
offer trials that validate their needs.
I know people who are doing this and are
sort of taking command of the job
process because the company is trying to
figure out the answer and it feels
really good for them when a talented
candidate comes along and says, "Let me
help you get clarity on this role. This
is what you actually need." If you want
a cheat code for more senior interviews,
a lot of your senior interviews for
director and up roles look like that
because they're all custom-made. And so
you end up in a place where you are
helping the company to figure out for
both of you what the company really
needs in the role and then secondarily
whether you're a fit. In that situation
you're not just proving your capability.
You're helping them understand what
capability they are looking for. That is
the kind of value that an AI resume
can't give. That is the kind of value
that reminds them that you produce value
that can't be gotten from Claude or ChatGPT. Right? It's something that is essential in the human-to-human
connection of work, which by the way,
lest we forget, is the whole point of
all of this. Principle number five, you
need to be looking at capability spaces
more than job titles. I saved the best
one for last. An AI PM means different
things at different companies. We all
know that, but we lack a vocabulary for
the next level. So, what I want to
encourage you to do is to think about it
this way. Job titles are often noise at
this stage because the roles are
evolving so quickly and it's part of
what makes the talent marketplace so
noisy. So, instead of looking at all AI PM roles, position yourself across capability spaces. Look at technical communication; maybe that's a strength for you. Look at system design under uncertainty. Look at LLM evaluation; is that a skill you have? Look at rapid prototyping. Build a project that works across multiple capabilities. Show your process, which is one of the things I've been calling out. Match on problem types that they need solved. And so, one of the
things that I think is actually really
slept on is that we have semantic search
available now that will allow you to
match on much more than just keywords.
And yet, our entire job ecosystem still
runs on keywords. Why is that? Why can't we have a semantic job search that matches not on keywords but on capabilities? It's not that hard.
And you can actually build one yourself: do a project where you build a RAG over a listing of jobs in a particular job family and semantically search it to see where the right role targets are. All of the
tech is on the table. That is basically
a weekend project. At this point, you
can transcend the title-matching game entirely with work like that. And
the larger point, whether you build a RAG for your personal job search or I'm inspiring someone to do that (and I bet I am), is this: think in capability
spaces. Think in terms of what are the
capability sets you can show. How can
you lay out that process really
transparently? And then you can
get into a space where you can start to
show what you know in a way that's
provable. And that gets all the way back
to verification. The larger point is
this. As more LLMs create more noise, as
the crowd runs to have LLMs generate
resume after resume, generate AI interview answer after AI interview answer, verification is only going
to become more valuable, not less
valuable. The tactics I'm laying out
here are designed to have increasing returns: the more the market breaks, the
bigger your advantage for making vetting
easier because that is the core problem
companies are facing. I don't want to
give you principles here that require
everyone who is listening to this to
yell louder and compete with each other.
Instead, I want to give you things that
let you zig when the market is zagging.
And right now, the market is zagging
hard toward yelling in a noisy
marketplace with AI. So, let's find some
creative alternatives, shall we? You are
building with these kinds of moves
toward a new equilibrium while everyone
else is clinging to the old one and that
gap is going to widen with time. The LLM
noise crisis is not going away. I said
at the top this is a permanently broken
system. It's broken permanently not
because of anybody's bad intent but
because LLMs have permanently reset the
cost of this kind of information to
zero. So this is not really advice for
navigating a broken system. It is
positioning you for the future system
that will replace it. And it's setting
you up to work well even now in a system
that is not quite ready to reach that
new equilibrium. It is a principle for
bridging. How can we succeed now, zig while the market is zagging, and build toward a better equilibrium? The bottom line is this.
Information has become free in the last
two years. Verification has become
priceless. The winner makes verification easy. Think about that. Good luck in your job search. Good luck hiring. It's hard to hire too.