AI Search Inverts Rankings
Key Points
- The rise of AI‑driven search is causing top‑ranked sites to lose visibility while smaller players can see up to three‑fold gains, creating a 12‑ to 18‑month window before the rankings reverse.
- Large language models deliberately diversify sources, so aggressive GEO (generative engine optimization) by dominant sites triggers “position‑bias inversion” that pushes them lower in AI‑generated results.
- Over‑optimization and even being #1 on Google can hurt AI visibility; instead, incumbents should “under‑optimize,” relying on existing authority and minimal citations.
- The “18‑token magic number” is a proven pattern for Generative Engine Optimization (GEO), allowing content to be extracted more effectively by LLMs without traditional backlinks.
- Challenger brands and individual creators who aggressively adopt GEO can leapfrog established players during this malleable period, but must act now before the power structures solidify.
Sections
- AI Search Shifts Visibility Landscape - The speaker warns that AI-driven search is eroding the dominance of top sites, creating a 12‑to‑18‑month window where newcomers can gain threefold visibility, and explains how over‑optimization, being #1 on Google, and the “18‑token” rule give individuals a strategic edge through generative engine optimization.
- The 18‑Token Extraction Pattern - The speaker explains that AI models prioritize short, 18‑token citations to minimize hallucinations and maximize synthesis efficiency, shaping how marketers should structure content and dominate their AI positioning space.
- Citation Formatting and AI Visibility - The speaker explains how informal web citation styles obscure individual experts from LLMs, favoring institutions, and suggests using dedicated, concept‑specific claim pages to ensure proper attribution and increase citation frequency.
- Monetizing High‑Quality Data Signals - The speaker argues that creators with verifiable, expert content can capitalize on the demand from LLM developers for clean, authoritative data, positioning themselves as valuable “signal” sources amid rising synthetic noise.
- Amplitude Offers Free AI Analytics - The speaker describes Amplitude’s newly launched free AI visibility tool, likening it to Google Analytics' free debut, as a strategy to establish measurement standards, drive widespread adoption, and later monetize the platform.
- AI as New Web Lens - The speaker explains that AI adds an intelligence layer that mediates our interaction with the open web, urging creators to make their expertise visible so they can stand out in this evolving, AI‑filtered browsing experience.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=IwQYVQ3MohE](https://www.youtube.com/watch?v=IwQYVQ3MohE) **Duration:** 00:21:24

- [00:00:00](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=0s) AI Search Shifts Visibility Landscape
- [00:03:36](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=216s) The 18‑Token Extraction Pattern
- [00:08:08](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=488s) Citation Formatting and AI Visibility
- [00:12:08](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=728s) Monetizing High‑Quality Data Signals
- [00:16:16](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=976s) Amplitude Offers Free AI Analytics
- [00:19:48](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=1188s) AI as New Web Lens

## Full Transcript
The open web is dying. You've probably
heard that. What you haven't heard is
that the top-ranked sites are actually
losing visibility while nobodies are
getting 3x gains. And there's a 12-to-18-month
window now before all of this
inverts and the old players go back to
winning. Here's why that's happening and
what this opportunity means for you.
Even when we talk about AI search
killing the web, most of us don't
realize the mechanics that make that
possible. And I want to get into them so
you understand how you can change your
own visibility strategy whether you're
an individual or whether you're an
organization. So I'm going to be talking
about things like overoptimization and
why that kills you. I'm going to be
talking about why being number one on
Google might actually not be a good
thing. I'm going to be talking about the
18 token magic number. Yes, it's real.
It's actually a magic number. And I'm
going to explain why. And of course, I'm
going to talk about why individuals
legitimately have a better shot than
many brands right now at AI visibility.
This is all drawn from a Princeton
validated data set and study on what's
called generative engine optimization or
AI visibility. Take your pick. It's
basically how you get visible in LLMs.
Most people slept on this. The strategic
implications of what they found really
determine who is going to win during
this rare malleable period when results
are shifting and who's ultimately going
to be erased when the power structures start
to solidify. Let's start by talking
about the winner loser dynamic or what I
would call position bias inversion.
Fancy word, but we're going to get into
it. If you are already ranking, let's
say in the top three on Google,
aggressive GEO optimization can actually
kill your AI visibility because models
actively diversify sources to avoid
appearing captured by dominant players
right now. Princeton found this in their
data, but most people missed it. What
that means is your LLM, unlike Google,
is not optimizing for the first page and
wants to have a diverse perspective when
it comes back to you with answers. That
means if it sees the same players, the
top three, it is deliberately going to
go below. That is bad for existing
brands. It is good for the rest of us.
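That diversification behavior can be sketched as a toy re-ranker. This is purely illustrative, a greedy penalty on repeated domains with made-up sites and scores, and not any production model's actual algorithm:

```python
# Toy sketch of source diversification (illustrative only; not any
# model's real algorithm). Greedy re-ranking: each time a domain has
# already been cited, its remaining results are discounted, so a
# dominant site's near-duplicate top results fall behind a challenger.

def diversify(results, penalty=0.5):
    """results: list of (domain, relevance) pairs, best first."""
    pool, chosen, seen = list(results), [], {}
    while pool:
        # Discount each candidate by penalty^(times its domain was already picked).
        best = max(pool, key=lambda r: r[1] * penalty ** seen.get(r[0], 0))
        pool.remove(best)
        chosen.append(best)
        seen[best[0]] = seen.get(best[0], 0) + 1
    return chosen

# Hypothetical data: one dominant site holds the top three spots.
ranked = [("bigbrand.com", 0.95), ("bigbrand.com", 0.94),
          ("bigbrand.com", 0.93), ("challenger.io", 0.60)]
for domain, score in diversify(ranked):
    print(domain, score)
```

After one citation from bigbrand.com, its remaining results score below the challenger, so challenger.io jumps to second place: position-bias inversion in miniature.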
So the strategic playbook really splits
here, doesn't it? If you're an incumbent
with traditional authority, if you're
Nike, you need to underoptimize. So you
want to look at fluency overall and you
maybe have a citation or two, you want
to let your existing credibility carry
most of the water for you. But if you're
a challenger with genuine expertise but
no domain authority, this is a really
rare chance to be extremely aggressive
because you can actually leapfrog
potentially without backlinks because
of course backlinks aren't required for
generative engine optimization. So the
12-to-18-month compression happens
because most top ranked content isn't
optimized for LLM extraction patterns
yet. And so there's this asymmetry where
lower ranked sources with proper
structure for AI are getting cited at
what Princeton found were 2 to 3x
higher rates, but once everybody
optimizes, that advantage disappears and
we're back to authority signals
mattering, just measured differently,
right? So what this means practically is
if you were a brand that everybody
knows, your competitor's blog post
structured in the way that AI can absorb
it might actually outrank you in AI
citations. Even though you dominate and
own the top of the traditional search
page, that's no guarantee. And that
window is the single most obvious
strategic opportunity I can tell you about.
Now, I say it's obvious because once I
explain it, it makes sense. But most
people aren't picking up on this. And
until they do, it's yours, right? You
get to pick the space you want to
dominate from an AI positioning
perspective. And you get to start to
implement the tactics I'm going to lay
out to increase your visibility. And
they are very specific tactics, and
we're going to get into them. The first
one, I promised you an answer to this,
the 18 token extraction pattern. Why
content structure has changed. So if you
do a copy-paste audit out of ChatGPT,
right, you find that almost all
citations end up being synthesized. They
end up being single sentence extractions
that are under 18 tokens. Now, that's
not true if it's a deep research piece
and you have a lot more tokens to play
with, but for most of the models we work
with day-to-day and frankly for the vast
majority of the searches, the model is
optimizing for synthesis efficiency. And
anything longer than a short sentence is
going to require summarization and that
introduces potential errors and reduces
citation confidence. And so, the models
are trained to try to reduce
hallucinations. And if they have
something that is a clean, 18-token
sentence or so that they can just
deliver, they feel good because they
feel like they found the answer to
whatever you're talking about. It's
clear, it's quotable, it fits inside
their context window, and it works. This
is also drawing from the Princeton
study. This breaks traditional content
strategy, right? Traditional content
strategy is built around these long form
authority pieces where you build
arguments across many paragraphs. And
you know what you're really hoping for
is that the piece will be rich enough
that Google will pick it up. But
here what actually gets extracted and
cited is a single confident claim that's
a clear sentence. It's a complete
self-contained statement that needs zero
surrounding context to be useful. It is
snack-sized for the LLM. Right? So, the
implication here is that your 30,000-word
definitive guide or whatever you've
written for SEO or for visibility may
well get summarized while your
competitor's 600-word guide, with
like five golden-nugget sentences that
they've called out and highlighted,
ends up getting quoted verbatim by the
AI because it has a relationship between
content quality and citability
that yours doesn't. Like, the LLM is able
to cite it. And so that one small piece can
invert years of work, right? If you
start to build up a little library and
you start to go from there because once
the AI starts to figure out it can get
stuff from this particular source, it's
going to keep coming back. LLMs, like
people can be creatures of habit. So, in
other words, this means you don't have
to write a long form piece and dedicate
lots of effort as a brand to owning
nuanced arguments and complexity. You
actually can split out your content
operations into very, very clean content
that you have optimized for AI, and also,
if you want, human readability at the
same time. Yes, it is possible. One of
the things that marks a weak SEO
strategy is people who tell you to have
these hidden pages that AI can see and
humans can't. Ultimately, the incentive
of the LLMs is similar to the incentive
of Google. They want to find you useful
information as a person. If you start to
create pages that are not useful to
humans at all, you run the risk of
running afoul of any kind of search
tool update that OpenAI or Anthropic
ships. So if I were you with this
information, I would look at a new kind
of content structure that is designed to
be human readable but also have these
sort of snackable, extractable moments that
are really easy for LLMs to pick up. And
let's talk a little bit more about this
institution shadow issue. There was a
GEO bench personal entity study and it
tracked 3,200 experts and what it found
was that we have a real issue with
institutional shadows in individual
visibility for AI. For example, let's
say you're a researcher at
Google, right? And you have a name, we'll
say Jane Doe, right? PhD Jane
Doe did the work on a Google paper.
problem is the institution Google can
overshadow the value of the individual.
This isn't an AI limitation per se. It
is actually a formatting problem that
most experts aren't aware of. When
you format it as, say, here's my quote,
here's my first name Jane, my last name,
and then my title and my org like Google
in a line, the attribution is accurate
because the LLM can read and understand
the semantic relationship between all
those terms. But everyone knows on the
open web that we rarely get cited that
way. I have never been cited in that
formal a fashion. Quote, first name,
last name, my specialty or my org, all in
one clean line. People don't do that on
the web. And that means most experts end
up invisibly contributing to AI
knowledge while the institutions capture
the credit because any other citation
structure in the study reinforced Google
or the org name rather than the
individual. So there is an opportunity
here if you can set up a claim page, a
page off your website that talks about a
particular concept and it only talks
about that concept. So like your
name.com/concept,
right? That gets cited, the study found,
four times more often than a multi-topic
blog does. And I think that is part of
why we have seen such tremendous uptake
on recent very high-profile long-form
papers that are in a dedicated domain.
And so I don't know if you've noticed
this, but one of the things on the web
in the age of AI is that people who want
to sound serious write a special essay
and put it on a special domain. Leopold
Aschenbrenner's Situational Awareness is one.
The AI 2027 domain is another one. These
sit on their own URL and they get cited
in queries even if they're longer. I
said longer doesn't always work, but
there's some exceptions to that, right?
Like they sit on their own URL. They're
only about this one particular thing.
Typically, they will have a cover page
that is full of the kind of juicy
tidbits under 18 tokens that LLMs love
and then humans can go in and read more.
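If you want to audit a draft for those under-18-token moments, a rough pass is easy to script. The sketch below uses whitespace word counts as a stand-in for model tokens (a real subword tokenizer such as tiktoken would count somewhat differently), so treat the threshold as approximate:

```python
import re

# Rough audit for "snackable" claims: sentences at or under ~18 tokens.
# Whitespace word count is only a proxy for model tokens (subword
# tokenizers count differently), so the threshold is approximate.

def snackable_claims(text, max_tokens=18):
    # Split on whitespace that follows sentence-ending punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if s and len(s.split()) <= max_tokens]

draft = (
    "Generative engine optimization rewards short, self-contained claims. "
    "A sentence that needs three paragraphs of surrounding context to make "
    "sense is unlikely to be extracted verbatim by a model synthesizing a "
    "quick answer, no matter how well argued the full piece happens to be."
)
for claim in snackable_claims(draft):
    print(claim)
```

Only the first, short sentence survives the filter; the long one would need summarization before a model could use it, which is exactly the extraction friction described above.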
So, this is not about quality. This is
about architecture matching extraction
patterns. How do you build an
architecture that allows the LLM to
understand you're just talking about
this one thing? Are you seeing that
pattern? The LLM wants clarity. That's
what it's looking for and we need to
give it that clarity. But right now,
most experts that I talk to haven't
figured out what they're going to be an
expert in from an LLM perspective. They
don't have the idea of having a claim
page, something they're going to talk
about that they're going to own the
concept on. That means if you want to
own something, chances are nobody else
does yet. If you structure your
expertise in such a way that it's a
unique answer to a specific and actually
asked question and you have it be human
readable, you have a chance to establish
AI authority in the space while everyone
else figures this out. I also want to
talk about the noise floor paradox. So
the way I've phrased this is why spam
makes you more valuable, which I kind of
cringe at, but let's get into it. So, a
study by SparkToro found that
approximately half of new pages are AI
generated spam, right? And everyone
thinks that this makes the web less
useful because there's less informational
density. But here's what they're
missing. As the noise floor rises, as
you get more and more of these cheap
500-word AI listicles that don't have
coherence, etc., AI is more and more
desperate to avoid hallucination
penalties. And that makes high signal
content rarer. It makes it more
valuable. One of the reasons I do video
is because it is hard to imitate video
in the same way. You can't get Nate
waving his hands in the same way. And
that makes me sort of a unique piece,
right? That's very intentional on my
part. In the same way, think about
places where you can have these sort of
intentional presence moments on the web.
Maybe not through video, right? maybe
through really good writing, but
whatever it is, think about how you can
be a place for signal in a world where
LLMs are searching through noise. I
think this is why Reuters licensed their
corpus to Anthropic. I think the deal
was like $5 million annually. Frontier
labs need sources that they can cite
with real confidence. And the more
synthetic garbage comes onto the web,
the more labs will pay for clean signal
and really the more LLMs will be trained
to find it. So the strategic implication
here is if you have genuine expertise
with verifiable data, you have a window
where you can actually establish value
on the web. And if your corpus of data
is rich enough, which not everybody's
is, but if it's rich enough, you may
even be asked to monetize it as training
data. I'm not going to say that's for
everybody, but it's a possibility at
this time because model makers are so
hungry for very high quality data. And
regardless of whether you end up having
Dario Amodei calling you on the phone
offering you $5 million, most of us
don't. I certainly don't. You have the
chance to be the signal in the noise on
the web. And that matters because
that allows an LLM to bring you into the
chat in a way that's high authority.
Next, I want to talk about this idea of
citation churn and why static content is
such an issue on the AI web. If you're
doing a GEO strategy, a generative engine
strategy, you tend to get cited initially, right,
in week one, and then you vanish by like
week three or week four because models
will re-rank based on other competitors'
updates, on freshness, and so your
evergreen content does rot. And
competitors who do micro updates very
quickly maintain good visibility. And so
changing something on your page can help
to call attention to an AI that there's
life here. And I don't want you to make
this a situation where you are trying to
game the system. That is not the intent.
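If you want to operationalize micro-updates without gaming anything, even a trivial staleness sweep over your pages helps. The page paths, dates, and three-week threshold below are invented for illustration, echoing the week-three, week-four churn described above:

```python
from datetime import date, timedelta

# Hypothetical staleness sweep (paths, dates, and the three-week
# threshold are invented): flag pages whose last meaningful update
# predates the cutoff, so they can get a micro-update.

def stale_pages(pages, today, max_age_days=21):
    """pages: dict mapping page path -> date of last update."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(path for path, updated in pages.items() if updated < cutoff)

pages = {
    "claims/18-token-rule.md": date(2025, 1, 2),
    "claims/citation-churn.md": date(2025, 1, 20),
}
print(stale_pages(pages, today=date(2025, 1, 24)))
```

Run on a schedule, this just tells you where to spend the next meaningful micro-update; the update itself still has to be real content.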
If you are putting fresh content out
there and it's meaningful and it's snack
size, it has some 18 token moments in
it, it's readable by humans, you're
going to be fine. But this does invert a
lot of the content investment thesis.
The content investment thesis sort of
says if you publish really comprehensive
good pieces, you can generate passive
traffic from search for years. It is not
as clear that AI does that. In fact, in
the AI citation economy, content can
require ongoing maintenance or it
effectively drops out of the model's
mind, which means your org structure
might need to look different if you're a
brand. You may need dedicated resources
for micro updates versus these long
pieces. I also want to talk about the
domain mismatch penalty. So, LLMs were
trained to cross-check domain alignment
as a way of looking at hallucinations
and trying to avoid them. But what that
means is that traditional build
authority through comprehensive coverage
can be actively toxic because the
content sprawl that worked for SEO,
write about adjacent topics, capture
longtail keywords, right? I've heard
this since, you know, for 20 years, that
now flags you as a non-expert because
you're not as focused. Do you see this
Uber theme, this larger theme of focus I
keep coming back to? That's your
takeaway. If the model sees you citing
outside your core domain, it may assume
that you're an aggregator. It may assume
you're less authoritative and that
breadth can actually harm your AI
citations. And so the implication for
you, it kind of goes back to focus. You
need to have a content focus that is
very specific and you need to be
aggressive about the domain you're in,
the sources you talk about in that
domain, and just obsess over that. This
is similar, in fact, to the TikTok
strategy where you just talk about one
thing all the time. On my TikTok
channel, it's all AI. I just talk
about AI and that's what works because
the algorithm knows what to expect and
because the people know what to expect.
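One way to sanity-check that focus is to measure what share of your recent output sits in your declared core topic. The post labels below are invented for illustration; the point is the ratio, not the tooling:

```python
from collections import Counter

# Illustrative focus check (made-up post labels): what fraction of
# recent posts sit in the declared core topic? Sprawl that once helped
# long-tail SEO can read as "aggregator" to a model checking alignment.

def topic_focus(post_topics, core):
    """Fraction of posts tagged with the core topic."""
    return Counter(post_topics)[core] / len(post_topics)

posts = ["ai", "ai", "ai", "travel", "ai", "finance", "ai", "ai"]
print(round(topic_focus(posts, "ai"), 2))  # 0.75: a quarter of posts off-topic
```

There is no published threshold here; the sketch just makes drift visible so you can decide what belongs on your main domain.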
You can't really sort of separate this
from the people who are actually
consuming the content. Now, why am I
making this video this week? The window
is getting compressed. Amplitude's launch
of free AI visibility tooling is
blowing up. You can use it as an
individual. You can use it as a brand.
It's completely free. I don't know how
long it's going to be free, but it's
free for now. And most people think, you
know, that's another analytics product.
But what they're missing is that this is
the first time a major platform has
given away measurement infrastructure
for free in an effort to define the
terms of the debate. And so what they're
signaling is GEO is going mainstream.
Everybody needs to be aware of it. We're
going to make it free and you are going
to be able to get not only your score,
but by the way, you can look up any
brand on there for free right now. I can
look up Nike for free. And so if you
want to look at any brand in the world
or you want to look at your buddy who
you think is doing very well, fine, you
can do it. And Amplitude will write you
a free report. It's a really cool
nugget. And the pattern is very similar
to when Google Analytics launched,
right? They made it free, and once they
did, defining the measurement standard became
the property of that brand. So Google
Analytics effectively defined the
measurement standard and became the
Kleenex of measurement on the web.
Adoption starts to accelerate and you
find ways to monetize down the line.
That's the strategy Amplitude is
using. They're using the Google Analytics
strategy here. The strategic implication
is that the playbook that I'm sharing is
not going to stay secret for long once
there are easy ways to measure it. And
so I do worry that this 12-to-18-month
window is one that gets shorter and
shorter and shorter as more of these
tools come online. The last thing I want
to talk about is the under-optimization
strategy. Why is less more? This is the
most counterintuitive finding in
the study. For top-ranked sites,
optimizing only for a little bit of AI
fluency plus maybe one strategic
citation on the page produced an average
of 20 to 22% net gains, while aggressive
multi-technique optimization actually
triggered the AI to detect that the
brand was trying too hard, and to reduce
its visibility. And this runs counter to
every SEO instinct in our bodies, right?
And so I want to call that out because
it reminds me that intelligence is now
filtering our web experience. The LLM
figured out that you were trying to game
the system. This is why besides focus,
the other thing I have been emphasizing
to you is do not try to game the system
but try to convey real authority. If you
have an established brand, that means
resist the urge to overoptimize. Trust
your existing credibility. Make it a
little more legible with light touches,
but don't push it in the AI's face. If
you're an individual or a competitive small
brand, that means you can be aggressive
because you're essentially shouting from
a sea of small people to be heard. And
the AI may be more likely to pick up
your signal if it's focused, if it's
high authority, if it's reputable, if
it's put together, if it's clear, if
it's cited. I don't mean cited by
backlinks. I mean if you have citations
around your area of expertise that are
really useful. So where do we wrap up
here? The open web is dying. I don't
want to pretend it's not. The fact that
we have more and more Google searches
year-over-year, which is true, does not
translate into more and more clicks
through. As anyone in SEO will tell
you, more and more searches every year
are ending on the search results page.
And there is no click-through because
Google is so good at giving answers. I
want to challenge you that what we are
seeing right now is the advent of a new
kind of web. It is not that we should
think about it the way most news media
portrays it, as SEO goes down, or news media
or search goes down, and then AI goes up.
That's not correct. Instead, we should
think about it as both are rising but AI
is opening up a fundamentally new
relationship with the web where the
intelligence layer disintermediates or
comes between the open web and the
individual. And so the art of this is
thinking of the AI as the pair of
glasses that you put on to view the open
web. And all you're trying to do is help
that pair of glasses focus on real
signal that's useful. You can trust it
to not want trash. I know that sounds
funny, but they're actively
working to make them better at that. You
can trust it to be hungry for signal.
The tips I'm giving you should help you
to make your expertise legible to the AI
so that when this whole new web
experience comes out where you are
essentially experiencing the web through
an AI, you get noticed. And that is not
only my best tips from the Princeton
study, but some real hints on how I am
intentionally thinking about my own
presence on the web as we move into this
new world of AI. I hope this has been
useful for you. It's not the end of the
world that the web is dying. And really,
I don't think the web is dying, per se.
I think it's just evolving. The web has
always evolved. This is the next part of
the story.