
AI Search Inverts Rankings

Key Points

  • The rise of AI‑driven search is causing top‑ranked sites to lose visibility while smaller players can see up to three‑fold gains, creating a 12‑ to 18‑month window before the rankings reverse.
  • Large language models deliberately diversify sources, so aggressive GEO (generative engine optimization) by dominant sites triggers “position‑bias inversion” that pushes them lower in AI‑generated results.
  • Over‑optimization and even being #1 on Google can hurt AI visibility; instead, incumbents should “under‑optimize,” relying on existing authority and minimal citations.
  • The “18‑token magic number” is a Princeton‑validated pattern for Generative Engine Optimization (GEO): single sentences under roughly 18 tokens get extracted by LLMs far more readily, with no traditional backlinks required.
  • Challenger brands and individual creators who aggressively adopt GEO can leapfrog established players during this malleable period, but must act now before the power structures solidify.


**Source:** [https://www.youtube.com/watch?v=IwQYVQ3MohE](https://www.youtube.com/watch?v=IwQYVQ3MohE)
**Duration:** 00:21:24

## Sections

- [00:00:00](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=0s) **AI Search Shifts Visibility Landscape** - The speaker warns that AI-driven search is eroding the dominance of top sites, creating a 12-to-18-month window where newcomers can gain threefold visibility, and explains how over-optimization, being #1 on Google, and the "18-token" rule give individuals a strategic edge through generative engine optimization.
- [00:03:36](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=216s) **The 18-Token Extraction Pattern** - The speaker explains that AI models prioritize short, 18-token citations to minimize hallucinations and maximize synthesis efficiency, shaping how marketers should structure content and dominate their AI positioning space.
- [00:08:08](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=488s) **Citation Formatting and AI Visibility** - The speaker explains how informal web citation styles obscure individual experts from LLMs, favoring institutions, and suggests using dedicated, concept-specific claim pages to ensure proper attribution and increase citation frequency.
- [00:12:08](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=728s) **Monetizing High-Quality Data Signals** - The speaker argues that creators with verifiable, expert content can capitalize on the demand from LLM developers for clean, authoritative data, positioning themselves as valuable "signal" sources amid rising synthetic noise.
- [00:16:16](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=976s) **Amplitude Offers Free AI Analytics** - The speaker describes Amplitude's newly launched free AI visibility tool, likening it to Google Analytics' free debut, as a strategy to establish measurement standards, drive widespread adoption, and later monetize the platform.
- [00:19:48](https://www.youtube.com/watch?v=IwQYVQ3MohE&t=1188s) **AI as New Web Lens** - The speaker explains that AI adds an intelligence layer that mediates our interaction with the open web, urging creators to make their expertise visible so they can stand out in this evolving, AI-filtered browsing experience.

## Full Transcript
The open web is dying. You've probably heard that. What you haven't heard is that the top-ranked sites are actually losing visibility while nobodies are getting 3x gains. And there's a 12-to-18-month window now before all of this inverts and the old players go back to winning. Here's why that's happening and what this opportunity means for you.

Even when we talk about AI search killing the web, most of us don't realize the mechanics that make that possible. And I want to get into them so you understand how you can change your own visibility strategy, whether you're an individual or an organization. So I'm going to be talking about things like over-optimization and why that kills you. I'm going to be talking about why being number one on Google might actually not be a good thing. I'm going to be talking about the 18-token magic number. Yes, it's real. It's actually a magic number, and I'm going to explain why. And of course, I'm going to talk about why individuals legitimately have a better shot than many brands right now at AI visibility.

This is all drawn from a Princeton-validated data set and study on what's called generative engine optimization, or AI visibility. Take your pick. It's basically how you get visible in LLMs. Most people slept on this. The strategic implications of what they found really determine who is going to win during this rare malleable period when results are shifting, and who's ultimately going to lose out when the power structures start to solidify. Let's start by talking about the winner-loser dynamic, or what I would call position-bias inversion. Fancy term, but we're going to get into it.
If you are already ranking, let's say in the top three on Google, aggressive GEO optimization can actually kill your AI visibility, because models actively diversify sources to avoid appearing captured by dominant players right now. Princeton found this in their data, but most people missed it. What that means is your LLM, unlike Google, is not optimizing for the first page and wants to have a diverse perspective when it comes back to you with answers. That means if it sees the same players, the top three, it is deliberately going to go below them. That is bad for existing brands. It is good for the rest of us.

So the strategic playbook really splits here, doesn't it? If you're an incumbent with traditional authority, if you're Nike, you need to under-optimize. You want to look at overall fluency, maybe have a citation or two, and let your existing credibility carry most of the water for you. But if you're a challenger with genuine expertise but no domain authority, this is a really rare chance to be extremely aggressive, because you can potentially leapfrog without backlinks; backlinks aren't required for generative engine optimization.

The 12-to-18-month compression happens because most top-ranked content isn't optimized for LLM extraction patterns yet. So there's this asymmetry where lower-ranked sources with proper structure for AI are getting cited at what Princeton found were two to three times higher rates. But once everybody optimizes, that advantage disappears and we're back to authority signals mattering, just measured differently, right? So what this means practically is that if you're a brand everybody knows, your competitor's blog post, structured in a way AI can absorb, might actually outrank you in AI citations.
Even though you dominate and own the top of the traditional search page, that's no guarantee. And that window is the single most obvious strategic opportunity I can tell you about. Now, I say it's obvious because once I explain it, it makes sense. But most people aren't picking up on this, and until they do, it's yours, right? You get to pick the space you want to dominate from an AI-positioning perspective, and you get to start implementing the tactics I'm going to lay out to increase your visibility. They are very specific tactics, and we're going to get into them.

The first one, I promised you an answer to this: the 18-token extraction pattern, and why content structure has changed. If you do a copy-paste audit out of ChatGPT, you find that almost all citations end up being synthesized. They end up being single-sentence extractions that are under 18 tokens. Now, that's not true if it's a deep-research piece and you have a lot more tokens to play with, but for most of the models we work with day-to-day, and frankly for the vast majority of searches, the model is optimizing for synthesis efficiency. Anything longer than a short sentence is going to require summarization, and that introduces potential errors and reduces citation confidence. The models are trained to try to reduce hallucinations, and if they have something that is a clean sentence, an 18-token sentence or so, that they can just deliver, they feel good, because they feel like they found the answer to whatever you're talking about. It's clear, it's quotable, it fits inside their context window, and it works. This is also drawing from the Princeton study. This breaks traditional content strategy, right?
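As a rough illustration of the pattern above, here is a sketch of a check that flags which sentences in a draft fall under the 18-token ceiling. It uses a crude word-and-punctuation count as a stand-in for a real tokenizer (actual counts depend on the model's tokenizer, e.g. OpenAI's tiktoken), so treat it as a ballpark filter, not the Princeton methodology:

```python
import re

def rough_token_count(sentence: str) -> int:
    # Crude proxy for a model tokenizer: count words and punctuation.
    # English prose lands in the same ballpark as real token counts.
    return len(re.findall(r"\w+|[^\w\s]", sentence))

def snackable_sentences(text: str, limit: int = 18) -> list[str]:
    # Split into sentences and keep the short, self-contained ones
    # an LLM could quote verbatim without summarizing.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if 0 < rough_token_count(s) <= limit]

claims = snackable_sentences(
    "GEO does not require backlinks. "
    "Lower-ranked sources with clean structure were cited at two to "
    "three times the rate of unoptimized incumbents in the Princeton data."
)
# Only the first sentence survives the 18-token filter.
```

Running an audit like this over a draft shows immediately whether it contains any quotable, under-18-token claims at all, or only long sentences that would have to be summarized.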
Traditional content strategy is built around these long-form authority pieces where you build arguments across many paragraphs, and what you're really hoping for is that the piece will be rich enough that Google will pick it up. But here, what actually gets extracted and cited is a single confident claim in a clear sentence. It's a complete, self-contained statement that needs zero surrounding context to be useful. It is snack-sized for the LLM. So the implication here is that your 30,000-word definitive guide, or whatever you've written for SEO or for visibility, may well get summarized, while your competitor's 600-word guide, with five golden-nugget sentences they've called out and highlighted, ends up getting quoted verbatim by the AI, because it has a citability yours doesn't: the LLM can cite it directly. That one small piece can invert years of work, especially if you start to build up a little library and go from there, because once the AI figures out it can get stuff from this particular source, it's going to keep coming back. LLMs, like people, can be creatures of habit.

In other words, this means you don't have to write a long-form piece and dedicate lots of effort as a brand to owning nuanced arguments and complexity. You can actually split out your content operations into very clean content that you have optimized for AI and that also works for human readability at the same time. Yes, it is possible. One of the marks of a weak strategy is people who tell you to have hidden pages that AI can see and humans can't. Ultimately, the incentive of the LLMs is similar to the incentive of Google.
They want to find you useful information as a person. If you start to create pages that are not useful to humans at all, you run the risk of running afoul of any kind of search-tool update that OpenAI or Anthropic ships. So if I were you, with this information, I would look at a new kind of content structure that is designed to be human-readable but also to have these snackable, extractable moments that are really easy for LLMs to pick up.

And let's talk a little more about this institutional-shadow issue. There was a GEO-bench personal-entity study that tracked 3,200 experts, and what it found was that we have a real issue with institutional shadows in individual visibility for AI. For example, let's say you're a researcher at Google, and your name is Jane Doe. Jane Doe, PhD, did the work on a Google paper. The problem is that the institution, Google, can overshadow the value of the individual. This isn't an AI limitation per se. It is actually a formatting problem that most experts aren't aware of. When you format a citation as: here's my quote, here's my first name Jane, my last name, and then my title and my org, like Google, all in one line, the attribution is accurate, because the LLM can read and understand the semantic relationship between all those terms. But everyone knows that on the open web we rarely get cited that way. I have never been cited in that formal a fashion: quote, first name, last name, specialty, all in one clean line. People don't do that on the web. And that means most experts end up invisibly contributing to AI knowledge while the institutions capture the credit, because any other citation structure in the study reinforced Google, or the org name, rather than the individual.
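To make the formatting point concrete, here is a minimal sketch of the kind of one-line attribution described above. The Jane-Doe-at-Google example comes from the talk; the quote text and the "Research Scientist" title are made up for illustration:

```python
def attribution_line(quote: str, first: str, last: str, title: str, org: str) -> str:
    # Keep quote, person, and affiliation in one clean line so an LLM
    # can read the semantic relationship between them and credit the
    # individual rather than only the institution.
    return f'"{quote}" ({first} {last}, {title}, {org})'

line = attribution_line(
    "Institutional shadows hide individual experts from LLMs.",
    "Jane", "Doe", "Research Scientist", "Google",
)
```

The exact punctuation matters less than the co-location: name, title, and org sit in the same line as the quote instead of being scattered across a page.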
So there is an opportunity here if you can set up a claim page: a page off your website that talks about one particular concept, and only that concept. So, like yourname.com/concept, right? That gets cited, the study found, four times more often than a multi-topic blog does. And I think that is part of why we have seen such tremendous uptake of recent, very high-profile long-form papers that sit on a dedicated domain. I don't know if you've noticed this, but one of the things on the web in the age of AI is that people who want to sound serious write a special essay and put it on a special domain. Leopold Aschenbrenner's Situational Awareness is one. The AI 2027 domain is another. These sit on their own URL and they get cited in queries even though they're longer. I said longer doesn't always work, but there are some exceptions, right? They sit on their own URL. They're only about this one particular thing. Typically, they will have a cover page that is full of the kind of juicy tidbits under 18 tokens that LLMs love, and then humans can go in and read more.

So, this is not about quality. This is about architecture matching extraction patterns. How do you build an architecture that allows the LLM to understand you're just talking about this one thing? Are you seeing the pattern? The LLM wants clarity. That's what it's looking for, and we need to give it that clarity. But right now, most experts I talk to haven't figured out what they're going to be an expert in from an LLM perspective. They don't have the idea of a claim page, something they're going to talk about, a concept they're going to own. That means if you want to own something, chances are nobody else does yet.
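A claim page could be generated from a template along these lines. This is a sketch of the structure described above (single concept, quotable claim up top, supporting detail below); the URL convention and the helper itself are my own illustration, not a format prescribed by the study:

```python
def claim_page(concept: str, claim: str, evidence: list[str]) -> str:
    # One concept per page: a short, quotable claim at the top for
    # LLM extraction, then supporting detail for human readers.
    slug = concept.lower().replace(" ", "-")
    body = [f"# {concept}", "", claim, "", "## Evidence"]
    body += [f"- {item}" for item in evidence]
    return f"<!-- suggested URL: yourname.com/{slug} -->\n" + "\n".join(body)

page = claim_page(
    "Position-Bias Inversion",
    "Aggressive GEO by top-ranked sites can reduce their AI visibility.",
    ["Princeton-validated GEO dataset"],
)
```

The point of the template is discipline: if a sentence doesn't support this one concept, it belongs on a different claim page.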
If you structure your expertise in such a way that it's a unique answer to a specific and actually asked question, and you have it be human-readable, you have a chance to establish AI authority in the space while everyone else figures this out.

I also want to talk about the noise-floor paradox. The way I've phrased this is: why spam makes you more valuable, which I kind of cringe at, but let's get into it. A study by SparkToro found that approximately half of new pages are AI-generated spam, right? And everyone thinks this makes the web less useful because there's less informational density. But here's what they're missing. As the noise floor rises, as you get more and more of these cheap 500-word AI listicles that don't have coherence, AI is more and more desperate to avoid hallucination penalties. And that makes high-signal content rarer. It makes it more valuable. One of the reasons I do video is that it is hard to imitate video in the same way. You can't get Nate waving his hands in the same way. And that makes me sort of a unique piece, right? That's very intentional on my part. In the same way, think about places where you can have these intentional presence moments on the web. Maybe not through video; maybe through really good writing. But whatever it is, think about how you can be a place for signal in a world where LLMs are searching through noise. I think this is why Reuters licensed their corpus to Anthropic; I think the deal was around $5 million annually. Frontier labs need sources that they can cite with real confidence. And the more synthetic garbage comes onto the web, the more labs will pay for clean signal, and really, the more LLMs will be trained to find it.
So the strategic implication here is that if you have genuine expertise with verifiable data, you have a window where you can actually establish value on the web. And if your corpus of data is rich enough, which not everybody's is, you may even be asked to monetize it as training data. I'm not going to say that's for you; it's not for everybody. But it's a possibility at this time, because model makers are so hungry for very high-quality data. And regardless of whether you end up having Dario Amodei calling you on the phone offering you $5 million (most of us don't; I certainly don't), you have the chance to be the signal in the noise on the web. And that matters, because it allows an LLM to bring you into the chat in a way that's high-authority.

Next, I want to talk about this idea of citation churn and why static content is such an issue on the AI web. If you're doing a GEO strategy, a generative-engine strategy, you tend to get cited initially, in week one, and then you vanish by week three or week four, because models will re-rank based on competitor updates and on freshness. So your evergreen content does rot, and competitors who do micro-updates very quickly maintain good visibility. Changing something on your page can help signal to an AI that there's life here. And I don't want you to make this a situation where you are trying to game the system. That is not the intent. If you are putting fresh content out there and it's meaningful and it's snack-sized, it has some 18-token moments in it, and it's readable by humans, you're going to be fine. But this does invert a lot of the content investment thesis.
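The micro-update workflow above could start with something as simple as a staleness check. This is a sketch assuming you track a last-updated date per page; the three-week default mirrors the churn window mentioned in the talk, and the URLs are placeholders:

```python
from datetime import date, timedelta

def stale_pages(last_updated: dict[str, date], today: date,
                max_age_weeks: int = 3) -> list[str]:
    # Flag pages older than the observed churn window (cited in week
    # one, often gone by week three or four) for a micro-update.
    cutoff = today - timedelta(weeks=max_age_weeks)
    return [url for url, updated in last_updated.items() if updated < cutoff]

flagged = stale_pages(
    {
        "yourname.com/position-bias-inversion": date(2025, 1, 1),
        "yourname.com/18-token-pattern": date(2025, 1, 20),
    },
    today=date(2025, 1, 24),
)
# The January 1 page falls outside the three-week window and gets flagged.
```

A scheduled job over a list like this is enough to keep a small claim-page library from quietly dropping out of rotation.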
The content investment thesis sort of says that if you publish really comprehensive, good pieces, you can generate passive traffic from search for years. It is not as clear that AI works that way. In fact, in the AI citation economy, content can require ongoing maintenance, or it effectively drops out of the model's mind. That means your org structure might need to look different if you're a brand: you may need dedicated resources for micro-updates rather than long pieces.

I also want to talk about the domain-mismatch penalty. LLMs were trained to cross-check domain alignment as a way of looking for hallucinations and trying to avoid them. What that means is that the traditional build-authority-through-comprehensive-coverage approach can be actively toxic, because the content sprawl that worked for SEO (write about adjacent topics, capture long-tail keywords; I've heard this for 20 years) now flags you as a non-expert, because you're not as focused. Do you see this overarching theme of focus I keep coming back to? That's your takeaway. If the model sees you citing outside your core domain, it may assume you're an aggregator. It may assume you're less authoritative, and that breadth can actually harm your AI citations. So the implication for you goes back to focus. You need to have a content focus that is very specific, and you need to be aggressive about the domain you're in, the sources you talk about in that domain, and just obsess over that. This is similar, in fact, to the TikTok strategy, where you just talk about one thing all the time. On my TikTok channel, it's all AI.
I just talk about AI, and that's what works, because the algorithm knows what to expect and because the people know what to expect. You can't really separate this from the people who are actually consuming the content.

Now, why am I making this video this week? The window is getting compressed. Amplitude's launch of free AI-visibility tooling is blowing up. You can use it as an individual. You can use it as a brand. It's completely free. I don't know how long it's going to be free, but it's free for now. And most people think, you know, that's another analytics product. What they're missing is that this is the first time a major platform has given away measurement infrastructure for free in an effort to define the terms of the debate. What they're signaling is: GEO is going mainstream, everybody needs to be aware of it, we're going to make it free, and you are going to be able to get not only your own score, but, by the way, you can look up any brand on there for free right now. I can look up Nike for free. So if you want to look at any brand in the world, or you want to look at your buddy who you think is doing very well, fine, you can do it, and Amplitude will write you a free report. It's a really cool nugget. And the pattern is very similar to when Google Analytics launched, right? They made it free, and once they did, defining the measurement standard became the property of that brand. Google Analytics effectively defined the measurement standard and became the Kleenex of measurement on the web. Adoption starts to accelerate, and you find ways to monetize down the line. That's the strategy Amplitude is using. They're using the Google Analytics strategy here.
The strategic implication is that the playbook I'm sharing is not going to stay secret for long once there are easy ways to measure it. And so I do worry that this 12-to-18-month window gets shorter and shorter as more of these tools come online.

The last thing I want to talk about is the under-optimization strategy. Why is less more? This is the most counterintuitive finding in the study. For top-ranked sites, optimizing only for a little AI fluency, plus maybe one strategic citation on the page, produced an average of 20 to 22 percent net gains, while aggressive multi-technique optimization actually triggered the AI to detect that the brand was trying too hard, and to reduce its visibility. This runs counter to every SEO instinct in our bodies, right? And I want to call that out because it reminds me that intelligence is now filtering our web experience. The LLM figured out that you were trying to game the system. This is why, besides focus, the other thing I have been emphasizing to you is: do not try to game the system; try to convey real authority. If you have an established brand, that means resisting the urge to over-optimize. Trust your existing credibility. Make it a little more legible with light touches, but don't push it in the AI's face. If you're an individual or a competitive small brand, that means you can be aggressive, because you're essentially shouting from a sea of small players to be heard. And the AI may be more likely to pick up your signal if it's focused, if it's high-authority, if it's reputable, if it's put together, if it's clear, if it's cited. I don't mean cited by backlinks; I mean if you have citations around your area of expertise that are really useful.
So where do we wrap up? The open web is dying. I don't want to pretend it's not. The fact that we have more and more Google searches year over year, which is true, does not translate into more and more click-throughs. As anyone in SEO will tell you, more and more searches every year are ending on the search results page, with no click-through, because Google is so good at giving answers. I want to challenge you: what we are seeing right now is the advent of a new kind of web. We shouldn't think about it the way most news media portrays it, as search going down and then AI going up. That's not correct. Instead, we should think about it as both rising, with AI opening up a fundamentally new relationship with the web, where the intelligence layer disintermediates, or comes between, the open web and the individual. And so the art of this is thinking of the AI as the pair of glasses that you put on to view the open web. All you're trying to do is help that pair of glasses focus on real signal that's useful. You can trust it to not want trash. I know that sounds funny, but the labs are actively working to make models better at that. You can trust it to be hungry for signal. The tips I'm giving you should help you make your expertise legible to the AI, so that when this whole new web experience arrives, where you are essentially experiencing the web through an AI, you get noticed. And that is not only my best set of tips from the Princeton study, but some real hints on how I am intentionally thinking about my own presence on the web as we move into this new world of AI. I hope this has been useful for you. It's not the end of the world that the web is dying.
And really, I don't think the web is dying, per se. I think it's just evolving. The web has always evolved. This is the next part of the story.