THE CANON

What AI Search Actually Does
to Your Brand

Steve means well.

He has been your SEO guy for three years. He brings bagels to the quarterly. He has a slide deck that shows your brand ranking number four in ChatGPT for your top category query. He has circled it in orange. He is proud of it.

Steve is not wrong that the number exists. He is wrong about what it means.

AI search is not a ranking system. It does not work like Google. It does not assign positions that hold. It selects, freshly, probabilistically, differently, every single time someone asks a question. The orange circle on Steve's slide is a screenshot of one moment in a system that had likely already changed by the time he hit print.

This isn't instability at the edges. This is all of search. The whole thing re-stacks. Constantly. Rarely the same way twice.

For about thirty years, the game was this: whoever controlled the top of search results controlled the sale. It did not matter if you were the best answer. It mattered that you were the first answer. SEO was triage for a broken system, one we helped break. Keywords stuffed into pages no one read. Backlinks from sites that existed only to sell backlinks. Taglines like "Built for the road ahead." Every car company. Every software brand. Every agency that needed to sound like they meant something. Clever noise. Designed to rank, not to mean.

The consumer was the last consideration. We cared about the sale.

That is over now. AI search is a fundamentally different referee. It does not rank what you have optimized. It selects what it trusts. You cannot keyword-stuff your way into an AI citation. A tagline written to sound good does not resolve when a model tries to find the meaning. "Built for the road ahead" is entropy. The model tries to extract meaning. What road? Ahead of what? It finds a hundred brands saying the same thing. The probability distributes across all of them. None of them get the citation. Or the model gets confused and gives you someone else's answer. That is the wobble. And wobble, at scale, can become hallucination. One and done.

The system is now fast enough to be the consumer's advocate in a way no search engine ever was. It limits hiding places. Stage30 is not the next trick. It is the honest response.

What follows are case studies of a probabilistic system. The research is real, sourced, and linked. Some of it is still forming in real time. What we know is directional. Where we do not know yet, we say so. Every stat has a source. Every source links out. Use it.

Selection, Not Ranking

AI does not maintain a ranked list. It evaluates every query fresh, from scratch, probabilistically. What looks like a position is a snapshot of a single output from a system that has already moved on. The brands that consistently appear are not holding positions. They are earning selection, over and over, because they make the model's job easiest.
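
Here is the difference as a toy sketch, in Python. The brand weights and the sampling scheme are invented for illustration; no platform publishes its actual mechanism. The point is structural: when output is sampled rather than looked up, a high-weight brand appears often, but the exact same list almost never repeats.

```python
import random
from collections import Counter

# Toy model of selection vs. ranking. The weights are hypothetical;
# the point is that each run samples fresh instead of reading a stored list.
BRAND_WEIGHTS = {
    "BrandA": 0.30, "BrandB": 0.25, "BrandC": 0.15,
    "BrandD": 0.12, "BrandE": 0.10, "BrandF": 0.08,
}

def one_run(k=4):
    """Sample an ordered list of k brands, weighted, without replacement."""
    pool = dict(BRAND_WEIGHTS)
    picks = []
    for _ in range(k):
        brands, weights = zip(*pool.items())
        choice = random.choices(brands, weights=weights)[0]
        picks.append(choice)
        del pool[choice]
    return tuple(picks)

runs = [one_run() for _ in range(1000)]
print(f"distinct lists across 1,000 runs: {len(Counter(runs))}")
print(f"BrandA present in {100 * sum('BrandA' in r for r in runs) / 1000:.0f}% of runs")
```

A ranking system would print one list, one thousand times. A selection system prints hundreds of different lists while the strongest brand still shows up in most of them. That is the behavior the data below keeps finding.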

Node 1.1

Does ranking higher in AI search mean my brand gets recommended more?

The situation

Steve's slide shows your brand at number four. The client nods. Someone asks if you can get to number three. A budget gets approved. A plan gets made.

The same thing happened in 2008 with Google rankings. And it mostly worked then, because Google was slow enough to stay gamed long enough to matter.

The truth

There is no number four. There is no number three. AI does not hold positions between queries. It re-evaluates from scratch every single time someone asks. What Steve screenshotted was one output of a system that had probably already changed by the time he hit print.

This is not instability at the edges. This is all of search. The whole thing re-stacks. Constantly. And the old playbook, be the result, hold the result, defend the result, has almost no application here.

The data
Less than 1-in-100 chance of the same brand list appearing twice.

SparkToro and Gumshoe.ai ran the largest public test of AI recommendation consistency ever conducted. 2,961 prompts. 600 real volunteers. Less than 1-in-100 chance of the same brand list appearing twice. Less than 1-in-1,000 chance of any brand appearing in the same position twice.

What it means

The report Steve brought this morning is a moment. A snapshot of a probabilistic system that has likely already moved. Most GEO companies respond to this by trying harder to be a result. Stage30's answer is different: don't be a result. Be a source. Sources earn selection consistently, across runs, across platforms, across re-evaluations, because the model trusts them.

Stage30

We don't optimize position. We engineer selection probability. A brand that shows up in 70% of relevant AI responses with no stable position beats a brand that ranked number one in Steve's orange circle every single time.

Full source: "Less than 1 in 100 chance of the same brand list appearing twice. Less than 1 in 1,000 chance of the same brand appearing in the same position twice." SparkToro & Gumshoe.ai, January 2026. AI Brand Recommendation Inconsistency Study. 2,961 prompts, 600 volunteers.
Read the original research at GetPassionFruit
Node 1.2

If my brand is a category leader, am I protected from AI search volatility?

The situation

You are the category leader. You have been there for years. The instinct is: we are safe. Even if AI is unpredictable, a brand with our history and spend will always be in the answer.

That is what market leaders said about Google too. Right up until a well-structured smaller brand started outranking them for their own product terms.

The truth

Size earns you a better probability. Not a guaranteed seat. Even the most dominant brands in a category go missing from AI responses, regularly, with no warning, for no apparent reason.

The data
Top brands appeared in 55 to 77 percent of runs. Still absent 23 to 45 percent of the time, even at the top.

In SparkToro and Gumshoe.ai's study, top brands appeared more consistently than anyone else in their categories, but still only 55 to 77 percent of the time. That means category leaders are absent from AI answers in 23 to 45 percent of queries. Nothing changed on their end. The system just moved on without them.

What it means

If your brand touches ten thousand AI-mediated buyer moments every month, a 23 percent absence rate is 2,300 moments where the model picked someone else. Not because your product got worse. Because your content gave the model something harder to work with than your competitor's did.
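
The arithmetic, made explicit. The monthly volume is hypothetical; the presence band is the 55 to 77 percent range from the study.

```python
# Worked version of the absence math. The monthly volume is hypothetical;
# the presence band (55-77%) comes from the SparkToro / Gumshoe.ai data.
monthly_ai_moments = 10_000
for presence_rate in (0.77, 0.55):
    missed = monthly_ai_moments * (1 - presence_rate)
    print(f"presence {presence_rate:.0%}: ~{missed:,.0f} moments/month went to someone else")
```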

The old game rewarded incumbency. The new one rewards clarity. A challenger brand with cleaner, more structured content will take those moments from you and you will not see it happening until it already happened.

Stage30

The model does not remember your history. It selects whoever makes its job easiest right now. That is a solvable problem, if you stop trying to defend a position and start building the architecture that earns selection across every evaluation.

Full source: "Top brands appeared in 55 to 77 percent of runs. Still absent 23 to 45 percent of the time with no change in content or authority." SparkToro & Gumshoe.ai, January 2026. AI Brand Recommendation Inconsistency Study.
Read the original research at GetPassionFruit
Node 1.3

How much does AI citation performance vary by platform?

The situation

You are tracking AI visibility on ChatGPT. The numbers look decent. Someone checks Perplexity. Different story. Someone checks Gemini. Different again. The room gets quiet. Someone suggests maybe the platforms just need time to normalize.

They will not normalize. They are different systems with different source logic, different trust hierarchies, and different answers to the same question about your brand.

The truth

Winning on one AI platform tells you almost nothing about your performance on any other. And your buyers are spread across all of them. The old SEO game had one referee: Google. AI search has multiple referees with different rule books.

The data
615 times more variation in citation rates for the same brand across different AI platforms.

Superlines' 2026 AI Search Statistics report found 615 times more variation in citation rates for the same brand across different AI platforms than within any single platform. Same brand. Different platforms. Completely different story.

What it means

That number on your ChatGPT dashboard is not your AI visibility score. It is your ChatGPT score, one slice of a fragmented system with no consistent logic across platforms. Your buyers are spread across all of them. Your measurement probably is not.

Even if you figure out the trick for one platform, you have solved maybe 20 percent of the problem. The old hiding places do not exist here. There is no single algorithm to exploit.

Stage30

The only signal that travels reliably across all platforms is structure. Not platform-specific tricks. Clean, well-organized content that any AI system can immediately extract and trust. That is what we build. That is the only answer that works across the whole board.

Full source: "615x variance in citation rates for the same brand across different AI platforms." Superlines, March 2026. AI Search Statistics 2026: 60+ Data Points.
Read the original research at GetPassionFruit
Node 1.4

If I optimize for ChatGPT, does that carry over to Perplexity?

The situation

You have put serious work into ChatGPT optimization. Structured content, clean pages, good schema. Someone asks whether that work transfers to Perplexity. You assume it mostly does.

The truth

It mostly does not. The platforms do not share source logic. They do not pull from the same pool. The idea that you can pick one platform and call it done is a remnant of the single-algorithm era. There was one Google. That era produced an entire industry of people who learned to speak one language. AI search requires fluency across several, and the only common dialect is structure.

The data
Only 11 percent of domains cited by ChatGPT are also cited by Perplexity.

Cross-platform analysis from Ahrefs and Profound found only 11 percent of domains cited by ChatGPT are also cited by Perplexity. 89 percent of cited domains are platform-exclusive. For every 10 sources ChatGPT trusts enough to cite, only one shows up in Perplexity's answers.
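
If you export the cited domains from your own tracking, the overlap is one line of set math. The domain lists here are placeholders; substitute your own.

```python
# Cross-platform overlap check. The domain sets are placeholders; swap in
# the cited-domain exports from your own AI visibility tracking.
chatgpt_cited = {"example.com", "docs.example.com", "wikipedia.org", "g2.com"}
perplexity_cited = {"example.com", "reddit.com", "trustpilot.com"}

shared = chatgpt_cited & perplexity_cited
print(f"shared: {sorted(shared)}")
print(f"{len(shared) / len(chatgpt_cited):.0%} of ChatGPT-cited domains also appear on Perplexity")
```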

What it means

You can have a solid ChatGPT strategy and near-zero Perplexity presence and not know it, because you are only looking at one dashboard. Your buyers are not all using the same platform. Your visibility strategy probably should not pretend they are.

Stage30

We build for the underlying architecture all platforms evaluate the same way: clarity, organization, extractability. Not for any single platform's quirks. Because quirks change and architecture holds.

Full source: "Only 11% of domains are cited by both ChatGPT and Perplexity." Ahrefs and Profound cross-platform dataset analysis, August 2024 to June 2025.
Read the original research at GetPassionFruit
There is no rank to hold.
There is only selection
to engineer.

Being Cited Is Not the Same as Being Chosen

There is a difference between your content being used as source material and your brand being named in the answer. Most dashboards do not track that gap. Most brands do not know it exists. The gap is where the ghost citation problem lives, and it is where pipeline stalls while metrics look healthy.

Node 2.1

My AI tracking dashboard shows citations are up. Why is pipeline flat?

The situation

The AI dashboard looks great. Citations are climbing. You share it in the all-hands. Leadership is pleased. Someone asks why pipeline is not moving. Nobody makes the connection, because the two metrics are in different reports owned by different teams.

This is what the old game produced at scale too: impressions without outcomes, reach without resonance, visibility without trust. We optimized for numbers that felt good and ignored whether the person on the other end was actually helped.

The truth

Your content can be cited by AI, pulled as a source, used to build the answer, without your brand ever appearing in what the user reads. The model used your work. Your brand got nothing.

The data
100 citations in 25 days. Zero brand mentions.

Seer Interactive analyzed 541,213 LLM responses across 20 brands, 6 AI platforms, and 5 stages of the buyer journey. In one documented case, a client's blog post was cited over 100 times in 25 days with zero brand mentions across every one of those responses. The content fed the model. The brand fed nobody. Researchers call this a ghost citation.

What it means

Ghost citations are the AI era's version of the impression. A number that looks like performance and measures something else entirely. A brand optimizing for citation count while its recommendation rate is flat is running the same play that drove CMOs crazy for twenty years: vanity metrics dressed as strategy.

The difference now is the system is honest about it. The model does not pretend citations equal outcomes. The gap is right there in the data, if you know where to look.

Stage30

We build content where the brand is part of the answer, not just the source list. Not as a trick. As a structural commitment to being the entity the model is actually describing, not just the document it pulled from.

Full source: "In one case, a client's blog post was cited over 100 times in 25 days with zero brand mentions. Every citation was a ghost citation." Seer Interactive, March 2026. "LLM Ghost Citations: Why Your Content Is Working and Your Brand Isn't." 541,213 LLM responses analyzed.
Read the original research at GetPassionFruit
Node 2.2

What is the difference between an AI citation, a mention, and a recommendation?

The situation

The team is tracking AI citations as a single number. It goes up, things are working. It goes down, fix something. Nobody has paused to ask whether a citation, a mention, and a recommendation are actually the same thing with the same value. They are not. And conflating them is how you spend a year optimizing the wrong outcome.

The data
Three distinct outcomes. Three different mechanisms. Most tools only measure one.

Research by Seer Interactive and Omniscient Digital identified three distinct AI outcomes. Citations: your URL appearing as a source, driven by content structure and retrievability. Mentions: your brand named in the response, driven by how well the AI has encoded your brand in memory. Recommendations: the AI actively suggesting your brand as the answer, driven almost entirely by how consistently your brand appears in the AI's training history.
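
One way to make the distinction operational in your own response logs, sketched below. The heuristics are deliberately naive and are not Seer's methodology; they exist to show that the three outcomes are three different checks, not one number.

```python
# Naive classifier for the three outcomes in a logged AI response.
# Illustrative heuristics only, not Seer Interactive's methodology.
def classify_outcome(response_text: str, cited_urls: list[str],
                     brand: str, brand_domain: str) -> dict:
    text = response_text.lower()
    cited = any(brand_domain in url for url in cited_urls)   # URL used as source
    mentioned = brand.lower() in text                        # brand named
    recommended = mentioned and any(                         # brand suggested
        cue in text for cue in ("we recommend", "best option", "top pick"))
    return {"citation": cited, "mention": mentioned,
            "recommendation": recommended,
            "ghost_citation": cited and not mentioned}

print(classify_outcome("For small teams, we recommend Acme.",
                       ["https://acme.com/guide"], "Acme", "acme.com"))
```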

What it means

The distinction matters because the fix is different for each. Citations are a structural problem solved with better content architecture. Recommendations are a brand memory problem solved with consistent presence, earned media, and real-world brand signals. If your citations are up and recommendations are flat, you do not have a content problem. You have a brand problem. No amount of schema markup is going to fix a brand problem.

The old game blurred these lines on purpose. More impressions meant more budget. The new system makes the blur expensive.

Stage30

Ghost citations are a warning sign, not a win. We build for all three layers because you cannot reach recommendation without the others working first, and each layer requires a different structural input.

Full source: "Citations, mentions, and recommendations are three distinct outcomes driven by different mechanisms." Seer Interactive, March 2026 / Omniscient Digital / Peec AI, January 2026. 23,387 branded citations analyzed.
Read the original research at GetPassionFruit
Node 2.3

Where do AI citations about my brand actually come from?

The situation

The content team is heads-down on the website. Better pages, fresher posts, cleaner copy. The assumption, reasonable on its face, is that a better website means better AI presence.

The old game taught us to own the narrative. Write the story, publish it, optimize it, rank it. The problem with that model was always that the consumer knew it. They went to Reddit. They asked friends. They looked for sources that had no reason to lie. The AI learned from watching consumers do exactly that.

The data
48 percent earned media. 30 percent commercial content. 23 percent owned pages.

Omniscient Digital analyzed 23,387 citations across branded AI queries. Of those citations, 48 percent came from earned media: third-party coverage, reviews, and press. 30 percent came from commercial brand content. 23 percent came from owned brand pages. Nearly half of what AI says about your brand comes from sources you did not write.

What it means

Your website is one quarter of your AI presence. The other three quarters live in sources you do not own and may not be managing. Optimizing your homepage while ignoring that ecosystem is optimizing the part of the problem you can see while the larger part quietly shapes your brand's reputation in the one system that increasingly matters.

Stage30

We map and structure the whole signal ecosystem. Because that is where the citations actually live, and because the only path to a consistent AI presence is making sure everything the model reads about you is saying the same true thing.

Full source: "48% earned media, 30% commercial brand content, 23% owned brand content." Omniscient Digital / Peec AI, January 2026. 23,387 branded AI citations analyzed.
Read the original research at GetPassionFruit
Node 2.4

Is my website or third-party coverage more important for AI visibility?

The truth

What other people say about you is far more influential on AI responses than what you say about yourself. The AI was trained on the actual internet, including all the places consumers went to escape the pushed story. It learned what independent evidence looks like. And it trusts it accordingly.

The data
Brands are 6.5 times more likely to be cited from third-party pages than from their own domains.

AirOps found that brands are 6.5 times more likely to be cited by AI from third-party pages than from their own domains. The AI treats your homepage the way a skeptical buyer treats a company brochure: useful, but probably not the whole truth. Third-party sources are the references it actually trusts because they have no reason to spin things in your favor.

What it means

Most of your AI authority lives off your domain, in sources you do not control. Ignoring that layer while perfecting your owned content is leaving the majority of your AI authority unmanaged.

Stage30

We treat the owned pages and the ecosystem around them as one connected architecture. Because in AI search, they already are. Your authority does not live where you think it does.

Full source: "Brands are 6.5x more likely to be cited from third-party pages than from owned domains." AirOps, March 2026. "The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations."
Read the original research at GetPassionFruit
Node 2.5

How much does earned media actually affect AI citation rates?

The situation

A press placement lands. The PR team puts it in the clips folder. The marketing team shares it on LinkedIn. Everyone moves on. Nobody asks what happened to AI citations that month.

The data
Earned media distribution increases AI citations by up to 325 percent.

Stacker's research found that earned media distribution increases AI citations by up to 325 percent. Every third-party source that names your brand and points back to your work is adding to the pool the AI draws from. The old game treated PR as awareness, soft metrics, long attribution windows, hard to prove. The new system makes it measurable.

What it means

A 325 percent citation lift from earned media is not a PR metric. It is a structural outcome. The press clip that ran last month is either in the model's citation pool or it is not. Structure it right and it is.

Stage30

We treat earned media as a structural decision: where you appear, how those placements are written, what they connect back to. Because in AI search, every earned mention is a signal the model is reading and weighting.

Full source: "Earned media distribution increases AI citations by up to 325%." Stacker, December 2025. Earned Media Distribution and AI Citation Lift.
Read the original research at GetPassionFruit

Most Content Never Makes It Into the Answer

There is a gap between "AI can find my content" and "AI uses my content" that most brands do not know exists. Being retrievable gets you into consideration. What happens next is a different evaluation entirely. Most content fails it. The brands that consistently appear in AI answers are not the ones with the most content. They are the ones with the most extractable content.

Node 3.1

If ChatGPT retrieves my content, does that mean users will see my brand?

The situation

The technical audit says the site is clean. Indexed. Crawlable. The AI can find the content. The team assumes that means the AI is using the content.

This was always true of consumers too. They found the page. They did not read it. They scanned for five seconds and left. We optimized for the arrival and never fixed the experience.

The truth

AI systems retrieve far more than they ever surface. Most of what they pull in never reaches the user. Being findable gets you into the room. What happens next is a different evaluation, and most content does not pass it.

The data
85 percent of content ChatGPT retrieved was never shown to the user.

AirOps analyzed 548,534 pages retrieved by ChatGPT across 15,000 prompts. Only 15 percent appeared in the final response shown to users. 85 percent of what ChatGPT fetched was retrieved, processed, and set aside. Eight and a half out of every ten pages, gone. Not because they were wrong. Because something else was easier to use.
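
The gap is measurable if your tooling exposes both sides. A minimal tally, with an invented log format:

```python
# Retrieved-vs-used tally. The log format is invented; adapt it to whatever
# your AI visibility tool actually exports.
retrieval_log = [
    {"url": "example.com/guide",  "in_final_answer": True},
    {"url": "example.com/blog-1", "in_final_answer": False},
    {"url": "example.com/blog-2", "in_final_answer": False},
    {"url": "example.com/blog-3", "in_final_answer": False},
]

used = sum(r["in_final_answer"] for r in retrieval_log)
print(f"retrieved: {len(retrieval_log)}, used: {used}, "
      f"usage rate: {used / len(retrieval_log):.0%}")
```

The AirOps average was 15 percent. Your own number tells you which side of that floor you are on.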

What it means

There is a layer between "indexed" and "visible" that most brands do not know exists. You can pass every technical check, every crawlability audit, and still have near-zero presence in the answers users actually read. The old game optimized for arrival. The new one optimizes for use. Those require completely different approaches.

Stage30

Getting retrieved is not the goal. Getting used is. And the difference is entirely structural: how immediately and clearly your content gives the AI the answer it needs, without making it read the whole thing to find it.

Full source: "AirOps analyzed 548,534 pages retrieved by ChatGPT across 15,000 prompts. Only 15% of those pages appeared in the final response." AirOps, March 2026. "The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations."
Read the original research at GetPassionFruit
Node 3.2

Why doesn't standard keyword tracking capture AI search performance?

The truth

Before AI responds to a question, it often asks itself several more. Follow-up searches, invisible to the user and to most tracking tools, that shape what ends up in the final answer. Your keyword list sees the surface. Everything underneath it is invisible.

Consumers were always asking questions brands were not answering. They went to Reddit because Reddit answered them. The AI has the same instinct: it goes wherever the real answer lives.

The data
89.6 percent of ChatGPT responses involve invisible follow-up searches before answering.

AirOps found that 89.6 percent of ChatGPT responses involve fan-out searches, follow-up queries the model runs automatically in the background before producing its answer. Standard keyword tracking tools miss all of them. Nine in ten responses. The AI is doing invisible research before it answers.
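
What fan-out plausibly looks like, sketched by hand. The real sub-queries are invisible and proprietary; these are stand-ins to show the shape of the problem:

```python
# Hand-built sketch of a fan-out question tree. The actual sub-queries
# ChatGPT runs are invisible; these are plausible stand-ins.
fan_out = {
    "best project management tool for small teams": [
        "project management tool pricing comparison",
        "project management tool free plan limits",
        "easiest project management tool to onboard",
        "project management tool Slack integration",
    ],
}

for visible_query, hidden_queries in fan_out.items():
    print(f"user typed: {visible_query}")
    for q in hidden_queries:
        print(f"  model also searched: {q}")
```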

What it means

The brands that consistently show up in AI responses are not just answering what people typed. They are answering the hidden questions the AI ran before it responded. If your content only covers the visible query, you are addressing about 10 percent of the actual evaluation happening behind the answer.

Stage30

We map the full question tree: what someone asks, what they actually mean underneath it, what they are afraid of. Because that is the evaluation the AI is running, just faster and invisibly.

Full source: "89.6% of ChatGPT responses involve fan-out searches, follow-up queries the model runs invisibly before answering." AirOps, March 2026.
Read the original research at GetPassionFruit
Node 3.3

Does adding structured data and FAQ markup actually increase AI citations?

The truth

Structured content is one of the most direct inputs to AI citation rates. Not because it is clever. Because it removes work the AI would otherwise have to do. The old game had us writing for robots, stuffing keywords into pages humans did not enjoy reading. The new game asks us to write for understanding, which, it turns out, humans also prefer.

The data
Sites implementing structured data and FAQ blocks saw a 44 percent increase in AI citation rates.

BrightEdge research from 2025 to 2026 found that sites implementing structured data and FAQ blocks saw a 44 percent increase in AI citation rates. Two pieces of content covering the same topic with the same expertise. One organized so the AI finds the answer immediately. One requiring the AI to read through to find it. The first gets cited 44 percent more often.
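
A minimal FAQ markup sketch, built in Python for consistency with the other examples here. The question and answer are placeholders; the structure is standard schema.org FAQPage.

```python
import json

# Minimal schema.org FAQPage markup. The Q&A text is a placeholder; embed
# the printed JSON in a <script type="application/ld+json"> tag on the page.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How long does onboarding take?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Onboarding takes two weeks: one for data migration, "
                    "one for team training.",
        },
    }],
}
print(json.dumps(faq_schema, indent=2))
```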

What it means

Over time, across a whole content library, that gap becomes the difference between being the answer and being the research nobody shows anyone.

Stage30

Structure is not decoration. It is the admission requirement. Stage30 is the system for building it consistently, across every surface, without having to re-learn it for every new piece.

Full source: "Sites implementing structured data and FAQ blocks saw a 44% increase in AI citation rates." BrightEdge, 2025 to 2026.
Read the original research at GetPassionFruit
Node 3.4

Why is my content being retrieved but not used in AI responses?

The situation

The content is ranking. Getting retrieved. The team is confident. But AI citations are flat and nobody understands why. Something is happening between found and used.

The truth

Most content was written for humans who scan. They would find the point eventually, in the third paragraph or the pull quote or the bolded sentence mid-page. The AI does not scan. It evaluates. And if the answer is not clearly, immediately present, it moves to the next source. The old format does not survive the new evaluation.

The data
68.7 percent of pages retrieved by ChatGPT lacked the structural signals needed to be used.

AirOps' 2026 State of AI Search found that 68.7 percent of pages retrieved by ChatGPT lacked the structural signals needed to actually be used in responses: clear headings, front-loaded answers, definitive statements. More than two-thirds of everything AI retrieves gets set aside at the structure check.
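
A rough pre-publish check for those three signals. The patterns and thresholds are invented heuristics, not AirOps' criteria; the point is that each signal is checkable before you ship:

```python
import re

# Toy pre-publish check for the three structural signals named above.
# Patterns and thresholds are invented heuristics, not AirOps' criteria.
def structural_signals(page_text: str) -> dict:
    opening = page_text[:200].lower()
    return {
        "clear_headings": bool(re.search(r"^#{1,3} ", page_text, re.M)),
        "front_loaded_answer": any(w in opening for w in (" is ", " takes ", " costs ")),
        "definitive_statement": not any(
            hedge in opening for hedge in ("it depends", "arguably", "some say")),
    }

print(structural_signals("## Pricing\nThe Pro plan is $49 per seat per month."))
```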

What it means

Your content is probably in that majority. Most content is. It ranks. It gets retrieved. And then it quietly fails a structural test that nobody told you existed. The gap between retrievable and used is where most brands are losing citations they should be earning.

Stage30

We close the gap between retrieved and used. Every piece leads with the answer. The AI should not have to read the whole thing to find what it needs, and with Stage30 architecture, it does not.

Full source: "68.7% of pages retrieved by ChatGPT lacked the structural signals needed for content to be used in responses." AirOps, 2026 State of AI Search.
Read the original research at GetPassionFruit
Node 3.5

How much does content structure actually matter for AI search visibility?

The truth

Structure is a primary input to AI visibility, not a footnote. This is arguably the most subversive finding in all the AI search research because it means a smaller brand that organizes its content well can achieve AI visibility that rivals a larger brand that does not. The old game rewarded scale: more budget, more content, more backlinks. The new game rewards clarity. And clarity is not a budget problem. It is a discipline problem.

The data
Structured content increases AI visibility by up to 40 percent.

Princeton and Georgia Tech researchers studied this directly, publishing their findings at SIGKDD 2024. Structured content increases AI visibility by up to 40 percent. Not from more content. Not from more authority. From how you organize what you already have.

Stage30

Princeton and Georgia Tech mapped the mechanism. We build the architecture. Consistently, across every piece, for every client, because the 40 percent lift is available to anyone willing to actually do it.

Full source: "Structured content increases AI visibility by up to 40%." Aggarwal et al., 2024. Princeton University / Georgia Tech. "Generative Engine Optimization." Published SIGKDD 2024.
Read the original research at GetPassionFruit
Getting retrieved
is not the goal.
Getting used is.

Nothing Stays Put

AI results do not hold. They churn, reset, and re-evaluate constantly. The brands that maintain consistent AI visibility are not holding positions. They are maintaining the structural architecture that keeps making selection likely across every re-evaluation. Anything else is a tactic that wins one run and loses the system.

Node 4.1

How stable are AI Overview results from one week to the next?

The situation

A strong AI Overview citation lands. The team celebrates. Someone is now assigned to monitor it. The strategy is hold the position. The old game had a version of this that mostly worked. You earned a top Google ranking, you maintained it, it held for months. The reflexes trained over twenty years are now being applied to a system those reflexes do not fit.

The data
AI Overview content changes 70 percent of the time when the same query is re-run.

Ahrefs tested this directly, running identical queries and comparing results. AI Overview content changes 70 percent of the time when the same query is re-run. Seven in ten. Same question. Different answer.

What it means

The morning report showing your brand in a strong position is a moment. A snapshot of a system that has probably already moved. Tactics that earn you a citation today have roughly even odds of still working next month. You can win the run and lose the system.

Most GEO companies respond to this by trying harder to be a result. Better optimization, more coverage, faster iteration. Stage30's answer is different: stop trying to be a result. Become a source.

Stage30

We build the architecture that makes selection likely across every run. Not the tactic that wins one.

Full source: "AI Overview content changes 70% of the time for the same query." Ahrefs, November to December 2025.
Read the original research at GetPassionFruit
Node 4.2

How long does an AI citation typically last once a brand earns it?

The situation

The organic SEO logic is being applied to AI: build authority, earn citations, let them compound. The playbook worked for twenty years. The assumption is it transfers.

The truth

AI citations do not stack the way organic rankings do. The citation pool reshuffles constantly. The compounding asset model does not hold in a system with this level of churn. The organic SEO analogy breaks down. Brands that maintain consistent AI visibility are not holding positions. They are continuously earning selection in a system that re-evaluates constantly.

The data
40 to 60 percent of cited domains change month over month.

Profound tracked 680 million AI citations over nine months. Between 40 and 60 percent of cited domains changed month over month. Over a six-month window, the majority of citation slots had completely turned over. Half the citation landscape. Gone and replaced. Every six months.

What it means

Are you building something that earns selection every time, or something that earned it once?

Stage30

Durable AI presence comes from structural architecture, the kind that makes selection likely across every re-evaluation. Not the kind that earns one good result and waits.

Full source: "40 to 60% of cited domains change month over month. Over six months, the majority of citation slots turn over completely." Profound, August 2024 to June 2025. 680 million citations tracked.
Read the original research at GetPassionFruit
Node 4.3

Does publishing new content increase AI citations? What happens to old content?

The truth

AI applies a freshness filter that is sharper than most content strategies account for. The old game did not punish you for this. Google's algorithm rewarded age in many categories. Old pages with authority held their rankings for years. The new system has different instincts. It learned from a world that moves fast, and it weights currency accordingly.

The data
Stale content is cited at one-third the rate of recently refreshed content.

GenOptima's monitoring data shows content not updated in 12 or more months is cited at one-third the rate of recently refreshed content. New content can begin generating citations within days of publication. A threefold drop. Not slow erosion. A cliff.

What it means

The comprehensive guide you published in 2022 and never touched again is running at a third of its potential citation rate right now. Multiply that across an entire content library and you are looking at a significant, invisible loss, happening every month, with no alarm going off.

Stage30

Freshness is structural. We build content maintenance into the architecture, active updates, correction signals, temporal anchors, so what you published keeps signaling current, not just indexed.
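
What a temporal anchor looks like in markup, sketched in Python. URL and dates are placeholders; dateModified is the field doing the signaling.

```python
import json
from datetime import date

# Minimal temporal anchoring via schema.org Article markup. URL and dates
# are placeholders; dateModified signals active maintenance, not just age.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Complete Guide (placeholder)",
    "url": "https://example.com/guide",
    "datePublished": "2022-06-01",
    "dateModified": date.today().isoformat(),
}
print(json.dumps(article_schema, indent=2))
```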

Full source: "Content not updated in 12+ months sees citation rates drop 3x vs. recently refreshed content." GenOptima, March 2026. AI Brand Visibility Report.
Read the original research at GetPassionFruit
Node 4.4

Why does Reddit dominate AI citation data?

The situation

Reddit keeps showing up in AI citation data. The team either ignores it or plans to route around it. The assumption: algorithm quirk, will normalize.

Consider the tagline "Built for the road ahead." Every car company has some version of it. To a consumer, it is noise. To an AI model, it is entropy. The model tries to extract meaning. What road? Ahead of what? It finds two hundred brands saying the same thing, none of them saying anything. The probability distributes. The model might give you someone else's answer. Might wobble. Might hallucinate a meaning that is not there. That is the structural cost of language designed to sound good rather than mean something. One and done.

Reddit does not do that. Reddit says: "I bought the product, here is what happened, here is what I wish I had known." The AI trusts it because it can find the meaning.

The data
Reddit citation share grew 73 percent across commercial categories in one quarter.

Tinuiti's Q1 2026 AI Citation Trends Report tracked high-intent commercial queries across nine verticals and seven platforms. Reddit citation share grew at least 73 percent across commercial categories between October 2025 and January 2026. In some categories, ChatGPT cited Reddit in nearly 60 percent of responses.

What it means

The AI learned the same lesson consumers learned about brand content twenty years ago. It goes where the honest answer lives.

Stage30

Reddit is proof of concept for the entire methodology. Real question. Direct answer. No marketing gloss. We build that, except the brand owns it. Same structural honesty, brand-controlled, accurate, and designed to be the source the model trusts.

Full source: "Reddit citation share grew at least 73% across commercial categories between October 2025 and January 2026." Tinuiti, Q1 2026. AI Citation Trends Report: 9 Verticals, 7 Platforms.
Read the original research at GetPassionFruit
Node 4.5

Has LinkedIn become a meaningful AI citation source for B2B brands?

The situation

LinkedIn is in the social budget. Content goes there for reach and employer brand. It is not part of the AI visibility conversation. Nobody flagged it.

The data
LinkedIn moved from outside the top 20 cited sources to approximately number three for professional queries in three months.

Research from Profound and Semrush found LinkedIn moved from outside the top 20 cited sources to approximately the number three most cited domain for professional queries between November 2025 and February 2026. It now appears in 16 percent of LLM answers. Outside the top 20 to number three in three months.

What it means

A surface that was not part of your AI strategy six months ago is now one of the most influential citation nodes for professional queries. The shift happened while most brands were looking at their ChatGPT dashboards. Your company page, your founders' profiles, the thought leadership you publish there, the model is reading all of it and weighting it.

Stage30

LinkedIn is a required node in any B2B brand's AI architecture now. We treat it that way: structured, consistent, and connected back to everything else, because disconnected signals in a fragmented system do not compound. Connected ones do.

Full source: "LinkedIn surged from outside the top 20 to approximately #3 for professional queries between November 2025 and February 2026. It now appears in 16% of LLM answers." Profound, November 2025 to February 2026 / Semrush, January to February 2026. 89,000 URLs analyzed.
Read the original research at GetPassionFruit

Each Platform Is Its Own World

ChatGPT and Perplexity trust almost entirely different sources. Optimizing for one does almost nothing for the other. Understanding what each platform actually pulls from changes what you build and where you build it.

Node 5.1

What sources does ChatGPT trust most when answering questions?

The truth

ChatGPT's citation behavior is heavily concentrated in a small number of trusted sources, and one source holds almost half the weight of the platform's entire top 10.

The data
Wikipedia accounts for 7.8 percent of all ChatGPT citations, holding 47.9 percent of top-10 source share.

Profound's analysis of 680 million citations found Wikipedia accounts for 7.8 percent of all ChatGPT citations and holds 47.9 percent of relative share among ChatGPT's top 10 citation sources. Nearly half the weight of ChatGPT's most trusted source tier sits in one place: Wikipedia.

What it means

For ChatGPT, a Wikipedia entity entry is not a vanity play. It is the single highest-leverage citation node on the platform. If your brand does not have one, or if what is there is thin or disconnected from your other web presence, you are missing nearly half the trust architecture of the most-used AI platform.

Stage30

Entity presence, including Wikipedia, is part of the structural foundation, not an optional extra. We connect your Wikipedia entry to your website, your schema, and your other citation sources so ChatGPT has a complete, consistent picture.
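
The connective tissue is ordinary entity markup. A minimal sketch, all URLs placeholders: schema.org Organization with sameAs pointing at the Wikipedia entry and the rest of the brand's verified presence.

```python
import json

# Minimal entity linking: schema.org Organization with sameAs tying the
# domain to the brand's Wikipedia entry and other profiles (placeholder URLs).
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.linkedin.com/company/example-brand",
        "https://www.wikidata.org/wiki/Q0000000",
    ],
}
print(json.dumps(org_schema, indent=2))
```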

Full source: "Wikipedia accounts for 7.8% of all ChatGPT citations and holds 47.9% of relative share among its top 10 citation sources." Profound, August 2024 to June 2025. 680 million citations analyzed.
Read the original research at GetPassionFruit
Node 5.2

What sources does Perplexity weight most heavily?

The truth

Perplexity's citation logic is almost the mirror image of ChatGPT's. Where ChatGPT trusts encyclopedic reference sources, Perplexity trusts community and conversation, specifically Reddit. The same kind of concentrated dominance Wikipedia holds on ChatGPT, Reddit holds on Perplexity.

The data
Reddit is Perplexity's leading citation source at 6.6 percent of total citations, with 46.7 percent of top-10 share.

Profound's 680-million-citation dataset found Reddit is Perplexity's leading citation source at 6.6 percent of total citations, with 46.7 percent relative share among its top 10 sources. A buyer using Perplexity is getting answers heavily influenced by what Reddit and community sources say about your brand. A buyer using ChatGPT is getting answers shaped by Wikipedia and reference sources.

Stage30

Platform-specific presence matters. We build for the sources each platform actually trusts, not just the ones that feel most brand-appropriate. For Perplexity, that means managing your presence in community spaces the same way you manage your website.

Full source: "Reddit is Perplexity's leading citation source at 6.6% of total citations, with 46.7% of top-10 share." Profound, August 2024 to June 2025. 680 million citations.
Read the original research at GetPassionFruit
Node 5.3

How much does content age affect AI citation rates?

The truth

AI applies a recency filter that is more decisive than most content strategies are built to handle. Older content does not just rank lower. It mostly does not get cited at all. Not the most authoritative content. Not the best-written content. The most recent content.

The data
85 percent of AI Overview citations come from content published in the last two years.

Seer Interactive's January 2026 research found that 85 percent of AI Overview citations come from content published in the last two years. A content library more than two years old without active refresh is operating outside the citation window for 85 percent of AI Overview responses.

Stage30

Recency is structural. We build update cycles into the architecture so your content keeps signaling current, not just indexed.

Full source: "85% of AI Overview citations come from content published in the last two years." Seer Interactive, January 2026.
Read the original research at GetPassionFruit
Node 5.4

Can I trust that AI citations about my brand are accurate?

The situation

The citations are up. Someone does a spot check and notices one of them describes your pricing incorrectly. And then another. The AI is citing you. It is just not getting you right.

The truth

Being cited and being represented accurately are two different things. AI systems generate confident-sounding claims that can misrepresent the source, or reference things that do not exist at all. Citation count without accuracy auditing is a metric that can hide active damage.

The data
LLMs generated plausible-sounding citations that did not exist or misrepresented the original source.

A peer-reviewed study published in Nature Communications evaluated 7 large language models on 800 medical questions. The models generated citations that were plausible in format but did not exist, or significantly misrepresented the original source material. Confident. Wrong. And presented to users as fact.

What it means

If you are being cited 200 times a month and 40 of those citations describe your offer incorrectly, your process wrong, or your results in ways you would never approve, that is not a citation win. It is a slow-motion reputation problem you cannot see unless you are auditing for it.
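
Auditing for it can start small. A toy version, naive substring matching only, with the canonical facts and answers invented for illustration:

```python
# Toy accuracy audit: flag AI answers that discuss a fact but state a value
# different from the canonical one. Naive matching, invented examples.
CANONICAL = {"price": "$49", "trial": "14-day"}

ai_answers = [
    "Acme starts at $49 per seat with a 14-day trial.",
    "Acme's price is $99 per month.",   # cites the brand, gets pricing wrong
]

for answer in ai_answers:
    flags = [topic for topic, value in CANONICAL.items()
             if topic in answer.lower() and value not in answer]
    print(("OK" if not flags else f"AUDIT: {', '.join(flags)}") + f" | {answer}")
```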

Stage30

Accuracy is a structural outcome. When your content makes clear, specific, sourced claims, the AI has something accurate to cite. When it does not, the AI fills the gap with whatever it can construct. We close that gap.

Full source: "LLMs generated plausible-sounding citations that did not exist or misrepresented the original source." Venkit et al., April 2025. Nature Communications.
Read the original research at GetPassionFruit