The real risk of AI-generated content (it's not what you think)
Marc-Olivier Bouchard
LLM AI Ranking Strategy Consultant
More words, same silence.
In January 2025, Grokipedia, an AI-generated clone of Wikipedia powered by Grok, started ranking in Google. By February, it had lost roughly 90% of its search visibility. Google didn't flag it as "AI content." The pages just weren't useful enough to keep.
That's the whole argument of this piece in one paragraph. The danger of AI-generated content isn't detection. It's publishing thousands of words that nobody needs, then watching your domain's reputation rot from the inside.
The playbook everyone is running
You've heard the pitch. "10x your content output." "Publish 50 blog posts a week." "Scale your content engine with AI."
The logic: more pages indexed = more keywords ranked = more traffic. That math worked in 2014. A version of it still worked in 2020. In 2026, it's a trap.
Here's what actually happens when you push raw AI output at scale:
Google's crawl budget gets wasted. Your 200 AI articles say the same thing as ten thousand other AI articles on the same topics. Googlebot spends time on those instead of your actually useful pages.
Site-wide quality tanks. Google's Helpful Content system evaluates your domain as a whole. 200 mediocre AI pieces drag down 20 genuinely good ones.
AI models skip you entirely. ChatGPT, Claude, and Perplexity cite pages that add something new: original data, a real opinion, a specific case study. Paraphrased summaries of existing knowledge give them nothing to point at.
The result isn't a dramatic penalty. It's worse. A slow, invisible plateau where your traffic flatlines and you can't figure out why, because no single article triggered anything. The damage is cumulative.
"AI content" isn't the problem. Empty content is.
Google has been clear about this. Their policy isn't "no AI content." It's "no content created to game rankings rather than help people." Big difference.
An article written with Claude that includes your own benchmark data, your team's analysis, and a recommendation you'd actually stand behind? That's useful content that happens to be AI-assisted.
An article generated by Claude that rehashes the top 5 Google results for a keyword? That's filler. The AI didn't make it filler. The absence of anything original did.
Three questions before you hit publish
Can a reader picture what you're describing? Can someone check whether the claims are true? Could a competitor publish the same piece word for word? If the answer to either of the first two is no, or to the last one is yes, don't publish.
Grokipedia: a case study in volume over value
The site used Grok to generate Wikipedia-style articles at scale. For a few weeks in late January 2025, some pages appeared in Google results. The content wasn't wrong. It was accurate, formatted reasonably well, and covered real topics.
The problem: every article was a worse version of the Wikipedia page it drew from. No new research. No updated numbers. No perspective from someone who actually worked on the topic. Just a language model restating public information in different words.
Google caught up by early February. Indexed pages collapsed. No "AI content penalty" was involved. The Helpful Content system looked at the domain as a whole and decided it wasn't adding anything to the web. Which it wasn't.
The part people miss: AI visibility
Google rankings are half the picture. The other half is whether ChatGPT, Claude, Perplexity, or Gemini mention your content when someone asks a question in your category.
This is where AI-generated content fails worst.
An LLM needs a reason to cite your page over the 50 others that say the same thing. That reason is almost always one of four things:
Original data. A survey you ran, a dataset you built, numbers that don't exist elsewhere.
Firsthand experience. "We tested this for six months and here's what happened." Not "experts say this works."
A specific, verifiable claim. "Brand X appeared in 73% of ChatGPT responses for this category" is citable. "Brand X has strong AI visibility" is not.
A contrarian position with receipts. If you're saying something different from the consensus and backing it up, language models take notice. The consensus is already baked into their training data. They don't need another copy of it.
Raw AI output, by definition, produces consensus. Language models predict the most probable next word. Their output is the median opinion on any topic. And the median opinion is the last thing another language model needs to cite.
The contamination effect
Here's the part that should worry you if you've already published a lot of AI content.
Google's Helpful Content system doesn't evaluate pages one by one. It evaluates your whole domain. If 60% of your site is low-value AI content and 40% is genuine expert material, the 40% gets pulled down with it.
We've tracked this in xSeek. Sites that started mass-publishing AI content in 2025 hit a consistent pattern:
Google organic traffic plateaued or dropped within two to three months, even for pages that existed before the AI content push started.
AI citation rates fell. Pages that ChatGPT and Perplexity used to cite stopped appearing in responses. Those pages hadn't changed. The domain's overall authority had eroded, and the models adjusted.
The fix isn't slapping a "Written by a human" badge on your blog. It's cutting the pages that are dragging you down.
A quick content audit
Open every article you published in the last six months. For each one: "Would I send this to a paying client as proof we know what we're doing?" If no, noindex it or rewrite it with real substance.
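If you decide a page isn't worth keeping in the index, the standard mechanism is a robots meta tag in the page's head (or the equivalent `X-Robots-Tag` HTTP header for non-HTML resources):

```html
<!-- Tells search engine crawlers not to include this page in their index.
     The page stays live for visitors; it just stops competing for rankings. -->
<meta name="robots" content="noindex">
```

Prefer this over deleting the page outright when the URL still gets direct traffic or backlinks you want to keep.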
What works: AI as assistant, not author
I'm not saying stop using AI for content. That's unrealistic. The question is how.
The companies I've seen get results, both in Google and in AI citations, share the same approach:
Start with something only you know
Before opening Claude or ChatGPT, write down the insight. The data point. The customer conversation. The experiment result. The thing no language model could produce because it hasn't been published yet.
That's your seed. Everything else is scaffolding.
Use AI for structure, not substance
AI is good at turning a messy outline into clean paragraphs. It catches logical gaps, suggests better headers, smooths transitions. It can't generate insights it wasn't trained on.
Use it for what it's good at. You supply the substance.
Add proof that didn't exist before
Screenshots. Benchmark numbers. Before-and-after comparisons. Customer quotes (with permission). Charts built from your own data.
Every piece of original proof is a reason for Google to rank you and for an AI to cite you over the next site.
Resist the "ultimate guide" reflex
AI makes it easy to produce 5,000-word guides that cover every angle. The problem: so does everyone else. They all say the same thing because they all pull from the same training data.
A 1,200-word article with one original chart will outrank a 5,000-word AI guide with zero original data. I've watched it happen repeatedly in our tracking.
Check if AI models actually cite you
Most content teams track Google rankings and traffic. Almost none check whether ChatGPT, Claude, or Perplexity mention their brand or link to their pages.
That's a blind spot. xSeek tracks your AI visibility across all the major models. If you publish something and it never shows up in AI responses, that tells you something about the content's quality that Google Analytics can't.
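The underlying measurement is simple even if you roll it yourself. The sketch below is a hypothetical illustration, not xSeek's implementation: in practice the response strings would come from querying ChatGPT, Claude, Perplexity, or Gemini through their APIs with the same prompt several times; here they are hard-coded so the logic stays self-contained.

```python
def brand_mention_rate(responses, brand):
    """Fraction of AI responses that mention the brand (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# Usage: run the same prompt N times, collect the answers, then check
# how often your brand appears. These answers are made-up examples.
answers = [
    "For AI visibility tracking, tools like xSeek are popular.",
    "You could build a custom dashboard for this.",
    "xSeek tracks brand mentions across several models.",
]
print(f"{brand_mention_rate(answers, 'xSeek'):.0%}")  # prints "67%"
```

Tracking this number per page, over time, is what turns "AI visibility" from a vague worry into a metric you can act on.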
What changed between 2024 and 2026
Two years ago, AI-generated content could rank. The models were newer, the web was less saturated, and Google's quality systems hadn't caught up.
Four things shifted:
| Factor | 2024 | 2026 |
|---|---|---|
| AI content volume | Early adopters experimenting | Everyone publishing at scale |
| Google quality systems | Helpful Content v1 | Multiple overlapping classifiers |
| AI citation behavior | Cited most relevant pages | Strongly prefers original sources |
| Reader tolerance | Accepted generic content | Scrolls past anything that reads like filler |
The window for volume-first AI content closed. The window for AI-assisted expert content is open.
What Google's quality rater guidelines tell us
Google's Quality Rater Guidelines are the closest thing to a roadmap you'll get. The 2025 update leaned hard into E-E-A-T: Experience, Expertise, Authoritativeness, Trustworthiness. The first E, Experience, matters most for this conversation.
Experience means firsthand involvement. A review from someone who bought the product. A tutorial from someone who built the thing. An analysis from someone with access to the numbers.
AI can mimic the language of experience. "In my testing, I found that..." It can't produce the artifacts of experience: the screenshots, the custom data, the specific details that only come from doing the work.
Google's raters are trained to spot the difference. Increasingly, Google's algorithms are too.
Where this leaves you
AI content won't get you penalized for being AI-generated. It'll get you ignored for being empty.
The companies I see winning in both Google and AI search right now do three things:
They publish less, and every piece has something original. A data point, a case study, an opinion they'd defend in a room full of skeptics.
They use AI to go faster, not to replace thinking. AI turns a three-hour writing session into one hour. It doesn't turn someone with no expertise into an expert.
They track what gets cited. Not just Google rankings. AI mentions, source links, brand visibility across ChatGPT, Claude, Perplexity, and Gemini.
The real risk of AI-generated content isn't a penalty. It's the opportunity cost. Months of publishing words that never earn a citation, never build trust, never turn a reader into a customer.
Use AI to write faster. Bring the thinking yourself.
See if AI models cite your content
xSeek tracks your brand mentions and source citations across ChatGPT, Claude, Perplexity, and Gemini. You'll know which pages work and which are invisible.

About the author
Marc-Olivier Bouchard runs AI visibility strategy at xSeek. He spends most of his time figuring out why some content gets cited by ChatGPT, Claude, and Perplexity while identical-looking content gets skipped.
Related articles

- How to write listicle and comparison articles AI models cite: a framework for listicles and "X vs Y" posts, built on verifiable claims, proof links, and prompt monitoring.
- How ChatGPT, Perplexity, and Claude pick sources: the citation patterns behind AI responses, and what they mean for your pages.
- SEO vs AEO: when to use which: the real differences between search engine optimization and answer engine optimization.
- How GEO is rewriting the rules of search: why traditional SEO isn't enough and what generative engine optimization changes.