GEO & AEO

AI Content Isn't the Problem. Poor Quality Is.

March 2026 · 5 min read

There has been a persistent myth circulating in marketing teams since generative AI tools went mainstream: that Google treats AI-written content as a red flag. Ahrefs addressed this directly in a detailed piece published in March 2026, and the conclusion is unambiguous. Google does not penalise content because it was written by AI. It penalises content that is thin, unhelpful, or spammy - and always has. The origin of the text is not the issue; its quality is.

For brands managing organic content programmes, that distinction matters. But for brands specifically trying to earn citations in AI Overviews, ChatGPT responses, Perplexity answers, or Gemini summaries, it matters even more. The same quality signals that protect you from a Google manual action are the signals that determine whether an AI system considers your content worth quoting.

The Scale Problem Nobody Is Talking About Honestly

AI writing tools make it trivially easy to produce content at volume. A team that previously published ten articles a month can now publish a hundred. That sounds like an advantage until you consider what most of that content actually looks like: generic, surface-level, structured around keyword density rather than genuine expertise. Google's systems - and increasingly, the retrieval systems powering AI search engines - are built to filter exactly this kind of output.

The Ahrefs analysis makes this explicit. The problem is not AI authorship. The problem is that AI makes it much easier to create low-quality content at scale, and low-quality content at scale is precisely what degrades a site's quality signals. If your content strategy is essentially using a language model to replicate what already exists across a thousand other sites, you are producing the kind of content that neither Google nor any LLM-powered answer engine has any reason to cite.

For UK businesses operating in competitive sectors - financial services, legal, e-commerce, professional services - this is a practical risk right now. Content farms using AI to flood the web with derivative material are raising the noise floor simply by existing. To cut through, your content needs to bring something those outputs cannot: proprietary insight, genuine sector experience, or a point of view that reflects real-world expertise.

What AI Search Engines Actually Cite

To understand why content quality is central to GEO strategy, it helps to think about how systems like Google AI Overviews or Perplexity actually select their sources. These systems do not simply rank pages - they retrieve and synthesise content to answer a query. The content they pull tends to share certain characteristics: it answers questions directly, it demonstrates authority on a topic, it is structured clearly, and it contains information that goes beyond what every other source already says.
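To make the retrieve-and-synthesise step concrete, here is a deliberately simplified sketch of how a retrieval system might rank candidate passages against a query. This is a toy bag-of-words model, not how Google or Perplexity actually work (production systems use neural embeddings and many more signals), but it illustrates why vague filler loses: a passage with no concrete terms in common with the query simply scores zero. All passage text and the `retrieve` function are illustrative.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalised by vector magnitudes.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Rank candidate passages by term overlap with the query; keep the top k.
    q = Counter(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: cosine_similarity(q, Counter(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical candidate passages: one specific, two generic.
passages = [
    "Our 2025 client data shows UK legal firms cut intake time by 40% using structured FAQs.",
    "Content is important. Many businesses create content. Content helps businesses.",
    "SEO has many aspects and there are various things to consider in marketing.",
]

print(retrieve("how do UK legal firms reduce client intake time", passages, k=1))
```

The passage with concrete, claim-rich detail wins because it shares specific terms with the query; the generic filler shares almost none. Real retrieval is far more sophisticated, but the underlying pressure is the same: specific, direct content is easier to match and quote.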

Thin AI content fails on almost every dimension here. It tends to be vague, to avoid specific claims, and to pad word counts without adding information. It is the opposite of what a retrieval system wants. Google's own guidance on helpful content - the criteria it uses in its quality assessments - aligns closely with what makes content citable by an AI system: first-hand knowledge, clear answers, demonstrated expertise, and content written primarily for people rather than algorithms.

This is the direct connection between the SEO quality discussion and GEO visibility. A piece of content that would pass a Google quality review for helpful content is also the kind of content that has a realistic chance of being surfaced in an AI Overview or referenced in a ChatGPT response. The underlying criteria are not identical, but they are far more aligned than most people assume.

Using AI Well - Without Producing Forgettable Content

None of this is an argument against using AI in content production. That would be a misreading of the situation. AI tools are genuinely useful for research, structuring, editing, and accelerating output. The question is how they are being used within a broader content process, not whether they are being used at all.

A content piece that starts with real input from a subject matter expert, gets structured with AI assistance, and is reviewed and sharpened by an experienced writer before publication is not thin content. It is efficient content production with human quality control at the critical points. That is meaningfully different from pointing a language model at a keyword, accepting the first draft, and publishing it without review.

For brands with genuine expertise - a specialist consultancy, a manufacturer with deep product knowledge, a financial advisory firm with years of sector experience - AI tools should be amplifying that expertise, not replacing it. The proprietary knowledge is the asset. AI can help you produce it faster and in more formats. It cannot manufacture it from nothing.

The GEO Implication for Content Audits

If your brand is investing in AI visibility - trying to appear in AI Overviews, earn citations in Perplexity, or be recommended by ChatGPT in response to relevant queries - a content audit is not optional. The question you need to answer is whether your existing content would be considered genuinely helpful by a human expert in your field. Not whether it is optimised, not whether it hits a word count, but whether it actually says something useful.

Practically, this means identifying pages where the content adds little beyond what a language model could generate in thirty seconds from publicly available information. Those pages are not contributing to your GEO visibility - they may actively be diluting the authority signals associated with your domain. Consolidating or substantially improving them is a higher priority than producing more content of the same type.
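As a starting point for that audit, a crude automated pass can flag candidates for human review. The sketch below is an assumption-laden heuristic of my own, not an Ahrefs or Google method: it uses digit-bearing tokens as a rough proxy for concrete detail and a small hand-picked list of filler words as a proxy for vagueness, with arbitrary thresholds. It cannot judge quality; it can only surface pages a human expert should look at first.

```python
import re

# Illustrative filler vocabulary; a real audit would use a much richer signal set.
FILLER = {"very", "really", "various", "numerous", "important", "crucial", "essential"}

def audit_page(text: str, min_specificity: float = 0.02, max_filler: float = 0.05) -> dict:
    # Flag a page for review if it has few concrete details (numbers, dates,
    # percentages) or leans heavily on vague intensifiers.
    tokens = text.lower().split()
    if not tokens:
        return {"specificity": 0.0, "filler": 0.0, "flag": True}
    specificity = sum(1 for t in tokens if re.search(r"\d", t)) / len(tokens)
    filler = sum(1 for t in tokens if t.strip(".,") in FILLER) / len(tokens)
    flag = specificity < min_specificity or filler > max_filler
    return {"specificity": round(specificity, 3), "filler": round(filler, 3), "flag": flag}

# Hypothetical examples: generic filler vs. a passage with concrete claims.
print(audit_page("Content is very important and crucial for various businesses."))
print(audit_page("Our 2025 survey of 120 UK firms found a 40% drop in intake time."))
```

The thresholds here are guesses; the point is the workflow. A flag is an invitation for a human expert to decide whether the page should be consolidated, rewritten, or left alone.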

It also means thinking carefully about content formats. Long-form explainers with genuine depth, original analysis, direct answers to specific questions, and content that references real experience all tend to perform better as citation candidates. These formats take more effort to produce well, which is precisely why AI-generated volume content will not compete with them. That is the strategic opportunity - not publishing more, but publishing things that AI systems actually want to reference.

Quality Has Always Been the Answer

The core point from the Ahrefs analysis is simple: Google's standards have not fundamentally changed. Helpful, expert, original content has always been what search systems reward. AI tools have changed the economics of content production - dramatically lowering the cost of creating something mediocre - but they have not changed what good content looks like.

For brands building a GEO strategy in 2026, this is clarifying rather than complicated. You do not need to worry about whether your content was written with AI assistance. You need to worry about whether it is genuinely good. If it is, the mechanisms that make it rank in organic search are largely the same mechanisms that make it citable by AI search engines. That alignment is worth building a content strategy around.