Most AEO conversations focus on the macro level. Which platforms cite which sources. Whether AI Overviews pull from blogs or authority sites. How domain trust factors into Perplexity's source selection. These are all legitimate questions. But a more fundamental issue gets far less attention: how the actual writing on your pages determines whether AI models can extract and cite your content at all.
Dan Petrovic's analysis published via Ahrefs makes a point that sounds obvious once stated but is routinely ignored in practice. Humans and AI models process long-form text in broadly similar ways - both are trying to extract meaning quickly from content that may or may not be clearly structured. The implication is direct: if your content is hard for a human to scan and absorb, it is likely hard for an AI model to parse and cite. Human-friendly writing is AI-friendly writing.
Why AI Models Struggle With the Same Content Humans Do
AI language models do not read in the linear, patient way that an expert researcher might. They are pattern-matching across your text, looking for clearly stated claims, structured information, and unambiguous answers. Dense, meandering paragraphs make that harder. So does burying the key point at the end of a long setup. So does writing that assumes contextual knowledge the reader - or the model - may not have.
This mirrors how most humans actually read online. Research on reading behaviour has long established that people scan before they read. They look for signals that a section will answer their question before they commit to reading it. If those signals are not present - a clear heading, a direct opening sentence, a structure that telegraphs its own content - they move on. AI models exhibit a functionally similar behaviour. Content that cannot quickly signal its relevance and clarity is unlikely to be surfaced in a citation.
Structure Is Not a Cosmetic Choice
One of the most persistent misconceptions in content marketing is that structure is a presentation concern - something you address after the thinking is done. In reality, structure is how meaning is communicated. A well-formed heading is not just navigation. It is a claim. It tells both the reader and the AI model what this section will establish. A direct first sentence beneath that heading either fulfils the claim or it does not.
For AEO purposes, this matters enormously. When ChatGPT, Gemini, or Google's AI Overviews pull a citation, they are typically surfacing a discrete piece of content - a specific answer, a clear definition, a concrete recommendation. That content needs to be findable and extractable. If your best insights are embedded in the middle of long paragraphs with no structural signposting, they are not going to be found. Breaking content into clearly scoped sections with specific, descriptive headings is not a formatting preference. It is a prerequisite for AI citation.
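One way to make "clearly scoped sections" concrete is at the markup level. The sketch below is illustrative only - the heading text and copy are invented placeholders, not a prescription for any particular CMS or schema - but it shows the shape: a heading that states a claim, and a first sentence that fulfils it before any context arrives.

```html
<section>
  <!-- The heading is specific enough to function as a standalone query -->
  <h2>What counts as a citable answer?</h2>
  <!-- The first sentence answers the heading directly; caveats follow later -->
  <p>A citable answer is a self-contained statement that resolves the
     heading's question without requiring the surrounding paragraphs.
     Context, qualification, and supporting detail come afterwards.</p>
</section>
```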
The Four Writing Frameworks Worth Understanding
Petrovic's article outlines four writing frameworks that improve AI visibility at the page level. Without reproducing his full analysis, the core principle running through all four is the same: clarity and structure serve both human readers and AI models simultaneously. Frameworks that front-load the key point, organise information into discrete answerable units, and avoid unnecessary abstraction consistently outperform content that prioritises style over scannability.
Practically, this means considering how each section of a page functions as a standalone answer. A section on, say, UK VAT implications for SaaS businesses should open with a direct statement of the position, not with three sentences of context-setting. The context can follow, but the answer comes first. This is the inverted pyramid structure that journalism has used for decades - and it maps almost exactly onto how AI models prefer to extract citable content.
List formats, definition blocks, and structured comparisons also perform well in AI citation contexts - not because AI models have a mechanical preference for bullet points, but because these formats force writers to make discrete, parallel claims rather than blending everything into continuous prose. Each item in a well-constructed list is a self-contained assertion. That makes it far easier to extract and attribute.
What This Means for UK Brands Investing in AI Visibility
For brands trying to appear in Google AI Overviews or earn citations in Perplexity and ChatGPT, on-page writing quality is no longer a secondary concern. It may be the primary variable within your control. Technical SEO, backlink authority, and domain trust all matter - but if the writing itself is not structured for extractability, those other factors do not fully compensate.
UK brands in particular often operate in sectors - financial services, professional services, healthcare, legal - where content defaults to cautious, heavily qualified prose. That caution is often legitimate. But it frequently produces writing that is structurally opaque. Long sentences with multiple subordinate clauses. Conclusions buried in caveats. Key recommendations separated from the questions they are answering by paragraphs of scene-setting. These habits actively reduce AI visibility.
The fix is not to strip out nuance. It is to lead with the clear point and add the qualification after. State the answer, then explain the conditions. That approach satisfies compliance requirements while producing content that AI models can actually parse and attribute.
Auditing Your Existing Content Through This Lens
The practical starting point for most brands is not a full content rewrite. It is an audit of high-priority pages against a simple set of structural questions. Does each major section open with a direct, answerable statement? Are headings specific enough to function as standalone queries? Is the key insight of each section findable without reading the full paragraph? Are there sections that answer the implicit questions your target audience is actually asking?
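Parts of this audit can be automated as a first-pass screen. The sketch below is a rough heuristic in Python, assuming markdown content with ATX headings; the generic-heading list and the 30-word threshold are arbitrary illustrations, and nothing here replaces editorial judgement:

```python
import re

# Headings too vague to function as standalone queries (illustrative list)
GENERIC_HEADINGS = {"introduction", "overview", "background", "conclusion"}

def audit_markdown(text: str, max_opening_words: int = 30) -> list[str]:
    """Flag sections that likely fail basic extractability checks.

    Heuristics only: a rough screen for vague headings and buried
    opening sentences, not a substitute for reading the page.
    """
    issues = []
    # Split into alternating [preamble, heading, body, heading, body, ...]
    parts = re.split(r"^(#{1,6} .+)$", text, flags=re.MULTILINE)
    for i in range(1, len(parts), 2):
        heading = parts[i].lstrip("# ").strip()
        body = parts[i + 1].strip() if i + 1 < len(parts) else ""
        if heading.lower() in GENERIC_HEADINGS or len(heading.split()) < 2:
            issues.append(f"Vague heading: {heading!r}")
        # First sentence should deliver the answer, so it should be short
        first_sentence = re.split(r"(?<=[.!?])\s", body, maxsplit=1)[0]
        if len(first_sentence.split()) > max_opening_words:
            issues.append(f"Long opening sentence under {heading!r}")
    return issues
```

Running it over a page with an "Overview" heading and a rambling opener would flag both problems, while a specific, answer-first section passes clean.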
Pages that fail these checks are pages that AI models are likely skipping over, even if the underlying content is genuinely authoritative. The information is there; it is just not structured in a way that enables extraction. Fixing that does not require new research or new arguments. It requires rewriting existing content with structure as the primary concern, not an afterthought.
If you have already done the work of building topical authority and earning credible backlinks, poor on-page writing is the thing most likely to be holding your AI visibility back. That is a tractable problem - and one that pays off across organic search, AI citation, and direct human readability at the same time.