AI PPC

When Search Terms Stop Being Search Terms

May 2026 · 6 min read

There is a quiet but consequential change buried in Google's Ads documentation. The search terms report - one of the most relied-upon tools in any paid search account - now contains a caveat: because of AI search features like AI Overviews, AI Mode, Google Lens, and autocomplete, the terms shown may represent the "best approximation of the user's intent" rather than what they actually searched for.

That distinction matters more than it might appear on first read. PPC managers have always understood that search terms reports do not show everything - privacy thresholds have been stripping out low-volume queries for years. But this is different. It is not just that some queries are hidden. It is that some of what you see may be an AI-generated interpretation of what a user meant, not a verbatim record of what they did.

Why AI Search Features Change the Data at Source

The root of this change is structural. When a user searches via Google Lens, they are not typing a query - they are pointing a camera at something. When AI Overviews generate a follow-up prompt, or autocomplete finishes a sentence the user barely started, the resulting "search" is a composite of user input and AI inference. There is no clean, literal string of words to report back.

AI Mode compounds this further. Multi-turn AI searches involve conversational exchanges where the triggering intent spans several messages. Reducing that to a single reportable keyword requires interpretation. Google's systems have to make a call about what the user was ultimately trying to find, and that inferred intent is what surfaces in your report.

This is not a bug or an oversight. It is an honest acknowledgement that the search experience has changed so fundamentally that traditional query-level reporting cannot keep up with it. The documentation update is Google telling advertisers: the rules of the data have changed.

What This Does to Your Negative Keyword Strategy

Negative keywords have always been built on trust in the search terms report. You see irrelevant queries, you exclude them, you repeat the process. That feedback loop depends on the data being a reliable record of what users searched. If some of those terms are approximations, you may be adding negatives based on inferred intent that does not accurately reflect the traffic you are actually receiving.

The practical risk is in both directions. You could add a negative that excludes legitimate intent because the approximated term looks off-brand in the report. Or you could fail to exclude genuinely poor-fit traffic because the AI-generated approximation looks more relevant than the underlying search actually was. Either way, your negative list is being built on slightly blurred data.

The response here is not to abandon negative keyword hygiene - it remains essential. But it does mean cross-referencing your search terms report against conversion data more rigorously before making exclusions. If a term looks odd but is converting, pause before excluding. The approximation may be capturing something real that the raw string would have obscured anyway.
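That cross-check can be sketched in a few lines. The snippet below is a minimal illustration, not Google's actual export schema: the column names, sample rows, and thresholds are all hypothetical stand-ins for whatever your own search terms export and account benchmarks look like.

```python
# Hypothetical rows from a search terms report export.
# Field names are illustrative; adapt them to your actual export.
rows = [
    {"term": "cheap widgets", "clicks": 120, "conversions": 0},
    {"term": "widget repair near me", "clicks": 80, "conversions": 6},
    {"term": "free widget template", "clicks": 200, "conversions": 1},
]

MIN_CLICKS = 50       # enough volume to judge the term at all
MAX_CONV_RATE = 0.01  # below this, flag as a negative *candidate*

negative_candidates = []
for row in rows:
    conv_rate = row["conversions"] / row["clicks"] if row["clicks"] else 0.0
    # Only flag terms with real volume AND poor conversion performance.
    # A term that merely "looks off-brand" but converts stays untouched.
    if row["clicks"] >= MIN_CLICKS and conv_rate < MAX_CONV_RATE:
        negative_candidates.append((row["term"], conv_rate))

for term, rate in negative_candidates:
    print(f"Review before excluding: {term!r} (conv rate {rate:.1%})")
```

The point of the volume and conversion gates is exactly the caution above: a term never reaches the exclusion list on appearance alone, only when the performance data agrees with your reading of the approximated string.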

Match Types in an Intent-Approximation World

Broad match has been Google's default recommendation for some time, and the justification has always been that Smart Bidding can assess context beyond the literal query. This documentation change adds a new dimension to that argument. If the search terms report is already showing approximated intent rather than literal strings, then optimising around exact or phrase match terms from that report is working from an already-abstracted dataset.

That is not an argument to abandon tighter match types entirely. For high-intent, high-value terms where you have strong historical conversion data and brand safety considerations, exact and phrase match still serve a purpose. But it does weaken the case for spending significant time sculpting keyword lists based on what you read in the search terms report, if those terms are themselves AI interpretations rather than literal searches.

Campaigns that are already leaning on Smart Bidding and audience signals - rather than exhaustive keyword control - are arguably better adapted to this shift. The signals feeding those bidding models include much richer contextual data than the search terms report ever did.

Performance Max and the Reporting Consistency Problem

For Performance Max campaigns, this change is less of a departure and more of a continuation. PMax has always offered limited query-level visibility. The search terms report for PMax has been a partial view at best, filtered through Google's own relevance thresholds. The intent approximation layer simply extends the same logic that PMax advertisers have been operating under for several years.

What is new is that this approximation now affects standard Search campaigns too. Advertisers who have kept Search campaigns precisely because they wanted cleaner, more auditable data now find that the data itself has shifted. The gap between Search and PMax reporting transparency is narrowing - not because PMax has become more transparent, but because Search reporting has become more opaque.

For UK advertisers managing accounts where budget accountability and client reporting are critical, this is worth flagging explicitly. If you are presenting search term analysis as part of a performance narrative, you now need to caveat that some of what you are reporting reflects interpreted intent rather than verbatim user behaviour.

How to Maintain Meaningful Oversight

The most practical response is to shift your analytical focus. Rather than treating the search terms report as a literal transcript of user queries, treat it as a signal about intent clusters. Look for patterns across groups of terms rather than obsessing over individual strings. Is the intent broadly commercial, informational, or navigational? Are approximated terms clustering around topics that match your targeting? That is more actionable than parsing every row.
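One rough way to operate at the cluster level rather than the row level is a simple marker-word bucketing pass. This is a hand-rolled heuristic sketch, not a Google feature: the marker words and sample terms are illustrative assumptions you would tune to your own vertical.

```python
# Coarse heuristic for bucketing reported terms into intent clusters.
# Marker words are illustrative; tune them to your own vertical.
INTENT_MARKERS = {
    "commercial": ("buy", "price", "cheap", "deal", "quote"),
    "informational": ("how", "what", "guide", "vs"),
    "navigational": ("login", "contact", "near me"),
}

def classify(term: str) -> str:
    """Return the first intent bucket whose markers appear in the term."""
    lowered = term.lower()
    for intent, markers in INTENT_MARKERS.items():
        if any(marker in lowered for marker in markers):
            return intent
    return "unclassified"

terms = [
    "buy blue widgets",
    "how to install a widget",
    "widget store near me",
    "widgets",
]

clusters: dict[str, list[str]] = {}
for term in terms:
    clusters.setdefault(classify(term), []).append(term)

for intent, grouped in clusters.items():
    print(f"{intent}: {grouped}")
```

Even a crude pass like this surfaces the question that matters under approximated reporting: is the *mix* of intent in line with your targeting, regardless of whether any individual string is verbatim.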

Conversion data becomes more important as a proxy for relevance. If clicks from certain approximated terms are converting at strong rates, that is a more reliable signal than the term itself looking right. Integrating first-party conversion data cleanly - via server-side tagging where possible - gives your bidding models better signals to work with, which partially compensates for the reduced legibility of the search terms data.

There is also a broader point here about expectations. The search terms report was never a complete picture - it has been an approximation for years due to privacy filtering. This latest change is a shift in kind rather than degree, but the adaptation required is similar: treat the report as directional intelligence, not ground truth, and weight your decisions accordingly.

The Bigger Signal Here

Google documenting this change is significant in itself. It is an acknowledgement that AI-mediated search creates a reporting problem that did not exist when queries were typed strings. The company is not hiding it - it updated the documentation to say so plainly. That transparency is useful, but it also signals that this complexity will only increase as AI Mode, Lens, and voice-driven search continue to grow.

For PPC practitioners, the discipline that will matter most going forward is not keyword research in the traditional sense. It is understanding how AI systems interpret and categorise intent - which is a skill set that sits closer to content strategy and information architecture than to the spreadsheet-based query mining that defined paid search a decade ago. The two disciplines are converging faster than most account structures currently reflect.