Google removes AI Overviews for certain medical queries
What Happened
Google has removed AI Overviews for certain medical searches. The move follows an investigation by The Guardian that found Google AI Overviews returning misleading information in response to some health-related queries.
Our Take
This isn't a bug; it's the feature working exactly as designed. Google shipped a hallucination engine and hoped the brand would carry it through. Turns out doctors, recipes, and medical queries don't forgive confidence plus wrongness.
The Guardian didn't discover a glitch; they found the floor of what LLM-over-retrieval actually does at scale. Google has zero guardrails between "it sounds plausible" and "publish it to 100 million people."
For client work, this is the reminder that slapping an LLM onto a customer-facing surface requires domain expertise and testing infrastructure, not just API calls.
What To Do
If a client asks for an "AI-powered answer feature," require them to fund the domain-expert review loop or walk away.
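What that review loop can look like in practice: a gate that refuses to auto-publish LLM answers for queries in a high-risk domain and queues them for an expert instead. This is a minimal sketch under assumptions of my own; the `ReviewGate` class, the keyword list, and the method names are all hypothetical illustrations, not any real product's API.

```python
from dataclasses import dataclass, field

# Hypothetical keyword screen for high-risk (here, medical) queries.
# A real system would use a trained classifier, not a word list.
HIGH_RISK_TERMS = {"dose", "dosage", "symptom", "treatment", "drug", "cancer"}

@dataclass
class ReviewGate:
    pending: list = field(default_factory=list)    # answers awaiting expert sign-off
    published: list = field(default_factory=list)  # answers cleared for display

    def is_high_risk(self, query: str) -> bool:
        # Flag the query if any word matches the high-risk list.
        words = set(query.lower().split())
        return not words.isdisjoint(HIGH_RISK_TERMS)

    def submit(self, query: str, llm_answer: str) -> str:
        # High-risk answers go to the review queue; everything else publishes.
        if self.is_high_risk(query):
            self.pending.append((query, llm_answer))
            return "queued-for-expert-review"
        self.published.append((query, llm_answer))
        return "published"

    def approve(self, index: int = 0) -> None:
        # Called by a human domain expert after checking the queued answer.
        self.published.append(self.pending.pop(index))

gate = ReviewGate()
print(gate.submit("best hiking trails", "Try the ridge loop."))   # published
print(gate.submit("safe ibuprofen dosage", "Take 10 tablets."))   # queued-for-expert-review
```

The design point is the budget line item the gate implies: every item that lands in `pending` costs expert time, which is exactly the cost the client has to fund up front.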
