Google's new AI Overviews feature is providing users with inaccurate and potentially dangerous health information, according to multiple health organizations and patient advocacy groups. Investigations have revealed that the AI-generated summaries, which appear at the top of search results, have offered incorrect advice on serious conditions including pancreatic cancer, liver disease, and mental health disorders.
Health experts are raising alarms that this misinformation could lead individuals to misinterpret symptoms, follow harmful dietary plans, or avoid seeking necessary medical care, presenting a significant risk to public health.
Key Takeaways
- Google's AI Overviews have been found to provide incorrect medical advice for several serious health conditions.
- Experts warn that advice regarding pancreatic cancer is the opposite of recommended medical guidance and could be dangerous.
- Misleading information on liver function tests and cancer screenings could cause patients to ignore serious symptoms.
- Mental health charities have also identified harmful and stigmatizing advice generated by the AI tool.
- Google maintains that the majority of its AI Overviews are accurate and that it is continuously working on quality improvements.
A Pattern of Inaccurate Guidance
Concerns from several health charities and medical professionals prompted a closer look at the advice being generated by Google's AI Overviews. The tool, designed to provide quick summaries of information, has been shown to deliver dangerously flawed guidance across a spectrum of health queries.
Patient advocacy groups have described the findings as deeply concerning, highlighting that people often turn to search engines during moments of vulnerability and stress. Stephanie Parker, the director of digital at the end-of-life charity Marie Curie, noted the potential for serious harm. "People turn to the internet in moments of worry and crisis," she said. "If the information they receive is inaccurate or out of context, it can seriously harm their health."
What Are AI Overviews?
AI Overviews are summaries generated by artificial intelligence that appear at the top of Google's search results page. The goal is to provide a quick, synthesized answer to a user's query by pulling information from various web pages. However, the system's accuracy depends on its ability to correctly interpret and contextualize that information.
Life-Threatening Cancer Advice
One of the most alarming examples of misinformation relates to pancreatic cancer. When queried, the AI Overview advised patients to avoid high-fat foods. Health experts say this advice is not only wrong but could have severe consequences.
"This is completely incorrect," said Anna Jewell, the director of support, research and influencing at Pancreatic Cancer UK. She explained that this advice "could be really dangerous and jeopardise a person’s chances of being well enough to have treatment."
Jewell elaborated that patients with pancreatic cancer often need a high-calorie, high-fat diet to maintain their weight and strength for chemotherapy or life-saving surgery. Following the AI's advice could lead to malnutrition and a reduced ability to tolerate treatment.
Misinformation on Cancer Screenings
The AI also provided incorrect information regarding women's health. A search for "vaginal cancer symptoms and tests" incorrectly listed a Pap test as a diagnostic tool for vaginal cancer.
Athena Lamnisos, chief executive of the Eve Appeal cancer charity, called this "completely wrong information." She warned that a woman who recently had a clear Pap test might dismiss genuine symptoms of vaginal cancer based on this flawed advice. "Getting wrong information like this could potentially lead to someone not getting vaginal cancer symptoms checked," she stated.
Lamnisos also noted a concerning inconsistency, where the same search query would yield different AI-generated answers at different times, pulling from various sources and creating an unreliable user experience.
Flawed Data on Liver and Mental Health
The investigation also uncovered misleading results for queries about liver function tests. The AI Overview for "what is the normal range for liver blood tests" presented a list of numbers without the crucial context of a patient's age, sex, or ethnicity.
Pamela Healy, chief executive of the British Liver Trust, described the AI summaries as "alarming." She pointed out that what the AI presents as 'normal' can be very different from clinically accepted ranges, potentially giving false reassurance to individuals with serious liver disease.
Mental health is another area where the AI's guidance has been found lacking. Stephen Buckley, the head of information at the charity Mind, said that summaries for conditions like psychosis and eating disorders offered "very dangerous advice."
He noted that the AI-generated content was sometimes "incorrect, harmful or could lead people to avoid seeking help." Buckley also raised concerns that the summaries can reinforce existing biases and stereotypes about mental illness, while sometimes pointing users to inappropriate online resources.
Google's Position on AI Accuracy
In response to these findings, a spokesperson for Google stated that the company invests heavily in the quality of its AI Overviews, especially for sensitive topics like health. They asserted that "the vast majority provide accurate information."
The company also mentioned that many of the examples shared were based on "incomplete screenshots" but that from what they could assess, the AI linked to reputable sources and included recommendations to seek professional medical advice. Google said the accuracy of AI Overviews is comparable to its long-standing "featured snippets" feature and that it takes action when the AI misinterprets web content.
Despite these assurances, health organizations remain cautious. Sophie Randall, director of the Patient Information Forum, concluded that the examples demonstrate how "Google's AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people's health."