Google’s AI Spills Risky Health Tips
The Rise of Google’s AI Overviews and the Dangers They Pose
In May 2024, Google introduced its AI Overviews feature, a bold move aimed at making information more accessible. However, this initiative quickly became controversial due to the AI’s tendency to generate misleading or even harmful content. One of the most notable examples was when the AI suggested users eat rocks or put glue on their pizzas—clear indicators of the challenges that come with relying on large language models.
While some errors might seem trivial, such as misstating the current year or inventing explanations for non-existent idioms, others can have serious consequences. A recent investigation by a major news outlet revealed that Google’s AI Overviews were providing inaccurate health information, which could potentially endanger users.
Health Information Missteps
The investigation uncovered several alarming instances in which the AI’s summaries were not only incorrect but potentially dangerous. For example, the feature advised individuals with pancreatic cancer to avoid high-fat foods, contrary to standard medical advice. It also provided flawed information about women’s cancer tests, errors that could lead people to overlook real symptoms of cancer.
This situation is particularly concerning because individuals who are vulnerable and in distress often turn to the internet for answers. Stephanie Parker, director of digital at end-of-life charity Marie Curie, emphasized that inaccurate information received online can significantly harm someone’s health during moments of worry and crisis.
Inconsistent Responses and Potential Harm
Another issue identified was the AI’s tendency to give different responses to the same prompt. This inconsistency is a well-documented flaw in tools built on large language models, and it can leave users confused about which answer to trust. Stephen Buckle, head of information at mental health charity Mind, warned that AI Overviews offered “very dangerous advice” regarding eating disorders and psychosis, with summaries that were “incorrect, harmful, or could lead people to avoid seeking help.”
Despite these concerns, a Google spokesperson stated that the company invests significantly in ensuring the quality of AI Overviews, especially for health-related topics. However, the results of the investigation suggest that there is still much work to be done to ensure the tool does not spread harmful misinformation.
Public Trust and Reliance on AI
According to a survey conducted in April 2025 by the University of Pennsylvania’s Annenberg Public Policy Center, nearly eight in ten adults are likely to seek health information online. Moreover, nearly two-thirds of them found AI-generated results to be “somewhat or very reliable,” indicating a significant level of trust in AI despite its shortcomings.
A separate MIT study found that participants rated low-accuracy AI-generated responses as “valid, trustworthy, and complete/satisfactory.” That misplaced confidence led some participants to follow potentially harmful medical advice or to seek unnecessary medical attention. Findings like these underscore that AI models remain inadequate substitutes for human medical professionals.
The Role of Doctors and the Need for Caution
Doctors now face the challenging task of correcting myths and guiding patients away from the pitfalls of hallucinating AI. The Canadian Medical Association has labeled AI-generated health advice as “dangerous,” highlighting that hallucinations, algorithmic biases, and outdated facts can mislead individuals and potentially harm their health if they choose to follow the generated advice.
Experts consistently advise people to consult human doctors and other licensed healthcare professionals instead of relying on AI. However, this recommendation is often difficult to follow due to the many barriers to adequate care around the world.
Acknowledging Flaws and Seeking Help
Interestingly, AI Overviews sometimes appears to recognize its own limitations. When asked whether it should be trusted for health advice, the feature pointed users to the very investigation in question, summarizing it as follows: “A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm.”
As the use of AI continues to expand, it is crucial for users to remain vigilant and seek guidance from qualified professionals. The dangers associated with relying on AI for health information are real and require careful consideration.