
44% of Google's Medical AI is WRONG: It Could Kill You

Investigation exposes errors that could kill patients: 44% of medical searches trigger an AI summary, and experts rated up to 70% of them as risky.

AdScriptly.io Team
January 27, 2026 · 13 min read
[Image: medical stethoscope on a blue background. Photo by Hush Naidoo Jade Photography on Unsplash]

Key takeaways

Google had to remove AI summaries from medical searches after an investigation revealed dangerously incorrect advice. A pancreatic cancer patient could die by following its recommendations.

You search Google for "pancreatic cancer symptoms" and the AI tells you to avoid fats. Sounds like reasonable advice, right? You could die if you follow it.

This is exactly what an investigation by The Guardian discovered in January 2026: Google's artificial intelligence summaries, called AI Overviews, are giving medical advice so dangerously incorrect that experts describe it as "alarming" and potentially fatal.

Google had to remove these summaries from several medical searches after doctors and health organizations raised the alarm. But the damage is done: 72% of adults search for health information on Google, and millions have received advice that directly contradicts medical evidence.

What went wrong? How much can you trust "Dr. Google"? And most importantly: what should you use instead?

What Are AI Overviews and Why Google Put Them in Health Searches

AI Overviews are artificial intelligence-generated summaries (using the Gemini model) that appear at the top of Google search results. Instead of showing you a list of links, Google tries to give you "the answer" directly.

Timeline of the Disaster

| Date | Event |
|------|-------|
| May 2023 | Google presents AI Overviews at I/O |
| May 2024 | US launch for hundreds of millions of users |
| Late 2024 | Expansion to over one billion users |
| January 2026 | The Guardian exposes dangerous medical errors |
| January 2026 | Google removes AI Overviews from some health searches |

The original idea seemed good: instead of reading 10 articles about your symptoms, the AI gives you a summary. The problem is that AI cannot distinguish between correct and incorrect information when it comes to medicine.

Why Health Is Especially Dangerous for AI

Medicine isn't like searching for "best Italian restaurant nearby." A health error can:

  • Delay a correct diagnosis
  • Make you ignore serious symptoms
  • Lead you to make decisions that worsen your condition
  • In extreme cases, kill you

And according to an Ahrefs study, 44% of medical queries trigger an AI Overview, more than double any other category.

The Errors That Almost Killed Patients

The Guardian's investigation, supported by British medical organizations, documented chilling cases.

Case 1: Pancreatic Cancer - "Avoid Fats"

A search about pancreatic cancer generated an AI Overview advising to avoid high-fat foods.

The problem: This is the exact opposite of what these patients need.

Anna Jewell, from Pancreatic Cancer UK, explained:

"The advice to avoid high-fat foods is completely incorrect. Following that advice could be really dangerous and compromise someone's chances of being well enough to receive treatment."

Why? Pancreatic cancer patients have difficulty absorbing nutrients. They need high-calorie, high-fat diets to:

  • Maintain body weight
  • Tolerate chemotherapy
  • Be in condition for potentially life-saving surgery

Following Google's advice could literally prevent you from surviving a treatment that could cure you.

Case 2: Pap Test and Vaginal Cancer - 100% False Information

A search for "vaginal cancer symptoms and tests" returned a summary claiming that the Pap test detects vaginal cancer.

This is completely false.

The Pap test (cervical cytology) detects cervical cancer, not vaginal. They are different cancers in different organs with different tests.

Athena Lamnisos, from The Eve Appeal, stated:

"It's not a test to detect cancer, and it's certainly not a test to detect vaginal cancer – this is completely erroneous information. Getting incorrect information like this could potentially lead someone to not get their symptoms checked because they had a clear result on a recent cervical screening."

A woman with vaginal cancer symptoms might think: "My Pap test came back fine 6 months ago, it can't be cancer." And fail to see a doctor until it's too late.

Case 3: Liver Tests - Numbers Without Context

Searches for "normal liver test ranges" produced tables of numbers without considering:

  • Patient's age
  • Sex
  • Ethnicity
  • Specific laboratory methodology

Vanessa Hebditch, from British Liver Trust:

"A liver function test is a collection of different blood tests. Understanding the results and what to do next is complex and involves much more than comparing a set of numbers."

The danger: many liver diseases are asymptomatic until advanced stages. A person might see that their numbers "look normal" according to Google and not seek medical attention, when in reality they have developing cirrhosis.

Case 4: Mental Health - Advice That Pushes People Away from Help

For searches about psychosis and eating disorders, AI Overviews offered advice described by experts as "very dangerous."

Stephen Buckley, from Mind:

"Some AI summaries for conditions like psychosis and eating disorders offered very dangerous advice and were incorrect, harmful or could lead people to avoid seeking help."

In mental health, where stigma already makes it difficult for people to seek help, incorrect advice from an "authoritative" source like Google can be the difference between seeking treatment or not.

The Statistics That Should Scare You

The numbers behind this problem are alarming:

"Dr. Google" Usage

| Statistic | Meaning |
|-----------|---------|
| 72% of adults search for health information on Google | Most people trust Google for health |
| 89% search symptoms before seeing a doctor | Google is the "first doctor" |
| 35% try to self-diagnose online | They avoid doctors completely |
| 50 million health searches in the UK (2023) | Massive scale |

AI Errors in Health

| Statistic | Source |
|-----------|--------|
| 44% of medical queries trigger an AI Overview | Ahrefs |
| 70% of health AI Overviews rated as "risky" | Medical panel |
| 22% of AI responses contain harmful recommendations | Stanford-Harvard study |
| 1 in 8 summaries "very or extremely" risky | Nurse study |

The Problem of Blind Trust

| Statistic | Problem |
|-----------|---------|
| 2/3 of users find AI results "trustworthy" | They trust incorrect information |
| MIT participants rated low-accuracy AI responses as "valid and reliable" | People can't detect errors |
| 25% of doctors say patients' AI-sourced info contradicts medical advice | Conflict in consultations |

Google's Response: Insufficient

After The Guardian's investigation, Google removed AI Overviews for some specific medical searches:

What They Removed

  • "what is the normal range for liver blood tests"
  • "what is the normal range for liver function tests"

What They Did NOT Remove

Variations like:

  • "lft reference range"
  • "lft test reference range"

These searches still generate AI summaries. Simply by rephrasing the question, you can still get potentially dangerous information.

Google's Defense

A Google spokesperson stated:

"We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information."

Google also claimed that:

  • Their internal team of doctors reviewed the examples
  • In many cases, the information "wasn't inaccurate"
  • The summaries link to "known and reputable sources"

The problem with this defense: Even if it links to Johns Hopkins, the AI can misinterpret that information or present it out of context.

Medical Organizations' Criticism

Vanessa Hebditch, from British Liver Trust:

"Our biggest concern is that this is a limited approach to one search result. Google could simply disable AI Overviews for that query, but that doesn't solve the bigger problem of using AI Overviews in healthcare."

In other words: Google is playing "whack-a-mole" with specific searches instead of recognizing that AI shouldn't give medical advice.

Why AI Fails in Medicine (And Always Will)

There are fundamental reasons why artificial intelligence cannot replace medical judgment:

1. AI Cannot Examine You

A doctor can:

  • Feel a lump
  • Listen to your heart
  • See your skin color
  • Notice your anxiety level

AI only has text. It cannot perform a physical examination.

2. It Doesn't Know Your History

Your doctor knows:

  • What medications you take
  • Your allergies
  • Previous illnesses
  • Family history
  • Previous test results

AI answers an isolated question without context.

3. "Normal" Ranges Are Not Universal

What's normal for:

  • A 25-year-old woman
  • A 65-year-old man
  • A person of Asian descent
  • Someone taking certain medications

...is completely different. AI gives generic ranges that don't apply to you.

4. AI "Hallucinates" With Total Confidence

Language models can invent information that sounds completely credible but is 100% false. And they present it with the same confidence as real information.

5. Medicine Evolves, AI Falls Behind

Medical recommendations change. What was standard 5 years ago may be obsolete today. AI may be trained on outdated information.

What to Use Instead of Google AI for Health

If you need to search for medical information online, use verified sources:

Top-Tier Medical Institutions

| Source | Specialty | URL |
|--------|-----------|-----|
| Mayo Clinic | General, high quality | mayoclinic.org |
| Cleveland Clinic | Medical research | my.clevelandclinic.org |
| Johns Hopkins | Global research | hopkinsmedicine.org |
| M.D. Anderson | Cancer | mdanderson.org |

Government Resources (US)

| Source | Description |
|--------|-------------|
| MedlinePlus | NIH, verified information |
| CDC | Diseases and prevention |
| NIH | Medical research |
| DailyMed | FDA drug information |

For UK Audience

| Source | Description |
|--------|-------------|
| NHS | National Health Service |
| NHS Inform | Scotland's health information service |

When to ALWAYS See a Doctor

Don't search Google if:

  • You have intense or sudden pain
  • Symptoms that are worsening
  • Unexplained bleeding
  • Difficulty breathing
  • Confusion or disorientation
  • Persistent high fever
  • Any symptom that seriously worries you

The golden rule: If you're searching symptoms because something worries you, that worry is already reason enough to call a doctor.

The Bigger Problem: Trust in AI

This case reveals something deeper about our relationship with artificial intelligence.

The Authority Bias

When Google gives you an answer, you assume it's correct because:

  • Google is a leading technology company
  • The answer "comes from advanced AI"
  • It's at the top of the page
  • It looks professional and well-written

But AI doesn't know what's true. It only knows what "sounds like" medical information.

The Danger of Automation in Health

Stephanie Parker, from Marie Curie:

"People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health."

When you search symptoms, you're vulnerable. You're scared. And you trust that Google will give you correct information.

That trust can kill you.

What Google Should Do (But Probably Won't)

Medical organizations have called for concrete actions:

1. Remove AI Overviews from ALL Medical Searches

Not just from specific queries that generate complaints, but from any health-related search.

2. Show Clear Warnings

If they insist on keeping AI Overviews in health, they should include:

  • "This information may be incorrect"
  • "Consult a medical professional"
  • "Ranges vary based on your individual situation"

3. Transparency About Errors

Publish statistics about:

  • How many corrections they've made
  • What error rate they accept
  • How they verify medical information

4. Collaboration with Medical Organizations

Work with hospitals, medical associations, and patient organizations to review content before displaying it.

Conclusion: Don't Trust Your Life to AI

The Guardian's investigation has exposed what many doctors feared: Google's AI is giving dangerous medical advice to millions of people.

A pancreatic cancer patient could die of malnutrition following the "avoid fats" advice. A woman could ignore vaginal cancer symptoms thinking a Pap test protects her. People with liver disease could think they're healthy looking at numbers without context.

And Google, instead of completely removing this dangerous functionality, plays whack-a-mole with specific holes while the fundamental problem persists.

What You Should Do Now

  1. Don't trust AI Overviews for health - Ever.
  2. Use verified medical sources - Mayo Clinic, MedlinePlus, your national health system.
  3. Consult professionals - If something worries you, call the doctor.
  4. Distrust answers that are "too clear" - Medicine is complex, simple answers are usually wrong.
  5. Share this information - Your family members also use Google for health.

Artificial intelligence can be useful for many things. But when it comes to your life, you need a human being with medical training, not a language model that "sounds like" it knows medicine.

Your health is too important to leave in the hands of an AI whose medical summaries experts have rated as risky in up to 70% of cases.


Have you ever received incorrect health advice from Google? Next time you search symptoms, remember: AI is not your doctor.


Frequently Asked Questions

Has Google removed AI Overviews from all medical searches?

No. Google has only removed AI Overviews from very specific searches like 'normal liver test ranges', but variations of the same question still show AI summaries. The problem persists for most health-related searches.

What percentage of AI medical responses are incorrect?

According to studies, 44% of medical queries trigger AI Overviews, and up to 70% of these summaries have been rated as 'risky' by medical professional panels. A Stanford-Harvard study found that 22% of AI responses include potentially harmful recommendations.

Which websites should I use for reliable medical information?

The most trustworthy sources are: Mayo Clinic, Cleveland Clinic, Johns Hopkins Medicine for medical institutions; MedlinePlus and CDC for verified government resources; and NHS (UK) for official national health information. Avoid depending on AI summaries.

Why does AI give incorrect medical advice?

AI cannot physically examine you, doesn't know your medical history, doesn't understand that 'normal' ranges vary by age, sex, and other factors, and can 'hallucinate' false information with total confidence. Additionally, it may be trained on outdated information. Medicine requires clinical judgment that AI cannot replicate.

When should I consult a real doctor instead of searching Google?

Always when you have intense pain, worsening symptoms, unexplained bleeding, difficulty breathing, persistent fever, or any symptom that seriously worries you. The simple rule: if you're worried enough to search symptoms, you already have enough reason to call a doctor.
