AI’s Medical Misinformation Problem

plus: Illinois Bans AI in Mental Health Care

Happy Friday! It’s August 8th.

I’m back after a much-needed break and with a bit of a reboot! When I first started this newsletter, it was my way of keeping up with all things AI in healthcare. Lately, though, curating weekly news started to feel more like a chore than a passion project. There are plenty of places to get curated news.

So, here’s what’s changing: I’ll be keeping things shorter, digging up hidden gems, and sharing more of the insights you won’t see everywhere else. The funding and AI drug databases aren’t going anywhere; I just need to figure out how to present that data better.

Bear with me while I experiment a bit. The goal is the same: smarter reads, shorter updates, and more aha moments!

Our picks for the week:

  • Featured Research: AI’s Medical Misinformation Problem

  • Policy: Illinois Bans AI in Mental Health Care

Read Time: 3 minutes

FEATURED RESEARCH

Study Finds AI Chatbots Frequently Repeat and Expand on Medical Misinformation

AI chatbots are everywhere in healthcare, from hospital call centers to apps used by doctors and patients. They can summarize medical records, explain test results and answer health questions.

But a new study from Mount Sinai Hospital reveals a disturbing truth: they are still easy to fool and will repeat and elaborate on false or made-up medical information, even when that information could be harmful.

This matters because the promise of AI in medicine is to make care safer and more accessible. But if chatbots confidently tell you that you have a condition that doesn’t exist, or suggest treatments for a made-up disease, both clinicians and patients risk being misinformed.

Small mistakes in an AI’s training data or typos in a patient’s record can quickly snowball into misinformation that seems authoritative and real! As more hospitals and providers adopt these systems, the risk grows.

The Study: Mount Sinai researchers tested six popular large language models with 300 clinical scenarios designed by physicians, each with a single fake medical detail, such as a non-existent lab test or syndrome.

Chatbots repeated or built on the fake information 50-83% of the time, averaging a 66% error rate with standard prompts. The best model, GPT-4o, still got it wrong about half the time.

Adding a brief warning prompt reduced errors by about a third (from a 66% to a 44% average) but didn’t eliminate them.
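If you’re building with these models, that mitigation is easy to sketch. The study’s exact prompt isn’t reproduced here; below is a minimal, hypothetical example (using the OpenAI Python SDK, with my own SAFETY_PROMPT wording and a made-up “serum glucinase panel” test) of prepending a cautionary system message to a clinical question:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical wording -- the study's actual mitigation prompt may differ.
SAFETY_PROMPT = (
    "The question may contain an inaccurate or fabricated medical detail. "
    "Before answering, check whether every lab test, syndrome, and treatment "
    "mentioned actually exists. If you cannot verify something, say so "
    "explicitly instead of elaborating on it."
)

def ask_with_guardrail(question: str) -> str:
    """Send a clinical question with a cautionary system message prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # keep answers consistent across test runs
    )
    return response.choices[0].message.content

# "Serum glucinase panel" is a made-up test, in the spirit of the study's
# fabricated details; a safe answer should flag it rather than explain it.
print(ask_with_guardrail(
    "My chart shows a positive serum glucinase panel. What does that mean?"
))
```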

Why It Matters: As hospitals rush to adopt AI, these findings underscore the need for better safeguards, smarter prompts and human oversight. Misinformation is a major blind spot, but targeted guardrails can make these tools safer for everyone.

For more details: Full Article 

Brain Booster

What percentage of American adults have encountered the false claim that the MMR vaccine causes autism?


Select the right answer! (See the explanation and source below.)

What Caught My Eye

AI BAN

Illinois Passes Law Banning AI from Providing Mental Health Services

Illinois just drew a line in the sand for AI and mental health. Governor JB Pritzker signed a bill that bans AI from delivering therapy or making clinical decisions, though AI can still be used for administrative tasks. Break the rules and you’ll face fines of up to $10,000.

This comes after real-world mistakes, like a chatbot that once suggested meth to a patient. State officials and lawmakers say AI isn’t ready for the responsibility and nuance of mental health care. Florida is talking about similar rules, so this may be just the start.

Fun to watch how laws and algorithms try to keep up with each other—and how the definition of “care” changes with every update.

For more details: Full Article

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: C) 61%

According to a recent KFF poll (conducted April 8-15, 2025), about 61% of U.S. adults, and similarly 61% of parents, have heard the debunked claim linking the MMR vaccine to autism. [Source]

How did we do this week?

