AI Spots Dangerous Drug Candidates
plus: Teens Turn to AI for Mental Health

Happy Friday! It’s November 14th.
We’ve been talking a lot about regulating AI chatbots that act as therapists, and in last week’s post we featured how the FDA might step in. So perhaps it’s not so surprising that this week we learned OpenAI is eyeing the consumer health space.
According to Reuters, the company is considering launching a generative-AI health assistant and has made executive hires in healthcare strategy.
If they really go there, it forces a bigger conversation about what we hand over to these systems and what parts of care still need a real human in the loop!
Our picks for the week:
Featured Research: AI Spots Dangerous Drug Candidates
Perspective: Teens Turn to AI for Mental Health
Read Time: 3.5 minutes
FEATURED RESEARCH
AI Model Predicts Human Drug Toxicity Long Before Clinical Trials Begin

Drug development has a long history of surprises (and not the good kind). A treatment can look perfectly safe in mice, sail through preclinical testing, and then cause severe reactions in humans.
One of the most infamous examples is TGN1412, an experimental immunotherapy designed to activate T-cells. It worked in animals, but in its first human trial, it triggered a catastrophic cytokine storm within hours, sending all six volunteers into multi-organ failure.
Aptiganel, a stroke drug candidate, followed a similar path: strong results in animals, severe hallucinations in humans. These failures point to a simple reality. Animals and humans respond differently to drugs, and those differences often stay hidden until it’s too late.
Preclinical testing is meant to catch these issues early, yet more than half of first-in-human failures are tied to toxicity that never appeared in animal models. For pharmaceutical companies, that means wasted years and high costs.
What the researchers found: A team at POSTECH built an AI model designed to capture those cross-species gaps directly. They focused on the Genotype-Phenotype Difference (GPD): how a drug’s target genes behave differently in humans, mice, and cell lines.
They quantified three things: how essential each gene is for survival, how strongly it’s expressed across tissues, and how it connects within biological pathways.
Using toxicity data from 434 hazardous drugs and 790 approved ones, they trained a model that far outperformed traditional chemistry-based approaches.
Predictive power rose from an AUROC of 0.50 to 0.75, and precision nearly doubled.
In a harder test, the model was trained only on drugs known before 1991, then asked to predict which drugs would later be withdrawn for toxicity. It reached 95% accuracy.
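To make the setup concrete, here is a minimal sketch of the general idea: build per-drug features from the human-versus-mouse gap in gene essentiality, expression, and pathway connectivity, fit a classifier on hazardous-versus-approved labels, and score it with AUROC. The feature values below are random placeholders, and the column names and choice of gradient boosting are our own illustrative assumptions, not the POSTECH team’s actual pipeline.

```python
# Hypothetical sketch (not the authors' code): classify drugs as hazardous vs.
# approved using cross-species differences in three gene-level features.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_drugs = 1224  # 434 hazardous + 790 approved, per the study

# One row per drug: placeholder gene-level features measured in humans and mice
# (essentiality, tissue expression, pathway connectivity), aggregated over targets.
features = ["essentiality", "expression", "connectivity"]
human = pd.DataFrame(rng.normal(size=(n_drugs, 3)), columns=[f"human_{f}" for f in features])
mouse = pd.DataFrame(rng.normal(size=(n_drugs, 3)), columns=[f"mouse_{f}" for f in features])

# Cross-species gap: human value minus mouse value for each feature
gpd = pd.DataFrame({f"gpd_{f}": human[f"human_{f}"] - mouse[f"mouse_{f}"] for f in features})
labels = np.array([1] * 434 + [0] * 790)  # 1 = hazardous, 0 = approved

X_train, X_test, y_train, y_test = train_test_split(
    gpd, labels, test_size=0.2, stratify=labels, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

With real gene-level inputs instead of random placeholders, the same evaluation loop is what lets you compare a cross-species model against chemistry-only baselines, which is where the reported AUROC jump from 0.50 to 0.75 comes from.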
Why it matters: By quantifying where species differ, this system offers an early warning signal before human trials begin.
It won’t replace lab research, but it has the potential to reduce costly failures and improve patient safety.
For more details: Full Article
Brain Booster
Which prescription painkiller, heavily promoted in the 1990s and 2000s as having low addiction risk, played a central role in the opioid crisis?
Select the right answer! (See explanation below and source)
What Caught My Eye
MENTAL HEALTH
Why Young People Are Using Generative AI for Mental Health Advice at Unprecedented Rates
Generative AI has quietly become a mental-health lifeline for millions of young people, and now we finally have national data showing just how common it is.
A new JAMA study surveyed U.S. adolescents and young adults aged 12 to 21 and found that 13.1% (about 5.4 million youths) have used generative AI for mental-health advice when feeling sad, angry, or nervous.
Usage jumps to 22.2% among 18 to 21-year-olds. And once they start, they keep coming back: 65.5% use AI monthly or more, and 92.7% say the advice is helpful.
The backdrop is bleak. 40% of adolescents with major depression get no mental health care. Costs, waitlists, and privacy concerns push teens toward tools that are immediate, anonymous, and always on.
But the study also highlights gaps. Black respondents were significantly less likely to find AI advice helpful, raising cultural-competency concerns. And no one knows whether these tools deliver safe guidance for teens with deeper clinical needs.
This is the messy reality heading into 2026: millions of young people already use AI like an emotional support system, while the regulatory and clinical frameworks to evaluate that advice barely exist.
The demand is only going to rise from here. The question is whether the safeguards will catch up.
For more details: Full Article
Top Funded Startups

Byte-Sized Break
📢 Other Happenings in Healthcare AI
UpGuard’s new report shows that more than 80% of workers rely on unapproved AI tools, with executives using them most often and many employees trusting these tools more than coworkers. [Link]
Stanford researchers built a model that predicts whether donor livers will be viable in time for transplant, cutting futile procurements by about 60% and outperforming surgeons in accuracy. [Link]
OpenAI is exploring consumer health products, including a generative‑AI personal health assistant, after making key executive hires in healthcare strategy and seeing around 800 million weekly users of ChatGPT, many of whom seek medical advice. [Link]
Have a Great Weekend!
❤️ Help us create something you'll love: tell us what matters! 💬 We read all of your replies, comments, and questions. 👉 See you all next week! - Bauris
Trivia Answer: C) OxyContin
OxyContin, produced by Purdue Pharma, was aggressively marketed as being less addictive than other opioids, which turned out to be dangerously misleading. Its widespread use and abuse contributed to a massive wave of opioid addiction and overdose deaths, leading to lawsuits, bankruptcies, and major scrutiny of pharmaceutical marketing practices.
How did we do this week?
