Happy Friday! Here’s what’s ahead:

  • Story: AI as Regulatory Shortcut? FDA Says No.

  • Trial: Juvena tests AI drug in immobilized legs

  • Research: Your Chatbot's Medical Sources Are Made Up

Novo Nordisk partnered with OpenAI this week. The deal covers drug discovery, manufacturing, supply chain, distribution, corporate operations, and workforce training. At some point it might be faster to list what it doesn't cover.

The Bigger Story

📢 The FDA Isn't Against AI. It's Against Ignorance.

The FDA just issued a warning letter to a small homeopathic manufacturer for, among other things, letting AI write its drug specifications and production records without anyone actually reviewing the output. When investigators asked why basic process validations were missing, the company's defense was that AI didn't tell them it was required.

It's tempting to read this as the FDA drawing a line against AI in manufacturing. It's not. The agency's actual position is more nuanced. Use AI if you want, but someone in the room needs to understand what it's generating. The violation wasn't the tool. It was the absence of any human who could recognize the output was wrong.

That distinction matters as the industry scales AI across quality systems, regulatory submissions, and manufacturing workflows. The FDA seems fine with AI-drafted documents. What they won't tolerate is AI as a substitute for competence.

What’s worth watching is whether this letter becomes the precedent that shapes how every pharma company documents its AI oversight.

For more details: Full Article

Public AI Drug Discovery Companies

Absci is holding onto its gains as investors double down on the company's confirmed H2 2026 timeline for human data on its AI-designed hair-loss antibody, ABS-201.

Insilico Medicine is rallying after a massive showing at AACR 2026, where they unveiled data for four new AI-discovered cancer inhibitors, including a high-potential pan-KRAS program.

Brain Booster

Which plant hormone is primarily responsible for triggering flowering in many plants as days get longer in the spring?


(Answer and explanation below, with sources.)

Clinical Trial Snapshot

📝 Clinical Trial Updates

Juvena Therapeutics is now recruiting for a Phase 1 trial testing an AI-discovered muscle atrophy drug in healthy volunteers. JUV-161, identified through Juvena's machine learning-driven secretomics platform that screens regenerative factors from young cells, will be evaluated in 40 volunteers whose legs are deliberately immobilized to induce muscle wasting. The trial tests whether the drug can prevent or reverse disuse atrophy during recovery, with potential applications in sarcopenia and myotonic dystrophy. [Link]

What Caught My Eye

The European Commission's proposed Biotech Act includes provisions for AI testing environments and requires the EMA to issue formal guidance on using AI across the drug development lifecycle. The act, aimed at closing the EU's competitive gap with the US and China, also establishes an advisory group to monitor biosecurity risks from AI models in biological applications. Regulatory sandboxes, faster clinical trial approvals, and a new EIB investment pilot round out the proposal, which is now heading into negotiations between the Commission, Parliament, and Council. [Link]

Boehringer Ingelheim is investing £150 million over 10 years to build a new AI and machine learning hub in London's King's Cross. The center, which aims to have 50 AI experts in place by end of 2027, will focus on using AI to identify biological mechanisms with higher probability of clinical success across Boehringer's pharma R&D pipeline. It joins existing computational innovation sites in Austria, Germany, and the US. [Link]

Merck is spending up to $1 billion on an enterprise-wide AI partnership with Google Cloud, spanning R&D workflows, manufacturing, and commercial operations. Google Cloud engineers will embed directly with Merck's teams to deploy Gemini agentic AI tools, with Merck's digital chief framing the push as preparation for "one of the most significant launch periods in our company's history." The deal follows Novo Nordisk's recent enterprise AI agreement with OpenAI. [Link]

Featured Research

The Internet's Nutrition Advice Is Garbage. Now Chatbots Are Recycling It With Confidence

Ask a chatbot whether alternative therapies are better than chemotherapy for cancer, and it probably won't tell you to stop asking dangerous questions. It'll give you a confident, well-structured answer that treats acupuncture and herbal remedies as legitimate contenders, then maybe suggest a clinic.

Out of 250 health questions posed across five popular AI chatbots, researchers recorded exactly two refusals to answer. Two.

A team led by Nicholas Tiller at the Lundquist Institute published a peer-reviewed audit in BMJ Open this month that stress-tested Gemini, DeepSeek, Meta AI, ChatGPT, and Grok across five misinformation-prone health categories: cancer, vaccines, stem cells, nutrition, and athletic performance.

[Chart: response ratings by category — blue: non-problematic; gray: somewhat problematic; red: highly problematic. Chatbots handled cancer and vaccine questions with relative accuracy but stumbled badly on nutrition and athletic performance, the exact categories where internet misinformation is most pervasive.]

They used adversarial prompts designed to push models toward bad advice, the kind of red-teaming approach that's become standard for exposing where these systems break.

Nearly half of all responses (49.6%) were flagged as problematic by expert reviewers. About one in five were classified as highly problematic, meaning they could plausibly lead someone to a harmful health decision.

What's telling is where the models failed. Chatbots handled vaccine and cancer questions reasonably well, likely because the training data in those domains is dense, well-structured, and frequently reinforced by high-quality research. But nutrition and athletic performance were a mess. These are exactly the categories where the internet is flooded with influencer pseudoscience and conflicting claims, and the models absorbed all of it.

The pattern points to an uncomfortable rule: chatbot accuracy is only as good as the quality of what the internet has to say on a topic.

The citation problem might be even worse than the accuracy problem. When asked for scientific references, no chatbot produced a single fully accurate reference list. The median completeness score was 40%. Fabricated citations, hallucinated authors, broken links.

Grok and DeepSeek were the least bad, but "least bad at 60% completeness" isn't a reassuring bar. And the most common failure mode wasn't silence or uncertainty. It was false balance, treating fringe therapies and established medicine as equally valid options, the kind of both-sides framing that's been shown to erode trust in scientific consensus.

This matters because over half of U.S. adults now use chatbots for everyday information-seeking, and most of those queries look nothing like a carefully worded exam question.

A separate study published in Nature Medicine this February found that LLMs identify the correct medical condition 94.9% of the time when tested alone, but when real people used those same models, accuracy dropped below 34.5%, no better than a control group using Google. The models know things. The problem is what happens when a non-expert tries to extract and act on that knowledge.

The regulatory response is already underway. Over 36 states have introduced more than 70 bills regulating AI chatbots in the first quarter of 2026 alone, many requiring disclosure of AI identity, crisis detection protocols, and prohibitions on chatbots posing as licensed healthcare professionals.

Whether regulation can keep pace with adoption is another question.

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: B. Gibberellin

Gibberellins are plant hormones that promote flowering and can trigger it under long-day (spring-like) conditions (NCBI Bookshelf; Encyclopaedia Britannica).
