Happy Friday! Here’s what’s ahead:

  • Story: The FDA is open to animal-free testing data, but offers no safety net

  • Trial: Lab-grown dopamine neurons head into human brains in Phase 1

  • Research: A $10 AI reproduced Alzheimer’s biomarker findings in minutes

Roche just announced the pharma industry's largest GPU footprint: over 3,500 Blackwell GPUs across the US and Europe. Somewhere, an IT procurement team is having the best and worst week of their lives.

The Bigger Story

📢 The FDA Is Open to Animal-Free Testing Data, But Offers No Safety Net

For years, drug sponsors have treated animal testing alternatives the way a kid treats vegetables: they'll try them, but only if someone makes them feel safe first. That just changed.

New draft guidance from the FDA’s CDER says you can submit data from organoids, organs-on-chips, and computational models, even if the agency has never signed off on that specific method before. No validation required. No qualification necessary. Just bring good science.

The NIH is backing the push with $150 million for animal testing alternatives, funding new approach methodology (NAM) technology centers, a data hub, and a coordinating center to make these tools actually work.

Still, encouragement only goes so far. The FDA isn't offering faster reviews or any other carrot for sponsors willing to go first. The pitch is basically: trust us, we're open to it now. For an industry that's spent decades optimizing around animal studies, "we're open to it" is a big ask.

So who moves first, and what does it cost them if the review team on the other end isn't ready?

For more details: Full Article

Public AI Drug Discovery Companies

Absci’s massive rally was driven by its SEC Form 4 filing on March 12, showing a significant internal buy from its Chief Innovation Officer.

Pharos iBio is continuing its upward momentum on a mix of corporate restructuring (moving to a major tech hub) and its presence at one of Asia's largest biotech conferences.

Nothing else stood out this week, but wow, these AI drug discovery companies sure can make some major moves!

Brain Booster

Select the right answer! (See explanation below and source)

Clinical Trial Snapshot

📝 Clinical Trial Updates

iRegene Therapeutics is now recruiting for a Phase 1 trial transplanting lab-grown dopaminergic neurons into the brains of Parkinson's patients. The single-arm, open-label study will deliver NouvNeu001, a human dopaminergic progenitor cell therapy, into the bilateral putamen via stereotactic neurosurgery. Patients will take immunosuppressants for 24 to 36 weeks post-transplant. [Link]

What Caught My Eye

Basecamp Research is building a trillion-gene genomic database, aiming to expand known genetic diversity by 100x. The initiative partners with Anthropic, Ultima Genomics, PacBio, and NVIDIA to sequence over 100 million species across thousands of sites globally, compressing what would have been 20+ years of processing into under two. [Link]

Global pharma companies have now pledged over $500 billion in combined U.S. manufacturing and R&D investment to get ahead of threatened 100% tariffs on imported drugs. Pfizer and AstraZeneca secured multi-year exemptions through pricing deals and commitments to Trump's TrumpRx.gov platform, while Lilly, J&J, Merck, and others are fast-tracking new plant construction. AbbVie's commitment alone is $100 billion over a decade, paired with a three-year drug pricing deal with the administration. [Link]

Merck launched the second cohort of its Digital Sciences Studio accelerator in Montreal, backing 12 AI and digital health startups across the U.S. and Canada. Each startup gets $100K in funding plus mentorship from Merck and nonprofit partner Centech, with a focus on AI for biologics R&D, 3D disease models, and preventive health. The cohort includes companies working on AI-designed peptide therapeutics, predictive CNS drug development models, and scalable organ-on-chip platforms. [Link]

Featured Research

The $10 AI That Matched Months of Expert Alzheimer's Research

Everyone's building AI scientists right now. Google has one (proprietary, invite-only). Sakana has one (it got a paper accepted at a workshop, then withdrew it). OpenAI is pitching GPT-5 as a research partner.

But a team from Washington University, Berkeley National Lab, and a dozen other institutions just did something none of those projects have done: they built one you can actually look inside, pointed it at real clinical datasets, and published every output for anyone to audit.

OpenScientist is an open-source platform built on Claude Code that works like an AI research associate on autopilot. You give it a scientific question and a dataset. It writes Python code, runs analyses, searches PubMed, builds hypotheses, and iterates through ten cycles of investigate-and-refine before producing a final report.
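That loop can be sketched in a few lines. This is a hypothetical illustration, not OpenScientist's actual code: every function name here is a stand-in for a capability the paper describes (the agent writing and running Python, querying PubMed, revising its hypotheses).

```python
# Hypothetical sketch of an investigate-and-refine loop like the one
# described above. The stubs below are placeholders for the real agent's
# capabilities; none of these names come from OpenScientist's codebase.

def run_analysis(dataset, hypotheses):
    # Placeholder: the real agent writes and executes its own Python here.
    return {"signal": sum(dataset) / len(dataset)}

def search_literature(question, results):
    # Placeholder: the real agent searches PubMed for supporting evidence.
    return [f"evidence for {question!r} at signal={results['signal']:.2f}"]

def refine_hypotheses(hypotheses, results, evidence):
    # Placeholder: revise the hypothesis list against the latest results.
    return hypotheses + [f"hypothesis supported by {len(evidence)} source(s)"]

def open_scientist(question, dataset, n_cycles=10):
    """Run ten investigate-and-refine cycles, then summarize."""
    hypotheses, history = [], []
    for _ in range(n_cycles):
        results = run_analysis(dataset, hypotheses)
        evidence = search_literature(question, results)
        hypotheses = refine_hypotheses(hypotheses, results, evidence)
        history.append((results, evidence, list(hypotheses)))
    return {"question": question, "cycles": len(history),
            "final_hypotheses": hypotheses}

report = open_scientist("Does biomarker X predict amyloid PET status?", [1, 2, 3])
```

The point of the structure, per the paper's description, is that each cycle's output feeds the next: the agent doesn't produce one answer, it produces ten drafts of an investigation and reports the last.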

Total cost per run: less than $10. Total time: minutes, for analyses the authors say would normally take weeks to months of expert human effort.

They handed it Alzheimer's biomarker data from 325 participants and a grant proposal, then told it to execute the analysis plan. OpenScientist independently identified a blood-based biomarker as a predictor of amyloid PET status, matching the conclusions of a human team that used entirely different software.

That biomarker is now the backbone of the first FDA-cleared blood test for Alzheimer's diagnosis, approved in May 2025. The AI landed on the same signal the field is betting its diagnostic future on.

In another test, they fed it multiple myeloma gene expression data and asked it to generate hypotheses about disease progression. It proposed a model where the unfolded protein response collapses as cancer advances. Then they gave it a completely separate dataset to validate those hypotheses, alongside a second dataset where they'd secretly scrambled the disease labels.

OpenScientist confirmed its findings in the real data, found no signal in the randomized data, and flagged that the scrambled dataset had a 6.9-fold lower signal-to-noise ratio.

It didn't figure out the labels were shuffled, but it knew something was wrong.
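The scrambled-label control is worth seeing in miniature. This toy (synthetic numbers, not the paper's data or its 6.9-fold figure) shows why shuffling disease labels is such a clean negative control: any real group difference should collapse toward noise.

```python
# Toy version of the randomized-label control described above.
# All values are synthetic; this is not the paper's data or method.
import random

random.seed(0)

# Synthetic expression values: "disease" samples shifted upward.
control = [random.gauss(0.0, 1.0) for _ in range(50)]
disease = [random.gauss(1.5, 1.0) for _ in range(50)]

def snr(a, b):
    """Crude signal-to-noise ratio: |mean difference| / pooled std."""
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    pooled_std = ((var(a) + var(b)) / 2) ** 0.5
    return abs(mean(a) - mean(b)) / pooled_std

real_snr = snr(control, disease)

# Negative control: pool the samples and reassign group labels at random.
shuffled = control + disease
random.shuffle(shuffled)
scrambled_snr = snr(shuffled[:50], shuffled[50:])

# With real labels the signal survives; with scrambled labels it collapses.
```

An agent that reports a strong effect in both datasets is confirming its own bias; one that flags the collapse, as OpenScientist did, is at least checking its work.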

Validating real patient data while correctly reporting no signal in a randomized control is a basic test of scientific reasoning that most AI systems haven't been asked to pass.

That's the part worth sitting with. The system caught a discrepancy it wasn't told to look for. It didn't blindly validate its own hypotheses when the data didn't support them. It even rejected one of its original predictions when a new dataset contradicted it. For an AI field plagued by confirmation bias and hallucination, that kind of epistemic humility isn't trivial.

Look closer, though, and the autonomy starts to fray. Across multiple runs on the same Alzheimer's dataset, OpenScientist treated missing values as zeros, pulled a cutoff from the literature instead of the one specified in the study protocol, and counted duplicate records as independent observations.

The authors had to iteratively rewrite their prompts, clean the data, and add explicit guardrails before the system produced reliable output. Anyone who's managed a capable but overconfident junior analyst knows the dynamic: impressive throughput, questionable defaults, needs supervision on the details that actually matter.
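Two of those questionable defaults are easy to demonstrate. The numbers below are made up for illustration, but the failure modes match the ones the authors describe: missing values silently treated as zeros, and duplicate records inflating the apparent sample size.

```python
# Illustration (with invented values) of two defaults the authors had to
# guard against: missing-as-zero and duplicates counted as independent.

readings = [72.0, None, 68.0, 75.0, None, 70.0]  # biomarker values, some missing

# Naive default: treat missing as 0.0, which drags the mean down hard.
naive_mean = sum(x if x is not None else 0.0 for x in readings) / len(readings)

# Correct handling: drop missing values before averaging.
observed = [x for x in readings if x is not None]
clean_mean = sum(observed) / len(observed)

# Duplicates: the same patient recorded twice is not two observations.
records = [("pt01", 72.0), ("pt02", 68.0), ("pt01", 72.0)]  # pt01 repeated
unique = dict(records)           # keep one reading per patient ID
n_naive, n_unique = len(records), len(unique)
```

With four real readings averaging 71.25, the missing-as-zero default reports 47.5: a nonsense number that looks perfectly plausible in a results table, which is exactly why it needs a human checking the defaults.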

And unlike almost everything else in this space, you can see how it got there. Google's AI co-scientist, built on Gemini 2.0, remains available only through a proprietary trusted tester program. OpenScientist ships under an Apache 2.0 license with all code, outputs, and intermediate reasoning publicly available.

So what do you do with a system that can match expert-level biomarker analysis in minutes and also silently treats blank cells as zeros? The authors' own framing is honest: "a useful and still maturing co-scientist." The word doing all the work in that sentence is "co."

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: C. They preserve patient-specific cellular architecture and genetic heterogeneity

Organoids grown from patient tissues retain many structural and genetic features of the original organ or tumor, making them valuable for predicting individual drug responses and supporting personalized medicine approaches. [Source]
