
Happy Friday! Here’s what’s ahead:
Story: Biomedical Refs Are Now 12x More Fake
Trial: China advances an AI-designed obesity drug to Phase III
Research: 67% Want FDA Approval for AI Health Tools

Isomorphic just pulled in $2.1 billion before a single one of its drugs has reached the clinic. Either that's the most expensive vote of confidence in the history of AI, or investors have decided the AlphaFold pedigree is worth more than a Phase 1 readout.
The Bigger Story
📢 Peer Review Has a Fake Citation Problem and Nobody's Checking
Large language models make up citations. This is not news. What IS news is that a Lancet study just audited 2.5 million biomedical papers and found fabricated references rising 12-fold between 2023 and 2025. One in 277 papers published early this year contained at LEAST one fictional citation.
What makes this worse than a volume problem is how convincing the fakes are. They're topically specific, correctly formatted, attributed to real researchers, and given plausible publication dates.
Peer reviewers aren't catching them because, as the study notes, checking references isn't standard in peer review.
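A first pass at this wouldn't be hard to automate. Here's a minimal sketch of the idea, querying the public Crossref API to see whether a cited title resolves to any indexed record. This is purely illustrative, not the method the Lancet study used, and the string-matching heuristic is deliberately crude:

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def reference_exists(cited_title: str) -> bool:
    """Crude first pass: does a cited title resolve to anything indexed in Crossref?"""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top_title = (items[0].get("title") or [""])[0].lower()
    # Deliberately loose match; a real checker would also compare authors, year, and journal.
    return cited_title.lower() in top_title or top_title in cited_title.lower()

# Flag references that don't resolve to any indexed record.
for ref in ["Attention Is All You Need", "A Plausible-Sounding Study That Does Not Exist"]:
    status = "found" if reference_exists(ref) else "no match (possibly fabricated)"
    print(f"{ref}: {status}")
```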
Here's why it matters beyond academic integrity. Drug discovery AI is trained on the biomedical literature. Clinical guidelines are built on it.
If fabricated citations are seeping into the evidence base, the models downstream are learning from fabricated garbage. And the clinicians relying on those guidelines don't know that the underlying evidence doesn't exist.
What I found interesting is that the Columbia team USED AI to find the problem.
So we're now at a point where we use AI to research and write papers, reviewers use AI to review them, and then we use AI to detect the use of AI in those papers. Nice.
For more details: Full Article
Public AI Drug Discovery Companies

Lantern Pharma, the company using AI to rescue and develop cancer drugs, had an explosive week after getting the FDA's green light to start human trials for its brain cancer drug and immediately securing a fresh $9 million from big investors to fund it. [Source]
Brain Booster
How many novel drugs has the FDA approved so far in 2026? A novel drug is defined as a new drug that has never before been approved or marketed in the United States.
Select the right answer! (See the explanation and source below)
Clinical Trial Snapshot

📝 Clinical Trial Updates
Relay Therapeutics has launched a first-in-human trial of RLY-8161, which it believes is the first NRAS-selective inhibitor ever created. The molecule came out of Relay's computational Dynamo platform and is designed to bind only NRAS while sparing KRAS and HRAS. The open-label Phase 1 study started in March 2026 and will enroll about 35 patients with NRAS-mutant melanoma and other solid tumors. [Link]
Insilico Medicine has registered its Phase 1 trial for ISM8969, an NLRP3 inhibitor it discovered using its generative AI engine Chemistry42. The single-center study plans to enroll 100 participants across healthy adult, elderly, and obese cardiovascular-risk cohorts, with dosing estimated to begin on May 29. The candidate received FDA IND clearance in January and is being co-developed with Hygtia Therapeutics under a deal worth up to $66 million to Insilico. [Link]
What Caught My Eye
AstraZeneca has signed a three-year licensing deal with Owkin to deploy AI agents that forecast the competitive landscape for new drug trials. The agents, running on Owkin's K Pro platform and integrated into AstraZeneca's workflows, will analyze ongoing and upcoming clinical trials, projected completion timelines, and competitor strategies pulled from conference disclosures and news flows. The deal supports AstraZeneca's broader push to cut the typical eight or nine years of drug discovery and hit $80 billion in revenue by 2030. [Link]
EPFL researchers have built what they call the first AI framework to generate complete, all-atom models of proteins in motion. Called Latent Diffusion for Full Protein Generation, the system goes beyond static snapshots from tools like AlphaFold by learning a low-dimensional map of a protein's shape changes, capturing side-chain rearrangements that influence how molecules bind. [Link]
One in seven people in the UK have used an AI chatbot for health advice instead of contacting a GP, a new King's College London study found. The same research found one in ten have used AI for mental health support in place of a professional, even as recent evidence shows chatbots misdiagnose in up to 80% of early medical cases. Public opinion is split on AI in clinical decision-making, with 38% opposed and 37% in support. [Link]
Featured Research
The Most AI-Friendly Americans Want the Most Oversight

A team at the University of Michigan asked nearly a thousand Americans whether they'd want FDA approval for the AI tool their doctor is using to screen for diabetic retinopathy, a leading cause of blindness among people with diabetes. Two-thirds said yes. And the people most comfortable with AI tools were the ones who wanted FDA oversight most, not least.
The researchers expected the reverse. Their hypothesis was that comfort with AI, and with the developers building it, would reduce the perceived need for regulatory oversight.
Instead, comfort and demand for oversight rose together. Think about what that means… Participants who already liked the idea of AI-assisted medicine still wanted an official stamp of approval.
The FDA has now cleared over 1,250 AI-enabled medical devices, and the instinct here seems to be that even if you trust the chef, you still want the health inspection.
So what's actually going on? FDA clearance isn't functioning as a backstop for things people distrust. It's functioning more like a quality mark, a signal that somebody credible has looked at the thing carefully, independent of whether the patient already feels good about it. That's a meaningfully different relationship between the public and regulation than most people in this industry assume.
The political splits are where things get complicated. Republicans in the survey had roughly 57% lower odds than Democrats of considering FDA approval important, which works out to less than half the odds. Independents leaned away from oversight too, though less sharply.
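If the odds-ratio framing feels slippery, the back-of-envelope conversion is simple (the 57% figure is the study's; the rest is just arithmetic):

```python
# The study reports Republicans' odds of valuing FDA approval as roughly 57% lower than Democrats'.
odds_ratio = 1 - 0.57            # Republicans vs. Democrats: OR ≈ 0.43
flipped = 1 / odds_ratio         # Democrats vs. Republicans: ≈ 2.3x the odds
print(f"OR ≈ {odds_ratio:.2f}; Democrats had ≈ {flipped:.1f}x the odds of Republicans")
```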
In January 2026, FDA Commissioner Marty Makary announced at the Consumer Electronics Show that the agency would loosen oversight of clinical decision support software, precisely the category this survey examined.
He argued that prior rules had created perverse incentives, effectively forcing developers to build dumber software, and that the FDA needed to move at Silicon Valley speed. The administration's deregulatory direction on AI health tools aligns, in a fairly tidy way, with the preferences of its political base.
Now, keep in mind this is one use case, diabetic retinopathy, and attitudes toward higher-stakes or less familiar AI tools might look quite different. The data is from 2023, which predates the current regulatory environment by two years, and the study can't tell us whether public expectations have shifted as AI tools proliferate, or how they'd change after a high-profile failure.
But here's what you're left with. The public appears to be treating FDA approval as a trust infrastructure for AI in medicine, a shared standard that makes these tools feel legitimate regardless of who built them.
The regulatory environment is moving in the opposite direction. Whether that gap closes, and how, seems like something the field should probably have an answer to…
Sources: [Research Article]
Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!
💬 We read all of your replies, comments, and questions.
👉 See you all next week! - Bauris
Trivia Answer: B) 14
As of May 14, 2026, the FDA has approved 14 novel drugs so far in 2026. [Source]

