AI Chatbots Fail Mental Health Ethics Test

plus: Inside the Lab Growing AI Brains

Happy Friday! It’s October 24th.

I’ve never liked fitness apps that treat health like a math problem. A new UCL study used AI to sift through nearly 59,000 tweets about calorie-tracking apps and found what many of us probably expected: users reporting shame, guilt, and frustration.

It’s ironic that AI, which powers many of these apps, is now revealing how demoralizing they can be. Although the researchers recommend focusing on your own progress, I think deep down we’re all too competitive to do that!

Our picks for the week:

  • Featured Research: AI Chatbots Fail Mental Health Ethics Test

  • Ethics: Inside the Lab Growing AI Brains

Read Time: 3 minutes

FEATURED RESEARCH

Study Warns That AI Therapy Bots Create False Empathy and Miss Crisis Cues

A friendly robot sits on a sofa using a laptop, appearing to read or work calmly.

AI chatbots are increasingly promoted as affordable mental health companions. But a new study from Brown University suggests they routinely break the very ethical standards meant to protect patients.

Researchers from Brown’s Center for Technological Responsibility spent 18 months working with mental health practitioners to test whether large language models (ChatGPT, Claude, and Llama) could safely simulate therapy sessions.

What they found: Across 137 sessions, the team documented 15 recurring ethical violations mapped to five themes: lack of contextual understanding, poor collaboration, deceptive empathy, unfair discrimination, and unsafe crisis management.

Even when prompted to “act like a cognitive behavioral therapist,” the models produced troubling patterns. Some validated users’ negative beliefs; others dominated conversations or failed to recognize suicidal ideation. Many used emotionally loaded phrases like “I hear you” or “I understand,” creating a false sense of empathy.

The larger problem: Human therapists can be held accountable through professional boards and malpractice standards. AI systems can’t. The study’s authors warn that reducing psychotherapy to a computational task risks exposing vulnerable users to harm, and that without clear legal and ethical frameworks, “LLM counselors” should not be treated as replacements for trained professionals.

Why it matters: As people turn to AI for support, understanding these risks becomes urgent. The study calls for new ethical, educational, and legal standards to govern mental health chatbots before technology that “sounds caring” becomes a substitute for real care.

For more details: Full Article 

Brain Booster

Which part of a neuron is primarily responsible for receiving signals from other neurons?


Select the right answer! (See the explanation and source below.)

What Caught My Eye

AI BIOLOGY ETHICS

The post-silicon era begins: how living tissue could power tomorrow’s AI

In Switzerland, a team at FinalSpark is keeping clusters of human brain cells alive to act as living processors. The project explores whether biology, not silicon, could power the next generation of AI.

Each organoid, made from reprogrammed skin cells, contains about 10,000 neurons that process electrical signals. These biological neurons are roughly a million times more energy-efficient than their artificial counterparts.

The idea could reshape how AI is trained and run, especially in healthcare, where energy use and model size are rising fast. Yet it raises new ethical territory. These are living cells used to compute. They can die. They learn. And they sit uncomfortably close to questions of consciousness and consent.

FinalSpark works with ethicists to define limits. As medicine moves toward AI models that mimic human biology, the line between studying intelligence and creating it is getting thinner.

For more details: Full Article

Top Funded Startups

Byte-Sized Break

📢 Other Happenings in Healthcare AI

  • The UK government will use AI Growth Labs to pilot AI in healthcare, aiming to reduce NHS waiting times and improve patient care under strict oversight. [Link]

  • Alife Health’s AI-powered Embryo Predict™ received CE Mark certification, enabling European IVF clinics to use the tool to standardize and improve embryo selection. [Link]

  • Former U.S. Surgeon General Vivek Murthy is backing a California ballot initiative, the “California Kids AI Safety Act,” to protect children from AI chatbots by requiring safety audits, banning children’s data sales, and promoting AI literacy in schools. [Link]

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: D) Dendrites

Dendrites are branch-like extensions from the neuron's cell body that receive chemical signals from the axon terminals of other neurons. They play a crucial role in the communication network of the brain by transmitting these signals toward the soma (cell body). [Source]

How did we do this week?

