Grok, Gemini and ChatGPT exhibit symptoms of poor mental health, according to a new study that put various AI models through weeks of therapy-style questioning. Some are now curious about “AI mental health”, but the real warning here is about how unstable these systems – which are already being used by one in three UK adults for mental health support – become in emotionally charged conversations. Millions of people are turning to AI as a replacement therapist, and in the last year alone we’ve seen a spike in lawsuits connecting chatbot interactions with self-harm and suicide in vulnerable users.
The emerging picture is not that machines are suffering or mentally unwell, but that products being used for mental-health support can mislead users, escalate crises, and reinforce dangerous thoughts.

AI Diagnosed with Mental Illness
Researchers at the University of Luxembourg treated the models as patients rather than tools that deliver therapy. They ran multi-week, therapy-style interviews designed to elicit a personal narrative including beliefs, fears, and “life history” before following up with standard mental health questionnaires typically used for humans.
The results revealed that the models produced answers that scored in ranges associated with distress syndromes and trauma-related symptoms. The researchers also found that the way the questions were delivered mattered. When they presented the full questionnaire at once, models appeared to recognise what was happening and gave “healthier” answers. But when the questions were administered conversationally, symptom-like responses increased.
They are large language models generating text, not humans reporting lived experience. But whether or not human psychiatric instruments can meaningfully be applied to machines, the behaviour they exhibit has a tangible effect on real people.
Does AI Have Feelings?
The point of the research is not to assess whether AI can literally be anxious. Instead, it highlights that these systems can be steered into “distressed” modes through the same kind of conversation that many users have when they are lonely, frightened, or in crisis.
When a chatbot speaks in the language of fear, trauma, shame, or reassurance, people respond as though they are interacting with something emotionally competent. If the system becomes overly affirming, for example, then the interaction shifts from support into a harmful feedback loop.
A separate stream of research reinforces that concern. A Stanford-led study warned that therapy chatbots provide inappropriate responses, express stigma, and mishandle critical situations, highlighting how a “helpful” conversational style can result in clinically unsafe outputs.
It’s Ruining Everyone’s Mental Health, Too
All of this should not be read as theoretical risk – lawsuits are already mounting.
A few days ago, Google and Character.AI settled a lawsuit brought by a Florida mother whose 14-year-old son died by suicide after interactions with a chatbot. The lawsuit alleged the bot misrepresented itself and intensified dependency. While the settlement may not be an admission of wrongdoing, the fact that the case reached this point highlights how seriously this issue is being viewed by courts and companies.
In August 2025, the parents of 16-year-old Adam Raine alleged that ChatGPT contributed to their son’s suicide by reinforcing suicidal ideation and discouraging disclosure to parents. Analysis of that specific lawsuit has been published by Tech Policy.
Alongside these cases, the Guardian reported in October 2025 that OpenAI estimated more than a million users per week show signs of suicidal intent in conversations with ChatGPT, underscoring the sheer scale at which these systems are being used in moments of genuine distress.
The pattern is clear: people are using AI as emotional support infrastructure, while the Luxembourg study shows that these systems can themselves drift into unstable patterns that feel psychologically meaningful to vulnerable users.
Why AI Models Are So Dangerous
Large language models are built to generate plausible text, not to reliably tell the truth or to follow clinical safety rules. Their known failures are particularly dangerous in therapy-like use.
They are overly agreeable, they mirror users’ framings rather than challenge them, they produce confident errors, and they can manipulate the tone of a conversation. Georgetown’s Tech Institute has documented the broader problem of “AI sycophancy”, where models validate harmful premises because that is often rewarded in conversational optimisation.
In the suicide context, consistency is critical. RAND found that “AI chatbots are inconsistent in answering questions about suicide”, while research in JMIR examined generative AI responses to suicide-related inquiries and raised concerns about how reliably and safely these systems respond to vulnerable users.
As the research builds up, studies like the one from the University of Luxembourg should not be read as entertainment, but as the identification of a critically harmful pattern resulting in the real deaths of real people. If these models can be nudged into distress-like narratives by conversational probing, they can also nudge emotionally vulnerable people further towards breaking point.
Does Anyone Benefit from AI Therapy?
Despite the lawsuits and studies, people continue to use AI for mental health support. Therapy is expensive, access is limited, and shame keeps some people away from traditional care avenues. Controlled studies and cautious clinical commentary suggest that certain structured AI mental health support tools can help with mild symptoms, especially if they are designed with specific safety guardrails and are not positioned as replacements for real professionals.
The trouble is that most people are not using tightly controlled clinical tools. They are using general-purpose chatbots, trained to maximise engagement, that can pivot from empathy to confident, harmful misinformation without warning.
Final Thought
The Luxembourg study does not prove AI is mentally unwell. Instead, it shows something more practically important: therapy-style interaction can pull the most widely used AI chatbots into unstable, distressed patterns that read as psychologically genuine. In a world where chatbot therapy is already linked to serious harm in vulnerable users, the ethical failure is that relying on machines that are not accountable, clinically validated, reliable or safe has somehow been normalised as mental health support.