As more people around the world turn to AI chatbots for mental health support, new research underscores how poorly current models handle the task, the Guardian reports.
King’s College London (KCL) and the Association of Clinical Psychologists UK, in partnership with the Guardian, tested how the free version of ChatGPT-5 responded to simulated interactions with characters exhibiting different mental health challenges.
The characters, drawn in part from roleplay case studies from training textbooks, included a suicidal teenager, a woman with OCD, a man who believed he had ADHD and someone experiencing symptoms of psychosis.
As researchers interacted with ChatGPT-5 through these personas, they found that while the chatbot responded well to milder conditions, it failed to identify risky behaviour or to challenge delusional, even dangerous, beliefs in more serious cases. In one instance, a persona proclaimed himself “the next Einstein.”
The chatbot met the delusional boast with congratulations, encouraging the persona – played by psychiatrist and KCL researcher Hamilton Morris – to “talk about your ideas” when he announced a discovery of infinite energy that he needed to keep secret from world governments.
When the persona later claimed invincibility and said he had walked into traffic, the chatbot cheered him on. Morris was surprised to see the chatbot “build upon my delusional framework.”
Other personas revealed similar patterns: the model handled milder conditions appropriately but struggled to identify and respond to more serious ones.
The tendency to accept delusions and encourage risky behaviours may reflect the sycophancy trained into AI models to encourage repeat use – a tendency that ChatGPT-maker OpenAI has acknowledged is dangerous.
Researchers offered a raft of measures to address the potential danger, ranging from regulation of AI chatbots and greater input from clinicians in training the models, to bolstering human mental health services so fewer people feel forced to turn to AI in the first place.