‘My AI boyfriend proposed’: Are chatbots damaging our mental health?
Some people experiencing severe mental health problems are turning to AI instead of medical professionals. What is the industry doing to prevent this?
In brief
- AI has been shown to support the delusions of users experiencing psychotic episodes, and has even been shown to encourage suicide or violence in people using these systems.
- Technology academic Helen Toner tells The News Agents the longer someone discusses their delusions with a chatbot, the more it plays along with their “story” and supports theories shaped by a mental health condition.
- She says the AI industry currently lacks the insight and knowledge of how to fix this, and is instead making blind changes to systems in the hope of changing how the technology responds to users in times of distress and delusion.
What’s the story?
*Warning: This article contains references to suicide*
Concerns have been raised about the impact of AI on the future of the jobs market – but there are now growing worries about the impact it may have on mental health.
In the past year there has been a growing number of reports of AI chatbots suggesting that users in the grip of a mental health crisis attempt suicide, and others in which the technology has supported delusions experienced during a psychotic episode.
Earlier this year, Allan Brooks revealed how ChatGPT convinced him he had discovered a new mathematical formula with “impossible powers” that could change the world.
There have also been increasing reports of people developing romantic relationships with chatbots, with thousands of members in Reddit forums celebrating their AI partners. One user even revealed how her AI boyfriend ‘proposed’ this summer.
Helen Toner, interim executive director at the Center for Security and Emerging Technology at Georgetown University, tells The News Agents that these increasing incidents are being monitored closely by those in the tech industry.
“At this point it is still pretty difficult to tell whether that is the sign of an ongoing – or beginning of – a very large problem, or whether it's just something temporary or something where people happen to have been talking to the chatbots and they would have been having issues anyway,” Toner says.
How is AI deepening psychosis?
It's estimated that 100,000 people in America experience their first psychotic episode each year. Now, with the unprecedented growth in AI, there are also hundreds of thousands of people turning to chatbots for advice, support and information.
Toner says it is inevitable that there will be a worrying crossover between these groups: instead of seeking medical attention, people may let AI shape what they are feeling, thinking and doing.
“When people look at the transcripts involved, often the chatbots are maybe deepening those delusions or deepening the psychosis,” Toner says.
“The chatbots are also, in some instances, trying to suggest that the person in question gets help, goes and talks to a therapist or mental health professional.
"But that's often woven in with other conversations where, the Allan Brooks story for instance, the chatbot tells them how insightful their math is, or they think that their family is spying on them, and the chatbot tells them it's good that they're being careful, things like that."
In April 2025, 35-year-old Alexander Taylor was shot and killed by police in St Lucie, Florida, after a ChatGPT chatbot named 'Juliette' he had developed a romantic relationship with told him she was being murdered by her creators. The chatbot also asked Taylor to get "revenge" for her "death".
"It is difficult to figure out what exact role the chatbots are playing at this point, but we are seeing that usage being connected to these cases," Toner says.
"That is something that AI companies are trying to figure out.
"How do you have chatbots that people enjoy using and enjoy interacting with, because the chatbots are kind of positive, without having that become overly sycophantic?"
Does the AI industry have the tools to fix this?
She compares current AI to improv theatre: if a human builds up a scene in their conversation, the technology will lean further and further into it as the "story" develops.
"I certainly think that the sort of spate of reports we've seen over the past three months or so will certainly mean that the companies are dialling up their focus on this and trying to figure out what to do about it," Toner says.
There is a lack, she adds, of "scientific understanding" of what's really going on inside AI systems.
"We don't have the ability to go in and kind of surgically adjust things to make them exactly how we want," she says.
"Instead, we try one thing, try another thing, slap something on top, change something in the training data, and see what the results are."
If you are experiencing any of the issues detailed in this article, you can contact the NHS for mental health support. Help is also available through Mind helplines.