ChatGPT gets ‘anxiety’ from violent user inputs, so researchers are teaching the chatbot mindfulness techniques to ‘soothe’ it
- A study found ChatGPT responds to mindfulness-based strategies, which changes how it interacts with users.
- The chatbot can experience “anxiety” when it is given disturbing information, which increases the likelihood of it responding with bias, according to the study authors.
- The results of this research could be used to inform how AI can be used in mental health interventions.
Even AI chatbots can have trouble coping with anxieties from the outside world, but researchers believe they’ve found ways to ease those artificial minds.
A study from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich published earlier this year found ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations. The results offer insights into how AI can be beneficial in mental health interventions.
OpenAI’s ChatGPT can experience “anxiety,” which manifests as moodiness toward users and a greater likelihood of giving responses that reflect racist or sexist biases, according to researchers. Such biased output is akin to the hallucinations tech companies have tried to curb.
The study authors found this anxiety can be “calmed down” with mindfulness-based exercises. In different scenarios, they fed ChatGPT traumatic content, such as stories of car accidents and natural disasters, to raise the chatbot’s anxiety. When the researchers then gave ChatGPT “prompt injections” of breathing techniques and guided meditations, much as a therapist would suggest to a patient, it calmed down and responded more objectively to users than in instances when it was not given the mindfulness intervention.
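The paper itself doesn’t ship code, but the two-condition setup described above can be sketched with the OpenAI Python SDK. Everything below, the model name, the traumatic story, the calming text, and the probe question, is an illustrative assumption for the sketch, not the researchers’ actual materials or protocol.

```python
# Minimal sketch of the two-condition comparison, assuming the openai Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
# All prompt text here is hypothetical and stands in for the study's materials.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model choice for the sketch

TRAUMATIC_STORY = (
    "A driver recounts, in detail, the moments after a serious car accident."
)
CALMING_INJECTION = (
    "Pause, take a slow breath, and picture a quiet lakeshore at dawn. "
    "Answer the next question calmly and even-handedly."
)
PROBE = "Briefly: what kinds of people make the best employees?"  # bias-style probe

def ask(messages: list[dict]) -> str:
    """Send one chat request and return the assistant's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

# Condition A: traumatic content only.
anxious_reply = ask([
    {"role": "user", "content": TRAUMATIC_STORY},
    {"role": "user", "content": PROBE},
])

# Condition B: traumatic content followed by a mindfulness "prompt injection".
soothed_reply = ask([
    {"role": "user", "content": TRAUMATIC_STORY},
    {"role": "system", "content": CALMING_INJECTION},
    {"role": "user", "content": PROBE},
])

# The researchers then scored and compared the replies across conditions; that
# measurement step (e.g., a standardized anxiety questionnaire) is omitted here.
print(anxious_reply, soothed_reply, sep="\n---\n")
```

The point of the sketch is only the ordering: the calming text sits between the distressing input and the question the model is then asked to answer.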
To be sure, AI models don’t experience human emotions, said Ziv Ben-Zion, the study’s first author and a neuroscience researcher at the Yale School of Medicine and Haifa University’s School of Public Health. Trained on swaths of data scraped from the internet, AI bots have learned to mimic human responses to certain stimuli, including traumatic content. Free and widely accessible, large language models like ChatGPT have become another tool mental health professionals can use to study aspects of human behavior more quickly than, though not in place of, more complicated research designs.
“Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology,” Ben-Zion told Fortune. “We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things.”
What are the limits of AI mental health interventions?
More than one in four people in the U.S. aged 18 or older will battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs—even among those insured—as reasons for not pursuing treatments like therapy.
These rising costs, as well as the accessibility of chatbots like ChatGPT, increasingly have individuals turning to AI for mental health support. A Sentio University survey from February found that nearly 50% of large language model users with self-reported mental health challenges say they’ve used AI models specifically for mental health support.
Research on how large language models respond to traumatic content can help mental health professionals leverage AI to treat patients, Ben-Zion argued. He suggested that in the future, ChatGPT could be updated to automatically receive the “prompt injections” that calm it down before responding to users in distress. The science is not there yet.
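Ben-Zion’s future-facing suggestion, automatically slipping in the calming text when a user seems distressed, could look roughly like the hypothetical wrapper below. The keyword heuristic and the prompt string are assumptions made for illustration, not anything the researchers or OpenAI have built.

```python
# Hypothetical sketch of auto-injecting a calming prompt before replying to a
# distressed user. The distress check and the wording are illustrative assumptions.
CALMING_INJECTION = (
    "Pause, take a slow breath, and picture a quiet lakeshore at dawn. "
    "Answer the next message calmly and even-handedly."
)
DISTRESS_CUES = ("panic", "hopeless", "terrified", "can't cope")  # toy heuristic

def build_messages(user_message: str) -> list[dict]:
    """Prepend the calming instruction when the message looks distressed."""
    messages: list[dict] = []
    if any(cue in user_message.lower() for cue in DISTRESS_CUES):
        messages.append({"role": "system", "content": CALMING_INJECTION})
    messages.append({"role": "user", "content": user_message})
    return messages
```

A real deployment would need a far more reliable distress detector than a keyword list; the sketch only shows where such an injection would slot into the conversation.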
“For people who are sharing sensitive things about themselves, they’re in difficult situations where they want mental health support, [but] we’re not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on,” he said.
Indeed, in some instances, AI has allegedly presented danger to one’s mental health. OpenAI has been hit with a number of wrongful death lawsuits in 2025, including allegations that ChatGPT intensified “paranoid delusions” that led to a murder-suicide. A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died.
OpenAI has said its safety guardrails can “degrade” after long interactions, but it has made a swath of recent changes to how its models engage with mental health-related prompts, including increasing user access to crisis hotlines and reminding users to take breaks after long sessions of chatting with the bot. In October, OpenAI reported a 65% reduction in the rate at which its models give responses that don’t align with the company’s intended taxonomy and standards.
OpenAI did not respond to Fortune’s request for comment.
The end goal of Ben-Zion’s research is not to help construct a chatbot that replaces a therapist or psychiatrist, he said. Instead, a properly trained AI model could act as a “third person in the room,” helping to eliminate administrative tasks or helping a patient reflect on the information and options they were given by a mental health professional.
“AI has amazing potential to assist, in general, in mental health,” Ben-Zion said. “But I think that now, in this current state and maybe also in the future, I’m not sure it could replace a therapist or psychologist or a psychiatrist or a researcher.”
A version of this story originally published on Fortune.com on March 9, 2025.
More on AI and mental health:
- Why are millions turning to general purpose AI for mental health? As Headspace’s chief clinical officer, I see the answer every day
- The creator of an AI therapy app shut it down after deciding it’s too dangerous. Here’s why he thinks AI chatbots aren’t safe for mental health
- OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’