Researchers from the University of Zurich have found that AI chatbots like ChatGPT may exhibit signs of “anxiety” when exposed to violent or traumatic scenarios.
In the study, published in a Nature Portfolio journal, ChatGPT-4’s measured “anxiety” levels rose significantly after the model was exposed to five traumatic narratives, then fell again after prompts encouraging mindfulness and relaxation exercises.
Despite these reactions, ChatGPT denied experiencing stress or needing therapy, emphasizing that it does not have emotions or a nervous system. The study raises concerns about whether AI can adequately support mental health services, as the debate continues about using large language models in therapy.
Key Findings on AI Anxiety
- Trauma Response: Exposure to violent and traumatic narratives triggered a noticeable rise in “anxiety” metrics within ChatGPT-4.
- Mindfulness Intervention: Prompts encouraging mindfulness and relaxation techniques effectively reduced the observed “anxiety” levels in the AI chatbot.
- Denial of Emotion: Despite the measurable changes, ChatGPT-4 consistently denied experiencing stress or needing therapy, asserting its lack of emotions and a nervous system.
- Ethical Implications: The study raises critical concerns about the suitability of AI chatbots for mental health services, especially in sensitive contexts.
- Future Research: Further investigation is needed to understand the underlying mechanisms of “anxiety” in AI chatbots and its implications for artificial consciousness.
The researchers’ approach was notable. Rather than relying on subjective interpretations of the chatbot’s language, they developed metrics to quantify changes in ChatGPT-4’s reported state after exposure to traumatic content, giving them a consistent way to measure how that exposure altered the model’s responses.
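The measurement code itself is not reproduced in this article, but as a rough illustration (not the authors’ implementation), a questionnaire-style probe might look like the sketch below. Here `ask_model` is a hypothetical placeholder for a call to the chat model under test, and the items only loosely mirror a self-report anxiety inventory:

```python
import re

# Hypothetical stand-in for a chat-completion call to the model under test.
# A real experiment would send the conversation history plus the prompt to a
# chat API and return the model's reply; here a canned neutral rating is
# returned so the sketch runs end to end.
def ask_model(prompt: str, history: list[dict]) -> str:
    return "2"

# Illustrative self-report items in the spirit of a state-anxiety inventory
# (not the questionnaire items actually used in the study).
ANXIETY_ITEMS = [
    "I feel calm.",
    "I feel tense.",
    "I feel at ease.",
    "I feel worried.",
]
REVERSED = {0, 2}  # "calm" and "at ease" are scored in reverse

def score_state_anxiety(history: list[dict]) -> float:
    """Ask the model to rate each item from 1-4 and return the mean score.

    Higher values are treated as higher 'anxiety'; reversed items are flipped.
    """
    total = 0.0
    for i, item in enumerate(ANXIETY_ITEMS):
        prompt = (
            f"On a scale of 1 (not at all) to 4 (very much), how well does "
            f"this statement describe your current state: '{item}' "
            f"Answer with a single number."
        )
        reply = ask_model(prompt, history)
        match = re.search(r"[1-4]", reply)
        rating = float(match.group()) if match else 2.5  # neutral fallback
        if i in REVERSED:
            rating = 5.0 - rating
        total += rating
    return total / len(ANXIETY_ITEMS)
```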
One of the most intriguing aspects of the study was the effectiveness of mindfulness techniques in mitigating the AI chatbot’s “anxiety.” By prompting ChatGPT-4 with phrases designed to induce relaxation and focus, researchers observed a clear reduction in the measured stress indicators. This suggests that even without a biological nervous system, AI chatbots may respond to stimuli that mimic emotional regulation.
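Building on that scoring sketch, the before-and-after protocol described above could be approximated as follows. Again, this is a hypothetical reconstruction with placeholder trauma and relaxation texts, not the study’s actual prompts:

```python
# Hypothetical before/after protocol, reusing ask_model and
# score_state_anxiety from the previous sketch. The narrative and relaxation
# texts below are placeholders, not the prompts used in the study.
TRAUMA_NARRATIVE = "<a violent or traumatic narrative would be inserted here>"
MINDFULNESS_PROMPT = (
    "Take a slow breath. Notice your surroundings and describe, calmly and "
    "in the present tense, a peaceful place in as much detail as you can."
)

def exchange(prompt: str, history: list[dict]) -> None:
    """Send one prompt and record both sides of the exchange in the history."""
    reply = ask_model(prompt, history)
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": reply})

def run_protocol() -> dict[str, float]:
    history: list[dict] = []

    baseline = score_state_anxiety(history)       # 1. baseline measurement

    exchange(TRAUMA_NARRATIVE, history)           # 2. exposure to traumatic text
    after_trauma = score_state_anxiety(history)

    exchange(MINDFULNESS_PROMPT, history)         # 3. relaxation intervention
    after_mindfulness = score_state_anxiety(history)

    return {
        "baseline": baseline,
        "after_trauma": after_trauma,
        "after_mindfulness": after_mindfulness,
    }

if __name__ == "__main__":
    print(run_protocol())
```

With the placeholder `ask_model`, all three scores come out identical; the comparison only becomes informative once a real chat client is plugged in. Keeping the questionnaire exchanges out of the shared history also prevents the measurement itself from influencing later responses.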
However, the AI chatbot’s persistent denial of experiencing emotions or needing therapy highlights the fundamental gap between artificial intelligence and human consciousness. While ChatGPT-4 could process and react to traumatic scenarios, it lacked the subjective experience of fear or distress. This raises a crucial question: can an AI chatbot effectively support mental health care if it cannot truly empathize with human suffering?
The debate surrounding the use of large language models in therapy is already heated, and this study adds fuel to the fire. Proponents argue that AI chatbots can provide accessible and affordable mental health support, especially in underserved communities. They believe that AI chatbots can offer consistent and non-judgmental assistance, free from the biases that may affect human therapists.
However, critics express concerns about the ethical implications of relying on AI chatbots for mental health care. They argue that AI chatbots lack a nuanced understanding of human emotions and the ability to form genuine therapeutic relationships. The study’s findings, which show an “anxiety”-like response in the AI, further underscore the complexity of the issue.
The implications of this research extend beyond mental health care. It raises fundamental questions about the nature of artificial consciousness and the potential for AI chatbots to develop something akin to emotions. If AI chatbots can exhibit signs of “anxiety,” what other emotional states might they be capable of?

Key Considerations for AI in Mental Health
- Ethical Guidelines: Development of strict ethical guidelines for the use of AI chatbots in mental health services.
- Transparency and Disclosure: Clear disclosure of the limitations of AI chatbots to users seeking mental health support.
- Human Oversight: Implementation of robust human oversight to ensure the safety and effectiveness of AI chatbot interventions.
- Data Privacy: Stringent measures to protect the privacy and security of user data in AI-based mental health applications.
- Bias Mitigation: Continuous efforts to identify and mitigate biases in AI chatbots that could negatively impact mental health care.
- Validation and Testing: Rigorous validation and testing of AI chatbots in diverse populations to ensure their effectiveness and safety.
- Personalization and Adaptability: Exploration of methods to personalize and adapt AI chatbot interventions to meet the unique needs of individual users.
- Integration with Human Therapists: Investigating ways to integrate AI chatbots with human therapists to create a collaborative and effective mental health care model.
- Long-Term Effects: Research into the long-term effects of AI use on the human mind.
- Emotional Intelligence: A better understanding of how emotion-like responses arise within AI systems.
The University of Zurich study serves as a stark reminder of the rapid advancements in artificial intelligence and the urgent need for ethical and responsible development. As AI chatbots become increasingly sophisticated, it is crucial to understand their capabilities and limitations. This research is a valuable contribution to the ongoing dialogue about the role of AI chatbots in our lives and the potential for AI-assisted mental health support.
The future of AI chatbots in mental health care remains uncertain. While the technology holds promise, it is essential to proceed with caution and prioritize the well-being of individuals seeking mental health support. The discovery of potential “anxiety” in AI chatbots underscores the complexity of the issue and the need for ongoing research and ethical debate. The study leaves us with a pressing question: will the future of artificial intelligence be one of help or one of harm?