11 January 2026 / Neele Baumgart
Conference Session Report: 7th IPILM Conference, 11 December 2025

Original illustration, AI-generated with ChatGPT (OpenAI).
What happens when artificial intelligence becomes part of mental health care, and how should we deal with its risks and responsibilities?
Key Focus of the Session
- AI as a support tool in mental health
- Benefits: Early detection, accessibility, continuous support
- Risks: Data protection, bias, transparency
- Cultural and social contexts shaping perceptions, use, and risks of AI
- Importance of information literacy and meta literacy
Building on these focal points, the session examined the potential and limitations of artificial intelligence in the field of mental health from an information literacy and metaliteracy perspective. Drawing on a systematic literature review and concept mapping, it showed that AI-based applications can offer advantages, particularly with regard to early detection, continuous support, and low-threshold accessibility.
These findings were largely consistent across the reviewed literature and were primarily informed by two key studies that shaped the session.
Scientific evidence on the effectiveness and acceptance of AI-based mental health applications was mainly drawn from the systematic review by Dehbozorgi et al. (2025).
In contrast, ethical, cultural, and epistemic risks, such as data protection concerns, algorithmic bias, and limited transparency, were largely informed by the ethical review by Saeidnia et al. (2024).
The international survey largely reflected and reinforced the risks discussed in these studies, while also illustrating how these issues are perceived in practice. Overall, the findings emphasized that AI in mental health contexts should primarily be understood as a complementary tool to human expertise and that well-developed information literacy and metaliteracy are essential for responsible use.
❗️Below are a few selected examples of mental health services that incorporate AI-based support tools.
- Therapeak
- VIA HealthTech
- Wysa & Woebot
The examples illustrate current applications of AI in mental health and are not intended as recommendations.
Cultural and Ethical Considerations
Mental health is deeply shaped by cultural norms, social stigma, and structural inequalities; this aspect was central to the intercultural perspective of both the session and the conference as a whole.
These factors also influence how AI-based systems are developed and used. AI applications risk reinforcing existing disparities through biased data, data poverty, and predominantly Western-centered models of mental health. Ethical challenges such as privacy, autonomy, and emotional adequacy are therefore particularly intensified for vulnerable and marginalized groups, highlighting the need for culturally sensitive and ethically grounded AI design.
Discussion
The discussion focused in particular on questions of responsibility. A majority of participants attributed responsibility for potentially harmful or misleading AI-based advice primarily to the providing companies, indicating a strong demand for institutional safeguards while simultaneously raising questions about the role of user responsibility. From an information literacy and metaliteracy perspective, this highlights the importance of enabling users to critically assess AI-based systems, understand their limitations, and recognize potential risks.
At the same time, individual awareness alone cannot replace structural responsibility, especially in light of asymmetrical power and knowledge relations between providers and users, as well as the vulnerability of mental health contexts.
Another key point concerned the ambivalent level of trust in AI within mental health applications. Although many participants expressed general openness toward the use of AI, trust remained limited due to concerns about data protection, reliability, and the quality of AI-generated advice. Increasing trust was found to depend on transparent system design, strong data protection measures, explainable decision-making processes, and the clear integration of AI into human-supported care structures.
Overall, the discussion suggests that trust in AI is shaped less by technological performance alone than by ethical design, cultural sensitivity, and informed and reflective practices of use.
Key Takeaways
- AI can meaningfully support mental health care, but its value depends on ethical design, cultural sensitivity, and human oversight.
- Users tend to view AI as a supportive tool rather than a replacement for professional care, while concerns about privacy and trust remain strong.
- Cultural context plays a significant role in shaping how AI-based mental health services are perceived and used.
- Strong information literacy and metaliteracy are essential for enabling critical, informed, and responsible engagement with AI in mental health contexts.
Our Screencast
The screencast summarizing our session and its key findings is available on YouTube:
🎬 Watch the Screencast on YouTube
Further Reading
The following publications provide further insights into the scientific, ethical, and informational dimensions of AI in mental health contexts.
Dehbozorgi, R., Zangeneh, S., Khooshab, E., et al. (2025). The application of artificial intelligence in the field of mental health: A systematic review. BMC Psychiatry, 25, 132.
https://doi.org/10.1186/s12888-025-06483-2
Li, H., Zhang, R., Lee, Y. C., Kraut, R. E., & Mohr, D. C. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6(1), 236.
https://doi.org/10.1038/s41746-023-00979-5
Pellert, M., Lechner, C. M., Wagner, C., Rammstedt, B., & Strohmaier, M. (2024). AI psychometrics: Assessing the psychological profiles of large language models through psychometric inventories. Perspectives on Psychological Science, 19(5), 808–826.
https://doi.org/10.1177/17456916231214460
Saeidnia, H. R., Hashemi Fotami, S. G., Lund, B., & Ghiasi, N. (2024). Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact. Social Sciences, 13(7), 381.
https://doi.org/10.3390/socsci13070381