Guardians of Trust: Risks and Opportunities for LLMs in Mental Health

Authors: Baidal, M., Derner, E., Oliver, N.

Publication: 4th Workshop on NLP for Positive Impact, ACL 2025

The integration of large language models (LLMs) into mental health applications offers promising opportunities for positive social impact, but it also presents critical risks. This paper introduces a taxonomy of the main challenges related to the use of LLMs for mental health and proposes a structured research agenda to mitigate them. We emphasize the need for explainable, emotionally aware, culturally sensitive, and clinically aligned systems, supported by continuous monitoring and human oversight. By situating our work within the broader context of natural language processing (NLP) for positive impact, this research contributes to ongoing efforts to ensure that technological advancements in NLP responsibly serve vulnerable populations, fostering a future where mental health solutions enhance rather than endanger well-being.