The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support

Song, Inhwa, Pendse, Sachin R., Kumar, Neha, De Choudhury, Munmun

arXiv.org Artificial Intelligence 

Research from the field of Computer-Supported Cooperative Work (CSCW), including the emergent area of Human-AI interaction, has increasingly examined the societal gaps that prevent people in need from accessing care, and analyzed how people turn to technology-mediated support to fill those gaps [14, 27, 44]. Large Language Model (LLM) chatbots have quickly become one such tool, appropriated for mental health support by people experiencing severe distress who have nowhere else to turn. Recent work has discussed how people in distress have turned to LLM chatbots (such as OpenAI's ChatGPT [8, 10] and Replika [28]) for mental health support, and social media users have described how LLM chatbots saved their lives [10, 47]. Following Freud and Breuer's [19] description of the beneficial nature of psychoanalysis as a "talking cure," some have called engagements with technologies for mental health a typing cure [22, 40, 51]. However, others have cautioned against the use of LLM chatbots for mental health support, noting that the outputs of LLM chatbots are less constrained than those of the rule-based chatbots of the past, with the potential for harmful advice or recommendations. For example, the National Eating Disorder Association was forced to shut down its support chatbot in July 2023 after the chatbot provided harmful recommendations to users, including weight loss and dieting advice to users who may already have been struggling with disordered eating [10, 25, 75].