
Collaborating Authors

 Louie, Ryan


Conversational Self-Play for Discovering and Understanding Psychotherapy Approaches

arXiv.org Artificial Intelligence

... protein folding, and materials science [1], it has not been widely applied to understanding effective therapy. Large language models (LLMs) are already used for analyzing, assisting, and replacing therapeutic conversations [2, 3, 4, 5], but these efforts primarily replicate known therapeutic approaches (e.g., Cognitive Behavioral Therapy [CBT] and Motivational Interviewing [MI]) rather than contribute new ones. Of particular interest are deviations from standard approaches, such as the use of novel therapeutic techniques, new ways to sequence therapeutic techniques within a conversation, applications of techniques in unusual contexts, and/or more adaptive approaches based on client characteristics. What follows is a proof-of-concept study and a discussion on how AI can serve as a discovery engine for psychotherapy research.
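The core mechanism the abstract describes, two models conversing with each other so the transcripts can later be mined for novel techniques, can be sketched as follows. This is an illustrative outline only: the `therapist_speak`/`client_speak` callables stand in for LLM calls, and none of the names come from the paper.

```python
# Minimal sketch of conversational self-play for therapy research:
# a "therapist" model and a "client" model alternate turns, producing
# a transcript that can be analyzed for deviations from standard
# approaches. The speak() callables are stand-ins for LLM calls.

def self_play(therapist_speak, client_speak, opening: str, turns: int = 4):
    """Run `turns` rounds of therapist/client exchange.

    Each callable receives the transcript so far (a list of
    (role, utterance) pairs) and returns the next utterance.
    """
    transcript = [("client", opening)]
    for _ in range(turns):
        transcript.append(("therapist", therapist_speak(transcript)))
        transcript.append(("client", client_speak(transcript)))
    return transcript
```

In a real system both callables would wrap prompted LLMs with different system prompts (and possibly different instructions about which techniques to try), and a separate analysis pass would label each therapist turn with the technique used.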


Roleplay-doh: Enabling Domain-Experts to Create LLM-simulated Patients via Eliciting and Adhering to Principles

arXiv.org Artificial Intelligence

Recent works leverage LLMs to roleplay realistic social scenarios, aiding novices in practicing their social skills. However, simulating sensitive interactions, such as those in mental health, is challenging: privacy concerns restrict data access, and collecting expert feedback, although vital, is laborious. To address this, we develop Roleplay-doh, a novel human-LLM collaboration pipeline that elicits qualitative feedback from a domain expert and transforms it into a set of principles, or natural-language rules, that govern an LLM-prompted roleplay. We apply this pipeline to enable senior mental health supporters to create customized AI patients that serve as simulated practice partners for novice counselors. After uncovering issues with GPT-4 simulations not adhering to expert-defined principles, we also introduce a novel principle-adherence prompting pipeline that yields a 30% improvement in response quality and principle-following on the downstream task. Via a user study with 25 counseling experts, we demonstrate that the pipeline makes it easy and effective to create AI patients that more faithfully resemble real patients, as judged by both creators and third-party counselors. See our project website at https://roleplay-doh.github.io/ for code and data.
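The pipeline described above, expert feedback turned into standing principles that are then injected into the roleplay prompt and checked for adherence, can be sketched roughly as below. This is a simplified reconstruction under stated assumptions: the function names, the rule template, and the `judge` callable are illustrative, not the authors' actual implementation or API.

```python
# Sketch of a Roleplay-doh-style pipeline: qualitative expert feedback
# becomes natural-language principles, which are assembled into the
# system prompt governing the AI patient. A judge callable (an LLM in
# the paper's adherence pipeline; any callable here) checks whether a
# candidate response follows every principle.

def feedback_to_principle(feedback: str) -> str:
    """Rewrite one piece of expert feedback as a standing rule.
    A real system would use an LLM for this rewriting step."""
    return f"The AI patient should {feedback.strip().rstrip('.')}."

def build_roleplay_prompt(persona: str, principles: list[str]) -> str:
    rules = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(principles))
    return (
        f"Roleplay this patient: {persona}\n"
        f"Always follow these expert-defined principles:\n{rules}"
    )

def adheres(response: str, principles: list[str], judge) -> bool:
    """Principle-adherence check: responses failing any principle
    would be regenerated in a full pipeline."""
    return all(judge(response, p) for p in principles)

feedback = [
    "avoid volunteering the core issue before trust is built",
    "respond with short, guarded answers early on",
]
principles = [feedback_to_principle(f) for f in feedback]
prompt = build_roleplay_prompt("a teenager stressed about exams", principles)
```

The key design point is that principles persist across sessions, so one round of expert feedback keeps shaping every later roleplay rather than correcting a single response.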


Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors

arXiv.org Artificial Intelligence

Realistic practice and tailored feedback are key processes for training peer counselors in clinical skills. However, existing feedback mechanisms largely rely on human supervision: peer counselors often lack ways to receive detailed feedback from experienced mentors, making it difficult for them to support the large number of people with mental health issues who turn to peer counseling. Our work leverages large language models to provide contextualized, multi-level feedback that empowers peer counselors, especially novices, at scale. To achieve this, we co-design a multi-level feedback taxonomy with a group of senior psychotherapy supervisors, and then construct a publicly available dataset of 400 emotional support conversations with comprehensive feedback annotations. We further design a self-improvement method on top of large language models to enhance automatic feedback generation. Via qualitative and quantitative evaluation with domain experts, we demonstrate that our method minimizes the risk of potentially harmful, low-quality feedback generation, which is desirable in such high-stakes scenarios.
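The two ideas in this abstract, a multi-level feedback taxonomy and a self-improvement loop over generated feedback, can be outlined as below. The level names, callables, and retry logic are all assumptions for illustration; the paper's actual taxonomy and method will differ.

```python
# Sketch of multi-level feedback generation with self-improvement:
# feedback is produced per taxonomy level, then a critic flags weak or
# potentially harmful items, which are regenerated a bounded number of
# times. generate() and critique() are stand-ins for LLM calls.
from dataclasses import dataclass

LEVELS = ["goal", "strategy", "wording"]  # assumed taxonomy levels


@dataclass
class Feedback:
    level: str
    comment: str


def generate_feedback(turn: str, generate) -> list[Feedback]:
    """One feedback item per taxonomy level for a counselor turn."""
    return [Feedback(level, generate(turn, level)) for level in LEVELS]


def self_improve(turn: str, generate, critique, max_rounds: int = 2):
    """Regenerate any item the critic rejects, up to max_rounds times.
    Bounding the loop keeps cost predictable in a high-stakes setting."""
    items = generate_feedback(turn, generate)
    for _ in range(max_rounds):
        flagged = [f for f in items if not critique(f)]
        if not flagged:
            break
        for f in flagged:
            f.comment = generate(turn, f.level)
    return items
```

Separating generation from critique is what lets the method filter harmful or low-quality feedback before a novice counselor ever sees it.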