Lee, Chaebin
Multi-aspect Depression Severity Assessment via Inductive Dialogue System
Lee, Chaebin, Seo, Seungyeon, Do, Heejin, Lee, Gary Geunbae
With the advancement of chatbots and the growing demand for automatic depression detection, identifying depression in patient conversations has gained more attention. However, prior methods often assess depression in a binary way or with only a single overall score, providing no diverse feedback, and pay little attention to enhancing the dialogue responses themselves. In this paper, we present a novel task of multi-aspect depression severity assessment via an inductive dialogue system (MaDSA), evaluating a patient's depression level on multiple criteria by incorporating assessment-aided response generation. Further, we propose a foundational system for MaDSA, which induces psychological dialogue responses with an auxiliary emotion classification task within a hierarchical severity assessment structure. We synthesize a conversational dataset annotated with eight aspects of depression severity alongside emotion labels, shown to be robust via human evaluation. Experimental results show the potential of our preliminary work on MaDSA.
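A minimal sketch of how the hierarchical assessment head with an auxiliary emotion task could look; the aspect count (eight) follows the abstract, but all layer names, dimensions, label counts, and the loss weighting are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiAspectSeverityHead(nn.Module):
    """Heads on top of a shared dialogue encoder (hypothetical configuration)."""
    def __init__(self, hidden_size=768, num_aspects=8, num_levels=4, num_emotions=7):
        super().__init__()
        # Auxiliary emotion classifier over the dialogue representation
        self.emotion_head = nn.Linear(hidden_size, num_emotions)
        # One severity classifier per depression aspect
        self.aspect_heads = nn.ModuleList(
            nn.Linear(hidden_size, num_levels) for _ in range(num_aspects)
        )

    def forward(self, dialogue_repr):
        emotion_logits = self.emotion_head(dialogue_repr)
        aspect_logits = torch.stack(
            [head(dialogue_repr) for head in self.aspect_heads], dim=1
        )  # (batch, num_aspects, num_levels)
        return emotion_logits, aspect_logits

def multi_task_loss(emotion_logits, aspect_logits, emotion_labels, aspect_labels,
                    aux_weight=0.3):
    # Per-aspect severity loss plus a weighted auxiliary emotion loss
    ce = nn.CrossEntropyLoss()
    severity_loss = ce(aspect_logits.flatten(0, 1), aspect_labels.flatten())
    emotion_loss = ce(emotion_logits, emotion_labels)
    return severity_loss + aux_weight * emotion_loss
```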
Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking
Lee, Jihyun, Lee, Chaebin, Kim, Yunsu, Lee, Gary Geunbae
In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model by pseudo-labeling and employs Purpose Preserving augmentation (PPaug) to prevent overfitting. We increase the few-shot (10%) performance by approximately 4% on MultiWOZ 2.1 (Eric et al., 2019) and enhance the slot recall by 8.34% for unseen values compared to the baseline.
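A minimal sketch of the iterative self-training loop with pseudo-labeling; the model-specific steps (train_fn, predict_fn, ppaug_fn) are passed in as hypothetical callables, and the confidence threshold and round count are illustrative assumptions rather than the paper's settings.

```python
def self_train(model, labeled, unlabeled, train_fn, predict_fn, ppaug_fn,
               rounds=3, threshold=0.9):
    """labeled: list of (dialogue, belief_state) pairs; unlabeled: list of dialogues."""
    train_fn(model, labeled)  # warm-start on the small few-shot labeled set
    for _ in range(rounds):
        # Pseudo-label unlabeled dialogues, keeping only confident predictions
        pseudo = []
        for dialogue in unlabeled:
            state, confidence = predict_fn(model, dialogue)
            if confidence >= threshold:
                pseudo.append((dialogue, state))
        # Purpose Preserving augmentation (PPaug): perturb utterances while
        # keeping slot values intact, intended to prevent overfitting
        augmented = [ppaug_fn(d, s) for d, s in labeled + pseudo]
        train_fn(model, labeled + pseudo + augmented)
    return model
```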