User Emotion


ChatWise: AI-Powered Engaging Conversations for Enhancing Senior Cognitive Wellbeing

Yang, Zhengbang, Zhu, Zhuangdi

arXiv.org Artificial Intelligence

Cognitive health in older adults presents a growing challenge. While conversational interventions show feasibility in improving cognitive wellness, human caregiver resources remain overburdened. AI-based methods have shown promise in providing conversational support, yet existing work is limited to implicit strategies and lacks multi-turn support tailored to seniors. We improve on prior art with an LLM-driven chatbot named ChatWise for older adults. It follows dual-level conversation reasoning at the inference phase to provide engaging companionship. ChatWise thrives in long-turn conversations, in contrast to conventional LLMs that primarily excel in short-turn exchanges. Grounded experiments show that ChatWise significantly enhances simulated users' cognitive and emotional status, including those with Mild Cognitive Impairment.


Towards Empathetic Conversational Recommender Systems

Zhang, Xiaoyu, Xie, Ruobing, Lyu, Yougang, Xin, Xin, Ren, Pengjie, Liang, Mingfei, Zhang, Bo, Kang, Zhanhui, de Rijke, Maarten, Ren, Zhaochun

arXiv.org Artificial Intelligence

Conversational recommender systems (CRSs) are able to elicit user preferences through multi-turn dialogues. They typically incorporate external knowledge and pre-trained language models to capture the dialogue context. Most CRS approaches, trained on benchmark datasets, assume that the standard items and responses in these benchmarks are optimal. However, they overlook that users may express negative emotions toward the standard items and may not feel emotionally engaged by the standard responses. This issue leads to a tendency to replicate the logic of recommenders in the dataset instead of aligning with user needs. To remedy this misalignment, we introduce empathy within a CRS. By empathy we refer to a system's ability to capture and express emotions. We propose an empathetic conversational recommender (ECR) framework. ECR contains two main modules: emotion-aware item recommendation and emotion-aligned response generation. Specifically, we employ user emotions to refine user preference modeling for accurate recommendations. To generate human-like emotional responses, ECR applies retrieval-augmented prompts to fine-tune a pre-trained language model, aligning it with emotions and mitigating hallucination. To address the challenge of insufficient supervision labels, we enlarge our empathetic data using emotion labels annotated by large language models and emotional reviews collected from external resources. We propose novel evaluation metrics to capture user satisfaction in real-world CRS scenarios. Our experiments on the ReDial dataset validate the efficacy of our framework in enhancing recommendation accuracy and improving user satisfaction.
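The two ECR modules lend themselves to a compact sketch. Everything below — the emotion labels, the weighting scheme, and the prompt template — is an illustrative assumption for exposition, not the authors' implementation:

```python
# Toy sketch of ECR's two modules: emotion-aware item recommendation and
# emotion-aligned (retrieval-augmented) response generation.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    emotion: str  # hypothetical labels, e.g. "like", "curious", "negative"

# Emotion-aware item recommendation: boost items the user reacted to
# positively, down-weight items mentioned with negative emotion.
EMOTION_WEIGHT = {"like": 1.5, "curious": 1.2, "neutral": 1.0, "negative": 0.3}

def score_items(dialogue: list[Utterance],
                item_mentions: dict[str, list[int]],
                base_scores: dict[str, float]) -> dict[str, float]:
    scores = dict(base_scores)
    for item, turns in item_mentions.items():
        for t in turns:
            scores[item] *= EMOTION_WEIGHT.get(dialogue[t].emotion, 1.0)
    return scores

# Emotion-aligned response generation: a retrieval-augmented prompt that
# conditions a generator on emotional reviews of the recommended item.
def build_prompt(item: str, reviews: list[str], user_emotion: str) -> str:
    context = "\n".join(reviews[:2])  # top retrieved emotional reviews
    return (f"User feels {user_emotion}. Recommend '{item}' empathetically.\n"
            f"Reference reviews:\n{context}\nResponse:")
```

In the actual framework the weighting is learned from LLM-annotated emotion labels and the generator is a fine-tuned pre-trained language model; the dictionary and template here only make the data flow concrete.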


Infusing Emotions into Task-oriented Dialogue Systems: Understanding, Management, and Generation

Feng, Shutong, Lin, Hsien-chin, Geishauser, Christian, Lubis, Nurul, van Niekerk, Carel, Heck, Michael, Ruppik, Benjamin, Vukovic, Renato, Gašić, Milica

arXiv.org Artificial Intelligence

Emotions are indispensable in human communication, but are often overlooked in task-oriented dialogue (ToD) modelling, where task success is the primary focus. While existing works have explored user emotions or similar concepts in some ToD tasks, none has so far incorporated emotion modelling into a fully-fledged ToD system or conducted interactions with human or simulated users. In this work, we incorporate emotion into the complete ToD processing loop, involving understanding, management, and generation. To this end, we extend the EmoWOZ dataset (Feng et al., 2022) with system affective behaviour labels. Through interactive experimentation involving both simulated and human users, we demonstrate that our proposed framework significantly enhances the user's emotional experience as well as the task success.


Personalized Music Recommendation with a Heterogeneity-aware Deep Bayesian Network

Jing, Erkang, Liu, Yezheng, Chai, Yidong, Yu, Shuo, Liu, Longshun, Jiang, Yuanchun, Wang, Yang

arXiv.org Artificial Intelligence

Music recommender systems are crucial in music streaming platforms, providing users with music they would enjoy. Recent studies have shown that user emotions can affect users' music mood preferences. However, existing emotion-aware music recommender systems (EMRSs) explicitly or implicitly assume that users' actual emotional states expressed by an identical emotion word are homogeneous. They also assume that users' music mood preferences are homogeneous under an identical emotional state. In this article, we propose four types of heterogeneity that an EMRS should consider: emotion heterogeneity across users, emotion heterogeneity within a user, music mood preference heterogeneity across users, and music mood preference heterogeneity within a user. We further propose a Heterogeneity-aware Deep Bayesian Network (HDBN) to model these types of heterogeneity. The HDBN mimics a user's decision process to choose music with four components: personalized prior user emotion distribution modeling, posterior user emotion distribution modeling, user grouping, and Bayesian neural network-based music mood preference prediction. We constructed a large-scale dataset called EmoMusicLJ to validate our method. Extensive experiments demonstrate that our method significantly outperforms baseline approaches on the widely used HR and NDCG recommendation metrics. Ablation experiments and case studies further validate the effectiveness of our HDBN. The source code is available at https://github.com/jingrk/HDBN.
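The four-stage decision process the abstract describes can be sketched end to end. All distributions, labels, and numeric constants below are toy assumptions made up for illustration; the authors' actual implementation is in the linked repository:

```python
# Toy sketch of HDBN's four components: personalized prior emotion modeling,
# posterior emotion modeling, user grouping, and mood preference prediction.
import random

def prior_emotion(user_id: str, emotion_word: str) -> dict[str, float]:
    # Personalized prior: the same emotion word maps to a different
    # emotional state per user (emotion heterogeneity across users).
    random.seed(sum(map(ord, user_id + emotion_word)))
    p = random.random()
    return {"aroused": p, "calm": 1.0 - p}

def posterior_emotion(prior: dict[str, float],
                      evidence: dict[str, float]) -> dict[str, float]:
    # Posterior update with listening-context evidence (emotion
    # heterogeneity within a user), then renormalize.
    post = {k: prior[k] * evidence.get(k, 1.0) for k in prior}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

def user_group(posterior: dict[str, float]) -> str:
    # Group users by dominant emotional state before prediction.
    return max(posterior, key=posterior.get)

def mood_preference(group: str, posterior: dict[str, float]) -> dict[str, float]:
    # Stand-in for the Bayesian-neural-network predictor: groups share a
    # base curve, shifted by the continuous posterior.
    base = 0.6 if group == "aroused" else 0.2
    energetic = base + 0.3 * posterior["aroused"]
    return {"energetic": energetic, "soothing": 1.0 - energetic}
```

The point of the sketch is the pipeline shape: a per-user prior is updated into a posterior, the posterior selects a group, and the group plus the posterior drive the mood preference distribution.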


Learning from Emotions, Demographic Information and Implicit User Feedback in Task-Oriented Document-Grounded Dialogues

Petrak, Dominic, Tran, Thy Thy, Gurevych, Iryna

arXiv.org Artificial Intelligence

The success of task-oriented and document-grounded dialogue systems depends on users accepting and enjoying using them. To achieve this, recently published work in the field of Human-Computer Interaction suggests that considering demographic information and user emotions, combined with learning from the implicit feedback in user utterances, is particularly important. However, these findings have not yet been transferred to the field of Natural Language Processing, where these data are primarily studied separately. Accordingly, no sufficiently annotated dataset is available. To address this gap, we introduce FEDI, the first English dialogue dataset for task-oriented document-grounded dialogues annotated with demographic information, user emotions and implicit feedback. Our experiments with FLAN-T5, GPT-2 and LLaMA-2 show that these data have the potential to improve task completion, the factual consistency of the generated responses, and user acceptance.


EmoUS: Simulating User Emotions in Task-Oriented Dialogues

Lin, Hsien-Chin, Feng, Shutong, Geishauser, Christian, Lubis, Nurul, van Niekerk, Carel, Heck, Michael, Ruppik, Benjamin, Vukovic, Renato, Gašić, Milica

arXiv.org Artificial Intelligence

Existing user simulators (USs) for task-oriented dialogue systems only model user behaviour on the semantic and natural language levels without considering the user persona and emotions. Optimising dialogue systems with generic user policies, which cannot model diverse user behaviour driven by different emotional states, may result in a high drop-off rate when deployed in the real world. Thus, we present EmoUS, a user simulator that learns to simulate user emotions alongside user behaviour. EmoUS generates user emotions, semantic actions, and natural language responses based on the user goal, the dialogue history, and the user persona. By analysing what kind of system behaviour elicits what kind of user emotions, we show that EmoUS can be used as a probe to evaluate a variety of dialogue systems and in particular their effect on the user's emotional state. Developing such methods is important in the age of large language model chatbots and rising ethical concerns.
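A user simulator of this kind can be sketched as a small stateful policy that emits an emotion, a semantic action, and a surface utterance at each turn. The rules below are hypothetical stand-ins for the trained model, purely to make the interface concrete:

```python
# Toy sketch of an EmoUS-style simulator interface: emotion + semantic
# action + natural language, conditioned on goal, history, and persona.
from dataclasses import dataclass, field

@dataclass
class SimUser:
    goal: dict                      # e.g. {"price range": "cheap"}
    persona: str = "polite"
    history: list = field(default_factory=list)

    def step(self, system_utterance: str) -> tuple[str, str, str]:
        self.history.append(("system", system_utterance))
        # Emotion is elicited by system behaviour (here: a repeated request).
        if "again" in system_utterance.lower():
            emotion = "dissatisfied"
        else:
            emotion = "neutral"
        # Semantic action: inform the next goal slot.
        slot, value = next(iter(self.goal.items()))
        action = f"inform({slot}={value})"
        # Surface form conditioned on emotion and persona.
        if emotion == "dissatisfied":
            text = f"I already told you: {value} {slot}, please."
        elif self.persona == "polite":
            text = f"I'd like a {value} {slot}, thanks."
        else:
            text = f"{value} {slot}."
        self.history.append(("user", text))
        return emotion, action, text
```

Used as a probe, one runs a dialogue system against such a simulator and inspects the stream of emitted emotions: a policy that keeps re-asking for known slots would see "dissatisfied" accumulate, which is exactly the kind of effect the paper measures.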


4 KPIs to Improve Chatbot Accuracy and its Conversational Abilities

#artificialintelligence

Just in case you imagine that all chatbots are designed alike, you're well off base. Chatbots today come in all shapes and sizes and have varying capabilities. While basic chatbots might be adequate for handling simple operations, improving the customer experience at the enterprise level requires advanced virtual assistants that can understand user sentiment and carry out human-like interactions round the clock across all channels. On the other hand, don't go overboard and construct an overly intricate AI chatbot just to compete in the market: having a good chatbot doesn't by itself ensure success.


A Pragmatic Approach to Implementation of Emotional Intelligence in Machines

Ptaszynski, Michal (Hokkaido University) | Rzepka, Rafal (Hokkaido University) | Araki, Kenji (Hokkaido University)

AAAI Conferences

With this paper we would like to open a discussion on the need for Emotional Intelligence as a feature in machines interacting with humans. However, we refrain from making a statement about the need for emotional experience in machines. We argue that providing machines with computable means for processing emotions is a practical need that requires implementing a set of abilities included in the Emotional Intelligence Framework. We introduce our methods and present the results of some of the first experiments we performed on this matter.