 Kraus, Matthias


Improving Proactive Dialog Agents Using Socially-Aware Reinforcement Learning

arXiv.org Artificial Intelligence

The next step for intelligent dialog agents is to escape their role as silent bystanders and become proactive. Well-designed proactive behavior may improve human-machine cooperation, as the agent takes a more active role during the interaction and relieves the user of responsibility. However, proactivity is a double-edged sword: poorly executed pre-emptive actions can have a devastating effect not only on the task outcome but also on the relationship with the user. For designing adequate proactive dialog strategies, we propose a novel approach that includes both social and task-relevant features of the dialog. The primary goal is to optimize proactive behavior so that it is task-oriented, implying high task success and efficiency, while also being socially effective by fostering user trust. Including both aspects in the reward function for training a proactive dialog agent with reinforcement learning demonstrated the benefit of our approach for more successful human-machine cooperation.
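A minimal sketch of what such a combined reward might look like, assuming a scalar trust estimate in [0, 1] and illustrative weights; the weights, feature names, and functional form are assumptions for illustration, not the paper's actual formulation:

```python
# Hedged sketch: reward combining task success, efficiency, and a social
# (trust) component for a proactive dialog agent. All weights are assumptions.

def combined_reward(task_success: bool, num_turns: int, trust_estimate: float,
                    w_task: float = 1.0, w_eff: float = 0.05,
                    w_social: float = 0.5) -> float:
    """Return a scalar reward for one dialog episode."""
    r_task = w_task * (1.0 if task_success else -1.0)  # task outcome
    r_eff = -w_eff * num_turns                         # longer dialogs cost more
    r_social = w_social * trust_estimate               # trust estimate in [0, 1]
    return r_task + r_eff + r_social
```

Such a shaped reward lets a standard RL learner trade off efficient task completion against actions that erode the user's trust.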


Development of a Trust-Aware User Simulator for Statistical Proactive Dialog Modeling in Human-AI Teams

arXiv.org Artificial Intelligence

Human-AI teaming (HAIT) requires close coordination between humans and AI teammates working towards a common goal [40]. Effective communication, prediction of teammates' actions, and high-level coordination are essential components of this collaborative effort. In this regard, the proactive behavior of AI-based systems, and how it is communicated during collaboration, is an important research topic for HAITs; see, e.g., Horvitz et al. [8]. Proactivity can be defined as an AI's self-initiated, anticipatory behavior that contributes to effective and efficient task completion. It has been shown to be essential for human teamwork, as it leads to higher job and team performance and is associated with leadership and innovation [3]. However, designing adequate proactivity for AI-based systems that support humans remains an open and challenging question. It is essential to study the impact of proactive system actions on the human-agent trust relationship, and how information about an AI agent's perceived trustworthiness can be used to model appropriate proactive dialog strategies for forming effective HAITs.
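A minimal sketch of a trust-aware user simulator in this spirit; the trust dynamics, proactivity level names, and acceptance model below are illustrative assumptions, not the statistical model from the paper:

```python
import random

# Hedged sketch: a simulated user keeps a scalar trust state that proactive
# system actions raise or lower, and accepts help more often when trust is
# high. All dynamics and names here are assumptions for illustration.

PROACTIVITY_LEVELS = ["none", "notification", "suggestion", "intervention"]

class TrustAwareUserSimulator:
    def __init__(self, trust: float = 0.5):
        self.trust = trust  # perceived trustworthiness in [0, 1]

    def step(self, system_action: str, action_was_helpful: bool) -> str:
        # Helpful proactive actions build trust; unhelpful ones erode it,
        # more strongly the more autonomous the action was.
        strength = PROACTIVITY_LEVELS.index(system_action) / (len(PROACTIVITY_LEVELS) - 1)
        delta = 0.1 * strength if action_was_helpful else -0.2 * strength
        self.trust = min(1.0, max(0.0, self.trust + delta))
        # Simulated user response: accept proactive help with probability = trust.
        return "accept" if random.random() < self.trust else "reject"
```

A simulator of this form can generate the large numbers of trust-conditioned interaction episodes needed to train statistical proactive dialog policies without costly human-in-the-loop studies.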


ForDigitStress: A multi-modal stress dataset employing a digital job interview scenario

arXiv.org Artificial Intelligence

We present a multi-modal stress dataset that uses digital job interviews to induce stress. The dataset provides multi-modal data from 40 participants, including audio, video (motion capture, facial recognition, eye tracking), and physiological signals (photoplethysmography, electrodermal activity). In addition, the dataset contains time-continuous annotations for stress and the emotions that occurred (e.g., shame, anger, anxiety, surprise). To establish a baseline, four machine learning classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest, Long Short-Term Memory network) were trained and evaluated on the proposed dataset for a binary stress classification task. The best-performing classifier achieved an accuracy of 88.3% and an F1-score of 87.5%.
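A baseline of this kind can be reproduced with standard tooling; the sketch below uses scikit-learn's RandomForestClassifier on placeholder arrays, since the actual ForDigitStress features, labels, and splits are not specified here:

```python
# Hedged sketch of a binary stress-classification baseline. The feature and
# label arrays are random placeholders standing in for the dataset's
# pre-extracted multi-modal features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 32)        # placeholder physiological/visual features
y = np.random.randint(0, 2, 200)   # placeholder binary stress labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.3f}, "
      f"f1={f1_score(y_test, pred):.3f}")
```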


Does It Affect You? Social and Learning Implications of Using Cognitive-Affective State Recognition for Proactive Human-Robot Tutoring

arXiv.org Artificial Intelligence

Using robots in educational contexts has already been shown to benefit students' learning and social behaviour. To elevate robotic tutors to the next level of providing more effective, human-like tutoring, the abilities to adapt to the user and to act proactively are fundamental. By acting proactively, intelligent robotic tutors anticipate situations in which problems may arise for the student and act in advance to prevent negative outcomes. Still, when and how to behave proactively remain open questions. This paper therefore investigates how a student's cognitive-affective states can be used by a robotic tutor to trigger proactive tutoring dialogue, with the aim of improving the learning experience. To this end, we observed a concept-learning task scenario in which a robotic assistant proactively helped when negative user states were detected. In this learning task, the user states of frustration and confusion were deemed detrimental to the task outcome and were used to trigger proactive behaviour. In an empirical user study with 40 undergraduate and doctoral students, we examined whether initiating proactive behaviour after detecting signs of confusion and frustration improves the student's concentration and trust in the agent. Additionally, we investigated which level of proactive dialogue is useful for promoting the student's concentration and trust. The results show that highly proactive behaviour harms trust, especially when triggered during negative cognitive-affective states, but helps keep the student focused on the task when triggered in these states. Based on our study results, we discuss future steps for improving the proactive assistance of robotic tutoring systems.
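A minimal sketch of such state-triggered proactivity, assuming detector outputs in [0, 1] and illustrative thresholds and level names (the paper's actual detection pipeline and dialogue levels may differ):

```python
# Hedged sketch: map detected cognitive-affective state scores to a level of
# proactive tutoring dialogue. Thresholds and level names are assumptions.

PROACTIVITY_LEVELS = ("none", "notification", "suggestion", "intervention")

def select_proactive_level(confusion: float, frustration: float,
                           trigger_threshold: float = 0.6) -> str:
    """Choose a proactivity level from state probabilities in [0, 1]."""
    negative = max(confusion, frustration)
    if negative < trigger_threshold:
        return "none"            # student seems focused; stay reactive
    if negative < 0.75:
        return "notification"    # light hint that help is available
    if negative < 0.9:
        return "suggestion"      # offer concrete help; student decides
    return "intervention"        # step in directly to prevent task failure
```

The study's finding that high proactivity can harm trust suggests that, in a policy like this, the upper levels should be reserved for strong, sustained evidence of negative states rather than isolated detections.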