Collaborating Authors

Wrede, Britta


Leveraging Cognitive States for Adaptive Scaffolding of Understanding in Explanatory Tasks in HRI

arXiv.org Artificial Intelligence

Understanding how scaffolding strategies influence human understanding in human-robot interaction is important for developing effective assistive systems. This empirical study investigates two linguistic scaffolding strategies: negations, which de-bias the user from potential errors but increase processing costs, and hesitations, which help ameliorate those processing costs. In the adaptive strategy, the user's state with respect to current understanding and processing capacity was estimated via a scoring scheme based on task performance, the prior scaffolding strategy, and current eye-gaze behavior. The adaptive strategy of providing negations and hesitations, generated by the computational model SHIFT, was compared with a non-adaptive strategy of providing only affirmations. Our findings indicate that adaptive scaffolding with SHIFT tends to (1) increase processing costs, as reflected in longer reaction times, but (2) improve task understanding, as evidenced by an almost 23% lower error rate. We assessed the efficiency of SHIFT's selected scaffolding strategies across different cognitive states and found that in three out of five states the error rate was lower than in the baseline condition. We discuss how these results align with the assumptions of the SHIFT model and highlight areas for refinement. Moreover, we demonstrate how scaffolding strategies such as negation and hesitation contribute to more effective human-robot explanatory dialogues. In the growing field of social robotics, robots are increasingly being designed to assist people in their everyday lives.
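The abstract does not specify the exact form of the scoring scheme; as a minimal sketch, assuming the three signals (task performance, prior scaffolding strategy, eye gaze) are combined into a single score that is then mapped to a discrete cognitive state, it might look like the following. All field names, weights, thresholds, and state labels are hypothetical, not taken from the paper:

```python
# Minimal sketch of a user-state scoring scheme (hypothetical; the paper's
# actual features, weights, and state labels are not given in the abstract).

from dataclasses import dataclass

@dataclass
class Observation:
    task_correct: bool          # did the last task step succeed?
    prior_was_negation: bool    # was the previous scaffolding act a negation?
    gaze_on_task_ratio: float   # fraction of time gaze was on the task (0..1)

def score_user_state(obs: Observation) -> str:
    """Combine task performance, prior strategy, and gaze into a coarse state."""
    score = 0.0
    score += 1.0 if obs.task_correct else -1.0        # performance signal
    score += -0.5 if obs.prior_was_negation else 0.0  # negations raise load
    score += obs.gaze_on_task_ratio                   # attention signal

    # Hypothetical thresholds mapping the score to a discrete cognitive state.
    if score >= 1.5:
        return "understanding_high"
    if score >= 0.5:
        return "understanding_partial"
    return "overloaded"

print(score_user_state(Observation(True, False, 0.8)))  # understanding_high
```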


SHIFT: An Interdisciplinary Framework for Scaffolding Human Attention and Understanding in Explanatory Tasks

arXiv.org Artificial Intelligence

In this work, we present a domain-independent approach for adaptive scaffolding in robotic explanation generation to guide tasks in human-robot interaction. We present a method for incorporating interdisciplinary research results into a computational model as a pre-configured scoring system, implemented in a framework called SHIFT. This involves outlining a procedure for integrating concepts from disciplines outside traditional computer science into a robotics computational framework. Our approach allows us to model the human cognitive state as six observable states within the human partner model. To study the pre-configuration of the system, we implement a reinforcement learning approach on top of our model, which allows adaptation to individuals who deviate from the configuration of the scoring system. In our proof-of-concept evaluation of the model's adaptability on four different user types, the model adapts better with our pre-configured scoring system than without it, i.e., it recovers faster after exploration and accumulates a higher reward. We discuss further strategies for speeding up the learning phase to enable realistic adaptation to real users. The system is accessible through Docker and supports querying via ROS.
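The abstract suggests that the pre-configured scoring system acts as a prior that reinforcement learning then refines per user. A minimal sketch of that idea, assuming a tabular Q-learning setup over six observable states (the state names, action set, reward function, and all numeric values are hypothetical, not SHIFT's actual implementation):

```python
# Sketch of warm-starting tabular Q-learning from a pre-configured scoring
# table, in the spirit of SHIFT's evaluation (all values are hypothetical).

import random

STATES = [f"state_{i}" for i in range(6)]   # six observable cognitive states
ACTIONS = ["affirmation", "negation", "hesitation"]

# Pre-configured scores stand in for interdisciplinary prior knowledge.
preconfigured = {(s, a): 0.5 if a == "affirmation" else 0.2
                 for s in STATES for a in ACTIONS}

def q_learning(q, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Refine Q-values against a simulated user who deviates from the prior."""
    for _ in range(episodes):
        s = random.choice(STATES)
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: q[(s, x)]))
        # Simulated reward: this particular user benefits from negations,
        # so the agent must adapt away from its affirmation-biased prior.
        r = 1.0 if a == "negation" else 0.0
        s_next = random.choice(STATES)
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

warm = q_learning(dict(preconfigured))              # starts from the prior
cold = q_learning({k: 0.0 for k in preconfigured})  # starts from scratch
best = {s: max(ACTIONS, key=lambda a: warm[(s, a)]) for s in STATES}
print(best)  # after adaptation, 'negation' should dominate for this user
```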


Forms of Understanding of XAI-Explanations

arXiv.org Artificial Intelligence

Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) 'understanding' on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding in the context of XAI and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, and psychology, it explores a definition of understanding and its forms, its assessment, and its dynamics during the process of giving everyday explanations. Two types of understanding are considered as possible outcomes of explanations: enabledness, 'knowing how' to do or decide something, and comprehension, 'knowing that', both in varying degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, increases in comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.


From Interactive to Co-Constructive Task Learning

arXiv.org Artificial Intelligence

Humans have developed the capability to teach relevant aspects of new or adapted tasks to a social peer with very few task demonstrations. They do so by making use of scaffolding strategies that leverage prior knowledge and, importantly, prior joint experience to yield a joint understanding and a joint execution of the steps required to solve the task. This process has been discovered and analyzed in parent-infant interaction and constitutes a 'co-construction', as it allows both the teacher and the learner to jointly contribute to the task. We propose to focus research in robot interactive learning on this co-construction process to enable robots to learn from non-expert users in everyday situations. In the following, we review current proposals for interactive task learning and discuss their main contributions with respect to the interaction they entail. We then discuss our notion of co-construction and summarize research insights from adult-child and human-robot interactions to elucidate its nature in more detail. From this overview we finally derive research desiderata that span the dimensions of architecture, representation, interaction, and explainability.


The Curious Robot as a Case-Study for Comparing Dialog Systems

AI Magazine

Modeling interaction with robots raises new challenges for dialog modeling that differ from those of traditional dialog modeling with less embodied machines. We present four case studies of implementing a typical human-robot interaction scenario with different state-of-the-art dialog frameworks in order to identify challenges and pitfalls specific to HRI, as well as potential solutions. The results are discussed with a special focus on the interplay between dialog and task modeling on robots.


Modeling Human-Robot Interaction Based on Generic Interaction Patterns

AAAI Conferences

While current techniques for human-robot interaction modeling are typically limited to a restrictive command-and-control style, traditional dialog modeling approaches are not directly applicable to robotics due to their lack of real-world integration. We present an approach that combines insights from dialog modeling with the software-engineering demands that arise in robotics systems research to provide a generalizable framework that can easily be applied to new scenarios. This goal is achieved by defining interaction patterns that combine abstract task states (such as task accepted or failed) with robot dialog acts (such as assertion or apology). An evaluation of usability with robotics experts and novices showed that both groups were able to program three out of five dialog patterns within one hour, exhibiting a steep learning curve. We argue that the proposed approach allows for less restricted and more informative human-robot interactions.
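As an illustration of what such an interaction pattern might look like, here is a minimal sketch pairing abstract task states with robot dialog acts in a small lookup table. The pattern, state, and act names are hypothetical and do not reflect the framework's actual API:

```python
# Minimal sketch of an interaction pattern coupling abstract task states with
# robot dialog acts (names are illustrative, not the framework's actual API).

from enum import Enum, auto

class TaskState(Enum):
    INITIATED = auto()
    ACCEPTED = auto()
    COMPLETED = auto()
    FAILED = auto()

class DialogAct(Enum):
    ASSERTION = auto()     # e.g., "I will fetch the cup."
    CONFIRMATION = auto()  # e.g., "Done."
    APOLOGY = auto()       # e.g., "Sorry, I could not grasp it."

# One reusable pattern: the dialog act the robot produces in each task state.
ACTION_REQUEST_PATTERN = {
    TaskState.INITIATED: DialogAct.ASSERTION,
    TaskState.ACCEPTED: DialogAct.CONFIRMATION,
    TaskState.COMPLETED: DialogAct.CONFIRMATION,
    TaskState.FAILED: DialogAct.APOLOGY,
}

def dialog_act_for(state: TaskState) -> DialogAct:
    """Look up the dialog act the pattern prescribes for a task state."""
    return ACTION_REQUEST_PATTERN[state]

print(dialog_act_for(TaskState.FAILED))  # DialogAct.APOLOGY
```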