Into the Wild: When Robots Are Not Welcome

Ashkenazi, Shaul, Skantze, Gabriel, Stuart-Smith, Jane, Foster, Mary Ellen

arXiv.org Artificial Intelligence

Social robots are increasingly being deployed in public spaces, where they face not only technological difficulties and unexpected user utterances, but also objections from stakeholders who may not be comfortable with introducing a robot into those spaces. We describe our difficulties with deploying a social robot in two different public settings: 1) a student services center; 2) a drop-in service for refugees and asylum seekers. Although this is a failure report, in each use case we eventually managed to earn the trust of the staff and form a relationship with them, allowing us to deploy our robot and conduct our studies. We have developed a multilingual robot system (Figure 1), described in [1], for two different use cases: 1) supporting newly arrived international students at a UK university by answering frequently asked questions; 2) supporting refugees and asylum seekers in navigating bureaucratic processes. Like most current public-space robot deployments, our field studies involved adding a robot to an existing workplace, with stakeholders including management, visitors, and front-line workers, all of whom should be consulted to develop the details of the system to be deployed.


The Social Life of Industrial Arms: How Arousal and Attention Shape Human-Robot Interaction

El-Helou, Roy, Pan, Matthew K. X. J

arXiv.org Artificial Intelligence

Ingenuity Labs Research Institute, Queen's University, Kingston, Canada (matthew.pan@queensu.ca)

This study explores how human perceptions of a non-anthropomorphic robotic manipulator can be shaped by two key dimensions of behaviour: arousal, defined as the robot's movement energy and expressiveness, and attention, defined as the robot's capacity to selectively orient toward and engage with a user. We present an integrated behaviour system that applies and extends existing movement-centric design principles to non-anthropomorphic robots. Our system combines a gaze-like attention engine with an arousal-modulated motion layer to explore how expressive and interactive behaviours influence social perception in robotic manipulators. In a user study, we find that robots exhibiting high attention--actively directing their focus toward users--are perceived as warmer and more competent, intentional, and lifelike. In contrast, high arousal--characterized by fast, expansive, and energetic motions--increases perceptions of discomfort and disturbance. Importantly, a combination of focused attention and moderate arousal yields the highest ratings of trust and sociability, while excessive arousal diminishes social engagement.
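The abstract does not specify the implementation, but the two behaviour dimensions it names could be sketched as independent parameters of a simple controller. All names and numeric mappings below are hypothetical illustrations, not the paper's actual system:

```python
from dataclasses import dataclass

@dataclass
class BehaviourState:
    arousal: float    # 0 = calm, 1 = energetic (movement energy / expressiveness)
    attention: float  # 0 = inattentive, 1 = fully oriented toward the user

def motion_parameters(state: BehaviourState) -> dict:
    """Map arousal to motion energy and attention to gaze engagement.
    Hypothetical linear mappings, for illustration only."""
    return {
        "speed_scale": round(0.5 + state.arousal, 2),      # 0.5x to 1.5x nominal speed
        "amplitude_scale": round(0.5 + state.arousal, 2),  # small vs. expansive motions
        "gaze_toward_user": state.attention >= 0.5,        # orient toward the user
    }

# The study's best-rated combination: focused attention with moderate arousal.
params = motion_parameters(BehaviourState(arousal=0.5, attention=1.0))
```

Treating the two dimensions as separate inputs mirrors the study design, which varied attention and arousal independently before examining their combination.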


Social Mediation through Robots -- A Scoping Review on Improving Group Interactions through Directed Robot Action using an Extended Group Process Model

Weisswange, Thomas H., Javed, Hifza, Dietrich, Manuel, Jung, Malte F., Jamali, Nawid

arXiv.org Artificial Intelligence

Group processes refer to the dynamics that occur within a group and are critical for understanding how groups function. With robots being increasingly placed within small groups, improving these processes has emerged as an important application of social robotics. Social mediation robots elicit behavioral change within groups by deliberately influencing group processes. While research in this field has demonstrated that robots can effectively affect interpersonal dynamics, there is a notable gap in integrating these insights into a coherent understanding and theory. We present a scoping review of literature targeting changes in social interactions between multiple humans through intentional action from robotic agents. To guide our review, we adapt the classical Input-Process-Output (I-P-O) model into what we call the "Mediation I-P-O model". We evaluated 1633 publications, which yielded 89 distinct social mediation concepts. We construct 11 mediation approaches robots can use to shape processes in small groups and teams. This work strives to produce generalizable insights and to evaluate the extent to which the potential of social mediation through robots has been realized thus far. We hope that the proposed framework encourages a holistic approach to the study of social mediation and provides a foundation to standardize future reporting in the domain.


Bridging the Communication Gap: Artificial Agents Learning Sign Language through Imitation

Tavella, Federico, Galata, Aphrodite, Cangelosi, Angelo

arXiv.org Artificial Intelligence

Artificial agents, particularly humanoid robots, interact with their environment, objects, and people using cameras, actuators, and physical presence. Their communication methods are often pre-programmed, limiting their actions and interactions. Our research explores acquiring non-verbal communication skills through learning from demonstrations, with potential applications in sign language comprehension and expression. In particular, we focus on imitation learning for artificial agents, exemplified by teaching a simulated humanoid American Sign Language. We use computer vision and deep learning to extract information from videos, and reinforcement learning to enable the agent to replicate observed actions. Compared to other methods, our approach eliminates the need for additional hardware to acquire information. We demonstrate how the combination of these different techniques offers a viable way to learn sign language. Our methodology successfully teaches 5 different signs involving the upper body (i.e., arms and hands). This research paves the way for advanced communication skills in artificial agents.


Common (good) practices measuring trust in HRI

Holthaus, Patrick, Rossi, Alessandra

arXiv.org Artificial Intelligence

Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives. It is, therefore, understandable that the literature of the last few decades has focused on measuring how much people trust robots -- and, more generally, any agent -- in order to foster such trust in these technologies. Researchers have explored people's trust in robots in different ways, such as measuring trust in human-robot interactions (HRI) based on textual descriptions or images without any physical contact, as well as during and after interacting with the technology. Nevertheless, trust is a complex behaviour, and it depends on several factors, including those related to the interacting agents (e.g. humans, robots, pets), the robot itself (e.g. capabilities, reliability), the context (e.g. task), and the environment (e.g. public spaces vs. private spaces vs. working spaces). In general, most roboticists agree that insufficient levels of trust lead to a risk of disengagement, while over-trust in technology can cause over-reliance and inherent dangers, for example, in emergency situations. It is, therefore, very important that the research community has access to reliable methods to measure people's trust in robots and technology. In this position paper, we outline current methods and their strengths, identify (some) weakly covered aspects, and discuss the potential for covering a more comprehensive set of factors influencing trust in HRI.


The Emotional Dilemma: Influence of a Human-like Robot on Trust and Cooperation

Becker, Dennis, Rueda, Diana, Beese, Felix, Torres, Brenda Scarleth Gutierrez, Lafdili, Myriem, Ahrens, Kyra, Fu, Di, Strahl, Erik, Weber, Tom, Wermter, Stefan

arXiv.org Artificial Intelligence

Increasing the anthropomorphism of a robot's behavioral design could affect trust and cooperation positively. However, studies have shown contradicting results and suggest a task-dependent relationship between robots that display emotions and trust. Therefore, this study analyzes the effect of robots that display human-like emotions on trust, cooperation, and participants' emotions. In the between-group study, participants play the coin entrustment game with either an emotional or a non-emotional robot. The results show that the robot that displays emotions induces more anxiety than the neutral robot. Accordingly, the participants trust the emotional robot less and are less likely to cooperate. Furthermore, the perceived intelligence of a robot increases trust, while a desire to outcompete the robot can reduce trust and cooperation. Thus, the design of robots expressing emotions should be task-dependent to avoid adverse effects that reduce trust and cooperation.


Designing Socially Assistive Robots: Exploring Israeli and German Designers' Perceptions

Liberman-Pincu, Ela, Korn, Oliver, Grund, Jonas, van Grondelle, Elmer D., Oron-Gilad, Tal

arXiv.org Artificial Intelligence

Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.


Trust regulation in Social Robotics: From Violation to Repair

Jelínek, Matouš, Fischer, Kerstin

arXiv.org Artificial Intelligence

While trust in human-robot interaction is increasingly recognized as necessary for the implementation of social robots, our understanding of how to regulate trust in human-robot interaction is still limited. In the current experiment, we evaluated different approaches to trust calibration in human-robot interaction. The within-subject experimental approach utilized five different strategies for trust calibration: proficiency, situation awareness, transparency, trust violation, and trust repair. We implemented these interventions in a within-subject experiment where participants (N=24) teamed up with a social robot and played a collaborative game. The level of trust was measured after each section using the Multi-Dimensional Measure of Trust (MDMT) scale. As expected, the interventions had a significant effect on i) violating and ii) repairing the level of trust throughout the interaction. Moreover, the robot demonstrating situation awareness was perceived as significantly more benevolent than the baseline.
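As an illustration of the measurement procedure described above, per-section trust scores could be aggregated per intervention and analyzed as within-subject changes. The ratings below are hypothetical, and the MDMT (a multi-item questionnaire) is simplified here to a single composite score per section:

```python
from statistics import mean

# Hypothetical data: one simplified trust score per participant per section.
ratings = {
    "baseline":            [5.1, 4.8, 5.4],
    "situation_awareness": [5.9, 5.6, 6.1],
    "trust_violation":     [3.2, 2.9, 3.5],
    "trust_repair":        [4.7, 4.4, 5.0],
}

def condition_means(data):
    """Mean trust score per intervention, rounded for reporting."""
    return {cond: round(mean(scores), 2) for cond, scores in data.items()}

def within_subject_deltas(data, reference="baseline"):
    """Per-participant change relative to the reference condition --
    the unit of analysis in a within-subject design."""
    ref = data[reference]
    return {
        cond: [round(s - r, 2) for s, r in zip(scores, ref)]
        for cond, scores in data.items() if cond != reference
    }

means = condition_means(ratings)
deltas = within_subject_deltas(ratings)
```

Pairing each participant's scores with their own baseline, rather than comparing group averages, is what lets a within-subject design detect the violation and repair effects with a small sample.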


Hey, robot! An investigation of getting robot's attention through touch

Lehmann, Hagen, Rojik, Adam, Friebe, Kassandra, Hoffmann, Matej

arXiv.org Artificial Intelligence

Touch is a key part of interaction and communication between humans, but it has so far been little explored in human-robot interaction. In this work, participants were asked to approach and touch a humanoid robot on the hand (Nao - 26 participants; Pepper - 28 participants) to get its attention. We designed reaction behaviors for the robot that consisted of four different combinations of arm movements, with the touched hand moving forward or back and the other hand moving forward or staying in place, with simultaneous leaning back, followed by looking at the participant. We studied which reaction of the robot people found the most appropriate and what the reason for their choice was. For both robots, the preferred reaction of the hand being touched was moving back. For the other hand, no movement at all was rated most natural for the Pepper, while it was movement forward for the Nao. A correlation between the anxiety subscale of the participants' personality traits and the passive-to-active/aggressive nature of the robot reactions was found. Most participants noticed the leaning back and rated it positively. Looking at the participant was commented on positively by some participants in unstructured comments. We also analyzed where and how participants spontaneously touched the robot on the hand. In summary, the touch reaction behaviors designed here are good candidates to be deployed more generally in social robots, possibly including incidental touch in crowded environments. The robot's size constitutes one important factor shaping how its reaction is perceived.
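The design space in this study can be enumerated compactly. Only the combinations themselves come from the abstract; the dictionary labels are hypothetical:

```python
from itertools import product

# Four reaction combinations: the touched hand moves forward or back,
# the other hand moves forward or stays in place; every reaction also
# includes leaning back and then looking at the participant.
touched_hand_options = ["forward", "back"]
other_hand_options = ["forward", "stay"]

reactions = [
    {"touched_hand": t, "other_hand": o,
     "lean_back": True, "look_at_participant": True}
    for t, o in product(touched_hand_options, other_hand_options)
]

# Preferred reactions per the study: touched hand moves back on both robots;
# the other hand stays in place for Pepper but moves forward for Nao.
preferred = {
    "Pepper": {"touched_hand": "back", "other_hand": "stay"},
    "Nao":    {"touched_hand": "back", "other_hand": "forward"},
}
```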


iCub Knows Where You Look: Exploiting Social Cues for Interactive Object Detection Learning

Lombardi, Maria, Maiettini, Elisa, Tikhanoff, Vadim, Natale, Lorenzo

arXiv.org Artificial Intelligence

Performing joint interaction requires constant mutual monitoring of one's own actions and their effects on the other's behaviour. Such action-effect monitoring is boosted by social cues and might result in an increased sense of agency. Joint actions and joint attention are strictly correlated, and both contribute to the formation of precise temporal coordination. In human-robot interaction, the robot's ability to establish joint attention with a human partner and exploit various social cues to react accordingly is a crucial step in creating communicative robots. Beyond its social component, an effective human-robot interaction can also be seen as a way to make the robot's learning process more natural and robust for a given task. In this work, we use different social skills, such as mutual gaze, gaze following, speech, and human face recognition, to develop an effective teacher-learner scenario tailored to visual object learning in dynamic environments. Experiments on the iCub robot demonstrate that the system allows the robot to learn new objects through natural interaction with a human teacher in the presence of distractors.