Robot Talk Episode 144 – Robot trust in humans, with Samuele Vinanzi
Claire chatted to Samuele Vinanzi from Sheffield Hallam University about how robots can tell whether to trust or distrust people. Samuele Vinanzi is a Senior Lecturer in Robotics and Artificial Intelligence at Sheffield Hallam University. He specializes in Cognitive Robotics: an interdisciplinary field that integrates robotics, artificial intelligence, cognitive science, and psychology to create robots that perceive, reason, and interact like humans. His research focuses on enabling social collaboration between humans and robots, particularly on emotional intelligence, intention reading, and artificial trust. His recent book, "In Robots We Trust", explores trust relationships between humans and robots.
Towards Online Robot Interaction Adaptation to Human Upper-limb Mobility Impairments in Return-to-Work Scenarios
Lagomarsino, Marta, Tassi, Francesco
Work environments are often inadequate and lack inclusivity for individuals with upper-body disabilities. This paper presents a novel online framework for adaptive human-robot interaction (HRI) that accommodates users' arm mobility impairments, ultimately aiming to promote active work participation. Unlike traditional human-robot collaboration approaches that assume able-bodied users, our method integrates a mobility model for specific joint limitations into a hierarchical optimal controller. This allows the robot to generate reactive, mobility-aware behaviour online and guides the user's impaired limb to exploit residual functional mobility. The framework was tested in handover tasks involving different upper-limb mobility impairments (i.e., emulated elbow and shoulder arthritis, and wrist blockage), under both standing and seated configurations with task constraints using a mobile manipulator, and complemented by quantitative and qualitative comparisons with state-of-the-art ergonomic HRI approaches. Preliminary results indicated that the framework can personalise the interaction to fit within the user's impaired range of motion and encourage joint usage based on the severity of their functional limitations.
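The core adaptation idea above — constraining the interaction to a user's residual range of motion — can be illustrated with a minimal sketch. This is not the paper's hierarchical optimal controller; the joint names and limit values are made-up example numbers for an emulated elbow impairment.

```python
# Illustrative sketch (not the paper's controller): project a desired joint
# angle for a handover onto the user's residual range of motion (ROM), so the
# robot's target pose never demands motion outside the impaired range.

def clamp_to_residual_rom(desired_angle_deg, rom_deg):
    """Clamp a desired joint angle (degrees) to the user's impaired ROM."""
    lo, hi = rom_deg
    return max(lo, min(hi, desired_angle_deg))

# Hypothetical example: emulated elbow arthritis limits flexion to 30-100 deg.
impaired_elbow_rom = (30.0, 100.0)

adapted = clamp_to_residual_rom(120.0, impaired_elbow_rom)  # too much flexion
```

A real mobility-aware controller would fold such limits into its optimization as constraints rather than post-hoc clamping, but the clamp shows the basic personalization step.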
Game Theory to Study Cooperation in Human-Robot Mixed Groups: Exploring the Potential of the Public Good Game
Pusceddu, Giulia, Mongile, Sara, Rea, Francesco, Sciutti, Alessandra
In this study, we explore the potential of Game Theory as a means to investigate cooperation and trust in human-robot mixed groups. In particular, we introduce the Public Good Game (PGG), a model highlighting the tension between individual self-interest and collective well-being. In this work, we present a modified version of the PGG, where three human participants engage in the game with the humanoid robot iCub to assess whether various robot game strategies (e.g., always cooperate, always free ride, and tit-for-tat) can influence the participants' inclination to cooperate. We tested our setup in a pilot study with nineteen participants. A preliminary analysis indicates that participants prefer not to invest their money in the common pool, even though they perceive the robot as generous. By conducting this research, we seek to gain valuable insights into the role that robots can play in promoting trust and cohesion during human-robot interactions within group contexts. The results of this study may hold considerable potential for developing social robots capable of fostering trust and cooperation within mixed human-robot groups.
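The payoff structure behind the Public Good Game is simple to sketch. The endowment and multiplier below are illustrative assumptions, not parameters from the study; the three strategies are the ones the abstract names.

```python
# Sketch of one PGG round: each player contributes part of an endowment to a
# common pool; the pool is multiplied and split equally among all players.
# Free-riding is individually rational, which creates the cooperation tension.

def pgg_round(contributions, endowment=10, multiplier=1.6):
    """Return each player's payoff: what they kept plus their pool share."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

def robot_strategy(name, last_human_contributions, endowment=10):
    """The three robot strategies mentioned in the abstract."""
    if name == "always_cooperate":
        return endowment
    if name == "always_free_ride":
        return 0
    if name == "tit_for_tat":
        # Mirror the humans' average contribution from the previous round.
        return sum(last_human_contributions) / len(last_human_contributions)
    raise ValueError(f"unknown strategy: {name}")
```

For example, if all four players contribute 10, everyone earns 16; but a lone free rider among three cooperators earns 22 while the cooperators earn 12, which is the incentive participants in the pilot appear to have followed.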
Advancing a taxonomy for proxemics in robot social navigation
Nahum, Ehud, Edan, Yael, Oron-Gilad, Tal
Deploying robots in human environments requires effective social robot navigation. This article focuses on proxemics, proposing a new taxonomy and suggesting future directions through an analysis of state-of-the-art studies and the identification of research gaps. The various factors that affect the dynamic properties of proxemics patterns in human-robot interaction are thoroughly explored. To establish a coherent proxemics framework, we identified and organized the key parameters and attributes that shape proxemics behavior. Building on this framework, we introduce a novel approach to define proxemics in robot navigation, emphasizing the significant attributes that influence its structure and size. This leads to the development of a new taxonomy that serves as a foundation for guiding future research and development. Our findings underscore that defining personal distance is a complex, multi-dimensional challenge. Furthermore, we highlight the flexible and dynamic nature of personal zone boundaries, which should be adaptable to different contexts and circumstances. Additionally, we propose a new layer for implementing proxemics in the navigation of social robots.
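The adaptable personal-zone boundaries the taxonomy argues for can be sketched minimally. The default distances below are Hall's classic proxemic zones; the context scaling factors are hypothetical stand-ins for the attributes the taxonomy organizes, not values from the article.

```python
# Minimal sketch of a context-adaptive personal zone for robot navigation.
# Defaults follow Hall's proxemic distances (metres); scaling is illustrative.
import math

HALL_ZONES = {"intimate": 0.45, "personal": 1.2, "social": 3.6}

def personal_zone_radius(base=HALL_ZONES["personal"],
                         speed_factor=1.0, familiarity=1.0):
    """Scale the nominal personal distance by context: a faster-moving robot
    should keep more distance; a familiar robot may be allowed closer."""
    return base * speed_factor / familiarity

def violates_personal_zone(robot_xy, human_xy, radius):
    """True if the robot is inside the human's (context-scaled) personal zone."""
    dx = robot_xy[0] - human_xy[0]
    dy = robot_xy[1] - human_xy[1]
    return math.hypot(dx, dy) < radius
```

A navigation planner would treat the scaled radius as a soft cost rather than a hard boundary, which is closer in spirit to the flexible zone boundaries the article describes.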
Bi-Directional Mental Model Reconciliation for Human-Robot Interaction with Large Language Models
Moorman, Nina, Zhao, Michelle, Luebbers, Matthew B., Van Waveren, Sanne, Simmons, Reid, Admoni, Henny, Chernova, Sonia, Gombolay, Matthew
In human-robot interactions, human and robot agents maintain internal mental models of their environment, their shared task, and each other. The accuracy of these representations depends on each agent's ability to perform theory of mind, i.e. to understand the knowledge, preferences, and intentions of their teammate. When mental models diverge to the extent that it affects task execution, reconciliation becomes necessary to prevent the degradation of interaction. We propose a framework for bi-directional mental model reconciliation, leveraging large language models to facilitate alignment through semi-structured natural language dialogue. Our framework relaxes the assumption of prior model reconciliation work that either the human or robot agent begins with a correct model for the other agent to align to. Through our framework, both humans and robots are able to identify and communicate missing task-relevant context during interaction, iteratively progressing toward a shared mental model.
Human2Robot: Learning Robot Actions from Paired Human-Robot Videos
Xie, Sicheng, Cao, Haidong, Weng, Zejia, Xing, Zhen, Shen, Shiwei, Leng, Jiaqi, Qiu, Xipeng, Fu, Yanwei, Wu, Zuxuan, Jiang, Yu-Gang
Distilling knowledge from human demonstrations is a promising way for robots to learn and act. Existing work often overlooks the differences between humans and robots, producing unsatisfactory results. In this paper, we study how perfectly aligned human-robot pairs benefit robot learning. Capitalizing on VR-based teleoperation, we introduce H&R, a third-person dataset with 2,600 episodes, each of which captures the fine-grained correspondence between human hands and the robot gripper. Inspired by the recent success of diffusion models, we introduce Human2Robot, an end-to-end diffusion framework that formulates learning from human demonstrations as a generative task. Human2Robot fully explores temporal dynamics in human videos to generate robot videos and predict actions at the same time. Through comprehensive evaluations of 8 seen, changed and unseen tasks in real-world settings, we demonstrate that Human2Robot can not only generate high-quality robot videos but also excel in seen tasks and generalize to unseen objects, backgrounds and even new tasks effortlessly.
Using High-Level Patterns to Estimate How Humans Predict a Robot will Behave
Parekh, Sagar, Bramblett, Lauren, Bezzo, Nicola, Losey, Dylan P.
A human interacting with a robot often forms predictions of what the robot will do next. For instance, based on the recent behavior of an autonomous car, a nearby human driver might predict that the car is going to remain in the same lane. It is important for the robot to understand the human's prediction for safe and seamless interaction: e.g., if the autonomous car knows the human thinks it is not merging -- but the autonomous car actually intends to merge -- then the car can adjust its behavior to prevent an accident. Prior works typically assume that humans make precise predictions of robot behavior. However, recent research on human-human prediction suggests the opposite: humans tend to approximate other agents by predicting their high-level behaviors. We apply this finding to develop a second-order theory of mind approach that enables robots to estimate how humans predict they will behave. To extract these high-level predictions directly from data, we embed the recent human and robot trajectories into a discrete latent space. Each element of this latent space captures a different type of behavior (e.g., merging in front of the human, remaining in the same lane) and decodes into a vector field across the state space that is consistent with the underlying behavior type. We hypothesize that our resulting high-level, coarse predictions of robot behavior will correspond to actual human predictions. We provide initial evidence in support of this hypothesis through a proof-of-concept user study.
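The coarse prediction idea — mapping recent motion onto a small discrete set of behavior types, each decoding to a vector field — can be sketched in a toy form. The behavior prototypes and the matching rule below are invented placeholders; the paper learns its discrete latent space from data rather than hand-coding it.

```python
# Toy sketch of coarse, high-level behavior prediction: match the robot's
# recent velocity direction to a small discrete set of behavior types, then
# decode the chosen type into a (here: constant) vector field.
import numpy as np

# Hypothetical nominal headings for two behavior types in a driving scene.
BEHAVIORS = {
    "keep_lane":  np.array([1.0, 0.0]),
    "merge_left": np.array([0.9, 0.4]),
}

def infer_behavior(recent_velocities):
    """Pick the behavior type whose nominal heading best matches the average
    observed velocity (a stand-in for the learned discrete latent space)."""
    v = np.mean(recent_velocities, axis=0)
    v = v / np.linalg.norm(v)
    scores = {name: float(v @ (h / np.linalg.norm(h)))
              for name, h in BEHAVIORS.items()}
    return max(scores, key=scores.get)

def predicted_field(behavior, state):
    """Decode a behavior type into the vector field the human would expect;
    here simply a constant field, independent of state."""
    return BEHAVIORS[behavior]
```

The point of the coarse representation is that the robot reasons about which *type* of behavior the human likely attributes to it, not about the human's exact trajectory forecast.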
The Critical Role of Effective Communication in Human-Robot Collaborative Assembly
Ferrari, Davide, Secchi, Cristian
In the rapidly evolving landscape of Human-Robot Collaboration (HRC), effective communication between humans and robots is crucial for complex task execution. Traditional request-response systems often lack naturalness and may hinder efficiency. This study emphasizes the importance of adopting human-like communication interactions to enable fluent vocal communication between human operators and robots in a simulated collaborative industrial assembly. We propose a novel approach that employs human-like interactions through natural dialogue, enabling human operators to engage in vocal conversations with robots. Through a comparative experiment, we demonstrate the efficacy of our approach in enhancing task performance and collaboration efficiency. The robot's ability to engage in meaningful vocal conversations enables it to seek clarification, provide status updates, and ask for assistance when required, leading to improved coordination and a smoother workflow. The results indicate that the adoption of human-like conversational interactions positively influences the human-robot collaborative dynamic. Human operators find it easier to convey complex instructions and preferences, resulting in a more productive and satisfying collaboration experience.
Enhancing Human-Robot Collaborative Assembly in Manufacturing Systems Using Large Language Models
Lim, Jonghan, Patel, Sujani, Evans, Alex, Pimley, John, Li, Yifei, Kovalenko, Ilya
The development of human-robot collaboration has the ability to improve manufacturing system performance by leveraging the unique strengths of both humans and robots. On the shop floor, human operators contribute with their adaptability and flexibility in dynamic situations, while robots provide precision and the ability to perform repetitive tasks. However, the communication gap between human operators and robots limits the collaboration and coordination of human-robot teams in manufacturing systems. Our research presents a human-robot collaborative assembly framework that utilizes a large language model for enhancing communication in manufacturing environments. The framework facilitates human-robot communication by integrating voice commands through natural language for task management. A case study for an assembly task demonstrates the framework's ability to process natural language inputs and address real-time assembly challenges, emphasizing adaptability to language variation and efficiency in error resolution. The results suggest that large language models have the potential to improve human-robot interaction for collaborative manufacturing assembly applications.
Robot Explanation Identity
Halilovic, Amar, Krivic, Senka
To bring robots into human everyday life, their capacity for social interaction must increase. One way for robots to acquire social skills is by assigning them the concept of identity. This research focuses on the concept of Explanation Identity within the broader context of robots' roles in society, particularly their ability to interact socially and explain decisions. Explanation Identity refers to the combination of characteristics and approaches robots use to justify their actions to humans. Drawing from different technical and social disciplines, we introduce Explanation Identity as a multidisciplinary concept and discuss its importance in Human-Robot Interaction. Our theoretical framework highlights the necessity for robots to adapt their explanations to the user's context, demonstrating empathy and ethical integrity. This research emphasizes the dynamic nature of robot identity and guides the integration of explanation capabilities in social robots, aiming to improve user engagement and acceptance.