Goto

Collaborating Authors

Human-to-Robot Attention Transfer for Robot Execution Failure Avoidance Using Stacked Neural Networks

arXiv.org Artificial Intelligence

Due to world dynamics and hardware uncertainty, robots inevitably fail in task executions, leading to undesired or even dangerous executions. To avoid failures and improve robot performance, it is critical to identify and correct abnormal robot executions at an early stage. However, limited by its reasoning capability and knowledge level, it is challenging for a robot to self-diagnose and correct its abnormal behaviors. To solve this problem, a novel method, human-to-robot attention transfer (H2R-AT), is proposed to seek help from a human. H2R-AT is built on a novel stacked neural network model that transfers human attention embedded in verbal reminders to robot attention embedded in robot visual perception. With the attention transferred from a human, a robot understands what the human is concerned about and where that concern lies, enabling it to identify and correct its abnormal executions. To validate the effectiveness of H2R-AT, two representative task scenarios with abnormal robot executions, "serve water for a human in a kitchen" and "pick up a defective gear in a factory", were designed in the open-access simulation platform V-REP; 252 volunteers were recruited to provide about 12000 verbal reminders for training and testing the attention transfer model. With an accuracy of 73.68% in transferring attention and an accuracy of 66.86% in avoiding robot execution failures, the effectiveness of H2R-AT was validated.
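To give a concrete picture of what "transferring human attention embedded in verbal reminders to robot attention embedded in robot visual perception" could look like, the sketch below fuses an encoded verbal reminder with CNN features of the robot's camera frame to produce a spatial attention map. It is a minimal illustration under assumed layer sizes and names (PyTorch), not the authors' H2R-AT architecture or released code.

```python
# Minimal sketch (not the authors' code): a text branch summarizes the verbal
# reminder, a visual branch encodes the camera frame, and their interaction
# yields a spatial attention map. All sizes and names are assumptions.
import torch
import torch.nn as nn

class AttentionTransferSketch(nn.Module):
    def __init__(self, vocab_size=2000, embed_dim=64, vis_channels=32):
        super().__init__()
        # Text branch: embed the verbal reminder and summarize it with an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        # Visual branch: a small CNN over the robot's camera frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, vis_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(vis_channels, vis_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Project the text summary so it can score each visual location.
        self.text_to_key = nn.Linear(embed_dim, vis_channels)

    def forward(self, reminder_tokens, frame):
        # reminder_tokens: (B, T) token ids; frame: (B, 3, H, W) image.
        _, (h, _) = self.lstm(self.embed(reminder_tokens))
        text_key = self.text_to_key(h[-1])                # (B, C)
        vis = self.cnn(frame)                             # (B, C, H', W')
        scores = torch.einsum("bc,bchw->bhw", text_key, vis)
        attention = torch.softmax(scores.flatten(1), dim=1)
        return attention.view_as(scores)                  # spatial attention map

model = AttentionTransferSketch()
attn = model(torch.randint(0, 2000, (1, 12)), torch.rand(1, 3, 64, 64))
print(attn.shape)  # torch.Size([1, 16, 16])
```

In the paper's setting, such an attention map would then be compared against the robot's own attention during execution to flag and correct the abnormal behavior the human's reminder points to.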


RoGuE : Robot Gesture Engine

AAAI Conferences

We present the Robot Gesture Engine (RoGuE), a motion-planning approach to generating gestures. Gestures improve robot communication skills, strengthening robots as partners in a collaborative setting. Previous work maps from environment scenario to gesture selection; this work maps from gesture selection to gesture execution. We create a flexible and common language by parameterizing gestures as task-space constraints on robot trajectories and goals. This allows us to leverage powerful motion planners and to generalize across environments and robot morphologies. We demonstrate RoGuE on four robots: HERB, ADA, CURI, and the PR2.
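To make "parameterizing gestures as task-space constraints on robot trajectories and goals" more tangible, the sketch below shows one plausible data structure: a gesture described purely by constraints that a constraint-aware motion planner could then satisfy on any robot morphology. All class, field, and function names are illustrative assumptions, not the RoGuE API.

```python
# Minimal sketch: a gesture as a bundle of task-space constraints. The actual
# RoGuE parameterization and planner interface may differ; this only mirrors
# the idea stated in the abstract.
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

Pose = Sequence[float]  # e.g. (x, y, z, qx, qy, qz, qw) end-effector pose

@dataclass
class Gesture:
    name: str
    # Constraint the final end-effector pose must satisfy (e.g. "point at target").
    goal_constraint: Callable[[Pose], bool]
    # Constraints every waypoint along the trajectory must satisfy (e.g. keep palm up).
    path_constraints: List[Callable[[Pose], bool]] = field(default_factory=list)

def pointing_gesture(target_xyz):
    """Build a 'point at target' gesture; the test below is a placeholder."""
    def points_at_target(pose: Pose) -> bool:
        # A real implementation would check the angle between the end-effector
        # axis and the ray from the end effector toward target_xyz.
        return True
    return Gesture(name="point", goal_constraint=points_at_target)

# A constraint-aware motion planner could then be asked for a trajectory whose
# goal satisfies gesture.goal_constraint and whose waypoints satisfy every
# entry in gesture.path_constraints, independent of the robot's morphology.
gesture = pointing_gesture((0.8, 0.1, 0.4))
```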


Developing Effective Robot Teammates for Human-Robot Collaboration

AAAI Conferences

Developing collaborative robots that can productively operate out of isolation and work safely in uninstrumented, human-populated environments is critically important for advancing the field of robotics. Especially in domains where modern robots are ineffective, we wish to leverage human-robot teaming to improve the efficiency, ability, and safety of human workers. Our work, outlined in this extended abstract, focuses on creating agents capable of human-robot teamwork by leveraging learning from demonstration, hierarchical task networks, multi-agent planning and state estimation, and intention recognition. We briefly describe our recent work within human-robot collaboration, including task comprehension, learning and performing assistive behaviors, and training novice human collaborators to become competent co-workers.


Robots and Avatars as Hosts, Advisors, Companions, and Jesters

AI Magazine

A convergence of technical progress in AI and robotics has renewed the dream of building artificial entities that will play significant and worthwhile roles in our human lives. Unfortunately, until recently that dream has been realized mostly in the realm of science fiction. Now, however, pioneering researchers have been bringing together advances in many subfields of AI, such as robotics, computer vision, natural language and speech processing, and cognitive modeling, to create the first generation of robots and avatars that illustrate the true potential of combining these technologies. The purpose of this article is to highlight a few of these projects, draw out their shared themes, and draw some conclusions from them for future research.


Short-Term Human-Robot Interaction through Conditional Planning and Execution

AAAI Conferences

The deployment of robots in public environments is attracting growing attention and interest, both for the research opportunities it offers and for the possibility of building commercial applications on top of it. In these scenarios, properly defining and implementing human-robot interactions is crucial, and the specific characteristics of the environment (in particular, the presence of untrained users) make the task of defining and implementing effective interactions particularly challenging. In this paper, we describe a method and a fully implemented robotic system that use conditional planning to generate and execute short-term interactions by a robot deployed in a public environment. To this end, the proposed method integrates and extends two components already successfully used for planning in robotics: ROSPlan and Petri Net Plans. The contributions of this paper are the formulation of generating short-term interactions as a conditional planning problem and the description of a solution fully implemented on a real robot. The proposed method is based on the integration of a contingent planner in ROSPlan with the Petri Net Plans execution framework, and it has been tested in different scenarios where the robot interacted with hundreds of untrained users.
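As a rough illustration of what executing a short-term interaction as a conditional plan entails, the sketch below represents a contingent plan as a tree whose sensing actions branch on their observed outcomes and walks that tree at execution time. It is an assumed, simplified structure, not the ROSPlan or Petri Net Plans API.

```python
# Minimal sketch: a conditional plan where each action returns an observation
# and the next node is chosen by that observation. Names and the toy plan are
# illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class PlanNode:
    action: str                                   # e.g. "greet", "ask_goal"
    execute: Callable[[], str]                    # runs the action, returns an observation label
    branches: Dict[str, "PlanNode"] = field(default_factory=dict)

def run(node: Optional[PlanNode]) -> None:
    """Walk the conditional plan, following the branch matching each observation."""
    while node is not None:
        observation = node.execute()
        node = node.branches.get(observation)

# Toy interaction: greet the user, then branch on whether they reply.
plan = PlanNode(
    action="greet",
    execute=lambda: "replied",                    # stubbed sensing outcome
    branches={
        "replied": PlanNode("ask_goal", execute=lambda: "done"),
        "no_reply": PlanNode("wave_goodbye", execute=lambda: "done"),
    },
)
run(plan)
```

In the paper's architecture, the contingent planner would produce such a branching plan and an execution framework like Petri Net Plans would run it, handling the sensing outcomes obtained from untrained users at interaction time.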