A Review of Methodologies for Natural-Language-Facilitated Human-Robot Cooperation

arXiv.org Artificial Intelligence

Natural-language-facilitated human-robot cooperation (NLC) refers to using natural language (NL) to facilitate interactive information sharing and task execution between robots and humans under a common goal constraint. NLC research has recently received increasing attention. Typical NLC scenarios include robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation, and robotic social accompaniment. However, a thorough review revealing the latest methodologies for using NL to facilitate human-robot cooperation has been missing. This review presents a comprehensive summary of NLC methodologies. NLC research has three main focuses: NL instruction understanding, NL-based execution plan generation, and knowledge-world mapping. In-depth analyses of theoretical methods, applications, and model advantages and disadvantages are provided. Based on our review and perspective, potential research directions for NLC are summarized.
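
To make the three research focuses concrete, here is a toy Python sketch of an NLC pipeline; all class and function names are hypothetical illustrations, not components of any surveyed system:

```python
# Minimal illustrative sketch of the three NLC research focuses named in the
# abstract: instruction understanding, plan generation, and knowledge-world
# mapping. All names and the toy parser are assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class Command:
    action: str   # e.g. "bring"
    obj: str      # e.g. "cup"
    target: str   # e.g. "table"

def understand(instruction: str) -> Command:
    """NL instruction understanding: map text to a structured command.
    A toy keyword matcher stands in for a real parser or language model."""
    words = instruction.lower().split()
    return Command(action=words[0], obj=words[1], target=words[-1])

def plan(cmd: Command) -> list[str]:
    """NL-based execution plan generation: expand a command into primitives."""
    return [f"navigate_to({cmd.obj})", f"grasp({cmd.obj})",
            f"navigate_to({cmd.target})", f"release({cmd.obj})"]

WORLD = {"cup": (1.2, 0.4), "table": (3.0, 1.1)}  # symbol -> pose, toy map

def ground(symbol: str):
    """Knowledge-world mapping: bind a linguistic symbol to a world entity."""
    return WORLD.get(symbol)

cmd = understand("bring cup to table")
print(plan(cmd), ground(cmd.obj))
```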


Contexts for Symbiotic Autonomy: Semantic Mapping, Task Teaching and Social Robotics

AAAI Conferences

Home environments are a primary target for deploying robots, which are expected to help humans complete their tasks. However, modern robots do not yet meet users' expectations in terms of either knowledge or skills. In this scenario, users can provide robots with knowledge and help them perform tasks through continuous human-robot interaction. This setting of human-robot cooperation in shared environments is known as Symbiotic Autonomy or Symbiotic Robotics. In this paper, we address the problem of effective coexistence of robots and humans by analyzing the approaches proposed in the literature and by presenting our perspective on the topic. In particular, we focus on specific contexts that fall within Symbiotic Autonomy: Human Augmented Semantic Mapping, Task Teaching, and Social Robotics. Finally, we sketch our view on the problem of knowledge acquisition in robotic platforms by introducing three essential aspects to be dealt with: environmental, procedural, and social knowledge.
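
As a concrete illustration of Human Augmented Semantic Mapping, the toy sketch below attaches human-supplied labels to regions of a robot-built map; the data layout and names are assumptions for illustration only:

```python
# A toy sketch of human-augmented semantic mapping as characterized above:
# the robot's metric map is enriched with labels supplied by the human during
# interaction. Data layout and names are illustrative assumptions.
metric_map = {(2.0, 1.5): "region_a", (5.1, 0.3): "region_b"}  # robot-built

semantic_map = {}

def human_label(position, label):
    """The human grounds a symbolic label onto a robot-perceived region."""
    region = metric_map.get(position)
    if region is not None:
        semantic_map[region] = label

human_label((2.0, 1.5), "kitchen")            # acquired through dialogue
human_label((5.1, 0.3), "charging station")
print(semantic_map)  # {'region_a': 'kitchen', 'region_b': 'charging station'}
```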


Human-to-Robot Attention Transfer for Robot Execution Failure Avoidance Using Stacked Neural Networks

arXiv.org Artificial Intelligence

Due to world dynamics and hardware uncertainty, robots inevitably fail during task execution, sometimes with undesired or even dangerous results. To avoid failures and improve robot performance, it is critical to identify and correct abnormal robot executions at an early stage. However, limited by its reasoning capability and knowledge level, a robot struggles to self-diagnose and correct its own abnormal behaviors. To solve this problem, a novel method, human-to-robot attention transfer (H2R-AT), is proposed for seeking help from a human. H2R-AT is built on a novel stacked neural network model that transfers human attention, embedded in verbal reminders, to robot attention, embedded in the robot's visual perception. With this attention transfer, a robot understands what the human's concerns are and where they lie, enabling it to identify and correct its abnormal executions. To validate the effectiveness of H2R-AT, two representative task scenarios with abnormal robot executions, "serve water for a human in a kitchen" and "pick up a defective gear in a factory," were designed in the open-access simulation platform V-REP; 252 volunteers were recruited to provide about 12,000 verbal reminders for training and testing the attention transfer model. With an accuracy of 73.68% in transferring attention and an accuracy of 66.86% in avoiding robot execution failures, the effectiveness of H2R-AT was validated.
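
The following PyTorch sketch illustrates the general idea of text-conditioned visual attention described here; it is an illustrative stand-in, not the authors' H2R-AT architecture, and all layer sizes and names are assumptions:

```python
# A minimal sketch (not the authors' architecture) of the stacked idea in
# H2R-AT: encode a verbal reminder, then use it to weight the robot's visual
# features so "what and where" the human is concerned about is highlighted.
import torch
import torch.nn as nn

class AttentionTransfer(nn.Module):
    def __init__(self, vocab=1000, txt_dim=128, vis_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, txt_dim)
        self.gru = nn.GRU(txt_dim, txt_dim, batch_first=True)  # reminder encoder
        self.to_query = nn.Linear(txt_dim, vis_dim)             # text -> visual query
        self.classify = nn.Linear(vis_dim, 2)                   # normal vs. abnormal

    def forward(self, tokens, vis_feats):
        # tokens: (B, T) word ids; vis_feats: (B, N, vis_dim) spatial features
        _, h = self.gru(self.embed(tokens))          # h: (1, B, txt_dim)
        q = self.to_query(h[-1]).unsqueeze(2)        # (B, vis_dim, 1)
        attn = torch.softmax(vis_feats @ q, dim=1)   # (B, N, 1), "where"
        attended = (attn * vis_feats).sum(dim=1)     # (B, vis_dim), "what"
        return self.classify(attended), attn

model = AttentionTransfer()
logits, attn = model(torch.randint(0, 1000, (2, 6)), torch.randn(2, 49, 256))
```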


Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment

arXiv.org Artificial Intelligence

The Industry 4.0 paradigm promises shorter development times, improved ergonomics, higher flexibility, and resource efficiency in manufacturing environments. Collaborative robots are an important tangible technology for implementing this paradigm. A major bottleneck in effectively deploying collaborative robots in manufacturing is developing task planning algorithms that enable them to recognize and naturally adapt to varying and even unpredictable human actions while simultaneously ensuring overall efficiency in terms of production cycle time. In this context, an architecture encompassing task representation, task planning, sensing, and robot control has been designed, developed, and evaluated in a real industrial environment. A pick-and-place palletization task requiring collaboration between humans and robots is investigated. The architecture uses AND/OR graphs for representing and reasoning upon human-robot collaboration models online. Furthermore, objective measures of overall computational performance and subjective measures of naturalness in human-robot collaboration were evaluated through experiments with production-line operators. The results of this user study demonstrate how human-robot collaboration models like the one proposed here can leverage the flexibility and comfort of operators in the workplace. In this regard, an extensive comparison with recent models has been carried out.
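
The sketch below illustrates how an AND/OR graph can represent such a collaborative task; the palletization decomposition and node names are illustrative assumptions, not the paper's exact model:

```python
# A minimal sketch of an AND/OR graph for task representation. OR nodes are
# satisfied by any one child (either agent may do the work, which is where
# the flexibility comes from); AND nodes require all children.
GRAPH = {
    "pallet_done":  ("AND", ["layer_1", "layer_2"]),
    "layer_1":      ("OR",  ["robot_places_1", "human_places_1"]),
    "layer_2":      ("OR",  ["robot_places_2", "human_places_2"]),
}

def achievable(node, done):
    """True if `node` is completed given the set of finished primitives."""
    if node not in GRAPH:                 # leaf: a primitive action
        return node in done
    kind, children = GRAPH[node]
    results = [achievable(c, done) for c in children]
    return all(results) if kind == "AND" else any(results)

# The human placed layer 1 and the robot placed layer 2: the goal is reached
# regardless of which agent performed each layer.
print(achievable("pallet_done", {"human_places_1", "robot_places_2"}))  # True
```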


Make it So: Continuous, Flexible Natural Language Interaction with an Autonomous Robot

AAAI Conferences

While highly constrained language can be used for robot control, robots that operate as fully autonomous subordinate agents communicating via rich language remain an open challenge. Toward this end, we developed an autonomous system that supports natural, continuous interaction with the operator through language before, during, and after mission execution. The operator communicates instructions to the system in natural language and receives feedback on how each instruction was understood as the system constructs a logical representation of its orders. While the plan is executed, the operator is updated on relevant progress via language and images and can change the robot's orders. Unlike many other integrated systems of this type, the language interface is built using robust, general-purpose parsing and semantics systems that do not rely on domain-specific grammars. This system demonstrates a new level of continuous natural language interaction and a novel approach that uses general-purpose language and planning components instead of hand-built, domain-specific ones. Language-enabled autonomous systems of this type represent important progress toward the goal of integrating robots as effective members of human teams.
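
The following toy loop illustrates the instruct-confirm-execute interaction pattern described above; the parse() stub and the logical-form syntax are assumptions for illustration, whereas the actual system uses general-purpose parsing and semantics components:

```python
# A minimal sketch of the interaction loop described above: an instruction is
# mapped to a logical form, echoed back to the operator as feedback on how it
# was understood, and appended to the standing orders.
def parse(utterance: str) -> str:
    """Stand-in for a general-purpose parser + semantics pipeline."""
    verb, *rest = utterance.lower().rstrip(".").split()
    return f"{verb}({', '.join(rest)})"

orders = []
for utterance in ["Go to the north building.", "Search room three."]:
    logical_form = parse(utterance)
    print(f"Understood: {logical_form}")   # feedback on the interpretation
    orders.append(logical_form)

print("Mission orders:", orders)
# -> Mission orders: ['go(to, the, north, building)', 'search(room, three)']
```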