Yanco, Holly A.
Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability
Han, Zhao, Wilkinson, Alexander, Parrillo, Jenna, Allspaw, Jordan, Yanco, Holly A.
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states such as perception results and action intent. Projecting these states directly onto a robot's operating environment is direct, accurate, and more salient than such cues, eliminating the need to mentally infer the robot's intention. However, compared to established motion planning libraries (e.g., MoveIt), robotics lacks tools for projection mapping. In this paper, we detail the implementation of projection mapping to enable researchers and practitioners to push the boundaries of interaction between robots and humans. We also provide practical documentation and code for a sample manipulation projection mapping on GitHub: https://github.com/uml-robotics/projection_mapping.
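To make the core idea concrete, below is a minimal sketch, in the spirit of (but not taken from) the repository above, of rendering a perception result in projector pixel coordinates. It assumes the projector has been calibrated as a pinhole camera usable with OpenCV's projection model; the intrinsics, projector pose, and object corners are hypothetical placeholder values.

    # Minimal sketch (not the repository's actual code): draw a detected
    # object's 3-D outline in projector pixels, assuming the projector is
    # modeled as a calibrated pinhole camera. All numbers are placeholders.
    import cv2
    import numpy as np

    # Assumed projector intrinsics and pose in the robot's world frame.
    K = np.array([[1400.0, 0.0, 960.0],
                  [0.0, 1400.0, 540.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                 # assume negligible lens distortion
    rvec = np.zeros(3)                 # optical axis aligned with world z
    tvec = np.array([0.0, 0.0, 1.5])   # projector 1.5 m above the workspace

    def render_detection(corners_world, size=(1920, 1080)):
        """Project 3-D outline corners into projector pixels and draw them."""
        px, _ = cv2.projectPoints(corners_world.astype(np.float64),
                                  rvec, tvec, K, dist)
        frame = np.zeros((size[1], size[0], 3), dtype=np.uint8)
        cv2.polylines(frame, [px.reshape(-1, 1, 2).astype(np.int32)],
                      isClosed=True, color=(0, 255, 0), thickness=4)
        return frame

    # Example: a 10 cm square object face lying on the table plane (z = 0).
    corners = np.array([[0.20, 0.10, 0.0], [0.30, 0.10, 0.0],
                        [0.30, 0.20, 0.0], [0.20, 0.20, 0.0]])
    # In practice the frame would be shown full screen on the projector's
    # display; writing it to disk keeps the sketch runnable anywhere.
    cv2.imwrite("projector_frame.png", render_detection(corners))

The same projection step generalizes from perception results to action intent, e.g., drawing a planned grasp point or end-effector path instead of a detection outline.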
Reasons People Want Explanations After Unrecoverable Pre-Handover Failures
Han, Zhao, Yanco, Holly A.
Most research on human-robot handovers focuses on developing comfortable and efficient interactions; few studies have examined handover failures. A failure at the beginning of the interaction prevents the rest of the handover from happening and damages trust. Here we analyze the underlying reasons why people want explanations in a handover scenario where a robot fails to gain possession of the requested object. Results suggest that participants form expectations about their requests and that a robot should provide explanations, rather than only non-verbal cues, after failing. Participants also expect that a robot can fulfill their handover request and, if it cannot, would like to use the provided explanations to fix the robot or change the request.
Make it So: Continuous, Flexible Natural Language Interaction with an Autonomous Robot
Brooks, Daniel J. (University of Massachusetts Lowell) | Lignos, Constantine (University of Pennsylvania) | Finucane, Cameron (Cornell University) | Medvedev, Mikhail S. (University of Massachusetts Lowell) | Perera, Ian (University of Rochester) | Raman, Vasumathi (Cornell University) | Kress-Gazit, Hadas (Cornell University) | Marcus, Mitch (University of Pennsylvania) | Yanco, Holly A. (University of Massachusetts Lowell)
While highly constrained language can be used for robot control, robots that can operate as fully autonomous subordinate agents communicating via rich language remain an open challenge. Toward this end, we developed an autonomous system that supports natural, continuous interaction with the operator through language before, during, and after mission execution. The operator communicates instructions to the system through natural language and is given feedback on how each instruction was understood as the system constructs a logical representation of its orders. While the plan is executed, the operator is updated on relevant progress via language and images and can change the robot's orders. Unlike many other integrated systems of this type, the language interface is built using robust, general purpose parsing and semantics systems that do not rely on domain-specific grammars. This system demonstrates a new level of continuous natural language interaction and a novel approach to using general-purpose language and planning components instead of hand-building for the domain. Language-enabled autonomous systems of this type represent important progress toward the goal of integrating robots as effective members of human teams.
Towards State Summarization for Autonomous Robots
Brooks, Daniel (University of Massachusetts Lowell) | Shultz, Abraham (University of Massachusetts Lowell) | Desai, Munjal (University of Massachusetts Lowell) | Kovac, Philip (University of Massachusetts Lowell) | Yanco, Holly A. (University of Massachusetts Lowell)
Mobile robots are an increasingly important part of search and rescue efforts as well as military combat. For users to accept these robots and use them effectively, they must be able to communicate clearly with the robots and obtain explanations of the robots' behavior that allow them to understand the robots' actions. This paper describes part of a software system that will be able to produce explanations of a robot's behavior and situation in an interaction with a human operator.
The AAAI-2002 Mobile Robot Competition and Exhibition
Yanco, Holly A., Balch, Tucker
Usually it is attendees with names beginning A-L who are encouraged to line up behind the registration desk, but some robots at the 2002 American Association for Artificial Intelligence (AAAI) Mobile Robot Competition and Exhibition actually registered for the conference on their own. Held annually since AAAI-92, it is the oldest AI-centric mobile robot competition and exhibition. The 2002 event included three competitions, the Robot Challenge, Robot Rescue, and Robot Host; YSC, an Iranian team, took top honors in the Rescue event. In 2002, the event was organized by Holly Yanco of the University of Massachusetts at Lowell and Tucker Balch of the Georgia Institute of Technology, and Robot Host was cochaired by David Gustafson of Kansas State University and Francois Michaud of Universite de Sherbrooke.
The 1997 AAAI Mobile Robot Exhibition
Yanco, Holly A.
A wide variety of robotics research was demonstrated at the 1997 Association for the Advancement of Artificial Intelligence Mobile Robot Exhibition. Twenty-one robotic teams participated, making it the largest exhibition ever. This article describes the robotics research presented by the participating teams.
The "Hors d'Oeuvres, Anyone?" Event
Yanco, Holly A.
The "Hors d'Oeuvres, Anyone?" Event
Yanco, Holly A.
The first Hors d'Oeuvres, Anyone? event at the Association for the Advancement of Artificial Intelligence Mobile Robot Competition was held in 1997. Five teams entered their robotic waiters into the contest. After a preliminary round to judge the safety of the robots, the robots served conference attendees at the opening reception of the Fourteenth National Conference on Artificial Intelligence.
The 1997 AAAI Mobile Robot Exhibition
Yanco, Holly A.
A wide variety of robotics research was exhibited at the Fourteenth National Conference on Artificial Intelligence (AAAI-97); twenty-one robotic teams participated, making this the largest robot exhibition ever (see figure 1 for a photo of the exhibition participants). Since the first Mobile Robot Competition and Exhibition at AAAI-92, the exhibition has served to demonstrate the state of the art in robotics research. One exhibited robot uses a layered architecture for integrating planning and action; it differs from the usual approach of interfacing a planner to a reactive system in a layered architecture because the reactive system is replaced with a different kind of action system.