“Sorry, I Can’t Do That”: Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions
Briggs, Gordon Michael (Tufts University) | Scheutz, Matthias (Tufts University)
An ongoing goal at the intersection of artificial intelligence (AI), robotics, and human-robot interaction (HRI) is to create autonomous agents that can assist and interact with human teammates in natural and humanlike ways. This is a multifaceted challenge, involving both the development of an ever-expanding set of capabilities (both physical and algorithmic) such that robotic agents can autonomously engage in a variety of useful tasks, as well as the development of interaction mechanisms (e.g.

In this paper, we briefly present initial work that has been done in the DIARC/ADE cognitive robotic architecture (Schermerhorn et al. 2006; Kramer and Scheutz 2006) to enable such a rejection and explanation mechanism. First we discuss the theoretical considerations behind this challenge, specifically the conditions that must be met for a directive to be appropriately accepted. Next, we briefly present some of the explicit reasoning mechanisms developed in order to facilitate
Nov-1-2015