Utilising Explanations to Mitigate Robot Conversational Failures
This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. As humanoid robots remain experimental tools in the early 2020s, interactions with robots take place overwhelmingly in controlled environments, typically studying various interactional phenomena. Such interactions lack real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures by expressing robot legibility and incapability, and within the perspective of common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots, and what potential lies at the intersection of HRI and explainability research.
Helping Robots Express Themselves When They Fail
With some limited exceptions, robots are terrible at doing almost everything that humans take for granted. For people who work with robots, this is normal and expected, but for everyone else, it's not immediately clear just how terrible robots are, especially if the robot in question looks human-like enough to generate expectations of human-like capability. Bimanual mobile manipulators like the PR2 are particularly misleading: with heads and bodies and arms, it's easy to look at them and think that they should have no problem doing all kinds of things. And then, of course, comes the inevitable disappointment when you realize that (among other things) round doorknobs make for an impassable obstacle. At the ACM/IEEE International Conference on Human-Robot Interaction (HRI) earlier this month, researchers from Cornell and UC Berkeley presented work on how robots can effectively express themselves when they're incapable of doing a task.