Toward RAPS: the Robot Autonomy Perception Scale
Silva, Rafael Sousa, Smith, Cailyn, Bezerra, Lara, Williams, Tom
Human-robot interactions can change significantly depending on how autonomous humans perceive a robot to be. Yet, while previous work in the HRI community has measured perceptions of human autonomy, there is little work on measuring perceptions of robot autonomy. In this paper, we present our progress toward the creation of the Robot Autonomy Perception Scale (RAPS): a theoretically motivated scale for measuring human perceptions of robot autonomy. We formulated a set of fifteen Likert-scale items based on Beer et al.'s definition of autonomy, which identifies five key autonomy components: the ability to sense, the ability to plan, the ability to act, the ability to act with intent towards some goal, and the ability to do so without external control. We applied RAPS in an experimental context in which a robot communicated with a human teammate at different levels of Performative Autonomy (PA): an autonomy-driven strategy in which robots may "perform" a lower level of autonomy than they are truly capable of in order to increase human situational awareness. Our results provide preliminary validation for RAPS by demonstrating its sensitivity to PA, and motivate its further validation.
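To make the scale's structure concrete, the sketch below groups hypothetical Likert responses by the five autonomy components named in the abstract and averages them per component. The component labels come from the abstract; the three-items-per-component grouping, the 1-7 response format, and all names in the code are illustrative assumptions rather than the validated instrument.

```python
# Minimal sketch of aggregating RAPS-style Likert responses per autonomy
# component. Component names follow the abstract; item counts, the 1-7 scale,
# and the scoring rule are assumptions, not the validated instrument.
from statistics import mean

COMPONENTS = ["sense", "plan", "act", "act_with_intent", "freedom_from_external_control"]

def component_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the Likert ratings (e.g., 1-7) for each autonomy component."""
    return {c: mean(responses[c]) for c in COMPONENTS}

# Example: three hypothetical items per component for one participant.
participant = {
    "sense": [6, 7, 6],
    "plan": [5, 5, 6],
    "act": [7, 6, 7],
    "act_with_intent": [4, 5, 5],
    "freedom_from_external_control": [3, 2, 3],
}
print(component_scores(participant))
```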
Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community
Kennington, Casey, Alikhani, Malihe, Pon-Barry, Heather, Atwell, Katherine, Bisk, Yonatan, Fried, Daniel, Gervits, Felix, Han, Zhao, Inan, Mert, Johnston, Michael, Korpan, Raj, Litman, Diane, Marge, Matthew, Matuszek, Cynthia, Mead, Ross, Mohan, Shiwali, Mooney, Raymond, Parde, Natalie, Sinapov, Jivko, Stewart, Angela, Stone, Matthew, Tellex, Stefanie, Williams, Tom
The ability to interact with machines using natural human language is becoming not just commonplace, but expected. The next step is not just text interfaces but speech interfaces, and not just with computers but with all machines, including robots. In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots and offer the community three proposals: the first focused on education, the second on benchmarks, and the third on the modeling of language for spoken interaction with robots. The three proposals should act as white papers for any researcher to take and build upon.
Introduction to Human-Robot Interaction: A Multi-Perspective Introductory Course
Williams, Tom
In this paper I describe the design of an introductory course in Human-Robot Interaction. This project-driven course is designed to introduce undergraduate and graduate engineering students, especially those enrolled in Computer Science, Mechanical Engineering, and Robotics degree programs, to key theories and methods used in the field of Human-Robot Interaction that they would otherwise be unlikely to see in those degree programs. To achieve this aim, the course takes students all the way from stakeholder analysis to empirical evaluation, covering and integrating key Qualitative, Design, Computational, and Quantitative methods along the way. I detail the goals, audience, and format of the course, and provide a detailed walkthrough of the course syllabus.
Toward Givenness Hierarchy Theoretic Natural Language Generation
Pal, Poulomi, Williams, Tom
Language-capable interactive robots participating in dialogues with human interlocutors must be able to naturally and efficiently communicate about the entities in their environment. A key aspect of such communication is the use of anaphoric language. The linguistic theory of the Givenness Hierarchy (GH) suggests that humans use anaphora based on the cognitive statuses their referents have in the minds of their interlocutors. In previous work, researchers presented GH-theoretic approaches to robot anaphora understanding. In this paper we describe how the GH might need to be used quite differently to facilitate robot anaphora generation.
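As a rough illustration of the generation direction described here, the sketch below maps an assumed cognitive status to a referring form licensed by the Givenness Hierarchy. The status tiers follow the GH literature; the form choices and the function itself are a simplified illustration, not the generation approach the paper proposes.

```python
# Toy illustration of Givenness Hierarchy-theoretic generation: pick a referring
# form licensed by the referent's (assumed) cognitive status. The status tiers
# follow the GH; this mapping is a simplification, not the paper's approach.
def choose_form(status: str, noun: str) -> str:
    forms = {
        "in_focus": "it",                       # highest status: pronoun
        "activated": f"this {noun}",            # e.g., "this"/"that"/"this N"
        "familiar": f"that {noun}",
        "uniquely_identifiable": f"the {noun}",
        "referential": f"this {noun}",          # indefinite "this N"
        "type_identifiable": f"a {noun}",       # lowest status: indefinite
    }
    return forms.get(status, f"a {noun}")

print(choose_form("in_focus", "mug"))   # -> "it"
print(choose_form("familiar", "mug"))   # -> "that mug"
```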
Enabling Morally Sensitive Robotic Clarification Requests
Jackson, Ryan Blake, Williams, Tom
The design of current natural language-oriented robot architectures enables certain architectural components to circumvent moral reasoning capabilities. One example of this is reflexive generation of clarification requests as soon as referential ambiguity is detected in a human utterance. As shown in previous research, this can lead robots to (1) miscommunicate their moral dispositions and (2) weaken human perception or application of moral norms within their current context. We present a solution to these problems by performing moral reasoning on each potential disambiguation of an ambiguous human utterance and responding accordingly, rather than immediately and naively requesting clarification. We implement our solution in the DIARC robot architecture, which, to our knowledge, is the only current robot architecture with both moral reasoning and clarification request generation capabilities. We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.
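The sketch below illustrates the strategy described in this abstract: enumerate the candidate readings of an ambiguous utterance, check each for moral permissibility, and only request clarification when more than one acceptable reading remains. The function names, permissibility check, and response strings are placeholders, not DIARC's API.

```python
# Hedged sketch of moral reasoning over candidate disambiguations before
# deciding how to respond. `morally_permissible` and the response strings
# are placeholders, not the DIARC implementation.
from typing import Callable

def respond(candidates: list[str],
            morally_permissible: Callable[[str], bool]) -> str:
    permissible = [c for c in candidates if morally_permissible(c)]
    if not permissible:
        # Every reading violates a norm: reject rather than ask for clarification.
        return "I can't do that; it would violate a norm I must follow."
    if len(permissible) == 1:
        # Only one acceptable reading remains: act on it, no naive clarification.
        return f"Okay, I will {permissible[0]}."
    # Multiple acceptable readings: now a clarification request is warranted.
    options = " or ".join(permissible)
    return f"Do you mean {options}?"

print(respond(["hand you the red block", "hand you the blue block"],
              lambda c: "red" not in c))
```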
Toward Forgetting-Sensitive Referring Expression Generation for Integrated Robot Architectures
Williams, Tom, Johnson, Torin, Culpepper, Will, Larson, Kellyn
To engage in human-like dialogue, robots require the ability to describe the objects, locations, and people in their environment, a capability known as "Referring Expression Generation." As speakers repeatedly refer to similar objects, they tend to re-use properties from previous descriptions, in part to help the listener, and in part due to cognitive availability of those properties in working memory (WM). Because different theories of working memory "forgetting" necessarily lead to differences in cognitive availability, we hypothesize that they will similarly result in generation of different referring expressions. To design effective intelligent agents, it is thus necessary to determine how different models of forgetting may be differentially effective at producing natural human-like referring expressions. In this work, we computationalize two candidate models of working memory forgetting within a robot cognitive architecture, and demonstrate how they lead to cognitive availability-based differences in generated referring expressions.
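To illustrate how different forgetting models could yield different cognitive availability, the sketch below contrasts two toy working-memory buffers for recently used description properties: one with time-based decay and one with capacity-based displacement. The paper computationalizes two candidate models; these particular rules, parameters, and class names are assumptions for illustration only.

```python
# Two illustrative working-memory "forgetting" models for recently used
# description properties. The decay/displacement rules and parameters here
# are assumptions, not the models computationalized in the paper.
from collections import OrderedDict

class DecayBuffer:
    """Properties fade after a fixed number of utterances (time-based decay)."""
    def __init__(self, lifetime: int = 3):
        self.lifetime = lifetime
        self.ages: dict[str, int] = {}

    def use(self, prop: str) -> None:
        self.ages[prop] = 0

    def tick(self) -> None:
        # Age every property by one utterance; drop those past their lifetime.
        self.ages = {p: a + 1 for p, a in self.ages.items() if a + 1 < self.lifetime}

    def available(self) -> set[str]:
        return set(self.ages)

class DisplacementBuffer:
    """A fixed-capacity buffer: the oldest property is displaced when full."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.items: OrderedDict[str, None] = OrderedDict()

    def use(self, prop: str) -> None:
        self.items.pop(prop, None)
        self.items[prop] = None          # most recent at the end
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)

    def available(self) -> set[str]:
        return set(self.items)
```

An REG module would then prefer properties still returned by available() when re-describing an object, so the two policies lead to different descriptions as a dialogue proceeds.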
Givenness Hierarchy Theoretic Cognitive Status Filtering
Pal, Poulomi, Zhu, Lixiao, Golden-Lasher, Andrea, Swaminathan, Akshay, Williams, Tom
For language-capable interactive robots to be effectively introduced into human society, they must be able to naturally and efficiently communicate about the objects, locations, and people found in human environments. An important aspect of natural language communication is the use of pronouns. According to the linguistic theory of the Givenness Hierarchy (GH), humans use pronouns due to implicit assumptions about the cognitive statuses their referents have in the minds of their conversational partners. In previous work, Williams et al. presented the first computational implementation of the full GH for the purpose of robot language understanding, leveraging a set of rules informed by the GH literature. However, that approach was designed specifically for language understanding, oriented around GH-inspired memory structures used to assess what entities are candidate referents given a particular cognitive status. In contrast, language generation requires a model in which cognitive status can be assessed for a given entity. We present and compare two such models of cognitive status: a rule-based Finite State Machine model directly informed by the GH literature and a Cognitive Status Filter designed to more flexibly handle uncertainty. The models are demonstrated and evaluated using a silver-standard English subset of the OFAI Multimodal Task Description Corpus.
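To give a flavor of the rule-based, finite-state variant, the sketch below tracks one entity's cognitive status as a state that dialogue events promote or demote. The statuses follow the GH; the particular events and transition rules are assumptions for illustration, not the rule set implemented and evaluated in the paper (and the Cognitive Status Filter would instead maintain a distribution over statuses to handle uncertainty).

```python
# Illustrative finite-state model of one entity's cognitive status. The statuses
# follow the Givenness Hierarchy; the events and transition rules below are
# assumptions for illustration, not the paper's rule set.
STATUSES = ["type_identifiable", "familiar", "activated", "in_focus"]

def update_status(status: str, event: str) -> str:
    """Update one entity's status given a dialogue/perception event."""
    if event == "mentioned_in_current_utterance":
        return "in_focus"
    if event in ("mentioned_recently", "jointly_attended"):
        # Promote to at least 'activated'.
        return max(status, "activated", key=STATUSES.index)
    if event == "not_mentioned_for_a_while" and status == "in_focus":
        return "activated"           # fall back one tier as attention shifts
    if event == "dialogue_moved_on" and status == "activated":
        return "familiar"
    return status

s = "type_identifiable"
for e in ["jointly_attended", "mentioned_in_current_utterance", "not_mentioned_for_a_while"]:
    s = update_status(s, e)
print(s)  # -> "activated"
```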
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction
Williams, Tom (Colorado School of Mines) | Szafir, Daniel (University of Colorado Boulder) | Chakraborti, Tathagata (Arizona State University) | Amor, Heni Ben (Arizona State University)
The 1st International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) was held in 2018 in conjunction with the 13th International Conference on Human-Robot Interaction, and brought together researchers from the fields of Human-Robot Interaction (HRI), Robotics, Artificial Intelligence, and Virtual, Augmented, and Mixed Reality in order to identify challenges in mixed reality interactions between humans and robots. This inaugural workshop featured a keynote talk from Blair MacIntyre (Mozilla, Georgia Tech), a panel discussion, and twenty-nine papers presented as lightning talks and/or posters. In this report, we briefly survey the papers presented at the workshop and outline some potential directions for the community.
Augmenting Robot Knowledge Consultants with Distributed Short Term Memory
Williams, Tom, Thielstrom, Ravenna, Krause, Evan, Oosterveld, Bradley, Scheutz, Matthias
Human-robot communication in situated environments involves a complex interplay between knowledge representations across a wide variety of modalities. Crucially, linguistic information must be associated with representations of objects, locations, people, and goals, which may be represented in very different ways. In previous work, we developed a Consultant Framework that facilitates modality-agnostic access to information distributed across a set of heterogeneously represented knowledge sources. In this work, we draw inspiration from cognitive science to augment these distributed knowledge sources with Short Term Memory Buffers to create an STM-augmented algorithm for referring expression generation. We then discuss the potential performance benefits of this approach and insights from cognitive science that may inform future refinements in the design of our approach.
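The sketch below illustrates the general idea of pairing a heterogeneously represented knowledge source with a small short-term memory buffer, so that recently accessed entities are cheap to retrieve during referring expression generation. The class names, the recency-based buffer policy, and the capacity are placeholders, not the Consultant Framework's actual interfaces.

```python
# Hedged sketch of wrapping a knowledge "consultant" with a short-term memory
# buffer of recently queried entities. Class names and buffer policy are
# placeholders, not the Consultant Framework's API.
from collections import deque

class Consultant:
    """A modality-specific knowledge source (vision, maps, people, ...)."""
    def __init__(self, name: str, knowledge: dict[str, set[str]]):
        self.name = name
        self.knowledge = knowledge          # entity -> set of properties

    def properties_of(self, entity: str) -> set[str]:
        return self.knowledge.get(entity, set())

class STMConsultant:
    """Wraps a consultant with a small buffer of recently queried entities."""
    def __init__(self, consultant: Consultant, capacity: int = 4):
        self.consultant = consultant
        self.stm: deque[str] = deque(maxlen=capacity)

    def properties_of(self, entity: str) -> set[str]:
        if entity in self.stm:
            self.stm.remove(entity)
        self.stm.append(entity)             # most recent at the right
        return self.consultant.properties_of(entity)

    def recently_mentioned(self) -> list[str]:
        return list(self.stm)

vision = STMConsultant(Consultant("vision", {"mug1": {"red", "ceramic"}}))
vision.properties_of("mug1")
print(vision.recently_mentioned())  # -> ['mug1']
```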
Quasi-Dilemmas for Artificial Moral Agents
Kasenberg, Daniel, Sarathy, Vasanth, Arnold, Thomas, Scheutz, Matthias, Williams, Tom
In this paper we describe moral quasi-dilemmas (MQDs): situations similar to moral dilemmas, but in which an agent is unsure whether exploring the plan space or the world may reveal a course of action that satisfies all moral requirements. We argue that artificial moral agents (AMAs) should be built to handle MQDs (in particular, by exploring the plan space rather than immediately accepting the inevitability of the moral dilemma), and that MQDs may be useful for evaluating AMA architectures.
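The sketch below captures the behavior argued for here: before concluding that a situation is a genuine dilemma, search the available plans for one that violates no moral requirement. The planner and norm-checking interfaces are illustrative placeholders, not a proposed AMA architecture.

```python
# Hedged sketch of handling a moral quasi-dilemma: explore the plan space for a
# norm-satisfying plan before accepting the dilemma as genuine. The plan and
# norm interfaces below are placeholders for illustration.
from typing import Callable, Iterable, Optional

def resolve_quasi_dilemma(
    candidate_plans: Iterable[list[str]],
    violates_any_norm: Callable[[list[str]], bool],
) -> Optional[list[str]]:
    """Return the first plan violating no norm, or None if the dilemma is real."""
    for plan in candidate_plans:
        if not violates_any_norm(plan):
            return plan
    return None   # no norm-satisfying plan found: a genuine dilemma remains

plans = [["push_cart"], ["warn_bystander", "push_cart"], ["wait"]]
ok = resolve_quasi_dilemma(plans, lambda p: "warn_bystander" not in p)
print(ok)  # -> ['warn_bystander', 'push_cart']
```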