
 Mutlu, Bilge


Designing for Caregiving: Integrating Robotic Assistance in Senior Living Communities

arXiv.org Artificial Intelligence

Robots hold significant promise to assist in providing care to an aging population and to help address increasing demands on caregivers. Although a large body of research has explored robotic assistance for individuals with disabilities and age-related challenges, this past work focuses primarily on building robotic capabilities for assistance and has not yet fully considered how these capabilities could be used by professional caregivers. To better understand the workflows and practices of caregivers who support aging populations and to determine how robotic assistance can be integrated into their work, we conducted a field study using ethnographic and co-design methods in a senior living community. From our results, we created a set of design opportunities for robotic assistance, which we organized into three themes: supporting caregiver workflows, adapting to resident abilities, and providing feedback to all stakeholders in the interaction.


Characterizing Input Methods for Human-to-robot Demonstrations

arXiv.org Artificial Intelligence

Human demonstrations are important in a range of robotics applications, and are created with a variety of input methods. However, the design space for these input methods has not been extensively studied. In this paper, focusing on demonstrations of hand-scale object manipulation tasks to robot arms with two-finger grippers, we identify distinct usage paradigms in robotics that utilize human-to-robot demonstrations, extract abstract features that form a design space for input methods, and characterize existing input methods as well as a novel input method that we introduce, the instrumented tongs. We detail the design specifications for our method and present a user study that compares it against three common input methods: free-hand manipulation, kinesthetic guidance, and teleoperation. Study results show that instrumented tongs provide high quality demonstrations and a positive experience for the demonstrator while offering good correspondence to the target robot.
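
To make the notion of a recorded demonstration concrete, the sketch below shows one possible data layout for demonstrations captured with a handheld input device and mapped to a two-finger gripper. The field names, the DemoFrame and Demonstration types, and the 0.085 m aperture limit are illustrative assumptions; the paper's actual recording format and design specifications are not reproduced here.

```python
"""
Hypothetical sketch of what a recorded demonstration might look like as data.
The abstract does not specify a format; the fields and the correspondence
mapping below are illustrative assumptions, not the paper's specification.
"""
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DemoFrame:
    """One timestamped sample from an instrumented input device."""
    t: float                              # seconds since demonstration start
    position: Tuple[float, float, float]  # tool position in the workspace frame (m)
    orientation: Tuple[float, float, float, float]  # unit quaternion (x, y, z, w)
    aperture: float                       # opening of the tongs/gripper (m)


@dataclass
class Demonstration:
    task: str
    frames: List[DemoFrame]


def to_gripper_targets(demo: Demonstration, max_aperture: float = 0.085):
    """Map tong apertures to normalized two-finger gripper commands in [0, 1].
    The 0.085 m default is an assumed gripper limit, not a measured value."""
    return [
        (f.t, f.position, f.orientation, min(f.aperture / max_aperture, 1.0))
        for f in demo.frames
    ]
```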



Turn-Taking and Coordination in Human-Machine Interaction

AI Magazine

This issue of AI Magazine brings together a collection of articles on challenges, mechanisms, and research progress in turn-taking and coordination between humans and machines. The contributing authors work in interrelated fields of spoken dialog systems, intelligent virtual agents, human-computer interaction, human-robot interaction, and semiautonomous collaborative systems and explore core concepts in coordinating speech and actions with virtual agents, robots, and other autonomous systems. Several of the contributors participated in the AAAI Spring Symposium on Turn-Taking and Coordination in Human-Machine Interaction, held in March 2015, and several articles in this issue are extensions of work presented at that symposium. The articles in the collection address key modeling, methodological, and computational challenges in achieving effective coordination with machines, propose solutions that overcome these challenges under sensory, cognitive, and resource restrictions, and illustrate how such solutions can facilitate coordination across diverse and challenging domains. The contributions highlight turn-taking and coordination in human-machine interaction as an emerging and evolving research area with important implications for future applications of AI.


Modeling Human-Robot Interactions as Systems of Distributed Cognition

AAAI Conferences

Robots that are integrated into day-to-day settings as assistants, collaborators, and companions will engage in dynamic, physically situated social interactions with their users. Enabling such interactions will require appropriate models and representations for interaction. In this paper, we argue that the dynamic, physically situated interactions between humans and robots can be characterized as a system of distributed cognition, that this system can be represented using probabilistic graphical models (PGMs), and that the parameters of these models can be learned from human interactions. We illustrate the application of this perspective in our ongoing research on modeling dyadic referential communication.
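
As a rough illustration of the modeling idea the abstract describes, the following sketch builds a toy probabilistic graphical model for dyadic referential communication: a referent variable with spoken-word and gaze-cue observations, whose conditional probability tables are estimated from logged interactions. The scene objects, cue vocabulary, and the learn_cpts and infer_referent functions are hypothetical and simplified; they are not the paper's model.

```python
"""
Illustrative sketch only: a toy probabilistic graphical model for dyadic
referential communication, in the spirit of the approach the abstract
describes. The variables, objects, and probability tables below are
hypothetical and are not taken from the paper.
"""
from collections import defaultdict

# Hypothetical scene: the robot must infer which object the human refers to.
OBJECTS = ["red_mug", "blue_mug", "red_book"]


def learn_cpts(interactions):
    """Estimate P(word | referent) and P(gaze | referent) from logged
    interactions, each a (referent, word, gaze_target) triple."""
    word_counts = defaultdict(lambda: defaultdict(int))
    gaze_counts = defaultdict(lambda: defaultdict(int))
    for referent, word, gaze in interactions:
        word_counts[referent][word] += 1
        gaze_counts[referent][gaze] += 1

    def normalize(counts, smoothing=1.0):
        vocab = set(k for c in counts.values() for k in c)
        cpt = {}
        for ref in OBJECTS:
            total = sum(counts[ref].values()) + smoothing * len(vocab)
            cpt[ref] = {v: (counts[ref][v] + smoothing) / total for v in vocab}
        return cpt

    return normalize(word_counts), normalize(gaze_counts)


def infer_referent(word, gaze, p_word, p_gaze):
    """Posterior over referents, assuming a uniform prior and that the spoken
    word and gaze cue are conditionally independent given the referent."""
    scores = {}
    for ref in OBJECTS:
        scores[ref] = p_word[ref].get(word, 1e-6) * p_gaze[ref].get(gaze, 1e-6)
    z = sum(scores.values())
    return {ref: s / z for ref, s in scores.items()}


if __name__ == "__main__":
    # Hypothetical logged interactions: (referent, spoken word, gaze target).
    data = [
        ("red_mug", "red", "red_mug"), ("red_mug", "mug", "red_mug"),
        ("blue_mug", "blue", "blue_mug"), ("blue_mug", "mug", "red_mug"),
        ("red_book", "red", "red_book"), ("red_book", "book", "red_book"),
    ]
    p_word, p_gaze = learn_cpts(data)
    print(infer_referent("red", "red_book", p_word, p_gaze))
```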


Introduction to the Special Issue on Dialog with Robots

AI Magazine

This special issue of AI Magazine on dialog with robots brings together a collection of articles on situated dialog. The contributing authors have been working in interrelated fields of human-robot interaction, dialog systems, virtual agents, and other related areas and address core concepts in spoken dialog with embodied robots or agents. Several of the contributors participated in the AAAI Fall Symposium on Dialog with Robots, held in November 2010, and several articles in this issue are extensions of work presented there. The articles in this collection address diverse aspects of dialog with robots, but are unified in addressing opportunities with spoken language interaction, physical embodiment, and enriched representations of context.


Introduction to the Special Issue on Dialog with Robots

AI Magazine

In parallel with these efforts, significant advances have also been made in robotics. Innovations in sensing, reasoning, and manipulation have allowed autonomous robots to move beyond the walls of computing labs into the workplace, home, and street. Bringing robots into real-world environments has made it clear to researchers that robots need not only to navigate and manipulate objects accurately, but also to work alongside and, ultimately, interact and collaborate with humans. Subsequently, efforts at the intersection of spoken dialogue and human-robot interaction (HRI) have sought to broaden studies of spoken dialogue to richer, more natural, physically situated settings, and have brought to the fore the rich research area of situated dialogue, which focuses on challenges and opportunities at the intersection of natural language, robotics, and commonsense reasoning. Projects in this realm have addressed the challenges of using dialogue to coordinate multiple actors, taking into consideration not only the details of the task at hand, but also the dynamic physical and social context in which the actors are immersed and the affordances that embodiment provides. This special issue of AI Magazine on dialogue with robots brings together a collection of articles on situated dialogue.


Designing Embodied Cues for Dialog with Robots

AI Magazine

Of all computational systems, robots are unique in their ability to afford embodied interaction using the wide range of human communicative cues. Research on human communication provides strong evidence that embodied cues, when used effectively, elicit social, cognitive, and task outcomes such as improved learning, rapport, motivation, persuasion, and collaborative task performance. While this connection between embodied cues and key outcomes provides a unique opportunity for design, taking advantage of it requires a deeper understanding of how robots might use these cues effectively and the limitations in the extent to which they might achieve such outcomes through embodied interaction. This article aims to underline this opportunity by providing an overview of key embodied cues and outcomes in human communication and describing a research program that explores how robots might generate high-level social, cognitive, and task outcomes such as learning, rapport, and persuasion using embodied cues such as verbal, vocal, and nonverbal cues.


How Do Humans Teach: On Curriculum Learning and Teaching Dimension

Neural Information Processing Systems

We study the empirical strategies that humans follow as they teach a target concept with a simple 1D threshold to a robot. Previous studies of computational teaching, particularly the teaching dimension model and the curriculum learning principle, offer contradictory predictions on what optimal strategy the teacher should follow in this teaching task. We show through behavioral studies that humans employ three distinct teaching strategies, one of which is consistent with the curriculum learning principle, and propose a novel theoretical framework as a potential explanation for this strategy. This framework, which assumes a teaching goal of minimizing the learner's expected generalization error at each iteration, extends the standard teaching dimension model and offers a theoretical justification for curriculum learning.
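
The following minimal simulation, a sketch rather than the paper's actual setup, contrasts the two kinds of strategy the abstract mentions on the 1D threshold task: a curriculum-style teacher that orders examples from easy (far from the boundary) to hard, and a teaching-dimension-style teacher that shows the examples closest to the true threshold first, with a simple version-space learner whose generalization error is tracked after each example. The pool size, threshold value, and midpoint learner are assumptions made only for illustration.

```python
"""
A minimal simulation sketch of the 1D threshold teaching task described in
the abstract. The learner and the two teacher strategies below are simplified
illustrations, not the paper's models: a version-space learner that keeps the
interval of thresholds consistent with the labels it has seen, a
curriculum-style teacher that starts with extreme (easy) examples and moves
toward the boundary, and a teaching-dimension-style teacher that immediately
shows the examples closest to the true threshold.
"""
import numpy as np

rng = np.random.default_rng(0)
TRUE_THETA = 0.62                       # true threshold; label = 1 iff x >= theta
POOL = np.sort(rng.uniform(0, 1, 30))   # examples the teacher may choose from
LABELS = (POOL >= TRUE_THETA).astype(int)


def learner_error(shown):
    """Generalization error of a learner that picks the midpoint of the
    version space consistent with the shown (x, y) pairs."""
    lo, hi = 0.0, 1.0
    for x, y in shown:
        if y == 1:
            hi = min(hi, x)   # threshold must lie at or below a positive example
        else:
            lo = max(lo, x)   # threshold must lie above a negative example
    estimate = (lo + hi) / 2.0
    return abs(estimate - TRUE_THETA)   # error under a uniform test distribution


def curriculum_order():
    """Easy-to-hard: examples farthest from the true boundary first."""
    idx = np.argsort(-np.abs(POOL - TRUE_THETA))
    return [(POOL[i], LABELS[i]) for i in idx]


def teaching_dimension_order():
    """Boundary first: examples closest to the true threshold."""
    idx = np.argsort(np.abs(POOL - TRUE_THETA))
    return [(POOL[i], LABELS[i]) for i in idx]


for name, order in [("curriculum", curriculum_order()),
                    ("teaching-dimension", teaching_dimension_order())]:
    errors = [learner_error(order[:k]) for k in range(1, 6)]
    print(name, [round(e, 3) for e in errors])
```

With this toy learner, the boundary-first ordering typically collapses the version space within the first few examples, while the easy-first ordering narrows it only gradually; the script prints the error after each of the first five examples for both orderings.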


Reports of the AAAI 2010 Fall Symposia

AI Magazine

The Association for the Advancement of Artificial Intelligence was pleased to present the 2010 Fall Symposium Series, held Thursday through Saturday, November 11-13, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the eight symposia are as follows: (1) Cognitive and Metacognitive Educational Systems; (2) Commonsense Knowledge; (3) Complex Adaptive Systems: Resilience, Robustness, and Evolvability; (4) Computational Models of Narrative; (5) Dialog with Robots; (6) Manifold Learning and Its Applications; (7) Proactive Assistant Agents; and (8) Quantum Informatics for Cognitive, Social, and Semantic Processes. The highlights of each symposium are presented in this report.