Collaborating Authors

 Mutlu, Bilge


Coordinated Multi-Robot Shared Autonomy Based on Scheduling and Demonstrations

arXiv.org Artificial Intelligence

Shared autonomy methods, where a human operator and a robot arm work together, have enabled robots to complete a range of complex and highly variable tasks. Existing work primarily focuses on one human sharing autonomy with a single robot. By contrast, in this paper we present an approach for multi-robot shared autonomy that enables one operator to provide real-time corrections across two coordinated robots completing the same task in parallel. Sharing autonomy with multiple robots presents fundamental challenges. The human can only correct one robot at a time, and without coordination, the human may be left idle for long periods of time. Accordingly, we develop an approach that aligns the robots' learned motions to best utilize the human's expertise. Our key idea is to leverage Learning from Demonstration (LfD) and time warping to schedule the motions of the robots based on when they may require assistance. Our method uses variability in operator demonstrations to identify the types of corrections an operator might apply during shared autonomy, leverages flexibility in how quickly the task was performed in demonstrations to aid in scheduling, and iteratively estimates the likelihood of when corrections may be needed to ensure that only one robot is completing an action requiring assistance. Through a preliminary study, we show that our method can decrease the scheduled time spent sanding by iteratively estimating when each robot may need assistance and generating an optimized schedule that allows the operator to provide corrections to each robot during these times.
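The time-warping idea above can be illustrated with classic dynamic time warping (DTW), which aligns two demonstrations of the same motion that were performed at different speeds. This is a minimal, hypothetical sketch of that alignment step only, not the paper's implementation; the trajectories and function names are illustrative.

```python
def dtw_distance(a, b):
    """Return the DTW alignment cost between two 1-D trajectories.

    A low cost means the two demonstrations trace the same motion
    profile, even if one was performed more slowly than the other.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# A slow and a fast demonstration of the same ramp align with zero
# cost despite their different durations.
slow = [0, 1, 1, 2, 3, 3, 4]
fast = [0, 1, 2, 3, 4]
print(dtw_distance(slow, fast))
```

The alignment path recovered from the same dynamic-programming table indicates which segments of a demonstration were performed with timing flexibility, which is the slack a scheduler could exploit to keep only one robot in a correction-prone phase at a time.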


Periscope: A Robotic Camera System to Support Remote Physical Collaboration

arXiv.org Artificial Intelligence

We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute the control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study where 12 dyads collaboratively worked on assembly tasks. Finally, we discuss design and research implications of our work for future robotic camera systems that facilitate remote collaboration.


Exploring the Design Space of Extra-Linguistic Expression for Robots

arXiv.org Artificial Intelligence

In this paper, we explore the new design space of extra-linguistic cues inspired by graphical tropes used in graphic novels and animation to enhance the expressiveness of social robots. To achieve this, we identified a set of cues that can be used to generate expressions, including smoke/steam/fog, water droplets, and bubbles. We prototyped devices that can generate these fluid expressions for a robot and conducted design sessions where eight designers explored the use and utility of the cues in conveying the robot's internal states in various design scenarios. Our analysis of the 22 designs, the associated design justifications, and the interviews with designers revealed patterns in how each cue was used, how they were combined with nonverbal cues, and where the participants drew their inspiration from. These findings informed the design of an integrated module called EmoPack, which can be used to augment the expressive capabilities of any robot platform.


Knowing Who Knows What: Designing Socially Assistive Robots with Transactive Memory System

arXiv.org Artificial Intelligence

Transactive Memory System (TMS) is a group theory that describes how communication can enable the combination of individual minds into a group. While this theory has been extensively studied in human-human groups, it has not yet been formally applied to socially assistive robot design. We demonstrate how the three-phase TMS group communication process, which involves encoding, storage, and retrieval, can be leveraged to improve decision making in socially assistive robots with multiple stakeholders. By clearly defining how the robot gains information, stores and updates its memory, and retrieves information from its memory, we believe that socially assistive robots can make better decisions and provide more transparency behind their actions in the group context. Bringing communication theory to robot design can provide a clear framework to help robots integrate better into human-human group dynamics and thus improve their acceptance and use.


"My Unconditional Homework Buddy": Exploring Children's Preferences for a Homework Companion Robot

arXiv.org Artificial Intelligence

We aim to design robotic educational support systems that can promote socially and intellectually meaningful learning experiences for students while they complete school work outside of class. To pursue this goal, we conducted participatory design studies with 10 children (aged 10--12) to explore their design needs for robot-assisted homework. We investigated children's current ways of doing homework and the type of support they receive while doing homework, and co-designed the speech and expressiveness of a homework companion robot. Children and parents attending our design sessions explained that an emotionally expressive social robot as a homework aid can support students' motivation and engagement, as well as their affective state. Children primarily perceived the robot either as a dedicated assistant at home, capable of forming meaningful friendships, or as a shared classroom learning resource. We present key design recommendations to support students' homework experiences with a learning companion robot.


Family Theories in Child-Robot Interactions: Understanding Families as a Whole for Child-Robot Interaction Design

arXiv.org Artificial Intelligence

In this work, we discuss a theoretically motivated family-centered design approach for child-robot interactions, adapted from Family Systems Theory (FST) and the Family Ecological Model (FEM). Long-term engagement and acceptance of robots in the home is influenced by factors that surround the child and the family, such as child-sibling-parent relationships and family routines, rituals, and values. A family-centered approach to interaction design is essential when developing in-home technology for children, especially for social agents like robots with which they can form connections and relationships. We review related literature in family theories and connect it with child-robot interaction and child-computer interaction research. We present two case studies that exemplify how family theories, FST and FEM, can inform the integration of robots into homes, particularly research into child-robot and family-robot interaction. Finally, we pose five overarching recommendations for a family-centered design approach in child-robot interactions.


Designing Parent-child-robot Interactions to Facilitate In-Home Parental Math Talk with Young Children

arXiv.org Artificial Intelligence

Parent-child interaction is critical for child development, yet parents may need guidance in some aspects of their engagement with their children. Current research on educational math robots focuses on child-robot interactions but falls short of including the parents and integrating the critical role they play in children's learning. We explore how educational robots can be designed to facilitate parent-child conversations, focusing on math talk, a predictor of later math ability in children. We prototyped capabilities for a social robot to support math talk via reading and play activities and conducted an exploratory Wizard-of-Oz in-home study for parent-child interactions facilitated by a robot. Our findings yield insights into how parents were inspired by the robot's prompts, their desired interaction styles and methods for the robot, and how they wanted to include the robot in the activities, leading to guidelines for the design of parent-child-robot interaction in educational contexts.


Exploring the Use of Collaborative Robots in Cinematography

arXiv.org Artificial Intelligence

Robotic technology can support the creation of new tools that improve the creative process of cinematography. It is crucial to consider the specific requirements and perspectives of industry professionals when designing and developing these tools. In this paper, we present the results from exploratory interviews with three cinematography practitioners, which included a demonstration of a prototype robotic system. We identified many factors that can impact the design, adoption, and use of robotic support for cinematography, including: (1) the ability to meet requirements for cost, quality, mobility, creativity, and reliability; (2) the compatibility and integration of tools with existing workflows, equipment, and software; and (3) the potential for new creative opportunities that robotic technology can open up. Our findings provide a starting point for future co-design projects that aim to support the work of cinematographers with collaborative robots.


Situated Participatory Design: A Method for In Situ Design of Robotic Interaction with Older Adults

arXiv.org Artificial Intelligence

We present a participatory design method to design human-robot interactions with older adults and its application through a case study of designing an assistive robot for a senior living facility. The method, called Situated Participatory Design (sPD), was designed considering the challenges of working with older adults and involves three phases that enable designing and testing use scenarios through realistic, iterative interactions with the robot. In design sessions with nine residents and three caregivers, we uncovered a number of insights about sPD that help us understand its benefits and limitations. For example, we observed how designs evolved through iterative interactions and how early exposure to the robot helped participants consider using the robot in their daily life. With sPD, we aim to help future researchers to increase and deepen the participation of older adults in designing assistive technologies.


Sketching Robot Programs On the Fly

arXiv.org Artificial Intelligence

Service robots for personal use in the home and the workplace require end-user development solutions for swiftly scripting robot tasks as the need arises. Many existing solutions preserve ease, efficiency, and convenience through simple programming interfaces or by restricting task complexity. Others facilitate meticulous task design but often do so at the expense of simplicity and efficiency. There is a need for robot programming solutions that reconcile the complexity of robotics with the on-the-fly goals of end-user development. In response to this need, we present a novel, multimodal, and on-the-fly development system, Tabula. Inspired by a formative design study with a prototype, Tabula leverages a combination of spoken language for specifying the core of a robot task and sketching for contextualizing the core. The result is that developers can script partial, sloppy versions of robot programs to be completed and refined by a program synthesizer. Lastly, we demonstrate our anticipated use cases of Tabula via a set of application scenarios.