Robots


Affective Personalization of a Social Robot Tutor for Children’s Second Language Skills

AAAI Conferences

Though substantial research has been dedicated to using technology to improve education, no current methods are as effective as one-on-one tutoring. A critical, though relatively understudied, aspect of effective tutoring is modulating the student's affective state throughout the tutoring session in order to maximize long-term learning gains. We developed an integrated experimental paradigm in which children play a second-language learning game on a tablet, in collaboration with a fully autonomous social robotic learning companion. As part of the system, we measured children's valence and engagement via an automatic facial expression analysis system. These signals were combined into a reward signal that fed into the robot's affective reinforcement learning algorithm. Over several sessions, the robot played the game and personalized its motivational strategies (using verbal and non-verbal actions) to each student. We evaluated this system with 34 children in preschool classrooms for a duration of two months. We saw that (1) children learned new words from the repeated tutoring sessions, (2) the affective policy personalized to students over the duration of the study, and (3) students who interacted with a robot that personalized its affective feedback strategy showed a significant increase in valence, as compared to students who interacted with a non-personalizing robot. This integrated system of tablet-based educational content, affective sensing, affective policy learning, and an autonomous social robot holds great promise for a more comprehensive approach to personalized tutoring.
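
The abstract leaves the learning algorithm's details out, so the following is only a minimal sketch of the general idea in Python: a per-student bandit-style learner over motivational actions, with a reward built from the sensed valence and engagement. The action names, weights, and learning rates are hypothetical, not the system's actual values.

    import random

    # Hypothetical motivational actions; the real action set is not given above.
    ACTIONS = ["verbal_praise", "encouraging_gesture", "celebratory_sound", "calm_prompt"]

    def affective_reward(valence, engagement, w_val=0.5, w_eng=0.5):
        """Combine facial-analysis signals (assumed to lie in [-1, 1]) into one reward."""
        return w_val * valence + w_eng * engagement

    class AffectivePolicy:
        """Per-student action-value learner over motivational strategies."""
        def __init__(self, actions, epsilon=0.1, alpha=0.2):
            self.q = {a: 0.0 for a in actions}
            self.epsilon, self.alpha = epsilon, alpha

        def choose(self):
            if random.random() < self.epsilon:   # explore occasionally
                return random.choice(list(self.q))
            return max(self.q, key=self.q.get)   # otherwise exploit the best action

        def update(self, action, reward):
            self.q[action] += self.alpha * (reward - self.q[action])

    # One interaction step: act, sense the child's reaction, update the policy.
    policy = AffectivePolicy(ACTIONS)
    a = policy.choose()
    policy.update(a, affective_reward(valence=0.4, engagement=0.7))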


Towards Affect-Awareness for Social Robots

AAAI Conferences

Recent research has demonstrated that emotion plays a key role in human decision making. Across a wide range of disciplines, old concepts, such as the classical "rational actor" model, have given way to more nuanced models (e.g., the frameworks of behavioral economics and emotional intelligence) that acknowledge the role of emotions in analyzing human actions. We now know that context, framing, and emotional and physiological state can all drastically influence decision making in humans. Emotions serve an essential, though often overlooked, role in our lives, thoughts, and decisions. However, it is not clear how and to what extent emotions should impact the design of artificial agents, such as social robots. In this paper I argue that enabling robots, especially those intended to interact with humans, to sense and model emotions will improve their performance across a wide variety of human-interaction applications. I outline two broad research topics (affective inference and learning from affect) towards which progress can be made to enable "affect-aware" robots and give a few examples of applications in which robots with these capabilities may outperform their non-affective counterparts. By identifying these important problems, both necessary for fully affect-aware social robots, I hope to clarify terminology, assess the current research landscape, and provide goalposts for future research.


Bayesian Active Learning-Based Robot Tutor for Children's Word-Reading Skills

AAAI Conferences

Effective tutoring requires personalization of the interaction to each student. Continuous and efficient assessment of the student's skills is a prerequisite for such personalization. We developed a Bayesian active-learning algorithm that continuously and efficiently assesses a child's word-reading skills and implemented it in a social robot. We then developed an integrated experimental paradigm in which a child plays a novel story-creation tablet game with the robot. The robot is portrayed as a younger peer who wishes to learn to read, framing the assessment of the child's word-reading skills as well as empowering the child. We show that our algorithm yields an accurate representation of the child's word-reading skills across a wide age range (4-8 years old) and a wide range of initial reading skills. We also show that child-specific, assessment-based tutoring produces learning gains independent of age and initial reading skill, compared to random tutoring. Finally, our integrated system enables us to show that implementing the same learning algorithm on the robot's reading skills results in knowledge that is comparable to what the child thinks the robot has learned. The child's perception of the robot's knowledge is age-dependent and may facilitate an indirect assessment of the development of theory-of-mind.
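
The abstract does not spell out the underlying model, so the sketch below is one plausible reading: a Rasch (1PL) item model over word difficulty, a discretized posterior over the child's skill, and an active step that picks the word minimizing expected posterior entropy. The skill grid, word difficulties, and parameterization are illustrative assumptions.

    import math

    THETAS = [i / 10 for i in range(-30, 31)]  # discretized skill grid

    def p_correct(theta, difficulty):
        """Rasch item model: chance a child of skill theta reads the word correctly."""
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def update_posterior(posterior, difficulty, correct):
        """Bayes update of the skill posterior after one word-reading attempt."""
        new = []
        for theta, p in zip(THETAS, posterior):
            like = p_correct(theta, difficulty)
            new.append(p * (like if correct else 1.0 - like))
        z = sum(new)
        return [p / z for p in new]

    def entropy(posterior):
        return -sum(p * math.log(p) for p in posterior if p > 0)

    def pick_next_word(posterior, word_difficulties):
        """Active step: present the word with the lowest expected posterior entropy."""
        best, best_h = None, float("inf")
        for word, d in word_difficulties.items():
            pc = sum(p * p_correct(t, d) for t, p in zip(THETAS, posterior))
            h = (pc * entropy(update_posterior(posterior, d, True))
                 + (1 - pc) * entropy(update_posterior(posterior, d, False)))
            if h < best_h:
                best, best_h = word, h
        return best

    # Example: uniform prior over skill, three candidate words of made-up difficulty.
    prior = [1.0 / len(THETAS)] * len(THETAS)
    print(pick_next_word(prior, {"cat": -1.0, "ship": 0.0, "through": 1.5}))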


Exploring Child-Robot Tutoring Interactions with Bayesian Knowledge Tracing

AAAI Conferences

Computer Science researchers have long sought ways to apply the fruits of their labors to education. From the Logo turtles to the latest Cognitive Tutors, the allure of computers that will understand and help humans learn and grow has been a constant thread in Artificial Intelligence research. Now, advances in robotics and our understanding of Human-Robot Interaction make it feasible to develop physically-present robots that are capable of presenting educational material in an engaging manner, adapting online to sensory information from individual students, and building sophisticated, personalized models of a student’s mastery over complex educational domains. In this paper, we discuss how using physical robots as platforms for artificially intelligent tutors enables an expanded space of possible educational interactions. We also describe work in progress to (1) extend previous work in personalized user models for robotic tutoring and (2) further explore the differences between interaction with physical robots and onscreen agents. Specifically, we are examining how embedding a tutoring interaction inside a story, game, or activity with an agent may differentially affect learning gains and engagement in interactions with physical robots and screen-based agents.
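
Bayesian Knowledge Tracing itself is a standard model, so the core per-skill update can be sketched directly; the slip, guess, and transit probabilities below are illustrative placeholders, not values fitted in the work described above.

    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
        """One Bayesian Knowledge Tracing step for a single skill.

        p_know: prior probability the student has mastered the skill.
        correct: whether the observed answer was right.
        Returns the posterior mastery after the observation, plus the
        chance of learning from this practice opportunity.
        """
        if correct:
            evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
            posterior = p_know * (1 - p_slip) / evidence
        else:
            evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
            posterior = p_know * p_slip / evidence
        return posterior + (1 - posterior) * p_transit

    # Example: trace mastery across a sequence of observed answers.
    p = 0.3  # hypothetical initial mastery P(L0)
    for obs in [False, True, True, True]:
        p = bkt_update(p, obs)
        print(round(p, 3))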


Learning to Maintain Engagement: No One Leaves a Sad DragonBot

AAAI Conferences

Engagement is a key factor in every social interaction, be it between humans or between humans and robots. Many studies have aimed at designing robot behavior that sustains human engagement. Infants and children, however, learn how to engage their caregivers to receive more attention. We used a social robot platform, DragonBot, that learned which of its social behaviors retained human engagement. This was achieved by implementing a reinforcement learning algorithm wherein the reward is the proximity and number of people near the robot. The experiment was run at the World Science Festival in New York, where hundreds of people interacted with the robot. After more than two continuous hours of interaction, the robot learned by itself that making a sad face was the most rewarding expression. Further analysis showed that after a sad face, people's engagement rose for thirty seconds. In other words, the robot learned by itself, in two hours, that almost no one leaves a sad DragonBot.
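
The abstract names the reward (proximity and number of nearby people) but not the learner, so the sketch below is one simple way such a system could look: a softmax choice over facial expressions with an incremental value update. The reward shape and constants are assumptions.

    import math, random

    EXPRESSIONS = ["happy", "sad", "surprised", "neutral"]

    def crowd_reward(num_people, mean_distance_m):
        """Hypothetical reward: more people, standing closer, scores higher."""
        return num_people / (1.0 + mean_distance_m)

    def softmax_choice(q, temperature=0.5):
        """Sample an expression in proportion to its learned value."""
        weights = [math.exp(q[e] / temperature) for e in EXPRESSIONS]
        r, acc = random.random() * sum(weights), 0.0
        for e, w in zip(EXPRESSIONS, weights):
            acc += w
            if r <= acc:
                return e
        return EXPRESSIONS[-1]

    q = {e: 0.0 for e in EXPRESSIONS}
    e = softmax_choice(q)
    # After showing the expression, sense the crowd and update its value:
    q[e] += 0.1 * (crowd_reward(num_people=6, mean_distance_m=1.2) - q[e])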


Crowdsourcing Real World Human-Robot Dialog and Teamwork through Online Multiplayer Games

AI Magazine

We present an innovative approach for large-scale data collection in human-robot interaction research through the use of online multiplayer games. By casting a robotic task as a collaborative game, we gather thousands of examples of human-human interactions online, and then leverage this corpus of action and dialog data to create contextually relevant, social, and task-oriented behaviors for human-robot interaction in the real world. We demonstrate our work in a collaborative search-and-retrieval task requiring dialog, action synchronization, and action sequencing between the human and robot partners. A user study performed at the Boston Museum of Science shows that the autonomous robot exhibits many of the same patterns of behavior that were observed in the online dataset, and survey results rate the robot similarly to human partners on several critical measures.


Crowdsourcing HRI through Online Multiplayer Games

AAAI Conferences

The development of hand-crafted action and dialog generation models for a social robot is a time-consuming process that yields a solution only for the relatively narrow range of interactions envisioned by the programmers. In this paper, we propose a data-driven solution for interactive behavior generation that leverages online games as a means of collecting large-scale data corpora for human-robot interaction research. We present a system in which action and dialog models for a collaborative human-robot task are learned based on a reproduction of the task in a two-player online game called Mars Escape.
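
The learned action and dialog models are not detailed in this abstract; as a stand-in, the sketch below shows the simplest data-driven scheme consistent with it: retrieve the logged human response from the most similar recorded game state. The context features and corpus entries are invented for illustration.

    # Hypothetical corpus entries logged from Mars Escape game play:
    # (context features, the human player's response in that context).
    CORPUS = [
        ({"room": "lab", "holding": "none", "goal": "find_key"},
         ("say", "Have you checked the desk drawers?")),
        ({"room": "lab", "holding": "key", "goal": "open_door"},
         ("move_to", "door")),
    ]

    def similarity(ctx_a, ctx_b):
        """Count matching context features (a stand-in for a learned metric)."""
        return sum(1 for k in ctx_a if ctx_b.get(k) == ctx_a[k])

    def select_behavior(current_context):
        """Pick the logged response whose recorded context best matches now."""
        return max(CORPUS, key=lambda entry: similarity(entry[0], current_context))[1]

    print(select_behavior({"room": "lab", "holding": "none", "goal": "find_key"}))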


Dynamic Execution of Temporal Plans for Temporally Fluid Human-Robot Teaming

AAAI Conferences

Introducing robots as teammates in medical, space, and military domains raises interesting and challenging human factors issues that do not necessarily arise in multi-robot coordination. For example, we must consider how to design robots that integrate seamlessly with human group dynamics. An essential quality of a good human partner is her ability to robustly anticipate and adapt to other team members and the environment. Robots should preserve this ability and avoid constraining their human partners’ flexibility to act. This requires that the robot partner be capable of reasoning quickly online and adapting to the humans’ actions in a temporally fluid way. This paper describes recent advances in dynamic plan execution, and argues that these advances provide a potentially powerful framework for explicitly modeling and efficiently reasoning on temporal information for human-robot interaction. We describe an executive named Chaski that enables a robot to coordinate with a human to execute a shared plan under different models of teamwork. We have applied Chaski to demonstrate teamwork using two Barrett Whole Arm Manipulators, and describe our ongoing work to demonstrate temporally fluid human-robot teaming using the Mobile-Dexterous-Social (MDS) robot.
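
Chaski's internals are beyond this abstract, but dynamic execution of temporal plans conventionally rests on simple temporal networks (STNs), so a minimal background sketch is possible: the core consistency check is Floyd-Warshall shortest paths on the STN distance graph, with a negative cycle signaling an infeasible plan. The example constraint values are invented.

    INF = float("inf")

    def stn_consistent(n_events, constraints):
        """constraints: (i, j, lo, hi) meaning lo <= time(j) - time(i) <= hi."""
        d = [[0.0 if i == j else INF for j in range(n_events)] for i in range(n_events)]
        for i, j, lo, hi in constraints:
            d[i][j] = min(d[i][j], hi)   # time(j) - time(i) <= hi
            d[j][i] = min(d[j][i], -lo)  # time(i) - time(j) <= -lo
        for k in range(n_events):
            for i in range(n_events):
                for j in range(n_events):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        # A negative self-distance means a negative cycle: inconsistent plan.
        return all(d[i][i] >= 0 for i in range(n_events))

    # Example: the robot must start 2-4 s after the human finishes her step,
    # and must finish its own step within 5 s of starting.
    print(stn_consistent(3, [(0, 1, 2, 4), (1, 2, 0, 5)]))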


The Second International Conference on Human-Robot Interaction

AI Magazine

The second international conference on Human-Robot Interaction (HRI-2007) was held in Arlington, Virginia, March 9-11, 2007. The theme of the conference was "Robot as Team Member" and included posters and paper presentations on teamwork, social robotics, adaptation, observation and metrics, attention, user experience, and field testing. One hundred seventy-five researchers and practitioners attended the conference, and many more contributed to the conference as authors or reviewers. HRI-2008 will be held in Amsterdam, The Netherlands, March 12-15, 2008.