Massachusetts Institute of Technology
Learning Hybrid Models with Guarded Transitions
Santana, Pedro (Massachusetts Institute of Technology) | Lane, Spencer (Massachusetts Institute of Technology) | Timmons, Eric (Massachusetts Institute of Technology) | Williams, Brian (Massachusetts Institute of Technology) | Forster, Carlos (Instituto Tecnológico de Aeronáutica)
Innovative methods have been developed for diagnosis, activity monitoring, and state estimation that achieve high accuracy through the use of stochastic models involving hybrid discrete and continuous behaviors. A key bottleneck is the automated acquisition of these hybrid models, and recent methods have focused predominantly on Jump Markov processes and piecewise autoregressive models. In this paper, we present a novel algorithm capable of performing unsupervised learning of guarded Probabilistic Hybrid Automata (PHA) models, which extends prior work by allowing stochastic discrete mode transitions in a hybrid system to have a functional dependence on its continuous state. Our experiments indicate that guarded PHA models can yield significant performance improvements when used by hybrid state estimators, particularly when diagnosing the true discrete mode of the system, without any noticeable impact on their real-time performance.
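The abstract's central idea, discrete mode transitions whose probability depends functionally on the continuous state, can be made concrete with a small sketch. Everything below (the logistic guard, the two-mode dynamics, all parameter values) is an illustrative assumption, not the paper's model:

```python
import math
import random

def guard_prob(x, threshold=1.0, sharpness=5.0):
    """Hypothetical logistic guard: the probability of a stochastic mode
    switch rises as the continuous state x crosses the threshold."""
    return 1.0 / (1.0 + math.exp(-sharpness * (x - threshold)))

def step(mode, x):
    """One simulated step of a two-mode hybrid system with a guarded
    stochastic transition (illustrative dynamics only)."""
    if random.random() < guard_prob(x):
        mode = 1 - mode                    # mode switch gated by x
    dx = 0.1 if mode == 0 else -0.2       # mode-dependent continuous dynamics
    return mode, x + dx
```

In an unguarded PHA, the transition probability would be a constant per mode; the guard replaces that constant with a function of the continuous state.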
A Theoretical Analysis of Optimization by Gaussian Continuation
Mobahi, Hossein (Massachusetts Institute of Technology) | III, John W. Fisher (Massachusetts Institute of Technology)
Optimization via the continuation method is a widely used approach for solving nonconvex minimization problems. While this method generally does not provide a global minimum, empirically it often achieves a superior local minimum compared to alternative approaches such as gradient descent. However, theoretical analysis of this method is largely unavailable. Here, we derive a bound on the endpoint solution of the continuation method. The bound depends on a problem-specific characteristic that we refer to as optimization complexity. We show that this characteristic can be computed analytically when the objective function is expressed in a suitable set of basis functions. Our analysis combines elements of scale-space theory, regularization, and differential equations.
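To make the continuation idea concrete, here is a small sketch on a toy objective: minimize a sequence of ever-less-smoothed versions of the function, warm-starting each stage from the previous one. The objective, its closed-form Gaussian smoothing, and the schedule of smoothing widths are all illustrative choices, not taken from the paper:

```python
import math

def f(x):
    # illustrative nonconvex objective (not from the paper)
    return math.sin(5 * x) + 0.1 * x * x

def f_grad_smoothed(x, sigma):
    # gradient of the Gaussian-smoothed objective, in closed form:
    # E[sin(5(x + s*z))] = exp(-12.5 s^2) sin(5x),  E[0.1 (x + s*z)^2] = 0.1 (x^2 + s^2)
    return 5 * math.exp(-12.5 * sigma * sigma) * math.cos(5 * x) + 0.2 * x

def continuation_minimize(x0, sigmas=(1.0, 0.6, 0.3, 0.1, 0.0),
                          lr=0.02, steps=500):
    """Gaussian continuation: gradient descent on progressively
    sharper smoothings, each stage warm-started from the last."""
    x = x0
    for sigma in sigmas:
        for _ in range(steps):
            x -= lr * f_grad_smoothed(x, sigma)
    return x
```

At the largest sigma the oscillatory term is smoothed away and only the convex quadratic remains, which steers the iterate toward the global basin before the wiggles reappear; plain gradient descent from the same start would stop in whichever local minimum is nearest.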
Bayesian Active Learning-Based Robot Tutor for Children's Word-Reading Skills
Gordon, Goren (Massachusetts Institute of Technology) | Breazeal, Cynthia (Massachusetts Institute of Technology)
Effective tutoring requires personalization of the interaction to each student. Continuous and efficient assessment of the student's skills is a prerequisite for such personalization. We developed a Bayesian active-learning algorithm that continuously and efficiently assesses a child's word-reading skills and implemented it in a social robot. We then developed an integrated experimental paradigm in which a child plays a novel story-creation tablet game with the robot. The robot is portrayed as a younger peer who wishes to learn to read, framing the assessment of the child's word-reading skills as well as empowering the child. We show that our algorithm results in an accurate representation of the child's word-reading skills across a large age range (4- to 8-year-old children) and a wide range of initial reading skills. We also show that child-specific, assessment-based tutoring results in learning that is independent of age and initial reading skill, compared to random tutoring. Finally, our integrated system enables us to show that implementing the same learning algorithm on the robot's reading skills results in knowledge that is comparable to what the child thinks the robot has learned. The child's perception of the robot's knowledge is age-dependent and may facilitate an indirect assessment of the development of theory-of-mind.
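A minimal sketch of the kind of continuous, efficient skill assessment the abstract describes could maintain one Beta posterior per word and probe the word whose outcome is currently most uncertain. The details below (Beta priors, entropy-based selection) are assumed for illustration, not taken from the paper:

```python
import math

class WordSkillAssessor:
    """Illustrative sketch: one Beta(alpha, beta) belief per word,
    updated from the child's correct/incorrect reading attempts."""

    def __init__(self, words):
        self.beliefs = {w: [1.0, 1.0] for w in words}  # uniform Beta priors

    def update(self, word, correct):
        a, b = self.beliefs[word]
        self.beliefs[word] = [a + 1, b] if correct else [a, b + 1]

    def p_correct(self, word):
        a, b = self.beliefs[word]
        return a / (a + b)

    def next_word(self):
        # active learning: probe the word with maximal predictive entropy,
        # i.e. the one we are least sure the child can read
        def entropy(w):
            p = self.p_correct(w)
            return -p * math.log(p) - (1 - p) * math.log(1 - p)
        return max(self.beliefs, key=entropy)
```

Each probe is the most informative one available, which is what makes the assessment efficient relative to asking words at random.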
Friendly Artificial Intelligence: The Physics Challenge
Tegmark, Max (Massachusetts Institute of Technology)
Relentless progress in artificial intelligence (AI) is increasingly raising concerns that machines will replace humans on the job market, and perhaps altogether. Eliezer Yudkowsky and others have explored the possibility that a promising future for humankind could be guaranteed by a superintelligent "Friendly AI," designed to safeguard humanity and its values. I will argue that, from a physics perspective where everything is simply an arrangement of elementary particles, this might be even harder than it appears. Indeed, it may require thinking rigorously about the meaning of life: What is "meaning" in a particle arrangement? What is "life"? What is the ultimate ethical imperative, i.e., how should we strive to rearrange the particles of our Universe and shape its future? If we fail to answer the last question rigorously, this future is unlikely to contain humans.
Power to the People: The Role of Humans in Interactive Machine Learning
Amershi, Saleema (Microsoft Research) | Cakmak, Maya (University of Washington) | Knox, William Bradley (Massachusetts Institute of Technology) | Kulesza, Todd (Oregon State University)
Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward.
Learning Human Types from Demonstration
Nikolaidis, Stefanos (Massachusetts Institute of Technology) | Gu, Keren (Massachusetts Institute of Technology) | Ramakrishnan, Ramya (Massachusetts Institute of Technology) | Shah, Julie (Massachusetts Institute of Technology)
The development of new industrial robotic systems that operate in the same physical space as people highlights the emerging need for robots that can integrate seamlessly into human group dynamics by adapting to the personalized style of human teammates. This adaptation requires learning a statistical model of human behavior and integrating this model into the decision-making algorithm of the robot in a principled way. We present a framework for automatically learning human user models from joint-action demonstrations. Prior research on POMDP formulations for collaborative tasks in game AI applications (Nguyen et al. 2011; Macindoe, Kaelbling, and Lozano-Pérez 2012; Silver and Veness 2010) has typically assumed a known human model. Additionally, previous partially observable formalisms (Ong et al. 2010; Bandyopadhyay et al. 2013; Broz, Nourbakhsh, and Simmons 2011; Fern and Tadepalli 2010; Nguyen et al. 2011; Macindoe, Kaelbling, and Lozano-Pérez 2012) in assistive or collaborative tasks represented the preference or intention of the human for their own actions, rather than those of the robot, as the partially observable variable.
Information Theoretic Question Asking to Improve Spatial Semantic Representations
Hemachandra, Sachithra (Massachusetts Institute of Technology) | Walter, Matthew R. (Massachusetts Institute of Technology) | Teller, Seth (Massachusetts Institute of Technology)
In this paper, we propose an algorithm that enables robots to improve their spatial-semantic representation of the environment by engaging users in dialog. The algorithm aims to reduce the entropy in maps formulated based upon user-provided natural language descriptions (e.g., "The kitchen is down the hallway"). The robot's available information-gathering actions take the form of targeted questions intended to reduce the entropy over the grounding of the user's descriptions. These questions include those that query the robot's local surround (e.g., "Are we in the kitchen?") as well as areas distant from the robot (e.g., "Is the lab near the kitchen?"). Our algorithm treats dialog as an optimization problem that seeks to balance the information-theoretic value of candidate questions with a measure of cost associated with dialog. In this manner, the method determines the best questions to ask based upon expected entropy reduction while accounting for the burden on the user. We evaluate the entropy reduction based upon a joint distribution over a hybrid metric, topological, and semantic representation of the environment learned from user-provided descriptions and the robot's sensor data. We demonstrate that, by asking deliberate questions of the user, the method results in significant improvements in the accuracy of the resulting map.
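The question-selection criterion described above (expected entropy reduction balanced against dialog cost) can be sketched as follows. The binary yes/no answer model, the flat cost constant, and the toy distributions are illustrative assumptions, not the paper's full hybrid map representation:

```python
import math

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def expected_entropy_after(prior, question):
    """Expected posterior entropy over groundings if `question` is asked.
    `question[g]` gives P(answer = yes | grounding g) -- illustrative model."""
    p_yes = sum(prior[g] * question[g] for g in prior)
    h = 0.0
    for answer_prob, lik in ((p_yes, question),
                             (1 - p_yes, {g: 1 - question[g] for g in question})):
        if answer_prob > 0:
            post = {g: prior[g] * lik[g] / answer_prob for g in prior}
            h += answer_prob * entropy(post)
    return h

def best_question(prior, questions, cost=0.1):
    # balance expected information gain against a per-question dialog cost;
    # ask nothing if no question is worth the burden on the user
    def value(name):
        return entropy(prior) - expected_entropy_after(prior, questions[name]) - cost
    best = max(questions, key=value)
    return best if value(best) > 0 else None
```

Returning `None` when every candidate's expected gain falls below the cost is one way to model "accounting for the burden on the user": the robot stays silent rather than asking low-value questions.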
Exploring Child-Robot Tutoring Interactions with Bayesian Knowledge Tracing
Spaulding, Samuel (Massachusetts Institute of Technology) | Breazeal, Cynthia (Massachusetts Institute of Technology)
Computer Science researchers have long sought ways to apply the fruits of their labors to education. From the Logo turtles to the latest Cognitive Tutors, the allure of computers that will understand and help humans learn and grow has been a constant thread in Artificial Intelligence research. Now, advances in robotics and our understanding of Human-Robot Interaction make it feasible to develop physically-present robots that are capable of presenting educational material in an engaging manner, adapting online to sensory information from individual students, and building sophisticated, personalized models of a student's mastery over complex educational domains. In this paper, we discuss how using physical robots as platforms for artificially intelligent tutors enables an expanded space of possible educational interactions. We also describe work in progress to (1) extend previous work in personalized user models for robotic tutoring and (2) further explore the differences between interaction with physical robots and onscreen agents. Specifically, we are examining how embedding a tutoring interaction inside a story, game, or activity with an agent may differentially affect learning gains and engagement in interactions with physical robots and screen-based agents.
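Bayesian Knowledge Tracing, named in the title, maintains a probability that a student has mastered a skill and updates it after each observed answer: a Bayesian posterior step given the response, followed by a learning-transition step. A standard single-skill update looks like the following (the parameter values here are illustrative, not from the paper):

```python
def bkt_update(p_know, correct, p_learn=0.2, p_slip=0.1, p_guess=0.2):
    """One step of standard Bayesian Knowledge Tracing.
    p_slip:  P(wrong answer | skill mastered)
    p_guess: P(right answer | skill not mastered)
    p_learn: P(mastering the skill between opportunities)"""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den                       # Bayes step on the observation
    return posterior + (1 - posterior) * p_learn  # learning-transition step
```

A tutor, robotic or on-screen, can then select the next exercise based on which skills' mastery estimates remain low or uncertain.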
Learning to Maintain Engagement: No One Leaves a Sad DragonBot
Gordon, Goren (Massachusetts Institute of Technology) | Breazeal, Cynthia (Massachusetts Institute of Technology)
Engagement is a key factor in every social interaction, be it between humans or between humans and robots. Many studies have aimed to design robot behavior that sustains human engagement. Infants and children, however, learn how to engage their caregivers to receive more attention. We used a social robot platform, DragonBot, that learned which of its social behaviors retained human engagement. This was achieved by implementing a reinforcement learning algorithm, wherein the reward is the proximity and number of people near the robot. The experiment was run at the World Science Festival in New York, where hundreds of people interacted with the robot. After more than two continuous hours of interaction, the robot learned by itself that making a sad face was the most rewarding expression. Further analysis showed that after a sad face, people's engagement rose for thirty seconds. In other words, the robot learned by itself in two hours that almost no one leaves a sad DragonBot.
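The learning scheme described, rewarding expressions by the people they keep nearby, is consistent with a simple bandit formulation. The sketch below assumes an epsilon-greedy bandit over expressions, which is our illustrative choice, not necessarily the paper's exact algorithm:

```python
import random

class EngagementBandit:
    """Illustrative sketch: an epsilon-greedy bandit over facial
    expressions, rewarded by how many people stay near the robot
    after each expression is shown."""

    def __init__(self, expressions, epsilon=0.1):
        self.q = {e: 0.0 for e in expressions}   # running reward estimates
        self.n = {e: 0 for e in expressions}     # times each expression was tried
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit the best so far

    def observe(self, expression, people_nearby):
        # incremental mean update of the expression's expected engagement
        self.n[expression] += 1
        self.q[expression] += (people_nearby - self.q[expression]) / self.n[expression]
```

Under this framing, "learning that no one leaves a sad DragonBot" is just the sad-face arm accumulating the highest estimated reward over two hours of interaction.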