Goto

Collaborating Authors

 Massachusetts Institute of Technology


Planning to Perceive: Exploiting Mobility for Robust Object Detection

AAAI Conferences

Consider the task of a mobile robot autonomously navigating through an environment while detecting and mapping objects of interest using a noisy object detector. The robot must reach its destination in a timely manner, but is rewarded for correctly detecting recognizable objects to be added to the map, and penalized for false alarms. However, detector performance typically varies with vantage point, so the robot benefits from planning trajectories which maximize the efficacy of the recognition system. This work describes an online, any-time planning framework enabling the active exploration of possible detections provided by an off-the-shelf object detector. We present a probabilistic approach where vantage points are identified which provide a more informative view of a potential object. The agent then weighs the benefit of increasing its confidence against the cost of taking a detour to reach each identified vantage point. The system is demonstrated to significantly improve detection and trajectory length in both simulated and real robot experiments.
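As a rough illustration of the trade-off described in this abstract, the sketch below scores candidate vantage points by weighing an expected gain in detection confidence against the detour cost of reaching them. The data class, the linear utility, and the numbers are illustrative assumptions, not the paper's planner.

```python
# Minimal sketch (not the paper's implementation) of the trade-off described
# above: weigh the expected gain in detection confidence at each candidate
# vantage point against the detour cost of reaching it. All names and the
# linear utility form are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vantage:
    name: str
    expected_confidence_gain: float  # expected increase in detection confidence
    detour_cost: float               # extra path length needed to visit this vantage point

def best_vantage(candidates, cost_weight=1.0):
    """Return the vantage point with the highest net utility, or None if
    every detour is expected to cost more than it gains."""
    scored = [(v.expected_confidence_gain - cost_weight * v.detour_cost, v)
              for v in candidates]
    utility, vantage = max(scored, key=lambda s: s[0])
    return vantage if utility > 0 else None

if __name__ == "__main__":
    candidates = [
        Vantage("closer_view", expected_confidence_gain=0.30, detour_cost=0.10),
        Vantage("side_view", expected_confidence_gain=0.15, detour_cost=0.40),
    ]
    print(best_vantage(candidates))  # -> closer_view wins under these numbers
```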


Reports of the AAAI 2010 Fall Symposia

AI Magazine

The Association for the Advancement of Artificial Intelligence was pleased to present the 2010 Fall Symposium Series, held Thursday through Saturday, November 11-13, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the eight symposia are as follows: (1) Cognitive and Metacognitive Educational Systems; (2) Commonsense Knowledge; (3) Complex Adaptive Systems: Resilience, Robustness, and Evolvability; (4) Computational Models of Narrative; (5) Dialog with Robots; (6) Manifold Learning and Its Applications; (7) Proactive Assistant Agents; and (8) Quantum Informatics for Cognitive, Social, and Semantic Processes. The highlights of each symposium are presented in this report.


Reinforcement Learning with Human Feedback in Mountain Car

AAAI Conferences

As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users without programming skills can transfer their task knowledge to the agents, learning rates can increase dramatically, reducing costly trials. The TAMER framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. Whereas early work on TAMER assumed that the agent's only feedback was from the human teacher, this paper considers the scenario of an agent within a Markov decision process (MDP), receiving and simultaneously learning from both MDP reward and human reinforcement signals. Preserving MDP reward as the determinant of optimal behavior, we test two methods of combining human reinforcement and MDP reward and analyze their respective performances. Both methods create a predictive model, H-hat, of human reinforcement and use that model in different ways to augment a reinforcement learning (RL) algorithm. We additionally introduce a technique for appropriately determining the magnitude of the model's influence on the RL algorithm throughout time and the state space.
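The following sketch illustrates, in a toy tabular setting, the general shape of the approach the abstract describes: a standard Q-learning update driven by MDP reward, with action selection biased by a learned predictor H-hat of human reinforcement whose influence decays over time. The combination rule and decay schedule are illustrative assumptions rather than either of the paper's two methods.

```python
# A minimal sketch, assuming a tabular setting, of augmenting RL with a learned
# predictor H_hat of human reinforcement. The specific combination rule and the
# decay schedule below are illustrative, not the paper's methods.
from collections import defaultdict

Q = defaultdict(float)       # MDP action values
H_hat = defaultdict(float)   # learned model of human reinforcement (updated elsewhere)

def select_action(state, actions, beta):
    """Greedy action over Q augmented by beta-weighted predicted human feedback."""
    return max(actions, key=lambda a: Q[(state, a)] + beta * H_hat[(state, a)])

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.99):
    """Standard Q-learning update; MDP reward remains the determinant of optimality."""
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])

def influence(t, beta0=1.0, decay=0.999):
    """One simple schedule for the human model's influence: exponential decay in time."""
    return beta0 * (decay ** t)

if __name__ == "__main__":
    actions = ["left", "right"]
    H_hat[("s0", "right")] = 1.0   # pretend the teacher approved "right" in state s0
    print(select_action("s0", actions, beta=influence(t=0)))  # -> "right"
```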


Propagating Uncertainty in Solar Panel Performance for Life Cycle Modeling in Early Stage Design

AAAI Conferences

One of the challenges in accurately applying metrics for life cycle assessment lies in accounting for both irreducible and inherent uncertainties in how a design will perform under real world conditions. This paper presents a preliminary study that compares two strategies, one simulation-based and one set-based, for propagating uncertainty in a system. These strategies for uncertainty propagation are then aggregated. This work is conducted in the context of an amorphous photovoltaic (PV) panel, using data gathered from the National Solar Radiation Database, as well as realistic data collected from an experimental hardware setup specifically for this study. Results show that the influence of various sources of uncertainty can vary widely, and in particular that solar radiation intensity is a more significant source of uncertainty than the efficiency of a PV panel. This work also shows both set-based and simulation-based approaches have limitations and must be applied thoughtfully to prevent unrealistic results. Finally, it was found that aggregation of the two uncertainty propagation methods provided faster results than either method alone.
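The sketch below contrasts the two propagation strategies the abstract compares, simulation-based (Monte Carlo sampling) and set-based (interval bounds), on a deliberately simple panel-power model. The formula P = irradiance x area x efficiency and the numeric ranges are assumptions for illustration, not the study's data.

```python
# A minimal sketch contrasting simulation-based (Monte Carlo) and set-based
# (interval) uncertainty propagation. The power model and numeric ranges are
# illustrative assumptions, not the study's data.
import random

def panel_power(irradiance, efficiency, area=1.0):
    return irradiance * area * efficiency

def monte_carlo(n=10000):
    """Simulation-based: sample uncertain inputs and look at the output spread."""
    samples = [panel_power(random.uniform(400, 1000),   # W/m^2, highly variable
                           random.uniform(0.06, 0.08))  # amorphous-Si efficiency, narrower range
               for _ in range(n)]
    return min(samples), sum(samples) / n, max(samples)

def interval(irr=(400, 1000), eff=(0.06, 0.08), area=1.0):
    """Set-based: propagate worst-case bounds (valid here because the model is monotone)."""
    return panel_power(irr[0], eff[0], area), panel_power(irr[1], eff[1], area)

if __name__ == "__main__":
    print("Monte Carlo (min, mean, max):", monte_carlo())
    print("Interval bounds (low, high):", interval())
```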


Reports of the AAAI 2010 Conference Workshops

AI Magazine

The AAAI-10 Workshop program was held Sunday and Monday, July 11–12, 2010 at the Westin Peachtree Plaza in Atlanta, Georgia. The AAAI-10 workshop program included 13 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were AI and Fun, Bridging the Gap between Task and Motion Planning, Collaboratively-Built Knowledge Sources and Artificial Intelligence, Goal-Directed Autonomy, Intelligent Security, Interactive Decision Theory and Game Theory, Metacognition for Robust Social Systems, Model Checking and Artificial Intelligence, Neural-Symbolic Learning and Reasoning, Plan, Activity, and Intent Recognition, Statistical Relational AI, Visual Representations and Reasoning, and Abstraction, Reformulation, and Approximation. This article presents short summaries of those events.


Automated Color Selection Using Semantic Knowledge

AAAI Conferences

Colorizer is a program that hypothesizes color values that represent a given word or sentence, taking into account both physical descriptions of objects and their emotional connotations. This new application of common sense reasoning uses background knowledge about the world to build a model of the connections between everyday things, and uses this model to guess an appropriate color for a word. Colorizer can run over either static text or real time input, such as a speech recognition stream. It has applications in games, the arts, and webpage design.
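A minimal sketch of the kind of lookup the abstract describes follows: walk commonsense associations from a word until a known color concept is reached and return its RGB value. The toy graph, color table, and breadth-first strategy are illustrative assumptions, not Colorizer's actual knowledge base or model.

```python
# A minimal sketch, assuming a toy hand-written association graph, of walking
# commonsense connections from a word to a color concept. The graph, color
# table, and breadth-first strategy are illustrative, not Colorizer's model.
from collections import deque

ASSOCIATIONS = {          # toy stand-in for a commonsense knowledge base
    "ocean": ["water", "blue"],
    "water": ["blue"],
    "anger": ["fire"],
    "fire": ["red"],
}
COLORS = {"blue": (0, 0, 255), "red": (255, 0, 0)}

def guess_color(word):
    """Breadth-first search from the word to the nearest known color concept."""
    queue, seen = deque([word.lower()]), set()
    while queue:
        concept = queue.popleft()
        if concept in COLORS:
            return COLORS[concept]
        if concept not in seen:
            seen.add(concept)
            queue.extend(ASSOCIATIONS.get(concept, []))
    return None

if __name__ == "__main__":
    print(guess_color("ocean"))  # (0, 0, 255)
    print(guess_color("anger"))  # (255, 0, 0)
```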


Cross-Domain Scruffy Inference

AAAI Conferences

Reasoning about commonsense knowledge poses many problems that traditional logical inference doesn't handle well. Among these is cross-domain inference: how to draw on multiple independently produced knowledge bases. Since knowledge bases may not have the same vocabulary, level of detail, or accuracy, such inference should be "scruffy." The AnalogySpace technique showed that a factored inference approach is useful for approximate reasoning over noisy knowledge bases like ConceptNet. A straightforward extension of factored inference to multiple datasets, called Blending, has seen productive use for commonsense reasoning. We show that Blending is a kind of Collective Matrix Factorization (CMF): the factorization spreads out the prediction loss between each dataset. We then show that blending additional data causes the singular vectors to rotate between the two domains, which enables cross-domain inference. We show, in a simplified example, that the maximum interaction occurs when the magnitudes (as defined by the largest singular values) of the two matrices are equal, confirming previous empirical conclusions. Finally, we describe and mathematically justify Bridge Blending, which facilitates inference between datasets by specifically adding knowledge that "bridges" between the two, in terms of CMF.
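The sketch below shows, on toy matrices, the basic mechanics of blending: two knowledge matrices that share a row space (concepts) are scaled, concatenated, and factored together so the resulting singular vectors mix information from both sources. Normalizing each matrix by its largest singular value is one simple reading of the "equal magnitudes" condition and is an assumption here, not the paper's exact procedure.

```python
# A minimal sketch of blending two matrices with shared rows (concepts) and
# distinct feature columns, then factoring the blended matrix. The toy data and
# the normalization-by-largest-singular-value step are illustrative assumptions.
import numpy as np

def blend(A, B, alpha=0.5):
    """Scale, concatenate, and factor two matrices that share a row space."""
    scale_a = np.linalg.norm(A, 2)   # largest singular value of A
    scale_b = np.linalg.norm(B, 2)   # largest singular value of B
    stacked = np.hstack([alpha * A / scale_a, (1 - alpha) * B / scale_b])
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    return U, s, Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((5, 3))   # e.g., concepts x features from one knowledge base
    B = rng.random((5, 4))   # e.g., the same concepts x features from another source
    U, s, Vt = blend(A, B)
    print("Singular values of the blended space:", np.round(s, 3))
```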


A Discriminative Model for Understanding Natural Language Route Directions

AAAI Conferences

To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. However, determining the correct sequence of actions in response to a set of spoken instructions is a complex decision-making problem. There is a "semantic gap" between the high-level symbolic models of the world that people use, and the low-level models of geometry, state dynamics, and perceptions that robots use. In this paper, we show how this gap can be bridged by inferring the best sequence of actions from a linguistic description and environmental features. This work improves upon previous work in three ways. First, by using a conditional random field (CRF), we learn the relative weight of environmental and linguistic features, enabling the system to learn the meanings of words and reducing the modeling effort in learning how to follow commands. Second, a number of long-range features are added, which help the system to use additional structure in the problem. Finally, given a natural language command, we infer both the referred path and landmark directly, thereby requiring the algorithm to pick a landmark by which it should navigate. The CRF is demonstrated to have 15% error on a held-out dataset, when compared with 39% error for a Markov random field (MRF). Finally, by analyzing the additional annotations necessary for this work, we find that natural language route directions map sequentially onto the corresponding path and landmarks 99.6% of the time. In addition, the size of the referred landmark varies from 0 m² to 1964 m², and the length of the referred path varies from 0 m to 40.83 m.
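As a rough illustration of the inference step, the sketch below scores candidate (path, landmark) pairs with a weighted sum of linguistic and environmental features and returns the argmax. The feature functions, hand-set weights, and candidates are illustrative assumptions; in the paper, the weights are learned by the CRF rather than fixed by hand.

```python
# A minimal sketch of scoring candidate (path, landmark) pairs with a weighted
# sum of linguistic and environmental features. The features, weights, and
# candidates are illustrative stand-ins; the paper learns the weights with a CRF.

def features(command, candidate):
    """Toy features: does the command mention the landmark, and how long is the path?"""
    return {
        "mentions_landmark": 1.0 if candidate["landmark"] in command.lower() else 0.0,
        "path_length_m": candidate["path_length_m"],
    }

WEIGHTS = {"mentions_landmark": 2.0, "path_length_m": -0.05}  # assumed, not learned

def score(command, candidate):
    f = features(command, candidate)
    return sum(WEIGHTS[k] * v for k, v in f.items())

def follow(command, candidates):
    """Return the highest-scoring candidate path/landmark pair."""
    return max(candidates, key=lambda c: score(command, c))

if __name__ == "__main__":
    candidates = [
        {"landmark": "elevator", "path_length_m": 12.0},
        {"landmark": "kitchen", "path_length_m": 8.0},
    ]
    print(follow("go past the elevator and stop", candidates))  # picks the elevator path
```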