MIT Media Lab
Visualizing Inference
Lieberman, Henry (MIT Media Lab) | Henke, Joe (MIT Media Lab)
Graphical visualization has demonstrated enormous power in helping people to understand complexity in many branches of science. But, curiously, AI has been slow to pick up on the power of visualization. Alar is a visualization system intended to help people understand and control symbolic inference. Alar presents dynamically controllable node-and-arc graphs of concepts, and of assertions both supplied to the system and inferred. Alar is useful in quality assurance of knowledge bases (finding false, vague, or misleading statements, or missing assertions). It is also useful in tuning parameters of inference, especially how "liberal vs. conservative" the inference is (trading off the desire to maximize the power of inference versus the risk of making incorrect inferences). We present a typical scenario of using Alar to debug a knowledge base.
[Figure 1. An Alar visualization, centered on the assertion "Orange is a food"; inferred assertions are shown in green.]
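As an illustration only (not Alar's actual implementation), the sketch below shows the kind of node-and-arc structure such a visualization works over: concept nodes connected by assertions that are either supplied to the system or produced by inference, with the inferred ones separable for review. The relations and assertions used here are hypothetical examples.

import networkx as nx

g = nx.DiGraph()

# Assertions supplied to the knowledge base (hypothetical examples).
supplied = [("orange", "IsA", "food"), ("food", "UsedFor", "eating")]
# Assertions produced by inference (hypothetical examples).
inferred = [("orange", "UsedFor", "eating")]

for head, relation, tail in supplied:
    g.add_edge(head, tail, relation=relation, source="supplied")
for head, relation, tail in inferred:
    g.add_edge(head, tail, relation=relation, source="inferred")

# A quality-assurance pass might list only the inferred assertions,
# e.g. to spot false, vague, or misleading conclusions.
for head, tail, data in g.edges(data=True):
    if data["source"] == "inferred":
        print(f"inferred: {head} --{data['relation']}--> {tail}")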
Modeling Subjective Experience-Based Learning under Uncertainty and Frames
Ahn, Hyung-il (IBM Research) | Picard, Rosalind (MIT Media Lab)
In this paper we computationally examine how subjective experience may help or harm a decision maker's learning under uncertain outcomes, frames, and their interactions. To model subjective experience, we propose an "experienced-utility function" based on a prospect theory (PT)-based parameterized subjective value function. Our analysis and simulations of two-armed bandit tasks show that the task domain (underlying outcome distributions) and framing (reference point selection) influence experienced utilities and, in turn, the "subjective discriminability" of choices under uncertainty. Experiments demonstrate that subjective discriminability improves on objective discriminability when the experienced-utility function is used with appropriate framing for a given task domain, and that greater subjective discriminability leads to better decisions in learning under uncertainty.
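A minimal sketch of the general idea, not the paper's model: a prospect-theory-style value function applied to outcomes relative to a reference point (the frame), used to compare how discriminable two bandit arms are under different framings. The curvature exponents, loss-aversion weight, reference points, and payoff distributions below are illustrative assumptions.

import numpy as np

# Prospect-theory-style subjective value function (parameters are illustrative).
def subjective_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    x = outcome - reference          # gains/losses framed relative to a reference point
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * (-x) ** beta       # steeper for losses (loss aversion)

# Two-armed bandit with hypothetical payoff distributions: how separable the
# arms look subjectively depends on the frame (reference point).
rng = np.random.default_rng(0)
arm_a = rng.normal(1.0, 1.0, 10000)
arm_b = rng.normal(0.5, 1.0, 10000)

for ref in (0.0, 1.5):               # two framings: mostly gains vs. mostly losses
    va = np.array([subjective_value(x, ref) for x in arm_a])
    vb = np.array([subjective_value(x, ref) for x in arm_b])
    d = abs(va.mean() - vb.mean()) / np.sqrt(0.5 * (va.var() + vb.var()))
    print(f"reference={ref}: subjective discriminability = {d:.2f}")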
SenticNet 2: A Semantic and Affective Resource for Opinion Mining and Sentiment Analysis
Cambria, Erik (National University of Singapore) | Havasi, Catherine (MIT Media Lab) | Hussain, Amir (University of Stirling)
Web 2.0 has changed the ways people communicate, collaborate, and express their opinions and sentiments. But while social data on the Web are perfectly suitable for human consumption, they remain largely inaccessible to machines. To bridge the cognitive and affective gap between word-level natural language data and the concept-level sentiments conveyed by them, we developed SenticNet 2, a publicly available semantic and affective resource for opinion mining and sentiment analysis. SenticNet 2 is built by means of sentic computing, a new paradigm that exploits both AI and Semantic Web techniques to better recognize, interpret, and process natural language opinions. By providing the semantics and sentics (that is, the cognitive and affective information) associated with over 14,000 concepts, SenticNet 2 represents one of the most comprehensive semantic resources for the development of affect-sensitive applications in fields such as social data mining, multimodal affective HCI, and social media marketing.
Automated Color Selection Using Semantic Knowledge
Havasi, Catherine (MIT Media Lab) | Speer, Robert (MIT Media Lab) | Holmgren, Justin (Massachusetts Institute of Technology)
Colorizer is a program that hypothesizes color values to represent a given word or sentence, taking into account both physical descriptions of objects and their emotional connotations. This new application of common sense reasoning uses background knowledge about the world to build a model of the connections between everyday things, and uses this model to guess an appropriate color for a word. Colorizer can run over either static text or real-time input, such as a speech recognition stream. It has applications in games, the arts, and webpage design.
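As a toy illustration of the idea only (not Colorizer's code), one could blend the colors of concepts a word is associated with, weighted by association strengths drawn from a common-sense knowledge base. The anchor concepts, association weights, and input word below are hypothetical.

# Hypothetical anchor concepts with known colors (RGB).
color_of = {
    "grass": (34, 139, 34),
    "sky": (135, 206, 235),
    "fire": (226, 88, 34),
}
# Hypothetical association strengths for an input word.
associations = {
    "summer": {"grass": 0.6, "sky": 0.3, "fire": 0.1},
}

def guess_color(word):
    """Return a weighted blend of the colors of the word's associated concepts."""
    weights = associations.get(word, {})
    total = sum(weights.values()) or 1.0
    channels = zip(*(color_of[c] for c in weights))   # group R, G, B values
    return tuple(
        round(sum(w * v for w, v in zip(weights.values(), channel)) / total)
        for channel in channels
    )

print(guess_color("summer"))   # blended RGB guess for "summer"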
Coarse Word-Sense Disambiguation Using Common Sense
Havasi, Catherine (MIT Media Lab) | Speer, Robert (MIT Media Lab) | Pustejovsky, James (Brandeis University)
Coarse word sense disambiguation (WSD) is an NLP task that is both important and practical: it aims to distinguish senses of a word that have very different meanings, while avoiding the complexity that comes from trying to finely distinguish every possible word sense. Reasoning techniques that make use of common sense information can help to solve the WSD problem by taking word meaning and context into account. We have created a system for coarse word sense disambiguation using blending, a common sense reasoning technique, to combine information from SemCor, WordNet, ConceptNet, and Extended WordNet. Within the resulting blended space, a sense is suggested based on the similarity of the ambiguous word to each of its possible word senses. The general blending-based system performed well at the task, achieving an F-score of 80.8% on the SemEval-2007 coarse word sense disambiguation task.
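A minimal sketch of the selection step described above, under the assumption that the ambiguous word in context and each candidate sense are represented as vectors in the blended space: pick the sense closest by cosine similarity. The vectors and sense labels below are random stand-ins, not actual blended SemCor/WordNet/ConceptNet data.

import numpy as np

rng = np.random.default_rng(42)

# Stand-in vectors: a context representation of the ambiguous word and its
# candidate coarse senses (hypothetical senses of "bank").
context_vector = rng.normal(size=50)
sense_vectors = {
    "bank.n.financial_institution": rng.normal(size=50),
    "bank.n.river_side": rng.normal(size=50),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Suggest the sense most similar to the word's context vector.
best_sense = max(sense_vectors, key=lambda s: cosine(context_vector, sense_vectors[s]))
print(best_sense)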
Learning Temporal Plans from Observation of Human Collaborative Behavior
Chernova, Sonia (MIT Media Lab) | Breazeal, Cynthia (MIT Media Lab)
The objective of our research effort is to enable robots to engage in complex collaborative tasks involving human-robot interaction. To function as a reliable assistant or teammate, the robot must be able to adapt to the actions of its human partner and respond to temporal variations in its own and its partner's actions. Dynamic plan execution algorithms provide a fast and robust method of executing collaborative multi-robot tasks in the presence of temporal uncertainty. However, current state-of-the-art algorithms rely on hand-crafted plans, providing no means of generating plans for new tasks. In this paper, we outline our approach for learning a model of collaborative robot behavior by observing human-human interaction on the target task. Through statistical analysis of the recorded human behavior we extract patterns of common behavior and use the resulting model to learn a temporal plan. The result is a learning framework that automatically produces temporal plans, suitable for dynamic plan execution, that model human collaborative behavior and yield human-like behavior in the robot. In this paper, we present our current progress in the development of this learning framework.