Beating Common Sense into Interactive Applications
Lieberman, Henry, Liu, Hugo, Singh, Push, Barry, Barbara
A long-standing dream of artificial intelligence has been to put commonsense knowledge into computers -- enabling machines to reason about everyday life. Some projects, such as Cyc, have begun to amass large collections of such knowledge. However, it is widely assumed that the use of common sense in interactive applications will remain impractical for years, until these collections can be considered sufficiently complete and commonsense reasoning sufficiently robust. Recently, at the Massachusetts Institute of Technology's Media Laboratory, we have had some success in applying commonsense knowledge in a number of intelligent interface agents, despite the admittedly spotty coverage and unreliable inference of today's commonsense knowledge systems. This article surveys several of these applications and reflects on interface design principles that enable successful use of commonsense knowledge.
AI in the News
A roundup of recent press coverage of artificial intelligence. Items include a report by Lakshmi Sandhana, coverage of AxonX's software that uses artificial intelligence, a look ahead to the Internet of 2014 in which software agents function as the brains of the network, collecting and analyzing data 24 hours a day and alerting their flesh-and-blood counterparts only when they catch sight of something noteworthy, and a piece on the aggressive way surveillance equipment is being marketed and implemented.
The 2004 AAAI Spring Symposium Series
Canamero, Lola, Dodds, Zachary, Greenwald, Lloyd, Gunderson, James, Howard, Ayanna, Hudlicka, Eva, Martin, Cheryl, Parker, Lynn, Oates, Tim, Payne, Terry, Qu, Yan, Schlenoff, Craig, Shanahan, James G., Tejada, Sheila, Weinberg, Jerry, Wiebe, Janyce
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2004 Spring Symposium Series, Monday through Wednesday, March 22-24, at Stanford University. The titles of the eight symposia were (1) Accessible Hands-on Artificial Intelligence and Robotics Education; (2) Architectures for Modeling Emotion: Cross-Disciplinary Foundations; (3) Bridging the Multiagent and Multirobotic Research Gap; (4) Exploring Attitude and Affect in Text: Theories and Applications; (5) Interaction between Humans and Autonomous Systems over Extended Operation; (6) Knowledge Representation and Ontologies for Autonomous Systems; (7) Language Learning: An Interdisciplinary Perspective; and (8) Semantic Web Services. Each symposium had limited attendance. Most symposium chairs elected to create AAAI technical reports of their symposia, which are available as paperbound reports or, for AAAI members, as downloads from the AAAI members-only Web site. This report includes summaries of the eight symposia, written by the symposium chairs.
Project Halo: Towards a Digital Aristotle
Friedland, Noah S., Allen, Paul G., Matthews, Gavin, Witbrock, Michael, Baxter, David, Curtis, Jon, Shepard, Blake, Miraglia, Pierluigi, Angele, Jurgen, Staab, Steffen, Moench, Eddie, Oppermann, Henrik, Wenke, Dirk, Israel, David, Chaudhri, Vinay, Porter, Bruce, Barker, Ken, Fan, James, Chaw, Shaw Yi, Yeh, Peter, Tecuci, Dan, Clark, Peter
Project Halo is a multistaged effort, sponsored by Vulcan Inc., aimed at creating Digital Aristotle, an application that will encompass much of the world's scientific knowledge and be capable of applying sophisticated problem solving to answer novel questions. Vulcan envisions two primary roles for Digital Aristotle: as a tutor to instruct students in the sciences and as an interdisciplinary research assistant to help scientists in their work. As a first step towards this goal, we have just completed a six-month pilot phase designed to assess the state of the art in applied knowledge representation and reasoning (KR&R). Vulcan selected three teams, each of which was to formally represent 70 pages from the advanced placement (AP) chemistry syllabus and deliver knowledge-based systems capable of answering questions on that syllabus. The evaluation quantified each system's coverage of the syllabus in terms of its ability to answer novel, previously unseen questions and to provide human-readable answer justifications. These justifications will play a critical role in building user trust in the question-answering capabilities of Digital Aristotle. Prior to the final evaluation, a "failure taxonomy" was collaboratively developed in an attempt to standardize failure analysis and to facilitate cross-platform comparisons. Despite differences in approach, all three systems did very well on the challenge, achieving performance comparable to the human median. The analysis also provided key insights into how the approaches might be scaled, while at the same time suggesting how the cost of producing such systems might be reduced. This outcome leaves us highly optimistic that the technical challenges facing this effort in the years to come can be identified and overcome. This article presents the motivation and long-term goals of Project Halo, describes in detail the six-month first phase of the project -- the Halo Pilot -- its KR&R challenge, empirical evaluation, results, and failure analysis. The pilot's outcome is used to define challenges for the next phase of the project and beyond.
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
Erkan, G., Radev, D. R.
We introduce a stochastic graph-based method for computing the relative importance of textual units for natural language processing. We test the technique on the problem of text summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked first in more than one task in the recent DUC 2004 evaluation. In this paper, we present a detailed analysis of our approach and apply it to a larger data set, including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and the other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques, including continuous LexRank. We also show that our approach is quite insensitive to noise in the data that may result from an imperfect topical clustering of documents.
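To make the centrality computation concrete, the following minimal Python sketch builds a thresholded cosine-similarity graph over sentences and runs power iteration to obtain LexRank-style scores. The tokenization, the similarity threshold, and the damping factor are illustrative assumptions, not the configuration used in the paper.

```python
# A minimal, illustrative sketch of thresholded LexRank-style centrality.
import math
from collections import Counter

def cosine(c1, c2):
    common = set(c1) & set(c2)
    num = sum(c1[w] * c2[w] for w in common)
    den = math.sqrt(sum(v * v for v in c1.values())) * \
          math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

def lexrank(sentences, threshold=0.1, damping=0.85, iters=100):
    counts = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Adjacency matrix from thresholded intra-sentence cosine similarity.
    adj = [[1.0 if i != j and cosine(counts[i], counts[j]) >= threshold else 0.0
            for j in range(n)] for i in range(n)]
    degree = [max(sum(row), 1.0) for row in adj]
    scores = [1.0 / n] * n
    for _ in range(iters):  # power iteration on the row-normalized graph
        scores = [(1 - damping) / n +
                  damping * sum(adj[j][i] / degree[j] * scores[j] for j in range(n))
                  for i in range(n)]
    return scores

sents = ["the cat sat on the mat",
         "a cat was sitting on the mat",
         "stock prices fell sharply today"]
print(sorted(zip(lexrank(sents), sents), reverse=True)[0][1])  # most central sentence
```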
Towards Understanding and Harnessing the Potential of Clause Learning
Beame, P., Kautz, H., Sabharwal, A.
Efficient implementations of DPLL with the addition of clause learning are the fastest complete Boolean satisfiability solvers and can handle many significant real-world problems, such as verification, planning and design. Despite its importance, little is known of the ultimate strengths and limitations of the technique. This paper presents the first precise characterization of clause learning as a proof system (CL), and begins the task of understanding its power by relating it to the well-studied resolution proof system. In particular, we show that with a new learning scheme, CL can provide exponentially shorter proofs than many proper refinements of general resolution (RES) satisfying a natural property. These include regular and Davis-Putnam resolution, which are already known to be much stronger than ordinary DPLL. We also show that a slight variant of CL with unlimited restarts is as powerful as RES itself. Translating these analytical results to practice, however, presents a challenge because of the nondeterministic nature of clause learning algorithms. We propose a novel way of exploiting the underlying problem structure, in the form of a high level problem description such as a graph or PDDL specification, to guide clause learning algorithms toward faster solutions. We show that this leads to exponential speed-ups on grid and randomized pebbling problems, as well as substantial improvements on certain ordering formulas.
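As a concrete point of reference, the toy Python solver below extends a recursive DPLL procedure with a deliberately naive learning scheme that records the negation of the current decision literals whenever a conflict is reached. It is only meant to illustrate what "adding clause learning to DPLL" means; it does not implement the learning schemes whose proof-theoretic power the paper analyzes.

```python
# Toy DPLL with a naive clause-learning scheme: on conflict, learn the clause
# blocking the current combination of decision literals.  Real CDCL solvers
# use conflict analysis (e.g., unique implication points); this is a sketch.

def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None  # conflict: every literal falsified
            if len(unassigned) == 1:
                assignment = assignment | {unassigned[0]}
                changed = True
    return assignment

def solve(clauses, assignment=frozenset(), decisions=(), learned=None):
    learned = [] if learned is None else learned
    assignment = unit_propagate(clauses + learned, assignment)
    if assignment is None:
        if decisions:
            learned.append([-d for d in decisions])  # learned clause
        return None
    variables = {abs(l) for c in clauses for l in c}
    free = [v for v in variables if v not in assignment and -v not in assignment]
    if not free:
        return assignment
    v = free[0]
    for lit in (v, -v):  # branch on both polarities of the decision variable
        result = solve(clauses, assignment | {lit}, decisions + (lit,), learned)
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
print(solve(cnf))  # a satisfying set of literals, e.g. frozenset({1, 3, -2})
```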
Solving Transition Independent Decentralized Markov Decision Processes
Becker, R., Zilberstein, S., Lesser, V., Goldman, C. V.
Formal treatment of collaborative multi-agent systems has been lagging behind the rapid progress in sequential decision making by individual agents. Recent work in the area of decentralized Markov Decision Processes (MDPs) has contributed to closing this gap, but the computational complexity of these models remains a serious obstacle. To overcome this complexity barrier, we identify a specific class of decentralized MDPs in which the agents' transitions are independent. The class consists of independent collaborating agents that are tied together through a structured global reward function that depends on all of their histories of states and actions. We present a novel algorithm for solving this class of problems and examine its properties, both as an optimal algorithm and as an anytime algorithm. To the best of our knowledge, this is the first algorithm to optimally solve a non-trivial subclass of decentralized MDPs. It lays the foundation for further work in this area on both exact and approximate algorithms.
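The structure of this class can be illustrated with a small Python sketch: two agents with independent transition models and local action costs, coupled only through a joint reward on their outcomes. For brevity the joint reward below depends on the final joint state rather than on full histories, and joint policies are evaluated by brute force; this shows the problem structure only and is not the optimal algorithm presented in the paper.

```python
# A tiny transition-independent decentralized MDP: independent local dynamics,
# local action costs, and a joint reward coupling the two agents.  The numbers
# and the brute-force policy search are illustrative assumptions.
from itertools import product

STATES, ACTIONS, HORIZON = [0, 1], [0, 1], 2

def transition(state, action):
    """Each agent's local dynamics: action 1 moves toward state 1 with p=0.8."""
    if action == 1:
        return {1: 1.0} if state == 1 else {1: 0.8, state: 0.2}
    return {state: 1.0}

def local_reward(state, action):
    return -0.1 if action == 1 else 0.0   # acting has a small local cost

def joint_reward(s1, s2):
    return 5.0 if s1 == 1 and s2 == 1 else 0.0  # bonus only if both finish

def evaluate(policy1, policy2):
    """Expected value of a pair of stationary local policies from (0, 0)."""
    dist, value = {(0, 0): 1.0}, 0.0
    for _ in range(HORIZON):
        nxt = {}
        for (s1, s2), p in dist.items():
            a1, a2 = policy1[s1], policy2[s2]
            value += p * (local_reward(s1, a1) + local_reward(s2, a2))
            for n1, p1 in transition(s1, a1).items():
                for n2, p2 in transition(s2, a2).items():  # independent transitions
                    nxt[(n1, n2)] = nxt.get((n1, n2), 0.0) + p * p1 * p2
        dist = nxt
    return value + sum(p * joint_reward(s1, s2) for (s1, s2), p in dist.items())

policies = [dict(zip(STATES, acts)) for acts in product(ACTIONS, repeat=len(STATES))]
best = max(product(policies, policies), key=lambda pq: evaluate(*pq))
print(evaluate(*best), best)
```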
Generalizing Boolean Satisfiability II: Theory
Dixon, H. E., Ginsberg, M. L., Luks, E. M., Parkes, A. J.
This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
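The recurring structure the paper identifies can be illustrated with a short Python sketch: a single clause together with a permutation acting on its variables stands for the clause's entire orbit under the generated group. The pigeonhole-style variables and the permutation below are invented for illustration; ZAP's actual augmented representation and inference rules are developed in the paper and its companions.

```python
# Illustration of the structural idea: a set of clauses represented as the
# orbit of one clause under a group of permutations acting on its literals.

def apply_perm(perm, clause):
    """Apply a permutation of variables to a clause of signed literals."""
    return tuple(sorted(perm.get(abs(l), abs(l)) * (1 if l > 0 else -1)
                        for l in clause))

def orbit(clause, generators):
    """Close {clause} under the group generated by the given permutations."""
    start = apply_perm({}, clause)  # canonical form of the seed clause
    seen, frontier = {start}, [start]
    while frontier:
        c = frontier.pop()
        for g in generators:
            image = apply_perm(g, c)
            if image not in seen:
                seen.add(image)
                frontier.append(image)
    return seen

# One axiom: "pigeon 1 does not sit in both hole 1 and hole 2", with the
# variable for pigeon p in hole h numbered 10*p + h.  Cyclically permuting the
# three pigeons generates the corresponding axiom for every pigeon from this
# single clause.
axiom = (-11, -12)
rotate_pigeons = {11: 21, 12: 22, 21: 31, 22: 32, 31: 11, 32: 12}
for clause in sorted(orbit(axiom, [rotate_pigeons])):
    print(clause)   # the orbit: one clause per pigeon
```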
On Prediction Using Variable Order Markov Models
Begleiter, R., El-Yaniv, R., Yona, G.
This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM), and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real-life sequences from three domains: proteins, English text, and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.
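For readers who want a concrete baseline, the Python sketch below implements a simplified variable-order Markov predictor with suffix backoff and Laplace smoothing, and evaluates it by average log-loss. It is a stand-in used to illustrate the evaluation setup, not an implementation of CTW, PPM, or PSTs as studied in the paper.

```python
# Simplified variable-order Markov predictor with suffix backoff and Laplace
# smoothing, evaluated by average log-loss (bits per symbol).
import math
from collections import defaultdict

class VOMPredictor:
    def __init__(self, alphabet, max_order=3):
        self.alphabet = sorted(set(alphabet))
        self.max_order = max_order
        # context (string) -> next symbol -> count
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for i, sym in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k >= 0:
                    self.counts[sequence[i - k:i]][sym] += 1

    def prob(self, context, sym):
        # Back off to the longest previously seen suffix of the context.
        for k in range(min(self.max_order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                table = self.counts[ctx]
                total = sum(table.values())
                return (table.get(sym, 0) + 1) / (total + len(self.alphabet))
        return 1.0 / len(self.alphabet)   # untrained model: uniform

    def average_log_loss(self, sequence):
        losses = [-math.log2(self.prob(sequence[:i], sym))
                  for i, sym in enumerate(sequence)]
        return sum(losses) / len(losses)

text = "abracadabra"
model = VOMPredictor(alphabet=text, max_order=3)
model.train(text)
print(model.average_log_loss(text))   # loss on the training text, for illustration
```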