AAAI News
Hamilton, Carol M. (Association for the Advancement of Artificial Intelligence)
AAAI is pleased to present the 2011 Spring Symposium Series, to be held Monday through Wednesday, March 21-23, 2011, at Stanford University. Among the events are the AAAI/SIGART Doctoral Consortium and the second AAAI Educational Advances in Artificial Intelligence Symposium, to name only a few of the highlights. For complete information on these programs, including Tutorial…
Building Watson: An Overview of the DeepQA Project
Ferrucci, David (IBM T. J. Watson Research Center) | Brown, Eric (IBM T. J. Watson Research Center) | Chu-Carroll, Jennifer (IBM T. J. Watson Research Center) | Fan, James (IBM T. J. Watson Research Center) | Gondek, David (IBM T. J. Watson Research Center) | Kalyanpur, Aditya A. (IBM T. J. Watson Research Center) | Lally, Adam (IBM T. J. Watson Research Center) | Murdock, J. William (IBM T. J. Watson Research Center) | Nyberg, Eric (Carnegie Mellon University) | Prager, John (IBM T. J. Watson Research Center) | Schlaefer, Nico (Carnegie Mellon University) | Welty, Chris (IBM T. J. Watson Research Center)
IBM Research undertook the challenge of building a computer system that could compete in real time at the human champion level on the American TV quiz show Jeopardy! The challenge included fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! challenge helped us identify requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson performs at human expert levels in terms of precision, confidence, and speed on the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that can serve as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.
Realistic Fireteam Movement in Urban Environments
Darken, Christian J. (Naval Postgraduate School) | McCue, Daniel (Naval Postgraduate School) | Guerrero, Michael (Naval Postgraduate School)
Realistic simulation of the movement of infantry in urban environments, with 3D models and at interactive rates, is of wide and enduring interest. Many video games have attempted it, and military simulations are increasingly doing the same. Previous attempts have fallen short in two areas: proper coordination of movement, and adequate modeling of the detection of hostile targets. Novel algorithms that simulate fireteam movement and visual scanning appropriate to urban environments are described, and measurements of the computational performance of the most expensive components of the approach are provided.
Adapting Open Information Extraction to Domain-Specific Relations
Soderland, Stephen (University of Washington) | Roof, Brendan (University of Washington) | Qin, Bo (University of Washington) | Xu, Shi (University of Washington) | Mausam (University of Washington) | Etzioni, Oren (University of Washington)
Information extraction (IE) can identify a set of relations from free text to support question answering (QA). Until recently, IE systems were domain-specific and required a combination of manual engineering and supervised learning to adapt to each target domain. A new paradigm, Open IE, operates on large text corpora without any manual tagging of relations, and indeed without any pre-specified relations. Because of its open-domain and open-relation nature, Open IE is purely textual and is unable to relate surface forms to an ontology, even if one is known in advance. We explore the steps needed to adapt Open IE to a domain-specific ontology and demonstrate our approach of mapping domain-independent tuples to an ontology using domains from DARPA's Machine Reading Project. Our system achieves precision over 0.90 from as few as 8 training examples for an NFL-scoring domain.
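The mapping step described above can be pictured with a minimal sketch. This is not the authors' system: the relation names, phrases, and the simple exact-phrase lookup learned from a few labeled examples are all hypothetical, chosen only to illustrate linking open-IE surface forms to ontology relations.

```python
# Illustrative sketch (not the paper's system): mapping open-IE style
# (arg1, relation phrase, arg2) tuples onto a small, hypothetical
# domain ontology using a handful of labeled examples.

def learn_mapping(examples):
    """Build a phrase -> ontology-relation table from labeled tuples.

    `examples` is a list of ((arg1, phrase, arg2), relation) pairs.
    """
    mapping = {}
    for (_, phrase, _), relation in examples:
        mapping[phrase.lower().strip()] = relation
    return mapping

def map_tuple(mapping, tup):
    """Return (relation, arg1, arg2) if the surface form is linked, else None."""
    arg1, phrase, arg2 = tup
    relation = mapping.get(phrase.lower().strip())
    if relation is None:
        return None  # surface form not linked to the ontology
    return (relation, arg1, arg2)

# A few hand-labeled NFL-scoring examples (hypothetical relation names).
examples = [
    (("Smith", "kicked a field goal for", "the Jets"), "fieldGoal"),
    (("Jones", "threw a touchdown pass to", "Brown"), "touchdownPass"),
]
mapping = learn_mapping(examples)
print(map_tuple(mapping, ("Adams", "kicked a field goal for", "the Bears")))
# → ('fieldGoal', 'Adams', 'the Bears')
```

A real adaptation would of course generalize beyond exact phrase matches (for example, by normalizing or clustering relation phrases), which is where the few training examples earn their keep.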
A Comprehensive Survey of Data Mining-based Fraud Detection Research
Phua, Clifton, Lee, Vincent, Smith, Kate, Gayler, Ross
This survey paper categorises, compares, and summarises almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of the data evidence collected within affected industries. Within the business context of mining data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers many more technical articles and is the only one, to the best of our knowledge, that proposes alternative data and solutions from related domains.
Multiplex Structures: Patterns of Complexity in Real-World Networks
Complex network theory aims to model and analyze complex systems that consist of multiple, interdependent components. Among all studies on complex networks, topological structure analysis is of the most fundamental importance, as it represents a natural route to understanding the dynamics, as well as to synthesizing or optimizing the functions, of networks. A broad spectrum of network structural patterns has been reported in the past decade, such as communities, multipartites, hubs, authorities, outliers, bow ties, and others. Here, we show that most individual real-world networks exhibit multiplex structures: a multitude of known or even unknown (hidden) patterns can be simultaneously present in the same network, and they may overlap and nest within each other to collaboratively form a heterogeneous, nested, or hierarchical organization, in which different connective phenomena can be observed at different granular levels. In addition, we show that the multiplex structures hidden in exploratory networks can be well defined and effectively recognized within a unified framework consisting of a set of proposed concepts, models, and algorithms. Our findings provide strong evidence that most real-world complex systems are driven by a combination of heterogeneous mechanisms that collaboratively shape the ubiquitous multiplex structures we currently observe. This work also contributes a mathematical tool for analyzing different sources of networks from the new perspective of unveiling multiplex structures, which will benefit multiple disciplines, including sociology, economics, and computer science.
Solving the Resource Constrained Project Scheduling Problem with Generalized Precedences by Lazy Clause Generation
Schutt, Andreas, Feydy, Thibaut, Stuckey, Peter J., Wallace, Mark G.
This technical report presents a generic exact solution approach for minimizing the project duration of the resource-constrained project scheduling problem with generalized precedences (Rcpsp/max). The approach uses lazy clause generation, i.e., a hybrid of finite domain and Boolean satisfiability solving, in order to apply nogood learning and conflict-driven search to solution generation. Our experiments show the benefit of lazy clause generation for finding an optimal solution and proving its optimality in comparison to other state-of-the-art exact and non-exact methods. The method is highly robust: it matched or bettered the best known results on all but 3 of the 2340 instances we examined, according to the currently available data on PSPLib. Of the 631 open instances in this set, it closed 573 and improved the bounds of 51 of the remaining 58 instances.
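For readers unfamiliar with Rcpsp/max, the constraints being solved can be stated compactly. In a generalized precedence, an arc (i, j, d) requires start[j] - start[i] >= d, where a negative d encodes a maximum time lag; a renewable resource caps the total demand of activities running at any instant. The sketch below is only an illustrative feasibility checker under these definitions (all instance data is made up), not the paper's solver.

```python
# Illustrative sketch (independent of the paper's solver): checking a
# candidate schedule for a toy Rcpsp/max instance with one renewable
# resource.  Generalized precedences are arcs (i, j, d) requiring
# start[j] - start[i] >= d, where d may be negative (a maximum time lag).

def is_feasible(start, dur, demand, capacity, precedences):
    # Generalized precedence constraints (minimum and maximum time lags).
    for i, j, d in precedences:
        if start[j] - start[i] < d:
            return False
    # Resource usage only changes at activity start times, so it suffices
    # to check the capacity at each distinct start time.
    for t in set(start):
        used = sum(demand[a] for a in range(len(start))
                   if start[a] <= t < start[a] + dur[a])
        if used > capacity:
            return False
    return True

start = [0, 0, 2]          # start times of activities 0..2
dur = [2, 3, 2]            # durations
demand = [1, 1, 1]         # per-activity demand on the single resource
capacity = 2               # resource capacity
precedences = [(0, 2, 2),  # activity 2 starts at least 2 after activity 0
               (2, 0, -4)] # ...but at most 4 after (maximum time lag)
print(is_feasible(start, dur, demand, capacity, precedences))  # True
```

The maximum time lags are what make Rcpsp/max harder than plain Rcpsp: even deciding feasibility is NP-hard, which is why learning nogoods from conflicts pays off.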
Not only a lack of right definitions: Arguments for a shift in information-processing paradigm
Machine Consciousness and Machine Intelligence are not simply new buzzwords that occupy our imagination. Over the last decades, we have witnessed an unprecedented rise in attempts to create machines with human-like features and capabilities. However, despite widespread sympathy and abundant funding, progress in these enterprises is far from satisfactory. The reasons for this are twofold: first, the notions of cognition and intelligence (usually borrowed from human behavior studies) are notoriously blurred and ill-defined; second, the basic concepts underpinning the whole discourse are themselves either undefined or defined very vaguely. This leads to improper and inadequate determination of research goals, which I illustrate with examples drawn from recent documents issued by DARPA and the European Commission. I also propose some remedies that, I hope, would improve the current, regrettable state of the art.
An Empirical Study of Borda Manipulation
Davies, Jessica, Katsirelos, George, Narodytska, Nina, Walsh, Toby
We study the problem of coalitional manipulation in elections under the unweighted Borda rule. We provide empirical evidence of the manipulability of Borda elections in the form of two new greedy manipulation algorithms based on intuitions from the bin-packing and multiprocessor scheduling domains. Although we have not been able to show that these algorithms beat existing methods in the worst case, our empirical evaluation shows that they significantly outperform the existing method and are able to find optimal manipulations in the vast majority of the randomly generated elections we tested. These empirical results provide further evidence that the Borda rule offers little defense against coalitional manipulation.
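To make the bin-packing intuition concrete, here is a minimal reverse-greedy sketch in that spirit, not necessarily either of the paper's algorithms: each manipulator ranks the preferred candidate first and orders the remaining candidates so that the currently strongest opponents receive the fewest Borda points. All election data below is invented for illustration.

```python
# Illustrative reverse-greedy Borda manipulation heuristic (not necessarily
# the paper's exact algorithm).  With m candidates, a ballot awards
# m-1, m-2, ..., 0 points from top rank to bottom.

def greedy_manipulation(scores, preferred, k):
    """scores: honest Borda totals {candidate: points};
    k manipulators all try to make `preferred` win."""
    m = len(scores)
    totals = dict(scores)
    for _ in range(k):
        totals[preferred] += m - 1            # top rank for preferred
        others = sorted((c for c in totals if c != preferred),
                        key=lambda c: totals[c], reverse=True)
        # Strongest opponent gets 0 points, next strongest gets 1, etc.
        for points, c in enumerate(others):
            totals[c] += points
    winner = max(totals, key=lambda c: totals[c])
    return winner, totals

# Two manipulators lift "a" past "b" and "c" (made-up honest totals).
winner, totals = greedy_manipulation({"a": 10, "b": 13, "c": 11}, "a", 2)
# winner == "a", final totals {"a": 14, "b": 13, "c": 13}
```

The analogy to scheduling is that the points 0..m-2 play the role of job sizes and the opponents' score gaps play the role of machine loads: the heuristic keeps the largest "loads" from growing.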
Convergence to Equilibria in Plurality Voting
Meir, Reshef (The Hebrew University of Jerusalem) | Polukarov, Maria (University of Southampton) | Rosenschein, Jeffrey S. (The Hebrew University of Jerusalem) | Jennings, Nicholas R. (University of Southampton)
Multi-agent decision problems, in which independent agents have to agree on a joint plan of action or an allocation of resources, are central to AI. In such situations, agents' individual preferences over the available alternatives may vary, and they may try to reconcile these differences by voting. Because agents may have incentives to vote strategically and misreport their real preferences, a number of recent papers have explored ways of avoiding or eliminating such manipulations. In contrast to most prior work, this paper focuses on the convergence of strategic behavior to a decision from which no voter will want to deviate. We consider scenarios where voters cannot coordinate their actions but are allowed to change their vote after observing the current outcome. We focus on the Plurality voting rule and study the conditions under which this iterative game is guaranteed to converge to a Nash equilibrium (i.e., to a decision that is stable against further unilateral manipulations). We show for the first time how convergence depends on the exact attributes of the game, such as the tie-breaking scheme, and on assumptions regarding agents' weights and strategies.
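The iterative process studied here is easy to simulate. The sketch below is a hypothetical setup, not the paper's exact model: unit-weight voters start from truthful ballots, one voter at a time switches to any vote that yields an outcome they prefer, and ties are broken lexicographically, until no voter wants to move.

```python
# Illustrative simulation of iterative Plurality voting (hypothetical
# setup): voters take turns deviating to a better vote until no voter
# benefits from a unilateral change, i.e., a Nash equilibrium.

def winner(votes, candidates):
    tally = {c: 0 for c in candidates}
    for v in votes:
        tally[v] += 1
    # Lexicographic tie-breaking: highest tally, then alphabetical order.
    return min(candidates, key=lambda c: (-tally[c], c))

def iterate_to_equilibrium(prefs, candidates, max_steps=1000):
    """prefs[i] is voter i's ranking, best first; start from truthful votes."""
    votes = [p[0] for p in prefs]
    for _ in range(max_steps):
        moved = False
        for i, pref in enumerate(prefs):
            current = winner(votes, candidates)
            # Try each alternative vote; deviate if the outcome improves.
            for c in candidates:
                trial = votes[:i] + [c] + votes[i + 1:]
                if pref.index(winner(trial, candidates)) < pref.index(current):
                    votes = trial
                    moved = True
                    break
        if not moved:
            return votes, winner(votes, candidates)
    return votes, winner(votes, candidates)

# Made-up three-voter profile: voter 1 abandons "b" to help "c" beat "a".
prefs = [["a", "b", "c"], ["b", "c", "a"], ["c", "b", "a"]]
votes, w = iterate_to_equilibrium(prefs, ["a", "b", "c"])
# converges to votes ["a", "c", "c"] with winner "c"
```

Whether such dynamics always converge is exactly what the paper analyzes; the answer depends on details like the tie-breaking scheme and the voters' weights and allowed move types.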