An expert system for detecting automobile insurance fraud using social network analysis
Šubelj, Lovro, Furlan, Štefan, Bajec, Marko
The article proposes an expert system for the detection, and subsequent investigation, of groups of collaborating automobile insurance fraudsters. The system is described and examined in detail, and several technical difficulties in detecting fraud are considered so that the system is applicable in practice. Unlike many other approaches, the system uses networks to represent the data. Networks are the most natural representation of such a relational domain, allowing the formulation and analysis of complex relations between entities. Fraudulent entities are found with a novel assessment algorithm, the Iterative Assessment Algorithm (IAA), also presented in the article. Besides the intrinsic attributes of entities, the algorithm also exploits the relations between entities. The prototype was evaluated and rigorously analyzed on real-world data. The results show that automobile insurance fraud can be detected efficiently with the proposed system and that an appropriate data representation is vital.
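A minimal sketch of the iterative, relational scoring idea described above (the abstract does not give IAA's update rule, so the PageRank-style propagation, function names, and damping parameter below are illustrative assumptions, not the paper's algorithm):

    import networkx as nx  # assumed graph library for the sketch

    def iterative_assessment(graph, intrinsic, damping=0.5, iterations=20):
        """Score entities by combining their intrinsic suspicion with the
        scores of related entities (an assumed update, not the paper's IAA)."""
        score = dict(intrinsic)
        for _ in range(iterations):
            new_score = {}
            for node in graph.nodes:
                neighbors = list(graph.neighbors(node))
                relational = (sum(score[n] for n in neighbors) / len(neighbors)
                              if neighbors else 0.0)
                new_score[node] = (1 - damping) * intrinsic[node] + damping * relational
            score = new_score
        return score

    # Toy collision network: collaborating fraudsters a, b, c form a dense
    # cluster and reinforce one another's scores; bystander d does not.
    g = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
    print(iterative_assessment(g, {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.1}))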
Linear Temporal Logic and Propositional Schemata, Back and Forth (extended version)
Aravantinos, Vincent, Caferra, Ricardo, Peltier, Nicolas
This paper relates the well-known Linear Temporal Logic (LTL) to the logic of propositional schemata introduced by the authors. We prove that LTL is equivalent to a class of schemata, in the sense that polynomial-time reductions exist from one logic to the other. Some consequences for complexity are given. We report on first experiments and analyze the consequences for possible improvements in existing implementations.
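To give the flavor of such a correspondence (this particular encoding is an illustrative assumption, not the construction from the paper): under a reading of LTL over the first n+1 time points, the temporal operators "always" and "eventually" correspond to iterated schemata,

    \Box p \;\rightsquigarrow\; \bigwedge_{i=0}^{n} p_i,
    \qquad
    \Diamond p \;\rightsquigarrow\; \bigvee_{i=0}^{n} p_i,

where p_i denotes the truth of p at instant i and n is the integer parameter of the schema.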
Understanding Exhaustive Pattern Learning
Pattern learning is an important problem in Natural Language Processing (NLP). Some exhaustive pattern learning (EPL) methods (Bod, 1992) were proved to be flawed (Johnson, 2002), while similar algorithms (Och and Ney, 2004) showed great advantages on other tasks, such as machine translation. In this article, we first formalize EPL, and then show that the probability given by an EPL model is a constant-factor approximation of the probability given by an ensemble method that integrates an exponential number of models obtained with various segmentations of the training data. This work provides, for the first time, a theoretical justification for the widely used EPL algorithm in NLP, which was previously viewed as a flawed heuristic method. A better understanding of EPL may lead to improved pattern learning algorithms in the future.
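In symbols (a schematic paraphrase of the claim above, not the paper's notation), the result says there are constants c_1, c_2 > 0 such that for every input x,

    c_1\, P_{\mathrm{ens}}(x) \;\le\; P_{\mathrm{EPL}}(x) \;\le\; c_2\, P_{\mathrm{ens}}(x),
    \qquad
    P_{\mathrm{ens}}(x) \;=\; \sum_{s \in \mathcal{S}} w_s\, P_s(x),

where \mathcal{S} ranges over the exponentially many segmentations of the training data, P_s is the model learned from segmentation s, and w_s are the ensemble's mixing weights.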
Visual Object Recognition
Grauman, Kristen, Leibe, Bastian
The visual recognition problem is central to computer vision research. This tutorial overviews computer vision algorithms for visual object recognition and image classification. We introduce the primary representations and learning approaches, with an emphasis on recent advances in the field, for researchers and students working in AI, robotics, or vision. ISBN 9781598299687, 181 pages.
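As a concrete illustration of one primary representation in this area, here is a minimal bag-of-visual-words sketch (bag-of-words over local descriptors is a standard representation in visual recognition; the specific library calls, parameters, and data below are illustrative assumptions, not code from the book):

    import numpy as np
    from sklearn.cluster import KMeans  # assumed dependency for the sketch

    def build_vocabulary(descriptor_sets, k=100):
        """Cluster local descriptors from training images into k visual words."""
        all_descriptors = np.vstack(descriptor_sets)
        return KMeans(n_clusters=k, n_init=10).fit(all_descriptors)

    def bag_of_words(descriptors, vocabulary):
        """Represent one image as a normalized histogram of visual-word counts."""
        words = vocabulary.predict(descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / hist.sum()

    # Toy usage with random 128-dimensional (SIFT-like) descriptors.
    rng = np.random.default_rng(0)
    train = [rng.normal(size=(200, 128)) for _ in range(5)]
    vocab = build_vocabulary(train, k=16)
    print(bag_of_words(rng.normal(size=(150, 128)), vocab))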
Simultaneous model-based clustering and visualization in the Fisher discriminative subspace
Bouveyron, Charles, Brunet, Camille
Clustering in high-dimensional spaces is nowadays a recurrent problem in many scientific domains, but it remains a difficult task from the point of view of both clustering accuracy and interpretability of the results. This paper presents a discriminative latent mixture (DLM) model which fits the data in a latent orthonormal discriminative subspace whose intrinsic dimension is lower than the dimension of the original space. By constraining model parameters within and between groups, a family of 12 parsimonious DLM models is exhibited which can fit various situations. An estimation algorithm, called the Fisher-EM algorithm, is also proposed for estimating both the mixture parameters and the discriminative subspace. Experiments on simulated and real datasets show that the proposed approach performs better than existing clustering methods while providing a useful representation of the clustered data. The method is also applied to the clustering of mass spectrometry data.
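A skeleton of the alternating structure of such an algorithm (the exact E, F, and M steps of Fisher-EM are given in the paper; the simplified updates below, e.g. keeping only the between-cluster scatter and a spherical Gaussian assumption, are placeholders, not the authors' formulas):

    import numpy as np

    def fisher_em(X, n_clusters, latent_dim, n_iter=50, seed=0):
        """Schematic alternation between (E) soft assignments, (F) estimation
        of a discriminative subspace, and (M) mixture-parameter updates."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        resp = rng.dirichlet(np.ones(n_clusters), size=n)  # soft assignments
        for _ in range(n_iter):
            nk = resp.sum(axis=0)                          # cluster sizes
            mu = (resp.T @ X) / nk[:, None]                # cluster means
            # F step (simplified): top eigenvectors of between-cluster scatter;
            # the paper maximizes a full Fisher criterion instead.
            centered = mu - X.mean(axis=0)
            S_b = (nk[:, None] * centered).T @ centered / n
            _, vecs = np.linalg.eigh(S_b)
            U = vecs[:, -latent_dim:]                      # orthonormal basis
            # M step in the latent space.
            Z, m = X @ U, mu @ U
            weights = nk / n
            # E step: responsibilities under a spherical Gaussian assumption.
            d2 = ((Z[:, None, :] - m[None, :, :]) ** 2).sum(axis=2)
            resp = weights * np.exp(-0.5 * d2)
            resp /= resp.sum(axis=1, keepdims=True)
        return resp.argmax(axis=1), U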
Metamodel-based importance sampling for the simulation of rare events
Dubourg, V., Deheeger, F., Sudret, B.
In the field of structural reliability, the Monte-Carlo estimator is considered the reference probability estimator. However, it is still intractable for real engineering cases since it requires a large number of runs of the model. In order to reduce the number of computer experiments, many other approaches, known as reliability methods, have been proposed. One approach consists in replacing the original experiment by a surrogate which is much faster to evaluate. Nevertheless, it is often difficult (or even impossible) to quantify the error made by this substitution. In this paper an alternative approach is developed. It takes advantage of kriging meta-modeling and importance sampling techniques. The proposed alternative estimator is finally applied to a finite-element-based structural reliability analysis.
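A minimal sketch combining the two ingredients named in the abstract (the paper's actual estimator and instrumental density are more elaborate and quantify the surrogate error; the Gaussian-process surrogate, the shifted instrumental distribution, and the toy limit state below are illustrative assumptions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor  # kriging surrogate

    def limit_state(x):
        """Toy performance function; failure when g(x) <= 0."""
        return 5.0 - x.sum(axis=1)

    # Fit a kriging (Gaussian process) surrogate on a small design of experiments.
    rng = np.random.default_rng(1)
    X_doe = rng.normal(size=(30, 2))
    gp = GaussianProcessRegressor().fit(X_doe, limit_state(X_doe))

    # Importance sampling: draw from an instrumental density shifted toward
    # the (surrogate-predicted) failure region, reweight by density ratios.
    shift = np.array([2.0, 2.0])             # assumed shift toward failure
    N = 10_000
    samples = rng.normal(size=(N, 2)) + shift

    def log_std_normal(x):                   # log-density of the nominal N(0, I)
        return -0.5 * (x ** 2).sum(axis=1) - x.shape[1] * 0.5 * np.log(2 * np.pi)

    weights = np.exp(log_std_normal(samples) - log_std_normal(samples - shift))
    failed = gp.predict(samples) <= 0.0      # cheap surrogate replaces the model
    print("estimated failure probability:", np.mean(failed * weights))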
Reports of the AAAI 2010 Fall Symposia
Azevedo, Roger (McGill University) | Biswas, Gautam (Vanderbilt University) | Bohus, Dan (Microsoft Research) | Carmichael, Ted (University of North Carolina at Charlotte) | Finlayson, Mark (Massachusetts Institute of Technology) | Hadzikadic, Mirsad (University of North Carolina at Charlotte) | Havasi, Catherine (Massachusetts Institute of Technology) | Horvitz, Eric (Microsoft Research) | Kanda, Takayuki (ATR Intelligent Robotics and Communications Laboratories) | Koyejo, Oluwasanmi (University of Texas at Austin) | Lawless, William (Paine College) | Lenat, Doug (Cycorp) | Meneguzzi, Felipe (Carnegie Mellon University) | Mutlu, Bilge (University of Wisconsin, Madison) | Oh, Jean (Carnegie Mellon University) | Pirrone, Roberto (University of Palermo) | Raux, Antoine (Honda Research Institute USA) | Sofge, Donald (Naval Research Laboratory) | Sukthankar, Gita (University of Central Florida) | Durme, Benjamin Van (Johns Hopkins University)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2010 Fall Symposium Series, held Thursday through Saturday, November 11-13, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the eight symposia are as follows: (1) Cognitive and Metacognitive Educational Systems; (2) Commonsense Knowledge; (3) Complex Adaptive Systems: Resilience, Robustness, and Evolvability; (4) Computational Models of Narrative; (5) Dialog with Robots; (6) Manifold Learning and Its Applications; (7) Proactive Assistant Agents; and (8) Quantum Informatics for Cognitive, Social, and Semantic Processes. The highlights of each symposium are presented in this report.
Enabling Intelligence through Middleware: Report of the AAAI 2010 Workshop
Anderson, Monica (University of Alabama) | Thomaz, Andrea L. (Georgia Institute of Technology)
Baby boomers are aging, and researchers are actively pursuing interdisciplinary research that enables robots to function autonomously within arbitrary environments alongside people. The goal of the AAAI 2010 Workshop on Enabling Intelligence through Middleware was to examine both the successes and the opportunities in providing tools that enable a larger pool of researchers to experiment with embodied, intelligent algorithms. The half-day workshop, attended by over 80 people, was held as part of the Twenty-Fourth AAAI Conference on Artificial Intelligence in Atlanta, Georgia, on July 12, 2010. The workshop consisted of two parts: (1) invited talks and (2) middleware presentations.
AAAI-10 Classic Paper Award: A Commentary on "Systematic Nonlinear Planning"
Weld, Daniel S. (University of Washington)
David McAllester and David Rosenblitt's paper, "Systematic Nonlinear Planning" (McAllester and Rosenblitt 1991), presented 19 years ago at the Ninth National Conference on Artificial Intelligence (AAAI-91), had two major impacts on the field: (1) an elegant algorithm and (2) an endorsement of the lifting technique. This commentary by Daniel S. Weld describes both. The paper's biggest impact stems from its extremely clear and simple presentation of a sound and complete algorithm (known as SNLP or POP) for classical planning. While it is easy to define such an algorithm as search through the space of world states, SNLP is a "partial-order" planner, meaning it searches the space of partially specified plans, where only partial constraints on action arguments and ordering decisions are maintained. Here, McAllester and Rosenblitt benefited from David Chapman's elegant TWEAK planner, which greatly clarified previous partial-order algorithms (Chapman 1985). SNLP's key feature is the use of a data structure, called a causal link, to record the planner's commitment to establish a precondition of one action with the postcondition of another.
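A minimal sketch of the causal-link bookkeeping described above (the record shape and threat check are illustrative assumptions; SNLP's full algorithm also manages open preconditions, ordering constraints, and variable bindings):

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class CausalLink:
        """Records the planner's commitment that `producer` establishes
        `condition` for `consumer` (written producer --condition--> consumer)."""
        producer: str
        condition: str
        consumer: str

    @dataclass
    class PartialPlan:
        steps: set = field(default_factory=set)
        orderings: set = field(default_factory=set)  # pairs (before, after)
        links: set = field(default_factory=set)

        def threatens(self, step: str, effects: set, link: CausalLink) -> bool:
            """A step threatens a link if it can delete the protected
            condition and could be ordered between producer and consumer."""
            return ("not " + link.condition) in effects and step not in (
                link.producer, link.consumer)

    # Usage: protect have(key), established by pickup for unlock; a drop step
    # that deletes have(key) threatens the link and must be ordered away.
    plan = PartialPlan()
    link = CausalLink("pickup", "have(key)", "unlock")
    plan.links.add(link)
    print(plan.threatens("drop", {"not have(key)"}, link))  # True: resolve it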