
Kernel Design Using Boosting

Neural Information Processing Systems

The focus of the paper is the problem of learning kernel operators from empirical data. We cast the kernel design problem as the construction of an accurate kernel from simple (and less accurate) base kernels. We use the boosting paradigm to perform the kernel construction process. To do so, we modify the booster so as to accommodate kernel operators. We also devise an efficient weak learner for simple kernels that is based on generalized eigenvector decomposition. We demonstrate the effectiveness of our approach on synthetic data and on the USPS dataset. On the USPS dataset, the performance of the Perceptron algorithm with learned kernels is systematically better than with a fixed RBF kernel.
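As a rough illustration of the construction (building an accurate kernel as a weighted combination of simple base kernels, with a weak learner proposing the next base kernel each round), here is a minimal sketch. The alignment-style score, the fixed step size, and the RBF base kernels are illustrative assumptions; the paper's modified booster and generalized-eigenvector weak learner are more sophisticated.

```python
import numpy as np

def kernel_alignment(K, y):
    """Alignment-style score between a kernel matrix and the labels
    (an illustrative surrogate for the booster's objective)."""
    yy = np.outer(y, y)
    return float(np.sum(K * yy) / (np.linalg.norm(K) * np.linalg.norm(yy)))

def boost_kernels(base_kernels, y, rounds=10, step=1.0):
    """Greedily build K = sum_t alpha_t K_t: each round, the 'weak learner'
    proposes the base kernel whose addition most improves the score."""
    K = np.zeros_like(base_kernels[0])
    for _ in range(rounds):
        scores = [kernel_alignment(K + step * Kb, y) for Kb in base_kernels]
        K = K + step * base_kernels[int(np.argmax(scores))]
    return K

# toy problem: two clusters, three RBF base kernels of different widths
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (10, 2)), rng.normal(1, 0.3, (10, 2))])
y = np.array([-1] * 10 + [1] * 10)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
bases = [np.exp(-d2 / (2 * s * s)) for s in (0.1, 0.5, 2.0)]
K = boost_kernels(bases, y)
```

The learned combination can then be plugged into any kernel method (in the paper, a Perceptron).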


Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

Neural Information Processing Systems

Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system, which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer algebra to derive an efficient specialized algorithm for learning that model, and generates executable code implementing that algorithm. This capability is far beyond that of code collections such as Matlab toolboxes or even tools for model-independent optimization such as BUGS for Gibbs sampling: complex new algorithms can be generated without new programming, algorithms can be highly specialized and tightly crafted for the exact structure of the model and data, and efficient and commented code can be generated for different languages or systems.
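As a concrete instance of the "abstract theme made specific" idea, here is a hand-written EM algorithm for a two-component one-dimensional Gaussian mixture. It illustrates the kind of specialized algorithm such a system derives from a model specification; it is not AUTOBAYES output, and the initialization scheme is an assumption.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: alternate between
    computing posterior responsibilities (E-step) and closed-form
    parameter updates (M-step)."""
    mu = np.array([x.min(), x.max()])        # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        d = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# toy data: two well-separated clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])
w, mu, var = em_gmm_1d(x)
```

A synthesis system specializes exactly these E- and M-step formulas to whatever model the specification describes.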


Learning Sparse Multiscale Image Representations

Neural Information Processing Systems

We describe a method for learning sparse multiscale image representations using a sparse prior distribution over the basis function coefficients. The prior consists of a mixture of a Gaussian and a Dirac delta function, and thus encourages coefficients to have exact zero values. Coefficients for an image are computed by sampling from the resulting posterior distribution with a Gibbs sampler. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Denoising using the learned image model is demonstrated for some standard test images, with results that compare favorably with other denoising methods.
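The prior described here is a spike-and-slab mixture, and the Gibbs step for one coefficient has a closed form: compare the marginal evidence for "exactly zero" against "drawn from the Gaussian slab", then, if active, sample from the slab posterior. The sketch below works on a generic linear model x = As + noise with hypothetical parameter settings; it is not the paper's multiscale basis.

```python
import numpy as np

def gibbs_spike_slab(x, A, n_iter=200, sigma=0.1, tau=1.0, pi=0.2, seed=0):
    """Gibbs-sample coefficients s under x = A s + noise with a
    spike-and-slab prior: s_i = 0 w.p. (1 - pi), else N(0, tau^2)."""
    rng = np.random.default_rng(seed)
    n, k = A.shape
    s = np.zeros(k)
    for _ in range(n_iter):
        for i in range(k):
            a = A[:, i]
            r = x - A @ s + a * s[i]        # residual excluding coefficient i
            c = a @ a / sigma**2 + 1.0 / tau**2
            m = (a @ r / sigma**2) / c      # slab posterior mean
            v = 1.0 / c                     # slab posterior variance
            # Bayes factor for "active slab" vs. "exact zero"
            bf = np.sqrt(v) / tau * np.exp(0.5 * m**2 / v)
            p_active = pi * bf / (pi * bf + (1 - pi))
            if rng.random() < p_active:
                s[i] = m + np.sqrt(v) * rng.standard_normal()
            else:
                s[i] = 0.0                  # the Dirac spike: an exact zero
    return s

# toy sparse recovery problem with a random dictionary
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A /= np.linalg.norm(A, axis=0)
s_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
x = A @ s_true + 0.1 * rng.standard_normal(20)
s = gibbs_spike_slab(x, A)
```

The delta-function spike is what gives exact zeros, unlike a Laplacian prior, which merely shrinks coefficients toward zero.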


Calendar of Events

AI Magazine

Aided Design of User Interfaces. (ICKEDS 2004). "Halpern presents a masterful, complete and unified account of the many ways in which the connections between logic, probability theory and commonsensical linguistic terms can be formalized. 'Believed,' 'known,' 'default,' 'relevant,' 'independent,' and 'preferred' are given rigorous semantical and syntactical analyses, and their interrelationships explicated and exemplified." "Presents a novel thesis--that the mind is a program whose components are semantically meaningful modules--and explores it with a rich array of evidence drawn from a variety of fields."


Qualitative Spatial Reasoning Extracting and Reasoning with Spatial Aggregates

AI Magazine

Reasoning about spatial data is a key task in many applications, including geographic information systems, meteorological and fluid-flow analysis, computer-aided design, and protein structure databases. Such applications often require the identification and manipulation of qualitative spatial representations, for example, to detect whether one object will soon occlude another in a digital image or efficiently determine relationships between a proposed road and wetland regions in a geographic data set. Qualitative spatial reasoning (QSR) provides representational primitives (a spatial "vocabulary") and inference mechanisms for these tasks. This article first reviews representative work on QSR for data-poor scenarios, where the goal is to design representations that can answer qualitative queries without much numeric information. It then turns to the data-rich case, where the goal is to derive and manipulate qualitative spatial representations that efficiently and correctly abstract important spatial aspects of the underlying data for use in subsequent tasks. This article focuses on how a particular QSR system, SPATIAL AGGREGATION, can help answer spatial queries for scientific and engineering data sets. A case study application of weather analysis illustrates the effective representation and reasoning supported by both data-poor and data-rich forms of QSR.


AI in the News

AI Magazine

This installment collects recent AI coverage in the press. Among the items: a report on AI technologies such as robotics, speech recognition, and search that Business Communications Company (BCC) will release in April; a piece on neural networks, which help engineers create better jet engines and detect credit-card fraud (since 1992, neural nets have slashed such incidents, and small credit unions are required to use the software in order to qualify for debit-card insurance from the Credit Union National Association); a Korea Herald article (September 29, 2003, www.koreaherald.co.kr) asking whether the day will really come when "intelligent" robots take over mundane household chores and give people more leisure, and noting that there is no settled definition yet beyond the preliminary definition by the International Federation of Robotics; and "Brokers Will Spend Big on Anti-Money Laundering" (Wall Street & Technology), according to which firms will spend almost $700 million in the next three years on anti-money-laundering technologies.


Model-Based Programming of Fault-Aware Systems

AI Magazine

A wide range of sensor-rich, networked embedded systems are being created that must operate robustly for years in the face of novel failures by managing complex autonomic processes. These systems are being composed, for example, into vast networks of space, air, ground, and underwater vehicles. Our objective is to revolutionize the way in which we control these new artifacts by creating reactive model-based programming languages that enable everyday systems to reason intelligently and enable machines to explore other worlds. A model-based program is state and fault aware; it elevates the programming task to specifying intended state evolutions of a system. The program's executive automatically coordinates system interactions to achieve these states, entertaining known and potential failures, using models of its constituents and environment. At the executive's core is a method, called CONFLICT-DIRECTED A*, which quickly prunes promising but infeasible solutions, using a form of one-shot learning. This approach has been demonstrated on a range of systems, including the National Aeronautics and Space Administration's Deep Space One probe. Model-based programming is being generalized to hybrid discrete-continuous systems and the coordination of networks of robotic vehicles.
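The conflict-directed idea (test a best candidate, extract a conflict from its failure, and prune every other candidate containing that conflict) can be sketched as follows. This is a drastic simplification under assumed interfaces: the real CONFLICT-DIRECTED A* expands candidates best-first without ever enumerating the whole space, and the `cost` and `consistent` callbacks here are hypothetical.

```python
import heapq
from itertools import product

def conflict_directed_search(domains, cost, consistent):
    """Best-first search over full assignments that learns from failure:
    each infeasible candidate yields a conflict (a partial assignment),
    and any later candidate containing a known conflict is skipped."""
    queue = []
    for values in product(*domains.values()):
        a = dict(zip(domains.keys(), values))
        heapq.heappush(queue, (cost(a), sorted(a.items())))
    conflicts = []
    while queue:
        _, items = heapq.heappop(queue)
        a = dict(items)
        if any(all(a.get(k) == v for k, v in conf.items()) for conf in conflicts):
            continue  # candidate contains a known conflict: prune it
        ok, conflict = consistent(a)
        if ok:
            return a
        conflicts.append(conflict)
    return None

# hypothetical toy problem: x == 0 is infeasible, minimize x + y
domains = {"x": [0, 1], "y": [0, 1]}
result = conflict_directed_search(
    domains,
    cost=lambda a: a["x"] + a["y"],
    consistent=lambda a: (True, None) if a["x"] == 1 else (False, {"x": 0}),
)
```

A single failed test rules out every assignment with x = 0, which is the "one-shot learning" the abstract refers to.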


Taming Numbers and Durations in the Model Checking Integrated Planning System

Journal of Artificial Intelligence Research

The Model Checking Integrated Planning System (MIPS) is a temporal least commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail.
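The critical-path step (turning a sequential plan into a parallel schedule by starting each action as soon as its predecessors under the imposed precedence relation have finished) can be sketched as below. The sketch takes the precedence pairs as given and scans them naively in quadratic time; MIPS derives them from the actions themselves and runs in linear time.

```python
def schedule_parallel(actions, durations, conflicts):
    """Convert a sequential plan into a parallel schedule via critical-path
    analysis: each action starts as soon as every earlier action that must
    precede it (a pair in `conflicts`) has finished."""
    start = {}
    for j, a in enumerate(actions):
        t = 0.0
        for i in range(j):                       # earlier actions in the plan
            if (actions[i], a) in conflicts:     # imposed precedence i -> j
                t = max(t, start[actions[i]] + durations[actions[i]])
        start[a] = t
    return start

# hypothetical logistics plan: loads and unloads may overlap, driving may not
actions = ["load-A", "load-B", "drive", "unload-A", "unload-B"]
durations = {"load-A": 2, "load-B": 2, "drive": 5, "unload-A": 2, "unload-B": 2}
conflicts = {("load-A", "drive"), ("load-B", "drive"),
             ("drive", "unload-A"), ("drive", "unload-B")}
start = schedule_parallel(actions, durations, conflicts)
```

Here the two loads run in parallel, so the schedule finishes at time 9 instead of the sequential plan's 13.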
MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate and parameterized optimization.


The 3rd International Planning Competition: Results and Analysis

Journal of Artificial Intelligence Research

This paper reports the outcome of the third in the series of biennial international planning competitions, held in association with the International Conference on AI Planning and Scheduling (AIPS) in 2002. In addition to describing the domains, the planners and the objectives of the competition, the paper includes analysis of the results. The results are analysed from several perspectives, in order to address the questions of comparative performance between planners, comparative difficulty of domains, the degree of agreement between planners about the relative difficulty of individual problem instances and the question of how well planners scale relative to one another over increasingly difficult problems. The paper addresses these questions through statistical analysis of the raw results of the competition, in order to determine which results can be considered to be adequately supported by the data. The paper concludes with a discussion of some challenges for the future of the competition series.


VHPOP: Versatile Heuristic Partial Order Planner

Journal of Artificial Intelligence Research

VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws on the experience gained in the early-to-mid 1990s with flaw selection strategies for POCL planning, and combines this with more recent developments in domain-independent planning, such as distance-based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than was previously possible with POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.
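For reference, the classical state-based additive heuristic that VHPOP adapts: in the delete relaxation, a proposition's cost is the cheapest way to derive it, an action costs 1 plus the sum of its precondition costs, and the heuristic sums the goal costs (which makes it inadmissible). The sketch below uses a hypothetical STRIPS-style encoding; the plan-space adaptation and the action-reuse modification from the abstract are not shown.

```python
import math

def h_add(state, goal, actions):
    """Additive heuristic via fixpoint iteration over the delete relaxation.
    Each action is a (preconditions, add_effects) pair of sets."""
    cost = {p: 0.0 for p in state}      # propositions in the state are free
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if all(p in cost for p in pre):
                c = 1.0 + sum(cost[p] for p in pre)   # the additive estimate
                for q in add:
                    if c < cost.get(q, math.inf):
                        cost[q] = c
                        changed = True
    if any(g not in cost for g in goal):
        return math.inf                 # goal unreachable even when relaxed
    return sum(cost[g] for g in goal)

# hypothetical toy domain: walk to the shop, then buy milk
state = {"at-home"}
goal = {"at-shop", "have-milk"}
actions = [({"at-home"}, {"at-shop"}), ({"at-shop"}, {"have-milk"})]
h = h_add(state, goal, actions)
```

Summing goal costs overestimates when subgoals share structure, which is exactly the trade of admissibility for informativeness that makes h_add effective as search guidance.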