Bounded Finite State Controllers

Neural Information Processing Systems

We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted controller space) and policy iteration (less vulnerability to local optima).
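To make the search space concrete: a stochastic finite-state controller is a set of nodes, each with an action distribution and observation-driven node transitions, and a fixed controller's value can be computed exactly by solving a linear system. The sketch below evaluates a hypothetical two-node controller on a made-up two-state POMDP (all numbers are invented); it shows controller evaluation only, not the bounded policy iteration step itself.

```python
import numpy as np

# Toy POMDP: 2 states, 2 actions, 2 observations (all numbers illustrative).
S, A, O, N = 2, 2, 2, 2
gamma = 0.95
T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # T[a, s, s']
              [[0.5, 0.5], [0.4, 0.6]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],   # Z[a, s', o]
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                  # R[s, a]
              [0.0, 2.0]])

# Stochastic finite-state controller:
# psi[n, a] = P(a | n), eta[n, a, o, n'] = P(n' | n, a, o)
psi = np.array([[0.7, 0.3], [0.2, 0.8]])
eta = np.full((N, A, O, N), 0.5)          # uninformative node transitions

# V(n, s) satisfies the linear fixed point V = r + gamma * M V
idx = lambda n, s: n * S + s
M = np.zeros((N * S, N * S))
r = np.zeros(N * S)
for n in range(N):
    for s in range(S):
        for a in range(A):
            r[idx(n, s)] += psi[n, a] * R[s, a]
            for s2 in range(S):
                for o in range(O):
                    for n2 in range(N):
                        M[idx(n, s), idx(n2, s2)] += (
                            psi[n, a] * T[a, s, s2] * Z[a, s2, o] * eta[n, a, o, n2]
                        )
V = np.linalg.solve(np.eye(N * S) - gamma * M, r).reshape(N, S)
print(V)  # value of starting in node n when the hidden state is s
```

Bounded policy iteration would then improve one node at a time subject to the size bound, using these values.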


Parameterized Novelty Detectors for Environmental Sensor Monitoring

Neural Information Processing Systems

As part of an environmental observation and forecasting system, sensors deployed in the Columbia River Estuary (CORIE) gather information on physical dynamics and changes in estuary habitat. Of these, salinity sensors are particularly susceptible to biofouling, which gradually degrades sensor response and corrupts critical data. Automatic fault detectors have the capability to identify biofouling early and minimize data loss. Complicating the development of discriminatory classifiers is the scarcity of biofouling onset examples and the variability of the biofouling signature. To solve these problems, we take a novelty detection approach that incorporates a parameterized biofouling model. These detectors identify the occurrence of biofouling, and its onset time, as reliably as human experts. Real-time detectors installed during the summer of 2001 produced no false alarms, yet detected all episodes of sensor degradation before the field staff scheduled these sensors for cleaning. From this initial deployment through February 2003, our biofouling detectors have essentially doubled the amount of useful data coming from the CORIE sensors.
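The abstract does not specify the detector's form, but the general idea of a parameterized degradation model fit against a reference signal can be sketched. The toy below is entirely hypothetical: it models fouling as an exponential gain decay after an unknown onset time, grid-searches the onset and decay rate, and declares fouling when the fitted model explains the data far better than a no-fouling model.

```python
import numpy as np

def fit_biofouling(reference, measured, decays=np.linspace(0.0, 0.1, 21)):
    """Grid-search a toy fouling model: measured ~ reference before an
    onset time t0, and ~ reference * exp(-d * (t - t0)) after it.
    Returns (detected, onset, decay rate)."""
    t = np.arange(len(reference))
    null_err = np.sum((measured - reference) ** 2)
    best = (None, 0.0, null_err)
    for t0 in t:
        for d in decays:
            gain = np.where(t < t0, 1.0, np.exp(-d * (t - t0)))
            err = np.sum((measured - reference * gain) ** 2)
            if err < best[2]:
                best = (t0, d, err)
    t0, d, err = best
    detected = d > 0 and err < 0.5 * null_err   # crude evidence threshold
    return detected, t0, d

# Synthetic demo: fouling starts at t=60 with decay rate 0.03.
rng = np.random.default_rng(0)
t = np.arange(100)
reference = 20 + 5 * np.sin(2 * np.pi * t / 12.4)   # tidal-like salinity
measured = reference * np.where(t < 60, 1.0, np.exp(-0.03 * (t - 60)))
measured += rng.normal(0.0, 0.2, size=t.size)
detected, t0, d = fit_biofouling(reference, measured)
print(detected, t0, d)
```

The model form, thresholds, and parameters here are invented for illustration; the deployed CORIE detectors are not described at this level of detail in the abstract.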



An AI Planning-based Tool for Scheduling Satellite Nominal Operations

AI Magazine

Satellite domains are becoming a fashionable area of research within the AI community due to the complexity of the problems they pose. With the current U.S. and European focus on launching satellites for communication, broadcasting, or localization tasks, among others, the automatic control of these machines becomes an important problem. Many new techniques in both the planning and scheduling fields have been applied successfully, but much work remains before reliable autonomous architectures are achieved. The purpose of this article is to present CONSAT, a real application that plans and schedules the performance of nominal operations in four satellites during the course of a year for a commercial Spanish satellite company, HISPASAT. For this task, we have used a domain-independent AI planner that solves the planning and scheduling problems in the HISPASAT domain thanks to its ability to represent and handle continuous variables, to code functions that obtain the values of the operators' variables, and to use control rules to prune the search. We also abstract the approach in order to generalize it to other domains that need an integrated approach to planning and scheduling.
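The planner itself is not shown in the article text above, but the role of control rules in pruning a forward search can be sketched with a toy planner. The action names and the rule below are invented for illustration and are not CONSAT's actual domain model.

```python
from heapq import heappush, heappop

def plan(start, goal, actions, control_rules):
    """Tiny forward state-space planner: states are frozensets of facts,
    actions are (name, preconds, adds, dels) tuples, and control rules are
    predicates that prune a successor state before it is ever expanded --
    a toy version of rule-based search pruning."""
    frontier = [(0, start, [])]
    seen = {start}
    while frontier:
        cost, state, steps = heappop(frontier)
        if goal <= state:
            return steps
        for name, pre, add, dele in actions:
            if pre <= state:
                nxt = (state - dele) | add
                if nxt in seen or any(rule(nxt) for rule in control_rules):
                    continue
                seen.add(nxt)
                heappush(frontier, (cost + 1, nxt, steps + [name]))
    return None

# Hypothetical satellite-operations fragment: point the antenna, then transmit.
acts = [
    ("point_antenna", frozenset({"idle"}), frozenset({"pointed"}), frozenset({"idle"})),
    ("transmit", frozenset({"pointed"}), frozenset({"sent"}), frozenset()),
    ("safe_mode", frozenset({"idle"}), frozenset({"safe"}), frozenset({"idle"})),
]
# Control rule: never enter safe mode while a transmission is pending.
rules = [lambda s: "safe" in s]
result = plan(frozenset({"idle"}), frozenset({"sent"}), acts, rules)
print(result)  # ['point_antenna', 'transmit']
```

The pruning rule cuts the safe-mode branch before expansion, which is the effect control rules have on the search space.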


Formalizations of Commonsense Psychology

AI Magazine

… (Niles and Pease 2001). Considering that tremendous progress has been made in commonsense reasoning in specialized topics such as thermodynamics in physical systems (Collins and Forbus 1989), it is surprising that our best content theories of people are still struggling to get past simple notions of belief and intentionality (van der Hoek and Wooldridge 2003). However, systems that can successfully reason about people are likely to be substantially more valuable than those that reason about thermodynamics in most future applications. Content theories for reasoning about people are best characterized collectively as a theory of commonsense psychology, in contrast to those that are associated with commonsense (naïve) physics. The scope of commonsense physics, best outlined in Patrick Hayes's first and second "Naïve Physics Manifestos" (Hayes 1979, 1984), includes content theories of time, space, physical entities, and their dynamics. Commonsense psychology, in contrast, concerns all of the aspects of the way that people think they think. It should include notions of plans and goals, opportunities and threats, decisions and preferences, emotions and memories, along …

… scheduling that are robust in the face of real-world concerns like time zones, daylight savings time, and international calendar variations. Given the importance of an ontology of time across so many different commonsense … search is the generation of competency theories that have a degree of depth necessary to solve inferential problems that people are easily able to handle. Yet competency in content theories is only half of the challenge. Commonsense reasoning in AI theories will require that computers not only make deep humanlike inferences but also ensure that the scope of these inferences is as broad as humans can handle, as well. That is, in addition to competency, content theories will need adequate coverage over the full breadth of concepts that are manipulated in human-level … It is only by achieving some adequate level of coverage that we can begin to construct reasoning systems that integrate fully into real-world AI applications, where pragmatic considerations and expressive user interfaces raise the bar significantly.


The Fourteenth International Conference on Automated Planning and Scheduling (ICAPS-04)

AI Magazine

The Fourteenth International Conference on Automated Planning and Scheduling (ICAPS-04) was held in Canada in June of 2004. It covered the latest theoretical and empirical advances in planning and scheduling. The conference program consisted of tutorials, workshops, a doctoral consortium, and three days of technical paper presentations in a single plenary track, one day of which was jointly organized with the Ninth International Conference on Principles of Knowledge Representation and Reasoning. ICAPS-04 also hosted the International Planning Competition, including a classical track and a newly formed probabilistic track. This report describes the conference in more detail.


The 2004 National Conference on AI: Post-Conference Wrap-Up

AI Magazine

AAAI's Nineteenth National Conference on Artificial Intelligence (AAAI-04) filled the top floor of the San Jose Convention Center from July 25-29, 2004. The week's program was full of recent advances in many different AI research areas, as well as emerging applications for AI. Within the various topics discussed at the conference, a number of strategic domains emerged where AI is being harnessed, including counterterrorism, space exploration, robotics, the Web, health care, scientific research, education, and manufacturing.


LexRank: Graph-based Lexical Centrality as Salience in Text Summarization

Journal of Artificial Intelligence Research

We introduce a stochastic graph-based method for computing the relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank-with-threshold method outperforms the other degree-based techniques, including continuous LexRank. We also show that our approach is quite insensitive to noise in the data that may result from an imperfect topical clustering of documents.
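The core computation is standard enough to sketch: threshold the cosine-similarity matrix, normalize each row into a random-walk transition matrix, and power-iterate to the stationary distribution of the damped walk. A minimal version follows, with a made-up 4-sentence similarity matrix; the paper's actual threshold and damping settings may differ.

```python
import numpy as np

def lexrank(similarity, threshold=0.1, damping=0.15, tol=1e-8):
    """LexRank-style centrality: binarize the cosine-similarity matrix at a
    threshold, row-normalize it into a Markov chain, and take the stationary
    distribution of the damped random walk via power iteration."""
    adj = (similarity >= threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                  # keep every row well-defined
    P = adj / adj.sum(axis=1, keepdims=True)
    n = len(P)
    M = damping / n + (1 - damping) * P         # uniform jump + walk
    p = np.full(n, 1.0 / n)
    while True:
        p_next = p @ M
        if np.abs(p_next - p).max() < tol:
            return p_next
        p = p_next

# Illustrative similarity matrix for 4 "sentences" (values invented).
sim = np.array([[1.00, 0.50, 0.40, 0.05],
                [0.50, 1.00, 0.30, 0.05],
                [0.40, 0.30, 1.00, 0.15],
                [0.05, 0.05, 0.15, 1.00]])
scores = lexrank(sim)
print(scores)  # the most connected sentence (index 2) scores highest
```

Continuous LexRank, mentioned above, skips the binarization step and row-normalizes the raw similarities instead.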


Generalizing Boolean Satisfiability II: Theory

Journal of Artificial Intelligence Research

This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
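The central idea, that a single axiom together with a permutation subgroup stands in for many clauses, can be illustrated with a toy orbit computation. The representation below (integers for literals, a dict for the permutation) is my own illustration, not ZAP's actual data structures.

```python
def apply_perm(perm, clause):
    """Apply a literal permutation (dict) to a clause (frozenset of ints,
    where a negative int is a negated variable)."""
    return frozenset(perm[lit] for lit in clause)

def orbit(clause, generators):
    """All clauses reachable from one axiom under the group generated by
    the given literal permutations -- the set a single augmented clause
    can represent."""
    seen, frontier = {clause}, [clause]
    while frontier:
        c = frontier.pop()
        for g in generators:
            c2 = apply_perm(g, c)
            if c2 not in seen:
                seen.add(c2)
                frontier.append(c2)
    return seen

# Variables 1..3; the cyclic shift x1 -> x2 -> x3 -> x1 acting on literals
# (a literal permutation must map l and -l consistently).
shift = {1: 2, 2: 3, 3: 1, -1: -2, -2: -3, -3: -1}
axiom = frozenset({1, -2})            # encodes (x1 OR NOT x2)
group_rep = orbit(axiom, [shift])
print(sorted(tuple(sorted(c)) for c in group_rep))
```

One stored axiom plus the cyclic group thus represents three ground clauses; real problem symmetries (as in pigeonhole instances) yield far larger compressions.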


Existence of Multiagent Equilibria with Limited Agents

Journal of Artificial Intelligence Research

Multiagent learning is a necessary yet challenging problem as multiagent systems become more prevalent and environments become more dynamic. Much of the groundbreaking work in this area draws on notable results from game theory, in particular the concept of Nash equilibria. Learners that directly learn an equilibrium obviously rely on their existence. Learners that instead seek to play optimally with respect to the other players also depend upon equilibria, since equilibria are fixed points for learning. From another perspective, agents with limitations are real and common. These may be undesired physical limitations as well as self-imposed rational limitations, such as abstraction and approximation techniques, used to make learning tractable. This article explores the interaction of these two important concepts: equilibria and limitations in learning. We introduce the question of whether equilibria continue to exist when agents have limitations. We look at the general effects limitations can have on agent behavior and define a natural extension of equilibria that accounts for these limitations. Using this formalization, we make three major contributions: (i) a counterexample to the general existence of equilibria with limitations, (ii) sufficient conditions on limitations that preserve their existence, and (iii) three general classes of games and limitations that satisfy these conditions. We then present empirical results from a specific multiagent learning algorithm applied to a specific instance of limited agents. These results demonstrate that learning with limitations is feasible when the conditions outlined by our theoretical analysis hold.
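For concreteness, the unlimited-agent fixed point that such learners rely on can be computed directly in the 2x2 case from the usual indifference conditions. The sketch below finds the fully mixed equilibrium of matching pennies, where the unique equilibrium is uniform mixing; the article's question is what survives of this fixed point once the players' strategy spaces are restricted.

```python
import numpy as np

def mixed_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game (A: row player's
    payoffs, B: column player's payoffs), found by choosing each player's
    mixture so the opponent is indifferent between their two actions."""
    # Row player's probability p on action 0 equalizes the column payoffs:
    # p*B[0,0] + (1-p)*B[1,0] == p*B[0,1] + (1-p)*B[1,1]
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    # Column player's probability q on action 0 equalizes the row payoffs.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

# Matching pennies: zero-sum, no pure equilibrium, unique mixed one.
A = np.array([[1, -1], [-1, 1]], dtype=float)
B = -A
p, q = mixed_2x2(A, B)
print(p, q)  # 0.5 0.5
```

A "limitation" in the article's sense would constrain the feasible (p, q) region; the article shows such restricted best-response fixed points can fail to exist in general.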