
SAVVYSEARCH: A Metasearch Engine That Learns Which Search Engines to Query

AI Magazine

Search engines are among the most successful applications on the web today. So many search engines have been created that it is difficult for users to know where they are, how to use them, and what topics they best address. Metasearch engines reduce the user burden by dispatching queries to multiple search engines in parallel. The SAVVYSEARCH metasearch engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and responding to fluctuating load demands on the web. SAVVYSEARCH learns to identify which search engines are most appropriate for particular queries, reasons about resource demands, and represents an iterative parallel search strategy as a simple plan.
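
The engine-selection idea can be sketched in a few lines. The code below is an illustration only, not SAVVYSEARCH's actual meta-index or update rule: a hypothetical `EngineRanker` keeps a per-term score for each search engine, rewards engines whose results users follow, penalizes empty result sets, and dispatches each query to the top-ranked engines.

```python
from collections import defaultdict

class EngineRanker:
    """Toy meta-index: learns which engines work well for which query terms.

    Illustrative only -- not the actual SAVVYSEARCH algorithm.
    """

    def __init__(self, engines):
        # score[term][engine] starts neutral and is nudged by feedback
        self.engines = list(engines)
        self.score = defaultdict(lambda: defaultdict(float))

    def select(self, query, k=2):
        """Rank engines by summed per-term scores; dispatch to the top k."""
        terms = query.lower().split()
        ranked = sorted(
            self.engines,
            key=lambda e: sum(self.score[t][e] for t in terms),
            reverse=True,
        )
        return ranked[:k]

    def update(self, query, engine, useful_results, no_results):
        """Reward engines whose results were followed; penalize empty returns."""
        for term in query.lower().split():
            self.score[term][engine] += useful_results
            if no_results:
                self.score[term][engine] -= 1.0

ranker = EngineRanker(["engine_a", "engine_b", "engine_c"])
ranker.update("vintage guitars", "engine_b", useful_results=3, no_results=False)
ranker.update("vintage guitars", "engine_a", useful_results=0, no_results=True)
print(ranker.select("vintage guitars"))   # engine_b ranks first
```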


On the Other Hand

AI Magazine

Date: 4/1/2002
WASA -- World Aeronautics & Space Administration
Executive Summary of Committee Report on Disaster Investigation, Incident #362

Analysis of records downloaded from the 2001 Jupiter Orbital Black Parallelopiped Investigation Mission indicates that the basic source of failure was excessive emotional stress in the HAL computer, leading to a previously unknown condition now called Computational Paranoia. This in turn was an unforeseen side effect of the design of the HAL-9000 series. HAL was given a genuine personality, enabling it to act as an onboard psychiatric advisor, colleague, and confidant to the human crew members. As a consequence, much of HAL's perceptual software was devoted to reading subtleties of facial expression, unconscious intonation stresses, and other emotional signals. Its performance at empathy and emotional insight was at least two orders of magnitude (as measured by the Kraft-Ebbing-Rachmaninoff method) better than that of the rest of the crew.


Worldwide Perspectives and Trends in Expert Systems: An Analysis Based on the Three World Congresses on Expert Systems

AI Magazine

Some people believe that the expert system field is dead, yet others believe it is alive and well. To gain better insight into these opposing views, the first three world congresses on expert systems (which typically attract representatives from some 45-50 countries) are used to gauge the health of the global expert system field in terms of applied technologies, applications, and management. This article highlights some of these findings.


An Introduction to this Special Issue of AI Magazine

AI Magazine

A description of articles about AI systems that have the World Wide Web as their domain.


Artificial Intelligence: What Works and What Doesn't?

AI Magazine

AI has been well supported by government research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn't work as advertised? What isn't likely to work? What isn't affordable? This article holds a mirror up to the community, both to provide feedback and to stimulate more self-assessment. The significant accomplishments and strengths of the field are highlighted. The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field's ability to produce reusable and interoperable components.


Flaw Selection Strategies for Partial-Order Planning

Journal of Artificial Intelligence Research

Several recent studies have compared the relative efficiency of alternative flaw selection strategies for partial-order causal link (POCL) planning. We review this literature, and present new experimental results that generalize the earlier work and explain some of the discrepancies in it. In particular, we describe the Least-Cost Flaw Repair (LCFR) strategy developed and analyzed by Joslin and Pollack (1994), and compare it with other strategies, including Gerevini and Schubert's (1996) ZLIFO strategy. LCFR and ZLIFO make very different, and apparently conflicting, claims about the most effective way to reduce search-space size in POCL planning. We resolve this conflict, arguing that much of the benefit that Gerevini and Schubert ascribe to the LIFO component of their ZLIFO strategy is better attributed to other causes. We show that for many problems, a strategy that combines least-cost flaw selection with the delay of separable threats will be effective in reducing search-space size, and will do so without excessive computational overhead. Although such a strategy thus provides a good default, we also show that certain domain characteristics may reduce its effectiveness.
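
As a concrete illustration of the selection rule the abstract describes (not Joslin and Pollack's implementation), the sketch below assumes a toy `Flaw` record carrying a repair count, picks the cheapest flaw, and defers separable threats until nothing else remains.

```python
from dataclasses import dataclass

@dataclass
class Flaw:
    kind: str          # "open_condition" or "threat"
    separable: bool    # threat resolvable by separation (variable constraints)
    repair_count: int  # number of ways the flaw can currently be repaired

def select_flaw(flaws):
    """Pick the flaw with the fewest repair options (least-cost flaw repair),
    deferring separable threats until no other flaws remain."""
    active = [f for f in flaws if not (f.kind == "threat" and f.separable)]
    pool = active if active else flaws
    return min(pool, key=lambda f: f.repair_count) if pool else None

flaws = [
    Flaw("open_condition", False, 3),
    Flaw("threat", True, 1),       # separable threat: delayed
    Flaw("open_condition", False, 2),
]
print(select_flaw(flaws))  # the open condition with 2 repair options
```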


A Complete Classification of Tractability in RCC-5

Journal of Artificial Intelligence Research

We investigate the computational properties of the spatial algebra RCC-5, which is a restricted version of the RCC framework for spatial reasoning. The satisfiability problem for RCC-5 is known to be NP-complete, but not much is known about its approximately four billion subclasses. We provide a complete classification of the satisfiability problem for all these subclasses as either polynomial or NP-complete. In the process, we identify all maximal tractable subalgebras, of which there are four in total.
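
For readers unfamiliar with RCC-5, the sketch below spells out the five base relations (EQ, DR, PO, PP, PPI) and a standard path-consistency pass over networks whose constraints are sets of base relations; the composition table is the usual one for RCC-5, and an emptied constraint signals inconsistency. Path consistency alone decides satisfiability only for some of the tractable subclasses, which is part of what makes the classification interesting.

```python
from itertools import product

BASE = ("EQ", "DR", "PO", "PP", "PPI")   # the five RCC-5 base relations
ALL = frozenset(BASE)
CONVERSE = {"EQ": "EQ", "DR": "DR", "PO": "PO", "PP": "PPI", "PPI": "PP"}

# Composition table for the non-EQ base relations (EQ composes trivially).
COMP = {
    ("DR", "DR"): ALL,                      ("DR", "PO"): {"DR", "PO", "PP"},
    ("DR", "PP"): {"DR", "PO", "PP"},       ("DR", "PPI"): {"DR"},
    ("PO", "DR"): {"DR", "PO", "PPI"},      ("PO", "PO"): ALL,
    ("PO", "PP"): {"PO", "PP"},             ("PO", "PPI"): {"DR", "PO", "PPI"},
    ("PP", "DR"): {"DR"},                   ("PP", "PO"): {"DR", "PO", "PP"},
    ("PP", "PP"): {"PP"},                   ("PP", "PPI"): ALL,
    ("PPI", "DR"): {"DR", "PO", "PPI"},     ("PPI", "PO"): {"PO", "PPI"},
    ("PPI", "PP"): {"PO", "PP", "PPI", "EQ"}, ("PPI", "PPI"): {"PPI"},
}

def compose(R1, R2):
    out = set()
    for r1, r2 in product(R1, R2):
        if r1 == "EQ":
            out.add(r2)
        elif r2 == "EQ":
            out.add(r1)
        else:
            out |= COMP[(r1, r2)]
    return out

def path_consistency(n, constraints):
    """Enforce path consistency on an n-variable RCC-5 network.

    constraints: dict mapping (i, j) with i < j to a set of base relations.
    Returns the refined network, or None if some constraint becomes empty.
    """
    c = {}
    for i in range(n):
        for j in range(n):
            if i == j:
                c[(i, j)] = {"EQ"}
            elif i < j:
                c[(i, j)] = set(constraints.get((i, j), ALL))
            else:
                c[(i, j)] = {CONVERSE[r] for r in c[(j, i)]}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            refined = c[(i, j)] & compose(c[(i, k)], c[(k, j)])
            if refined != c[(i, j)]:
                if not refined:
                    return None        # empty relation: inconsistent
                c[(i, j)] = refined
                c[(j, i)] = {CONVERSE[r] for r in refined}
                changed = True
    return c

# x proper part of y, y proper part of z forces x proper part of z:
net = path_consistency(3, {(0, 1): {"PP"}, (1, 2): {"PP"}})
print(net[(0, 2)])   # {'PP'}
```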


Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

Journal of Artificial Intelligence Research

An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
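
The genetic search over topologies can be caricatured as follows. This is a generic sketch, not REGENT's actual crossover and mutation operators (which are specialized to knowledge-based networks); the bit-matrix encoding and the `fitness` stand-in are assumptions of the example.

```python
import random

def mutate(topology):
    """Flip one connection bit in a hidden unit's fan-in (toy mutation)."""
    topo = [row[:] for row in topology]
    i = random.randrange(len(topo))
    j = random.randrange(len(topo[i]))
    topo[i][j] ^= 1
    return topo

def crossover(a, b):
    """Swap hidden units (rows) between two parent topologies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed_topology, fitness, generations=50, pop_size=20):
    """Genetic search for a better topology, seeded by a knowledge-based network."""
    pop = [mutate(seed_topology) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness (a real system trains each network and scores held-out accuracy):
seed = [[1, 1, 0], [0, 1, 1]]            # adjacency bits: hidden unit -> inputs
best = evolve(seed, fitness=lambda t: sum(map(sum, t)))
```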


Query DAGs: A Practical Paradigm for Implementing Belief-Network Inference

Journal of Artificial Intelligence Research

We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks (we show how they can be generated using clustering and conditioning algorithms). The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The intended value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on different software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm. Interestingly enough, Q-DAGs were found to serve other purposes: simple techniques for reducing Q-DAGs tend to subsume relatively complex optimization techniques for belief-network inference, such as network-pruning and computation-caching.
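
A minimal sketch of Q-DAG evaluation, with node encodings assumed for illustration (the paper's own data structures differ): numeric constants, evidence indicators that switch to 1 when the observed value matches, and addition/multiplication nodes. Memoization ensures each node is evaluated once, so total work is linear in the size of the Q-DAG.

```python
def evaluate(node, evidence, memo=None):
    """Evaluate a Q-DAG with memoization (linear in Q-DAG size).

    Node encodings (assumed for this sketch):
      ("num", c)            -- a numeric constant
      ("evid", var, value)  -- evidence indicator: 1.0 iff evidence[var] == value
      ("+", children) / ("*", children) -- arithmetic operation nodes
    """
    if memo is None:
        memo = {}
    key = id(node)
    if key in memo:
        return memo[key]
    tag = node[0]
    if tag == "num":
        val = node[1]
    elif tag == "evid":
        _, var, value = node
        val = 1.0 if evidence.get(var) == value else 0.0
    elif tag == "+":
        val = sum(evaluate(ch, evidence, memo) for ch in node[1])
    elif tag == "*":
        val = 1.0
        for ch in node[1]:
            val *= evaluate(ch, evidence, memo)
    else:
        raise ValueError(f"unknown node tag: {tag}")
    memo[key] = val
    return val

# A one-variable network with prior (0.3, 0.7); setting evidence selects a term:
qdag = ("+", [("*", [("evid", "A", 0), ("num", 0.3)]),
              ("*", [("evid", "A", 1), ("num", 0.7)])])
print(evaluate(qdag, {"A": 1}))   # 0.7
```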


Finite size scaling of the Bayesian perceptron

arXiv.org Artificial Intelligence

We study numerically the properties of the Bayesian perceptron through gradient descent on the optimal cost function. The theoretical distribution of stabilities is deduced. It predicts that the optimal generalizer lies close to the boundary of the space of (error-free) solutions. The numerical simulations are in good agreement with the theoretical distribution. The extrapolation of the generalization error to infinite input space size agrees with the theoretical results. Finite-size corrections are negative and exhibit two different scaling regimes, depending on the training set size. The variance of the generalization error vanishes for $N \rightarrow \infty$, confirming the property of self-averaging.
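
The experimental procedure can be sketched generically: train a student perceptron by gradient descent on $p = \alpha N$ examples labeled by a random teacher, measure the generalization error from the teacher-student overlap ($\epsilon = \arccos(R)/\pi$), and repeat over several $N$. The cost function below is a generic smooth surrogate, not the paper's optimal Bayesian cost; it only illustrates the simulation loop.

```python
import numpy as np

def gen_error(student, teacher):
    """Generalization error of a perceptron pair: eps = arccos(overlap) / pi."""
    r = student @ teacher / (np.linalg.norm(student) * np.linalg.norm(teacher))
    return np.arccos(np.clip(r, -1.0, 1.0)) / np.pi

def train(N, alpha, lr=0.5, epochs=300, rng=None):
    """Gradient descent on a smooth margin cost for p = alpha*N teacher-labeled
    examples. NOT the paper's optimal Bayesian cost function -- a stand-in."""
    rng = rng or np.random.default_rng(0)
    p = int(alpha * N)
    teacher = rng.standard_normal(N)
    X = rng.standard_normal((p, N)) / np.sqrt(N)
    y = np.sign(X @ teacher)
    w = rng.standard_normal(N)
    for _ in range(epochs):
        margins = y * (X @ w)
        # gradient of sum_i exp(-y_i w.x_i), averaged over the training set
        grad = (-(y * np.exp(-margins))[:, None] * X).mean(axis=0)
        w -= lr * grad
    return gen_error(w, teacher)

# Crude finite-size scan: average over seeds at fixed alpha for several N.
for N in (50, 100, 200):
    eps = np.mean([train(N, alpha=2.0, rng=np.random.default_rng(s))
                   for s in range(5)])
    print(N, round(eps, 3))
```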