
The Role of Experimentation in Artificial Intelligence

Classics

Intelligence is a complex, natural phenomenon exhibited by humans and many other living things, without sharply defined boundaries between intelligent and unintelligent behaviour. Artificial intelligence focuses on the phenomenon of intelligent behaviour, in humans or machines. Experimentation with computer programs allows us to manipulate their design and intervene in the environmental conditions in ways that are not possible with humans. Thus, experimentation can help us to understand what principles govern intelligent action and what mechanisms are sufficient for computers to replicate intelligent behaviours. Phil. Trans. R. Soc. Lond. A, 1994, 349, 1689.


Machine Learning, Neural and Statistical Classification

Classics

This book (originally published in 1994 by Ellis Horwood) is now out of print. The copyright now resides with the editors, who have decided to make the material freely available on the web. The book is based on the EC (ESPRIT) project StatLog, which compared and evaluated a range of classification techniques, with an assessment of their merits, disadvantages and range of application. This integrated volume provides a concise introduction to each method and reviews comparative trials in large-scale commercial and industrial problems. It makes accessible to a wide range of workers the complex issue of classification as approached through machine learning, statistics and neural networks, encouraging cross-fertilization between these disciplines.


The coming technological singularity: How to survive in the post-human era

Classics

National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Program, 1993. Into the Era of Cyberspace: "Our robots precede us with infinite diversity, exploring the universe, delighting in complexity. A matrix of neurons, we create our own reality; of carbon and of silicon, we evolve toward what we chose to be." The symposium "Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace" was held at the Holiday Inn in Westlake, Ohio on March 30-31, 1993, sponsored by the NASA Lewis Research Center's Aerospace Technology Directorate under the auspices of the NASA Office of Aeronautics, Exploration and Technology. Contributors included Carol Stoker of the Telepresence for Planetary Exploration Project at NASA Ames Research Center, a leading researcher in both telerobotics and Mars exploration, and Marc G. Millis of the NASA Lewis Research Center, Cleveland, Ohio. Technologies that exist today were once just visions in the minds of their creators.


Machine discovery of effective admissible heuristics

Classics

Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest-path solutions in search algorithms such as A*, and they guarantee less expensively produced, but boundedly longer, solutions in search algorithms such as dynamic weighting. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, called abstractions, that are guaranteed to generate only admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.
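The idea that an abstraction yields an admissible heuristic can be illustrated with a minimal sketch (not the paper's Absolver II system): on a grid with walls, dropping the walls is an abstraction whose exact solution cost is the Manhattan distance, which therefore can never overestimate the true cost.

```python
import heapq

def manhattan(a, b):
    # Exact solution cost of the abstract (wall-free) problem;
    # admissible for the original problem by construction.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal, walls, width, height):
    # Standard A* on a 4-connected grid; returns the shortest path cost.
    frontier = [(manhattan(start, goal), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        if g > best.get(pos, float("inf")):
            continue  # stale queue entry
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls):
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + manhattan(nxt, goal), ng, nxt))
    return None

walls = {(1, 0), (1, 1)}          # a barrier that forces a detour
cost = astar((0, 0), (2, 0), walls, 3, 3)
print(cost)                        # 6: true cost >= manhattan estimate of 2
```

Because the heuristic never overestimates, A* is guaranteed to return this shortest path; an inadmissible heuristic would carry no such guarantee.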


Prioritized sweeping—Reinforcement learning with less data and less time

Classics

We present a new algorithm, prioritized sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as temporal differencing and Q-learning have real-time performance. Prioritized sweeping uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare prioritized sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems.
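The core mechanism can be sketched in a deliberately simplified deterministic form (illustrative only; the paper's algorithm handles stochastic transitions and learns the model online): when a state's value changes, its predecessors are pushed onto a priority queue keyed by Bellman error, so backups are spent where they matter most.

```python
import heapq

def prioritized_sweeping(transitions, rewards, gamma=0.9, theta=1e-6, sweeps=1000):
    """Deterministic sketch: transitions maps state -> list of successor states,
    rewards maps (state, successor) -> immediate reward."""
    V = {s: 0.0 for s in transitions}
    preds = {s: set() for s in transitions}
    for s, succs in transitions.items():
        for s2 in succs:
            preds[s2].add(s)

    def backup(s):
        # One-step Bellman backup; terminal states (no successors) stay at 0.
        if not transitions[s]:
            return 0.0
        return max(rewards.get((s, s2), 0.0) + gamma * V[s2]
                   for s2 in transitions[s])

    # Seed the queue with every state's current Bellman error (negated: max-heap).
    pq = [(-abs(backup(s) - V[s]), s) for s in transitions]
    heapq.heapify(pq)
    for _ in range(sweeps):
        if not pq:
            break
        _, s = heapq.heappop(pq)
        new = backup(s)
        if abs(new - V[s]) > theta:
            V[s] = new
            # A changed value may create error at the predecessors: requeue them.
            for p in preds[s]:
                heapq.heappush(pq, (-abs(backup(p) - V[p]), p))
    return V

V = prioritized_sweeping({'a': ['b'], 'b': ['goal'], 'goal': []},
                         {('b', 'goal'): 1.0})
print(V)  # reward propagates backward: V['b'] ≈ 1.0, V['a'] ≈ 0.9
```

The priority queue is what distinguishes this from a uniform value-iteration sweep: states far from any recent change are never touched.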



A SIMD approach to parallel heuristic search

Classics

The design of parallel search algorithms with limited memory is of obvious interest. This paper presents an efficient SIMD parallel algorithm, called IDPS (for iterative-deepening parallel search). Under the first scheme, an unnormalized average efficiency of approximately was obtained for 4K, 8K, and 16K processors. Under the second scheme, unnormalized average efficiencies of 0.92 and 0.76, and normalized average efficiencies of 0.70 and 0.63, were obtained for 8K and 16K processors, respectively.
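As a point of reference for these figures, the textbook definitions of speedup and efficiency can be sketched as follows (an assumption here; the paper's "normalized" versus "unnormalized" variants are its own refinements of this baseline):

```python
def speedup(serial_time, parallel_time):
    # How many times faster the parallel run is than the serial run.
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, processors):
    # Fraction of ideal linear speedup actually achieved.
    return speedup(serial_time, parallel_time) / processors

# An efficiency of 0.92 on 8K (8192) processors corresponds to a
# speedup of about 0.92 * 8192 ≈ 7537x over the serial algorithm.
print(round(efficiency(1.0, 1.0 / (0.92 * 8192), 8192), 2))  # 0.92
```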



Deduction: Automated Logic

Classics

In this paper, we describe KoMeT, a theorem prover for full first order logic. KoMeT is based on the connection method. Our main goal is to develop an adequate proof procedure by integrating a variety of different proof techniques.


Automatically constructing a dictionary for information extraction tasks

Classics

Knowledge-based natural language processing (NLP) systems have demonstrated strong performance for information extraction tasks in limited domains [Lehnert and Sundheim, 1991; MUC-4 Proceedings, 1992]. Given a training set from the MUC-4 corpus, AutoSlog, which automatically constructs domain-specific dictionaries of extraction patterns, created a dictionary for the domain of terrorist events that achieved 98% of the performance of a hand-crafted dictionary on 2 blind test sets. The UMass/MUC-4 system [Lehnert et al., 1992a] used 2 dictionaries: a part-of-speech lexicon containing 5436 lexical definitions, including semantic features for domain-specific words, and a dictionary of 389 concept node definitions for the domain of terrorist event descriptions. The concept node dictionary was manually constructed by 2 graduate students with extensive experience with CIRCUS, and we estimate that it required approximately 1500 person-hours of effort to build.
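The flavor of a concept node dictionary can be suggested with a toy sketch (illustrative only: AutoSlog's real concept nodes are built from CIRCUS parses, not regular expressions). Each entry pairs a trigger pattern anchored on a key word with the slot it fills:

```python
import re

# Hypothetical miniature "concept node dictionary": each entry is a slot name
# plus a trigger pattern anchored on the word "bombed", extracting the filler.
CONCEPT_NODES = [
    ("target", re.compile(r"(\w[\w\s]*?) was bombed")),       # passive subject
    ("perpetrator", re.compile(r"bombed by (\w[\w\s]*)")),    # agent of "by"
]

def extract(sentence):
    """Apply every concept node to a sentence; return (slot, filler) facts."""
    facts = []
    for slot, pattern in CONCEPT_NODES:
        m = pattern.search(sentence)
        if m:
            facts.append((slot, m.group(1).strip()))
    return facts

print(extract("The embassy was bombed by terrorists"))
# [('target', 'The embassy'), ('perpetrator', 'terrorists')]
```

Even this toy version makes the scaling problem concrete: each slot of each event type needs its own trigger patterns, which is why building 389 concept node definitions by hand consumed an estimated 1500 person-hours.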