Knowledge-Based Systems Research and Applications in Japan, 1992

AI Magazine

This article summarizes the findings of a 1992 study of knowledge-based systems research and applications in Japan. Representatives of universities and businesses were chosen by the Japan Technology Evaluation Center to investigate the state of the technology in Japan relative to the United States. The panel's report focused on applications, tools, research and development in universities and industry, and major national projects.


Applied AI News

AI Magazine

A simulated ... (Houston, Tex.) has selected ... Telepresence technology allows scientists ...

The Consolidated Communications Facility's Element Manager will allow data communications system operators to remotely configure, control, and monitor the operation of the front-end processor, providing simultaneous support for multiple manned space flight missions.

... from the Advanced Technology Program at the National Institute of Standards and Technology. The grant will support Kurzweil AI's development of a spoken-language interface capable of controlling PC software applications through natural language instruction in combination with a ...

Chevron has signed a strategic alliance agreement with Gensym (Cambridge, Mass.) to use Gensym's G2 real-time expert system development tool. Chevron installations are using G2 to intelligently monitor energy management and process simulation in conjunction with other systems.

Logica Cambridge (Cambridge, England) is developing a virtual reality application to improve presentation of data for air traffic controllers. ... see the heights of different aircraft, rather than just the altitudes displayed numerically.

Developers at Georgia Tech (Atlanta, Ga.) have designed a neural network modeling, control, and diagnostic ... Linked to sensors and other data sources on the factory floor, the neural ...

AT&T Universal Card Services (Jacksonville, Fla.) has signed a multiyear agreement with HNC (San Diego, Calif.) ... Falcon uses neural network technology to learn and identify unusual transaction patterns ...


PI-in-a-Box: A Knowledge-Based System for Space Science Experimentation

AI Magazine

The principal investigator (PI)-IN-A-BOX knowledge-based system helps astronauts perform science experiments in space. These experiments are typically costly to devise and build and often are difficult to perform. Further, the space laboratory environment is unique, ever-changing, hectic, and therefore stressful. The environment requires quick, correct reactions to events over a wide range of experiments and disciplines, including ones distant from an astronaut's main science specialty. This environment suggests the use of advanced techniques for data collection, analysis, and decision making to maximize the value of the research performed. PI-IN-A-BOX aids astronauts with quick-look data collection, reduction, and analysis as well as equipment diagnosis and troubleshooting, procedural reminders, and suggestions for high-value departures from the preplanned experiment protocol. The astronauts have direct access to the system, which is hosted on a portable computer in the Spacelab module. The system is in use on the ground for mission training and was used in flight during the October 1993 Space Life Sciences 2 (SLS-2) shuttle mission.


Long-Term Effects of Secondary Sensing

AI Magazine

To integrate robotics into society, it is first necessary to measure and analyze current societal responses to areas within robotics. This article is the second in a continuing series of reports on the societal effects of various aspects of robotics. In my previous article, I discussed the problems of sensor abuse and outlined a program of treatment. However, despite the wide dissemination of that article, there are still numerous empty beds at the Susan Calvin Clinic for the Prevention of Sensor Abuse. Sensor abuse continues unabated despite strong evidence that there is a better way. In this article, I explore the age-old question: Why does the robotics community look down on efficient sensing systems?


Bias-Driven Revision of Logical Domain Theories

Journal of Artificial Intelligence Research

The theory revision problem is the problem of how best to revise a deficient domain theory using information contained in examples that expose inaccuracies. In this paper, we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the "flow" of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair flawed elements of the theory. PTR is proved to converge to a theory that correctly classifies all examples and is shown experimentally to be fast and accurate even for deep theories.
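
As a rough illustration of PTR's central idea, the Python sketch below (my own construction, not the authors' code) attaches a weight to each clause of a toy propositional theory and computes the probability that a proposition is derivable for a given example, assuming clause firings are independent. The resulting numbers play the role of proof "flow": they show which clause or literal blocks or enables a derivation. The theory, weights, and atom names are invented for illustration.

```python
# Sketch (not the authors' code): numerically tracking proof "flow"
# through a weighted propositional theory. Each clause for a head
# proposition is (weight, body); literals prefixed with "~" are negated.
# Probabilities combine across clauses assuming independent firings.

def prob_proved(prop, theory, example, memo=None):
    """Probability that `prop` is derivable, given observable atoms."""
    if memo is None:
        memo = {}
    if prop in example:                    # observable atom with known value
        return 1.0 if example[prop] else 0.0
    if prop in memo:
        return memo[prop]
    memo[prop] = 0.0                       # guard against cyclic theories
    p_none = 1.0                           # P(no clause for prop fires)
    for weight, body in theory.get(prop, []):
        p_body = weight
        for lit in body:
            p = prob_proved(lit.lstrip("~"), theory, example, memo)
            p_body *= (1.0 - p) if lit.startswith("~") else p
        p_none *= 1.0 - p_body
    memo[prop] = 1.0 - p_none
    return memo[prop]

# Invented toy theory in the style of the classic "cup" domain.
theory = {
    "cup":          [(0.95, ["liftable", "holds_liquid"])],
    "liftable":     [(0.90, ["light", "has_handle"])],
    "holds_liquid": [(0.90, ["upward_concave"])],
}
example = {"light": True, "has_handle": False, "upward_concave": True}

# Zero flow into "cup": the has_handle literal blocks the derivation,
# flagging that part of the theory for repair if "cup" was expected.
print(prob_proved("cup", theory, example))
```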


Improving Performance in Neural Networks Using a Boosting Algorithm

Neural Information Processing Systems

A boosting algorithm converts a learning machine with an error rate of less than 50% into one with an arbitrarily low error rate. However, the algorithm discussed here depends on having a large supply of independent training samples. We show how to circumvent this problem and generate an ensemble of learning machines whose performance in optical character recognition problems is dramatically improved over that of a single network. We report the effect of boosting on four databases (all handwritten) consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service (USPS) and the following from the National Institute of Standards and Technology (NIST): 220,000 digits, 45,000 uppercase alphas, and 45,000 lowercase alphas. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected.
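
The sketch below illustrates one standard boosting-by-filtering arrangement of this kind: a second learner trains on a filtered distribution on which the first is at chance, a third trains on cases where the first two disagree, and the three take a majority vote. It uses small scikit-learn MLPs on synthetic binary data in place of character-recognition networks; the chunk sizes and network settings are invented for the toy setting and are not the authors' configuration.

```python
# Sketch of a three-learner boosting-by-filtering ensemble with a majority
# vote, using small scikit-learn MLPs on synthetic binary data; the data,
# chunk sizes, and network settings are invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)

def make_net(seed):
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=seed)

# First network trains on the first chunk of the sample supply.
net1 = make_net(0).fit(X[:2000], y[:2000])

# Filter the second chunk: a 50/50 mix of examples net1 gets right and
# wrong, so net2 trains on a distribution where net1 is at chance.
X2, y2 = X[2000:4000], y[2000:4000]
wrong = net1.predict(X2) != y2
n = min(wrong.sum(), (~wrong).sum())
idx = np.concatenate([np.where(wrong)[0][:n], np.where(~wrong)[0][:n]])
net2 = make_net(1).fit(X2[idx], y2[idx])

# Third network trains only where the first two disagree.
X3, y3 = X[4000:], y[4000:]
disagree = net1.predict(X3) != net2.predict(X3)
if disagree.sum() < 50:            # guard for this toy setting
    disagree[:] = True
net3 = make_net(2).fit(X3[disagree], y3[disagree])

def ensemble_predict(X):
    votes = np.stack([m.predict(X) for m in (net1, net2, net3)])
    return (votes.sum(axis=0) >= 2).astype(int)    # majority vote

print("ensemble accuracy:", (ensemble_predict(X) == y).mean())
```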


Word Space

Neural Information Processing Systems

Representations for semantic information about words are necessary for many applications of neural networks in natural language processing. This paper describes an efficient, corpus-based method for inducing distributed semantic representations for a large number of words (50,000) from lexical co-occurrence statistics by means of a large-scale linear regression. The representations are successfully applied to word sense disambiguation using a nearest neighbor method.
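
As a toy illustration of inducing word vectors from co-occurrence statistics, the sketch below counts co-occurrences within a small window and compares the resulting vectors by cosine similarity. One substitution to note: the paper compresses the statistics with a large-scale linear regression, whereas this sketch uses a truncated SVD as a common stand-in; the corpus and window size are invented.

```python
# Toy illustration of inducing word vectors from co-occurrence counts.
# The paper compresses the statistics with a large-scale linear
# regression; the truncated SVD below is a common stand-in, and the
# corpus and window size are invented.
import numpy as np

corpus = ["the bank approved the loan",
          "the river bank was muddy",
          "the loan had high interest",
          "the muddy river flooded"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
ix = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                C[ix[w], ix[sent[j]]] += 1

# Dense 3-dimensional vectors from a truncated SVD of the counts.
U, S, _ = np.linalg.svd(C)
vectors = U[:, :3] * S[:3]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Nearest neighbors of "bank" (the word itself ranks first); with a
# real corpus, neighbors reflect the contexts the word occurs in.
sims = sorted(vocab, key=lambda w: -cosine(vectors[ix["bank"]], vectors[ix[w]]))
print(sims[:5])
```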


Analogy-- Watershed or Waterloo? Structural alignment and the development of connectionist models of analogy

Neural Information Processing Systems

Neural network models have been criticized for their inability to make use of compositional representations. In this paper, we describe a series of psychological phenomena that demonstrate the role of structured representations in cognition. These findings suggest that people compare relational representations via a process of structural alignment. This process will have to be captured by any model of cognition, symbolic or subsymbolic.
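
For readers unfamiliar with structural alignment, the toy sketch below scores mappings between the objects of two relational descriptions by counting the relations they place into correspondence, preferring the structurally consistent mapping. The representation and scoring are a drastic simplification for illustration, not the authors' model.

```python
# Drastically simplified sketch of structural alignment: score object
# mappings between two relational descriptions by how many relations
# they place into correspondence. Representations are invented.
from itertools import permutations

# Each description is a set of (predicate, arg1, arg2) tuples.
base   = [("revolves", "planet", "sun"), ("heavier", "sun", "planet")]
target = [("revolves", "electron", "nucleus"),
          ("heavier", "nucleus", "electron")]

def alignment_score(mapping, base, target):
    """Number of base relations preserved in target under the mapping."""
    mapped = {(p, mapping[a], mapping[b]) for (p, a, b) in base}
    return len(mapped & set(target))

base_objs = sorted({o for (_, a, b) in base for o in (a, b)})
targ_objs = sorted({o for (_, a, b) in target for o in (a, b)})

# Exhaustive search over one-to-one object mappings (fine at toy sizes).
best = max((dict(zip(base_objs, perm)) for perm in permutations(targ_objs)),
           key=lambda m: alignment_score(m, base, target))
print(best)   # the consistent mapping: planet->electron, sun->nucleus
```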

