If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Department of Electrical Engineering
The University of Maryland
College Park, MD 20742

Abstract

Recent physiological research has shown that synchronization of oscillatory responses in striate cortex may code for relationships between visual features of objects. A VLSI circuit has been designed to provide rapid phase-locking synchronization of multiple oscillators to allow for further exploration of this neural mechanism. By exploiting the intrinsic random transistor mismatch of devices operated in subthreshold, large groups of phase-locked oscillators can be readily partitioned into smaller phase-locked groups. A multiple target tracker for binary images is described utilizing this phase-locking architecture. A VLSI chip has been fabricated and tested to verify the architecture.
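The phase-locking mechanism that the abstract realizes in hardware can be illustrated in software with a Kuramoto-style model (an illustrative assumption here, not the paper's circuit): oscillators with randomly mismatched natural frequencies, standing in for subthreshold transistor mismatch, pull one another toward a common phase.

```python
import numpy as np

def simulate(n=8, coupling=2.0, steps=2000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Random natural frequencies mimic transistor mismatch in subthreshold VLSI.
    omega = 1.0 + 0.05 * rng.standard_normal(n)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        # Each oscillator is pulled toward the phases of the others (Kuramoto coupling).
        dtheta = omega + (coupling / n) * np.sum(
            np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = (theta + dt * dtheta) % (2 * np.pi)
    return theta

theta = simulate()
# Order parameter r close to 1 indicates the group has phase-locked.
r = abs(np.mean(np.exp(1j * theta)))
```

With larger frequency mismatch or weaker coupling the population splits into separately locked subgroups, echoing the partitioning into smaller phase-locked groups that the abstract describes.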
Urs A. Müller, Michael Kocheisen, and Anton Gunzinger
Electronics Laboratory, Swiss Federal Institute of Technology
CH-8092 Zurich, Switzerland

Abstract

The performance requirements of experimental research on artificial neural nets often exceed the capability of workstations and PCs by a large margin. But speed is not the only requirement: flexibility and implementation time for new algorithms are usually of equal importance. This paper describes the simulation of neural nets on the MUSIC parallel supercomputer, a system that strikes a good balance between these three issues and has therefore made possible many research projects that were unthinkable before. The system should be flexible and simple to program, and its realization time should be short enough that it is not obsolete by the time it is finished.
Smirnakis
Lyman Laboratory of Physics
Harvard University
Cambridge, MA 02138

Lei Xu*
Dept. of Computer Science
HSH ENG BLDG, Room 1006
The Chinese University of Hong Kong
Shatin, NT, Hong Kong

Abstract

Recent work by Becker and Hinton (Becker and Hinton, 1992) shows a promising mechanism, based on maximizing mutual information under an assumption of spatial coherence, by which a system can self-organize to learn visual abilities such as binocular stereo. We introduce a more general criterion, based on Bayesian probability theory, and thereby demonstrate a connection to Bayesian theories of visual perception and to other organization principles for early vision (Atick and Redlich, 1990). Methods for implementation using variants of stochastic learning are described and, for the special case of linear filtering, we derive an analytic expression for the output.

1 Introduction

The input intensity patterns received by the human visual system are typically complicated functions of the object surfaces and light sources in the world. Thus the visual system must be able to extract information from the input intensities that is relatively independent of the actual intensity values.

*Lei Xu was a research scholar in the Division of Applied Sciences at Harvard University while this work was performed.
We report face recognition results under drastically changing lighting conditions for a computer vision system which concurrently uses a contrast-sensitive silicon retina and a conventional, gain-controlled CCD camera. For both input devices the face recognition system employs an elastic matching algorithm with wavelet-based features to classify unknown faces. To assess the effect of analog on-chip preprocessing by the silicon retina, the CCD images have been "digitally preprocessed" with a bandpass filter to adjust the power spectrum. The silicon retina, with its ability to adjust sensitivity, increases the recognition rate up to 50 percent. These comparative experiments demonstrate that preprocessing with an analog VLSI silicon retina generates image data enriched with object-constant features.
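The "digital preprocessing" step, bandpass filtering a CCD image to adjust its power spectrum, can be sketched with a difference-of-Gaussians filter applied in the frequency domain (the filter shape and the two scales are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def bandpass(image, sigma_low=1.0, sigma_high=4.0):
    """Difference-of-Gaussians bandpass applied via the 2-D FFT."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f2 = fx**2 + fy**2
    # Gaussian low-pass transfer functions at two scales; their difference
    # passes a band of spatial frequencies and removes the DC component.
    narrow = np.exp(-2 * (np.pi**2) * (sigma_low**2) * f2)
    wide = np.exp(-2 * (np.pi**2) * (sigma_high**2) * f2)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * (narrow - wide)))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))   # stand-in for a CCD frame
out = bandpass(img)
```

Because the two transfer functions coincide at zero frequency, the filtered image has (numerically) zero mean, which is one way such preprocessing removes slowly varying illumination.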
What is the 'correct' theoretical description of neuronal activity? The analysis of the dynamics of a globally connected network of spiking neurons (the Spike Response Model) shows that a description by mean firing rates is possible only if active neurons fire incoherently. If firing occurs coherently or with spatiotemporal correlations, the spike structure of the neural code becomes relevant. Alternatively, neurons can be gathered into local or distributed ensembles or 'assemblies'. A description based on the mean ensemble activity is, in principle, possible, but the interaction between different assemblies becomes highly nonlinear. A description with spikes should therefore be preferred.
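The distinction between incoherent and coherent firing can be made concrete with a toy population simulation (an illustrative sketch, not the Spike Response Model itself): both populations have the same mean firing rate, but under coherent firing the population activity fluctuates far more than a mean-rate description would suggest.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins, rate = 200, 1000, 0.1

# Incoherent: each neuron spikes independently with probability `rate` per bin.
incoherent = rng.random((n_bins, n_neurons)) < rate

# Coherent: the whole population spikes together in a `rate` fraction of bins.
sync = rng.random(n_bins) < rate
coherent = np.tile(sync[:, None], (1, n_neurons))

# Population activity: fraction of neurons active in each time bin.
a_inc = incoherent.mean(axis=1)
a_coh = coherent.mean(axis=1)
```

Both activities average to about the same rate, but the variance of the coherent case is orders of magnitude larger: the averaging that justifies a mean-rate description only works when spikes are uncorrelated.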
Thomas G. Dietterich
Arris Pharmaceutical Corporation and Oregon State University
Corvallis, OR 97331-3202

Ajay N. Jain
Arris Pharmaceutical Corporation
385 Oyster Point Blvd., Suite 3
South San Francisco, CA 94080

Richard H. Lathrop and Tomas Lozano-Perez
Arris Pharmaceutical Corporation and MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139

Abstract

In drug activity prediction (as in handwritten character recognition), the features extracted to describe a training example depend on the pose (location, orientation, etc.) of the example. In handwritten character recognition, one of the best techniques for addressing this problem is the tangent distance method of Simard, LeCun and Denker (1993). Jain et al. (1993a; 1993b) introduce a new technique, dynamic reposing, that also addresses this problem. Dynamic reposing iteratively learns a neural network and then reposes the examples in an effort to maximize the predicted output values. New models are trained and new poses computed until models and poses converge. This paper compares dynamic reposing to the tangent distance method on the task of predicting the biological activity of musk compounds.
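A minimal sketch of the dynamic-reposing loop, with a least-squares linear model standing in for the neural network and random candidate poses per example (all data, names, and the choice of model are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n_examples, n_poses, n_feat = 30, 4, 5

# Each example has one candidate feature vector per pose.
features = rng.standard_normal((n_examples, n_poses, n_feat))
labels = rng.integers(0, 2, n_examples).astype(float)

pose = np.zeros(n_examples, dtype=int)
for _ in range(20):
    X = features[np.arange(n_examples), pose]       # features at current poses
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)  # fit the stand-in model
    new_pose = np.argmax(features @ w, axis=1)      # repose: maximize predicted output
    if np.array_equal(new_pose, pose):
        break                                       # poses and model have converged
    pose = new_pose
```

The loop alternates exactly the two steps the abstract names: fit a model given the current poses, then re-select each example's pose to maximize the model's prediction, stopping when neither changes.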
This paper introduces a new recognition-based segmentation approach to recognizing online cursive handwriting from a database of 10,000 English words. The original input stream of x, y pen coordinates is encoded as a sequence of uniform stroke descriptions that are processed by six feed-forward neural networks, each designed to recognize letters of different sizes. Words are then recognized by performing best-first search over the space of all possible segmentations. Results demonstrate that the method is effective at both writer-dependent recognition (1.7% to 15.5% error rate) and writer-independent recognition (5.2% to 31.1% error rate).

1 Introduction

With the advent of pen-based computers, the problem of automatically recognizing handwriting from the motions of a pen has gained much significance. Progress has been made in reading disjoint block letters [Weissman et.
University of Rochester) and ICSIM (ICSI Berkeley) allow the definition of unit types and complex connectivity patterns. On a very high level of abstraction, simulators like tlearn (UCSD) allow the easy realization of predefined network architectures (feed-forward networks) and learning algorithms such as backpropagation. Ben Gomes, International Computer Science Institute (Berkeley), introduced the Connectionist Supercomputer 1 (CNS-1). The CNS-1 is a multiprocessor system designed for the moderate-precision fixed-point operations used extensively in connectionist network calculations. Custom VLSI digital processors employ an on-chip vector coprocessor unit tailored for neural network calculations and controlled by a RISC scalar CPU. One processor and associated commercial DRAM comprise a node, which is connected in a mesh topology with other nodes to establish a MIMD array. One edge of the communications mesh is reserved for attaching various I/O devices, which connect via a custom network adaptor chip. The CNS-1 operates as a compute server, and one I/O port is used for connecting to a host workstation. Users with mainstream connectionist applications can use CNSim, an object-oriented, graphical high-level interface to the CNS-1 environment.
Four versions of a k-nearest neighbor algorithm with locally adaptive k are introduced and compared to the basic k-nearest neighbor algorithm (kNN). Locally adaptive kNN algorithms choose the value of k that should be used to classify a query by consulting the results of cross-validation computations in the local neighborhood of the query. Local kNN methods are shown to perform similarly to kNN in experiments with twelve commonly used data sets. Encouraging results in three constructed tasks show that local methods can significantly outperform kNN in specific applications. Local methods can be recommended for online learning and for applications where different regions of the input space are covered by patterns solving different sub-tasks.
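The locally adaptive idea can be sketched as follows: pick k by leave-one-out cross-validation restricted to the query's neighborhood, then classify with that k (the function name, neighborhood size m, and candidate set of k values are assumptions for illustration, not one of the paper's four versions):

```python
import numpy as np

def local_knn_predict(X, y, query, m=10, ks=(1, 3, 5)):
    """Choose k by leave-one-out CV among the query's m nearest points."""
    order = np.argsort(np.linalg.norm(X - query, axis=1))
    local = order[:m]                       # indices of the local neighborhood
    best_k, best_acc = ks[0], -1.0
    for k in ks:
        correct = 0
        for i in local:
            rest = np.array([j for j in local if j != i])
            d = np.linalg.norm(X[rest] - X[i], axis=1)
            votes = y[rest[np.argsort(d)[:k]]]
            correct += int(np.bincount(votes).argmax() == y[i])
        acc = correct / m
        if acc > best_acc:                  # keep the k that does best locally
            best_k, best_acc = k, acc
    votes = y[order[:best_k]]
    return np.bincount(votes).argmax(), best_k

# Two well-separated clusters with integer class labels.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred, k = local_knn_predict(X, y, np.array([0.1, 0.0]))
```

A query deep inside the first cluster is classified as class 0; in harder regions the locally cross-validated k can differ from one part of the input space to another, which is exactly the flexibility the abstract argues for.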