

The Connectivity Analysis of Simple Association

Neural Information Processing Systems

Oregon Graduate Center, Beaverton, OR 97006

ABSTRACT

The efficient realization, using current silicon technology, of Very Large Connection Networks (VLCN) with more than a billion connections requires that these networks exhibit a high degree of communication locality. Real neural networks exhibit significant locality, yet most connectionist/neural network models have little. In this paper, the connectivity requirements of a simple associative network are analyzed using communication theory. Several techniques based on communication theory are presented that improve the robustness of the network in the face of sparse, local interconnect structures. Also discussed are some potential problems that arise when information is distributed too widely.

INTRODUCTION

Connectionist/neural network researchers are learning to program networks that exhibit a broad range of cognitive behavior.
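
To make the locality argument concrete, the following minimal Python sketch (ours, not the paper's analysis; the 2D grid layout, Manhattan wire metric, and the radius parameter are all assumptions) compares total wire length under full versus radius-limited connectivity:

# Illustrative sketch: rough wire-length comparison for full vs. local
# connectivity on a 2D grid of nodes. All modeling choices here (grid
# layout, Manhattan distance, radius r) are assumptions made for
# illustration, to show why very large networks need locality.

import itertools

def total_wire_length(side, radius=None):
    """Sum of Manhattan wire lengths between connected node pairs.

    side   -- grid is side x side nodes
    radius -- if given, only pairs within this Manhattan distance
              connect (local interconnect); if None, all pairs connect.
    """
    nodes = list(itertools.product(range(side), range(side)))
    total = 0
    for (x1, y1), (x2, y2) in itertools.combinations(nodes, 2):
        d = abs(x1 - x2) + abs(y1 - y2)
        if radius is None or d <= radius:
            total += d
    return total

side = 16  # 256 nodes; kept small so the pairwise loop stays fast
print("full connectivity :", total_wire_length(side), "wire units")
print("local (r<=2)      :", total_wire_length(side, radius=2), "wire units")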


Partitioning of Sensory Data by a Cortical Network

Neural Information Processing Systems

Two hundred layer II cells are used with 100 input (LOT) lines and 200 collateral axons; both the LOT and collateral axons flow caudally. LOT axons connect with rostral dendrites with a probability of 0.2, which decreases linearly to 0.05 by the caudal end of the model. The connectivity is arranged randomly, subject to the constraint that the number of contacts for axons and dendrites is fixed within certain narrow boundaries (in the most severe case, each axon forms 20 synapses and each dendrite receives 20 contacts). The resulting matrix is thus hypergeometric in both dimensions. There are 20 simulated inhibitory interneurons, such that the layer II cells are arranged in 20 overlapping patches, each within the influence of one such inhibitory cell.
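
The probabilistic wiring rule above can be sketched in a few lines of Python. This is an illustrative simplification: it uses independent Bernoulli draws per potential contact rather than the degree-constrained (hypergeometric) sampling the model actually enforces, and the seed and array shapes are our own choices:

# Sketch of the random LOT-to-layer-II connectivity described above:
# 100 LOT axons onto 200 cells, connection probability falling
# linearly from 0.2 (rostral) to 0.05 (caudal). Independent Bernoulli
# draws are an assumption; the model fixes in/out degrees tightly.

import numpy as np

rng = np.random.default_rng(0)

n_lot, n_cells = 100, 200
# Per-cell connection probability, decreasing rostral -> caudal.
p = np.linspace(0.2, 0.05, n_cells)

# conn[i, j] = True if LOT axon i contacts layer II cell j.
conn = rng.random((n_lot, n_cells)) < p  # broadcast p across axons

print("mean contacts per rostral cell:", conn[:, :20].sum(axis=0).mean())
print("mean contacts per caudal cell :", conn[:, -20:].sum(axis=0).mean())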


Probabilistic Characterization of Neural Model Computations

Neural Information Processing Systems

Learning algorithms for the neural network which search for the "most probable" member of P can then be designed. Statistical tests which decide if the "true" or environmental probability distribution is in P can also be developed. Example applications of the theory to the highly nonlinear back-propagation learning algorithm and to the networks of Hopfield and Anderson are discussed.

INTRODUCTION

A connectionist system is a network of simple neuron-like computing elements which can store and retrieve information and, most importantly, make generalizations. Using terminology suggested by Rumelhart & McClelland [1], the computing elements of a connectionist system are called units, and each unit is associated with a real number indicating its activity level. The activity level of a given unit in the system can also influence the activity level of another unit. The degree of influence between two such units is often characterized by a parameter of the system known as a connection strength. During the information retrieval process, some subset of the units in the system are activated, and these units in turn activate neighboring units via the inter-unit connection strengths.
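
The retrieval process just described can be illustrated with a small Python sketch. The synchronous update rule and the logistic squashing function are assumptions made here for concreteness; the paper's probabilistic framework does not commit to them:

# Minimal sketch of spreading activation: a subset of units is set
# active, and activity propagates to neighbors through the
# connection-strength matrix.

import numpy as np

def retrieve(weights, clamped, steps=10):
    """Spread activation through a network of units.

    weights -- (n, n) connection strengths; weights[i, j] is the
               influence of unit j on unit i
    clamped -- (n,) initial activity levels (the activated subset)
    """
    act = clamped.astype(float)
    for _ in range(steps):
        net = weights @ act                 # summed input to each unit
        act = 1.0 / (1.0 + np.exp(-net))    # squash to an activity level
    return act

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(8, 8))
start = np.zeros(8)
start[[0, 3]] = 1.0   # activate units 0 and 3
print(retrieve(W, start))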


MURPHY: A Robot that Learns by Doing

Neural Information Processing Systems

Current Focus of Learning Research

Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system until the system ultimately produces the correct output in response to a given input pattern [8].
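
As a concrete instance of the unsupervised category, the following Python sketch implements a basic k-means clusterer that maps input patterns to a small set of discovered categories. It is an illustrative stand-in, not MURPHY's learning algorithm:

# k-means: discover k categories (cluster centers) in a stream of
# input patterns, as an example of unsupervised structure detection.

import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Partition points x (n, d) into k categories."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pattern to its nearest center (its "category").
        labels = np.argmin(
            ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned patterns.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 0.3, (50, 2)),
                  rng.normal(3, 0.3, (50, 2))])
centers, labels = kmeans(data, k=2)
print(centers)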



Basins of Attraction for Electronic Neural Networks

Neural Information Processing Systems

Basin measurement circuitry periodically opens the network feedback loop, loads raster-scanned initial conditions and examines the resulting attractor. Plotting the basins for fixed points (memories), we show that overloading an associative memory network leads to irregular basin shapes. The network also includes analog time delay circuitry, and we have shown that delay in symmetric networks can introduce basins for oscillatory attractors. Conditions leading to oscillation are related to the presence of frustration; reducing frustration by diluting the connections can stabilize a delay network.
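
A software analogue of the basin-measurement procedure conveys the idea; the Hopfield-style dynamics, Hebbian weights, and two-unit raster below are our illustrative assumptions, not the chip's circuitry:

# Sweep a raster of analog initial conditions over two chosen units,
# run the network to a fixed point, and record which attractor each
# starting state converges to.

import numpy as np

def run_to_attractor(W, state, max_steps=100):
    """Synchronous sign-threshold updates until a fixed point.

    Synchronous updates can 2-cycle; max_steps caps the run.
    """
    s = state.astype(float)
    for _ in range(max_steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return tuple(int(v) for v in s)

# Hebbian (outer-product) weights storing two memories.
mems = np.array([[1, -1, 1, -1, 1, -1],
                 [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(m, m) for m in mems).astype(float)
np.fill_diagonal(W, 0)

# Raster-scan analog initial values on units 0 and 1; others fixed.
attractors = {}
basin = np.empty((9, 9), dtype=int)
for i, a in enumerate(np.linspace(-1, 1, 9)):
    for j, b in enumerate(np.linspace(-1, 1, 9)):
        init = np.array([a, b, 1, -1, 1, -1], dtype=float)
        fp = run_to_attractor(W, init)
        basin[i, j] = attractors.setdefault(fp, len(attractors))
print(basin)  # each integer labels one attractor's basin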



Microelectronic Implementations of Connectionist Neural Networks

Neural Information Processing Systems

Three chip designs are described: a hybrid digital/analog programmable connection matrix, an analog connection matrix with adjustable connection strengths, and a digital pipelined best-match chip. The common feature of the designs is the distribution of arithmetic processing power among the data storage elements to minimize data movement.
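
The best-match operation itself is simple to state in software; the chip's contribution is performing it in a pipeline distributed across the storage. A minimal Python sketch of the operation (ours, not the chip's design):

# Best match: compare a probe word against every stored word and
# return the closest by Hamming distance. On the chip this comparison
# is distributed across the storage itself; a plain loop stands in.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two words."""
    return bin(a ^ b).count("1")

def best_match(memory: list, probe: int) -> int:
    """Index of the stored word nearest the probe."""
    return min(range(len(memory)), key=lambda i: hamming(memory[i], probe))

memory = [0b10110010, 0b01101100, 0b11110000, 0b00001111]
probe = 0b11100001
i = best_match(memory, probe)
print(f"best match: index {i}, word {memory[i]:08b}")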


Optimal Neural Spike Classification

Neural Information Processing Systems

Using one extracellular microelectrode to record from several neurons is one approach to studying the response properties of sets of adjacent, and therefore likely related, neurons. To do this, however, it is necessary to correctly classify the signals generated by the different neurons. This paper considers the problem of classifying the signals in such an extracellular recording based upon their shapes, and specifically addresses the case in which spikes overlap temporally.

Introduction

How single neurons in a network of neurons interact when processing information is a question central to understanding how real neural networks compute. In the mammalian nervous system we know that spatially adjacent neurons are, in general, more likely to interact, as well as to receive common inputs.
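
One common formulation of this classification problem is template matching, sketched below in Python. The greedy subtract-and-rematch handling of overlaps is a simplification we assume for illustration, not the optimal classifier the paper derives:

# Match each candidate waveform to the stored spike template with the
# smallest squared error; handle overlapping spikes greedily by
# subtracting the best-fitting template and matching the residual.

import numpy as np

def classify(waveform, templates, max_spikes=2, threshold=0.5):
    """Greedy template matching with subtraction for overlaps."""
    residual = waveform.copy()
    found = []
    for _ in range(max_spikes):
        errs = [np.sum((residual - t) ** 2) for t in templates]
        k = int(np.argmin(errs))
        if errs[k] > threshold * np.sum(residual ** 2):
            break                     # no template explains much more
        found.append(k)
        residual = residual - templates[k]
    return found, residual

t = np.linspace(0, 1, 40)
templates = [np.exp(-((t - 0.3) / 0.05) ** 2),         # neuron A's shape
             -0.7 * np.exp(-((t - 0.6) / 0.08) ** 2)]  # neuron B's shape
overlap = templates[0] + templates[1]                  # two spikes overlap
print(classify(overlap, templates)[0])                 # -> [0, 1]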