The Clusteron: Toward a Simple Abstraction for a Complex Neuron

Neural Information Processing Systems

Are single neocortical neurons as powerful as multi-layered networks? A recent compartmental modeling study has shown that voltage-dependent membrane nonlinearities present in a complex dendritic tree can provide a virtual layer of local nonlinear processing elements between synaptic inputs and the final output at the cell body, analogous to a hidden layer in a multi-layer network. In this paper, an abstract model neuron is introduced, called a clusteron, which incorporates aspects of the dendritic "cluster-sensitivity" phenomenon seen in these detailed biophysical modeling studies. It is shown, using a clusteron, that a Hebb-type learning rule can be used to extract higher-order statistics from a set of training patterns, by manipulating the spatial ordering of synaptic connections onto the dendritic tree. The potential neurobiological relevance of these higher-order statistics for nonlinear pattern discrimination is then studied within a full compartmental model of a neocortical pyramidal cell, using a training set of 1000 high-dimensional sparse random patterns.
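The abstract's learning mechanism — improving cluster-sensitive responses by relocating synapses along the dendrite rather than changing weights — can be illustrated with a minimal sketch. Everything here (the neighborhood radius, pattern sparsity, and the "relocate below-average synapses" rule) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # number of synaptic sites on the model "dendrite"
RADIUS = 2      # neighborhood half-width defining a local cluster (assumed)
N_PATTERNS = 20

# Sparse random binary training patterns (a small stand-in for the
# 1000 high-dimensional sparse patterns mentioned in the abstract).
patterns = (rng.random((N_PATTERNS, N)) < 0.3).astype(float)

# perm[i] = which input line is attached to dendritic site i
perm = rng.permutation(N)

def site_activations(x, perm):
    """Cluster-sensitive activation: each site's response is boosted by
    co-active inputs within RADIUS sites on the dendrite."""
    s = x[perm]                       # inputs arranged along the dendrite
    a = np.empty(N)
    for i in range(N):
        lo, hi = max(0, i - RADIUS), min(N, i + RADIUS + 1)
        a[i] = s[i] * s[lo:hi].sum()  # includes the site's own input
    return a

def train_step(perm):
    """Hebb-type structural rule (assumed form): synapses whose mean
    activation falls below average are shuffled to new locations,
    freeing well-clustered synapses to stay put."""
    mean_act = np.mean([site_activations(x, perm) for x in patterns], axis=0)
    losers = np.where(mean_act < mean_act.mean())[0]
    perm[losers] = perm[rng.permutation(losers)]  # relocate poor performers
    return perm

before = np.mean([site_activations(x, perm).sum() for x in patterns])
for _ in range(50):
    perm = train_step(perm)
after = np.mean([site_activations(x, perm).sum() for x in patterns])
print(f"mean response before: {before:.2f}  after: {after:.2f}")
```

Note that learning here changes only the spatial ordering `perm`, never a weight — the nonlinearity comes entirely from the multiplicative neighborhood term.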


Best-First Model Merging for Dynamic Learning and Recognition

Neural Information Processing Systems

"Best-first model merging" is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modeling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.


Modeling Applications with the Focused Gamma Net

Neural Information Processing Systems

The focused gamma network is proposed as one possible implementation of the gamma neural model. The focused gamma network is compared with the focused backpropagation network and TDNN on a time series prediction problem, and with ADALINE on a system identification problem.



Models Wanted: Must Fit Dimensions of Sleep and Dreaming

Neural Information Processing Systems

During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.


Single Neuron Model: Response to Weak Modulation in the Presence of Noise

Neural Information Processing Systems

The reduced neuron model consists of a single Hopfield-type computational element, which may be modeled as an R-C circuit with nonlinear feedback provided by an operational amplifier having a sigmoid transfer function; its response to a weak periodic signal in the presence of noise exhibits stochastic resonance.
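The behavior described can be reproduced with a small simulation of such an element. All parameter values below are illustrative assumptions (not the paper's): a bistable R-C element with tanh feedback, driven by a subthreshold sine wave, integrated by Euler-Maruyama with and without noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: sigmoid feedback gain > 1 makes the element
# bistable; EPS is too weak to cross the barrier on its own.
R, C = 1.0, 1.0
GAIN = 2.0
EPS = 0.15
OMEGA = 2 * np.pi * 0.01
SIGMA = 0.7           # noise intensity (assumed)
DT = 0.02
STEPS = 100_000

def simulate(sigma):
    """Euler-Maruyama integration of
       C dv/dt = -v/R + GAIN*tanh(v) + EPS*sin(OMEGA*t) + noise."""
    v = np.empty(STEPS)
    v[0] = 1.0
    for n in range(1, STEPS):
        t = n * DT
        drift = (-v[n-1] / R + GAIN * np.tanh(v[n-1])
                 + EPS * np.sin(OMEGA * t)) / C
        v[n] = v[n-1] + drift * DT + sigma * np.sqrt(DT) * rng.standard_normal()
    return v

v_quiet = simulate(0.0)
v_noisy = simulate(SIGMA)

# Without noise the weak modulation cannot drive the element between its
# two stable states; with noise, barrier crossings occur — the regime in
# which stochastic resonance is observed.
print("state transitions, no noise:", np.sum(np.diff(np.sign(v_quiet)) != 0))
print("state transitions, noise:   ", np.sum(np.diff(np.sign(v_noisy)) != 0))
```

The key qualitative point is that the noiseless trajectory stays pinned near one stable state, while noise enables signal-assisted hopping between the two.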



A Parallel Analog CCD/CMOS Signal Processor

Neural Information Processing Systems

A CCD-based signal processing IC that computes a fully parallel single-quadrant vector-matrix multiplication has been designed and fabricated with a 2 μm CCD/CMOS process. The device incorporates an array of charge-coupled devices (CCDs) which hold an analog matrix of charge encoding the matrix elements. Input vectors are digital with 1-8 bit accuracy.
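The arithmetic such a device performs — an analog charge matrix multiplied by bit-serial digital inputs — can be emulated functionally. This sketch assumes a standard bit-plane decomposition (MSB first, halving between planes via doubling of the accumulator); it is not a model of the actual charge-domain circuit:

```python
import numpy as np

rng = np.random.default_rng(2)

BITS = 8                      # input accuracy, 1-8 bits per the abstract
M, N = 4, 6                   # matrix dimensions (illustrative)

# Analog matrix of charge: single-quadrant, so all values non-negative.
A = rng.random((M, N))

# Digital input vector of BITS-bit unsigned integers.
x = rng.integers(0, 2**BITS, size=N)

def ccd_matvec(A, x, bits=BITS):
    """Bit-serial emulation of the charge-domain multiply: each input bit
    gates the corresponding column of charge into an accumulator, and
    earlier partial sums are doubled between bit planes (binary weighting)."""
    acc = np.zeros(A.shape[0])
    for b in range(bits - 1, -1, -1):   # most significant bit first
        acc *= 2                        # reweight previous bit planes
        bit = (x >> b) & 1              # current bit plane of the input
        acc += A @ bit                  # accumulate charge where bit == 1
    return acc

# The bit-serial result equals an ordinary vector-matrix product.
assert np.allclose(ccd_matvec(A, x), A @ x)
print(ccd_matvec(A, x))
```

Since all matrix entries and inputs are non-negative, no sign handling is needed — which is exactly what "single quadrant" buys the hardware.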


A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm

Neural Information Processing Systems

A neurophysiologically-based model is presented that controls a simulated kinematic arm during goal-directed reaches. The network generates a quasi-feedforward motor command that is learned using training signals generated by corrective movements. For each target, the network selects and sets the output of a subset of pattern generators. During the movement, feedback from proprioceptors turns off the pattern generators. The task facing individual pattern generators is to recognize when the arm reaches the target and to turn off. A distributed representation of the motor command that resembles population vectors seen in vivo was produced naturally by these simulations.


Generalization Performance in PARSEC - A Structured Connectionist Parsing Architecture

Neural Information Processing Systems

This paper presents PARSEC, a system for generating connectionist parsing networks from example parses. PARSEC is not based on formal grammar systems and is geared toward spoken language tasks. PARSEC networks exhibit three strengths important for application to speech processing: 1) they learn to parse and generalize well compared to hand-coded grammars; 2) they tolerate several types of noise; 3) they can learn to use multi-modal input. The PARSEC architecture is presented, along with performance analyses along several dimensions that demonstrate PARSEC's features. PARSEC's performance is compared to that of traditional grammar-based parsing systems.