A Novel Channel Selection System in Cochlear Implants Using Artificial Neural Network

Neural Information Processing Systems

A cochlear implant is a device used to provide the sensation of sound to those who are profoundly deaf by means of electrical stimulation of residual auditory neurons. It generally consists of a directional microphone, a wearable speech processor, a headset transmitter and an implanted receiver-stimulator module with an electrode array which all together provide an electrical representation of the speech signal to the residual nerve fibres of the peripheral auditory system (Clark et al., 1990).


Predictive Q-Routing: A Memory-based Reinforcement Learning Approach to Adaptive Traffic Control

Neural Information Processing Systems

The controllers usually have no or only very little prior knowledge of the environment. While only local communication between controllers is allowed, the controllers must cooperate among themselves to achieve the common, global objective. Finding the optimal routing policy in such a distributed manner is very difficult. Moreover, since the environment is non-stationary, the optimal policy varies with time as a result of changes in network traffic and topology.
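The baseline this abstract builds on is plain Q-routing (Boyan and Littman), in which each node learns estimated delivery times by local communication only: a node forwards a packet to the neighbor with the lowest estimate and updates that estimate from the neighbor's own report. A minimal sketch of that baseline (not the paper's predictive, memory-based extension) follows; the ring topology, constant per-hop delay, and learning rate are illustrative assumptions.

```python
import random

# Minimal Q-routing sketch (Boyan & Littman style) on a toy ring network.
# Topology, per-hop delay, and learning rate are illustrative assumptions.

random.seed(0)
N = 6                                    # nodes 0..5 arranged in a ring
neighbors = {x: [(x - 1) % N, (x + 1) % N] for x in range(N)}

# Q[x][d][y]: node x's estimate of time to deliver to destination d via neighbor y
Q = {x: {d: {y: 0.0 for y in neighbors[x]} for d in range(N)} for x in range(N)}

LR = 0.5                                 # learning rate (assumed)
HOP_DELAY = 1.0                          # queue + transmission delay per hop (assumed)

def route(src, dst, max_hops=50):
    """Greedily forward a packet, updating Q estimates along the way."""
    x, hops = src, 0
    while x != dst and hops < max_hops:
        # forward to the neighbor with the lowest estimated remaining time
        y = min(neighbors[x], key=lambda n: Q[x][dst][n])
        # the neighbor reports its own best estimate t = min_z Q_y(dst, z)
        t = 0.0 if y == dst else min(Q[y][dst].values())
        # Q-routing update: move Q_x(dst, y) toward (hop delay + t)
        Q[x][dst][y] += LR * (HOP_DELAY + t - Q[x][dst][y])
        x, hops = y, hops + 1
    return hops

# train on random packets; estimates approach shortest-path hop counts
for _ in range(2000):
    route(random.randrange(N), random.randrange(N))
```

Because only a neighbor's locally reported estimate enters each update, the scheme needs no global view of the network, which is what makes it attractive for the distributed, non-stationary setting the abstract describes.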



Family Discovery

Neural Information Processing Systems

"Family discovery" is the task of learning the dimension and structure ofa parameterized family of stochastic models. It is especially appropriatewhen the training examples are partitioned into "episodes" of samples drawn from a single parameter value. We present three family discovery algorithms based on surface learning andshow that they significantly improve performance over two alternatives on a parameterized classification task. 1 INTRODUCTION Human listeners improve their ability to recognize speech by identifying the accent of the speaker. "Might" in an American accent is similar to "mate" in an Australian accent. By first identifying the accent, discrimination between these two words is improved.


An Information-theoretic Learning Algorithm for Neural Network Classification

Neural Information Processing Systems

A new learning algorithm is developed for the design of statistical classifiers minimizing the rate of misclassification. The method, which is based on ideas from information theory and analogies to statistical physics, assigns data to classes in probability. The distributions are chosen to minimize the expected classification error while simultaneously enforcing the classifier's structure and a level of "randomness" measured by Shannon's entropy. Achievement of the classifier structure is quantified by an associated cost. The constrained optimization problem is equivalent to the minimization of a Helmholtz free energy, and the resulting optimization method is a basic extension of the deterministic annealing algorithm that explicitly enforces structural constraints on assignments while reducing the entropy and expected cost with temperature. In the limit of low temperature, the error rate is minimized directly and a hard classifier with the requisite structure is obtained. This learning algorithm can be used to design a variety of classifier structures. The approach is compared with standard methods for radial basis function design and is demonstrated to substantially outperform other design methods on several benchmark examples, while often retaining design complexity comparable to, or only moderately greater than, that of strict descent-based methods.
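The deterministic-annealing mechanism underlying this approach can be illustrated without the paper's structural constraints: soft Gibbs assignments p(j|x) ∝ exp(-d(x, c_j)/T) are hardened as the temperature T is lowered, and prototypes split as T passes critical values. The 1-D data, two prototypes, and annealing schedule below are illustrative assumptions, not the paper's constrained classifier design.

```python
import numpy as np

# Unconstrained deterministic-annealing sketch (clustering flavor).
# Data, prototype count, and temperature schedule are assumptions.

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])[:, None]
C = rng.normal(0, 0.1, (2, 1))               # two nearly identical prototypes

for T in np.geomspace(4.0, 0.05, 30):        # annealing schedule: lower T gradually
    for _ in range(10):                      # fixed-point iterations at this T
        d = (X - C.T) ** 2                   # squared distances, shape (100, 2)
        # Gibbs posteriors at temperature T (shifted for numerical stability)
        P = np.exp(-(d - d.min(axis=1, keepdims=True)) / T)
        P /= P.sum(axis=1, keepdims=True)
        C = (P.T @ X) / P.sum(axis=0)[:, None]   # re-estimate prototypes

# at low T the soft assignments become effectively hard and the prototypes split
```

At high T the posteriors are nearly uniform and the free energy has a single minimum; as T drops, the assignments harden, mirroring the abstract's low-temperature limit in which a hard classifier is obtained.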


Independent Component Analysis of Electroencephalographic Data

Neural Information Processing Systems

Recent efforts to identify EEG sources have focused mostly on performing spatial segregation and localization of source activity [4]. By applying the ICA algorithm of Bell and Sejnowski [1], we attempt to completely separate the twin problems of source identification (What) and source localization (Where). The ICA algorithm derives independent sources from highly correlated EEG signals statistically and without regard to the physical location or configuration of the source generators. Rather than modeling the EEG as a unitary output of a multidimensional dynamical system, or as "the roar of the crowd" of independent microscopic generators, we suppose that the EEG is the output of a number of statistically independent but spatially fixed potential-generating systems which may either be spatially restricted or widely distributed.
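The Bell and Sejnowski Infomax rule the abstract applies can be sketched on a toy two-channel mixture rather than real EEG; the Laplacian sources, mixing matrix, learning rate, and batch size below are illustrative assumptions.

```python
import numpy as np

# Toy Infomax ICA sketch in the spirit of Bell & Sejnowski (1995).
# Sources, mixing matrix, and hyperparameters are assumptions, not EEG data.

rng = np.random.default_rng(1)
n = 20000
S = rng.laplace(size=(2, n))                     # two super-Gaussian sources
S /= S.std(axis=1, keepdims=True)
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
X = A @ S                                        # observed "electrode" signals

W = np.eye(2)                                    # unmixing matrix estimate
lr = 0.01
for _ in range(200):                             # passes over the data
    for i in range(0, n, 500):                   # mini-batches of 500 samples
        U = W @ X[:, i:i + 500]
        Y = 1.0 / (1.0 + np.exp(-U))             # logistic nonlinearity
        B = U.shape[1]
        # natural-gradient Infomax update: dW = lr * (I + (1 - 2Y) U^T / B) W
        W += lr * (np.eye(2) + (1.0 - 2.0 * Y) @ U.T / B) @ W

U = W @ X        # recovered sources, up to permutation and scale
```

As in the abstract, the unmixing is purely statistical: nothing in the update refers to where the sources are located, only to the independence of the recovered signals.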


Optimization Principles for the Neural Code

Neural Information Processing Systems

Recent experiments show that the neural codes at work in a wide range of creatures share some common features. At first sight, these observations seem unrelated. However, we show that these features arise naturally in a linear filtered threshold crossing (LFTC) model when we set the threshold to maximize the transmitted information. This maximization process requires neural adaptation to not only the DC signal level, as in conventional light and dark adaptation, but also to the statistical structure of the signal and noise distributions. We also present a new approach for calculating the mutual information between a neuron's output spike train and any aspect of its input signal which does not require reconstruction of the input signal. This formulation is valid provided the correlations in the spike train are small, and we provide a procedure for checking this assumption.
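The threshold-setting idea can be illustrated numerically: for a binary crossing code Y = 1[S + N > θ], sweep θ and keep the value maximizing an estimate of I(Y; S). The Gaussian signal and noise statistics are illustrative assumptions, and the estimator simply discretizes S rather than using the paper's spike-train formulation.

```python
import numpy as np

# Sketch: pick a crossing threshold maximizing transmitted information.
# Signal/noise statistics and the binned estimator are assumptions.

rng = np.random.default_rng(0)
S = rng.normal(0.0, 1.0, 200000)         # signal samples
N = rng.normal(0.0, 0.5, S.size)         # additive noise samples

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_info(theta, bins=30):
    """Estimate I(Y; S) = H(Y) - H(Y|S), discretizing S into quantile bins."""
    Y = (S + N > theta).astype(int)
    edges = np.quantile(S, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, S) - 1, 0, bins - 1)
    H_cond = sum((idx == b).mean() * h(Y[idx == b].mean()) for b in range(bins))
    return h(Y.mean()) - H_cond

thetas = np.linspace(-2, 2, 41)
info = [mutual_info(t) for t in thetas]
best = thetas[int(np.argmax(info))]      # symmetric setup, so the optimum sits near 0
```

In this symmetric toy setup the information-maximizing threshold sits at the signal's DC level; shifting the signal or noise distributions shifts the optimum, which is the adaptation the abstract describes.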



A Neural Network Model of 3-D Lightness Perception

Neural Information Processing Systems

A neural network model of 3-D lightness perception is presented which builds upon the FACADE Theory Boundary Contour System/Feature Contour System of Grossberg and colleagues. Early ratio encoding by retinal ganglion neurons as well as psychophysical results on constancy across different backgrounds (background constancy) are used to provide functional constraints to the theory and suggest a contrast negation hypothesis which states that ratio measures between coplanar regions are given more weight in the determination of lightness of the respective regions.


Optimal Asset Allocation using Adaptive Dynamic Programming

Neural Information Processing Systems

Ralph Neuneier, Siemens AG, Corporate Research and Development, Otto-Hahn-Ring 6, D-81730 München, Germany

In recent years, the interest of investors has shifted to computerized asset allocation (portfolio management) to exploit the growing dynamics of the capital markets. In this paper, asset allocation is formalized as a Markovian Decision Problem which can be optimized by applying dynamic programming or reinforcement learning based algorithms. Using an artificial exchange rate, the asset allocation strategy optimized with reinforcement learning (Q-Learning) is shown to be equivalent to a policy computed by dynamic programming. The approach is then tested on the task of investing liquid capital in the German stock market. Here, neural networks are used as value function approximators.
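The Markovian Decision Problem framing can be sketched with tabular Q-Learning on a toy market: at each step the investor holds either cash or the risky asset. The two-mode market dynamics and returns below are illustrative assumptions, not the paper's artificial exchange rate or German stock market task, and no neural network approximator is used.

```python
import random

# Toy asset-allocation MDP solved by tabular Q-Learning.
# Market modes, returns, and hyperparameters are assumptions.

random.seed(0)
RETURN = {"up": 0.01, "down": -0.01}     # risky-asset return per market mode
P_STAY = 0.9                             # probability the market mode persists
ACTIONS = ("cash", "asset")
Q = {(m, a): 0.0 for m in RETURN for a in ACTIONS}
LR, GAMMA, EPS = 0.1, 0.95, 0.1          # learning rate, discount, exploration

mode = "up"
for _ in range(50000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(mode, x)])
    reward = RETURN[mode] if a == "asset" else 0.0
    next_mode = mode if random.random() < P_STAY else ("down" if mode == "up" else "up")
    # Q-Learning update toward reward + discounted best next value
    best_next = max(Q[(next_mode, x)] for x in ACTIONS)
    Q[(mode, a)] += LR * (reward + GAMMA * best_next - Q[(mode, a)])
    mode = next_mode

# learned policy: hold the asset while the market is "up", cash while "down"
```

With so few states the table could equally be solved exactly by dynamic programming, which is the equivalence the abstract establishes before moving to function approximation for the larger stock-market task.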