Blind Separation of Radio Signals in Fading Channels
We apply information maximization / maximum likelihood blind source separation [2, 6] to complex-valued signals mixed with complex-valued nonstationary matrices. This case arises in radio communications with baseband signals. We incorporate known source signal distributions in the adaptation, thus making the algorithms less "blind". This results in a drastic reduction of the amount of data needed for successful convergence. Adaptation to rapidly changing signal mixing conditions, such as fading in mobile communications, now becomes feasible, as demonstrated by simulations. 1 Introduction In SDMA (spatial division multiple access) the purpose is to separate the radio signals of interfering users (either intentional or accidental) from each other on the basis of the spatial characteristics of the signals, using smart antennas, array processing, and beamforming [5, 8].
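A minimal numerical sketch of the idea, assuming QPSK-like unit-modulus sources and a stationary 2x2 complex mixing matrix (the fading case would vary the mixture over time); the score function, gain c, and step size are illustrative choices, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK-like baseband sources: unit-modulus complex symbols, a toy
# stand-in for the known source distributions exploited in the paper.
n = 4000
S = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, (2, n))))
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # complex mixing
X = A @ S

# Natural-gradient ML/infomax separation with a score function matched
# to a density concentrated near |s| = 1: phi vanishes on the unit
# circle and pulls the output moduli toward it.
W = np.eye(2, dtype=complex)
mu, c = 0.005, 4.0
for t in range(n):
    x = X[:, t:t + 1]
    y = W @ x
    phi = c * y * (1.0 - 1.0 / np.maximum(np.abs(y), 1e-9))
    W = W + mu * (np.eye(2) - phi @ y.conj().T) @ W

P = np.abs(W @ A)   # ideally close to a scaled permutation matrix
print(P)
```

The matched score is what makes the algorithm less "blind": it injects the known symbol geometry into every update, which is why far fewer samples are needed than with a generic nonlinearity.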
An Analog VLSI Model of the Fly Elementary Motion Detector
Harrison, Reid R., Koch, Christof
Flies are capable of rapidly detecting and integrating visual motion information in behaviorally relevant ways. The first stage of visual motion processing in flies is a retinotopic array of functional units known as elementary motion detectors (EMDs). Several decades ago, Reichardt and colleagues developed a correlation-based model of motion detection that described the behavior of these neural circuits. We have implemented a variant of this model in a 2.0-μm analog CMOS VLSI process. The result is a low-power, continuous-time analog circuit with integrated photoreceptors that responds to motion in real time. The responses of the circuit to drifting sinusoidal gratings qualitatively resemble the temporal frequency response, spatial frequency response, and direction selectivity of motion-sensitive neurons observed in insects. In addition to its possible engineering applications, the circuit could potentially be used as a building block for constructing hardware models of higher-level insect motion integration.
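The correlation model itself is easy to simulate. A sketch of a discrete-time Hassenstein-Reichardt detector driven by a drifting sinusoid (time constants and stimulus parameters are illustrative, not those of the chip):

```python
import numpy as np

# Hassenstein-Reichardt EMD: each of two spatially offset inputs is
# low-pass filtered (the "delay"), correlated with the undelayed
# neighbor, and the two half-detectors are subtracted to give a
# direction-selective opponent output.
def emd_response(stim_a, stim_b, dt=1e-3, tau=0.05):
    alpha = dt / (tau + dt)            # first-order low-pass coefficient
    da = np.zeros_like(stim_a)
    db = np.zeros_like(stim_b)
    for t in range(1, len(stim_a)):
        da[t] = da[t - 1] + alpha * (stim_a[t] - da[t - 1])
        db[t] = db[t - 1] + alpha * (stim_b[t] - db[t - 1])
    return da * stim_b - db * stim_a   # opponent correlation

# Drifting sinusoidal grating sampled at two points 1 rad apart in phase.
dt, f = 1e-3, 4.0                      # 4 Hz temporal frequency
t = np.arange(0.0, 2.0, dt)
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * t - 1.0)    # b lags a: drift in one direction
r_pref = emd_response(a, b, dt).mean()
r_null = emd_response(b, a, dt).mean() # same stimulus, reversed direction
print(r_pref, r_null)                  # opposite signs: direction selectivity
```

The mean output changes sign with stimulus direction, and its magnitude depends on temporal frequency through the low-pass gain and phase lag, mirroring the tuning measured from the circuit.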
Generalization in Decision Trees and DNF: Does Size Matter?
Golea, Mostefa, Bartlett, Peter L., Lee, Wee Sun, Mason, Llew
Recent theoretical results for pattern classification with thresholded real-valued functions (such as support vector machines, sigmoid networks, and boosting) give bounds on misclassification probability that do not depend on the size of the classifier, and hence can be considerably smaller than the bounds that follow from the VC theory. In this paper, we show that these techniques can be more widely applied, by representing other boolean functions as two-layer neural networks (thresholded convex combinations of boolean functions).
Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks
Seung, H. Sebastian, Richardson, Tom J., Lagarias, J. C., Hopfield, John J.
A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions [1, 2].
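The final claim can be checked numerically in the purely symmetric case: for a graded-response network with symmetric weights, the classical Hopfield energy decreases along trajectories. A sketch (network size, gains, and the Euler step are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.normal(scale=0.5, size=(n, n))
W = (M + M.T) / 2                  # symmetric interactions
b = rng.normal(scale=0.2, size=n)

def energy(u):
    v = np.tanh(u)
    # E(v) = -1/2 v'Wv - b'v + sum_i integral_0^{v_i} arctanh(s) ds,
    # with the integral evaluated in closed form.
    leak = v * np.arctanh(v) + 0.5 * np.log1p(-v ** 2)
    return -0.5 * v @ W @ v - b @ v + leak.sum()

u = rng.normal(size=n)
dt = 0.005
E = []
for _ in range(3000):
    E.append(energy(u))
    u = u + dt * (-u + W @ np.tanh(u) + b)  # graded-response dynamics

E = np.array(E)
print(E[0], E[-1])                 # the energy only decreases
```

Along exact trajectories dE/dt = -sum_i f'(u_i) (du_i/dt)^2 <= 0, so E is a Lyapunov function; the paper's construction extends this idea to the excitatory-inhibitory case where the cross-population interactions are antisymmetric.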
Adaptive Choice of Grid and Time in Reinforcement Learning
Consistency problems arise if the discretization needs to be refined, e.g. for more accuracy, application of multi-grid iteration, or better starting values for the iteration of the approximate optimal value function. In [7] it was shown that, for diffusion-dominated problems, a state-to-time discretization ratio k/h of Ch^γ
Asymptotic Theory for Regularization: One-Dimensional Linear Case
The generalization ability of a neural network can sometimes be improved dramatically by regularization. To analyze the improvement, one needs more refined results than the asymptotic distribution of the weight vector. Here we study the simple case of one-dimensional linear regression under quadratic regularization, i.e., ridge regression, in the random-design, misspecified setting, and derive expansions for the optimal regularization parameter and the ensuing improvement. It is possible to construct examples where it is best to use no regularization.
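In the one-dimensional case the calculation is explicit: the ridge estimator w(λ) = Σxᵢyᵢ / (Σxᵢ² + λ) has conditional MSE (λ²w₀² + σ²Σxᵢ²) / (Σxᵢ² + λ)², which is minimized at λ* = σ²/w₀² for every design. A Monte Carlo sketch of the resulting improvement (the well-specified Gaussian setup below is a simplification of the paper's misspecified random-design case):

```python
import numpy as np

rng = np.random.default_rng(2)
w0, sigma, n, trials = 0.5, 1.0, 20, 4000
lam_opt = sigma ** 2 / w0 ** 2     # minimizer of the conditional MSE

def ridge_mse(lam):
    # Average squared estimation error of 1-D ridge over random designs.
    errs = []
    for _ in range(trials):
        x = rng.normal(size=n)
        y = w0 * x + sigma * rng.normal(size=n)
        w_hat = (x @ y) / (x @ x + lam)
        errs.append((w_hat - w0) ** 2)
    return np.mean(errs)

mse0, mse_opt = ridge_mse(0.0), ridge_mse(lam_opt)
print(mse0, mse_opt)               # regularization lowers the error
```

Because λ* minimizes the conditional MSE for every realization of the design, it also minimizes the marginal MSE here; the misspecified case analyzed in the paper requires the more refined expansions.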
Bayesian Robustification for Audio Visual Fusion
Movellan, Javier R., Mineiro, Paul
Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92092-0515. We discuss the problem of catastrophic fusion in multimodal recognition systems. This problem arises in systems that need to fuse different channels in non-stationary environments. Practice shows that when recognition modules within each modality are tested in contexts inconsistent with their assumptions, their influence on the fused product tends to increase, with catastrophic results. We explore a principled solution to this problem based upon Bayesian ideas of competitive models and inference robustification: each sensory channel is provided with simple white-noise context models, and the perceptual hypothesis and context are jointly estimated. Consequently, context deviations are interpreted as changes in white-noise contamination strength, automatically adjusting the influence of the module.
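A toy scalar analogue of the mechanism, with hypothetical channel data and an alternating re-estimation loop standing in for the joint inference (this is a sketch of the robustification idea, not the authors' exact model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Channel A operates in its assumed context (tight around the truth,
# 2.0); channel B's context is violated, so its readings are biased
# and noisy.
chan_a = 2.0 + 0.1 * rng.normal(size=50)
chan_b = 9.0 + 3.0 * rng.normal(size=50)

def robust_fuse(channels, iters=30, floor=1e-4):
    # Alternate between (1) a precision-weighted fused estimate and
    # (2) re-estimating a white-noise strength per channel. A channel
    # that fits the hypothesis badly is explained away as noise and
    # loses influence, instead of dragging the fused estimate.
    est = np.mean([c.mean() for c in channels])
    var = np.ones(len(channels))
    for _ in range(iters):
        w = np.array([len(c) / v for c, v in zip(channels, var)])
        est = np.sum([wi * c.mean() for wi, c in zip(w, channels)]) / w.sum()
        var = np.array([np.mean((c - est) ** 2) + floor for c in channels])
    return est

naive = np.mean(np.concatenate([chan_a, chan_b]))  # dragged to the middle
robust = robust_fuse([chan_a, chan_b])             # hugs the clean channel
print(naive, robust)
```

The naive average lands midway between the channels (catastrophic fusion), while the jointly estimated noise strengths let the clean channel dominate.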
An Incremental Nearest Neighbor Algorithm with Queries
We consider the general problem of learning multi-category classification from labeled examples. We present experimental results for a nearest neighbor algorithm which actively selects samples from different pattern classes according to a querying rule instead of the a priori class probabilities. The amount of improvement of this query-based approach over the passive batch approach depends on the complexity of the Bayes rule. The principle on which this algorithm is based is general enough to be used in any learning algorithm which permits a model-selection criterion and for which the error rate of the classifier is calculable in terms of the complexity of the model. 1 INTRODUCTION We consider the general problem of learning multi-category classification from labeled examples. In many practical learning settings the time or sample size available for training is limited. This may have adverse effects on the accuracy of the resulting classifier. For instance, in learning to recognize handwritten characters, a typical time limitation confines the training sample size to the order of a few hundred examples. It is important to make learning more efficient by obtaining only training data which contains significant information about the separability of the pattern classes, thereby letting the learning algorithm participate actively in the sampling process. Querying for the class labels of specifically selected examples in the input space may lead to significant improvements in the generalization error (cf.
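A pool-based sketch of such a querying rule for a 1-NN classifier on synthetic two-class data; the margin-style uncertainty criterion below is one illustrative choice of rule, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Pool of unlabeled points from two overlapping Gaussian classes.
n_pool = 400
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (n_pool // 2, 2)),
               rng.normal([2.5, 2.5], 1.0, (n_pool // 2, 2))])
y = np.array([0] * (n_pool // 2) + [1] * (n_pool // 2))

def nn_predict(Xl, yl, Xq):
    d = np.linalg.norm(Xq[:, None, :] - Xl[None, :, :], axis=2)
    return yl[d.argmin(axis=1)]

labeled = [0, 1, n_pool // 2, n_pool // 2 + 1]  # seed: two per class
for _ in range(26):                              # query budget
    rest = [i for i in range(n_pool) if i not in labeled]
    Xl, yl = X[labeled], y[labeled]
    d = np.linalg.norm(X[rest][:, None, :] - Xl[None, :, :], axis=2)
    d0 = np.where(yl == 0, d, np.inf).min(axis=1)  # nearest class-0 label
    d1 = np.where(yl == 1, d, np.inf).min(axis=1)  # nearest class-1 label
    # Small |d0 - d1|: the point sits near the current 1-NN boundary,
    # so its label is the most informative query.
    margin = np.abs(d0 - d1)
    labeled.append(rest[int(margin.argmin())])

# Held-out evaluation with only 30 labels spent.
Xt = np.vstack([rng.normal([0.0, 0.0], 1.0, (200, 2)),
                rng.normal([2.5, 2.5], 1.0, (200, 2))])
yt = np.array([0] * 200 + [1] * 200)
err = np.mean(nn_predict(X[labeled], y[labeled], Xt) != yt)
print(err)
```

The queried labels concentrate near the class boundary, which is where the abstract's point about the complexity of the Bayes rule bites: the harder the boundary, the more a passive sampler wastes labels away from it.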
Ensemble Learning for Multi-Layer Networks
Barber, David, Bishop, Christopher M.
In contrast to the maximum likelihood approach, which finds only a single estimate for the regression parameters, the Bayesian approach yields a distribution of weight parameters, p(w|D), conditional on the training data D, and predictions are expressed as averages over this distribution. (*Present address: SNN, University of Nijmegen, Geert Grooteplein 21, Nijmegen, The Netherlands.)