
Single-Iteration Threshold Hamming Networks

Neural Information Processing Systems

Isaac Meilijson, Eytan Ruppin, Moshe Sipper. School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 69978 Tel Aviv, Israel. Abstract: We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns. The activation function of the memory neurons in the original Hamming network is replaced by a simple threshold function, yielding a threshold Hamming network (THN) that drastically reduces the time and space complexity of Hamming network classifiers. 1 Introduction: Originally presented in (Steinbuch 1961, Taylor 1964), the Hamming network (HN) has received renewed attention in recent years (Lippmann et al.). The HN calculates the Hamming distance between the input pattern and each memory pattern, and selects the memory with the smallest distance. It is composed of two subnets: the similarity subnet, consisting of an n-neuron input layer connected to an m-neuron memory layer, calculates the number of equal bits between the input and each memory pattern.
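To make the similarity/winner computation concrete, the following is a minimal NumPy sketch (my illustration, not the authors' implementation) of a Hamming classifier over ±1 patterns, plus a thresholded variant in which a memory neuron fires once its similarity exceeds a fixed threshold; the threshold value and tie-breaking rule here are placeholders.

```python
import numpy as np

def hamming_classify(memories, x):
    """Pick the stored memory closest in Hamming distance to input x.

    memories: (m, n) array of +/-1 patterns, x: length-n +/-1 vector.
    The 'similarity subnet' computes the number of equal bits; the
    winner-take-all stage is just an argmax here.
    """
    n = memories.shape[1]
    similarity = (memories @ x + n) // 2      # number of equal bits
    return int(np.argmax(similarity))

def threshold_hamming_classify(memories, x, theta):
    """Illustrative single-iteration thresholded variant: report a memory
    whose similarity exceeds the threshold theta, or None if none fires."""
    n = memories.shape[1]
    similarity = (memories @ x + n) // 2
    winners = np.flatnonzero(similarity >= theta)
    return int(winners[0]) if winners.size else None

# Tiny usage example: m = 3 random memories of length n = 16.
rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 16))
x = memories[1].copy()
x[:3] *= -1                                   # distort 3 of the 16 bits
print(hamming_classify(memories, x))          # most likely index 1
print(threshold_hamming_classify(memories, x, theta=12))
```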


Unsupervised Discrimination of Clustered Data via Optimization of Binary Information Gain

Neural Information Processing Systems

We present the information-theoretic derivation of a learning algorithm that clusters unlabelled data with linear discriminants. In contrast to methods that try to preserve information about the input patterns, we maximize the information gained from observing the output of robust binary discriminators implemented with sigmoid nodes. We derive a local weight adaptation rule via gradient ascent in this objective, demonstrate its dynamics on some simple data sets, relate our approach to previous work and suggest directions in which it may be extended.
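The paper derives the actual local adaptation rule; as a rough, hypothetical illustration of the underlying idea (rewarding a sigmoid discriminator for splitting unlabelled data into two confident groups), the sketch below does naive finite-difference gradient ascent on a surrogate objective, the entropy of the mean output minus the mean output entropy. This objective and the toy data are my assumptions, not the authors' rule.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def objective(w, X):
    """Surrogate 'binary information gain' of one sigmoid discriminator:
    H(mean output) - mean H(output). Large when the unit splits the data
    into two roughly equal, confidently separated groups (assumption)."""
    y = sigmoid(X @ w[:-1] + w[-1])
    return binary_entropy(y.mean()) - binary_entropy(y).mean()

def train(X, steps=500, lr=0.5, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1] + 1)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(w.size):                 # finite-difference gradient
            dw = np.zeros_like(w); dw[i] = eps
            grad[i] = (objective(w + dw, X) - objective(w - dw, X)) / (2 * eps)
        w += lr * grad                          # gradient ascent
    return w

# Two well-separated Gaussian clusters; the learned hyperplane should split them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(+2, 1, (100, 2))])
w = train(X)
print((sigmoid(X @ w[:2] + w[2]) > 0.5).astype(int)[:5], "...")
```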


A Note on Learning Vector Quantization

Neural Information Processing Systems

Vector Quantization is useful for data compression. Competitive Learning, which minimizes reconstruction error, is an appropriate algorithm for vector quantization of unlabelled data. Vector quantization of labelled data for classification has a different objective, to minimize the number of misclassifications, and a different algorithm is appropriate. We show that a variant of Kohonen's LVQ2.1 algorithm can be seen as a multiclass extension of an algorithm which in a restricted two-class case can be proven to converge to the Bayes optimal classification boundary. We compare the performance of the LVQ2.1 algorithm to that of a modified version having a decreasing window and normalized step size, on a ten-class vowel classification problem.
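For reference, here is a minimal NumPy sketch of the textbook LVQ2.1 update (the standard Kohonen formulation; the modified variant studied in the paper additionally shrinks the window and normalizes the step size): when an input lands inside a window around the midplane between its nearest correct and nearest incorrect prototypes, the correct one is attracted and the incorrect one repelled.

```python
import numpy as np

def lvq21_step(protos, proto_labels, x, label, alpha=0.05, window=0.3):
    """One LVQ2.1 update (textbook form, not necessarily the paper's variant).

    protos: (k, d) prototype vectors; proto_labels: (k,) class labels;
    x: input vector; label: its class. Returns the updated prototypes.
    """
    d = np.linalg.norm(protos - x, axis=1)
    correct = np.where(proto_labels == label)[0]
    wrong = np.where(proto_labels != label)[0]
    i = correct[np.argmin(d[correct])]     # nearest prototype of the right class
    j = wrong[np.argmin(d[wrong])]         # nearest prototype of a wrong class
    di, dj = d[i], d[j]
    s = (1 - window) / (1 + window)
    if min(di / (dj + 1e-12), dj / (di + 1e-12)) > s:   # x lies inside the window
        protos[i] += alpha * (x - protos[i])            # attract correct prototype
        protos[j] -= alpha * (x - protos[j])            # repel incorrect prototype
    return protos
```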


Directional-Unit Boltzmann Machines

Neural Information Processing Systems

University of Colorado, Boulder, CO 80309-0430. Abstract: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values in a cyclic range, between 0 and 2π radians. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. This combination of a value and a certainty provides additional representational power in a unit. Many kinds of information can naturally be represented in terms of angular, or directional, variables. A circular range forms a suitable representation for explicitly directional information, such as wind direction, as well as for information where the underlying range is periodic, such as days of the week or months of the year.
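To make the circular-state idea concrete, here is a small NumPy sketch (illustrative only, not the authors' network) that samples a directional unit's stochastic state from a von Mises distribution: the mean direction mu plays the role of the unit's value and the concentration kappa its certainty (kappa near 0 approaches a uniform circular distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_directional_unit(mu, kappa, size=1000):
    """Draw stochastic states (angles in [0, 2*pi)) of a directional unit whose
    conditional distribution is von Mises with mean direction mu and
    concentration kappa."""
    return rng.vonmises(mu, kappa, size) % (2 * np.pi)

# High certainty: samples cluster tightly around mu = pi/2.
angles = sample_directional_unit(mu=np.pi / 2, kappa=10.0)

# The resultant vector length is a standard measure of angular concentration:
# close to 1 for large kappa, near 0 for kappa ~ 0.
resultant = np.abs(np.mean(np.exp(1j * angles)))
print(round(resultant, 3))
```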


Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times

Neural Information Processing Systems

Genevieve B. Orr and Todd K. Leen, Department of Computer Science and Engineering, Oregon Graduate Institute of Science & Technology, 19600 N.W. von Neumann Drive, Beaverton, OR 97006-1999. Abstract: In stochastic learning, weights are random variables whose time evolution is governed by a Markov process. We summarize the theory of the time evolution of the weight-space probability density P, and give graphical examples of the time evolution that contrast the behavior of stochastic learning with true gradient descent (batch learning). Finally, we use the formalism to obtain predictions of the time required for noise-induced hopping between basins of different optima. We compare the theoretical predictions with simulations of large ensembles of networks for simple problems in supervised and unsupervised learning. Despite the recent application of convergence theorems from stochastic approximation theory to neural network learning (Oja 1982, White 1989), there remain outstanding questions about the search dynamics in stochastic learning.
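The kind of ensemble simulation referred to can be sketched in a few lines: evolve many independent copies of a scalar weight under noisy updates on a two-minimum cost, and histogram the ensemble at successive times to approximate the density P(w, t). Batch gradient descent started from the same point never hops between basins. The toy cost, noise model and constants below are my assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_cost(w):
    """Gradient of a two-well toy cost C(w) = (w**2 - 1)**2 / 4, minima at w = +/-1."""
    return w * (w**2 - 1)

def stochastic_ensemble(n_nets=5000, steps=2000, lr=0.05, noise=1.5):
    """Evolve an ensemble of scalar 'networks'; additive gradient noise stands in
    for the sampling noise of single-example updates."""
    w = np.full(n_nets, -1.0)                    # all start in the left basin
    densities = []
    for t in range(steps):
        g = grad_cost(w) + noise * rng.normal(size=n_nets)
        w -= lr * g
        if t % 500 == 0:                         # periodic snapshot of P(w, t)
            hist, _ = np.histogram(w, bins=50, range=(-2, 2), density=True)
            densities.append(hist)
    return w, densities

w_final, densities = stochastic_ensemble()
print("fraction that hopped to the right basin:", np.mean(w_final > 0))
# Noise-free (batch) descent from w = -1 stays at the left minimum forever.
```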


Global Regularization of Inverse Kinematics for Redundant Manipulators

Neural Information Processing Systems

When m < n, we say that the manipulator has redundant degrees-of-freedom (dof). The inverse kinematics problem is the following: given a desired workspace location x, find joint variables θ such that f(θ) = x. Even when the forward kinematics is known, the inverse kinematics for a manipulator is not generically solvable in closed form (Craig, 1986).
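As a concrete instance of this notation (my example, not taken from the paper): a planar 3-joint arm has n = 3 joint variables but only m = 2 workspace coordinates, so it is redundant, and a local inverse-kinematics solution can be found by iterating with the Jacobian pseudo-inverse.

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])       # link lengths of a planar 3-link arm (n=3, m=2)

def forward(theta):
    """Forward kinematics f(theta) = x: end-effector position in the plane."""
    phi = np.cumsum(theta)           # absolute link angles
    return np.array([np.sum(L * np.cos(phi)), np.sum(L * np.sin(phi))])

def jacobian(theta):
    """2x3 Jacobian of f; its null space reflects the redundant dof."""
    phi = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(phi[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(phi[i:]))
    return J

def solve_ik(x_target, theta0, iters=100, step=0.5):
    """Drive f(theta) toward x_target with damped pseudo-inverse Jacobian steps;
    this yields one local solution among the infinitely many that exist."""
    theta = theta0.copy()
    for _ in range(iters):
        err = x_target - forward(theta)
        theta += step * (np.linalg.pinv(jacobian(theta)) @ err)
    return theta

theta = solve_ik(np.array([1.5, 1.0]), theta0=np.array([0.1, 0.2, 0.3]))
print(forward(theta))    # should be close to [1.5, 1.0]
```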


Deriving Receptive Fields Using an Optimal Encoding Criterion

Neural Information Processing Systems

In unsupervised network learning, the development of the connection weights is influenced by statistical properties of the ensemble of input vectors, rather than by the degree of mismatch between the network's output and some 'desired' output. An implicit goal of such learning is that the network should transform the input so that salient features present in the input are represented at the output in a more useful form. This is often done by reducing the input dimensionality in a way that preserves the high-variance components of the input (e.g., principal component analysis, Kohonen feature maps). The principle of maximum information preservation ('infomax') is an unsupervised learning strategy that states (Linsker 1988): From a set of allowed input-output mappings (e.g., parametrized by the connection weights), choose a mapping that maximizes the (ensemble-averaged) Shannon information that the output vector conveys about the input vector, in the presence of noise.
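A minimal numeric illustration of the infomax principle in its simplest setting (a single linear unit with additive Gaussian output noise, an example I am supplying, not the paper's network): for a fixed weight norm, the information-maximizing weight vector is the direction of maximum output variance, i.e. the first principal component of the input ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D Gaussian input ensemble.
C = np.array([[3.0, 1.2],
              [1.2, 1.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

def channel_info(w, X, noise_var=0.1):
    """Shannon information (nats) a linear unit y = w.x + Gaussian noise conveys
    about Gaussian input x: 0.5 * log(1 + Var(w.x) / noise_var)."""
    return 0.5 * np.log(1.0 + np.var(X @ w) / noise_var)

# Infomax over unit-norm weights, found here by brute-force search over directions.
angles = np.linspace(0, np.pi, 361)
ws = np.stack([np.cos(angles), np.sin(angles)], axis=1)
best = ws[np.argmax([channel_info(w, X) for w in ws])]

# First principal component of the sample covariance (PCA direction).
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, -1]

print("infomax direction:          ", best)
print("first principal component:  ", pc1 * np.sign(pc1 @ best))   # the two agree
```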


Generic Analog Neural Computation - The EPSILON Chip

Neural Information Processing Systems

An analog CMOS VLSI neural processing chip has been designed and fabricated. The device employs "pulse-stream" neural state signalling, and is capable of computing some 360 million synaptic connections per second.


Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors

Neural Information Processing Systems

We propose a very simple and well-principled way of computing the optimal step size in gradient descent algorithms. The online version is very efficient computationally, and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimating the principal eigenvalue(s) and eigenvector(s) of the objective function's second derivative matrix (Hessian), which does not even require calculating the Hessian. Several other applications of this technique are proposed for speeding up learning, or for eliminating useless parameters. 1 INTRODUCTION: Choosing the appropriate learning rate, or step size, in a gradient descent procedure such as backpropagation is simultaneously one of the most crucial and most expert-intensive parts of neural-network learning. We propose a method for computing the best step size which is well-principled, simple, very cheap computationally, and, most of all, applicable to online training with large networks and data sets.
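A rough sketch of this kind of estimation (a simplified batch version, not the paper's online procedure): power iteration on Hessian-vector products, each obtained from two gradient evaluations by a finite difference, yields the principal eigenvalue without ever forming the Hessian; its inverse then suggests a gradient-descent step size.

```python
import numpy as np

def principal_hessian_eig(grad_fn, w, iters=50, eps=1e-5, seed=0):
    """Estimate the largest Hessian eigenvalue/eigenvector of an objective at w.

    Uses the finite-difference Hessian-vector product
        H v ~= (grad(w + eps * v) - grad(w)) / eps
    inside a power iteration, so the Hessian itself is never computed.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=w.shape)
    v /= np.linalg.norm(v)
    g0 = grad_fn(w)
    lam = 0.0
    for _ in range(iters):
        Hv = (grad_fn(w + eps * v) - g0) / eps
        lam = np.linalg.norm(Hv)
        v = Hv / lam
    return lam, v

# Toy quadratic objective C(w) = 0.5 * w.T A w with known eigenvalues.
A = np.diag([10.0, 2.0, 0.5])
grad_fn = lambda w: A @ w
lam, v = principal_hessian_eig(grad_fn, w=np.ones(3))
print(lam)           # ~10, the largest eigenvalue of A
print(1.0 / lam)     # a sensible step size eta ~ 1/lambda_max for gradient descent
```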


Computing with Almost Optimal Size Neural Networks

Neural Information Processing Systems

Artificial neural networks are comprised of an interconnected collection of certain nonlinear devices; examples of commonly used devices include linear threshold elements, sigmoidal elements and radial-basis elements. We employ results from harmonic analysis and the theory of rational approximation to obtain almost tight lower bounds on the size (i.e.
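For readers unfamiliar with the three device types named, the following tiny sketch (plain definitions of my own, unrelated to the size bounds themselves) shows a linear threshold element, a sigmoidal element and a radial-basis element acting on the same input.

```python
import numpy as np

def linear_threshold(x, w, b):
    """Linear threshold element: outputs 1 iff the weighted sum exceeds -b."""
    return 1 if w @ x + b > 0 else 0

def sigmoidal(x, w, b):
    """Sigmoidal element: smooth, graded version of the threshold element."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def radial_basis(x, c, width):
    """Radial-basis element: responds to the distance of x from a centre c."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * width ** 2))

x = np.array([0.5, -1.0])
print(linear_threshold(x, w=np.array([1.0, 1.0]), b=0.2))
print(sigmoidal(x, w=np.array([1.0, 1.0]), b=0.2))
print(radial_basis(x, c=np.zeros(2), width=1.0))
```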