Goto

Collaborating Authors: Ohira, Toru


A Neural Network model with Bidirectional Whitening

arXiv.org Machine Learning

We present a new model and algorithm that perform efficient natural gradient descent for multilayer perceptrons. Natural gradient descent was originally proposed from the viewpoint of information geometry, and it performs steepest-descent updates on manifolds in a Riemannian space. In particular, we extend the approach taken by the "whitened neural networks" model: we apply the whitening process not only in the feed-forward direction, as in the original model, but also in the back-propagation phase. The efficacy of the resulting "bidirectional whitened neural networks" model is demonstrated on the MNIST handwritten character recognition dataset.
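The abstract does not give the update equations, so the following is a minimal NumPy sketch of the idea as we read it: a ZCA transform decorrelates both the feed-forward activations and the back-propagated errors before they enter the weight update. All names, shapes, and the learning rate here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def zca_whitening_matrix(X, eps=1e-5):
    """ZCA whitening matrix for a (batch x units) matrix X."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    U, S, _ = np.linalg.svd(cov)          # cov is symmetric PSD
    return U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T

rng = np.random.default_rng(0)
A = rng.normal(size=(256, 32))            # hypothetical layer activations
Delta = rng.normal(size=(256, 32))        # hypothetical back-propagated errors

# Feed-forward whitening, as in the original whitened-neural-network model.
A_w = (A - A.mean(axis=0)) @ zca_whitening_matrix(A)
# Backward whitening: the bidirectional extension, as we read the abstract.
D_w = (Delta - Delta.mean(axis=0)) @ zca_whitening_matrix(Delta)

# A natural-gradient-flavoured weight update assembled from both whitened
# signals; the paper's exact update rule may differ.
lr = 0.01
dW = lr * A_w.T @ D_w / len(A_w)          # (32 x 32) update for this layer
```

Whitening both signal paths is what connects the scheme to natural gradient descent: the Fisher information matrix of a multilayer perceptron factorizes (approximately) into activation and error covariances, so decorrelating both sides approximates multiplying the gradient by the inverse Fisher matrix.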


Chases and Escapes, and Optimization Problems

arXiv.org Artificial Intelligence

We propose a new approach to solving combinatorial optimization problems that exploits the mechanism of chases and escapes, a topic with a long history in mathematics. In addition to the widely used steepest descent and neighborhood search, we run a chase-and-escape game on the "landscape" of the cost function. We have constructed a concrete algorithm for the Traveling Salesman Problem, and our preliminary tests indicate that this fusion of the chases-and-escapes problem with combinatorial optimization search is promising.
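The abstract does not specify the concrete algorithm, so the sketch below is one speculative reading of the idea: two search agents perform 2-opt local search on TSP tours, with an "escaper" that also accepts moves taking it away from the "chaser", and a chaser that also accepts moves pulling it toward the escaper. The overlap measure, acceptance thresholds, and function names are all assumptions made for illustration.

```python
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def random_2opt(tour):
    # Reverse a random segment: a standard 2-opt neighborhood move.
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def overlap(a, b):
    # Fraction of positions where two tours agree; a crude proximity proxy.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def chase_escape_tsp(dist, steps=20000, seed=0):
    random.seed(seed)
    n = len(dist)
    escaper = random.sample(range(n), n)
    chaser = random.sample(range(n), n)
    best = min([escaper, chaser], key=lambda t: tour_length(t, dist))
    for _ in range(steps):
        # Escaper: descend, but also accept moves (regardless of cost)
        # that clearly reduce overlap with the chaser -- it "flees".
        cand = random_2opt(escaper)
        if (tour_length(cand, dist) < tour_length(escaper, dist)
                or overlap(cand, chaser) < overlap(escaper, chaser) - 0.05):
            escaper = cand
        # Chaser: descend, but also accept moves (regardless of cost)
        # that clearly increase overlap with the escaper -- it "pursues".
        cand = random_2opt(chaser)
        if (tour_length(cand, dist) < tour_length(chaser, dist)
                or overlap(cand, escaper) > overlap(chaser, escaper) + 0.05):
            chaser = cand
        best = min([best, escaper, chaser],
                   key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)
```

The design intent, under this reading, is that the pursuit dynamics inject non-greedy moves that can carry the pair of agents out of local minima where plain 2-opt descent would stall.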




Stochastic Dynamics of Three-State Neural Networks

Neural Information Processing Systems

We present an analysis of the stochastic neurodynamics of a neural network composed of three-state neurons described by a master equation. An outer-product representation of the master equation is employed; in this representation, the extension of the analysis from two-state to three-state neurons is straightforward. We apply this formalism, together with approximation schemes, to a simple three-state network and compare the results with Monte Carlo simulations.
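The paper's analytic machinery (the outer-product representation of the master equation) is not reproduced here, but the Monte Carlo side of the comparison is easy to sketch. Below is a minimal simulation of three-state neurons with states in {-1, 0, +1}, assuming a heat-bath (Gibbs/softmax) transition rule in the local field; the coupling matrix, inverse temperature, and network size are illustrative choices, not the paper's.

```python
import numpy as np

def simulate_three_state(J, beta=1.0, steps=10000, seed=0):
    """Heat-bath Monte Carlo for three-state neurons s_i in {-1, 0, +1}.

    Each step picks a random neuron and resamples its state from a
    softmax over the three states, weighted by the local field. The
    master-equation formalism describes the same dynamics in terms of
    state-occupation probabilities.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    states = np.array([-1, 0, 1])
    s = rng.choice(states, size=n)
    for _ in range(steps):
        i = rng.integers(n)                # pick a random neuron
        h = J[i] @ s - J[i, i] * s[i]      # local field from the others
        p = np.exp(beta * states * h)
        p /= p.sum()
        s[i] = rng.choice(states, p=p)     # heat-bath update
    return s

# Usage: a small symmetric random network.
rng = np.random.default_rng(1)
J = rng.normal(scale=1 / np.sqrt(50), size=(50, 50))
J = (J + J.T) / 2
s = simulate_three_state(J)
print("mean activity:", s.mean())
```

Averaging observables such as the mean activity over many such runs gives the baseline against which the approximation schemes derived from the master equation are checked.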