Computed Decision Weights and a New Learning Algorithm for Neural Classifiers
In this paper we consider the possibility of computing, rather than training, the decision-layer weights of a neural classifier. Such a possibility arises in two ways: from making an appropriate choice of loss function, and from solving a problem of constrained optimization. The latter formulation leads to a promising new learning process for the pre-decision weights that is both simple and effective.
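To illustrate the first idea, one minimal sketch of "computing rather than training" a decision layer: given fixed pre-decision activations, the output weights can be obtained in closed form by ridge-regularized least squares against one-hot targets. This is an illustrative stand-in, not the paper's specific formulation; the function name and regularizer are assumptions.

```python
import numpy as np

def compute_decision_weights(H, Y, lam=1e-3):
    """Closed-form decision-layer weights via ridge-regularized least
    squares: W = (H^T H + lam * I)^{-1} H^T Y.

    H: (n_samples, n_hidden) pre-decision activations
    Y: (n_samples, n_classes) one-hot targets
    """
    d = H.shape[1]
    # Solve the normal equations directly instead of iterating with SGD.
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ Y)

# Toy usage: random hidden features, two classes.
rng = np.random.default_rng(0)
H = rng.normal(size=(100, 8))
Y = np.eye(2)[rng.integers(0, 2, size=100)]
W = compute_decision_weights(H, Y)
pred = (H @ W).argmax(axis=1)   # class decisions from the computed layer
```

The appeal is that no gradient steps are spent on the final layer at all; only the pre-decision weights would need iterative learning.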
Meta's new learning algorithm can teach AI to multi-task
If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both. Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others.
Data2vec is part of a big trend in AI toward models that can learn to understand the world in more than one way. "It's a clever idea," says Ani Kembhavi at the Allen Institute for AI in Seattle, who works on vision and language. "It's a promising advance when it comes to generalized systems for learning." An important caveat is that although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech.
Neko: a Library for Exploring Neuromorphic Learning Rules
Zhao, Zixuan, Wycoff, Nathan, Getty, Neil, Stevens, Rick, Xia, Fangfang
The field of neuromorphic computing is in a period of active exploration. While many tools have been developed to simulate neuronal dynamics or convert deep networks to spiking models, general software libraries for learning rules remain underexplored. This is partly due to the diverse, challenging nature of efforts to design new learning rules, which range from encoding methods to gradient approximations, from population approaches that mimic the Bayesian brain to constrained learning algorithms deployed on memristor crossbars. To address this gap, we present Neko, a modular, extensible library with a focus on aiding the design of new learning algorithms. We demonstrate the utility of Neko in three exemplar cases: online local learning, probabilistic learning, and analog on-device learning. Our results show that Neko can replicate state-of-the-art algorithms and, in one case, significantly outperform them in accuracy and speed. Further, it offers tools, including gradient comparison, that can help develop new algorithmic variants. Neko is an open-source Python library that supports PyTorch and TensorFlow backends.
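The gradient-comparison idea mentioned in the abstract can be sketched generically: score how closely a candidate learning rule's weight update tracks the exact gradient, e.g. by cosine similarity. This is a hypothetical stand-alone illustration of the diagnostic, not Neko's actual API.

```python
import numpy as np

def gradient_similarity(g_approx, g_true):
    """Cosine similarity between a candidate learning rule's update and
    the exact gradient; values near 1 mean the rule closely tracks
    gradient descent."""
    a, b = np.ravel(g_approx), np.ravel(g_true)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Compare a sign-based update (as in signSGD-style rules) against the
# exact gradient of a quadratic f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])
b = rng.normal(size=3)
x = rng.normal(size=3)
g_true = A @ x - b
g_sign = np.sign(g_true)        # the approximate "learning rule" update
sim = gradient_similarity(g_sign, g_true)
```

A diagnostic like this lets a rule designer quantify how far a biologically or hardware-motivated approximation drifts from the true gradient before trusting it on a larger task.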
A New Learning Algorithm for Blind Signal Separation
Amari, Shun-ichi, Cichocki, Andrzej, Yang, Howard Hua
A new online learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the online learning algorithm which has an equivariant property and is easily implemented on a neural-network-like model. The validity of the new learning algorithm is verified by computer simulations.