Equilibrium Propagation: the Quantum and the Thermal Cases

Serge Massar, Bortolo Matteo Mognetti

arXiv.org Artificial Intelligence 

Artificial neural networks have achieved impressive results on very disparate tasks, both in science and in everyday life. The bottleneck in the optimization of artificial neural networks is the learning procedure, i.e., the process through which the internal parameters of the model are adjusted to accomplish a desired task. The learning procedure used in the best networks today is gradient descent, in which the internal parameters are incrementally changed so as to improve performance, as measured by a cost function. In feedforward networks this procedure can be implemented efficiently using error backpropagation; in more complex networks it is implemented by backpropagation through time. Biological systems that learn do not seem to use error backpropagation, as the latter cannot be naturally performed by the internal dynamics of the system. A better understanding of biological learning systems may come from developing learning algorithms in which the two phases of the model (the neuronal dynamics and the learning dynamics) can be implemented using similar procedures, or even the same circuitry. Such approaches may also be particularly interesting for implementation in analog physical systems, where they could lead to improvements in speed or energy consumption. Quantum versions of neural networks, and of machine learning more generally, have recently attracted much attention, as they could offer improved performance over classical algorithms, see e.g.
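
For concreteness, the following is a minimal sketch of classical equilibrium propagation (Scellier and Bengio, 2017), the kind of two-phase learning scheme alluded to above, which this paper extends to the quantum and thermal cases. A free phase lets the network relax to an equilibrium of its energy function; a nudged phase relaxes again while weakly pulling the output toward the target with strength beta; the parameter update (1/beta) * (dE/dtheta at the nudged equilibrium - dE/dtheta at the free equilibrium) then approximates cost-gradient descent. The architecture, activation function, and hyperparameters below are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny layered network: clamped input x, hidden state h, output state o.
    n_in, n_hid, n_out = 4, 8, 2
    W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # input-to-hidden weights
    W2 = rng.normal(0.0, 0.1, (n_hid, n_out))  # hidden-to-output weights

    def rho(s):
        # Hard-sigmoid activation, as in the original equilibrium-propagation paper.
        return np.clip(s, 0.0, 1.0)

    def relax(x, h, o, y=None, beta=0.0, steps=60, dt=0.5):
        # Settle the state by gradient descent on the total energy F = E + beta*C,
        # with cost C = ||o - y||^2 / 2 (the rho' factor is dropped for simplicity).
        for _ in range(steps):
            dh = -h + rho(x) @ W1 + rho(o) @ W2.T   # ~ -dE/dh
            do = -o + rho(h) @ W2                   # ~ -dE/do
            if beta > 0.0:
                do += beta * (y - o)                # nudge output toward target
            h = h + dt * dh
            o = o + dt * do
        return h, o

    def ep_update(x, y, lr=0.05, beta=0.5):
        global W1, W2
        h0, o0 = relax(x, np.zeros(n_hid), np.zeros(n_out))   # free phase
        hb, ob = relax(x, h0, o0, y=y, beta=beta)             # nudged phase
        # Contrastive update: (1/beta) times the difference of local correlations
        # at the two equilibria estimates the cost gradient.
        W1 += (lr / beta) * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
        W2 += (lr / beta) * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))

    # One update on a random input/target pair.
    x, y = rng.random(n_in), rng.random(n_out)
    ep_update(x, y)

Note that both phases reuse the same relaxation dynamics, and the weight update depends only on locally available correlations. This is the property, highlighted in the abstract, that makes such schemes candidates for biological learning and for analog physical implementations.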
