Some Remarks on Replicated Simulated Annealing
Vincent Gripon, Matthias Löwe, Franck Vermet
In the past few years, there has been growing interest in methods for training neural networks with discrete weights. Indeed, when it comes to implementations, discrete weights yield greater efficiency, as they considerably simplify the multiply-accumulate operations, with the extreme case of binary weights, where no multiplication is needed at all. Unfortunately, training neural networks with discrete weights is difficult in practice, since it essentially boils down to an NP-hard optimization problem. To circumvent this difficulty, many works have introduced techniques that aim at finding reasonable approximations [7, 6, 24, 13]. Among these works, in a recent paper, Baldassi et al. [2] discuss the learning process in artificial neural networks with discrete weights and try to explain why these networks work so efficiently.
Sep-30-2020
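
Since the paper's topic is replicated simulated annealing, a minimal sketch may help fix ideas. The following Python snippet is an illustrative toy, not the authors' algorithm: R replicas of a binary weight vector evolve under single-flip Metropolis dynamics on a joint energy that adds each replica's classification loss to an attractive coupling term pulling the replicas together, while the inverse temperature beta and the coupling strength gamma are slowly increased. The binary-perceptron problem, the form of the coupling, and all parameters and schedules are assumptions chosen only to keep the example self-contained.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed for illustration): a binary perceptron.
# Find w in {-1,+1}^n that classifies random patterns (X, t);
# the loss counts misclassified patterns.
n, m, R = 21, 30, 3                          # weights, patterns, replicas
X = rng.choice([-1.0, 1.0], size=(m, n))
t = rng.choice([-1.0, 1.0], size=m)

def loss(w):
    # number of misclassified patterns under weight vector w
    return int(np.sum(np.sign(X @ w) != t))

def energy(ws, gamma):
    # sum of per-replica losses plus an attractive coupling that
    # rewards replicas for agreeing with their average -- the term
    # that distinguishes replicated from plain simulated annealing
    center = ws.mean(axis=0)
    return sum(loss(w) for w in ws) - gamma * float(np.sum(ws @ center))

ws = rng.choice([-1.0, 1.0], size=(R, n))    # one weight vector per replica
beta, gamma = 0.5, 0.05                      # inverse temperature, coupling
for step in range(20000):
    r, i = rng.integers(R), rng.integers(n)  # pick a replica and a weight
    e_old = energy(ws, gamma)
    ws[r, i] *= -1.0                         # propose a single sign flip
    delta = energy(ws, gamma) - e_old
    if delta > 0 and rng.random() >= np.exp(-beta * delta):
        ws[r, i] *= -1.0                     # reject: undo the flip
    beta *= 1.0002                           # anneal: lower the temperature
    gamma *= 1.0001                          # and tighten the coupling

print([loss(w) for w in ws])                 # per-replica errors after annealing

Recomputing the full energy at every step keeps the sketch short but is wasteful; a practical implementation would update the energy incrementally from the single flipped weight.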