Light-in-the-loop: using a photonics co-processor for scalable training of neural networks
Launay, Julien, Poli, Iacopo, Müller, Kilian, Carron, Igor, Daudet, Laurent, Krzakala, Florent, Gigan, Sylvain
As neural networks grow larger, more complex, and more data-hungry, training costs are skyrocketing. This may soon become unsustainable, especially when lifelong learning is required, as in recommender systems or self-driving cars. In this study, we present the first optical co-processor able to accelerate the training phase of digitally implemented neural networks. We rely on direct feedback alignment as an alternative to backpropagation, and perform the error projection step optically. Leveraging the optical random projections delivered by our co-processor, we demonstrate its use to train a neural network for handwritten digit recognition.
Jun-3-2020
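
For readers unfamiliar with direct feedback alignment (DFA), the abstract's key idea is that the output error is sent to each hidden layer through fixed random projections rather than backpropagated through the transposed forward weights; in the paper that random projection is carried out optically. Below is a minimal NumPy sketch of a DFA update under that reading. All names, dimensions, and the dense Gaussian feedback matrices `B1`/`B2` are illustrative assumptions standing in for the optical co-processor, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: flattened 28x28 digits, two hidden layers, 10 classes.
d_in, d_hid, d_out = 784, 256, 10
lr = 0.01

# Trainable forward weights.
W1 = rng.normal(0.0, 0.05, (d_hid, d_in))
W2 = rng.normal(0.0, 0.05, (d_hid, d_hid))
W3 = rng.normal(0.0, 0.05, (d_out, d_hid))

# Fixed random feedback matrices. In the paper, this random projection of the
# output error is the step performed optically; here it is simulated with
# dense Gaussian matrices (an assumption for the sketch).
B1 = rng.normal(0.0, 0.05, (d_hid, d_out))
B2 = rng.normal(0.0, 0.05, (d_hid, d_out))


def softmax(z):
    z = z - z.max(axis=0, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)


def dfa_step(x, y):
    """One DFA update on a batch; columns of x are samples, y is one-hot."""
    global W1, W2, W3

    # Forward pass with ReLU hidden layers.
    a1 = W1 @ x
    h1 = np.maximum(a1, 0.0)
    a2 = W2 @ h1
    h2 = np.maximum(a2, 0.0)
    y_hat = softmax(W3 @ h2)

    # Output error (softmax + cross-entropy).
    e = y_hat - y

    # DFA error projection: the output error reaches every hidden layer through
    # the fixed random matrices, instead of through the transposed forward weights.
    delta2 = (B2 @ e) * (a2 > 0)
    delta1 = (B1 @ e) * (a1 > 0)

    # Weight updates from the locally projected errors.
    n = x.shape[1]
    W3 -= lr * (e @ h2.T) / n
    W2 -= lr * (delta2 @ h1.T) / n
    W1 -= lr * (delta1 @ x.T) / n


# Toy usage on random placeholder data (shapes only, not MNIST).
x = rng.normal(size=(d_in, 32))
labels = rng.integers(0, d_out, size=32)
y = np.eye(d_out)[:, labels]
dfa_step(x, y)
```

Because the feedback matrices are fixed and independent of the forward weights, each layer's update depends only on its local activations and one shared projection of the output error, which is what makes the projection step amenable to a dedicated (here, photonic) co-processor.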