Circuit realization and hardware linearization of monotone operator equilibrium networks

Chaffey, Thomas

arXiv.org Artificial Intelligence 

It is shown that the port behavior of a resistor-diode network corresponds to the solution of a ReLU monotone operator equilibrium network (a neural network in the limit of infinite depth), giving a parsimonious construction of a neural network in analog hardware. We furthermore show that the gradient of such a circuit can be computed directly in hardware, using a procedure we call hardware linearization. This allows the network to be trained in hardware, which we demonstrate with a device-level circuit simulation. We extend the results to cascades of resistor-diode networks, which can be used to implement feedforward and other asymmetric networks. We finally show that different nonlinear elements give rise to different activation functions, and introduce the novel diode ReLU, which is induced by a non-ideal diode model.

The idea of building a neural network in analog hardware is classical [1]-[5]. Since the discovery of semiconductor devices with memristive properties [6], and in light of the growing energy intensiveness of machine learning systems, there has been a resurgence of interest in building devices which incorporate analog memristive components and are specially suited for deep learning applications [7], [8]. One of the primary advantages of such devices is that memristors, and similar elements such as phase change memory, act as both memory and computational units. This allows the transport delay between memory and computation to be circumvented. A particularly successful design is to arrange a number of memristors in a crossbar array, which can be used to perform a matrix-vector multiplication in a single operation [9]-[12].
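To make the equilibrium-network idea concrete, the sketch below computes the output of a ReLU equilibrium network as the fixed point of z = ReLU(Wz + Ux + b) by plain iteration. This is a minimal illustration, not the paper's construction: the weights W, U, b and the contraction-based convergence argument are assumptions chosen here so the iteration provably converges (monotone operator equilibrium networks use a more general parameterization and operator-splitting solvers).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical dimensions and weights, for illustration only.
n_in, n_hidden = 4, 8
U = rng.normal(size=(n_hidden, n_in))
b = rng.normal(size=n_hidden)

# Scale W so the map z -> relu(W z + U x + b) is a contraction
# (ReLU is 1-Lipschitz and ||W||_2 < 1), so the fixed point exists,
# is unique, and plain iteration converges to it.
W = rng.normal(size=(n_hidden, n_hidden))
W *= 0.9 / np.linalg.norm(W, 2)

def equilibrium(x, tol=1e-10, max_iter=10_000):
    """Iterate z <- relu(W z + U x + b) to its fixed point.

    This fixed point is what an "infinite-depth" network with tied
    weights computes: each iteration is one layer, and the output is
    the layer activations once they stop changing.
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = relu(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.normal(size=n_in)
z_star = equilibrium(x)
# z_star satisfies the equilibrium equation up to the tolerance.
assert np.allclose(z_star, relu(W @ z_star + U @ x + b), atol=1e-8)
```

In the circuit realization described in the abstract, the analog of the affine map Wz + Ux + b is played by the resistor network and the analog of the ReLU by the diodes, so the circuit's steady state plays the role of z_star.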