Desynchronous Learning in a Physics-Driven Learning Network

Jacob F. Wycoff, Sam Dillavou, Menachem Stern, Andrea J. Liu, and Douglas J. Durian

arXiv.org Artificial Intelligence 

Here we demonstrate that desynchronous implementation of coupled learning is effective in self-adjusting resistor networks, in both simulation and experiment. Furthermore, we show that desynchronous learning can actually improve performance by allowing the system to evolve indefinitely, escaping local minima. We draw a direct analogy between stochastic gradient descent and desynchronous learning, and show they have similar effects on the learning degrees of freedom in our system. Thus we are able to remove the final vestige of non-locality from our physics-driven learning network, moving it closer to biological implementations of learning. The ability to learn with entirely independent learning elements is expected to greatly improve the scalability of such physical learning systems.

Learning is a special case of memory [1, 2], where the goal is to encode targeted functional responses in a network [3-6]. Artificial Neural Networks (ANNs) are complex functions designed to achieve such targeted responses. These networks are trained by using gradient descent on a cost function, which evolves the system's parameters until a local minimum is found [7, 8]. Typically, this algorithm is modified such that subsections (batches) of data are used at each training step, effectively adding noise to the gradient calculation; this variant is known as Stochastic Gradient Descent (SGD) [9]. This algorithm produces more generalizable results [10-12], i.e. better performance on data not seen during training.
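To make the role of batch noise concrete, here is a minimal sketch of mini-batch SGD on a toy least-squares problem; the problem, step size, and batch size are illustrative choices, not taken from the paper. Each step estimates the gradient from a random subset of the data, so successive updates fluctuate around the full-gradient direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: recover w_true from noisy linear measurements.
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=256)

def batch_grad(w, idx):
    """Gradient of the mean-squared cost, evaluated only on the batch idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(8)
eta, batch_size = 0.05, 16
for step in range(2000):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # random batch
    w -= eta * batch_grad(w, idx)  # noisy estimate of the full gradient
```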
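The abstract's central claim, that coupled learning still works when each resistor updates on its own independent clock, can also be sketched in simulation. The following is a hypothetical minimal implementation, not the authors' code: a small resistor network is trained with a coupled-learning contrast rule, and each edge applies its update only when its own random clock fires (setting the firing probability to 1 would recover synchronous updates). The network size, node roles, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy network: a chain of N nodes plus random chords.
# Node 0 is held at 1 V, node 1 at 0 V (inputs); node N-1 is the output.
N = 8
edges = [(i, i + 1) for i in range(N - 1)]
edges += [(i, j) for i in range(N) for j in range(i + 2, N) if rng.random() < 0.3]
k = rng.uniform(0.5, 1.5, size=len(edges))  # conductances: the learning DOF

def solve(k, fixed):
    """Node voltages from Kirchhoff's laws, given fixed boundary voltages."""
    L = np.zeros((N, N))
    for (i, j), kij in zip(edges, k):
        L[i, i] += kij; L[j, j] += kij
        L[i, j] -= kij; L[j, i] -= kij
    free = [n for n in range(N) if n not in fixed]
    V = np.zeros(N)
    for n, v in fixed.items():
        V[n] = v
    rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
    V[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return V

inputs = {0: 1.0, 1: 0.0}
V_target = 0.3                 # desired free-state voltage at the output node
eta, gamma, p_fire = 0.2, 0.05, 0.3

for step in range(1000):
    VF = solve(k, inputs)                             # free state
    nudge = VF[N - 1] + eta * (V_target - VF[N - 1])  # nudge output to target
    VC = solve(k, {**inputs, N - 1: nudge})           # clamped state
    for e, (i, j) in enumerate(edges):
        if rng.random() < p_fire:          # desynchronous: independent clocks
            dPF = (VF[i] - VF[j]) ** 2     # per-edge, purely local quantities
            dPC = (VC[i] - VC[j]) ** 2
            k[e] = max(k[e] + (gamma / eta) * (dPF - dPC), 1e-3)
```

In this sketch the update for edge e depends only on the voltage drop across that edge in the two states, so each edge can act as a fully independent learning element; the random per-edge clock plays the same noise-injecting role as random batch selection does in SGD.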
