Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training
Huy Tuan Pham, Department of Mathematics, Stanford University
Phan-Minh Nguyen, The Voleon Group

Neural Information Processing Systems 

The mean field theory of multilayer neural networks centers around a particular infinite-width scaling, in which the learning dynamics is shown to be closely tracked by the mean field limit. A random fluctuation around this infinite-width limit is expected from a large-width expansion to the next order. This fluctuation has been studied only in the case of shallow networks, where previous works employ heavily technical notions or additional formulation ideas amenable only to that case. A treatment of the multilayer case has been missing, the chief difficulty lying in finding a formulation that captures the stochastic dependence across not only time but also depth. In this work, we initiate the study of the fluctuation in the case of multilayer networks, at any network depth.
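As an illustrative sketch (not the paper's construction), the mean-field scaling can be seen already in a shallow network f(x) = (1/n) Σᵢ aᵢ tanh(wᵢ x): the 1/n prefactor makes the output concentrate around its infinite-width limit, with a random fluctuation of order 1/√n at initialization. The code below, a minimal NumPy experiment with hypothetical parameter choices, estimates the standard deviation of f(x) over random initializations at two widths; the ratio should be close to √(10000/100) = 10.

```python
import numpy as np

def mf_net(x, n, rng):
    """Shallow network in mean-field scaling: f(x) = (1/n) * sum_i a_i * tanh(w_i * x)."""
    a = rng.standard_normal(n)  # second-layer weights, i.i.d. standard normal
    w = rng.standard_normal(n)  # first-layer weights, i.i.d. standard normal
    return a @ np.tanh(w * x) / n  # note the 1/n mean-field normalization

rng = np.random.default_rng(0)
x = 0.7
# Empirical std of f(x) over 2000 random initializations, at widths 100 and 10000.
stds = {n: np.std([mf_net(x, n, rng) for _ in range(2000)]) for n in (100, 10000)}
ratio = stds[100] / stds[10000]  # CLT-type fluctuation predicts std ~ 1/sqrt(n), so ratio ~ 10
```

The paper's contribution is precisely to extend this kind of next-order, O(1/√n) fluctuation analysis from the shallow setting sketched here to multilayer networks, where the fluctuation must be tracked jointly across time and depth.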
