Recovering a Feed-Forward Net From Its Output

Charles Fefferman, Scott Markel

Neural Information Processing Systems 

We study feed-forward nets with arbitrarily many layers, using the standard sigmoid, tanh x. Aside from technicalities, our theorems are: 1. Complete knowledge of the output of a neural net for arbitrary inputs uniquely specifies the architecture, weights and thresholds; and 2. There are only finitely many critical points on the error surface for a generic training problem. Neural nets were originally introduced as highly simplified models of the nervous system. Today they are widely used in technology and studied theoretically by scientists from several disciplines. However, they remain little understood.
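To make the object of study concrete, the following is a minimal sketch (not from the paper) of the kind of map the theorems concern: a feed-forward net with tanh activations, evaluated layer by layer. The first theorem asserts that full knowledge of this input-output map determines the weight matrices and thresholds (up to the usual symmetries). The function and variable names here are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def feed_forward(x, weights, thresholds):
    """Evaluate a feed-forward net with tanh activations.

    weights: list of weight matrices, one per layer.
    thresholds: list of bias (threshold) vectors, one per layer.
    (Illustrative sketch; names are not the paper's notation.)
    """
    a = np.asarray(x, dtype=float)
    for W, t in zip(weights, thresholds):
        a = np.tanh(W @ a + t)  # affine map followed by the sigmoid tanh
    return a

# A tiny 2-2-1 net: the uniqueness theorem concerns recovering
# these W's and t's from the net's output on arbitrary inputs.
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
t1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.2]])
t2 = np.array([0.05])

y = feed_forward([0.4, -0.1], [W1, W2], [t1, t2])
```

Because tanh is odd and bounded, every output component lies strictly between -1 and 1; the symmetries referred to above include sign flips and permutations of hidden units, which leave the input-output map unchanged.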
