Information Theoretic Lower Bounds for Feed-Forward Fully-Connected Deep Networks

Xiaochen Yang, Jean Honorio

arXiv.org, Machine Learning

In this paper, we study sample complexity lower bounds for the exact recovery of parameters and for a positive excess risk of a feed-forward, fully-connected neural network for binary classification, using information-theoretic tools. We prove these lower bounds via the existence of a generative network characterized by a backwards data-generating process, in which the input is generated based on the binary output and the network is parametrized by the weight parameters of the hidden layers. The sample complexity lower bound for the exact recovery of parameters is $\Omega(d r \log(r) + p)$, and the lower bound for a positive excess risk is $\Omega(r \log(r) + p)$, where $p$ is the dimension of the input, $r$ reflects the rank of the weight matrices, and $d$ is the number of hidden layers. To the best of our knowledge, our results are the first information-theoretic sample complexity lower bounds for this setting.
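As a rough illustration of how these bounds scale, the following Python sketch evaluates the two lower-bound expressions for a few hypothetical choices of $p$, $r$, and $d$. The function names and example sizes are assumptions made purely for illustration, and the constants hidden by $\Omega(\cdot)$ are omitted, so the numbers only convey scaling behaviour.

```python
# Illustrative sketch (not from the paper): evaluating the asymptotic
# lower-bound expressions Omega(d*r*log(r) + p) and Omega(r*log(r) + p)
# for hypothetical network sizes. Constants hidden by Omega(.) are omitted.
import math


def exact_recovery_lower_bound(p: int, r: int, d: int) -> float:
    """Scaling of the sample complexity lower bound for exact parameter
    recovery: d * r * log(r) + p (up to an unspecified constant)."""
    return d * r * math.log(r) + p


def excess_risk_lower_bound(p: int, r: int) -> float:
    """Scaling of the sample complexity lower bound for a positive
    excess risk: r * log(r) + p (up to an unspecified constant)."""
    return r * math.log(r) + p


if __name__ == "__main__":
    # Hypothetical example sizes: input dimension p, rank parameter r,
    # and number of hidden layers d.
    for p, r, d in [(100, 10, 2), (1000, 50, 4), (10000, 100, 8)]:
        print(f"p={p:>6}, r={r:>4}, d={d}: "
              f"exact recovery ~ {exact_recovery_lower_bound(p, r, d):,.0f}, "
              f"excess risk ~ {excess_risk_lower_bound(p, r):,.0f}")
```

Note that the exact-recovery bound grows with the depth $d$, while the excess-risk bound does not, which matches the two expressions stated in the abstract.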
