Concentration inequalities and optimal number of layers for stochastic deep neural networks
Michele Caprio, Sayan Mukherjee
arXiv.org Artificial Intelligence
We state concentration inequalities for the output of the hidden layers of a stochastic deep neural network (SDNN), as well as for the output of the whole SDNN. These results allow us to introduce an expected classifier (EC) and to give a probabilistic upper bound on the classification error of the EC. We also derive the optimal number of layers for the SDNN via an optimal stopping procedure. We apply our analysis to a stochastic version of a feedforward neural network with ReLU activation function.
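The abstract does not fix a particular noise model, so the following is a minimal sketch of one plausible reading: a feedforward ReLU network whose pre-activations receive additive Gaussian noise at each layer. All names, dimensions, and the noise level sigma are illustrative assumptions, not the paper's construction; the Monte Carlo loop at the end simply estimates the spread of the stochastic output, the quantity that concentration inequalities of this kind control.

```python
# Sketch of a stochastic feedforward ReLU network (assumed noise model:
# per-layer additive Gaussian noise on the pre-activations).
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sdnn_forward(x, weights, biases, sigma=0.1):
    """Propagate x through stochastic layers.

    Each layer computes ReLU(W h + b + eps), with eps ~ N(0, sigma^2 I)
    drawn independently per pass (an assumption, for illustration).
    Returns the list of hidden-layer outputs and the final output.
    """
    h = x
    hidden_outputs = []
    for W, b in zip(weights, biases):
        eps = sigma * rng.standard_normal(W.shape[0])
        h = relu(W @ h + b + eps)
        hidden_outputs.append(h)
    return hidden_outputs, h

# Usage: repeat the stochastic forward pass and measure how far the
# output deviates from its empirical mean.
dims = [8, 16, 16, 4]
weights = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
           for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
x = rng.standard_normal(dims[0])

samples = np.stack([sdnn_forward(x, weights, biases)[1] for _ in range(1000)])
print("mean output:", samples.mean(axis=0))
print("max deviation from mean:", np.abs(samples - samples.mean(axis=0)).max())
```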
Mar-20-2023