Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)

Neural Information Processing Systems

We study the average robustness notion in deep neural networks in (selected) wide and narrow, deep and shallow, as well as lazy and non-lazy training settings. We prove that in the under-parameterized setting, width has a negative effect, while it improves robustness in the over-parameterized setting. The effect of depth closely depends on the initialization and the training mode. In particular, under LeCun initialization, depth helps robustness in the lazy training regime; in contrast, under Neural Tangent Kernel (NTK) and He initialization, depth hurts robustness. Moreover, under the non-lazy training regime, we demonstrate how the width of a two-layer ReLU network benefits robustness. Our theoretical developments improve the results by [Huang et al.
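The three initialization schemes contrasted above differ mainly in how the weight variance (or output scale) depends on the layer's fan-in. As a rough illustration only, not the authors' code, here is a minimal PyTorch sketch of LeCun- and He-initialized layers and an NTK-parameterized layer for a ReLU network; the helper names are hypothetical.

```python
# Hypothetical sketch of the initialization schemes named in the abstract;
# scales follow the usual conventions, not code from the paper.
import math
import torch
import torch.nn as nn

def lecun_init_(linear: nn.Linear) -> None:
    # LeCun initialization: weights ~ N(0, 1 / fan_in)
    fan_in = linear.in_features
    nn.init.normal_(linear.weight, mean=0.0, std=math.sqrt(1.0 / fan_in))
    nn.init.zeros_(linear.bias)

def he_init_(linear: nn.Linear) -> None:
    # He initialization: weights ~ N(0, 2 / fan_in), the common choice for ReLU
    fan_in = linear.in_features
    nn.init.normal_(linear.weight, mean=0.0, std=math.sqrt(2.0 / fan_in))
    nn.init.zeros_(linear.bias)

class NTKLinear(nn.Module):
    # NTK parameterization: weights ~ N(0, 1), output rescaled by 1 / sqrt(fan_in)
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = 1.0 / math.sqrt(in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.scale * self.weight, self.bias)
```

All three produce the same pre-activation variance at initialization up to a constant; the difference is whether the scale lives in the sampled weights (LeCun, He) or in an explicit multiplier that also shapes the training dynamics (NTK).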




Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)

Neural Information Processing Systems

A plethora of aspects of robustness have been studied, ranging from algorithms to their initialization, as well as from the width of neural networks to their depth (i.e., the architecture).





A Training Regime

Neural Information Processing Systems

For the Spectral Mixture kernel, we use 4 mixtures. The CNF component of our model was inspired by FFJORD. For NGGP, we use the same CNF component architecture as for the sines dataset. Adding noise allows for better performance when learning with the CNF component. We also use the same CNF component architecture as for the sines dataset. For this dataset, we tested the NGGP and DKT models with the RBF and Spectral kernels only.
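For reference, the Spectral Mixture kernel with Q mixtures models the covariance as a weighted sum of Gaussians in the spectral domain. Below is a minimal 1-D NumPy sketch with Q = 4, purely as an illustration of the kernel form; the weights, means, and variances shown are placeholder values, not hyperparameters from the paper.

```python
# Minimal 1-D Spectral Mixture kernel sketch (illustrative, placeholder hyperparameters).
import numpy as np

def spectral_mixture_kernel(x1, x2, weights, means, variances):
    """k(tau) = sum_q w_q * exp(-2 * pi^2 * tau^2 * v_q) * cos(2 * pi * tau * mu_q)."""
    tau = x1[:, None] - x2[None, :]  # pairwise input differences
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2.0 * np.pi**2 * tau**2 * v) * np.cos(2.0 * np.pi * tau * mu)
    return k

# Example with 4 mixtures.
Q = 4
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, Q)    # mixture weights
mu = rng.uniform(0.0, 1.0, Q)   # spectral means (frequencies)
v = rng.uniform(0.01, 0.5, Q)   # spectral variances
x = np.linspace(0.0, 5.0, 50)
K = spectral_mixture_kernel(x, x, w, mu, v)  # 50 x 50 kernel matrix
```

In practice the mixture hyperparameters are learned by maximizing the GP marginal likelihood rather than fixed by hand as above.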



When Expressivity Meets Trainability: Fewer than n Neurons Can Work

Neural Information Processing Systems

Modern neural networks are often quite wide, causing large memory and computation costs. It is thus of great interest to train a narrower network. However, training narrow neural nets remains a challenging task. We ask two theoretical questions: Can narrow networks have as strong expressivity as wide ones?