Review for NeurIPS paper: Hardness of Learning Neural Networks with Natural Weights
Neural Information Processing Systems
Additional Feedback: In this paper the authors consider the following question: which aspects of a neural network make it work well in practice, given that learning neural networks is known to be theoretically hard? There has been much prior work in this direction, but most of it proves hardness of learning neural networks under contrived architectures or distributions. The main contribution of this paper is to consider "natural" distributions over the weights, including for example the uniform distribution and the normal distribution, and to show that learning neural networks remains hard even in this setting. Given how important neural networks are in practice, understanding their theoretical underpinnings is an important question in ML, and this paper identifies yet another regime in which learning NNs is hard. The main selling point is that prior works used arbitrary weights, chosen solely to prove lower bounds, whereas here the authors consider natural weights (under which one might have expected positive results) and show that even then, NNs are hard to learn.
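To make the setting concrete, here is a minimal sketch (my own illustration, not the paper's construction) of the kind of target the hardness result concerns: a depth-2 ReLU network whose weights are drawn i.i.d. from a "natural" distribution (standard normal), evaluated on inputs from a natural input distribution (uniform on the Boolean cube). The dimensions and the choice of distributions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, width = 20, 64
W = rng.standard_normal((width, n))   # hidden-layer weights drawn i.i.d. from N(0, 1)
v = rng.standard_normal(width)        # output-layer weights drawn i.i.d. from N(0, 1)

def net(x):
    """f(x) = v . ReLU(W x): a depth-2 network with random Gaussian weights."""
    return v @ np.maximum(W @ x, 0.0)

# Labeled examples under a natural input distribution (uniform on {0,1}^n);
# the hardness question asks whether such (x, f(x)) pairs suffice to learn f.
X = rng.integers(0, 2, size=(5, n)).astype(float)
y = np.array([net(x) for x in X])
print(y.shape)
```

The point of the sketch is only that nothing in the instance is adversarially chosen: both the weights and the inputs come from standard, symmetric distributions.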
Jan-21-2025, 09:49:34 GMT