Hardness of Learning Neural Networks with Natural Weights

Amit Daniely, Gal Vardi

arXiv.org Machine Learning 

Neural networks have revolutionized performance in multiple domains, such as computer vision and natural language processing, and have proven to be a highly effective tool for solving many challenging problems. This impressive practical success of neural networks is not well understood from a theoretical point of view. In particular, despite extensive research in recent years, it is not clear which models are learnable by neural-network algorithms. Historically, there have been many negative results for learning neural networks, and it is now known that, under certain complexity assumptions, it is computationally hard to learn the class of functions computed by a neural network, even if the architecture is very simple. Indeed, it has been shown that learning neural networks is hard already for networks of depth 2 [Klivans and Sherstov, 2006, Daniely and Shalev-Shwartz, 2016]. These results hold already for improper learning, namely, where the learning algorithm is allowed to return a hypothesis that does not belong to the considered hypothesis class. In recent years, researchers have considered several ways to circumvent the discrepancy between these hardness results and the empirical success of neural networks, namely, to understand which models are still learnable by neural-network algorithms.
