Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
Neural Information Processing Systems
We consider the problem of learning function classes computed by neural networks with various activations (e.g. ReLU or Sigmoid), a task believed to be computationally intractable in the worst case. A major open problem is to understand the minimal assumptions under which these classes admit provably efficient algorithms. In this work we show that a natural distributional assumption corresponding to eigenvalue decay of the Gram matrix yields polynomial-time algorithms in the non-realizable setting for expressive classes of networks (e.g.
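The distributional assumption at the heart of the abstract, eigenvalue decay of the Gram matrix, can be probed empirically. The sketch below (an illustration of the concept, not the paper's algorithm; the Gaussian kernel, bandwidth, and sample distribution are assumptions for the example) computes the spectrum of a kernel Gram matrix over a sample and checks that the eigenvalues decay:

```python
import numpy as np

# Draw a sample from an assumed distribution (standard Gaussian here).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))

# Gaussian (RBF) kernel Gram matrix: K[i, j] = exp(-||x_i - x_j||^2 / 2).
# The kernel choice and bandwidth are illustrative assumptions.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2.0)

# Eigenvalues of the (symmetric PSD) Gram matrix, in decreasing order.
# Fast decay means the kernel's feature space is effectively
# low-dimensional on this distribution -- the kind of condition under
# which efficient learning becomes possible.
eigvals = np.linalg.eigvalsh(K)[::-1]
print(eigvals[:5])
```

Plotting `eigvals` on a log scale makes the decay rate visible; how fast the tail of this spectrum shrinks is exactly what the paper's assumption quantifies.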