Reviews: Limitations of Lazy Training of Two-layers Neural Networks
Neural Information Processing Systems
The analysis shows that the error of the random features (RF) model is always bounded away from zero unless the number of neurons N goes to infinity. Both the neural tangent (NT) model and the fully trained neural network (NN) achieve zero error once N is at least the input dimension d. The paper also shows that NN always achieves smaller error than NT, because NN learns to fit the most significant direction of the target, while NT can only fit the subspace spanned by random directions. The results of the paper are quite intuitive but, in my view, non-trivial. They provide clear evidence that, even for simple target functions, neural networks hold an advantage over random features models.
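To make the comparison concrete, here is a minimal numerical sketch (my own illustration, not the paper's experiment): fitting a single-direction quadratic target with (i) RF regression on frozen random features, (ii) regression on the neural-tangent features, and (iii) a two-layer ReLU network trained by gradient descent on both layers. The target f*(x) = ⟨w*, x⟩² and all hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: RF vs NT vs NN on a single-index quadratic target (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d, N, n, n_test = 20, 40, 2000, 500   # input dim, neurons, train/test sizes

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)      # single significant direction

def sample(m):
    X = rng.standard_normal((m, d)) / np.sqrt(d)
    return X, (X @ w_star) ** 2       # quadratic single-index target

X, y = sample(n)
Xt, yt = sample(n_test)

relu = lambda z: np.maximum(z, 0.0)
W = rng.standard_normal((N, d)) / np.sqrt(d)   # fixed random first layer

def ridge_fit(F, y, lam=1e-6):
    # least-squares fit of the trainable (second-layer / linearized) weights
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

# (i) RF: train only the second layer on frozen features relu(W x).
c_rf = ridge_fit(relu(X @ W.T), y)
err_rf = np.mean((relu(Xt @ W.T) @ c_rf - yt) ** 2)

# (ii) NT: linearize in the first-layer weights; the feature of neuron j at x
# is grad_{w_j} relu(<w_j, x>) = 1{<w_j, x> > 0} * x, stacked over neurons.
def nt_features(Z):
    return ((Z @ W.T > 0)[:, :, None] * Z[:, None, :]).reshape(len(Z), N * d)

c_nt = ridge_fit(nt_features(X), y)
err_nt = np.mean((nt_features(Xt) @ c_nt - yt) ** 2)

# (iii) NN: plain gradient descent on both layers under squared loss.
W2, a = W.copy(), rng.standard_normal(N) / np.sqrt(N)
lr = 0.5
for _ in range(5000):
    H = relu(X @ W2.T)                         # hidden activations, (n, N)
    r = H @ a - y                              # residuals, (n,)
    grad_a = (H.T @ r) / n
    grad_W = ((np.outer(r, a) * (X @ W2.T > 0)).T @ X) / n
    a -= lr * grad_a
    W2 -= lr * grad_W
err_nn = np.mean((relu(Xt @ W2.T) @ a - yt) ** 2)

print(f"test MSE  RF: {err_rf:.5f}  NT: {err_nt:.5f}  NN: {err_nn:.5f}")
```

Under these assumptions one would expect the RF test error to remain bounded away from zero while the NT and NN fits improve, with NN gaining further by adapting its first-layer weights toward w*, consistent with the summary above; the exact numbers depend on the chosen hyperparameters.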