Towards Understanding Hierarchical Learning: Benefits of Neural Representations
Neural Information Processing Systems
Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of these intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate \emph{neural representations} add flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that the neural representation can achieve an improved sample complexity over the raw input: for learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimensions, the neural representation requires only $\widetilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\widetilde{O}(d^{p-1})$.
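The setup described above can be sketched in a few lines: a one-layer network with frozen random weights serves as the representation map, and a trainable model is fit on top of its output rather than on the raw input. This is an illustrative toy only; for simplicity a linear head stands in for the paper's quadratic Taylor model of a wide two-layer network, and all dimensions, the learning rate, and the single-sample update are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 16, 256  # input dimension and representation width (illustrative sizes)

# Fixed, randomly initialized "neural representation": h(x) = ReLU(A x).
# A is drawn once and never trained.
A = rng.normal(size=(D, d)) / np.sqrt(d)

def neural_rep(x):
    return np.maximum(A @ x, 0.0)

# Trainable model on top of the representation (a linear head here,
# standing in for the quadratic Taylor model used in the paper).
v = np.zeros(D)

def predict(x):
    return v @ neural_rep(x)

# One gradient step on the squared loss for a single sample (x, y).
x, y = rng.normal(size=d), 1.0
lr = 1.0 / D
loss_before = 0.5 * (predict(x) - y) ** 2
v -= lr * (predict(x) - y) * neural_rep(x)  # only the head is updated; A stays fixed
loss_after = 0.5 * (predict(x) - y) ** 2
```

Only `v` is updated during training, matching the paper's regime in which the representation network is frozen at random initialization and all learning happens in the model fed with its output.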