Beyond Uniform Scaling: Exploring Depth Heterogeneity in Neural Architectures
Akash Guna R.T., Arnav Chavan, Deepak Gupta
Conventional scaling of neural networks typically involves designing a base network and growing dimensions such as width and depth by predefined scaling factors. We introduce an automated scaling approach that leverages second-order loss-landscape information. Our method is flexible toward skip connections, a mainstay of modern vision transformers. Motivated by the hypothesis that not all neurons need uniform depth complexity, our approach embraces depth heterogeneity. Scaled networks demonstrate superior performance when trained from scratch on small-scale datasets. We introduce the first intact scaling mechanism for vision transformers, a step towards efficient model scaling. Scaling network architectures has been a crucial aspect of pushing the performance of deep learning models.
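The abstract does not spell out how the second-order loss-landscape information is computed, so the sketch below is an assumption, not the paper's method: it uses a Hutchinson-style estimate of the Hessian trace per block (a common curvature proxy in second-order analyses) as a hypothetical importance score for deciding which blocks to deepen. The function `hessian_trace` and the scoring rule are illustrative names introduced here.

```python
# A minimal sketch of second-order importance scoring, NOT the paper's exact
# algorithm: Hutchinson's estimator approximates trace(H) of the loss Hessian
# restricted to each block's parameters, a common loss-landscape curvature proxy.
import torch
import torch.nn as nn

def hessian_trace(loss, params, n_samples=8):
    """Hutchinson estimate of the Hessian trace over `params`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        # Rademacher probe vectors with +1/-1 entries.
        vs = [torch.randint_like(p, high=2, dtype=p.dtype) * 2 - 1 for p in params]
        # Hessian-vector product via a second differentiation pass.
        hvs = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((h * v).sum() for h, v in zip(hvs, vs)).item()
    return trace / n_samples

# Score each block of a toy network; a hypothetical scaling rule might add
# depth where curvature (and thus estimated sensitivity) is highest.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(64, 16), torch.randint(0, 10, (64,))
loss = nn.functional.cross_entropy(model(x), y)
scores = {
    name: hessian_trace(loss, list(mod.parameters()))
    for name, mod in model.named_children()
    if any(p.requires_grad for p in mod.parameters())
}
print(scores)
```

Under this reading, per-block curvature scores would let scaling decisions vary across the network rather than applying one uniform depth multiplier, which is consistent with the depth heterogeneity the abstract describes.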
arXiv.org Artificial Intelligence
Feb-19-2024