PartialFormer: Modeling Part Instead of Whole
Tong Zheng, Bei Li, Huiwen Bao, Weiqiao Shan, Tong Xiao, Jingbo Zhu
arXiv.org Artificial Intelligence
The design choices in Transformer feed-forward neural networks (FFNs) have resulted in significant computational and parameter overhead. In this work, we emphasize the importance of the hidden dimension in designing lightweight FFNs, a factor often overlooked in previous architectures. Guided by this principle, we introduce PartialFormer, a parameter-efficient Transformer architecture that uses multiple smaller FFNs to reduce parameters and computation while maintaining essential hidden dimensions. These smaller FFNs are integrated into a multi-head attention mechanism to enable effective collaboration. We also propose a tailored head scaling strategy to enhance PartialFormer's capabilities. Furthermore, we present a residual-like attention calculation to improve depth scaling within PartialFormer. Extensive experiments on 9 translation tasks and 1 abstractive summarization task validate the effectiveness of our PartialFormer approach. Our code will be available at: \url{https://github.com/zhengkid/PartialFormer}.
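The core idea described in the abstract, replacing one wide FFN over the full model dimension with several smaller FFNs attached to the attention heads, can be illustrated with a minimal PyTorch sketch. This is a hedged toy implementation under stated assumptions, not the authors' released code: the module names (`PartialFFNHead`, `PartialFormerLayer`) and the dimensions (`d_model=512`, `n_heads=8`, `d_hidden=512`) are illustrative, and the paper's head scaling strategy and residual-like attention calculation are omitted.

```python
import torch
import torch.nn as nn


class PartialFFNHead(nn.Module):
    """Small per-head FFN (illustrative sketch, not the authors' code).

    Instead of one large FFN over d_model, each attention head gets a
    lightweight FFN over its head dimension d_head = d_model // n_heads,
    while keeping the hidden (expansion) dimension relatively large.
    """

    def __init__(self, d_head: int, d_hidden: int):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(d_head, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, d_head),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ffn(x)


class PartialFormerLayer(nn.Module):
    """Toy layer: multi-head attention whose per-head outputs each pass
    through their own small FFN before being concatenated back together.
    All sizes here are hypothetical defaults for illustration only.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_hidden: int = 512):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_head = d_model // n_heads
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head_ffns = nn.ModuleList(
            PartialFFNHead(self.d_head, d_hidden) for _ in range(n_heads)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)              # (B, T, d_model)
        heads = attn_out.split(self.d_head, dim=-1)   # n_heads tensors of (B, T, d_head)
        mixed = torch.cat(
            [ffn(h) for ffn, h in zip(self.head_ffns, heads)], dim=-1
        )
        return self.norm(x + mixed)                   # residual connection


if __name__ == "__main__":
    layer = PartialFormerLayer()
    out = layer(torch.randn(2, 16, 512))              # batch of 2, length 16
    print(out.shape)                                  # torch.Size([2, 16, 512])
```

For these illustrative sizes, the parameter saving is easy to see: a standard FFN with hidden size 2048 needs roughly 512 × 2048 × 2 ≈ 2.1M weights, whereas eight per-head FFNs with hidden size 512 need 8 × 64 × 512 × 2 ≈ 0.52M, while each head still works with a hidden dimension much larger than its 64-dimensional input.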
Oct-23-2023