Rethinking the adaptive relationship between Encoder Layers and Decoder Layers
In machine learning, using pre-trained models for specific tasks is common practice. Typically this means fine-tuning the pre-trained model on a task-specific dataset through iterative weight adjustments, without modifying the model structure. This article focuses on the state-of-the-art (SOTA) German-to-English machine translation model Helsinki-NLP/opus-mt-de-en to explore the adaptive relationship between Encoder Layers and Decoder Layers, and it also investigates the effects of modifying the pre-trained model structure during fine-tuning. Four experiments were conducted, each introducing a bias-free fully connected layer between the Encoder and Decoder Layers: using the original pre-trained model weights and initializing the fully connected layer weights so as to reproduce the original connections, in which each Decoder Layer receives its input from the 6th Encoder Layer. Through fine-tuning, these weights then adapt toward an optimal configuration.
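A minimal PyTorch sketch of this kind of setup is shown below, assuming 6 encoder and 6 decoder layers (as in opus-mt-de-en) and a bias-free linear mixing of the encoder layer outputs per decoder layer. The class name EncoderLayerMixer and its interface are illustrative assumptions, not the paper's actual code; the key point is the initialization that makes every decoder layer start out reading only the 6th encoder layer, so the modified model is initially equivalent to the pre-trained one.

```python
import torch
import torch.nn as nn


class EncoderLayerMixer(nn.Module):
    """Hypothetical sketch: a bias-free fully connected layer that learns,
    for each Decoder Layer, a weighted combination of all Encoder Layer outputs."""

    def __init__(self, num_encoder_layers: int = 6, num_decoder_layers: int = 6):
        super().__init__()
        # One scalar weight per (decoder layer, encoder layer) pair, no bias term.
        self.mix = nn.Linear(num_encoder_layers, num_decoder_layers, bias=False)
        # Initialize to the original wiring: every decoder layer sees only the
        # last (6th) encoder layer, so training starts from the pre-trained
        # behaviour; fine-tuning then adapts these weights.
        with torch.no_grad():
            self.mix.weight.zero_()
            self.mix.weight[:, -1] = 1.0

    def forward(self, encoder_hidden_states):
        # encoder_hidden_states: list/tuple of per-layer encoder outputs,
        # each of shape (batch, seq_len, d_model).
        stacked = torch.stack(tuple(encoder_hidden_states), dim=-1)  # (B, T, D, E)
        mixed = self.mix(stacked)                                    # (B, T, D, Dec)
        # Return one encoder "memory" tensor per decoder layer.
        return mixed.unbind(dim=-1)


if __name__ == "__main__":
    # Dummy check: at initialization each decoder layer's memory equals
    # the 6th (last) encoder layer output.
    layer_outputs = [torch.randn(2, 7, 512) for _ in range(6)]
    memories = EncoderLayerMixer()(layer_outputs)
    assert torch.allclose(memories[0], layer_outputs[-1])
```

In this sketch the only new trainable parameters are the 6x6 mixing weights, so fine-tuning can reveal which encoder layers each decoder layer comes to rely on once the one-hot initialization is allowed to change.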
arXiv.org Artificial Intelligence
May-14-2024