Rethinking the adaptive relationship between Encoder Layers and Decoder Layers

Song, Yubo

arXiv.org Artificial Intelligence 

In the field of machine learning, using pre-trained models to perform specific tasks is common practice. Typically, this involves fine-tuning the pre-trained model on a task-specific dataset through iterative adjustments without modifying the model structure. This article focuses on the state-of-the-art (SOTA) machine translation model Helsinki-NLP/opus-mt-de-en, which translates German to English, to explore the adaptive relationship between Encoder Layers and Decoder Layers by introducing a bias-free fully connected layer. Additionally, the study investigates the effects of modifying the pre-trained model structure during fine-tuning. Four experiments were conducted by introducing a bias-free fully connected layer between the Encoder and Decoder Layers. In the baseline configuration, the original pre-trained model weights are kept and the fully connected layer weights are initialized to reproduce the original connections, in which each Decoder Layer receives its input from the 6th (final) Encoder Layer; through fine-tuning, these weights then adapt toward an optimal configuration.
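The abstract does not spell out the layer's implementation, but the described setup can be sketched as follows: a bias-free weight matrix that mixes the outputs of all Encoder Layers into one representation per Decoder Layer, initialized so that every Decoder Layer initially sees only the 6th Encoder Layer. This is a minimal PyTorch sketch under those assumptions; the class name `EncoderDecoderMixer` and the tensor layout are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn


class EncoderDecoderMixer(nn.Module):
    """Hypothetical bias-free fully connected layer mapping the stack of
    Encoder Layer outputs to one mixed representation per Decoder Layer."""

    def __init__(self, num_encoder_layers: int = 6, num_decoder_layers: int = 6):
        super().__init__()
        # weight[d, e] = contribution of encoder layer e to decoder layer d.
        # Initialized so every decoder layer sees only the final (6th)
        # encoder layer, reproducing the original pre-trained connection.
        init = torch.zeros(num_decoder_layers, num_encoder_layers)
        init[:, -1] = 1.0
        self.weight = nn.Parameter(init)  # no bias term, as in the paper's setup

    def forward(self, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        # encoder_hidden_states: (num_encoder_layers, batch, seq_len, hidden)
        # Returns: (num_decoder_layers, batch, seq_len, hidden), i.e. one
        # cross-attention "memory" tensor per decoder layer.
        return torch.einsum("de,ebsh->dbsh", self.weight, encoder_hidden_states)
```

With this initialization the model behaves exactly like the unmodified pre-trained translator at the start of fine-tuning, and the learned mixing weights can then drift away from the identity-like pattern, which is what makes the Encoder-Decoder adaptation observable.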
