Similarity of Pre-trained and Fine-tuned Representations

Thomas Goerttler, Klaus Obermayer

arXiv.org Artificial Intelligence 

Representation similarity analysis shows that the most significant change still occurs in the head even if all weights are updatable. However, recent results from few-shot learning have shown that representation change in the early layers, which are mostly convolutional, is beneficial, especially in the case of cross-domain adaption. In our paper, we find out whether that also holds true for transfer learning. In addition, we analyze the change of representation in transfer learning, both during pre-training and fine-tuning, and find out that pre-trained structure is unlearned if not usable.

Oh et al. (2021) found out that, especially in the case of cross-domain adaption, where the fine-tuning task does not come from the same distribution as in training, an adaptation of earlier layers is also very beneficial. Neyshabur et al. (2020) investigated what is transferred in transfer learning by shuffling the blocks of inputs. They confirmed that lower layers are responsible for more general features and that a network with pre-trained weights stays in the same basin of solutions during fine-tuning.
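The layer-wise comparison described above rests on a representation similarity measure between the pre-trained and the fine-tuned network. A common choice for this kind of analysis is linear centered kernel alignment (CKA); whether that matches the authors' exact metric is an assumption, and the sketch below only illustrates how one layer's activations could be compared before and after fine-tuning (the variable names and the random stand-in data are hypothetical).

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: arrays of shape (n_samples, n_features); feature counts may differ.
    Returns a similarity score in [0, 1], where 1 means identical (up to
    orthogonal transformation and isotropic scaling).
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross- and self-similarity terms for linear kernels.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Example: compare one layer's activations before and after fine-tuning.
# Random data stands in for real activations here.
rng = np.random.default_rng(0)
acts_pretrained = rng.normal(size=(256, 64))                          # layer output, pre-trained net
acts_finetuned = acts_pretrained + 0.1 * rng.normal(size=(256, 64))   # same layer after fine-tuning
print(f"layer CKA: {linear_cka(acts_pretrained, acts_finetuned):.3f}")
```

Repeating this comparison for every layer of the network gives a per-layer similarity profile, which is how one can see whether the largest representation change sits in the head or in the earlier, mostly convolutional layers.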
