Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model

Karsten Roth, Lukas Thede, Almut Sophia Koepke, Oriol Vinyals, Olivier Hénaff, Zeynep Akata

arXiv.org Artificial Intelligence 

Training deep networks requires various design decisions regarding, for instance, their architecture, data augmentation, or optimization. In this work, we find these training variations to result in networks learning unique feature sets from the data. Using public model libraries comprising thousands of models trained on canonical datasets like ImageNet, we observe that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other - independent of overall performance. Given any arbitrary pairing of pretrained models and no external rankings (such as separate test sets, e.g. due to data privacy), we investigate whether such "complementary" knowledge can be transferred from one model to another without performance degradation - a task made difficult by the fact that the additional knowledge may reside in stronger, equiperformant, or weaker models. Yet facilitating robust transfer in scenarios agnostic to pretrained model pairings would unlock auxiliary gains and knowledge fusion from any model repository, without restrictions on model and problem specifics - including from weaker, lower-performance models. This work therefore provides an initial, in-depth exploration of the viability of such general-purpose knowledge transfer. Across large-scale experiments, we first reveal the shortcomings of standard knowledge distillation techniques, and then propose a much more general extension through data partitioning for successful transfer between nearly all pretrained models, which we show can also be done unsupervised. Finally, we assess both the scalability and the impact of fundamental model properties on successful model-agnostic knowledge transfer.

Training neural networks on specific datasets has become a machine learning standard for tackling a myriad of research and industry challenges. It involves a large number of explicit and implicit decisions, ranging from architecture choices to specific optimization protocols, the particular choice of data augmentation, data sampling, and even the data ordering. In this work, we begin by highlighting the extent of this statement through extensive experiments, building on previous efforts of the research community that provide large and diverse, publicly accessible model libraries (e.g. timm (Wightman, 2019)). Doing so, we discover the consistent existence of significant complementary knowledge: information about the data that one model (referred to as the "teacher") holds that is not available in the other (the "student"). Interestingly, we find that complementary knowledge exists regardless of external performance rankings or factors such as model family (CNNs (LeCun and Bengio, 1995), Transformers (Dosovitskiy et al., 2021), MLPs (Tolstikhin et al., 2021)), and that it often aggregates in semantic areas of expertise. This means that significant knowledge about the data, unavailable to the student, can be found not only in stronger teachers (by some test-performance standard) but also in teachers of similar or even weaker performance.
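To make the notion of complementary knowledge concrete, the sketch below estimates, for an arbitrary pair of pretrained classifiers, the share of samples the teacher classifies correctly while the student does not. This is a minimal sketch rather than the paper's exact evaluation protocol; the model names and the (omitted) data loader are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact protocol): quantify
# "complementary knowledge" between two pretrained classifiers as the
# fraction of samples the teacher gets right while the student gets wrong.
import timm
import torch

@torch.no_grad()
def complementary_fraction(teacher, student, loader, device="cuda"):
    teacher.eval().to(device)
    student.eval().to(device)
    teacher_only, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        t_pred = teacher(images).argmax(dim=1)
        s_pred = student(images).argmax(dim=1)
        # Samples where the teacher is correct but the student is not.
        teacher_only += ((t_pred == labels) & (s_pred != labels)).sum().item()
        total += labels.numel()
    return teacher_only / total

# Example: an arbitrary pairing from a public model library (names illustrative).
teacher = timm.create_model("resnet50", pretrained=True)
student = timm.create_model("vit_base_patch16_224", pretrained=True)
# loader = <ImageNet validation DataLoader; note that each timm model may
#           expect its own input size and normalization>
# print(complementary_fraction(teacher, student, loader))
```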
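The proposed transfer mechanism extends standard knowledge distillation with data partitioning. The sketch below shows one plausible instantiation under stated assumptions: each sample is routed either to the teacher or to a frozen copy of the initial student (to retain the student's prior knowledge), here based on which reference model is more confident. The paper's exact partitioning criterion, losses, and hyperparameters may differ.

```python
# Hedged sketch of knowledge distillation with data partitioning. Assumption:
# per-sample routing by confidence between the teacher and a frozen copy of
# the initial student, so teacher knowledge is transferred where it helps
# while prior student knowledge is preserved elsewhere.
import torch
import torch.nn.functional as F

def dp_distill_step(student, teacher, frozen_student, images, optimizer, tau=2.0):
    with torch.no_grad():
        t_logits = teacher(images)
        f_logits = frozen_student(images)
        # Route each sample to whichever reference model is more confident.
        use_teacher = t_logits.softmax(-1).amax(-1) >= f_logits.softmax(-1).amax(-1)
        targets = torch.where(use_teacher.unsqueeze(-1), t_logits, f_logits)
    s_logits = student(images)
    # Temperature-scaled KL distillation toward the per-sample targets.
    loss = F.kl_div(
        F.log_softmax(s_logits / tau, dim=-1),
        F.softmax(targets / tau, dim=-1),
        reduction="batchmean",
    ) * tau**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# frozen_student is a snapshot of the student before transfer, e.g.
# copy.deepcopy(student).eval(), kept fixed throughout training.
```

Routing against a frozen student snapshot, rather than distilling from the teacher everywhere, is what allows transfer even from weaker or equiperformant teachers: the student only moves toward the teacher on samples where the teacher plausibly adds knowledge.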
