Flexible Dataset Distillation: Learn Labels Instead of Images

Ondrej Bohdal, Yongxin Yang, Timothy Hospedales

arXiv.org Machine Learning 

Distillation is a topical area of neural network research that initially began with the goal of extracting the knowledge of a large pre-trained model and compiling it into a smaller model, while retaining similar performance (Hinton, Vinyals, and Dean 2014). The notion of distillation has since found numerous applications and uses, including the possibility of dataset distillation (Wang et al. 2018): extracting the knowledge of a large dataset and compiling it into a small set of carefully crafted examples, such that a model trained on them alone achieves good performance.

Practically, this leads to improved performance compared to prior image distillation approaches. As a byproduct, this enables a new kind of cross-dataset knowledge distillation (Figure 1). One can learn solely on a source dataset (such as English characters) with synthetic distilled labels, and apply the learned model to recognise concepts in a disjoint target dataset (such as Japanese characters). Surprisingly, it turns out that models can make progress on learning to recognise Japanese only through exposure to English characters with synthetic labels.
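To make the "learn labels instead of images" idea concrete, the following is a minimal sketch of label distillation: a small set of fixed base images is paired with learnable soft labels, which are optimised so that a model trained on those labelled images performs well on real data. The toy dimensions, linear model, single-step inner loop, and random stand-in data below are illustrative assumptions, not the authors' exact algorithm.

    # A minimal sketch of label distillation, loosely following the idea of
    # learning synthetic labels for fixed base images. Sizes, the one-step
    # inner loop, and the linear model are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    D_IN, N_CLASSES, N_BASE = 64, 10, 20      # assumed toy dimensions
    INNER_LR, OUTER_LR, STEPS = 0.1, 0.05, 200

    # Fixed base images and their learnable soft labels (as logits).
    base_x = torch.randn(N_BASE, D_IN)
    label_logits = torch.zeros(N_BASE, N_CLASSES, requires_grad=True)

    # Real data used only in the outer loop (random stand-ins here).
    target_x = torch.randn(256, D_IN)
    target_y = torch.randint(0, N_CLASSES, (256,))

    opt = torch.optim.Adam([label_logits], lr=OUTER_LR)

    for step in range(STEPS):
        # Fresh linear model each outer step; keeping the inner update
        # differentiable lets gradients flow back into the labels.
        W = torch.zeros(D_IN, N_CLASSES, requires_grad=True)

        # Inner step: train on base images with the current soft labels.
        soft_labels = label_logits.softmax(dim=1)
        inner_loss = F.cross_entropy(base_x @ W, soft_labels)
        (gW,) = torch.autograd.grad(inner_loss, W, create_graph=True)
        W_new = W - INNER_LR * gW             # one differentiable SGD step

        # Outer step: evaluate the adapted model on real data and update
        # the soft labels through the inner step.
        outer_loss = F.cross_entropy(target_x @ W_new, target_y)
        opt.zero_grad()
        outer_loss.backward()
        opt.step()

        if step % 50 == 0:
            print(f"step {step}: outer loss {outer_loss.item():.3f}")

In the cross-dataset setting described above, base_x would hold source-domain examples (e.g. English characters) while target_x and target_y would come from the task of interest, so the distilled labels encode knowledge about the target task rather than the source classes.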
