Deep Kernel Transfer in Gaussian Processes for Few-shot Learning

Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Amos Storkey

arXiv.org Machine Learning 

Here, we use the nomenclature derived from the meta-learning literature, which is the most prevalent at the time of writing. Let $\mathcal{S} = \{(\mathbf{x}_l, y_l)\}_{l=1}^{L}$ be a support set containing input-output pairs, with $L$ equal to one (1-shot) or five (5-shot), and let $\mathcal{Q} = \{(\mathbf{x}_m, y_m)\}_{m=1}^{M}$ be a query set (sometimes referred to in the literature as a target set), with $M$ typically one order of magnitude greater than $L$. For ease of notation, the support and query sets are grouped in a task $\mathcal{T} = \{\mathcal{S}, \mathcal{Q}\}$, with the dataset $\mathcal{D} = \{\mathcal{T}_n\}_{n=1}^{N}$ defined as a collection of such tasks. Models are trained on random tasks sampled from $\mathcal{D}$. Then, given a new task $\mathcal{T}^{*} = \{\mathcal{S}^{*}, \mathcal{Q}^{*}\}$ sampled from a test set, the objective is to condition the model on the samples of the support set $\mathcal{S}^{*}$ to estimate the membership of the samples in the query set $\mathcal{Q}^{*}$. In the most common scenario, the inputs $\mathbf{x} \in \mathcal{D}$ belong to the same distribution $p(\mathbf{x})$ and are distributed across training, validation, and test sets such that their class membership is non-overlapping. Note that $y$ can be a continuous value (regression) or a discrete one (classification), even though most previous work has focused on classification. We also consider the cross-domain scenario, where the inputs are sampled from different distributions at training and test time; this is more representative of real-world scenarios.
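To make the episodic setup concrete, the following is a minimal sketch of how a single task $\mathcal{T} = \{\mathcal{S}, \mathcal{Q}\}$ might be sampled for classification. The `data` dictionary, the function name `sample_task`, and the default sizes are illustrative assumptions, not details from the paper.

```python
import random

def sample_task(data, n_way=5, k_shot=1, m_query=16):
    """Build one task T = {S, Q} from `data`, a dict mapping each class
    label to a list of input arrays (an assumed container format).

    k_shot corresponds to L per class (1-shot or 5-shot), and m_query
    to the per-class query count, typically much larger than k_shot.
    """
    # Sample n_way classes for this episode.
    classes = random.sample(list(data.keys()), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        examples = random.sample(data[cls], k_shot + m_query)
        # Relabel classes as 0..n_way-1 within the episode, since class
        # sets are non-overlapping across training/validation/test splits.
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query  # S = {(x_l, y_l)}_{l=1}^{L}, Q = {(x_m, y_m)}_{m=1}^{M}
```

A training loop would then repeatedly call `sample_task` on the training split, condition the model on the returned support set, and compute the loss on the corresponding query set.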
