Meta-learning approaches for few-shot learning: A survey of recent advances

Hassan Gharoun, Fereshteh Momenifar, Fang Chen, Amir H. Gandomi

arXiv.org Artificial Intelligence 

Humans possess the extraordinary capability of learning a new concept after only minimal observation. For instance, a child can distinguish a dog from a cat from a single picture [1]. This critical characteristic of human intelligence lies in humans' ability to transfer knowledge gained from prior experiences to unforeseen circumstances with few observations. Unlike the human learning paradigm, traditional machine learning (ML) and deep learning (DL) models are trained on a specific task from scratch through: (a) a training phase, in which a model is randomly initialized and then updated, and (b) a test phase, in which the model is evaluated. While ML and DL have achieved remarkable success in a wide range of applications, they are notorious for requiring a huge number of samples to generalize. In many real-world problems, collecting more data is costly, time-consuming, and may not even be feasible due to physical system constraints [2]. Moreover, most ML and DL models presume that the training and testing datasets share the same distribution [3]. Thus, their performance suffers under distribution shift [4].
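The conventional train-from-scratch protocol described above can be sketched as follows. This is a minimal toy example on hypothetical synthetic data (a noisy linear task), not any specific model from the survey: parameters are randomly initialized, updated by gradient steps during training, and then evaluated on a held-out test set drawn from the same distribution, which is exactly the assumption that breaks under distribution shift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic task: y = 2x + 1 with observation noise.
X_train = rng.uniform(-1, 1, size=100)
y_train = 2 * X_train + 1 + 0.1 * rng.standard_normal(100)
X_test = rng.uniform(-1, 1, size=20)   # same distribution as training
y_test = 2 * X_test + 1

# (a) Training phase: the model is randomly initialized, then updated.
w, b = rng.standard_normal(), rng.standard_normal()
lr = 0.1
for _ in range(500):
    err = (w * X_train + b) - y_train
    w -= lr * np.mean(err * X_train)   # gradient step on the weight
    b -= lr * np.mean(err)             # gradient step on the bias

# (b) Test phase: the trained model is evaluated on held-out data.
mse = np.mean((w * X_test + b - y_test) ** 2)
```

Note that the loop needs all 100 training samples to drive the error down; with only a handful of samples per task, as in the few-shot setting this survey addresses, such from-scratch training generalizes poorly.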
