ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning
Shiguang Wu, Yaqing Wang, Yatao Bian, Quanming Yao
Meta-learning enables learning systems to adapt quickly to new tasks, much as humans do. To emulate this rapid, human-like learning and to strengthen alignment and discrimination abilities, we propose ConML, a universal meta-learning framework that can be applied to various meta-learning algorithms without relying on specific model architectures or target models. The core of ConML is task-level contrastive learning, which extends contrastive learning from the representation space of unsupervised learning to the model space of meta-learning. By leveraging task identity as an additional supervision signal during meta-training, we contrast the outputs of the meta-learner in the model space, minimizing inner-task distance (between models trained on different subsets of the same task) and maximizing inter-task distance (between models from different tasks). We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms, as well as with in-context learning, yielding performance improvements across diverse few-shot learning tasks.

Meta-learning, or "learning to learn" (Schmidhuber, 1987; Thrun & Pratt, 1998), is a powerful paradigm designed to enable learning systems to adapt quickly to new tasks. During the meta-training phase, a meta-learner simulates learning across a variety of related tasks to accumulate knowledge about how to adapt effectively.
arXiv.org Artificial Intelligence
Oct-14-2024
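To make the task-level contrastive idea concrete, below is a minimal PyTorch sketch of what such an objective might look like, assuming each model produced by the meta-learner can be vectorized (or embedded) into a common space. The function name, the InfoNCE-style formulation, and the temperature parameter are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def task_contrastive_loss(models_a: torch.Tensor,
                          models_b: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """Illustrative InfoNCE-style task-level contrastive loss.

    models_a, models_b: (num_tasks, dim) tensors, where row i is a
    vectorized model (or model embedding) the meta-learner produced from
    two different subsets of task i. Rows sharing an index form positive
    pairs (same task); all other rows act as negatives (different tasks).
    """
    a = F.normalize(models_a, dim=-1)
    b = F.normalize(models_b, dim=-1)
    # Pairwise cosine similarities between models in the model space.
    logits = a @ b.t() / temperature            # (num_tasks, num_tasks)
    # Diagonal entries are the same-task (positive) pairs.
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy pulls same-task models together (small inner-task
    # distance) and pushes different-task models apart (large inter-task
    # distance).
    return F.cross_entropy(logits, targets)

# Hypothetical usage: vectorized models adapted from two disjoint
# subsets of each of 8 meta-training tasks.
loss = task_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```

Because the loss operates only on whatever the meta-learner outputs in model space, a term like this could in principle be added to the meta-training objective of optimization-based, metric-based, or amortization-based methods without changing their architectures, which is the framework-agnostic property the abstract claims.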