Learning to (Learn at Test Time)
Yu Sun, Xinhao Li, Karan Dalal, Chloe Hsu, Sanmi Koyejo, Carlos Guestrin, Xiaolong Wang, Tatsunori Hashimoto, Xinlei Chen
arXiv.org Artificial Intelligence
We reformulate the problem of supervised learning as learning to learn with two nested loops (i.e. learning problems). The inner loop learns on each individual instance with self-supervision before final prediction. The outer loop learns the self-supervised task used by the inner loop, such that its final prediction improves. Our inner loop turns out to be equivalent to linear attention when the inner-loop learner is only a linear model, and to self-attention when it is a kernel estimator. For practical comparison with linear or self-attention layers, we replace each of them in a transformer with an inner loop, so our outer loop is equivalent to training the architecture. When each inner-loop learner is a neural network, our approach vastly outperforms transformers with linear attention on ImageNet from 224×224 raw pixels in both accuracy and FLOPs, while (regular) transformers cannot run.

Test-time training (TTT) is an algorithmic framework for machine learning. The core idea is that each test instance defines its own learning problem, with its own target of generalization (Sun et al., 2020). Since the test instance comes without its label, TTT is performed with a self-supervised task such as reconstruction. Performance should improve on this particular instance for the self-supervised task, because that is the objective optimized by TTT. But will such a process lead to better performance for the main task we actually care about? If improvement for a self-supervised task transfers to a given main task, we say the two tasks are aligned (Sun et al., 2020). In prior work, task alignment has been an art, combining ingenuity with trial and error (Gandelsman et al., 2022; Wang et al., 2023). Crucially, the amount of ingenuity in task design does not scale with more data and compute.
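The nested-loop structure described above is concrete enough to sketch. Below is a minimal PyTorch toy, assuming one inner-loop gradient step on a linear learner and a learned linear projection as the self-supervised target; the names `task`, `head`, and `inner_loop` are illustrative assumptions, not the authors' code, and for brevity the sketch adapts on a small batch jointly rather than strictly per instance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_classes = 32, 10
task = nn.Linear(dim, dim)        # outer loop learns the self-supervised task
head = nn.Linear(dim, n_classes)  # outer loop also learns the main-task head
outer_opt = torch.optim.Adam(
    list(task.parameters()) + list(head.parameters()), lr=1e-3)

def inner_loop(x, inner_lr=0.1):
    """One self-supervised gradient step of a linear learner on input x."""
    W = torch.zeros(dim, dim, requires_grad=True)   # inner-loop linear model
    ss_loss = ((x @ W - task(x)) ** 2).mean()       # reconstruction-style loss
    (grad,) = torch.autograd.grad(ss_loss, W, create_graph=True)
    return W - inner_lr * grad                      # adapted weights

x = torch.randn(4, dim)                  # a batch of individual instances
y = torch.randint(0, n_classes, (4,))    # main-task labels (outer loop only)

W_adapted = inner_loop(x)                # inner loop: adapt before predicting
logits = head(x @ W_adapted)             # final prediction after adaptation
main_loss = F.cross_entropy(logits, y)   # outer loop: main-task objective
outer_opt.zero_grad()
main_loss.backward()                     # backpropagate through the inner loop
outer_opt.step()                         # improve the self-supervised task
```

Because `create_graph=True` keeps the inner step differentiable, the outer update trains the self-supervised task (and the head) so that prediction after inner-loop adaptation improves, which is the sense in which the outer loop can make the two tasks aligned rather than leaving alignment to manual task design.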
Jan-7-2024
- Country:
- Europe (0.28)
- North America > United States (0.28)
- Genre:
- Research Report (0.50)
- Industry:
- Education (1.00)