Task Diversity Shortens the ICL Plateau
Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu
arXiv.org Artificial Intelligence
In-context learning (ICL) describes a language model's ability to generate outputs based on a set of input demonstrations and a subsequent query. To understand this remarkable capability, researchers have studied simplified, stylized models. These studies have consistently observed long loss plateaus, during which models exhibit minimal improvement, followed by a sudden, rapid surge of learning. In this work, we reveal that training on multiple diverse ICL tasks simultaneously shortens the loss plateaus, making each task easier to learn. This finding is surprising, as it contradicts the natural intuition that the combined complexity of multiple ICL tasks would lengthen the learning process, not shorten it. Our result suggests that the recent success in large-scale training of language models may be attributed not only to the richness of the data at scale but also to the easier optimization (training) induced by the diversity of natural language training data.

Figure 1: We train a transformer from scratch on in-context learning tasks. Single-task ICL: training loss and test error/accuracy when each task is trained individually; the Parity task cannot be learned even after 1000k training steps. Multi-task ICL: training loss and test error/accuracy when all six tasks are trained simultaneously. Green lines mark the plateau escape points.
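To make the setup concrete, below is a minimal, hypothetical sketch of how multi-task ICL training prompts can be generated, assuming the common format of alternating demonstration inputs and targets followed by a query input. The task definitions (linear regression, parity), dimensions, and prompt lengths are illustrative stand-ins for the paper's six tasks, not its exact configuration.

    # Hypothetical sketch of multi-task ICL prompt generation.
    # Assumes the common (x_1, y_1, ..., x_k, y_k, x_query) prompt format;
    # task definitions and sizes below are illustrative, not the paper's exact setup.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM, N_DEMOS = 8, 16  # assumed input dimension and demonstrations per prompt


    def linear_regression_task(n, dim):
        """y = <w, x> with a fresh weight vector drawn per prompt."""
        w = rng.normal(size=dim)
        x = rng.normal(size=(n, dim))
        return x, x @ w


    def parity_task(n, dim, k=3):
        """y = parity (product of signs) of k secret coordinates of x."""
        secret = rng.choice(dim, size=k, replace=False)
        x = rng.choice([-1.0, 1.0], size=(n, dim))
        return x, np.prod(x[:, secret], axis=1)


    TASKS = [linear_regression_task, parity_task]  # stand-ins for the six tasks


    def sample_prompt():
        """Build one ICL prompt: alternate demo inputs/targets, end with a query."""
        task = TASKS[rng.integers(len(TASKS))]  # multi-task: sample a task per prompt
        x, y = task(N_DEMOS + 1, DIM)
        tokens = []
        for i in range(N_DEMOS):
            tokens.append(x[i])
            tokens.append(np.r_[y[i], np.zeros(DIM - 1)])  # pad target to token width
        tokens.append(x[-1])  # query input; the model is trained to predict y[-1]
        return np.stack(tokens), y[-1]

    prompt, target = sample_prompt()
    print(prompt.shape, target)  # e.g. (33, 8) and the query's label

In single-task ICL training, the task would be fixed once and for all; in multi-task ICL training, a task is resampled for every prompt, which is the only difference between the two regimes compared in Figure 1.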
Oct-7-2024