OptiSeq: Optimizing Example Ordering for In-Context Learning

Bhope, Rahul Atul, Venkateswaran, Praveen, Jayaram, K. R., Isahagian, Vatche, Muthusamy, Vinod, Venkatasubramanian, Nalini

arXiv.org Artificial Intelligence 

The use of in-context learning (ICL) with large language models (LLMs) has become a popular approach to achieve impressive performance in many NLP tasks (Raffel et al., 2020; Radford et al., 2019). In ICL, models are prompted during inference with task-specific examples that help condition the generated output. Unlike fine-tuning, it does not require updates to the model parameters, which offers many benefits with ever-increasing model sizes.

A common approach to selecting examples at inference time is to generate embeddings of candidate examples using a model like Sentence-BERT (Reimers, 2019) and retrieve the top-k most similar examples for a given test instance, ranking them based on distance or similarity. However, there is a distinction between ranking examples (determining how relevant they are to our test case) and ordering them (deciding how to arrange them in the prompt).
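The retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings here are hypothetical toy vectors (in practice they would come from an encoder such as Sentence-BERT), and the function name `top_k_by_cosine` is invented for this example.

```python
import numpy as np

# Hypothetical embeddings of candidate ICL examples; in practice these
# would be produced by a sentence encoder such as Sentence-BERT.
candidate_embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.0],
    [0.7, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])
test_embedding = np.array([0.8, 0.2, 0.0])  # embedding of the test instance

def top_k_by_cosine(query, candidates, k=2):
    """Rank candidates by cosine similarity to the query and return
    the indices and scores of the top-k most similar ones."""
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per candidate
    ranked = np.argsort(-sims)        # most similar first
    return ranked[:k], sims[ranked[:k]]

indices, scores = top_k_by_cosine(test_embedding, candidate_embeddings, k=2)
print(indices)  # → [0 2]
```

Note that the retrieved indices only constitute a *ranking* by relevance; how those k examples are then *ordered* within the prompt (e.g., most-similar-first versus most-similar-last) is a separate decision, which is the distinction the paper draws.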