OptiSeq: Optimizing Example Ordering for In-Context Learning
Bhope, Rahul Atul, Venkateswaran, Praveen, Jayaram, K. R., Isahagian, Vatche, Muthusamy, Vinod, Venkatasubramanian, Nalini
–arXiv.org Artificial Intelligence
The use of in-context learning (ICL) with large language models (LLMs) has become a popular approach to achieve impressive performance in many NLP tasks (Raffel et al., 2020; Radford et al., 2019). In ICL, models are prompted during inference with task-specific examples that help condition the generated output. Unlike fine-tuning, it does not require updates to the model parameters, which offers many benefits. A common approach to selecting examples at inference-time is to generate embeddings of candidate examples using a model like Sentence-BERT (Reimers, 2019) and retrieve the top-k most similar examples for a given test instance, ranking them based on distance or similarity. However, there is a distinction between ranking examples (determining how relevant they are to our test case) and ordering them (deciding how to arrange them in the prompt).
Jan-24-2025
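The embedding-based retrieval step described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the toy vectors stand in for embeddings that would in practice come from a model like Sentence-BERT, and the function name `top_k_examples` is our own. Note that the returned indices constitute a ranking by similarity; how the selected examples are then ordered in the prompt is a separate decision, which is the distinction the paper draws.

```python
import numpy as np

def top_k_examples(query_emb, candidate_embs, k=3):
    """Rank candidate examples by cosine similarity to the query
    embedding and return the indices of the top-k most similar."""
    # Normalize so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = c @ q
    # argsort on negated scores gives descending-similarity order.
    return np.argsort(-sims)[:k]

# Toy 2-D embeddings standing in for Sentence-BERT outputs.
candidates = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])
query = np.array([1.0, 0.1])
print(top_k_examples(query, candidates, k=2))  # indices of the 2 nearest
```

With these toy vectors the query is closest to candidate 0, then candidate 1, so the ranking returned is `[0, 1]`; reversing or shuffling that list before building the prompt would change the ordering without changing the ranking.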