Efficient LLM Context Distillation

Upadhayayaya, Rajesh, Smith, Zachary, Kottmyer, Christopher, Osti, Manish Raj

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) demonstrate proficiency across diverse tasks but often require targeted adaptations for specific applications. Various methods have been proposed to facilitate this adaptation, including few-shot fine-tuning, in-context learning, and context distillation.

Given this constrained context window, context distillation (CD) extends accessible task-specific examples by internalizing them, greatly increasing the number of available examples outside of the query prompt [1]. This not only limits the
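The core idea of context distillation can be illustrated with a toy sketch: a "teacher" distribution comes from the model conditioned on in-context task examples, and a "student" without those examples is trained to match it, internalizing the context into its parameters. The sketch below is an illustrative assumption, not the paper's actual method; it stands in a toy softmax model for an LLM and all names (`teacher_logits`, `student_logits`) are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence KL(p || q) between two discrete distributions
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(0)
vocab = 8

# Teacher: next-token distribution of the model WITH task examples in the prompt
# (here just random logits standing in for that conditioned model).
teacher = softmax(rng.normal(size=vocab))

# Student: same model WITHOUT the examples; its logits are the trainable state.
student_logits = np.zeros(vocab)

lr = 0.5
for _ in range(200):
    student = softmax(student_logits)
    # Gradient of KL(teacher || student) w.r.t. the student logits is
    # simply (student - teacher) for a softmax parameterization.
    student_logits -= lr * (student - teacher)

# After distillation the student reproduces the teacher's behavior
# with no examples in its "prompt".
print(kl(teacher, softmax(student_logits)))
```

In a real CD setup the student update would be a gradient step on the LLM's weights rather than on bare logits, but the objective, matching the context-conditioned distribution without the context, is the same.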