Auto-Demo Prompting: Leveraging Generated Outputs as Demonstrations for Enhanced Batch Prompting
Longyu Feng, Mengze Hong, Chen Jason Zhang
–arXiv.org Artificial Intelligence
Batch prompting is a common technique in large language models (LLMs) for processing multiple inputs simultaneously, aiming to improve computational efficiency. However, as batch sizes increase, performance often degrades because the model struggles with lengthy context inputs. Existing methods that attempt to mitigate these issues rely solely on batch data arrangement and majority voting rather than improving the design of the batch prompt itself. In this paper, we address these limitations by proposing "Auto-Demo Prompting," a novel approach that leverages the question-output pairs of earlier questions within a batch as demonstrations for inferring answers to subsequent ones. We provide a formal theoretical analysis of how Auto-Demo Prompting functions within the autoregressive generation process of LLMs, illustrating how it utilizes prior outputs to optimize the model's internal representations. Experimental results across five NLP tasks demonstrate its effectiveness in mitigating performance degradation, and it occasionally outperforms single prompts. Furthermore, it opens new avenues for applying few-shot learning techniques, such as demonstration selection, within batch prompting, making it a robust solution for real-world applications.

Large language models (LLMs), such as GPT (Brown et al., 2020) and PaLM (Chowdhery et al., 2023), have demonstrated an extraordinary ability to perform in-context learning (ICL), where they utilize provided examples or contextual information to adapt and solve a wide range of downstream tasks. This capability enables LLMs to generalize from few-shot or even zero-shot examples without requiring task-specific fine-tuning, significantly enhancing their versatility across diverse applications (Song et al., 2023).
The success of ICL in these models highlights their potential as powerful tools for natural language processing and as adaptable frameworks for learning in dynamic, data-constrained environments, offering broader implications for machine learning and AI research.
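To make the idea concrete, the batch-prompt construction described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact template: the function name, instruction wording, and `Q{i}:` format are assumptions. The key point it demonstrates is that, because the prompt asks the model to restate each question before answering it, every completed question-answer pair sits in the context window as an in-context demonstration for the questions that follow during autoregressive generation.

```python
def build_auto_demo_prompt(questions, task_instruction):
    """Format a batch prompt in the spirit of Auto-Demo Prompting (sketch).

    Earlier answers, once generated, remain in context and act as
    demonstrations for later questions in the same batch.
    """
    lines = [
        task_instruction,
        "Answer the questions below one at a time, in order. "
        "Before each answer, restate the question, so that every "
        "completed question-answer pair can serve as a demonstration "
        "for the questions that follow.",
    ]
    # List all batched questions up front; the model then generates
    # restated-question/answer pairs sequentially.
    for i, q in enumerate(questions, start=1):
        lines.append(f"Q{i}: {q}")
    lines.append("Begin with Q1.")
    return "\n".join(lines)
```

A usage example: `build_auto_demo_prompt(["What is 2+2?", "What is 3+3?"], "Solve the arithmetic problems.")` yields a single prompt string in which Q1's eventual answer precedes (and conditions) the generation of Q2's answer.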
Oct-2-2024