Automatic Prompt Selection for Large Language Models
Viet-Tung Do, Van-Khanh Hoang, Duy-Hung Nguyen, Shahab Sabahi, Jeff Yang, Hajime Hotta, Minh-Tien Nguyen, Hung Le
Large Language Models (LLMs) can perform various natural language processing tasks when given suitable instruction prompts. However, designing effective prompts manually is challenging and time-consuming. Existing methods for automatic prompt optimization lack either flexibility or efficiency. In this paper, we propose an effective approach to automatically select the optimal prompt for a given input from a finite set of synthetic candidate prompts. Our approach consists of three steps: (1) clustering the training data and generating candidate prompts for each cluster using an LLM-based prompt generator; (2) synthesizing a dataset of input-prompt-output tuples for training a prompt evaluator to rank the prompts based on their relevance to the input; (3) using the prompt evaluator to select the best prompt for a new input at test time. Our approach balances prompt generality and specificity and eliminates the need for resource-intensive training and inference. It demonstrates competitive performance on zero-shot question-answering datasets: GSM8K, MultiArith, and AQuA.
arXiv.org Artificial Intelligence
Apr-3-2024
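
The sketch below illustrates the data flow of the three-step pipeline described in the abstract, not the authors' implementation. The dataset, prompt templates, and helper names (`generate_candidate_prompts`, `select_best_prompt`) are assumptions made for illustration: the LLM-based prompt generator is replaced by a template stub, and the trained prompt evaluator by a TF-IDF cosine-similarity scorer.

```python
# Rough sketch of the three-step pipeline; all stand-ins are hypothetical.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Step 1: cluster the training inputs, then draft candidate prompts per cluster.
train_inputs = [
    "What is 12 * 7?",
    "Solve 3x + 5 = 20 for x.",
    "A train travels 60 km in 1.5 hours. What is its speed?",
    "If a shirt costs $20 after a 20% discount, what was the original price?",
]
X = TfidfVectorizer().fit_transform(train_inputs)
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def generate_candidate_prompts(cluster_examples):
    """Hypothetical stand-in for the paper's LLM-based prompt generator."""
    return [
        "Let's think step by step and show every arithmetic operation.",
        f"Questions like '{cluster_examples[0]}' need careful reasoning: "
        "restate the problem as an equation, then solve it step by step.",
    ]

candidate_prompts = []
for c in sorted(set(cluster_ids)):
    examples = [q for q, cid in zip(train_inputs, cluster_ids) if cid == c]
    candidate_prompts.extend(generate_candidate_prompts(examples))

# Steps 2-3: the paper trains a prompt evaluator on synthesized
# input-prompt-output tuples and uses it to rank prompts for a new input;
# this placeholder ranks candidates by lexical similarity instead.
def select_best_prompt(new_input, prompts):
    scorer = TfidfVectorizer().fit(prompts + [new_input])
    scores = cosine_similarity(
        scorer.transform([new_input]), scorer.transform(prompts)
    )[0]
    return max(zip(scores, prompts))[1]

print(select_best_prompt("What is 45 divided by 9?", candidate_prompts))
```

In the approach summarized above, the lexical scorer would be replaced by the learned prompt evaluator, so ranking reflects how well a prompt elicits correct outputs for inputs like the query rather than surface word overlap.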