PromptSuite: A Task-Agnostic Framework for Multi-Prompt Generation
Eliya Habba, Noam Dahan, Gili Lior, Gabriel Stanovsky
arXiv.org Artificial Intelligence
Evaluating LLMs with a single prompt has proven unreliable, with small changes leading to significant performance differences. However, generating the prompt variations needed for a more robust multi-prompt evaluation is challenging, limiting its adoption in practice. To address this, we introduce PromptSuite, a framework that enables the automatic generation of various prompts. PromptSuite is flexible - working out of the box on a wide range of tasks and benchmarks. It follows a modular prompt design, allowing controlled perturbations to each component, and is extensible, supporting the addition of new components and perturbation types. Through a series of case studies, we show that PromptSuite provides meaningful variations to support strong evaluation practices. All resources, including the Python API, source code, user-friendly web interface, and demonstration video, are available at: https://eliyahabba.github.io/PromptSuite/.
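The abstract describes a modular prompt design in which each component (e.g., instruction, separator, question format) can be perturbed independently to produce many prompt variants. The following is a minimal illustrative sketch of that idea, not the actual PromptSuite API; the component names and variant lists are hypothetical.

```python
import itertools

# Hypothetical components and surface variants (illustrative only, not
# the real PromptSuite interface). Each component carries a small set of
# controlled perturbations; the cross-product forms the prompt suite.
COMPONENTS = {
    "instruction": [
        "Answer the following question.",
        "Please respond to the question below.",
    ],
    "separator": ["\n\n", "\n---\n"],
    "question_prefix": ["Q: ", "Question: "],
}

def generate_prompt_suite(question, components):
    """Yield one prompt per combination of component variants."""
    keys = list(components)
    for variant in itertools.product(*(components[k] for k in keys)):
        chosen = dict(zip(keys, variant))
        yield (
            chosen["instruction"]
            + chosen["separator"]
            + chosen["question_prefix"]
            + question
        )

suite = list(generate_prompt_suite("What is 2 + 2?", COMPONENTS))
# 2 instructions x 2 separators x 2 prefixes -> 8 distinct prompts
```

Evaluating a model on every prompt in `suite` (rather than a single prompt) is what makes the resulting accuracy estimate robust to surface-level prompt changes.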
Sep-23-2025