On Meta-Prompting
de Wynter, Adrian, Wang, Xun, Gu, Qilong, Chen, Si-Qing
arXiv.org Artificial Intelligence
Certain statistical models are capable of interpreting input strings as instructions, or prompts, and of carrying out tasks based on them. Many approaches to prompting and pre-training these models involve the automated generation of these prompts. We call these approaches meta-prompting, or prompting to obtain prompts. We propose a theoretical framework based on category theory to generalize and describe them. This framework is flexible enough to account for LLM stochasticity, and it allows us to obtain formal results around task-agnosticity and the equivalence of various meta-prompting approaches. We experiment with meta-prompting in two active areas of model research: creativity and ideation. We find that user preference significantly favors (p < 0.01) the prompts generated under meta-prompting, as well as their corresponding outputs, over a series of hardcoded baseline prompts that include the original task prompt. Using our framework, we argue that meta-prompting is more effective than basic prompting at generating desirable outputs.
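The core idea, prompting a model to obtain the prompt that is then applied to the task, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `complete` function is a hypothetical stand-in for any LLM call, and the wording of the meta-prompt is an assumption.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to a language model; replace with a real client."""
    raise NotImplementedError


def meta_prompt(task_description: str) -> str:
    # Step 1: prompt the model to write a prompt for the downstream task.
    meta = (
        "Write a detailed, self-contained prompt that would make a language model "
        f"perform the following task well:\n\n{task_description}"
    )
    return complete(meta)


def run_with_meta_prompting(task_description: str, task_input: str) -> str:
    # Step 2: use the generated prompt, rather than a hardcoded one, on the actual input.
    generated_prompt = meta_prompt(task_description)
    return complete(f"{generated_prompt}\n\nInput: {task_input}")
```

In the paper's experiments, prompts produced this way (and their outputs) are compared against hardcoded baseline prompts, including the original task prompt.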
Dec-11-2023
- Country:
- Europe > United Kingdom > England (0.14)
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (1.00)
- Technology: