How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings
Shuaichen Chang, Eric Fosler-Lussier
arXiv.org Artificial Intelligence
Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task. Prior research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance their performance. However, these works often employ different strategies for constructing the prompt text from text-to-SQL inputs such as databases and demonstration examples. This makes both the prompt constructions and their primary contributions difficult to compare, and selecting an effective prompt construction remains a persistent problem for future research. To address these limitations, we comprehensively investigate the impact of prompt constructions across various settings and provide insights into prompt constructions for future text-to-SQL studies.
Nov-26-2023
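To make the notion of "prompt construction" concrete, here is a minimal, hypothetical sketch of one common zero-shot construction: the database is serialized as `CREATE TABLE` statements and concatenated with the natural-language question. The function names and the example schema are illustrative assumptions, not the paper's actual code or its recommended construction.

```python
# Hypothetical sketch of a zero-shot text-to-SQL prompt construction.
# The schema-serialization style (CREATE TABLE statements) is one of
# several constructions compared in studies like this; names here are
# illustrative assumptions.

def serialize_schema(tables):
    """Render a database schema dict as CREATE TABLE statements."""
    stmts = []
    for table, columns in tables.items():
        cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
        stmts.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(stmts)

def build_zero_shot_prompt(tables, question):
    """Combine the serialized schema with the question, ending at the
    start of the SQL query so the LLM completes it."""
    return (
        serialize_schema(tables)
        + "\n\n-- Using valid SQLite, answer the following question.\n"
        + f"-- Question: {question}\nSELECT"
    )

# Example: a toy single-table database and a question.
schema = {"singer": [("singer_id", "INT"), ("name", "TEXT"), ("age", "INT")]}
prompt = build_zero_shot_prompt(schema, "How many singers are older than 30?")
```

Other constructions studied in this line of work vary what goes into the schema text (e.g., column types, foreign keys, sample rows) and, in few-shot settings, how demonstration examples are selected and formatted.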