Generating Synthetic Datasets for Few-shot Prompt Tuning
Xu Guo, Zilin Du, Boyang Li, Chunyan Miao
A major limitation of prompt tuning is its dependence on large labeled training datasets. In few-shot settings, prompt tuning lags far behind full-model fine-tuning, which limits its scope of application. In this paper, we leverage powerful large language models (LLMs) to synthesize task-specific labeled data for training soft prompts. We first introduce a distribution-aligned weighted generator tuning (DawGen) method that encourages the generation of in-distribution data aligned with the few-shot real data. We then train soft prompts on both the synthetic and real datasets using a gradient surgery approach that removes conflicting gradients arising from the two data sources. Experiments on seven sentence-pair classification datasets demonstrate that the proposed method boosts prompt tuning in few-shot learning settings. Results on QQP, MRPC, and SICK are even comparable to transfer learning from large real-world datasets, showing the promise of synthetic data as an alternative for enhancing soft prompt tuning.
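The gradient surgery step mentioned above resolves conflicts between gradients computed on synthetic versus real data. The abstract does not spell out the exact procedure, so the following is a minimal PCGrad-style sketch (after Yu et al., 2020) of the general idea; the function name `gradient_surgery`, the flattened two-gradient setup, and the equal-weight averaging are illustrative assumptions, not the authors' implementation.

```python
import torch

def gradient_surgery(g_real: torch.Tensor, g_syn: torch.Tensor) -> torch.Tensor:
    """PCGrad-style conflict removal between two task gradients.

    If the synthetic-data gradient points against the real-data gradient
    (negative inner product), project the synthetic gradient onto the
    normal plane of the real gradient before combining. Both inputs are
    assumed to be flattened parameter gradients of the same shape.
    """
    dot = torch.dot(g_syn, g_real)
    if dot < 0:  # gradients conflict
        g_syn = g_syn - (dot / g_real.norm().pow(2)) * g_real
    return (g_real + g_syn) / 2  # simple average; weighting is a design choice

# Toy usage: the conflicting component of g_syn along g_real is removed.
g_real = torch.tensor([1.0, 0.0])
g_syn = torch.tensor([-1.0, 1.0])
print(gradient_surgery(g_real, g_syn))  # tensor([0.5000, 0.5000])
```

In a prompt-tuning loop, `g_real` and `g_syn` would be the soft-prompt gradients from a real-data batch and a synthetic-data batch, respectively, with the combined gradient used for the parameter update.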
arXiv.org Artificial Intelligence
Oct-7-2024