Investigating Cost-Efficiency of LLM-Generated Training Data for Conversational Semantic Frame Analysis
Shiho Matta, Yin Jou Huang, Fei Cheng, Hirokazu Kiyomaru, Yugo Murawaki
–arXiv.org Artificial Intelligence
Recent studies have demonstrated that few-shot learning enables LLMs to generate training data for supervised models at low cost. However, the quality of LLM-generated data may not fully match that of human-labeled data. This raises a crucial question: how should one balance the trade-off between higher-quality but more expensive human data and lower-quality yet substantially cheaper LLM-generated data? In this paper, we synthesized training data for conversational semantic frame analysis using GPT-4 and examined how to allocate budgets optimally to achieve the best performance. Our experiments, conducted at various budget levels, reveal that optimal cost-efficiency is achieved by combining human- and LLM-generated data across a wide range of budgets. Notably, as the budget decreases, a higher proportion of LLM-generated data becomes preferable.
Oct-9-2024