Can LLMs Design Good Questions Based on Context?
Yueheng Zhang, Xiaoyuan Liu, Yiyou Sun, Atheer Alharbi, Hend Alzahrani, Basel Alomair, Dawn Song
This paper evaluates questions that LLMs generate from a given context, comparing them to human-written questions across six dimensions. We introduce an automated LLM-based evaluation method, focusing on aspects such as question length, type, context coverage, and answerability. Our findings highlight distinctive characteristics of LLM-generated questions, contributing insights that can support further research on question quality and downstream applications.
arXiv.org Artificial Intelligence
Jan-6-2025
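The automated evaluation the abstract describes can be pictured as an LLM-as-judge rubric applied to each generated question. The sketch below is only illustrative of that general idea and is not taken from the paper: the prompt wording, the `judge_question` helper, the scoring scale, and the model name (`gpt-4o-mini`) are all assumptions.

```python
# Illustrative sketch of a generic LLM-as-judge rubric for a context-grounded
# question, covering dimensions named in the abstract (length, type, context
# coverage, answerability). Prompt, model, and scale are assumptions, not the
# paper's actual method.
import json
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC_PROMPT = """You are rating a question written about the context below.
Return JSON with integer scores from 1 (poor) to 5 (excellent) for:
- "context_coverage": how much of the context the question draws on
- "answerability": whether the context alone suffices to answer it
Also return:
- "question_type": one of ["factual", "inferential", "summary", "other"]
- "length_words": the number of words in the question

Context:
{context}

Question:
{question}
"""

def judge_question(context: str, question: str, model: str = "gpt-4o-mini") -> dict:
    """Ask an LLM judge to score one question; returns the parsed JSON rubric."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": RUBRIC_PROMPT.format(context=context, question=question),
        }],
        response_format={"type": "json_object"},  # request well-formed JSON output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    ctx = "The Eiffel Tower was completed in 1889 for the Paris World's Fair."
    q = "In what year was the Eiffel Tower completed, and for what event?"
    print(judge_question(ctx, q))
```

Aggregating such per-question scores over an LLM-generated set and a human-written set would then allow side-by-side comparison along each dimension.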