Evaluating LLMs with Multiple Problems at once: A New Paradigm for Probing LLM Capabilities
Wang, Zhengxiang, Kodner, Jordan, Rambow, Owen
arXiv.org Artificial Intelligence
Current LLM evaluation predominantly relies on prompts that each pose a single problem. We propose multi-problem evaluation as an additional approach for studying how well LLMs handle multiple problems at once. To this end, we systematically examine 7 LLMs on 4 related task types constructed from 6 classification benchmarks: traditional single-problem tasks, homogeneous multi-problem tasks, and two index selection tasks that embed the multi-problem tasks. We find that LLMs are competent multi-problem solvers: they generally perform (nearly) as well on multi-problem tasks as on single-problem tasks. Moreover, contrary to common expectation, they often do not suffer from positional bias with long inputs, which makes multi-problem prompting a simple and cost-efficient prompting method of practical significance. However, our results also strongly suggest that LLMs lack true understanding: although they can do index selection in general, they perform significantly worse on the two index selection tasks than on the multi-problem tasks across various evaluation settings.
Jun-15-2024
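To make the evaluation setup concrete, here is a minimal sketch of how a homogeneous multi-problem prompt and an index-selection prompt might be built from single classification instances, as described in the abstract. This is not the authors' code; the function names, prompt wording, and example data are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): composing several
# single-problem classification instances into one multi-problem prompt,
# plus an index-selection variant over the same embedded problems.

def multi_problem_prompt(texts, instruction):
    """Concatenate several problems into one prompt, one per numbered index,
    and ask the model to answer every problem."""
    lines = [instruction]
    for i, text in enumerate(texts, start=1):
        lines.append(f"{i}. {text}")
    lines.append("Answer each numbered problem in order, one label per line.")
    return "\n".join(lines)

def index_selection_prompt(texts, instruction, target_label):
    """Embed the same problems, but ask only for the indices of the
    problems whose answer is target_label."""
    lines = [instruction]
    for i, text in enumerate(texts, start=1):
        lines.append(f"{i}. {text}")
    lines.append(f"List the indices of all problems labeled '{target_label}'.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical sentiment-classification instances for illustration.
    reviews = [
        "The plot was gripping from start to finish.",
        "A dull, lifeless two hours I will never get back.",
    ]
    task = "Classify each movie review below as positive or negative."
    print(multi_problem_prompt(reviews, task))
    print(index_selection_prompt(reviews, task, "positive"))
```

Under this construction, both prompt types expose the model to the same underlying problems and differ only in the requested output format, so a performance gap between them isolates index selection ability rather than classification ability.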