LMUnit: Fine-grained Evaluation with Natural Language Unit Tests
Jon Saad-Falcon, Rajan Vivek, William Berrios, Nandita Shankar Naik, Matija Franklin, Bertie Vidgen, Amanpreet Singh, Douwe Kiela, Shikib Mehri
arXiv.org Artificial Intelligence
As language models become integral to critical workflows, assessing their behavior remains a fundamental challenge -- human evaluation is costly and noisy, while automated metrics provide only coarse, difficult-to-interpret signals. We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, along with a unified scoring model, LMUnit, which combines multi-objective training across preferences, direct ratings, and natural language rationales. Through controlled human studies, we show this paradigm significantly improves inter-annotator agreement and enables more effective LLM development workflows. LMUnit achieves state-of-the-art performance on evaluation benchmarks (FLASK, BigGenBench) and competitive results on RewardBench. These results validate both our proposed paradigm and scoring model, suggesting a promising path forward for language model evaluation and development.
Dec-17-2024
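As a rough illustration of the paradigm described in the abstract (not the authors' implementation), the sketch below decomposes response quality into explicit natural language unit tests and scores each one with a pluggable judge. The function names `score_unit_test` and `evaluate`, the example criteria, and the [0, 1] score range are hypothetical placeholders standing in for a scoring model such as LMUnit.

```python
# Minimal sketch of the natural language unit-test paradigm (assumptions noted above):
# response quality is decomposed into explicit, testable criteria, each scored
# independently by a judge model, then aggregated into an overall signal.

from typing import Callable


def score_unit_test(query: str, response: str, unit_test: str) -> float:
    """Hypothetical judge: in practice this would call a scoring model
    (e.g. LMUnit) that rates how well `response` satisfies `unit_test`
    for the given `query`, returning a score in [0, 1]."""
    raise NotImplementedError("plug in a real scoring model here")


def evaluate(query: str, response: str, unit_tests: list[str],
             judge: Callable[[str, str, str], float] = score_unit_test) -> dict:
    """Run every natural language unit test against one response and
    aggregate the per-criterion scores into an overall quality signal."""
    per_test = {test: judge(query, response, test) for test in unit_tests}
    return {"per_test": per_test,
            "overall": sum(per_test.values()) / len(per_test)}


if __name__ == "__main__":
    # Example criteria (hypothetical, for illustration only).
    unit_tests = [
        "Does the response directly answer the user's question?",
        "Is every factual claim in the response verifiable?",
        "Is the response free of unnecessary repetition?",
    ]
    # With a real judge supplied:
    # report = evaluate("What causes tides?", some_response, unit_tests, judge=my_model)
```

Because each criterion is scored separately, such a setup yields fine-grained, interpretable feedback per test rather than a single opaque quality number.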