DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors
Gianluca Barmina, Nathalie Carmen Hau Norman, Peter Schneider-Kamp, Lukas Galke Poech
arXiv.org Artificial Intelligence
We present an enhanced benchmark for evaluating linguistic acceptability in Danish. We first analyze the most common errors found in written Danish. Based on this analysis, we introduce a set of fourteen corruption functions that generate incorrect sentences by systematically introducing errors into existing correct Danish sentences. To ensure the accuracy of these corruptions, we assess their validity using both manual and automatic methods. The results are then used as a benchmark for evaluating Large Language Models on a linguistic acceptability judgement task. Our findings demonstrate that this extension is both broader and more comprehensive than the current state of the art. By incorporating a greater variety of corruption types, our benchmark provides a more rigorous assessment of linguistic acceptability and increases task difficulty, as evidenced by the lower performance of LLMs on our benchmark compared to existing ones. Our results also suggest that our benchmark has higher discriminatory power, allowing well-performing models to be better distinguished from low-performing ones.
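To illustrate the corruption-function idea described in the abstract, the sketch below shows one hypothetical function of the kind the benchmark uses: it takes a grammatical Danish sentence and produces an unacceptable variant by swapping two adjacent words. The function name, the swap-based error type, and the example sentence are illustrative assumptions, not the paper's actual implementation of its fourteen functions.

```python
import random


def corrupt_word_order(sentence: str, rng: random.Random) -> str:
    """Corrupt a sentence by swapping two adjacent words.

    Hypothetical example of a single corruption-function type;
    the paper's fourteen functions target common real-world
    Danish writing errors, which may differ from this one.
    """
    words = sentence.split()
    if len(words) < 2:
        return sentence  # too short to corrupt meaningfully
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)


# Each corruption yields a (correct, incorrect) pair, which can then
# be used in a binary acceptability-judgement task for an LLM.
rng = random.Random(0)
correct = "Jeg har læst bogen"  # "I have read the book"
incorrect = corrupt_word_order(correct, rng)
```

Applied over a corpus of correct sentences, a family of such functions produces labeled acceptable/unacceptable pairs without manual annotation, which is what makes the systematic-corruption approach scale.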
Dec-9-2025