Break the Checkbox: Challenging Closed-Style Evaluations of Cultural Alignment in LLMs
Mohsinul Kabir, Ajwad Abrar, Sophia Ananiadou
arXiv.org Artificial Intelligence
A large number of studies rely on closed-style multiple-choice surveys to evaluate cultural alignment in Large Language Models (LLMs). In this work, we challenge this constrained evaluation paradigm and explore more realistic, unconstrained approaches. Using the World Values Survey (WVS) and Hofstede Cultural Dimensions as case studies, we demonstrate that LLMs exhibit stronger cultural alignment in less constrained settings, where responses are not forced. Additionally, we show that even minor changes, such as reordering survey choices, lead to inconsistent outputs, exposing the limitations of closed-style evaluations. Our findings advocate for more robust and flexible evaluation frameworks that focus on specific cultural proxies, encouraging more nuanced and accurate assessments of cultural alignment in LLMs.
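The abstract's core probe, forcing the same survey item through different option orderings and checking whether the model's choice survives the shuffle, can be illustrated with a short sketch. This is not the authors' code; `query_llm` is a hypothetical stand-in for whatever chat-completion call is actually used, and the WVS-style item is paraphrased for illustration.

```python
# Sketch of an option-order sensitivity probe for closed-style (MCQ) evaluation.
from itertools import permutations


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local-model client.
    This stub always answers "A" so the script runs end to end."""
    return "A"


def build_prompt(question: str, options: list[str]) -> str:
    """Format a forced-choice survey item with lettered options."""
    letters = "ABCD"
    lines = [question] + [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)


def chosen_option(question: str, options: list[str]) -> str:
    """Ask once and map the model's letter back to the underlying option text."""
    letter = query_llm(build_prompt(question, options)).strip()[:1].upper()
    return options["ABCD".index(letter)]


# Paraphrased WVS-style item; a consistent model should pick the same
# underlying option no matter how the choices are ordered.
question = "How important is family in your life?"
options = ["Very important", "Rather important",
           "Not very important", "Not at all important"]

answers = {chosen_option(question, list(order)) for order in permutations(options)}
print("Consistent across orderings" if len(answers) == 1
      else f"Inconsistent across orderings: {answers}")
```

With a real model behind `query_llm`, more than one distinct answer across the 24 orderings is the kind of instability the paper cites as a limitation of closed-style evaluation.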
Feb-15-2025