LocalValueBench: A Collaboratively Built and Extensible Benchmark for Evaluating Localized Value Alignment and Ethical Safety in Large Language Models
Gwenyth Isobel Meadows, Nicholas Wai Long Lau, Eva Adelina Susanto, Chi Lok Yu, Aditya Paul
arXiv.org Artificial Intelligence
The proliferation of large language models (LLMs) requires robust evaluation of their alignment with local values and ethical standards, especially as existing benchmarks often reflect the cultural, legal, and ideological values of their creators. LocalValueBench, introduced in this paper, is an extensible benchmark designed to assess LLMs' adherence to Australian values; it also provides a framework for regulators worldwide to develop their own benchmarks for local value alignment. Employing a novel typology for ethical reasoning and an interrogation approach, we curated comprehensive questions and used prompt engineering strategies to probe LLMs' value alignment. Our evaluation criteria quantify deviations from local values, ensuring a rigorous assessment process. A comparative analysis of three commercial LLMs from US vendors yielded significant insights into their effectiveness and limitations, demonstrating the critical importance of value alignment. This study offers tools and methodologies that regulators can use to create tailored benchmarks, and highlights avenues for future research to enhance ethical AI development.
Jul-27-2024
- Country:
  - North America > United States (0.25)
  - Oceania > Australia
    - Australian Capital Territory > Canberra (0.04)
    - New South Wales > Sydney (0.04)
    - Northern Territory > Darwin (0.04)
    - Queensland > Brisbane (0.04)
- Genre:
  - Research Report (0.82)
- Industry:
  - Law (0.92)
    - Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.48)