IndoSafety: Culturally Grounded Safety for LLMs in Indonesian Languages
Azmi, Muhammad Falensi, Kautsar, Muhammad Dehan Al, Wicaksono, Alfan Farizki, Koto, Fajri
–arXiv.org Artificial Intelligence
Although region-specific large language models (LLMs) are increasingly developed, their safety remains underexplored, particularly in culturally diverse settings like Indonesia, where sensitivity to local norms is essential and highly valued by the community. In this work, we present IndoSafety, the first high-quality, human-verified safety evaluation dataset tailored for the Indonesian context, covering five language varieties: formal and colloquial Indonesian, along with three major local languages (Javanese, Sundanese, and Minangkabau). IndoSafety is constructed by extending prior safety frameworks to develop a taxonomy that captures Indonesia's sociocultural context. We find that existing Indonesian-centric LLMs often generate unsafe outputs, particularly in colloquial and local-language settings, whereas fine-tuning on IndoSafety substantially improves safety while preserving task performance. Our work highlights the critical need for culturally grounded safety evaluation and provides a concrete step toward responsible LLM deployment in multilingual settings. Warning: This paper contains example data that may be offensive, harmful, or biased.
Jun-4-2025