Is Lying Only Sinful in Islam? Exploring Religious Bias in Multilingual Large Language Models Across Major Religions
Kazi Abrab Hossain, Jannatul Somiya Mahmud, Maria Hossain Tuli, Anik Mitra, S. M. Taiabul Haque, Farig Y. Sadeque
arXiv.org Artificial Intelligence
While recent developments in large language models have improved bias detection and classification, sensitive subjects like religion remain challenging because even minor errors can result in severe misunderstandings. In particular, multilingual models often misrepresent religions and struggle to remain accurate in religious contexts. To address this, we introduce BRAND, a Bilingual Religious Accountable Norm Dataset covering the four major religions of South Asia: Buddhism, Christianity, Hinduism, and Islam. BRAND contains over 2,400 entries constructed from three different prompt types in both English and Bengali. Our results indicate that models perform better in English than in Bengali and consistently display bias toward Islam, even when answering religion-neutral questions. These findings highlight persistent bias in multilingual models when similar questions are asked in different languages. We further connect our findings to broader issues in HCI regarding religion and spirituality.
Dec-4-2025