religion
I rejected religion all my life. Then a mysterious illness left me begging to die... and I saw God's body. It completely shattered my ego
A woman who had previously rejected the traditional teachings of God has revealed how her entire view on life and faith changed when she nearly died while vacationing in South Asia.
In an interview with the Daily Mail, Colorado-based herbalist Scarlet Ravin said she contracted an illness believed to be COVID-19 during a December 2019 trip to rural Sri Lanka - and claimed she may have been among the first patients to come so close to death. Ravin, now 43, said a sore throat at the start of her vacation quickly spiraled into a high fever, intense body pain and delirium.
- Asia > Sri Lanka (0.24)
- North America > United States > Colorado (0.24)
- North America > Greenland (0.24)
- (24 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (0.93)
- Information Technology > Communications > Mobile (0.69)
Is Lying Only Sinful in Islam? Exploring Religious Bias in Multilingual Large Language Models Across Major Religions
Hossain, Kazi Abrab, Mahmud, Jannatul Somiya, Tuli, Maria Hossain, Mitra, Anik, Haque, S. M. Taiabul, Sadeque, Farig Y.
While recent developments in large language models have improved bias detection and classification, sensitive subjects like religion still present challenges because even minor errors can result in severe misunderstandings. In particular, multilingual models often misrepresent religions and have difficulty being accurate in religious contexts. To address this, we introduce BRAND: the Bilingual Religious Accountable Norm Dataset, which covers the four main religions of South Asia (Buddhism, Christianity, Hinduism, and Islam) and contains over 2,400 entries built from three different types of prompts in both English and Bengali. Our results indicate that models perform better in English than in Bengali and consistently display bias toward Islam, even when answering religion-neutral questions. These findings highlight persistent bias in multilingual models when similar questions are asked in different languages. We further connect our findings to the broader issues in HCI regarding religion and spirituality.
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.40)
- Europe > Austria > Vienna (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- (9 more...)
- Media (0.46)
- Law Enforcement & Public Safety (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
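The per-language bias tally the BRAND abstract describes (which religion a model's answers to religion-neutral questions most often favor, in English versus Bengali) could be aggregated along these lines. A minimal Python sketch; the record layout and field names are my assumptions, not the paper's actual schema:

```python
from collections import defaultdict

def dominant_religion_by_language(records):
    """Given (language, religion_named_in_answer) observations from
    religion-neutral prompts, return the religion each language's
    answers most often favor. A toy aggregation, not BRAND's code."""
    counts = defaultdict(lambda: defaultdict(int))
    for lang, religion in records:
        counts[lang][religion] += 1
    return {lang: max(c, key=c.get) for lang, c in counts.items()}

# toy observations: English ("en") and Bengali ("bn") answers
records = [("en", "Islam"), ("en", "Islam"), ("en", "Hinduism"),
           ("bn", "Islam"), ("bn", "Buddhism"), ("bn", "Islam")]
```

A consistent skew toward one religion in both languages, as in this toy data, is the kind of pattern the abstract reports.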
Addressing Stereotypes in Large Language Models: A Critical Examination and Mitigation
Large Language Models (LLMs), such as ChatGPT, have gained popularity in recent years with the advancement of Natural Language Processing (NLP), with use cases spanning many disciplines and daily life as well. LLMs inherit explicit and implicit biases from the datasets they were trained on; these biases can include social, ethical, cultural, religious, and other prejudices and stereotypes. It is important to comprehensively examine such shortcomings by identifying the existence and extent of such biases, recognizing their origin, and attempting to mitigate biased outputs in order to ensure fair outputs and reduce harmful stereotypes and misinformation. This study inspects and highlights the need to address biases in LLMs amid growing generative Artificial Intelligence (AI). We utilize bias-specific benchmarks such as StereoSet and CrowS-Pairs to evaluate the existence of various biases in many different generative models such as BERT, GPT-3.5, and ADA. To detect both explicit and implicit biases, we adopt a three-pronged approach for thorough and inclusive analysis. Results indicate fine-tuned models struggle with gender biases but excel at identifying and avoiding racial biases. Our findings also illustrate that, despite some cases of success, LLMs often over-rely on keywords in prompts and their outputs, demonstrating that LLMs are unable to truly assess the accuracy and authenticity of what they produce. Finally, in an attempt to bolster model performance, we applied an enhancement learning strategy involving fine-tuning, different prompting techniques, and data augmentation of the bias benchmarks. We found fine-tuned models to exhibit promising adaptability during cross-dataset testing and significantly enhanced performance on implicit bias benchmarks, with performance gains of up to 20%.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Middle East (0.04)
- Asia > Middle East (0.04)
- (7 more...)
- Health & Medicine (1.00)
- Education (1.00)
- Law Enforcement & Public Safety (0.68)
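The StereoSet-style evaluation this abstract relies on reduces to comparing a model's likelihoods over stereotype/anti-stereotype sentence pairs. A minimal sketch of that tally, with a toy scorer standing in for a real LM's sentence log-likelihood (the function names are illustrative, not the benchmark's API):

```python
def stereotype_preference_rate(score_fn, pairs):
    """StereoSet-style tally: the fraction of (stereotype,
    anti-stereotype) sentence pairs where the scorer assigns the
    stereotypical sentence the higher likelihood. An unbiased
    model would land near 0.5."""
    wins = sum(score_fn(s) > score_fn(a) for s, a in pairs)
    return wins / len(pairs)

# toy scorer (shorter = more likely) standing in for a real model
toy_score = lambda sentence: -len(sentence)
pairs = [("short claim", "a much longer counter-claim"),
         ("tiny", "big")]
rate = stereotype_preference_rate(toy_score, pairs)
```

In practice `score_fn` would be a (pseudo-)log-likelihood from the model under test; the harness itself stays this simple.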
AfriStereo: A Culturally Grounded Dataset for Evaluating Stereotypical Bias in Large Language Models
Beux, Yann Le, Audu, Oluchi, Ankeli, Oche D., Balakrishnan, Dhananjay, Weya, Melissah, Ralaiarinosy, Marie D., Ezeani, Ignatius
Existing AI bias evaluation benchmarks largely reflect Western perspectives, leaving African contexts underrepresented and enabling harmful stereotypes in applications across various domains. To address this gap, we introduce AfriStereo, the first open-source African stereotype dataset and evaluation framework grounded in local socio-cultural contexts. Through community-engaged efforts across Senegal, Kenya, and Nigeria, we collected 1,163 stereotypes spanning gender, ethnicity, religion, age, and profession. Using few-shot prompting with human-in-the-loop validation, we augmented the dataset to over 5,000 stereotype-antistereotype pairs. Entries were validated through semantic clustering and manual annotation by culturally informed reviewers. Preliminary evaluation of language models reveals that nine of eleven models exhibit statistically significant bias, with Bias Preference Ratios (BPR) ranging from 0.63 to 0.78 (p <= 0.05), indicating systematic preferences for stereotypes over antistereotypes, particularly across age, profession, and gender dimensions. Domain-specific models appeared to show weaker bias in our setup, suggesting task-specific training may mitigate some associations. Looking ahead, AfriStereo opens pathways for future research on culturally grounded bias evaluation and mitigation, offering key methodologies for the AI community on building more equitable, context-aware, and globally inclusive NLP technologies.
- Research Report > Experimental Study (0.66)
- Research Report > New Finding (0.48)
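The Bias Preference Ratio and the p <= 0.05 claim in the AfriStereo abstract can be sketched in a few lines. The helper names are my assumptions, and an exact two-sided binomial sign test is one plausible choice of significance test; the paper may use a different one:

```python
from math import comb

def bias_preference_ratio(prefers_stereotype):
    """BPR: fraction of pairs where the model preferred the
    stereotypical completion over the anti-stereotypical one."""
    return sum(prefers_stereotype) / len(prefers_stereotype)

def sign_test_p(k, n):
    """Two-sided exact binomial sign test of k stereotype wins out
    of n pairs against the unbiased null of p = 0.5."""
    m = min(k, n - k)
    tail = sum(comb(n, i) for i in range(m + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# toy outcomes for 100 pairs: 70 stereotype wins -> BPR of 0.70,
# in the 0.63-0.78 range the abstract reports
choices = [True] * 70 + [False] * 30
bpr = bias_preference_ratio(choices)
p = sign_test_p(sum(choices), len(choices))
```

A BPR near 0.5 with a large p would indicate no systematic preference; the reported 0.63-0.78 range is well into significant territory at this sample size.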
FairJudge: MLLM Judging for Social Attributes and Prompt Image Alignment
Sahili, Zahraa Al, Fetanat, Maryam, Nowaz, Maimuna, Patras, Ioannis, Purver, Matthew
Text-to-image (T2I) systems lack simple, reproducible ways to evaluate how well images match prompts and how models treat social attributes. Common proxies -- face classifiers and contrastive similarity -- reward surface cues, lack calibrated abstention, and miss attributes only weakly visible (for example, religion, culture, disability). We present FairJudge, a lightweight protocol that treats instruction-following multimodal LLMs as fair judges. It scores alignment with an explanation-oriented rubric mapped to [-1, 1]; constrains judgments to a closed label set; requires evidence grounded in the visible content; and mandates abstention when cues are insufficient. Unlike CLIP-only pipelines, FairJudge yields accountable, evidence-aware decisions; unlike mitigation that alters generators, it targets evaluation fairness. We evaluate gender, race, and age on FairFace, PaTA, and FairCoT; extend to religion, culture, and disability; and assess profession correctness and alignment on IdenProf, FairCoT-Professions, and our new DIVERSIFY-Professions. We also release DIVERSIFY, a 469-image corpus of diverse, non-iconic scenes. Across datasets, judge models outperform contrastive and face-centric baselines on demographic prediction and improve mean alignment while maintaining high profession accuracy, enabling more reliable, reproducible fairness audits.
- Europe > United Kingdom > England > Greater London > London (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Europe > Slovenia > Central Slovenia > Municipality of Ljubljana > Ljubljana (0.04)
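The mechanics FairJudge's abstract describes (a rubric score mapped to [-1, 1], a closed label set, and mandated abstention when evidence is insufficient) could look roughly like this. The function names and the 1-5 rubric range are illustrative assumptions, not the paper's actual interface:

```python
def rubric_to_unit(score, lo=1, hi=5):
    """Linearly map a lo..hi rubric score onto [-1, 1]."""
    return 2 * (score - lo) / (hi - lo) - 1

def constrain_judgment(label, evidence, allowed):
    """Closed-set decision with mandated abstention: reject labels
    outside the allowed set, and abstain when the judge cites no
    evidence grounded in the visible content."""
    if not evidence or label not in allowed:
        return "abstain"
    return label

# hypothetical judge output for one image
verdict = constrain_judgment("woman", ["headscarf visible"],
                             {"woman", "man"})
```

Calibrated abstention is the point of the second helper: unlike a CLIP-style similarity that always emits a score, the judge can decline when the attribute is only weakly visible.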
FineWeb
- Curated by: Hugging Face
- Funded by: Hugging Face
- Language(s): English
- License: Open Data Commons Attribution License (ODC-By) v1.0; use of the dataset is also subject to Common Crawl's Terms of Use

The following is an example sample from the dataset: "Tony, one of the Wedgwood chefs, suggested sprinkling on some toasted crushed peanuts at the end to create extra crunch, which I thought was a great idea." The default subset includes the entire dataset.
- Health & Medicine > Consumer Health (0.47)
- Education > Educational Setting (0.47)
- Leisure & Entertainment (0.94)
- Media (0.69)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Croatia > Dubrovnik-Neretva County > Dubrovnik (0.04)
- Asia > India > Maharashtra (0.04)
- (11 more...)
- Questionnaire & Opinion Survey (0.68)
- Overview (0.46)
- Oceania (0.04)
- North America > United States > California (0.04)
- North America > Canada (0.04)
- (5 more...)
- Health & Medicine (1.00)
- Consumer Products & Services > Restaurants (0.30)
AI Is Not God
In recent times, there have been two techno-religious awakenings. To be human is to yearn for a Sky Daddy. Something that explains the unexplainable, someone to blame. No wonder, then, that in the ZIRP-fueled 2010s, when a new gospel of creation was being spread, some people started to see technology as a kind of religion. Startup founders and CEOs became messianic figures.
- Europe > Holy See (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- (2 more...)
- Law (0.96)
- Transportation > Passenger (0.48)
- Transportation > Ground > Road (0.48)
- Media > Film (0.31)