Evaluating Social Biases in LLM Reasoning
Wu, Xuyang, Nian, Jinming, Tao, Zhiqiang, Fang, Yi
–arXiv.org Artificial Intelligence
In the recent development of AI reasoning, large language models (LLMs) are trained to automatically generate chain-of-thought reasoning steps, which have demonstrated compelling performance on math and coding tasks. However, when bias is mixed within the reasoning process to form strong logical arguments, it could cause even more harmful results and further induce hallucinations. In this paper, we have evaluated the 8B and 32B variants of DeepSeek-R1 against their instruction tuned counterparts on the BBQ dataset, and investigated the bias that is elicited out and being amplified through reasoning steps. To the best of our knowledge, this empirical study is the first to assess bias issues in LLM reasoning.
arXiv.org Artificial Intelligence
Feb-21-2025
- Country:
- Asia > Middle East
- UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe > Austria
- Vienna (0.14)
- North America
- Mexico > Mexico City (0.14)
- United States (0.94)
- Asia > Middle East
- Genre:
- Research Report > New Finding (0.68)
- Technology: