Stars, Stripes, and Silicon: Unravelling the ChatGPT's All-American, Monochrome, Cis-centric Bias
arXiv.org Artificial Intelligence
This paper investigates the challenges associated with bias, toxicity, unreliability, and lack of robustness in large language models (LLMs) such as ChatGPT. It emphasizes that these issues primarily stem from the quality and diversity of data on which LLMs are trained, rather than the model architectures themselves. As LLMs are increasingly integrated into various real-world applications, their potential to negatively impact society by amplifying existing biases and generating harmful content becomes a pressing concern. The paper calls for interdisciplinary efforts to address these challenges. Additionally, it highlights the need for collaboration between researchers, practitioners, and stakeholders to establish governance frameworks, oversight, and accountability mechanisms to mitigate the harmful consequences of biased LLMs.
Oct-2-2024