Evaluating Large Language Models through Gender and Racial Stereotypes
arXiv.org Artificial Intelligence
Language Models have ushered in a new age of AI, gaining traction within the NLP community as well as among the general population. AI's ability to make predictions and generate content, and its application in sensitive decision-making scenarios, make it all the more important to study these models for biases that may exist and may be amplified. We conduct a qualitative comparative study and establish a framework to evaluate language models for two kinds of bias, gender and race, in a professional setting. We find that while gender bias has decreased substantially in newer models compared to older ones, racial bias persists.
Nov-24-2023