Investigating Labeler Bias in Face Annotation for Machine Learning
Haliburton, Luke; Ghebremedhin, Sinksar; Welsch, Robin; Schmidt, Albrecht; Mayer, Sven
In a world increasingly reliant on artificial intelligence, it is more important than ever to consider the ethical implications of artificial intelligence on humanity. One key under-explored challenge is labeler bias, which can create inherently biased datasets for training and subsequently lead to inaccurate or unfair decisions in healthcare, employment, education, and law enforcement. Hence, we conducted a study to investigate and measure the existence of labeler bias using images of people from different ethnicities and sexes in a labeling task. Our results show that participants hold stereotypes that influence their decision-making process and that labeler demographics impact assigned labels. We also discuss how labeler bias influences datasets and, subsequently, the models trained on them.

Data collection, processing, and prediction are key pillars of AI applications. Although AI is a powerful tool, the fundamental reliance on data can be problematic since datasets can be distorted in various ways, creating unintended consequences. One under-investigated contributing factor to biased AI tools is labeler bias, which results from cognitive biases [14] in crowd workers and other dynamics in the labeling process [44]. Many AI applications rely on crowdsourcing platforms to label their data, yet they usually do not consider whether they are utilizing a diverse population of labelers [43]. A biased labeler pool could lead to unfair outcomes for certain groups, such as women, ethnic minorities, or people from …
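The abstract does not spell out the statistical analysis, but the core claim, that labeler demographics impact assigned labels, can be tested in a standard way. The following Python sketch shows one conventional approach: a chi-square test of independence between labeler group and assigned label. The group names and counts below are hypothetical illustrations, not data from the study.

```python
# Hedged sketch (not the authors' pipeline): test whether the labels assigned
# to the same set of face images are independent of labeler demographics.
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table of label counts (made-up numbers for illustration):
# rows = labeler demographic group, columns = assigned label category.
counts = np.array([
    [120, 80],   # hypothetical labeler group A: label X vs. label Y
    [ 95, 105],  # hypothetical labeler group B: label X vs. label Y
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# A small p-value indicates that label assignments depend on who the
# labelers are -- the kind of effect the paper describes as labeler bias.
```

A per-image variant of the same idea (e.g., comparing label distributions across labeler groups image by image) would localize which stimuli attract the most biased judgments.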
arXiv.org Artificial Intelligence
Jun-26-2023