Collaborating Authors

Why are Artificial Intelligence systems biased?


A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had more serious criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an "investor" must be male. A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn't seem to like female candidates. Commercial gender-recognition systems put out by industrial heavyweights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color.

Can AI be Biased? Think Again.


Naturally, we're biased to believe that we're the best -- just like you -- but we think it's important to question things sometimes. Technology can be biased, even if that's not its own fault, and there are plenty of reasons to be wary of artificial intelligence: AI systems can inherit biases from their creators and from their surroundings. If the goal is to eliminate bias, it's important to understand where AI bias comes from. Researchers found, for example, that Google's image recognition algorithms labelled people of color as gorillas, and that Microsoft's AI chatbot Tay rapidly became racist and sexist -- biases that were reinforced by how it was taught.

Fighting algorithmic bias in artificial intelligence – Physics World


Physicists are increasingly developing artificial intelligence and machine learning techniques to advance our understanding of the physical world, but there is rising concern about the bias in such systems and their wider impact on society at large. In 2011, during her undergraduate degree at Georgia Institute of Technology, Ghanaian-US computer scientist Joy Buolamwini discovered that getting a robot to play a simple game of peek-a-boo with her was impossible – the machine was incapable of seeing her dark-skinned face. Later, in 2015, as a Master's student at Massachusetts Institute of Technology's Media Lab working on a science–art project called Aspire Mirror, she had a similar issue with facial analysis software: it detected her face only when she wore a white mask. Buolamwini's curiosity led her to run one of her profile images through four facial recognition demos, which, she discovered, either couldn't identify a face at all or misgendered her – a bias that she refers to as the "coded gaze". She then decided to test 1270 faces of politicians from three African and three European countries, with different features, skin tones and genders, which became her Master's thesis project "Gender Shades: Intersectional accuracy disparities in commercial gender classification" (figure 1).
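The core of the Gender Shades methodology is to disaggregate a classifier's error rate by skin tone and gender rather than report a single overall accuracy. A minimal sketch of that idea (the function name and toy records below are hypothetical illustrations, not Buolamwini's actual benchmark or results):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate per (skin_tone, gender) subgroup.

    records: iterable of (skin_tone, gender, true_label, predicted_label).
    Returns a dict mapping each subgroup to its error rate.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for skin_tone, gender, truth, prediction in records:
        key = (skin_tone, gender)
        totals[key] += 1
        if truth != prediction:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Toy data, invented for illustration: one of the two darker-skinned
# female faces is misgendered, while both lighter-skinned male faces
# are classified correctly.
sample = [
    ("darker", "female", "female", "male"),
    ("darker", "female", "female", "female"),
    ("lighter", "male", "male", "male"),
    ("lighter", "male", "male", "male"),
]
print(error_rates_by_group(sample))
# → {('darker', 'female'): 0.5, ('lighter', 'male'): 0.0}
```

An aggregate accuracy over this toy set would be 75%, hiding the fact that the error falls entirely on one subgroup – which is exactly the kind of disparity the intersectional breakdown surfaces.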

Bias in AI and Machine Learning: Sources and Solutions - Lexalytics


"Bias in AI" refers to situations where machine learning-based data analytics systems discriminate against particular groups of people. This discrimination usually follows our own societal biases regarding race, gender, biological sex, nationality, or age (more on this later). Just this past week, for example, researchers showed that Google's AI-based hate speech detector is biased against black people. In this article, I'll explain two types of bias in artificial intelligence and machine learning: algorithmic/data bias and societal bias. I'll explain how they occur, highlight some examples of AI bias in the news, and show how you can fight back by becoming more aware.
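Algorithmic/data bias often begins with skewed training data: some groups are simply underrepresented before any model is trained. A minimal, hypothetical sketch of auditing representation in a dataset (the function name, dialect tags and proportions are invented for illustration, not Lexalytics' method or the hate-speech study's data):

```python
from collections import Counter

def representation_gap(rows, group_index=0):
    """How far each group's share of the data sits from equal representation.

    rows: list of records where rows[i][group_index] is the group label.
    Returns {group: observed_share - parity_share}; positive means
    over-represented, negative means under-represented.
    """
    counts = Counter(row[group_index] for row in rows)
    total = sum(counts.values())
    parity = 1 / len(counts)
    return {group: counts[group] / total - parity for group in counts}

# Hypothetical corpus of dialect-tagged comments, echoing the
# hate-speech-detector example above (tags and counts are invented).
corpus = [("SAE",)] * 6 + [("AAE",)] * 2
print(representation_gap(corpus))
# → {'SAE': 0.25, 'AAE': -0.25}
```

A negative gap for a group means the model will see few examples of that group's language or faces during training, which is one common route by which societal bias becomes algorithmic bias.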