The internet might seem like a level playing field, but it isn't. Safiya Umoja Noble came face to face with that fact one day when she used Google's search engine to look for subjects her nieces might find interesting. When she entered the term "black girls," the results that came back were dominated by pornography.
Have you ever wondered how the Waze app knows shortcuts in your neighborhood better than you do? It's because Waze acts like a superhuman air traffic controller -- it measures distance and traffic patterns, it listens to feedback from drivers, and it compiles massive data sets to get you to your location as quickly as possible. Even as we grow more reliant on these kinds of innovations, we still want assurances that we're in charge, because we still believe our humanity elevates us above computers. Movies such as "2001: A Space Odyssey" and the "Terminator" franchise teach us to fear computers programmed without any understanding of humanity; when a human sobs, Arnold Schwarzenegger's robotic character asks, "What's wrong with your eyes?" These movies always end with the machines turning on their makers.
Machine bias occurs when a machine learning process makes erroneous assumptions because of limitations in its data set. A data set can introduce machine bias when human interpretation and cognitive assessment have influenced its creation, so that the data reflects human biases. Problems with the collection or quality of the data can also lead the machine learning process to draw improper conclusions. In this article, I discuss what machine bias looks like and how we can go about preventing and mitigating it. In the past few years, terms like machine learning and artificial intelligence have become ubiquitous in the media.
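To make the idea concrete, here is a minimal sketch of how an unrepresentative data set produces biased outcomes. The data, groups, and "model" below are hypothetical inventions for illustration: a classifier that simply predicts the overall majority label performs well on the dominant group and badly on the underrepresented one.

```python
from collections import Counter

# Hypothetical toy data: (group, true_label) pairs.
# Group "A" dominates the training set, so the overall majority
# label is driven almost entirely by A's examples.
train = [("A", 1)] * 80 + [("A", 0)] * 10 + [("B", 0)] * 8 + [("B", 1)] * 2

# A naive "model": always predict the most common label in the data.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def error_rate(examples, group):
    """Fraction of a group's examples the majority-label model gets wrong."""
    labels = [label for g, label in examples if g == group]
    wrong = sum(1 for label in labels if label != majority_label)
    return wrong / len(labels)

print(error_rate(train, "A"))  # ~0.11: low error for the dominant group
print(error_rate(train, "B"))  # 0.8: high error for the underrepresented group
```

Nothing about the learning rule is malicious; the skew in the data alone is enough to make the model systematically fail one group.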
IBM on Wednesday introduced technology that automatically detects bias and explains how AI formulates decisions as they are made. The software runs on the IBM Cloud. IBM Research's bias detection tool, the AI Fairness 360 toolkit, will become open source, in the hope that academics, researchers, and data scientists will integrate bias detection into their models. The company has put its bias detection tools on GitHub, a software development platform that Microsoft acquired in June 2018 for $7.5 billion.
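Toolkits like AI Fairness 360 report group-fairness metrics such as statistical parity difference and disparate impact. As a minimal sketch of what those two metrics measure, here they are computed by hand on invented hiring data (the numbers and group names are hypothetical; this does not use the AIF360 library itself):

```python
# Hypothetical decisions: (group, favorable_outcome) pairs, where 1 means
# the person received the favorable outcome (e.g. was hired).
decisions = [("privileged", 1)] * 60 + [("privileged", 0)] * 40 \
          + [("unprivileged", 1)] * 30 + [("unprivileged", 0)] * 70

def favorable_rate(data, group):
    """Fraction of a group's members who received the favorable outcome."""
    outcomes = [o for g, o in data if g == group]
    return sum(outcomes) / len(outcomes)

p = favorable_rate(decisions, "privileged")    # 0.6
u = favorable_rate(decisions, "unprivileged")  # 0.3

# Difference and ratio of favorable-outcome rates between groups.
statistical_parity_difference = u - p  # -0.3
disparate_impact = u / p               # 0.5

# A common rule of thumb flags disparate impact below 0.8 as potential bias.
print(statistical_parity_difference, disparate_impact)
```

Both metrics compare how often each group receives the favorable outcome: a statistical parity difference of 0 and a disparate impact of 1 would indicate parity, while the values here flag a large gap.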