A lack of diversity in both staff and data has led to several recent problems with AI solutions, such as facial recognition algorithms that fail to correctly identify people with darker skin tones. The key to preventing these kinds of problems in new technologies is diversity: diverse staff and diverse data lead to diversity in the questions asked and the options explored. Researchers at the AI Now Institute reported that only 12% of contributors to the three leading machine learning conferences in 2017 were women, and that women make up only 10-15% of AI researchers at major tech companies. They found no reliable data on racial diversity, and the limited evidence available suggests the percentage of women in AI is even lower than in computer science in general. It is clear that the field of AI and machine learning needs to become more diverse to create models whose predictions more accurately reflect our values.
Artificial intelligence (AI) algorithms are generally hungry for data, and the trend is accelerating. A new breed of AI approaches, called lifelong learning machines, is being designed to pull in data continually and indefinitely; other AI approaches already do this, albeit with human intervention. A steady stream of data is the fuel for coveted results, and with the ever-increasing importance of data, the stakes of data bias grow ever higher.
Knowledge is the most important commodity we possess. The ability to harness ideas to improve ourselves has always been the competitive advantage of our species. One of the most effective ways to cultivate ideas, especially in addressing business challenges such as those that are AI- or data-driven in nature, is to build teams around cognitive diversity. Diverse teams are effective because they draw on a unique set of backgrounds and experiences to look at problems from multiple angles. Imagine if you could take your best employee and create as many clones as you could ever desire.
Setting up minorities to compete for attention with the working class is in itself divisive and destructive. People are not only one thing, or one reaction, and no one group should be listened to over another.
It turns out that children are not the only ones who learn what they are taught; the same applies to AI-based algorithms. AI solutions are helping industries everywhere detect insights hidden in data and images and automate business processes, delivering better customer service, improved patient outcomes, and a more streamlined, efficient workforce. Yet they are still only as good as the data that fuels them, and that data is fed to them by humans. In essence, algorithms are trained to mimic the human thought process, but what happens when that thought process is tarnished by preconceived opinions, experiences, and even outright biases? Data scientists and tech professionals are only human, after all, and sometimes, even unwittingly, they can carry this baggage into the algorithm training process.
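How biased training data becomes biased model behavior can be sketched with a toy example. Everything below is invented for illustration: the hiring records, the group labels, and the rates are hypothetical, and the "model" is just an empirical frequency table rather than any real machine learning system. The point it demonstrates is general, though: a model fit to historically biased labels will faithfully reproduce that bias.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, qualified, hired).
# The labels encode a past human bias: equally qualified candidates
# from group "B" were hired far less often than those from group "A".
records = (
    [("A", True, True)] * 90 + [("A", True, False)] * 10 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

def fit(records):
    """A naive 'model' that simply learns P(hired | group) from the data."""
    hired = Counter(g for g, _, h in records if h)
    total = Counter(g for g, _, _ in records)
    return {g: hired[g] / total[g] for g in total}

model = fit(records)
print(model)  # {'A': 0.9, 'B': 0.4} -- the historical bias, learned verbatim
```

Nothing in the training step is malicious; the skew enters entirely through the labels the humans supplied. Real models are far more complex, but the failure mode is the same one the paragraph above describes.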