Fair and Equitable: How IBM Is Removing Bias from AI - DZone AI

#artificialintelligence

As more apps come to market that rely on Artificial Intelligence, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems, ranging from a poor user experience to major errors in critical decision-making. We at IBM have created a solution specifically to address AI bias. Because flaws and biases may not be easy to detect without the right tool, IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned, and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in machine learning models and datasets.
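
As a rough illustration of what the library does, the sketch below builds a tiny synthetic dataset, computes two of AI Fairness 360's group-fairness metrics, and applies its reweighing pre-processor. The toy data and the choice of "sex" as the protected attribute are assumptions for this example; check the AIF360 documentation for the current class and argument names.

```python
# Hedged sketch of AI Fairness 360 (aif360): the toy data and the choice of
# "sex" as the protected attribute are assumptions for this example.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable/unfavorable outcome (e.g. loan approved or not).
df = pd.DataFrame({
    "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income": [90, 80, 70, 40, 60, 50, 30, 20],
    "label":  [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Group-fairness metrics on the raw data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# One of the library's pre-processing mitigations: reweigh examples so that
# favorable outcomes are balanced across groups before a model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("after reweighing:", metric_transf.statistical_parity_difference())
```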


Configure, monitor, and understand machine learning models with IBM AI OpenScale

#artificialintelligence

The new "Monitor WML models with AI OpenScale" code pattern shows you how to gain insight into a machine learning model using IBM AI OpenScale. The pattern provides examples of how to configure the AI OpenScale service. You can then enable and explore a model deployed with Watson Machine Learning, and create fairness and accuracy measures for the model. IBM AI OpenScale is an open platform that enables organizations to automate and operate their AI across its full lifecycle. AI OpenScale provides a powerful environment for managing AI and ML models on IBM Cloud, IBM Cloud Private, or other platforms.
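
The code pattern itself walks through the OpenScale service and dashboard; as a plain-Python stand-in (not the OpenScale client API), the sketch below shows the kind of accuracy and fairness measures such a monitor computes over scored payload records. The record fields, group values, and the 0.80 flag threshold are invented for the example.

```python
# Not the AI OpenScale client API: a plain-Python sketch of the accuracy and
# fairness measures a monitor computes over scored requests. The field names,
# group values, and 0.80 threshold are assumptions for this example.
from dataclasses import dataclass
from typing import List

@dataclass
class ScoredRecord:
    group: str        # value of the monitored attribute, e.g. "female"/"male"
    prediction: int   # model output: 1 = favorable outcome
    actual: int       # ground-truth label, once feedback is available

def accuracy(records: List[ScoredRecord]) -> float:
    return sum(r.prediction == r.actual for r in records) / len(records)

def selection_rate(records: List[ScoredRecord], group: str) -> float:
    grp = [r for r in records if r.group == group]
    return sum(r.prediction == 1 for r in grp) / len(grp)

def disparate_impact(records: List[ScoredRecord],
                     monitored: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: monitored group vs. reference group."""
    return selection_rate(records, monitored) / selection_rate(records, reference)

payload = [
    ScoredRecord("female", 1, 1), ScoredRecord("female", 0, 0),
    ScoredRecord("female", 0, 1), ScoredRecord("male", 1, 1),
    ScoredRecord("male", 1, 0),   ScoredRecord("male", 1, 1),
]

print(f"accuracy         = {accuracy(payload):.2f}")
di = disparate_impact(payload, monitored="female", reference="male")
print(f"disparate impact = {di:.2f}  (flag the model if this falls below 0.80)")
```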


Artificial Intelligence Can Now Explain Its Own Decision-Making

#artificialintelligence

People are scared of the unknown. So naturally, one reason why artificial intelligence (AI) hasn't yet been widely adopted may be that the rationale behind a machine's decision-making is still unknown. How can decisions be trusted when people don't know where they come from? This is referred to as the black box of AI, something that needs to be cracked open. As technology continues to play an increasingly important role in day-to-day life and to reshape roles within the workforce, the ethics of algorithms has become a hotly debated topic.
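
One common way to crack that box open is to measure how much each input feature actually drives a model's predictions. The sketch below does this with scikit-learn's permutation importance on a random forest; the synthetic data and the choice of model are assumptions for the example, not something taken from the article.

```python
# Illustrative sketch: explaining a black-box classifier with permutation
# importance (how much shuffling each feature hurts its test accuracy).
# The synthetic dataset and the random-forest model are assumptions here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and record the drop in accuracy: large drops
# mark the features the "black box" really relies on for its decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```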


Artificial Intelligence Can Reinforce Bias, Cloud Giants Announce Tools For AI Fairness

#artificialintelligence

Unfairly trained Artificial Intelligence (AI) systems can reinforce bias, so AI systems must be trained fairly. Experts say AI fairness is a dataset issue for each specific machine learning model, and it is a newly recognized challenge. The big cloud providers are developing and announcing tools to help address AI fairness. In May 2018, Facebook announced that it was developing internal software tools to search for bias in training datasets.
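
Since the experts quoted here frame fairness as a dataset issue, a minimal audit of the training data itself is sketched below; the column names, values, and the 0.80 rule of thumb are invented for illustration rather than drawn from any of the announced tools.

```python
# Minimal training-data audit: compare favorable-outcome rates per group
# before any model is fit. Column names, data, and the 0.80 rule of thumb
# are assumptions for this illustration.
import pandas as pd

train = pd.DataFrame({
    "group":   ["a", "a", "a", "a", "b", "b", "b", "b"],
    "outcome": [1,   1,   1,   0,   1,   0,   0,   0],   # 1 = favorable label
})

rates = train.groupby("group")["outcome"].mean()
print(rates)

# A training set like this (75% favorable for group "a", 25% for group "b")
# teaches a model to reproduce the disparity unless the data are rebalanced.
ratio = rates.min() / rates.max()
print(f"favorable-rate ratio = {ratio:.2f}  (a common rule of thumb flags < 0.80)")
```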

