Fair and Equitable: How IBM Is Removing Bias from AI - DZone AI

#artificialintelligence

As more apps come to market that rely on Artificial Intelligence, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems, ranging from a poor user experience to major errors in critical decision-making. We at IBM have created a solution specifically to address AI bias. Flaws and biases may not be easy to detect without the right tool, which is why IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned, and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in Machine Learning models and data sets.
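
As a rough illustration of the workflow AI Fairness 360 supports, the sketch below builds a small toy dataset, measures group bias with one of the library's metrics, and mitigates it with the Reweighing pre-processing algorithm. The data, column names, and group definitions are illustrative assumptions, not part of the announcement.

    # Minimal AIF360 sketch: measure bias in a toy hiring dataset, then
    # mitigate it with the Reweighing pre-processing algorithm.
    # The toy data and column names below are illustrative only.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # 'sex' is the protected attribute (1 = privileged group),
    # 'hired' is the favorable outcome we want to check for bias.
    df = pd.DataFrame({
        'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
        'score': [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.8, 0.2],
        'hired': [1, 1, 0, 1, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(df=df, label_names=['hired'],
                                 protected_attribute_names=['sex'],
                                 favorable_label=1, unfavorable_label=0)

    priv, unpriv = [{'sex': 1}], [{'sex': 0}]

    # Difference in favorable-outcome rates between groups (0 means no bias).
    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print('mean difference before:', metric.mean_difference())

    # Reweighing adjusts instance weights so both groups are treated equitably.
    rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
    transformed = rw.fit_transform(dataset)
    metric_rw = BinaryLabelDatasetMetric(transformed, unprivileged_groups=unpriv,
                                         privileged_groups=priv)
    print('mean difference after: ', metric_rw.mean_difference())

After reweighing, the mean difference reported by the second metric moves toward zero; this is the same pattern the toolkit's tutorials demonstrate on real datasets.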


Configure, monitor, and understand machine learning models with IBM AI OpenScale

#artificialintelligence

The new "Monitor WML models with AI OpenScale" code pattern shows you how to gain insight into a machine learning model using IBM AI OpenScale. The pattern provides examples of how to configure the AI OpenScale service. You can then enable and explore a model deployed with Watson Machine Learning, and create fairness and accuracy measures for the model. IBM AI OpenScale is an open platform that enables organizations to automate and operate their AI across the full lifecycle. AI OpenScale provides a powerful environment for managing AI and ML models on IBM Cloud, IBM Cloud Private, or other platforms.
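
The configuration steps themselves go through the AI OpenScale service and its tooling; as a hedged, library-agnostic sketch of the kind of fairness and accuracy measures the code pattern creates, the snippet below computes accuracy and disparate impact over a hypothetical batch of scored records from a deployed model. The field names and values are assumptions for illustration, not the OpenScale client API.

    # Hedged illustration (not the AI OpenScale client API): the kind of
    # accuracy and fairness measures a monitor computes over a deployed
    # model's scored payloads. Field names and values are hypothetical.
    from sklearn.metrics import accuracy_score

    records = [  # one dict per scored request captured from the deployment
        {'sex': 'female', 'predicted': 1, 'actual': 1},
        {'sex': 'female', 'predicted': 0, 'actual': 1},
        {'sex': 'male',   'predicted': 1, 'actual': 1},
        {'sex': 'male',   'predicted': 1, 'actual': 0},
    ]

    # Accuracy measure: fraction of predictions matching the recorded outcomes.
    accuracy = accuracy_score([r['actual'] for r in records],
                              [r['predicted'] for r in records])

    # Fairness measure (disparate impact): ratio of favorable-outcome rates
    # for the monitored group vs. the reference group (1.0 means parity).
    def favorable_rate(group):
        rows = [r for r in records if r['sex'] == group]
        return sum(r['predicted'] == 1 for r in rows) / len(rows)

    disparate_impact = favorable_rate('female') / favorable_rate('male')
    print(f'accuracy={accuracy:.2f}  disparate impact={disparate_impact:.2f}')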


IBM continues momentum in AI and trust leadership - DevOps.com

#artificialintelligence

IBM continues to serve as an industry leader in advancing what we call Trusted AI, focused on developing diverse approaches that implement elements of fairness, explainability, and accountability across the entire lifecycle of an AI application. Under our Trusted AI efforts, IBM released the AI Fairness 360 toolkit (AIF360) in 2018, an extensible, open-source toolkit that can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It contains over 70 fairness metrics and 11 state-of-the-art bias mitigation algorithms developed by the research community, and it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. Now, IBM is adding two new ways to make AIF360 even more accessible to a wider range of developers, along with increased functionality: compatibility with scikit-learn and with R. AI fairness is an important topic because machine learning models are increasingly used for high-stakes decisions; machine learning discovers and generalizes patterns in the data and could therefore replicate the systematic advantages of privileged groups.
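
As a minimal sketch of what the scikit-learn compatibility can look like in practice, the snippet below calls two of the toolkit's fairness metrics through the aif360.sklearn module, assuming the convention that the protected attribute is carried in the pandas index; the toy labels and the 'sex' attribute are illustrative assumptions.

    # Hedged sketch of the scikit-learn-compatible API (aif360.sklearn),
    # assuming the metrics read the protected attribute from the pandas index.
    import pandas as pd
    from aif360.sklearn.metrics import (statistical_parity_difference,
                                        disparate_impact_ratio)

    # Binary outcomes indexed by the protected attribute 'sex' (1 = privileged).
    y = pd.Series([1, 1, 0, 1, 1, 0, 0, 0],
                  index=pd.Index([1, 1, 1, 1, 0, 0, 0, 0], name='sex'))

    # Two fairness metrics expressed as plain functions, so they can slot
    # into an ordinary scikit-learn workflow (e.g. wrapped with make_scorer).
    print(statistical_parity_difference(y, prot_attr='sex'))  # 0 means parity
    print(disparate_impact_ratio(y, prot_attr='sex'))         # 1 means parity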


Designing for AI: Trust

#artificialintelligence

At IBM, we're building software solutions that help our users make smarter decisions faster. In the world of data and artificial intelligence (AI), it all comes down to designing products our users can trust enough to help them make those important decisions. This focus on trust goes beyond data security and validation: it's about helping our users understand their data, providing relevant recommendations when they need them, and empowering them to create solutions they can be confident in. As we designed our end-to-end AI platform IBM Cloud Pak for Data, as well as a diverse set of AI offerings and solutions in our IBM Watson portfolio, we focused on the following 8 principles for establishing trust within AI experiences. At IBM, we believe that good design does not sacrifice transparency and that imperceptible AI is not ethical AI.


Artificial Intelligence Can Now Explain Its Own Decision-Making

#artificialintelligence

People are scared of the unknown. So, naturally, one reason why artificial intelligence (AI) hasn't yet been widely adopted may be that the rationale behind a machine's decision-making is still unknown. How can decisions be trusted when people don't know where they come from? This is referred to as the black box of AI: something that needs to be cracked open. As technology continues to play an increasingly important role in day-to-day life and to change roles within the workforce, the ethics behind algorithms has become a hotly debated topic.