How to remove bias from AI models
As AI becomes more pervasive, AI-based discrimination is getting the attention of policymakers and corporate leaders, but keeping it out of AI models in the first place is harder than it sounds. According to a new Forrester report, Put the AI in "Fair" with the Right Approach to Fairness, most organizations adhere to fairness in principle but fail in practice.

"Fairness" has multiple meanings. "To determine whether or not a machine learning model is fair, a company must decide how it will quantify and evaluate fairness," the report said. "Mathematically speaking, there are at least 21 different methods for measuring fairness."

Sensitive attributes are missing. "The essential paradox of fairness in AI is the fact that companies often don't capture protected attributes like race, sexual orientation, and veteran status in their data because they're not supposed to base decisions on them," the report said.
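To make the idea of quantifying fairness concrete, here is a minimal sketch of one commonly used measure, the demographic parity difference (the gap in positive-prediction rates between groups). The report does not prescribe this specific metric; the function name and the toy data below are illustrative assumptions.

```python
# Illustrative sketch of one fairness metric: demographic parity difference.
# The function name and example data are assumptions, not from the report.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between the best- and worst-treated groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A" / "B")
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: group "A" receives a positive prediction 3/4 of the time,
# group "B" only 1/4 of the time, so the parity gap is 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Note that computing a metric like this requires the very group labels the report says companies often do not capture, which is exactly the paradox it describes.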
Dec-2-2021, 10:45:38 GMT