IBM builds a more diverse million-face data set to help reduce bias in AI
Encoding biases into machine learning models, and more generally into the constructs we refer to as AI, is nearly inescapable -- but we can sure do better than we have in past years. IBM is hoping that a new database of a million faces, one more reflective of the diversity of the real world, will help.

Facial recognition is being relied on for everything from unlocking your phone to unlocking your front door, and it is being used to estimate your mood or your likelihood of committing criminal acts -- and we may as well admit that many of these applications are bunk. But even the legitimate ones often fail simple tests, such as working adequately for people of certain skin tones or ages.

This is a multi-layered problem, and of course a major part of it is that many developers and creators of these systems fail to think about, let alone audit for, a failure of representation in their data.
Feb-18-2019, 01:20:26 GMT