AI and ethics: looking inside the black box
AI systems have huge potential for good, but they are only as good as the data they are trained on. As machine learning expands into all areas of our lives, finding uses in healthcare, autonomous driving and law enforcement (to name just a few), any bias is not merely an inconvenience: it could mean a system does more harm than good.

The problem of AI bias is not just theoretical. After noticing that facial recognition systems identified her lighter-skinned colleagues' faces more readily than her own, MIT researcher Joy Buolamwini began a project to find out whether the software struggled with her particular features or whether there was a wider issue. Buolamwini tested systems from IBM, Microsoft and the Chinese company Face++, showing them 1,000 faces and asking them to classify each subject as male or female. She found that all the systems were significantly better at identifying male faces than female ones, and performed better on lighter faces than darker ones.
Jan-31-2019, 07:57:18 GMT