Why it's so hard to create unbiased artificial intelligence
Ben Dickson is a software engineer and the founder of TechTalks.

As artificial intelligence and machine learning mature and show their potential to take on complicated tasks, we've come to expect that robots can succeed where humans have failed -- namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges, and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.

At its core, machine learning uses algorithms to parse data, extract patterns, and make predictions and decisions based on the gleaned insights.
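To see how a model can inherit bias from its training data, consider a minimal sketch (the data, group names, and outcomes here are entirely made up for illustration): a trivial "learner" that simply records the most frequent historical outcome for each group. If the history is skewed, the learned predictions reproduce that skew.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (group, outcome) pairs from a skewed history.
history = ([("A", "approve")] * 9 + [("A", "deny")] * 1 +
           [("B", "approve")] * 3 + [("B", "deny")] * 7)

# "Learning": count outcome frequencies per group.
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    """Predict the most frequent historical outcome for the group."""
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # approve
print(predict("B"))  # deny -- the model faithfully reproduces the bias in its data
```

Nothing in the algorithm is "prejudiced"; the skewed predictions fall directly out of the skewed data, which is the crux of the problem the article describes.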
Nov-14-2016, 00:05:53 GMT