Microsoft's racist robot and the problem with AI development


"I asked @TayandYou their thoughts on abortion, g-g, racism, domestic violence, etc @Microsoft train your bot better"

Trolls and abusers soon began tweeting at Tay, projecting their repugnant and offensive opinions onto Microsoft's constantly learning AI, and she began to reflect those opinions in her own conversations. The company declined to say why it didn't implement protocols for handling harassment or blocking foul language, or whether its engineers had anticipated this kind of behavior. Bias is effectively pre-programmed into technology because it exists in the humans who build it: when the people building a product come from a homogenous group, the result is homogenous technology that can, perhaps unintentionally, become racist. Microsoft's flub is particularly striking in light of Google's recent public AI failure.