Ethical questions in AI use cannot be solved by STEM grads alone
Practical adoption of artificial intelligence (AI) faces a variety of roadblocks. Splashy, high-profile deployments have not been received well: Microsoft's "Tay" bot on Twitter began parroting antisemitic vitriol just 16 hours after launch, Amazon's AI-powered hiring tool displayed bias against women, and the company marketed unreliable facial recognition technology to municipal law enforcement. AI often reflects the biases, including and especially the unconscious biases, of its designers, which makes Facebook's attempt to build an AI with an "ethical compass" a concerning prospect given the multitude of other problems the social network has experienced. According to Northeastern University's Ethics Institute and professional services firm Accenture, which published a guide to building data and AI ethics committees, this is a problem that necessarily requires diversity of thought. Such committees are, by definition, not achievable by pooling together people of similar backgrounds to debate the merits of AI design.
Oct-10-2019, 16:01:50 GMT