Ethical questions in AI use cannot be solved by STEM grads alone

#artificialintelligence 

Practical adoption of artificial intelligence (AI) faces a variety of roadblocks. Splashy, high-profile deployments have not been received well: Microsoft's "Tay" bot on Twitter began parroting anti-Semitic vitriol just 16 hours after launch, Amazon's AI-powered hiring process displayed bias against women, and the company marketed unreliable facial recognition technology to municipal law enforcement. AI systems often reflect the biases--including, and especially, the unconscious biases--of their designers, which makes Facebook's attempt to build an AI with an "ethical compass" a concerning prospect, given the multitude of other problems the social network has experienced. This is a problem that necessarily requires diversity of thought, according to Northeastern University's Ethics Institute and professional services firm Accenture, which together published a guide to building data and AI ethics committees. Such committees cannot, by definition, be formed by pooling people of similar backgrounds to debate the merits of AI design.
