Data Scientists Worry About Human Bias in Machine Learning, AI-Based Warfare -- ADTmag


Data scientists are a happy bunch overall, but they do worry about ethical issues such as human bias and prejudice being programmed into machine learning (ML), and the use of artificial intelligence (AI) and automation in warfare and intelligence gathering. That's a finding in the new "2017 Data Scientist Report" just published by AI specialist CrowdFlower Inc.

"Read any article on AI (and there is no shortage) and shortly behind, you'll likely find mention of ethical issues," the report said. "From the White House to the Wall Street Journal to the World Economic Forum, the question of how we program the future is one of the most critical issues facing not just data scientists but society as a whole. In perhaps the most important question in this year's survey, we asked, 'Which of the following do you personally think might be issues regarding ethics and AI?'"

The top concern raised in answering that question was "human bias/prejudice programmed into machine learning" (listed by 63 percent of respondents), followed by "use of AI and automation in warfare/intelligence" (49 percent). "Unease on the displacement of human workforces and the impossibility of programming a commonly agreed upon moral code also ranked high on the radar of ethical issues for data scientists, tallying in at 41 percent and 42 percent respectively," CrowdFlower said.
