AI researchers say scientific publishers help perpetuate racist algorithms
The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature over a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. It was developed by researchers at Harrisburg University and was due to be presented at an upcoming conference.

The demands: Citing the work of leading Black AI scholars, the letter debunks the scientific basis of the paper and asserts that crime-prediction technologies are racist. It lists three demands: 1) that Springer Nature rescind its offer to publish the study; 2) that it issue a statement condemning the use of statistical techniques such as machine learning to predict criminality and acknowledging its role in incentivizing such research; and 3) that all scientific publishers commit to not publishing similar papers in the future.
Jun-23-2020, 18:36:22 GMT
- AI-Alerts:
- 2020 > 2020-06 > AAAI AI-Alert for Jun 23, 2020 (1.00)
- Country:
- North America
- Canada > Quebec
- Montreal (0.07)
- United States (0.06)
- Genre:
- Press Release (0.38)
- Industry:
- Law > Civil Rights & Constitutional Law (0.73)
- Technology: