The ethical questions that haunt facial-recognition research
In September 2019, four researchers wrote to the publisher Wiley to "respectfully ask" that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity.

China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern province of Xinjiang -- which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces. As a result, many researchers found it disturbing that academics had tried to build such algorithms -- and that a US journal had published a research paper on the topic.

And the 2018 study wasn't the only one: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) had also published peer-reviewed papers describing the use of facial recognition to identify Uyghurs and members of other Chinese minority groups. The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research.
Nov-18-2020