Facial-recognition research needs an ethical reckoning
Cameras using facial-recognition technology in King's Cross, London, were taken down in 2019 after concerns were raised that they had been installed without appropriate consent or involvement of the data regulator. Credit: James Veysey/Shutterstock

Over the past 18 months, a number of universities and companies have been removing online data sets containing thousands -- or even millions -- of photographs of faces used to improve facial-recognition algorithms. The pictures are classified as public data, and their collection did not seem to alarm institutional review boards (IRBs) and other research-ethics bodies. But none of the people in the photos had been asked for permission, and some were unhappy about the way their faces had been used.

This problem has been brought to prominence by the work of Berlin-based artist and researcher Adam Harvey, who highlighted how public data sets are used by companies to hone surveillance-linked technology -- and by the journalists who reported on Harvey's work. Many researchers in the fields of computer science and artificial intelligence (AI), and those responsible for the relevant institutional ethical-review processes, did not see any harm in using public data without consent.
Jan-26-2021, 06:12:15 GMT