AI Research Is in Desperate Need of an Ethical Watchdog
About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: they had built a machine-learning algorithm that essentially works as gaydar. After training it on tens of thousands of photographs from dating sites, the algorithm could outperform human judges in specific instances. For example, when shown photographs of a gay white man and a straight white man taken from dating sites, the algorithm guessed which one was gay more accurately than the people participating in the study. The researchers said their aim was to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper.
Dec-28-2017, 16:06:13 GMT