
Facebook to shut down face-recognition system, delete data

PBS NewsHour

Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people. "This change will represent one of the largest shifts in facial recognition usage in the technology's history," said a blog post Tuesday from Jerome Pesenti, vice president of artificial intelligence for Facebook's new parent company, Meta. "More than a third of Facebook's daily active users have opted in to our Face Recognition setting and are able to be recognized, and its removal will result in the deletion of more than a billion people's individual facial recognition templates." He said the company was trying to weigh the positive use cases for the technology "against growing societal concerns, especially as regulators have yet to provide clear rules."


Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

Barocas, Solon, Guo, Anhong, Kamar, Ece, Krones, Jacquelyn, Morris, Meredith Ringel, Vaughan, Jennifer Wortman, Wadsworth, Duncan, Wallach, Hanna

arXiv.org Artificial Intelligence

Several pieces of work have uncovered performance disparities by conducting "disaggregated evaluations" of AI systems. We build on these efforts by focusing on the choices that must be made when designing a disaggregated evaluation, as well as some of the key considerations that underlie these design choices and the tradeoffs between these considerations. We argue that a deeper understanding of the choices, considerations, and tradeoffs involved in designing disaggregated evaluations will better enable researchers, practitioners, and the public to understand the ways in which AI systems may be underperforming for particular groups of people.


This Company Uses AI to Outwit Malicious AI

#artificialintelligence

In September 2019, the National Institute of Standards and Technology issued its first-ever vulnerability notice for an attack on a commercial artificial intelligence algorithm. Security researchers had devised a way to attack a Proofpoint product that uses machine learning to identify spam emails. The system produced email headers that included a "score" indicating how likely a message was to be spam. By analyzing these scores alongside the contents of the messages, the researchers were able to build a clone of the machine-learning model and then craft spam messages that evaded detection. The vulnerability notice may be the first of many.
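The attack described above is a classic model-extraction pattern: probe a black-box scorer, use the leaked scores to fit a local clone, then use the clone offline to craft inputs that slip under the decision threshold. The toy sketch below illustrates the idea under strong simplifying assumptions — every function and weight here is invented for illustration and has no connection to Proofpoint's actual system, which is far more complex than a linear bag-of-words scorer.

```python
# Hypothetical illustration of score-based model extraction.
# `black_box_score` stands in for the remote spam filter whose
# scores leaked via email headers; its weights are "secret" to
# the attacker, who can only observe inputs and returned scores.

def black_box_score(words):
    # Stand-in for the remote filter: higher score = more spammy.
    SECRET_WEIGHTS = {"free": 0.6, "winner": 0.5,
                      "invoice": 0.1, "meeting": -0.4}
    return sum(SECRET_WEIGHTS.get(w, 0.0) for w in words)

def extract_weights(vocabulary):
    # Extraction step: probe the black box with single-word
    # messages and recover a per-word weight for the clone.
    baseline = black_box_score([])
    return {w: black_box_score([w]) - baseline for w in vocabulary}

clone = extract_weights(["free", "winner", "invoice", "meeting"])

def craft_evasive(words, threshold=0.5):
    # Evasion step: using only the offline clone, pad a spammy
    # message with low-weight words until its predicted score
    # falls below the (assumed) spam threshold.
    msg = list(words)
    while sum(clone.get(w, 0.0) for w in msg) >= threshold:
        msg.append("meeting")
    return msg

evasive = craft_evasive(["free", "winner"])
print(black_box_score(evasive) < 0.5)  # → True: the real filter is fooled
```

Because the toy scorer is linear, single-word probes recover its weights exactly; against a real system an attacker would instead fit an approximate surrogate model to many (message, score) pairs, but the workflow — query, clone, evade — is the same.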