Now That Machines Can Learn, Can They Unlearn? - AI Summary

#artificialintelligence

Early this year, the US Federal Trade Commission forced facial recognition startup Paravision to delete a collection of improperly obtained face photos and machine-learning algorithms trained with them. FTC Commissioner Rohit Chopra praised that new enforcement tactic as a way to force a company breaching data rules to "forfeit the fruits of its deception." Roth and collaborators from Penn, Harvard, and Stanford recently demonstrated a flaw in that approach, showing that the unlearning system would break down if submitted deletion requests came in a particular sequence, either through chance or from a malicious actor. It will take virtuoso technical work before tech companies can actually implement machine unlearning as a way to offer people more control over the algorithmic fate of their data. Binns says that while it can be genuinely useful, "in other cases it's more something a company does to show that it's innovating."

  artificial intelligence, face photo and machine-learning algorithm, machine learning, (11 more...)
  AI-Alerts: 2021 > 2021-08 > AAAI AI-Alert for Aug 24, 2021 (1.00)
  Industry: Information Technology (0.65)

Now That Machines Can Learn, Can They Unlearn? - AI Summary

#artificialintelligence

A nascent area of computer science dubbed machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. "This research aims to find some middle ground," says Aaron Roth, a professor at the University of Pennsylvania who is working on machine unlearning. Early this year, the US Federal Trade Commission forced facial recognition startup Paravision to delete a collection of improperly obtained face photos and machine-learning algorithms trained with them. FTC commissioner Rohit Chopra praised that new enforcement tactic as a way to force a company breaching data rules to "forfeit the fruits of its deception." Roth and collaborators from Penn, Harvard, and Stanford recently demonstrated a flaw in that approach, showing that the unlearning system would break down if submitted deletion requests came in a particular sequence, either through chance or from a malicious actor.