A.I. Privacy Assistants Could Stop You From Exposing Sensitive Info

#artificialintelligence 

As the hundreds of people who have publicly posted pictures of their debit cards on Twitter can attest, it's often easy to unwittingly expose private information in the age of social media. But what if a friendly automated assistant, similar to Siri or Alexa, warned you before you shared sensitive images, potentially mitigating threats like online stalking and identity theft?

That's the idea behind a recent study from researchers at the Max Planck Institute for Informatics in Germany, who say they've built an AI-powered privacy watchdog that can learn a person's privacy preferences and caution them whenever private information might be exposed in the pictures they post to social media. "Our model is trained to predict the user specific privacy risk and even outperforms the judgment of the users, who often fail to follow their own privacy preferences," the researchers write in a recent paper, which awaits peer review. "In fact -- as our study shows -- people frequently misjudge the privacy relevant information content in an image -- which leads to failure of enforcing their own privacy preferences."
