Inference attacks: How much information can machine learning models leak?


The widespread adoption of machine learning models across different applications has given rise to a new range of privacy and security concerns. Among them are 'inference attacks', in which attackers cause a target machine learning model to leak information about its training data. However, these attacks are not yet well understood, and we need to readjust our definitions and expectations of how they can affect our privacy.

That is the warning from researchers at several academic institutions in Australia and India, made in a new paper (PDF) accepted at the IEEE European Symposium on Security and Privacy, which will be held in September. The paper was jointly authored by researchers at the University of New South Wales; the Birla Institute of Technology and Science, Pilani; Macquarie University; and the Cyber & Electronic Warfare Division of the Defence Science and Technology Group, Australia.
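To make the idea concrete, here is a minimal sketch of one well-known form of inference attack: membership inference via confidence thresholding, which exploits the tendency of overfitted models to be more confident on examples they were trained on. The dataset, model, and threshold below are illustrative assumptions for this sketch, not the specific attacks or setup studied in the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a target model on "private" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    X_train, y_train
)

def membership_attack(model, X, threshold=0.9):
    """Guess 'member of the training set' whenever the model's top-class
    confidence exceeds a threshold. Overfitted models are typically far
    more confident on points they were trained on."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence >= threshold

# Evaluate the attack on true members (training data) vs non-members
# (held-out data). A true-positive rate well above the false-positive
# rate indicates that the model is leaking membership information.
tpr = membership_attack(target, X_train).mean()
fpr = membership_attack(target, X_test).mean()
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")

Even this crude heuristic can distinguish training members from non-members on an overfitted model, which is the kind of leakage the researchers argue needs clearer definitions and expectations.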