Privacy Attacks on Machine Learning Models
Machine learning is an exciting field of new opportunities and applications; but like most technology, it also brings dangers as machine learning systems expand their reach within our organizations. The use of machine learning on sensitive information -- such as financial data, shopping histories, conversations with friends, and health-related data -- has grown over the past five years, and so has the research on vulnerabilities within those machine learning systems. In the news and commentary today, the most common example of hacking a machine learning system is adversarial input: crafted examples, like the one in the video below, that fool a machine learning system into making a false prediction. In the video, researchers at MIT show that a 3D-printed adversarial turtle is misclassified as a rifle from multiple angles by a computer vision system.
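The core idea behind adversarial input can be shown on a toy model. The sketch below is illustrative only (the weights, input, and epsilon are made up, not from any real system): for a linear classifier, the gradient of the score with respect to the input is just the weight vector, so a small step against the sign of the weights (an FGSM-style perturbation) is enough to flip the prediction.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0.
# The weights and bias here are hypothetical, not a trained model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the toy model classifies as class 1.
x_clean = np.array([1.0, 0.2, 0.3])

# FGSM-style perturbation: for a linear model, the gradient of the
# score w.r.t. x is w, so subtracting eps * sign(w) lowers the score
# the most per unit of L-infinity perturbation budget.
eps = 0.9
x_adv = x_clean - eps * np.sign(w)

print(predict(x_clean))  # class 1 on the clean input
print(predict(x_adv))    # class 0 on the perturbed input
```

Real attacks like the 3D-printed turtle work on deep networks rather than linear models, and keep the perturbation small enough (or shaped enough) that a human still sees the original object while the classifier does not.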