Adversarial machine learning: With artificial intelligence comes new types of attacks
Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that same learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.

In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application into revealing patients' private medical histories. In one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause a vehicle to dangerously slam on the brakes at highway speeds; in other examples, they altered Stop signs so that they were perceived as a variety of other traffic instructions.

"If machine learning is the software of the future, we're at a very basic starting point for securing it," said Prateek Mittal, the lead researcher and an associate professor in the Department of Electrical Engineering at Princeton.
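The article does not describe the team's specific technique, but the road-sign attack it mentions is typically built on small, targeted input perturbations. A minimal sketch of that idea, using the standard fast gradient sign method on a toy linear classifier (the model, weights, and class labels here are illustrative assumptions, not the Princeton team's actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained sign classifier: a linear model whose score
# is positive for "speed limit" and negative for "stop".
w = rng.normal(size=64)   # learned weights (stand-in for a trained network)
x = rng.normal(size=64)   # flattened input image features

def score(x):
    # Positive => classified as "speed limit"; negative => "stop".
    return w @ x

# Fast gradient sign method (FGSM): nudge each input feature a small
# amount in the direction that moves the score toward the target class.
# For a linear model the gradient of the score w.r.t. x is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)  # push the score toward "stop"

print(score(x), score(x_adv))  # adversarial score is strictly lower
```

The perturbation is bounded per-feature by `epsilon`, which is why such attacks can be imperceptible to humans while still flipping the model's decision.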
Oct-26-2022, 05:15:12 GMT