Hacked Dog Pics Can Play Tricks on Computer Vision AI
Tricking Google's computer vision AI into seeing a dog as a pair of human skiers may seem mostly harmless. But the possibilities become more unnerving when you consider how hackers could trick a self-driving car's AI into seeing a plastic bag instead of a child up ahead, or fool future surveillance systems into overlooking a gun by making it register as a toy doll. An independent AI research group run by MIT students has demonstrated a new way to fool the computer vision algorithms that enable AI systems to see the world, an approach that could prove up to 1,000 times as fast as existing ways of hacking "black box" systems whose inner workings remain hidden from outsiders. That idea of a black box aptly describes the neural networks behind the deep learning algorithms powering computer vision services at Google, Facebook, and other companies.
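The "black box" setting described here can be sketched with a toy example: the attacker can only query the model for its output scores (no access to weights or gradients) and uses random search to nudge an input until the predicted label flips. Everything below, including the linear stand-in "model" and the `query_scores` helper, is a hypothetical illustration of the general black-box idea, not the MIT group's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box image classifier: 2 classes,
# an 8x8 "image" flattened to 64 values. (Hypothetical model.)
W = rng.normal(size=(2, 64))

def query_scores(x):
    """Black-box access: the attacker sees class scores only."""
    return W @ x

# Start from an input the model labels as class 0.
x = rng.normal(size=64)
if np.argmax(query_scores(x)) != 0:
    x = -x  # flip sign so the starting label is class 0
orig_label = int(np.argmax(query_scores(x)))

# Query-only random search: propose small perturbations and keep
# any that shrink the score margin of the original class.
adv = x.copy()
eps = 0.05  # per-step perturbation scale (arbitrary choice)
for _ in range(2000):
    delta = eps * rng.normal(size=64)
    s_old = query_scores(adv)
    s_new = query_scores(adv + delta)
    margin_old = s_old[orig_label] - s_old[1 - orig_label]
    margin_new = s_new[orig_label] - s_new[1 - orig_label]
    if margin_new < margin_old:
        adv = adv + delta
    if np.argmax(query_scores(adv)) != orig_label:
        break  # the model now reports a different label

print("label flipped:", int(np.argmax(query_scores(adv))) != orig_label)
```

The key point is that every step relies only on comparing queried scores; no gradient information leaks from the model. Real attacks of this kind differ mainly in how cleverly they choose each query, which is where the reported speedups come from.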
Dec-22-2017, 16:30:23 GMT
- Country:
- Europe
- Germany (0.05)
- Switzerland (0.05)
- North America > United States
- Utah (0.05)
- Industry:
- Information Technology > Security & Privacy (0.71)
- Transportation > Air (0.59)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks
- Deep Learning (0.82)
- Robots (1.00)
- Vision (1.00)