The rush to commercialize AI is creating major security risks
At this year's International Conference on Learning Representations (ICLR), a team of researchers from the University of Maryland presented an attack technique meant to slow down deep learning models that have been optimized for fast and sensitive operations. The attack, aptly named DeepSloth, targets "adaptive deep neural networks," a range of deep learning architectures that cut down computations to speed up processing.

Recent years have seen growing interest in the security of machine learning and deep learning, and there are numerous papers and techniques on hacking and defending neural networks. But one thing made DeepSloth particularly interesting: the researchers at the University of Maryland were presenting a vulnerability in a technique they themselves had developed two years earlier. In some ways, the story of DeepSloth illustrates the challenges that the machine learning community faces.
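To make the target of the attack concrete, the sketch below illustrates the "early exit" idea behind adaptive deep neural networks: intermediate classifiers let easy inputs leave the network after only a few layers, so not every input pays the full computational cost. This is a minimal toy illustration, not code from the DeepSloth paper; all function names and the threshold value are illustrative assumptions.

```python
# Toy sketch of early-exit (adaptive) inference. Each layer transforms the
# representation; each exit head returns a (label, confidence) pair, and
# inference stops as soon as confidence clears a threshold.
def early_exit_inference(x, layers, exit_heads, threshold=0.9):
    """Return (predicted_label, number_of_layers_used)."""
    h = x
    label = None
    for i, (layer, head) in enumerate(zip(layers, exit_heads)):
        h = layer(h)                    # run one more layer
        label, confidence = head(h)     # ask the attached exit classifier
        if confidence >= threshold:
            return label, i + 1         # confident early: skip the rest
    return label, len(layers)           # hard input: used the full network


# Toy usage: three "layers" that each accumulate evidence for class 0.
layers = [lambda h: h + 0.4] * 3
heads = [lambda h: (0, min(h, 1.0))] * 3

print(early_exit_inference(0.3, layers, heads))  # easy input exits after 2 layers
print(early_exit_inference(0.0, layers, heads))  # harder input uses all 3 layers
```

A slowdown attack like DeepSloth works against exactly this mechanism: it perturbs inputs so that no exit head ever becomes confident enough to fire, forcing every input through the full network and erasing the efficiency gain.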
Jun-7-2021, 11:10:07 GMT