Researchers At Skoltech Institute Explain How Turing-Like Patterns Cause Neural Networks To Make Mistakes


Although adept at image recognition and classification, deep neural networks remain vulnerable to adversarial perturbations: small but peculiar details in an image that cause errors in a network's output. Some of these perturbations are universal, meaning they tend to interfere with the network when applied to almost any input. A research paper presented at the 35th AAAI Conference on Artificial Intelligence by researchers at Skoltech demonstrated that the patterns that cause neural networks to make mistakes in image recognition are, in fact, similar to Turing patterns found throughout the natural world. This result could help design defenses for pattern recognition systems that are currently susceptible to such attacks.
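To make the idea of an adversarial perturbation concrete, here is a minimal sketch. It is not the paper's method: it uses a toy linear classifier and a fast-gradient-sign-style step, and every name in it is illustrative. The point it demonstrates is only that a small, deliberately chosen change to an input can flip a model's prediction.

```python
import numpy as np

def classify(w, x):
    """Toy linear classifier: predict +1 if w . x > 0, else -1."""
    return 1 if np.dot(w, x) > 0 else -1

def fgsm_perturb(w, x, eps):
    """FGSM-style step: for a linear score w . x, the gradient
    w.r.t. x is just w, so step against its sign to lower the score."""
    return x - eps * np.sign(w)

w = np.array([1.0, -1.0])   # fixed (hypothetical) classifier weights
x = np.array([0.3, 0.1])    # clean input, classified as +1

x_adv = fgsm_perturb(w, x, eps=0.25)  # each pixel moves by at most 0.25

print(classify(w, x))      # prints 1
print(classify(w, x_adv))  # prints -1: a small perturbation flips the label
```

Universal perturbations go a step further: a single perturbation pattern is optimized so that adding it to almost any input degrades the classifier, which is the kind of pattern the Skoltech work relates to Turing patterns.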
