New method for comparing neural networks exposes how artificial intelligence works: Adversarial training makes it harder to fool the networks


"The artificial intelligence research community doesn't necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don't know how or why," said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. "Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI." Jones is the lead author of the paper "If You've Trained One You've Trained Them All: Inter-Architecture Similarity Increases With Robustness," which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks. Neural networks are high performance, but fragile. For example, self-driving cars use neural networks to detect signs.
