New method for comparing neural networks exposes how artificial intelligence works: Adversarial training makes it harder to fool the networks
"The artificial intelligence research community doesn't necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don't know how or why," said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. "Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI." Jones is the lead author of the paper "If You've Trained One You've Trained Them All: Inter-Architecture Similarity Increases With Robustness," which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks. Neural networks are high performance, but fragile. For example, self-driving cars use neural networks to detect signs.