Training Artificial Neural Networks by Generalized Likelihood Ratio Method: Exploring Brain-like Learning to Improve Adversarial Defensiveness

Li Xiao, Yijie Peng, Jeff Hong, Zewu Ke

arXiv.org Machine Learning 

Recent work in deep learning has shown that artificial neural networks are vulnerable to adversarial attacks, where a very small perturbation of the inputs can drastically alter the classification result. In this work, we propose a generalized likelihood ratio method capable of training artificial neural networks with some biological brain-like mechanisms, e.g., (a) learning by the loss value, and (b) learning via neurons with discontinuous activation and loss functions. The traditional backpropagation method cannot train artificial neural networks with the aforementioned brain-like learning mechanisms. Numerical results show that various artificial neural networks trained by the new method significantly improve defensiveness against adversarial attacks. Code is available: \url{https://github.com/LX-doctorAI/GLR_ADV}.
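To illustrate the general idea of "learning by the loss value" through a discontinuous activation, here is a minimal, hypothetical sketch of a likelihood-ratio (score-function) gradient estimator for a single neuron with a sign activation and a 0-1 loss. The toy data, Gaussian noise injection, and single-layer setup are assumptions for illustration only; this is not the authors' exact GLR estimator or their released code.

```python
# Hypothetical sketch: likelihood-ratio (score-function) training of a single
# neuron with a discontinuous sign activation and 0-1 loss. Only the observed
# loss value is used; no derivative of the activation or loss is required.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (assumed for illustration).
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)            # labels in {-1, +1}

W = np.zeros(d)                    # weights of the single neuron
sigma = 0.5                        # std of the injected Gaussian noise
lr = 0.05

def zero_one_loss(W, x, target, eps):
    """Discontinuous loss through a non-differentiable sign activation."""
    z = W @ x + eps                # noisy pre-activation
    pred = np.sign(z)
    return float(pred != target)

for epoch in range(50):
    grad = np.zeros(d)
    for x, target in zip(X, y):
        eps = rng.normal(scale=sigma)
        L = zero_one_loss(W, x, target, eps)
        # Score-function identity for Gaussian-perturbed pre-activations:
        #   d/dW E[L] = E[ L * (eps / sigma^2) * x ]
        grad += L * (eps / sigma**2) * x
    W -= lr * grad / n

print("training accuracy:", np.mean(np.sign(X @ W) == y))
```

The key design point this sketch shows is that the gradient estimate multiplies the scalar loss value by the score of the injected noise, so it remains well defined even though both the activation and the loss are discontinuous.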
