Testing Spintronics Implemented Monte Carlo Dropout-Based Bayesian Neural Networks

Ahmed, Soyed Tuhin, Hefenbrock, Michael, Prenat, Guillaume, Anghel, Lorena, Tahoori, Mehdi B.

arXiv.org Artificial Intelligence 

Bayesian Neural Networks (BayNNs) can inherently estimate predictive uncertainty, facilitating informed decision-making. Dropout-based BayNNs are increasingly implemented in spintronics-based computation-in-memory architectures for resource-constrained yet high-performance safety-critical applications. Although uncertainty estimation is important, the reliability of Dropout generation and of the BayNN computation itself is equally important for the target applications, yet it is overlooked in existing works. Moreover, testing BayNNs is significantly more challenging than testing conventional NNs, due to their stochastic nature. In this paper, we present, for the first time, a model of the non-idealities of the spintronics-based Dropout module and analyze their impact on uncertainty estimates and accuracy. Furthermore, we propose a testing framework based on repeatability ranking for Dropout-based BayNNs that achieves up to 100% fault coverage while using only 0.2% of the training data as test vectors.

Bayesian Neural Networks (BayNNs) offer substantial benefits over conventional neural networks (NNs), particularly in safety-critical applications where reliability and confidence in predictions are paramount [1]. Unlike traditional NNs, BayNNs can inherently capture and estimate the uncertainty of their predictions, enhancing decision-making under uncertain conditions. However, their implementation faces significant computational bottlenecks, especially on edge devices. Spintronics-based computation-in-memory (Spintronics-CIM) architectures are a promising solution for the hardware realization of BayNNs, as they mitigate some of the inherent computational costs, balancing high-performance demands with the constraints of resource-limited devices.
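To make the mechanism concrete, the following is a minimal, illustrative sketch of Monte Carlo Dropout inference: dropout masks are kept active at test time, several stochastic forward passes are averaged, and the sample variance serves as an uncertainty proxy. The toy two-layer network, its random weights, and the function names (`forward`, `mc_dropout_predict`) are assumptions for illustration only, not the paper's spintronics implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed random weights (illustrative only).
W1 = rng.standard_normal((4, 16))
W2 = rng.standard_normal((16, 3))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with an active Bernoulli dropout mask."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # dropout stays ON at inference
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True) # softmax class probabilities

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; per-class variance is an uncertainty proxy."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

x = rng.standard_normal((1, 4))
mean, var = mc_dropout_predict(x)
```

In a hardware realization, the Bernoulli mask above would be produced by the stochastic switching of spintronic devices rather than a software RNG, which is precisely why faults in the Dropout module can silently distort the uncertainty estimate.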