Prime Convolutional Model: Breaking the Ground for Theoretical Explainability
Francesco Panelli, Doaa Almhaithawi, Tania Cerquitelli, Alessandro Bellini
arXiv.org Artificial Intelligence
In this paper, we propose a new theoretical approach to Explainable AI. Following the Scientific Method, this approach consists in formulating, on the basis of empirical evidence, a mathematical model to explain and predict the behaviors of Neural Networks. We apply the method to a case study created in a controlled environment, which we call Prime Convolutional Model (p-Conv for short). p-Conv operates on a dataset consisting of the first one million natural numbers and is trained to identify congruence classes modulo a given integer $m$. Its architecture uses a convolutional-type neural network that associates a sequence of $B$ consecutive numbers with each input. We take an empirical approach and exploit p-Conv to identify the congruence classes of numbers in a validation set using different values of $m$ and $B$. The results show that the different behaviors of p-Conv (i.e., whether it can perform the task or not) can be modeled mathematically in terms of $m$ and $B$. The inferred mathematical model reveals interesting patterns that explain when and why p-Conv succeeds at the task and, when it fails, which error pattern it follows.
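To make the task concrete, the following is a minimal sketch of how the training data described in the abstract might be constructed: each input is a window of $B$ consecutive natural numbers, and the label is a congruence class modulo $m$. The windowing and labeling conventions here (labeling each window by its first element) are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the p-Conv dataset: windows of B consecutive
# naturals, each labeled with a congruence class modulo m.
# Labeling the window by its first element is an assumption.

def make_dataset(n_max, m, B):
    """Return (window, label) pairs over the first n_max naturals.

    window: the B consecutive numbers (n, n+1, ..., n+B-1)
    label:  n mod m, the congruence class of the window's first element
    """
    data = []
    for n in range(1, n_max - B + 2):  # keep windows inside [1, n_max]
        window = tuple(range(n, n + B))
        data.append((window, n % m))
    return data

# Example: windows of B=3 consecutive numbers, classes modulo m=4
pairs = make_dataset(10, m=4, B=3)
print(pairs[0])   # ((1, 2, 3), 1)
print(pairs[-1])  # ((8, 9, 10), 0)
```

In the paper's setting, `n_max` would be one million, and the network would be trained on such pairs for varying $m$ and $B$ to probe when classification succeeds.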
Mar-4-2025