KLay: Accelerating Neurosymbolic AI
Jaron Maene, Vincent Derkinderen, Pedro Zuidberg Dos Martires
A popular approach to neurosymbolic AI involves mapping logic formulas to arithmetic circuits (computation graphs consisting of sums and products) and passing the outputs of a neural network through these circuits. This approach enforces symbolic constraints onto a neural network in a principled and end-to-end differentiable way. Unfortunately, arithmetic circuits are challenging to run on modern AI accelerators, as they exhibit a high degree of irregular sparsity.

Interest in neurosymbolic AI (Hitzler & Sarker, 2022) continues to grow, as the integration of symbolic reasoning and neural networks has been shown to increase reasoning capabilities (Yi et al., 2018; Trinh et al., 2024), safety (Yang et al., 2023), controllability (Jiao et al., 2024), and interpretability (Koh et al., 2020). Furthermore, neurosymbolic methods often require less data by allowing a richer and more explicit set of priors (Diligenti et al., 2017; Manhaeve et al., 2018). However, since the computational structure of many neurosymbolic models is partially dense (in the neural component) and partially sparse (in the symbolic component), efficiently learning such models remains a challenge (Wan et al., 2024). So far, the symbolic components of these models have struggled to fully exploit the potential of modern AI accelerators. Our work focuses on a particular flavor of neurosymbolic AI, pioneered by Xu et al. (2018) and Manhaeve et al. (2018), which performs probabilistic inference on the outputs of a neural network. This is achieved by encoding the symbolic knowledge as arithmetic circuits.
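To make the circuit construction concrete, here is a minimal sketch (ours, not from the paper) of an arithmetic circuit in PyTorch for a hypothetical "exactly one of three" constraint. The sum ranges over the satisfying assignments of the formula, and products combine the per-variable probabilities produced by a network; the constraint, variable names, and logit values are illustrative assumptions.

```python
import torch

# Stand-in for a neural network's outputs: predicted probabilities
# for three Boolean variables x1, x2, x3 (illustrative values).
logits = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)
p1, p2, p3 = torch.sigmoid(logits)  # p_i = P(x_i = true)

# Arithmetic circuit (sums and products) for the formula
# "exactly one of x1, x2, x3 is true": one product term per
# satisfying assignment, summed together.
prob_constraint = (
    p1 * (1 - p2) * (1 - p3)
    + (1 - p1) * p2 * (1 - p3)
    + (1 - p1) * (1 - p2) * p3
)

# Training signal in the style of Xu et al. (2018): maximize the
# probability that the constraint holds; gradients flow end-to-end
# back into the network's logits.
loss = -torch.log(prob_constraint)
loss.backward()
print(prob_constraint.item(), logits.grad)
```

For a toy formula like this the circuit is tiny, but circuits compiled from realistic knowledge bases grow large and irregularly sparse, which is precisely what makes them hard to map onto the dense tensor operations modern accelerators are optimized for.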
arXiv.org Artificial Intelligence
Oct-15-2024