Review: Embedding Symbolic Knowledge into Deep Networks
–Neural Information Processing Systems
This paper introduces a method for incorporating prior knowledge, encoded as logical rules, to improve the performance of deep learning models. Specifically, it takes logical rules in deterministic decomposable negation normal form (d-DNNF) and proposes an augmented graph convolutional network to embed them into a vector space. This embedding is regularised according to the logical constraints, which allows a "logic loss" term to be added during training so that the model learns to obey these rules.

Incorporating symbolic background knowledge to improve the performance of deep learning methods is an interesting and valuable direction, and the experiments suggest that using a d-DNNF rather than a CNF is beneficial. However, the choice of a d-DNNF as the source of background knowledge raises a few issues that I feel are not addressed in the paper.
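To make the "logic loss" idea concrete, the following is a minimal sketch, not the paper's exact formulation: the rule (rain implies wet ground), the product t-norm relaxation of implication, and the weighting factor `lam` are all illustrative assumptions. The point is only that a soft penalty for rule violations can be added to an ordinary task loss.

```python
# Hedged sketch of a "logic loss": a differentiable penalty that is zero
# when the model's predicted probabilities satisfy a propositional rule.
# The rule and relaxation below are hypothetical examples, not taken from
# the paper under review.

def logic_loss(p_rain, p_wet):
    """Soft penalty for violating the rule rain -> wet_ground.

    Under the product t-norm relaxation, the truth value of (a -> b)
    is approximated as 1 - a * (1 - b), so the penalty is a * (1 - b):
    large only when p(rain) is high but p(wet_ground) is low.
    """
    truth = 1.0 - p_rain * (1.0 - p_wet)
    return 1.0 - truth  # 0.0 when the rule is fully satisfied


def total_loss(task_loss, p_rain, p_wet, lam=0.5):
    """Combine the ordinary task loss with the weighted logic loss.

    lam trades off fitting the data against respecting the background
    knowledge; its value here is an arbitrary illustrative choice.
    """
    return task_loss + lam * logic_loss(p_rain, p_wet)
```

For example, predictions consistent with the rule (high p_rain, high p_wet) incur no extra penalty, while inconsistent ones (high p_rain, low p_wet) are pushed back toward the constraint by the gradient of the added term.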