Reviews: End-to-End Differentiable Proving
Neural Information Processing Systems
Summary of paper
----------------
The paper presents a novel class of models, termed Neural Theorem Provers (NTPs), for automated knowledge base completion and automated theorem proving, using a deep neural network architecture. The recursive construction of the network is inspired by the backward chaining algorithm typically employed in logic programming (i.e., using the basic operations of unification, conjunction, and disjunction). Instead of operating directly on symbols, the neural network learns subsymbolic vector representations of entities and predicates, which are then exploited to assess the similarity of symbols. Since the proposed architecture is fully differentiable, knowledge base completion can be performed using gradient descent. Thus, following the fundamental philosophy of neural-symbolic systems, the paper aims at combining the advantages of symbolic reasoning with those of subsymbolic inference.
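To make the summary concrete, the core idea of replacing exact symbol matching with embedding similarity can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the kernel choice, the `min` aggregation, and all embeddings below are assumptions for the example.

```python
import numpy as np

def soft_unify(theta_s, theta_t, prev_score=1.0):
    """Soft unification: instead of requiring two symbols to match
    exactly, compare their embeddings with a similarity kernel and
    combine with the score of the proof so far (here via min, as an
    illustrative aggregation)."""
    sim = np.exp(-np.linalg.norm(theta_s - theta_t))  # in (0, 1]
    return min(prev_score, sim)

# Toy predicate embeddings (hypothetical values for illustration).
emb = {
    "grandfatherOf": np.array([0.90, 0.10, 0.00]),
    "grandpaOf":     np.array([0.85, 0.15, 0.05]),
}

# Similar predicates unify with a high (but not perfect) score.
score = soft_unify(emb["grandfatherOf"], emb["grandpaOf"])
```

Because the similarity is a differentiable function of the embeddings, gradients from the final proof score can flow back into the symbol representations, which is what enables training by gradient descent.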