Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks

Sen, Prithviraj, de Carvalho, Breno W. S. R., Riegel, Ryan, Gray, Alexander

arXiv.org Artificial Intelligence 

Inductive logic programming (ILP) (Muggleton 1996) has been of long-standing interest, where the goal is to learn logical rules from labeled data. Since rules are explicitly symbolic, they provide certain advantages over black-box models. For instance, learned rules can be inspected, understood, and verified, forming a convenient means of storing learned knowledge. Consequently, a number of approaches have been proposed to address ILP including, but not limited to, statistical relational learning (Getoor and Taskar 2007) and, more recently, neuro-symbolic methods.

We propose first-order extensions of LNNs that can tackle ILP. Since vanilla backpropagation is insufficient for constraint optimization, we propose flexible learning algorithms capable of handling a variety of (linear) inequality and equality constraints. We experiment with diverse benchmarks for ILP, including gridworld and knowledge base completion (KBC), that call for learning different kinds of rules, and show how our approach can tackle both effectively. In fact, our KBC results represent a 4-16% relative improvement
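To illustrate why plain backpropagation is insufficient when parameters must satisfy linear constraints, the sketch below (a generic illustration in NumPy, not the authors' algorithm) runs projected gradient descent: after each gradient step, the weight vector is projected back onto a hypothetical LNN-style feasible set, here `w >= 0` and `sum(w) >= 1`. The loss, target values, and constraint set are all assumptions for the example.

```python
import numpy as np

# Hypothetical quadratic loss pulling weights toward a target that
# violates the constraints (the second entry is negative).
target = np.array([0.2, -0.5, 0.9])

def grad(w):
    # Gradient of 0.5 * ||w - target||^2
    return w - target

def project(w, n_iters=50):
    """Alternating projections onto {w_i >= 0} and {sum(w) >= 1}."""
    for _ in range(n_iters):
        w = np.maximum(w, 0.0)              # project onto w_i >= 0
        s = w.sum()
        if s < 1.0:                         # project onto 1.w >= 1
            w = w + (1.0 - s) / len(w)
    return np.maximum(w, 0.0)

w = np.full(3, 0.5)
for _ in range(200):
    w = project(w - 0.1 * grad(w))          # gradient step, then project

print(np.round(w, 3))                       # feasible point near [0.2, 0, 0.9]
```

A vanilla gradient update would converge to the infeasible target itself; the projection step is what keeps the learned weights inside the constraint set, which is the role the paper's more flexible learning algorithms play for LNN rule weights.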