Stacked Structure Learning for Lifted Relational Neural Networks
Sourek, Gustav, Svatos, Martin, Zelezny, Filip, Schockaert, Steven, Kuzelka, Ondrej
Lifted Relational Neural Networks (LRNNs [15]) are weighted sets of first-order rules, which are used to construct feed-forward neural networks from relational structures. A central characteristic of LRNNs is that a different neural network is constructed for each learning example, but crucially, the weights of these different neural networks are shared. This allows LRNNs to use neural networks for learning in relational domains, despite the fact that training examples may vary considerably in size and structure. In previous work, LRNNs have been learned from handcrafted rules; in such cases, only the weights of the first-order rules have to be learned from training data, which can be accomplished using a variant of back-propagation. The use of handcrafted rules offers a natural way to incorporate domain knowledge into the learning process. In some applications, however, (sufficient) domain knowledge is lacking, and both the rules and their weights have to be learned from data. To this end, in this paper we introduce a structure learning method for LRNNs, which proceeds in an iterative fashion.
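The core idea of weight sharing across per-example networks can be illustrated with a minimal sketch. The code below is a simplified illustration, not the authors' implementation: the rule names (`r1`, `r2`), the aggregation by weighted summation, and the sigmoid activation are all assumptions chosen for brevity. Two examples of different size and structure each induce their own "network", yet both draw on the same shared weight per first-order rule.

```python
# Illustrative sketch (hypothetical rules and aggregation, not the LRNN
# implementation from the paper): per-example networks sharing rule weights.
import math

# One shared weight per first-order rule; both examples below use these.
weights = {"r1": 0.5, "r2": -0.3}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def build_and_eval(example, weights):
    """Construct and evaluate a per-example network: each grounding of a
    rule contributes weight * body-value; contributions are summed and
    passed through an activation."""
    total = 0.0
    for rule, groundings in example.items():
        for body_value in groundings:
            # One weighted node per grounding of this rule in this example.
            total += weights[rule] * body_value
    return sigmoid(total)

# Examples with different numbers of groundings induce networks of
# different size, yet share the same two weights.
ex_small = {"r1": [1.0], "r2": [1.0]}
ex_large = {"r1": [1.0, 1.0, 1.0], "r2": [1.0, 1.0]}

print(build_and_eval(ex_small, weights))
print(build_and_eval(ex_large, weights))
```

Because the weight dictionary is shared, a gradient step computed on either example updates the same two parameters, which is what lets LRNNs train across relational examples of varying size.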
October 5, 2017