Propositionalization


Neural RELAGGS

Pensel, Lukas, Kramer, Stefan

arXiv.org Artificial Intelligence

Multi-relational databases are the basis of most consolidated data collections in science and industry today. Most learning and mining algorithms, however, require data to be represented in a propositional form. While there is a variety of specialized machine learning algorithms that can operate directly on multi-relational data sets, propositionalization algorithms transform multi-relational databases into propositional data sets, thereby allowing the application of traditional machine learning and data mining algorithms without their modification. One prominent propositionalization algorithm is RELAGGS by Krogel and Wrobel, which transforms the data by nested aggregations. We propose a new neural-network-based algorithm in the spirit of RELAGGS that employs trainable composite aggregate functions instead of the static aggregate functions used in the original approach. In this way, we can jointly train the propositionalization with the prediction model, or, alternatively, use the learned aggregations as embeddings in other algorithms. We demonstrate the increased predictive performance by comparing N-RELAGGS with RELAGGS and multiple other state-of-the-art algorithms.
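To make the idea concrete, here is a minimal sketch of the static aggregation step that RELAGGS performs and that N-RELAGGS replaces with trainable aggregates. The tables, column names, and aggregate choices below are invented toy examples, not taken from the paper:

```python
from statistics import mean

# Toy multi-relational data: one customer row links to many order rows.
customers = [{"id": 1, "label": "good"}, {"id": 2, "label": "bad"}]
orders = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 30.0},
    {"customer_id": 2, "amount": 5.0},
]

def relaggs_features(customer_id):
    """Static RELAGGS-style aggregation: summarize the one-to-many
    'orders' relation into a fixed-length set of propositional features."""
    amounts = [o["amount"] for o in orders if o["customer_id"] == customer_id]
    return {
        "order_count": len(amounts),
        "amount_mean": mean(amounts) if amounts else 0.0,
        "amount_max": max(amounts, default=0.0),
    }

# The resulting single table can be fed to any propositional learner.
table = [{"id": c["id"], **relaggs_features(c["id"]), "label": c["label"]}
         for c in customers]
```

N-RELAGGS, in contrast, would replace `len`, `mean`, and `max` with learned, differentiable aggregate functions trained jointly with the downstream predictor.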


What is Relational Machine Learning?

#artificialintelligence

All intelligent life forms instinctively model their surrounding environment in order to actively navigate through it with their actions. In Artificial Intelligence (AI) research, we then try to understand and automate this interesting ability of living systems with machine learning (ML) at the core. Machine learning then instantiates the scientific method of searching for a mathematical hypothesis (model) that best fits the observed data. However, thanks to the advances in computing, it allows to further automate this process into searching through large prefabricated hypothesis spaces in a heavily data-driven fashion. This is particularly useful in the modeling of complex systems for which the structure of the underlying hypothesis space is too complex, or even unknown, but large amounts of data are available. While the approaches to the problem of mathematical modeling of complex systems evolved in various, largely independent, ways, one aspect remained almost universal -- the data representation.


LazyBum: Decision tree learning using lazy propositionalization

Schouterden, Jonas, Davis, Jesse, Blockeel, Hendrik

arXiv.org Artificial Intelligence

Propositionalization is the process of summarizing relational data into a tabular (attribute-value) format. The resulting table can next be used by any propositional learner. This approach makes it possible to apply a wide variety of learning methods to relational data. However, the transformation from relational to propositional format is generally not lossless: different relational structures may be mapped onto the same feature vector. At the same time, features may be introduced that are not needed for the learning task at hand. In general, it is hard to define a feature space that contains all and only those features that are needed for the learning task. This paper presents LazyBum, a system that can be considered a lazy version of the recently proposed OneBM method for propositionalization. LazyBum interleaves OneBM's feature construction method with a decision tree learner. This learner both uses and guides the propositionalization process. It indicates when and where to look for new features. This approach is similar to what has elsewhere been called dynamic propositionalization. In an experimental comparison with the original OneBM and with two other recently proposed propositionalization methods (nFOIL and MODL, which respectively perform dynamic and static propositionalization), LazyBum achieves a comparable accuracy with a lower execution time on most of the datasets.
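A toy sketch of the lazy (dynamic) propositionalization idea: aggregate features over a related table are only materialized when the tree learner asks for them at a node. The data, class name, and feature names are invented for illustration and do not reflect LazyBum's actual implementation:

```python
# One-to-many relation: user id -> list of order amounts.
orders = {"u1": [10, 30], "u2": [5], "u3": [7, 7, 7]}

class LazyFeatures:
    """Materialize aggregate columns on demand, caching each one."""
    def __init__(self):
        self.cache = {}

    def get(self, name):
        # An aggregate column is computed only on first request.
        if name not in self.cache:
            agg = {"order_count": len, "order_sum": sum}[name]
            self.cache[name] = {u: agg(v) for u, v in orders.items()}
        return self.cache[name]

feats = LazyFeatures()
# A tree node decides it needs 'order_count'; only then is the column built.
col = feats.get("order_count")
```

The point of the sketch is that unrequested features (here `order_sum`) are never computed, which is where the execution-time savings of a lazy approach come from.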


Knowledge Graph Embedding With Iterative Guidance From Soft Rules

Guo, Shu (Institute of Information Engineering, Chinese Academy of Sciences) | Wang, Quan (Institute of Information Engineering, Chinese Academy of Sciences) | Wang, Lihong (National Computer Network Emergency Response Technical Team & Coordination Center of China) | Wang, Bin (Institute of Information Engineering, Chinese Academy of Sciences) | Guo, Li (Institute of Information Engineering, Chinese Academy of Sciences)

AAAI Conferences

Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interplay between embedding learning and logical inference. Moreover, they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.
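The iterative query-then-update loop can be sketched in a few lines. Everything below is a deliberately simplified assumption: the triples, the single rule, and the scalar score update stand in for a real embedding model and RUGE's actual objective:

```python
# Observed triples (soft label 1.0) and triples to be soft-labeled.
labeled = {("a", "bornIn", "x"): 1.0}
unlabeled = [("a", "nationality", "x")]
# One soft rule: bornIn(P, C) => nationality(P, C), confidence 0.9.
rule_conf = 0.9

# Stand-in for embedding-model scores, initialized neutrally.
scores = {t: 0.5 for t in unlabeled}

for _ in range(10):
    # 1) Query rules: derive a soft label for each unlabeled triple.
    soft_labels = {}
    for (s, r, o) in unlabeled:
        if r == "nationality" and (s, "bornIn", o) in labeled:
            soft_labels[(s, r, o)] = rule_conf * labeled[(s, "bornIn", o)]
    # 2) Update the "model": nudge each score toward its soft label.
    for t, y in soft_labels.items():
        scores[t] += 0.5 * (y - scores[t])
```

After a few iterations the score of the rule-supported triple converges toward the rule's confidence, illustrating how rule knowledge is transferred into the learned representation.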


Propositionalization for Unsupervised Outlier Detection in Multi-Relational Data

Riahi, Fatemeh (Simon Fraser University) | Schulte, Oliver (Simon Fraser University)

AAAI Conferences

We develop a novel propositionalization approach to unsupervised outlier detection for multi-relational data. Propositionalization summarizes the information from multi-relational data, which are typically stored in multiple tables, into a single data table. The columns in the data table represent conjunctive relational features that are learned from the data. An advantage of propositionalization is that it facilitates applying the many previous outlier detection methods that were designed for single-table data. We show that conjunctive features for outlier detection can be learned from data using statistical-relational methods. Specifically, we apply Markov Logic Network structure learning. Compared to baseline propositionalization methods, Markov Logic propositionalization produces the most compact data tables, whose attributes capture the most complex multi-relational correlations. We apply three representative outlier detection methods, LOF, KNN, and OutRank, to the data tables constructed by propositionalization.
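Once propositionalization yields a single table, any single-table detector applies. As a minimal stand-in for the KNN detector, here is a mean-distance-to-k-nearest-neighbors outlier score over an invented toy table (the row names and feature values are not from the paper):

```python
# Propositionalized table: entity -> conjunctive-feature values.
rows = {
    "team_a": [0.8, 0.10],
    "team_b": [0.7, 0.20],
    "team_c": [0.9, 0.15],
    "outlier": [0.1, 0.90],
}

def knn_outlier_score(name, k=2):
    """Mean Euclidean distance to the k nearest other rows:
    large values flag rows far from any dense region."""
    x = rows[name]
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(x, rows[other])) ** 0.5
        for other in rows if other != name
    )
    return sum(dists[:k]) / k

scores = {name: knn_outlier_score(name) for name in rows}
```

In the paper's setting the rows would be entities from the multi-relational database and the columns the learned conjunctive features; the detector itself is unchanged.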


A Heuristic Search Algorithm for Solving First-Order MDPs

Karabaev, Eldar, Skvortsova, Olga

arXiv.org Artificial Intelligence

We present a heuristic search algorithm for solving first-order MDPs (FOMDPs). Our approach combines first-order state abstraction that avoids evaluating states individually, and heuristic search that avoids evaluating all states. Firstly, we apply state abstraction directly on the FOMDP, avoiding propositionalization. This kind of abstraction is referred to as first-order state abstraction. Secondly, guided by an admissible heuristic, the search is restricted only to those states that are reachable from the initial state. We demonstrate the usefulness of the above techniques for solving FOMDPs on a system, referred to as FCPlanner, that entered the probabilistic track of the International Planning Competition (IPC'2004).


Relational Random Forests Based on Random Relational Rules

Anderson, Grant (University of Waikato) | Pfahringer, Bernhard (University of Waikato)

AAAI Conferences

Random Forests have been shown to perform very well in propositional learning. FORF is an upgrade of Random Forests for relational data. In this paper we investigate shortcomings of FORF and propose an alternative algorithm, RF, for generating Random Forests over relational data. RF employs randomly generated relational rules as fully self-contained Boolean tests inside each node in a tree and thus can be viewed as an instance of dynamic propositionalization. The implementation of RF allows for the simultaneous or parallel growth of all the branches of all the trees in the ensemble in an efficient shared, but still single-threaded way. Experiments favorably compare RF to both FORF and the combination of static propositionalization together with standard Random Forests. Various strategies for tree initialization and splitting of nodes, as well as resulting ensemble size, diversity, and computational complexity of RF are also investigated.
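The core idea, a randomly generated relational rule acting as a self-contained Boolean test at a tree node, can be sketched as follows. The `parent` relation and the two rule shapes are invented toy examples, not the paper's rule language:

```python
import random

# A relational background: parent(X, Y) facts as a set of pairs.
parent = {("a", "b"), ("b", "c")}

def random_rule(rng):
    """Randomly generate a one-literal rule over an example X:
    either 'exists Y: parent(X, Y)' or 'exists Y: parent(Y, X)'."""
    direction = rng.choice(["parent_of", "child_of"])
    if direction == "parent_of":
        return lambda x: any(p == x for (p, _) in parent)
    return lambda x: any(c == x for (_, c) in parent)

rng = random.Random(0)
rule = random_rule(rng)
# Each example is routed left or right by the Boolean outcome of the rule.
split = {x: rule(x) for x in ["a", "b", "c"]}
```

Because each rule is a closed Boolean test, the node never needs access to intermediate bindings, which is what makes the shared, interleaved growth of many trees practical.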


FluCaP: A Heuristic Search Planner for First-Order MDPs

Hoelldobler, S., Karabaev, E., Skvortsova, O.

Journal of Artificial Intelligence Research

We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). Our approach combines first-order state abstraction that avoids evaluating states individually, and heuristic search that avoids evaluating all states. Firstly, in contrast to existing systems, which start with propositionalizing the FOMDP and then perform state abstraction on its propositionalized version, we apply state abstraction directly on the FOMDP, avoiding propositionalization. This kind of abstraction is referred to as first-order state abstraction. Secondly, guided by an admissible heuristic, the search is restricted to those states that are reachable from the initial state. We demonstrate the usefulness of the above techniques for solving FOMDPs with a system, referred to as FluCaP (formerly, FCPlanner), that entered the probabilistic track of the 2004 International Planning Competition (IPC'2004) and demonstrated an advantage over other planners on the problems represented in first-order terms.
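The second ingredient, admissible-heuristic search that touches only states reachable from the initial state, can be illustrated with a minimal best-first search on a toy deterministic graph. The graph and heuristic values are invented, and unlike FluCaP the sketch works on ground states rather than abstract first-order ones:

```python
import heapq

# Toy state graph: state -> [(successor, cost)], plus one state
# that is never reachable from the initial state s0.
graph = {
    "s0": [("s1", 1), ("s2", 4)],
    "s1": [("goal", 1)],
    "s2": [("goal", 1)],
    "unreachable": [("goal", 1)],
}
h = {"s0": 2, "s1": 1, "s2": 1, "goal": 0, "unreachable": 0}  # admissible

def astar(start, goal):
    """A*-style search: expands only states reachable from 'start',
    and the admissible heuristic prunes even some of those."""
    frontier = [(h[start], 0, start)]  # (f = g + h, g, state)
    expanded = set()
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if s == goal:
            return g, expanded
        if s in expanded:
            continue
        expanded.add(s)
        for nxt, cost in graph.get(s, []):
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt))
    return None, expanded

cost, expanded = astar("s0", "goal")
```

In contrast to dynamic-programming solvers that sweep the whole state space, the unreachable state is never considered, and the heuristic keeps the costlier branch through `s2` unexpanded as well.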