Markov Logic Networks with Complex Weights: Expressivity, Liftability and Fourier Transforms
arXiv.org Artificial Intelligence
Statistical Relational Learning (SRL) [Getoor and Taskar, 2007] is concerned with learning probabilistic models from relational data such as knowledge graphs, biological or social networks, and molecular structures. Markov Logic Networks (MLNs) [Richardson and Domingos, 2006] are among the most prominent SRL systems, and in this paper we are interested in their expressivity. Informally, expressivity measures the range of distributions that a given class of probabilistic models can represent. An MLN is given by a set of weighted first-order logic formulas and defines a distribution over possible worlds on a given domain. Here we study the expressivity of MLNs in a setting where we first fix the first-order formulas defining the MLN and then vary their weights. Since it is not obvious what expressivity should even mean in this context, our first contribution is a formal framework for studying the expressivity of MLNs. The main reason for studying expressivity in this fixed-formula setting is the computational complexity of inference: inference complexity usually depends mostly on the formulas and much less on their weights.
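To make the MLN semantics sketched above concrete, the following is a minimal, self-contained illustration (not taken from the paper): a hypothetical MLN over the domain {a, b} with unary predicates Smokes and Cancer and a single weighted formula Smokes(x) ⇒ Cancer(x). It enumerates all possible worlds and computes the standard log-linear MLN distribution, where a world's probability is proportional to exp(weight × number of true groundings). The weight value is illustrative only.

```python
import itertools
import math

# Hypothetical miniature MLN (illustrative, not from the paper):
# domain {a, b}, unary predicates Smokes and Cancer,
# one weighted formula: Smokes(x) => Cancer(x) with weight W.
DOMAIN = ["a", "b"]
W = 1.5  # illustrative weight; varying it changes the distribution

# A possible world assigns True/False to each of the 4 ground atoms.
atoms = [(pred, x) for pred in ("Smokes", "Cancer") for x in DOMAIN]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

def n_true_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) true in `world`."""
    return sum(1 for x in DOMAIN
               if (not world[("Smokes", x)]) or world[("Cancer", x)])

# MLN semantics: P(world) = exp(W * n(world)) / Z, with Z the
# normalizing constant summed over all possible worlds.
unnormalized = [math.exp(W * n_true_groundings(w)) for w in worlds]
Z = sum(unnormalized)
probs = [u / Z for u in unnormalized]
```

The fixed-formula setting studied in the paper corresponds to keeping the formula Smokes(x) ⇒ Cancer(x) fixed while letting W range over all values, and asking which distributions over the 16 possible worlds are reachable.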
Feb-24-2020