Simplicial Gaussian Models: Representation and Inference

Marinucci, Lorenzo, D'Acunto, Gabriele, Di Lorenzo, Paolo, Barbarossa, Sergio

arXiv.org Machine Learning 

Thus, probabilistic graphical models (PGMs) are widely used in several applications, including computer vision, computational biology, and spatial statistics [2, 3, 4]. In a PGM, random variables are associated with the vertices of a graph, while edges encode statistical dependencies. The meaning of the edges depends on the graph type: Bayesian Networks capture directional dependencies through directed acyclic graphs (DAGs) [5], whereas Markov Random Fields (MRFs) model symmetric conditional dependencies with undirected graphs, thanks to the Markov property [6]. A well-studied family is Gaussian Markov Random Fields (GMRFs), i.e., MRFs that model Gaussian random variables [7]. Indeed, conditional dependencies in the Gaussian distribution are encoded by the precision matrix, thus allowing GMRFs to be learned from data with efficient algorithms [8]. However, PGMs are inherently limited to graphs. First, PGMs typically associate random variables with individual nodes (sets of cardinality one), while in many settings random quantities naturally relate to larger sets. Examples include data traffic in communication networks or water flows in distribution networks, where measurements are collected on the links of the network [9, 10, 11]. Second, PGMs are restricted to modeling pairwise dependencies via edges.
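To make the GMRF property concrete, the following minimal sketch (not taken from the paper; the chain graph and numerical values are illustrative assumptions) shows how a zero entry in the precision matrix of a Gaussian encodes conditional independence, even though the corresponding covariance entry is nonzero:

```python
import numpy as np

# Precision matrix Q of a zero-mean GMRF on the chain graph 0 - 1 - 2.
# Q[0, 2] = 0 encodes that nodes 0 and 2 are conditionally
# independent given node 1 (the Markov property).
Q = np.array([
    [ 2.0, -1.0,  0.0],   # node 0 is linked only to node 1
    [-1.0,  2.0, -1.0],   # node 1 is linked to nodes 0 and 2
    [ 0.0, -1.0,  2.0],   # node 2 is linked only to node 1
])

Sigma = np.linalg.inv(Q)  # covariance matrix: generally dense

# Marginally, nodes 0 and 2 are still correlated...
print(abs(Sigma[0, 2]) > 1e-9)   # True

# ...but their partial correlation given node 1 vanishes, exactly as
# the zero precision entry predicts:
partial_corr = -Q[0, 2] / np.sqrt(Q[0, 0] * Q[2, 2])
print(partial_corr == 0.0)       # True
```

This is also why GMRF structure learning reduces to sparse estimation of the precision matrix: recovering the graph means recovering the zero pattern of Q.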
