
7 Appendix. Figure 5: Comparison of the GenStat architecture to selected graph generative models. 7.1 Proofs. 7.1.1 Proposition 1.

Neural Information Processing Systems

Figure 5: Comparison of the GenStat architecture to selected graph generative models. The proof uses two properties of local differential privacy (LDP): composability and immunity to post-processing [2]. Figure 6 illustrates the probabilistic graphical model (PGM) of the randomized algorithms. The graph generative model (GGM) parameters are learned as a function of the perturbed graph statistics, which serve as the learning input. The implementation extends easily to directed graphs. Reference [5] describes a statistics-based GGM that takes the degree sequence as its sufficient statistics.
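The two LDP properties cited above can be illustrated with the classic randomized-response mechanism. The sketch below is not GenStat's actual mechanism; it is a minimal Python illustration, assuming a single private bit per user, of why a debiasing step (post-processing) costs no extra privacy budget and why per-user releases compose additively.

    import numpy as np

    def randomized_response(bit, epsilon, rng):
        # epsilon-LDP randomized response: report the true bit with
        # probability e^eps / (e^eps + 1), otherwise report its flip.
        p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
        return bit if rng.random() < p_true else 1 - bit

    def debiased_mean(reports, epsilon):
        # Post-processing: invert the known flip probability to get an
        # unbiased estimate of the true mean. By immunity to
        # post-processing, this step consumes no additional privacy budget.
        p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
        return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

    rng = np.random.default_rng(0)
    true_bits = rng.integers(0, 2, size=10_000)
    # Composability: if each user released k such bits, the total budget
    # would be k * epsilon; here each user releases one bit at epsilon = 1.
    reports = [randomized_response(b, 1.0, rng) for b in true_bits]
    print(true_bits.mean(), debiased_mean(reports, 1.0))

Any downstream computation on the debiased estimate, such as fitting GGM parameters to perturbed statistics, inherits the same privacy guarantee for free.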




Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity

Neural Information Processing Systems

Functional brain networks are well described and estimated from data with Gaussian Graphical Models (GGMs), e.g., using sparse inverse covariance estimators. Comparing functional connectivity of subjects in two populations calls for comparing these estimated GGMs. Our goal is to identify differences in GGMs known to have similar structure. We characterize the uncertainty of differences with confidence intervals obtained using a parametric distribution on parameters of a sparse estimator. Sparse penalties enable statistical guarantees and interpretable models even in high-dimensional and low-sample settings. Characterizing the distributions of sparse models is inherently challenging as the penalties produce a biased estimator.
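As a rough illustration of the estimation pipeline the abstract describes, the Python sketch below fits a sparse inverse covariance to each of two groups with scikit-learn's GraphicalLasso and ranks the edges whose precision entries differ most. The data, the alpha value, and the helper name are placeholders; the paper's actual contribution, confidence intervals that correct for the penalty-induced bias, is not implemented here.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def fit_precision(X, alpha=0.05):
        # Sparse inverse-covariance (GGM) estimate via the graphical lasso.
        return GraphicalLasso(alpha=alpha).fit(X).precision_

    rng = np.random.default_rng(0)
    p = 10
    # Placeholder data (n_samples x n_regions); real inputs would be
    # functional-connectivity signals for each population.
    X_a = rng.standard_normal((200, p))
    X_b = rng.standard_normal((200, p))

    diff = fit_precision(X_a) - fit_precision(X_b)

    # Rank edges by the raw difference in precision entries. The paper
    # goes further and builds confidence intervals around such
    # differences; this sketch only ranks point estimates.
    iu = np.triu_indices(p, k=1)
    order = np.argsort(-np.abs(diff[iu]))
    for idx in order[:5]:
        i, j = iu[0][idx], iu[1][idx]
        print(f"edge ({i},{j}): delta precision = {diff[i, j]:+.4f}")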




Learning Gaussian Graphical Models with Observed or Latent FVSs

Neural Information Processing Systems

Gaussian Graphical Models (GGMs), or Gauss-Markov random fields, are widely used in many applications, and the trade-off between modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference, such as computing the marginal distributions and the partition function, has complexity $O(k^{2}n)$ using message-passing algorithms, where $k$ is the size of the FVS and $n$ is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, and without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known, or in polynomial time if the FVS is unknown but has bounded size.
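The efficiency claim rests on a standard block decomposition: conditioned on the $k$ FVS nodes, the remaining model is a tree, where Gaussian belief propagation is linear in $n$. Below is a minimal numpy sketch of that decomposition, assuming a known FVS and substituting dense solves for tree message passing; the function name and the toy precision matrix are illustrative, not from the paper.

    import numpy as np

    def fvs_marginal_variances(J, fvs):
        # Marginal variances of a zero-mean Gaussian with precision J,
        # organized around a feedback vertex set (FVS). Removing the FVS
        # leaves a tree, so in a real implementation every solve against
        # J_TT is linear-time belief propagation; dense linear algebra is
        # used here only for clarity. The k x k correction block is the
        # source of the O(k^2 n) cost.
        n = J.shape[0]
        F = np.array(fvs)
        T = np.array([i for i in range(n) if i not in set(fvs)])
        J_TT, J_TF = J[np.ix_(T, T)], J[np.ix_(T, F)]
        J_FF = J[np.ix_(F, F)]

        S = np.linalg.solve(J_TT, J_TF)              # one "tree solve" per FVS node
        Sigma_FF = np.linalg.inv(J_FF - J_TF.T @ S)  # k x k covariance block
        # Block-inverse identity: Sigma_TT = J_TT^{-1} + S Sigma_FF S^T
        var = np.empty(n)
        var[F] = np.diag(Sigma_FF)
        var[T] = np.diag(np.linalg.inv(J_TT)) + np.einsum("ik,kl,il->i", S, Sigma_FF, S)
        return var

    # Sanity check on a small random positive-definite precision matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    J = A @ A.T + 6.0 * np.eye(6)
    assert np.allclose(fvs_marginal_variances(J, fvs=[0, 1]),
                       np.diag(np.linalg.inv(J)))

Only the $k \times k$ matrix is ever inverted densely; everything else reduces to solves against the tree-structured block, which is where the cited complexities come from.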