Fast structure learning with modular regularization

Neural Information Processing Systems

Estimating graphical model structure from high-dimensional and undersampled data is a fundamental problem in many scientific fields. Existing approaches, such as GLASSO, latent variable GLASSO, and latent tree models, suffer from high computational complexity and may impose unrealistic sparsity priors in some cases. We introduce a novel method that leverages a newly discovered connection between information-theoretic measures and structured latent factor models to derive an optimization objective which encourages modular structures where each observed variable has a single latent parent. The proposed method has linear stepwise computational complexity with respect to the number of observed variables. Our experiments on synthetic data demonstrate that our approach is the only method that recovers modular structure better as the dimensionality increases. We also use our approach to estimate covariance structure for a number of real-world datasets and show that it consistently outperforms state-of-the-art estimators at a fraction of the computational cost. Finally, we apply the proposed method to high-resolution fMRI data (with more than 10^5 voxels) and show that it is capable of extracting meaningful patterns.
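The modular structure the abstract describes, where each observed variable has a single latent parent, can be illustrated with a small synthetic-data sketch. This is not the paper's method or code; the sizes (`p`, `m`, `n`), the loading range, and all variable names are illustrative assumptions. It only shows what "modular" means for the resulting correlations: variables sharing a latent parent correlate, others do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): p observed variables, m latent factors, n samples.
p, m, n = 50, 5, 200

# Modular structure: each observed variable has exactly one latent parent.
parent = rng.integers(0, m, size=p)       # parent[i] = index of X_i's latent parent
weights = rng.uniform(0.5, 1.0, size=p)   # loading of X_i on its parent

Z = rng.standard_normal((n, m))           # latent factors, unit variance
noise = rng.standard_normal((n, p))
X = Z[:, parent] * weights + noise        # X_i = w_i * Z_{parent(i)} + eps_i

# Modularity shows up in the correlation matrix as a block pattern:
# pairs sharing a parent correlate, pairs in different modules do not.
corr = np.corrcoef(X, rowvar=False)
same = parent[:, None] == parent[None, :]
off_diag = ~np.eye(p, dtype=bool)
within = np.abs(corr[same & off_diag]).mean()
across = np.abs(corr[~same]).mean()
print(within, across)  # within-module correlation dominates cross-module
```

A structure-learning method with a modular prior is, roughly, trying to recover `parent` and `weights` from `X` alone; the block pattern is what makes that recoverable even when `n` is small relative to `p`.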


Reviews: Fast structure learning with modular regularization

Neural Information Processing Systems

The manuscript proposes a new objective function for learning Gaussian latent factor models. The objective function is based on an information-theoretic characterization of modular latent factor models, at which the objective attains its optimal value. The derivation of the objective function carefully avoids matrix inversion to improve computational complexity compared to traditional methods. The authors point out that the proposed model enjoys a 'blessing of dimensionality', in that model performance improves as the number of observed variables increases while the number of latent variables remains constant. This is demonstrated both by simulation and by an information-theoretic lower bound on the sample size.
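The information-theoretic characterization the review refers to can be sketched as follows (a hedged reconstruction; the notation is ours and may differ from the paper's). The relevant quantity is total correlation, which measures multivariate dependence and vanishes conditionally exactly when the latent factors explain all dependence among the observed variables:

```latex
% Total correlation (multivariate mutual information) of X = (X_1, ..., X_p):
TC(X) = \sum_{i=1}^{p} H(X_i) - H(X)

% Residual dependence after conditioning on latent factors Z:
TC(X \mid Z) = \sum_{i=1}^{p} H(X_i \mid Z) - H(X \mid Z)

% TC(X | Z) = 0 exactly when the X_i are conditionally independent given Z.
% In a modular Gaussian model X_i = w_i Z_{\pi(i)} + \epsilon_i this holds,
% and the covariance factorizes as
\Sigma = W W^{\top} + D
% where W has a single nonzero entry per row (one latent parent per variable)
% and D is diagonal.
```

The low-rank-plus-diagonal form of $\Sigma$ is what lets the objective avoid explicit matrix inversion, and the fixed number of latent factors is what makes the "blessing of dimensionality" plausible: the number of parameters grows only linearly in $p$.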


Reviews: Fast structure learning with modular regularization

Neural Information Processing Systems

This manuscript proposes an estimator for graphical models which encourages modularity. The strengths of the manuscript include the conceptual simplicity of the proposal and the clear analysis. Reviewers also commented on the overall clarity of the presentation and the extensive experiments. I encourage the authors to read the reviews carefully and make changes as appropriate for the final version.


Fast structure learning with modular regularization

Greg Ver Steeg, Hrayr Harutyunyan, Daniel Moyer, Aram Galstyan

Neural Information Processing Systems
