AR-Flow VAE: A Structured Autoregressive Flow Prior Variational Autoencoder for Unsupervised Blind Source Separation
Wei, Yuan-Hao, Deng, Fu-Hao, Cui, Lin-Yong, Sun, Yan-Jie
Blind source separation (BSS) seeks to recover latent source signals from observed mixtures. Variational autoencoders (VAEs) offer a natural perspective for this problem: the latent variables can be interpreted as source components, the encoder can be viewed as a demixing mapping from observations to sources, and the decoder can be regarded as a remixing process from inferred sources back to observations. In this work, we propose AR-Flow VAE, a novel VAE-based framework for BSS in which each latent source is endowed with a parameter-adaptive autoregressive flow prior. This prior significantly enhances the flexibility of latent source modeling, enabling the framework to capture complex non-Gaussian behaviors and structured dependencies, such as temporal correlations, that are difficult to represent with conventional priors. In addition, the structured prior design assigns distinct priors to different latent dimensions, thereby encouraging the latent components to separate into different source signals under heterogeneous prior constraints. Experimental results validate the effectiveness of the proposed architecture for blind source separation. More importantly, this work provides a foundation for future investigations into the identifiability and interpretability of AR-Flow VAE.
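The abstract's key ingredient, an autoregressive flow prior over each latent source, can be sketched as a minimal affine autoregressive density. This is an illustrative sketch only: the fixed scalar coefficient `a` and constant `log_sigma` stand in for the paper's parameter-adaptive conditioners, and `ar_flow_log_prob` is a made-up name, not the authors' implementation.

```python
import numpy as np

def ar_flow_log_prob(z, a=0.8, log_sigma=0.0):
    """Log-density of a latent trajectory z[0..T-1] under a minimal
    affine autoregressive prior: z_t = a * z_{t-1} + sigma * eps_t,
    eps_t ~ N(0, 1), with z_0 ~ N(0, 1)."""
    sigma = np.exp(log_sigma)
    lp = -0.5 * (z[0] ** 2 + np.log(2 * np.pi))   # base density for z_0
    eps = (z[1:] - a * z[:-1]) / sigma            # invert the affine flow step
    lp += np.sum(-0.5 * (eps ** 2 + np.log(2 * np.pi)) - log_sigma)
    return lp
```

Such a prior assigns higher density to temporally correlated trajectories than to jagged ones, which is the structured dependency the abstract describes; giving each latent dimension its own `a` and `log_sigma` yields the heterogeneous prior constraints that encourage separation.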
f66340d6f28dae6aab0176892c9065e7-Supplemental-Conference.pdf
Supplemental material (fragments): Once closed-form expressions for these Jacobians are derived, they are substituted into (16) using the identity often termed the "vec" rule. Spatial topographies of the latent components in the EEG and fMRI analyses are depicted via the "forward-model". The results of the comparison are shown in Fig S1, where the signal fidelity of the Granger components (right panel) significantly exceeds that of PCA (left) and ICA (middle); GCA is only able to recover sources with temporal dependencies. Both the single electrodes and the Granger components exhibit two pronounced peaks in the spectra, one near 2 Hz ("delta"). Fig S3 shows the corresponding result for the left motor imagery condition on the EEG motor imagery dataset described in the main text; for each technique, the first 6 components are presented.
Granger Components Analysis: Unsupervised learning of latent temporal dependencies
Here the concept of Granger causality is employed to propose a new criterion for unsupervised learning that is appropriate in the case of temporally-dependent source signals. The basic idea is to identify two projections of a multivariate time series such that the Granger causality among the resulting pair of components is maximized.
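The criterion above, maximizing Granger causality between a pair of projected components, can be illustrated with a minimal numpy sketch. The function name `granger_stat`, the lag order, and the ordinary-least-squares fit are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def granger_stat(x, y, p=1):
    """Log-ratio of residual variances for predicting x from its own past
    (restricted model) vs. the past of both x and y (full model).
    Positive values indicate that y Granger-causes x."""
    T = len(x)
    past_x = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
    past_y = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
    target = x[p:]
    def resid_var(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)
    return np.log(resid_var(past_x) /
                  resid_var(np.column_stack([past_x, past_y])))
```

Maximizing this statistic over a pair of linear projections of the multivariate time series is the unsupervised learning criterion the abstract describes.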
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (0.95)
Independent Component Discovery in Temporal Count Data
Chaussard, Alexandre, Bonnet, Anna, Corff, Sylvain Le
Advances in data collection are producing growing volumes of temporal count observations, making adapted modeling increasingly necessary. In this work, we introduce a generative framework for independent component analysis of temporal count data, combining regime-adaptive dynamics with Poisson log-normal emissions. The model identifies disentangled components with regime-dependent contributions, enabling representation learning and perturbation analysis. Notably, we establish the identifiability of the model, supporting principled interpretation. To learn the parameters, we propose an efficient amortized variational inference procedure. Experiments on simulated data evaluate recovery of the mixing function and latent sources across diverse settings, while an in vivo longitudinal gut microbiome study reveals microbial co-variation patterns and regime shifts consistent with clinical perturbations.
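A generative sketch of the emission side described above: independent Gaussian latent sources mixed linearly, then pushed through a Poisson log-normal observation model. The mixing matrix `A`, the dimensions, and the function name are made-up assumptions; the paper's regime-adaptive dynamics are omitted.

```python
import numpy as np

def sample_poisson_lognormal(A, n, rng):
    """Draw n observations: independent Gaussian latent sources s,
    linear mixing x = s @ A.T, then Poisson counts with rate exp(x)."""
    d_src = A.shape[1]
    s = rng.standard_normal((n, d_src))   # independent latent sources
    x = s @ A.T                           # linear mixing
    return rng.poisson(np.exp(x)), s
```

Because `exp(x)` is log-normal, each count dimension has mean `exp(mu + var/2)` of its log-rate, which is the standard Poisson log-normal moment identity.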
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Modeling & Simulation (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
A tutorial on discovering and quantifying the effect of latent causal sources of multimodal EHR data
Barbero-Mota, Marco, Strobl, Eric V., Still, John M., Stead, William W., Lasko, Thomas A.
We provide an accessible description of a peer-reviewed generalizable causal machine learning pipeline to (i) discover latent causal sources of large-scale electronic health records observations, and (ii) quantify the source causal effects on clinical outcomes. We illustrate how imperfect multimodal clinical data can be processed, decomposed into probabilistically independent latent sources, and used to train task-specific causal models from which individual causal effects can be estimated. We summarize the findings of the two real-world applications of the approach to date as a demonstration of its versatility and utility for medical discovery at scale.
- Research Report > Strength High (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Health Care Technology > Medical Record (0.71)
- Health & Medicine > Therapeutic Area > Immunology (0.68)
- Health & Medicine > Diagnostic Medicine (0.67)
Causal Discovery for Linear DAGs with Dependent Latent Variables via Higher-order Cumulants
Cai, Ming, Gao, Penggang, Hara, Hisayuki
This paper addresses the problem of estimating causal directed acyclic graphs in linear non-Gaussian acyclic models with latent confounders (LvLiNGAM). Existing methods assume mutually independent latent confounders or cannot properly handle models with causal relationships among observed variables. We propose a novel algorithm that identifies causal DAGs in LvLiNGAM, allowing causal structures among latent variables, among observed variables, and between the two. The proposed method leverages higher-order cumulants of observed data to identify the causal structure. Extensive simulations and experiments with real-world data demonstrate the validity and practical utility of the proposed algorithm.

Introduction
Estimating causal directed acyclic graphs (DAGs) in the presence of latent confounders has been a major challenge in causal analysis. Conventional causal discovery methods, such as the Peter-Clark (PC) algorithm [1], Greedy Equivalence Search (GES) [2], and the Linear Non-Gaussian Acyclic Model (LiNGAM) [3, 4], focus solely on causal models without latent confounders. Fast Causal Inference (FCI) [1] extends the PC algorithm to handle latent variables, recovering a partial ancestral graph (PAG) under the faithfulness assumption. Greedy Fast Causal Inference (GFCI) [6] hybridizes GES and FCI but inherits the limitations of FCI. The assumption of linearity and non-Gaussian disturbances in the causal model enables the identification of causal structures beyond the PAG. The linear non-Gaussian acyclic model with latent confounders (LvLiNGAM) is an extension of LiNGAM that incorporates latent confounders. Hoyer et al. [7] demonstrated that LvLiNGAM can be transformed into a canonical model in which all latent variables are mutually independent and causally precede the observed variables.
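The role of higher-order cumulants can be illustrated on the simplest confounder-free case: for y = b*x + e with e independent of a skewed (non-Gaussian) x, the third-order cross-cumulant satisfies cum(x, x, y) = b * cum(x, x, x), so b is identified by their ratio. This is a sketch of the underlying idea under those stated assumptions, not the paper's algorithm.

```python
import numpy as np

def cum3(u, v, w):
    """Third-order joint cumulant, E[(u-Eu)(v-Ev)(w-Ew)]."""
    return np.mean((u - u.mean()) * (v - v.mean()) * (w - w.mean()))

def estimate_coeff(x, y):
    """For y = b*x + e with e independent of skewed x:
    cum(x, x, y) = b * cum(x, x, x), hence the ratio recovers b."""
    return cum3(x, x, y) / cum3(x, x, x)
```

The independence of e from x makes the cross-term cum(x, x, e) vanish, and non-Gaussianity (here, nonzero skewness) keeps the denominator away from zero; second-order statistics alone could not separate b from the noise variance.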
Towards Causal Representation Learning with Observable Sources as Auxiliaries
Kim, Kwonho, Nam, Heejeong, Hwang, Inwoo, Lee, Sanghack
Causal representation learning seeks to recover latent factors that generate observational data through a mixing function. Since identifiability generally requires assumptions on latent structures or relationships, prior works often build upon conditional independence given known auxiliary variables. However, prior frameworks limit the scope of auxiliary variables to be external to the mixing function. Yet, in some cases, system-driving latent factors can be easily observed or extracted from data, possibly facilitating identification. In this paper, we introduce a framework in which observable sources serve as auxiliaries, acting as effective conditioning variables. Our main results show that one can identify all latent variables up to subspace-wise transformations and permutations using volume-preserving encoders. Moreover, when multiple known auxiliary variables are available, we offer a variable-selection scheme to choose those that maximize recoverability of the latent factors given knowledge of the latent causal graph. Finally, we demonstrate the effectiveness of our framework through experiments on synthetic graph and image data, thereby extending the boundaries of current approaches.