Zero-Flow Encoders
Yakun Wang, Leyang Wang, Song Liu, Taiji Suzuki
Flow-based methods have achieved significant success in various generative modeling tasks, capturing nuanced details within complex data distributions. However, few existing works have exploited this unique capability to resolve fine-grained structural details beyond generation tasks. This paper presents a flow-inspired framework for representation learning. First, we demonstrate that a rectified flow trained using independent coupling is zero everywhere at $t=0.5$ if and only if the source and target distributions are identical. We term this property the \emph{zero-flow criterion}. Second, we show that this criterion can certify conditional independence, thereby extracting \emph{sufficient information} from the data. Third, we translate this criterion into a tractable, simulation-free loss function that enables learning amortized Markov blankets in graphical models and latent representations in self-supervised learning tasks. Experiments on both simulated and real-world datasets demonstrate the effectiveness of our approach. The code reproducing our experiments can be found at: https://github.com/probabilityFLOW/zfe.
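As a minimal illustration of the zero-flow criterion (a sketch, not the paper's implementation), consider 1-D Gaussians under independent coupling: source $N(0,1)$, target $N(\mu,1)$, and interpolant $x_t = (1-t)x_0 + t x_1$. The optimal rectified-flow velocity $v^*(x,t) = \mathbb{E}[x_1 - x_0 \mid x_t = x]$ has a closed form, and at $t=0.5$ it is identically zero exactly when $\mu = 0$, i.e. when source and target coincide. The function name `v_star` below is ours, introduced for this example.

```python
def v_star(x: float, t: float, mu: float) -> float:
    """Closed-form rectified-flow velocity E[x1 - x0 | x_t = x] for
    source N(0,1) and target N(mu,1) under independent coupling,
    where x_t = (1-t)*x0 + t*x1 (illustrative 1-D Gaussian case)."""
    var_t = (1.0 - t) ** 2 + t ** 2          # Var(x_t) under independent coupling
    return mu + (2.0 * t - 1.0) / var_t * (x - t * mu)

# Identical distributions (mu = 0): the velocity vanishes for every x at t = 0.5.
print(v_star(x=1.3, t=0.5, mu=0.0))   # 0.0
# Shifted target (mu = 2): at t = 0.5 the velocity is the nonzero constant mu.
print(v_star(x=1.3, t=0.5, mu=2.0))   # 2.0
```

Away from $t=0.5$ the velocity is generally nonzero even for identical distributions, which is why the criterion is evaluated at the midpoint.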
February 3, 2026