Quasiprobabilistic Density Ratio Estimation with a Reverse Engineered Classification Loss Function
Drnevich, Matthew, Jiggins, Stephen, Cranmer, Kyle
We consider a generalization of the classifier-based density-ratio estimation task to a quasiprobabilistic setting where probability densities can be negative. The problem with most loss functions used for this task is that they implicitly define a relationship between the optimal classifier and the target quasiprobabilistic density ratio which is discontinuous or not surjective. We address these problems by introducing a convex loss function that is well-suited for both probabilistic and quasiprobabilistic density ratio estimation. To quantify performance, an extended version of the Sliced-Wasserstein distance is introduced which is compatible with quasiprobability distributions. We demonstrate our approach on a real-world example from particle physics, of di-Higgs production in association with jets via gluon-gluon fusion, and achieve state-of-the-art results.
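The abstract's starting point is the standard classifier-based density-ratio trick, in which an optimal classifier between samples from $p$ and $q$ recovers the ratio $p/q$. A minimal sketch of that standard link (not the paper's new loss) makes the surjectivity problem concrete: the classifier-to-ratio map only reaches non-negative ratios, so negative quasiprobability densities are unreachable.

```python
import numpy as np

def ratio_from_classifier(s):
    """Standard density-ratio recovery r(x) = s(x) / (1 - s(x)) for a
    classifier output s(x) in (0, 1) approximating p(x) / (p(x) + q(x))."""
    s = np.asarray(s, dtype=float)
    return s / (1.0 - s)

# Classifier outputs spanning (0, 1):
s = np.linspace(0.01, 0.99, 99)
r = ratio_from_classifier(s)

# The induced ratios cover only (0, +inf): the map is not surjective onto
# the signed ratios needed in the quasiprobabilistic setting, which is the
# defect the paper's reverse-engineered loss is designed to remove.
print(r.min() > 0.0)  # True: no negative ratio is reachable
```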
Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Rouhvarzi, Hossein, Kratsios, Anastasis
Incremental flow-based denoising models have reshaped generative modelling, but their empirical advantage still lacks a rigorous approximation-theoretic foundation. We show that incremental generation is necessary and sufficient for universal flow-based generation on the largest natural class of self-maps of $[0,1]^d$ compatible with denoising pipelines, namely the orientation-preserving homeomorphisms of $[0,1]^d$. All our guarantees are uniform on the underlying maps and hence imply approximation both samplewise and in distribution. Using a new topological-dynamical argument, we first prove an impossibility theorem: the class of all single-step autonomous flows, independently of the architecture, width, depth, or Lipschitz activation of the underlying neural network, is meagre and therefore not universal in the space of orientation-preserving homeomorphisms of $[0,1]^d$. By exploiting algebraic properties of autonomous flows, we conversely show that every orientation-preserving Lipschitz homeomorphism on $[0,1]^d$ can be approximated at rate $\mathcal{O}(n^{-1/d})$ by a composition of at most $K_d$ such flows, where $K_d$ depends only on the dimension. Under additional smoothness assumptions, the approximation rate can be made dimension-free, and $K_d$ can be chosen uniformly over the class being approximated. Finally, by linearly lifting the domain into one higher dimension, we obtain structured universal approximation results for continuous functions and for probability measures on $[0,1]^d$, the latter realized as pushforwards of empirical measures with vanishing $1$-Wasserstein error.
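The "algebraic properties of autonomous flows" the abstract invokes are the one-parameter semigroup laws $\varphi_{s+t} = \varphi_s \circ \varphi_t$: a single-step autonomous flow is constrained by this structure, which is why single flows are meagre while short compositions suffice. A toy check on an exact 1-D flow (the logistic ODE on $[0,1]$, chosen here only for having a closed form) illustrates the law:

```python
import math

def flow(t, x):
    """Exact time-t flow map of the autonomous ODE x' = x(1 - x) on [0, 1]:
    an orientation-preserving homeomorphism of [0, 1] for every t."""
    e = math.exp(t)
    return x * e / (1.0 - x + x * e)

x, s, t = 0.3, 0.7, 1.2
# Semigroup property of autonomous flows: phi_{s+t} = phi_s o phi_t.
# This algebraic rigidity is what limits single-step flows and what the
# K_d-fold composition result exploits.
lhs = flow(s + t, x)
rhs = flow(s, flow(t, x))
print(abs(lhs - rhs) < 1e-9)  # True: the group law holds exactly
```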
Autoencoding Dynamics: Topological Limitations and Capabilities
Kvalheim, Matthew D., Sontag, Eduardo D.
Given a "data manifold" $M\subset \mathbb{R}^n$ and "latent space" $\mathbb{R}^\ell$, an autoencoder is a pair of continuous maps consisting of an "encoder" $E\colon \mathbb{R}^n\to \mathbb{R}^\ell$ and "decoder" $D\colon \mathbb{R}^\ell\to \mathbb{R}^n$ such that the "round trip" map $D\circ E$ is as close as possible to the identity map $\mbox{id}_M$ on $M$. We present various topological limitations and capabilities inherent to the search for an autoencoder, and describe capabilities for autoencoding dynamical systems having $M$ as an invariant manifold.
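The classic obstruction behind such topological limitations can be seen in the smallest example: take $M$ to be the unit circle in $\mathbb{R}^2$ with latent space $\mathbb{R}^1$. A pointwise-perfect round trip exists, but no continuous encoder can realize it, because the circle does not embed in $\mathbb{R}$. A toy sketch (not the paper's construction):

```python
import math

# Toy autoencoder for the data manifold M = unit circle in R^2,
# latent space R^1: encode to an angle, decode back to the circle.
def E(p):
    x, y = p
    return math.atan2(y, x)          # necessarily discontinuous somewhere

def D(theta):
    return (math.cos(theta), math.sin(theta))

# The round trip D o E is the identity on M pointwise...
p = (math.cos(2.0), math.sin(2.0))
q = D(E(p))
print(max(abs(q[0] - p[0]), abs(q[1] - p[1])) < 1e-12)

# ...but E must jump: the circle does not embed in R^1, so any encoder
# with a perfect round trip is discontinuous (here, across (-1, 0)).
eps = 1e-6
jump = abs(E((-1.0, eps)) - E((-1.0, -eps)))
print(jump > 6.0)  # approximately 2*pi: the topological obstruction
```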
Dimensionality reduction and width of deep neural networks based on topological degree theory
Dimensionality reduction (DR) and deep neural networks (DNNs) are two important aspects of data analysis. In data analysis and deep learning, datasets are often high-dimensional and exhibit complicated topological structures arising from various backgrounds across science and engineering [1,2,4-7]. Traditional approaches to data analysis and visualization, in particular for image recognition, often fail in the high-dimensional setting, and a common practice is to perform dimensionality reduction [2, 6, 11] in order to make data analysis tractable and economical; DNNs are a powerful tool for non-linear dimensionality reduction problems. It is now recognized that practical datasets often consist of features of low intrinsic dimension with nontrivial topological structure [1,2,6], and the geometric structure of a dataset heavily affects the architecture of the deep neural network. Nonetheless, how and to what extent the geometric (topological) structure of a dataset is connected with the architecture of a deep neural network remains unclear and has been an active research area of deep learning in recent years.
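The "low intrinsic dimension" setting the passage describes is easy to reproduce: data sampled near a low-dimensional subspace of a high-dimensional ambient space, where linear DR (here plain PCA via SVD, as a baseline sketch rather than anything from this paper) recovers almost all the variance in a few directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points near a 2-D plane embedded in R^10: high ambient dimension,
# low intrinsic dimension, plus a little noise.
latent = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 10))
X = latent @ A + 0.01 * rng.normal(size=(200, 10))

# Linear dimensionality reduction via SVD of the centered data (PCA):
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()

# Almost all variance lies in the first two principal directions:
print(explained[:2].sum() > 0.99)  # True
```

Nonlinear or topologically nontrivial datasets (the case this paper targets via degree theory) are precisely where such a linear baseline stops being adequate.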
Understanding Mode Connectivity via Parameter Space Symmetry
Zhao, Bo, Dehmamy, Nima, Walters, Robin, Yu, Rose
Neural network minima are often connected by curves along which train and test loss remain nearly constant, a phenomenon known as mode connectivity. While this property has enabled applications such as model merging and fine-tuning, its theoretical explanation remains unclear. We propose a new approach to exploring the connectedness of minima using parameter space symmetry. By linking the topology of symmetry groups to that of the minima, we derive the number of connected components of the minima of linear networks and show that skip connections reduce this number. We then examine when mode connectivity and linear mode connectivity hold or fail, using parameter symmetries which account for a significant part of the minimum. Finally, we provide explicit expressions for connecting curves in the minima induced by symmetry. Using the curvature of these curves, we derive conditions under which linear mode connectivity approximately holds. Our findings highlight the role of continuous symmetries in understanding the neural network loss landscape.
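The mechanism in the abstract can be seen in the smallest possible model. For a two-layer linear network $f(x) = w_2 w_1 x$ with scalar weights, the loss depends only on the product $w_2 w_1$, so the rescaling symmetry $(w_1, w_2) \mapsto (a w_1, w_2/a)$ generates a curve inside the minimum, while straight-line interpolation between two such minima leaves it. A toy sketch (illustrative, not the paper's derivation):

```python
import numpy as np

# Toy two-layer linear network f(x) = w2 * w1 * x with scalar weights.
def loss(w1, w2, target=1.0):
    return (w2 * w1 - target) ** 2

# Two minima related by the rescaling symmetry (product = 1 in both):
p0 = (1.0, 1.0)
p1 = (4.0, 0.25)

ts = np.linspace(0.0, 1.0, 11)

# Symmetry-induced connecting curve: interpolate the scale factor a(t) = 4^t.
curve_losses = [loss(4.0 ** t, 1.0 / 4.0 ** t) for t in ts]

# Straight-line (linear) interpolation in parameter space:
line_losses = [loss((1 - t) * p0[0] + t * p1[0],
                    (1 - t) * p0[1] + t * p1[1]) for t in ts]

print(max(curve_losses) < 1e-12)  # True: loss constant along the symmetry curve
print(max(line_losses) > 0.1)     # True: the linear path leaves the minimum
```

The curvature of such symmetry-induced curves is what the abstract uses to decide when linear mode connectivity approximately holds.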
Homeomorphism Prior for False Positive and Negative Problem in Medical Image Dense Contrastive Representation Learning
He, Yuting, Wang, Boyu, Ge, Rongjun, Chen, Yang, Yang, Guanyu, Li, Shuo
Dense contrastive representation learning (DCRL) has greatly improved the learning efficiency for image-dense prediction tasks, showing great potential to reduce the large costs of medical image collection and dense annotation. However, the properties of medical images make correspondence discovery unreliable, leading to the open problem of large-scale false positive and negative (FP&N) pairs in DCRL. In this paper, we propose GEoMetric vIsual deNse sImilarity (GEMINI) learning, which embeds the homeomorphism prior into DCRL and enables reliable correspondence discovery for effective dense contrast. We propose a deformable homeomorphism learning (DHL) module which models the homeomorphism of medical images and learns to estimate a deformable mapping that predicts pixel correspondence under topological preservation. It effectively reduces the search space of pairing and drives an implicit, soft learning of negative pairs via a gradient. We also propose a geometric semantic similarity (GSS) which extracts semantic information from features to measure the alignment degree for correspondence learning. This promotes the learning efficiency and performance of the deformation, constructing positive pairs reliably. We implement two practical variants on two typical representation learning tasks in our experiments. Our promising results on seven datasets outperform existing methods, demonstrating the superiority of our approach. We will release our code on a companion link: https://github.com/YutingHe-list/GEMINI.
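The homeomorphism prior can be illustrated in one dimension: if correspondences between two images are modelled by a strictly monotone (hence invertible, topology-preserving) deformation of the pixel grid rather than by free per-pixel matching, pixel ordering is never flipped and each pixel gets exactly one candidate positive pair. A toy sketch, with all names illustrative rather than GEMINI's actual implementation:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)  # 1-D pixel grid of "image" A

# A deformable mapping phi: [0,1] -> [0,1]. Its perturbation is small
# enough that phi stays strictly increasing, i.e. a homeomorphism.
phi = x + 0.05 * np.sin(2 * np.pi * x) * 4 * x * (1 - x)

# Topological preservation: the ordering of pixels is never flipped,
# which shrinks the pairing search space to the graph of the warp.
print(bool(np.all(np.diff(phi) > 0)))  # True

# Correspondence discovery: pixel i of A pairs with the nearest grid
# index of phi[i] in B, giving one positive pair per pixel.
pairs = np.rint(phi * (n - 1)).astype(int)
print(bool(np.all((pairs >= 0) & (pairs < n))))  # True: all pairs valid
```

A free per-pixel matcher has no such monotonicity constraint, which is exactly how large-scale false positive and negative pairs arise.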