Supplement to "Estimating Riemannian Metric with Noise-Contaminated Intrinsic Distance"
Unlike distance metric learning, where the usual focus is the downstream tasks that use the estimated metric, our proposal focuses on the estimated metric itself as a characterization of the geometric structure of the data space. Beyond the illustrated taxi and MNIST examples, finding more compelling applications that target the data-space geometry remains open. Interpreting mathematical concepts such as the Riemannian metric and geodesics in the context of potential applications (e.g., cognition and perception research, where similarity measures are common) could be inspiring. Our proposal requires sufficiently dense data, which can be demanding, especially for high-dimensional data due to the curse of dimensionality. Dimension reduction (e.g., manifold embedding, as in the MNIST example) can substantially alleviate the curse of dimensionality, making the dense-data requirement more likely to hold.
Walking the Weight Manifold: a Topological Approach to Conditioning Inspired by Neuromodulation
Benjamin, Ari S., Daruwalla, Kyle, Pehle, Christian, Zekri, Abdul-Malik, Zador, Anthony M.
One frequently wishes to learn a range of similar tasks as efficiently as possible, re-using knowledge across tasks. In artificial neural networks, this is typically accomplished by conditioning a network upon task context by injecting context as input. Brains have a different strategy: the parameters themselves are modulated as a function of various neuromodulators such as serotonin. Here, we take inspiration from neuromodulation and propose to learn weights which are smoothly parameterized functions of task context variables. Rather than optimize a weight vector, i.e. a single point in weight space, we optimize a smooth manifold in weight space with a predefined topology. To accomplish this, we derive a formal treatment of optimization of manifolds as the minimization of a loss functional subject to a constraint on volumetric movement, analogous to gradient descent. During inference, conditioning selects a single point on this manifold which serves as the effective weight matrix for a particular sub-task. This strategy for conditioning has two main advantages. First, the topology of the manifold (whether a line, circle, or torus) is a convenient lever for inductive biases about the relationship between tasks. Second, learning in one state smoothly affects the entire manifold, encouraging generalization across states. To verify this, we train manifolds with several topologies, including straight lines in weight space (for conditioning on e.g. noise level in input data) and ellipses (for rotated images). Despite their simplicity, these parameterizations outperform conditioning identical networks by input concatenation and better generalize to out-of-distribution samples. These results suggest that modulating weights over low-dimensional manifolds offers a principled and effective alternative to traditional conditioning.
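The core idea of the abstract, weights as a smooth function of task context selected at inference time, can be sketched as follows. This is a minimal illustration under our own assumptions: the ellipse parameterization W(theta) = W0 + cos(theta)*A + sin(theta)*B and all names are illustrative, not the paper's exact construction.

```python
import numpy as np

# Hypothetical sketch of a circular weight manifold: the effective weight
# matrix is a smooth function of a task-context angle theta. W0, A, B are
# the learned parameters; the manifold's topology (here a circle/ellipse)
# is fixed in advance.
rng = np.random.default_rng(0)
d_in, d_out = 4, 3
W0 = rng.normal(size=(d_out, d_in))        # center of the manifold
A = 0.1 * rng.normal(size=(d_out, d_in))   # first axis of the ellipse
B = 0.1 * rng.normal(size=(d_out, d_in))   # second axis of the ellipse

def weights_on_circle(theta):
    """Conditioning: select the effective weight matrix for context theta."""
    return W0 + np.cos(theta) * A + np.sin(theta) * B

x = rng.normal(size=d_in)
y = weights_on_circle(0.5) @ x             # forward pass for one sub-task
```

In training, gradients with respect to W0, A, and B would be accumulated across sampled context values, so learning at one point of the manifold smoothly moves the entire manifold, which is the generalization mechanism the abstract describes.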
We thank the reviewers for their time, helpful feedback, and advice. We thank them for their kind words, and hope to address any remaining concerns below. We agree and propose the following replacement: "We show that replacing VAE [...]". We will improve this in the next version. In more detail, we compared three decoders: (i) a standard "vanilla" multilayer perceptron (implicitly relying on the [...]). This ablation study shows that linearising the Poincaré ball through the logarithm map (i.e. [...]) [...]. The analogy is not limited to the two-dimensional case.
Energy Guided Geometric Flow Matching
Zweig, Aaron, Zhang, Mingxuan, Azizi, Elham, Knowles, David
A useful inductive bias for temporal data is that trajectories should stay close to the data manifold. Traditional flow matching relies on straight conditional paths, and flow matching methods which learn geodesics rely on RBF kernels or nearest neighbor graphs that suffer from the curse of dimensionality. We propose to use score matching and annealed energy distillation to learn a metric tensor that faithfully captures the underlying data geometry and informs more accurate flows. We demonstrate the efficacy of this strategy on synthetic manifolds with analytic geodesics, and interpolation of cell
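The straight-conditional-path baseline that the abstract contrasts with can be sketched in a few lines. This is a generic illustration of standard flow matching targets, not the paper's energy-guided method; all names are ours.

```python
import numpy as np

# Minimal sketch of straight conditional paths in flow matching: a point on
# the line from a source sample x0 to a data sample x1, and the constant
# conditional velocity target a vector field v(x_t, t) would be regressed onto.
# The paper's point is that such straight paths can leave the data manifold.
rng = np.random.default_rng(1)
x0 = rng.normal(size=(8, 2))     # source (noise) samples
x1 = rng.normal(size=(8, 2))     # target (data) samples
t = rng.uniform(size=(8, 1))     # random times in [0, 1]

x_t = (1.0 - t) * x0 + t * x1    # point on the straight conditional path
v_target = x1 - x0               # conditional velocity target (constant in t)
```

A learned metric tensor, as the abstract proposes, would replace these straight segments with (approximate) geodesics under that metric.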
Appendix for Riemannian Continuous Normalizing Flows
A Constant curvature manifolds
In the following, we provide a brief overview of Riemannian geometry and constant curvature manifolds, specifically the Poincaré ball and the hypersphere models. We first review key concepts of hyperbolic geometry, and then discuss positively curved spaces, known as elliptic spaces, in particular the hypersphere model, which is endowed with the pull-back metric of the ambient Euclidean space. Unfortunately, conventional probabilistic models implicitly assume a flat geometry.
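Two standard Poincaré-ball formulas (curvature -1) that such an overview typically reviews are the geodesic distance and the exponential map at the origin. The sketch below uses the common Möbius-gyrovector conventions; the helper names are ours, not the appendix's notation.

```python
import numpy as np

def poincare_dist(x, y):
    """Geodesic distance between points x, y inside the open unit ball:
    arccosh(1 + 2|x-y|^2 / ((1-|x|^2)(1-|y|^2)))."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def exp0(v):
    """Exponential map at the origin: maps a tangent vector v into the ball
    along the geodesic (a straight ray through the origin)."""
    n = np.linalg.norm(v)
    return v if n == 0 else np.tanh(n) * v / n
```

Note that the metric's conformal factor 2/(1-|x|^2) equals 2 at the origin, so the Riemannian length of a tangent vector v there is 2|v|, and consistently d(0, exp0(v)) = 2|v|.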
Finding geodesics with the Deep Ritz method
Geodesic problems involve computing trajectories between prescribed initial and final states to minimize a user-defined measure of distance, cost, or energy. They arise throughout physics and engineering -- for instance, in determining optimal paths through complex environments, modeling light propagation in refractive media, and the study of spacetime trajectories in control theory and general relativity. Despite their ubiquity, the scientific machine learning (SciML) community has given relatively little attention to investigating its methods in the context of these problems. In this work, we argue that given their simple geometry, variational structure, and natural nonlinearity, geodesic problems are particularly well-suited for the Deep Ritz method. We substantiate this claim with four numerical examples drawn from path planning, optics, solid mechanics, and generative modeling. Our goal is not to provide an exhaustive study of geodesic problems, but rather to identify a promising application of the Deep Ritz method and a fruitful direction for future SciML research.
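The variational structure the abstract appeals to, minimizing a path energy between fixed endpoints, can be sketched in a toy discrete form. Here free interior points stand in for the Deep Ritz neural network, and a flat metric is assumed so the minimizer is a straight line; this is our simplification, not the paper's solver.

```python
import numpy as np

# Toy Ritz sketch for a geodesic problem: minimize the discrete path energy
# E = sum_i |x_{i+1} - x_i|^2 between fixed endpoints by gradient descent.
a = np.array([0.0, 0.0])                 # fixed start
b = np.array([1.0, 1.0])                 # fixed end
n = 20
path = np.linspace(a, b, n)              # initial guess along the line ...
path = path + 0.1 * np.sin(np.linspace(0.0, np.pi, n))[:, None]  # ... perturbed

lr = 0.1
for _ in range(2000):
    grad = np.zeros_like(path)
    # dE/dx_i = 2*(2*x_i - x_{i-1} - x_{i+1}) for interior points only,
    # so the endpoints stay fixed (the boundary conditions of the problem).
    grad[1:-1] = 2.0 * (2.0 * path[1:-1] - path[:-2] - path[2:])
    path -= lr * grad

straight = np.linspace(a, b, n)          # the flat-metric geodesic
```

The Deep Ritz formulation replaces `path` by a small network gamma_theta(t) and the hand-written gradient by automatic differentiation of the same energy functional; nonconstant metrics (refractive media, curved spacetimes) enter by weighting each segment's squared length with the local metric.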