- Asia > Middle East > Israel (0.04)
- North America > United States > Massachusetts (0.04)
- North America > Canada > Alberta > Census Division No. 15 > Improvement District No. 9 > Banff (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > Canada > Ontario > Toronto (0.28)
- North America > Canada > Ontario > Hamilton (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Data Science > Data Mining (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (0.46)
Minimax Rates for Hyperbolic Hierarchical Learning
Rawal, Divit, Vishwanath, Sriram
We prove an exponential separation in sample complexity between Euclidean and hyperbolic representations for learning on hierarchical data under standard Lipschitz regularization. For depth-$R$ hierarchies with branching factor $m$, we first establish a geometric obstruction for Euclidean space: any bounded-radius embedding forces volumetric collapse, mapping exponentially many tree-distant points to nearby locations. This necessitates Lipschitz constants scaling as $\exp(\Omega(R))$ to realize even simple hierarchical targets, yielding exponential sample complexity under capacity control. We then show this obstruction vanishes in hyperbolic space: constant-distortion hyperbolic embeddings admit $O(1)$-Lipschitz realizability, enabling learning with $n = O(mR \log m)$ samples. A matching $\Omega(mR \log m)$ lower bound via Fano's inequality establishes that hyperbolic representations achieve the information-theoretic optimum. We also show a geometry-independent bottleneck: any rank-$k$ prediction space captures only $O(k)$ canonical hierarchical contrasts.
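The geometric intuition behind this separation can be illustrated numerically. The sketch below (not code from the paper) computes distances in the Poincaré ball model of hyperbolic space: as points are pushed toward the boundary, their Euclidean distance stays capped by the ball's diameter, while their hyperbolic distance grows without bound — the extra room that lets tree-separated points stay far apart under an $O(1)$-Lipschitz map.

```python
import math

def poincare_dist(u, v):
    """Geodesic distance between points of the open unit ball in the
    Poincare ball model of hyperbolic space."""
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    uv = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * uv / ((1.0 - uu) * (1.0 - vv)))

def euclidean_dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Push two antipodal points toward the boundary: the Euclidean distance is
# capped by the ball's diameter 2, while the hyperbolic distance diverges,
# leaving room to place exponentially many tree-distant points far apart.
for r in (0.5, 0.9, 0.99):
    u, v = (r, 0.0), (-r, 0.0)
    print(f"r={r}: euclidean={euclidean_dist(u, v):.2f}, "
          f"hyperbolic={poincare_dist(u, v):.2f}")
```

In a bounded Euclidean ball no such divergence is available, which is exactly the volumetric-collapse obstruction the abstract describes.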
- North America > United States > California > Alameda County > Berkeley (0.04)
- North America > Dominican Republic (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Israel (0.04)
Advances in Diffusion-Based Generative Compression
Popularized by their strong image generation performance, diffusion and related methods for generative modeling have found widespread success in visual media applications. In particular, diffusion methods have enabled new approaches to data compression, where realistic reconstructions can be generated at extremely low bit-rates. This article provides a unifying review of recent diffusion-based methods for generative lossy compression, with a focus on image compression. These methods generally encode the source into an embedding and employ a diffusion model to iteratively refine it in the decoding procedure, such that the final reconstruction approximately follows the ground truth data distribution. The embedding can take various forms and is typically transmitted via an auxiliary entropy model, and recent methods also explore the use of diffusion models themselves for information transmission via channel simulation. We review representative approaches through the lens of rate-distortion-perception theory, highlighting the role of common randomness and connections to inverse problems, and identify open challenges.
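The decoding procedure described above — start from noise and iteratively refine toward a reconstruction consistent with the transmitted embedding — can be caricatured in a few lines. The sketch below is a toy numerical illustration, not any published codec: the linear "denoise" pull stands in for a learned noise-prediction network conditioned on the embedding, and the annealed perturbation mimics the stochastic refinement steps of a diffusion sampler.

```python
import random

def decode(embedding, steps=50, noise_scale=1.0, seed=0):
    """Schematic diffusion-style decoder: start from pure noise, then
    alternate a denoising pull toward the transmitted embedding with a
    shrinking stochastic perturbation."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, noise_scale) for _ in embedding]
    for t in range(steps, 0, -1):
        sigma = noise_scale * t / steps                    # annealed noise level
        # Denoising step: pull the sample toward the embedding (stand-in
        # for a learned, embedding-conditioned denoiser).
        x = [xi + 0.2 * (ei - xi) for xi, ei in zip(x, embedding)]
        # Stochastic refinement with noise that shrinks as t decreases.
        x = [xi + rng.gauss(0.0, 0.1 * sigma) for xi in x]
    return x

reconstruction = decode([1.0, -2.0, 0.5])
```

A real generative codec replaces the linear pull with a trained denoiser so that the final sample lies near the data manifold rather than collapsing onto the embedding itself; this is where the rate-distortion-perception trade-off enters.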
- Europe > United Kingdom > North Sea > Southern North Sea (0.04)
- North America > United States > California > Orange County > Irvine (0.04)
Multi-task Modeling for Engineering Applications with Sparse Data
Comlek, Yigitcan, Krishnan, R. Murali, Ravi, Sandipp Krishnan, Moghaddas, Amin, Giorjao, Rafael, Eff, Michael, Samaddar, Anirban, Ramachandra, Nesar S., Madireddy, Sandeep, Wang, Liping
Modern engineering and scientific workflows frequently require simultaneous prediction across related tasks and fidelity levels [1-6]. In such contexts, some outputs are scarce and expensive to obtain, while others are cheaper and more abundant. Multi-task Gaussian processes (MTGPs), also known as multi-output Gaussian processes, offer a principled Bayesian framework to exploit inter-task correlations, enabling knowledge sharing that improves predictive accuracy and reduces the demand for large high-fidelity datasets [7-9]. Over decades of development, MTGPs have been applied across diverse domains, including time series forecasting, multitask optimization, and multifidelity classification, demonstrating their broad utility wherever data cost asymmetries and cross-task dependencies are present [10-16]. The central motivation for MTGPs is to leverage dependencies among related tasks to enhance predictive quality when high-fidelity information is limited [17]. For example, predicting an airfoil's lift coefficient from limited, expensive high-fidelity computational fluid dynamics (CFD) simulations can benefit from correlation with more abundant low-fidelity simulations [3]. Recent work in joint multi-objective and multifidelity optimization has also utilized MTGPs to balance exploration and exploitation across tasks, improving predictive performance and decision-making by explicitly modeling relationships among outputs and fidelities [12].
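The knowledge sharing described above can be sketched with the intrinsic coregionalization model (ICM), one common MTGP construction, where the joint kernel factorizes as K((x, t), (x', t')) = B[t, t'] · k(x, x'). The toy below is illustrative (not from the article): an expensive task observed at only two inputs borrows strength from a densely observed cheap task through the inter-task covariance B.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale ** 2)

def icm_kernel(x1, t1, x2, t2, B, lengthscale=1.0):
    """Intrinsic coregionalization model:
    K((x, t), (x', t')) = B[t, t'] * k(x, x'),
    where B encodes inter-task correlations and k is a shared input kernel."""
    return B[np.ix_(t1, t2)] * rbf(x1, x2, lengthscale)

# Toy setup: a cheap task (0) observed densely, an expensive task (1) at
# only two inputs; task 1 is a scaled copy of task 0, so B is rank one.
X = np.concatenate([np.linspace(0.0, 5.0, 20), [1.0, 4.0]])
T = np.array([0] * 20 + [1, 1])
y = np.where(T == 0, np.sin(X), 0.9 * np.sin(X))
B = np.array([[1.0, 0.9], [0.9, 0.81]])  # outer product of task loadings (1, 0.9)

# GP posterior mean for the expensive task at a held-out input: the two
# task-1 points alone say little about x = 2.5, but the dense task-0 data
# pins down the shared latent function.
Xs, Ts = np.array([2.5]), np.array([1])
K = icm_kernel(X, T, X, T, B) + 1e-6 * np.eye(len(X))
Ks = icm_kernel(Xs, Ts, X, T, B)
mean = Ks @ np.linalg.solve(K, y)  # close to 0.9 * sin(2.5)
```

With the cross-task entries of B zeroed out, the same computation would fall back to an ordinary single-task GP on the two expensive observations, losing the borrowed accuracy — which is precisely the trade-off the article motivates.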
- North America > United States > Ohio > Franklin County > Columbus (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture (0.04)
- Government > Regional Government > North America Government > United States Government (1.00)
- Energy (1.00)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Information Fusion (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.35)