efficient approximation
Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds
Chen, Minshuo, Jiang, Haoming, Liao, Wenjing, Zhao, Tuo
Deep neural networks have revolutionized many real-world applications, owing to their flexibility in fitting data and their accurate predictions on unseen data. A line of research shows that neural networks can approximate certain classes of functions to arbitrary accuracy, but the size of the network scales exponentially with the data dimension. Empirical results, however, suggest that networks of moderate size already yield appealing performance. To explain this gap, a common belief is that many data sets exhibit low-dimensional structures and can be modeled as samples near a low-dimensional manifold. In this paper, we prove that neural networks can efficiently approximate functions supported on low-dimensional manifolds.
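The piecewise-linear nature of ReLU networks is what makes such approximation results work: a one-hidden-layer ReLU network can exactly realize any piecewise-linear interpolant. A minimal NumPy sketch (an illustration of the general principle, not the paper's construction) approximating f(x) = x² on [0, 1]:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, knots):
    # Weights chosen so the network reproduces the piecewise-linear
    # interpolant of f at the knots: c_0 = s_0, c_i = s_i - s_{i-1},
    # where s_i is the slope of the interpolant on the i-th interval.
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    coefs = np.diff(slopes, prepend=0.0)
    t = knots[:-1]
    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return y[0] + relu(x[:, None] - t[None, :]) @ coefs
    return net

f = lambda x: x**2
knots = np.linspace(0.0, 1.0, 33)   # 32 hidden ReLU units
net = relu_interpolant(f, knots)

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(net(xs) - f(xs)))   # O(h^2) for smooth f; here < 1e-3
```

Halving the knot spacing quarters the error for smooth one-dimensional targets; the point of the paper is that for functions on a d-dimensional manifold embedded in high-dimensional space, the number of units needed scales with d rather than the ambient dimension.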
Global Solver and Its Efficient Approximation for Variational Bayesian Low-rank Subspace Clustering
Nakajima, Shinichi, Takeda, Akiko, Babacan, S. Derin, Sugiyama, Masashi, Takeuchi, Ichiro
When a probabilistic model and its prior are given, Bayesian learning offers inference with automatic parameter tuning. However, Bayesian learning is often obstructed by computational difficulty: rigorous Bayesian learning is intractable in many models, and its variational Bayesian (VB) approximation is prone to local minima. In this paper, we overcome this difficulty for low-rank subspace clustering (LRSC) by providing an exact global solver and its efficient approximation. LRSC extracts a low-dimensional structure from data by embedding samples into a union of low-dimensional subspaces, and its variational Bayesian variant has shown good performance. We first prove a key property: the VB-LRSC model is highly redundant. Thanks to this property, the optimization problem of VB-LRSC can be separated into small subproblems, each of which has only a small number of unknown variables. Our exact global solver relies on another key property: the stationary condition of each subproblem can be written as a set of polynomial equations, which is solvable with the homotopy method. For further computational efficiency, we also propose an efficient approximate variant whose stationary condition can be written as a polynomial equation in a single variable. Experimental results show the usefulness of our approach.
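As a generic illustration of why a single-variable polynomial stationary condition makes global optimization easy (the objective below is hypothetical; the paper's actual VB-LRSC polynomial is model-specific and not reproduced here), all stationary points can be collected from the roots of the derivative polynomial and compared directly:

```python
import numpy as np

# Hypothetical nonconvex one-variable objective standing in for a
# single VB-LRSC subproblem after the paper's reduction.
def J(x):
    return (x**2 - 1.0)**2 + 0.1 * x

# Stationary condition J'(x) = 4x^3 - 4x + 0.1 = 0 is a polynomial
# in a single variable, so every candidate comes from its roots --
# no risk of a local search stopping at the wrong stationary point.
candidates = np.roots([4.0, 0.0, -4.0, 0.1])
real = candidates[np.abs(candidates.imag) < 1e-9].real

x_star = real[np.argmin(J(real))]   # global minimizer among candidates
```

Here `J` has two local minima and one local maximum; evaluating `J` at the three real roots picks out the global minimizer (near x ≈ -1) exactly, which is the guarantee a local VB update cannot give.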
Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction
Lin, Yuchao, Yan, Keqiang, Luo, Youzhi, Liu, Yi, Qian, Xiaoning, Ji, Shuiwang
We study property prediction for crystal materials. A crystal structure consists of a minimal unit cell that is repeated infinitely in 3D space. How to accurately represent such repetitive structures in machine learning models remains unresolved. Current methods construct graphs by establishing edges only between nearby nodes, thereby failing to faithfully capture infinite repeating patterns and distant interatomic interactions. In this work, we propose several innovations to overcome these limitations. First, we propose to model physics-principled interatomic potentials directly instead of only using distances as in many existing methods. These potentials include the Coulomb potential, London dispersion potential, and Pauli repulsion potential. Second, we model the complete set of potentials among all atoms, instead of only between nearby atoms as in existing methods. This is enabled by our approximations of infinite potential summations, where we extend the Ewald summation for several potential series approximations with provable error bounds. Finally, we propose to incorporate our computations of complete interatomic potentials into message passing neural networks for representation learning. We perform experiments on the JARVIS and Materials Project benchmarks for evaluation. Results show that the use of interatomic potentials and complete interatomic potentials leads to consistent performance improvements with reasonable computational costs. Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS/tree/main/OpenMat/PotNet).
- North America > United States > Texas > Brazos County > College Station (0.14)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- North America > United States > Florida > Leon County > Tallahassee (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
Fast Fair Regression via Efficient Approximations of Mutual Information
Steinberg, Daniel, Reid, Alistair, O'Callaghan, Simon, Lattimore, Finnian, McCalman, Lachlan, Caetano, Tiberio
Most work in algorithmic fairness to date has focused on discrete outcomes, such as deciding whether to grant someone a loan. In these classification settings, group fairness criteria such as independence, separation, and sufficiency can be measured directly by comparing rates of outcomes between subpopulations. Many important problems, however, require the prediction of a real-valued outcome, such as a risk score or insurance premium. In such regression settings, measuring group fairness criteria is computationally challenging, as it requires estimating information-theoretic divergences between conditional probability density functions. This paper introduces fast approximations of the independence, separation, and sufficiency group fairness criteria for regression models from their (conditional) mutual information definitions, and uses these approximations as regularisers to enforce fairness within a regularised risk-minimisation framework. Experiments on real-world datasets indicate that, despite its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness trade-offs.
- Oceania > Australia > New South Wales > Sydney (0.04)
- Oceania > Australia > Australian Capital Territory > Canberra (0.04)
- North America > United States > Texas > Dallas County > Dallas (0.04)
- Banking & Finance (0.48)
- Education (0.46)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.46)
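For intuition about such mutual-information approximations, here is a generic Gaussian proxy, not the paper's estimator: if a score y and a sensitive attribute a were jointly Gaussian, then I(y; a) = -½ ln(1 - ρ²), which reduces the independence criterion to a correlation computation that is cheap enough to use as a regulariser:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=5000).astype(float)   # sensitive attribute
fair_scores = rng.normal(size=5000)               # independent of a
unfair_scores = fair_scores + 2.0 * a             # shifted by group

def gaussian_mi_proxy(y, a):
    # Fast proxy for I(y; a) under a joint-Gaussian assumption:
    # I = -0.5 * ln(1 - rho^2). Always >= 0; zero iff uncorrelated.
    rho = np.corrcoef(y, a)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)

# The proxy is near zero for the independent scores and clearly
# positive for the group-dependent ones.
```

A fairness penalty of this shape can be added to a regularised risk objective; the paper's contribution is principled approximations of the full (conditional) mutual-information criteria rather than this correlation-only special case.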
Efficient Approximation for Security Games with Interval Uncertainty
Kiekintveld, Christopher (University of Texas at El Paso) | Kreinovich, Vladik (University of Texas at El Paso)
Security games have a growing number of applications. A key challenge for the field going forward is addressing model uncertainty and the robustness of game-theoretic solutions. Most existing methods for dealing with payoff uncertainty are Bayesian methods, which are NP-hard and scale poorly to very large problems. In this work we consider an alternative approach based on interval uncertainty. For a variant of security games with interval uncertainty, we introduce a polynomial-time approximation algorithm that computes highly accurate solutions within a given error bound.
- North America > United States > Texas > El Paso County > El Paso (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Information Technology > Game Theory (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.35)
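To make the interval-uncertainty setting concrete, consider a hypothetical toy (target names and payoff intervals invented for illustration; this is the basic robust-decision idea, not the paper's algorithm): when each payoff is only known to lie in an interval, a maximin rule ranks choices by their guaranteed lower bounds, with no distributional assumptions and no Bayesian integration:

```python
# Each defender choice has a payoff known only up to an interval
# [lo, hi] (hypothetical numbers for illustration).
intervals = {
    "airport": (2.0, 5.0),
    "port":    (3.0, 4.0),
    "bridge":  (1.0, 6.0),
}

def robust_choice(intervals):
    # Maximin under interval uncertainty: pick the choice whose
    # worst case (interval lower bound) is best. Linear time in
    # the number of choices, versus NP-hard Bayesian alternatives.
    return max(intervals, key=lambda name: intervals[name][0])

best = robust_choice(intervals)   # "port": guaranteed at least 3.0
```

"bridge" has the highest optimistic payoff (6.0) but the weakest guarantee (1.0); the robust rule instead selects "port", whose worst case dominates. The paper's contribution is extending this style of reasoning to mixed strategies in security games with a provable error bound.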