Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation
Seunghwan An and Jong-June Jeon
The Gaussianity assumption has been consistently criticized as a main limitation of the Variational Autoencoder (VAE) despite its efficiency in computational modeling. In this paper, we propose a new approach that expands the model capacity (i.e., the expressive power of the distributional family) without sacrificing the computational advantages of the VAE framework. Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distributions, which possesses general
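To make the decoder's distributional family concrete, here is a minimal numerical sketch of the asymmetric Laplace density and a finite mixture of such components. This is not the paper's implementation: the function names, the finite number of components, and the shared scale are illustrative assumptions (the paper uses an infinite mixture). The key fact shown is that the ALD negative log-likelihood is, up to a constant, the quantile (check) loss, which is what ties the decoder to quantile learning.

```python
import numpy as np

def ald_logpdf(y, mu, scale, tau):
    """Log-density of the asymmetric Laplace distribution ALD(mu, scale, tau).

    Up to an additive constant, -ald_logpdf is the check loss
    rho_tau(u) = u * (tau - 1{u < 0}) evaluated at u = (y - mu) / scale.
    """
    u = (y - mu) / scale
    check = u * (tau - (u < 0))                 # rho_tau(u), the check loss
    return np.log(tau * (1.0 - tau) / scale) - check

def mixture_loglik(y, pis, mus, scale, taus):
    """Log-likelihood of y under a finite mixture of ALD components.

    pis: component weights (sum to 1), mus: component locations,
    taus: component quantile levels. All names are illustrative.
    """
    comps = np.array([ald_logpdf(y, m, scale, t) for m, t in zip(mus, taus)])
    return np.log(np.sum(np.asarray(pis) * np.exp(comps)))

# Sanity check: at tau = 0.5 the ALD is a symmetric Laplace density.
val = ald_logpdf(0.0, 0.0, 1.0, 0.5)            # log(0.25) at the mode
```

At tau = 0.5 the normalizer tau*(1-tau)/scale equals 1/(4*scale) and the check loss reduces to |u|/2, recovering a symmetric Laplace; other tau values skew the density, which is what gives the mixture its flexibility.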
Block Broyden's Methods for Solving Nonlinear Equations
This paper studies quasi-Newton methods for solving nonlinear equations. We propose block variants of both good and bad Broyden's methods, which enjoy explicit local superlinear convergence rates. Our block good Broyden's method has a faster condition-number-free convergence rate than existing Broyden's methods because it takes advantage of multiple-rank modifications of the Jacobian estimator. On the other hand, our block bad Broyden's method directly estimates the inverse of the Jacobian, which provably reduces the computational cost of each iteration. Our theoretical results provide new insights into why good Broyden's method outperforms bad Broyden's method in most cases. The empirical results also demonstrate the superiority of our methods and validate our theoretical analysis.
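For orientation on what the block variants generalize, here is a minimal sketch of the classical rank-1 "good" Broyden method, which the abstract's block method extends with rank-k corrections. This is not the paper's algorithm; the function names, the identity initialization of the Jacobian estimate, and the toy system are illustrative assumptions.

```python
import numpy as np

def broyden_good(F, x0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0 with the classical rank-1 good Broyden update.

    Maintains an estimate J of the Jacobian and, after each step dx with
    residual change df, applies the rank-1 correction enforcing the
    secant condition J_new @ dx == df. The paper's block variant applies
    rank-k corrections instead.
    """
    x = np.asarray(x0, dtype=float)
    J = np.eye(x.size)                    # crude initial Jacobian estimate
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J, -f)       # Newton-like step using estimate J
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - f
        # Good Broyden rank-1 update (secant condition on J)
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, f = x_new, f_new
    return x

# Toy system with unique root (0, 1): exp(x0) - 1 = 0, x0 + x1 - 1 = 0.
root = broyden_good(
    lambda v: np.array([np.exp(v[0]) - 1.0, v[0] + v[1] - 1.0]),
    x0=np.array([0.5, 0.5]),
)
```

The "bad" variant mentioned in the abstract instead maintains an estimate H of the inverse Jacobian with the update H += outer(dx - H @ df, df) / (df @ df), trading the linear solve for a matrix-vector product each iteration.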