Select-and-Sample for Spike-and-Slab Sparse Coding
Probabilistic inference serves as a popular model for neural processing. It remains unclear, however, how approximate probabilistic inference can be both accurate and scalable to very high-dimensional continuous latent spaces, especially since typical posteriors for sensory data can be expected to exhibit complex latent dependencies, including multiple modes. Here, we study an approach that can be scaled efficiently while maintaining a richly structured posterior approximation under these conditions. As an example model we use spike-and-slab sparse coding for V1 processing, and we combine latent subspace selection with Gibbs sampling (select-and-sample).
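The select-and-sample idea can be sketched as a two-stage procedure: first select a small subspace of latents that plausibly explain the data point, then run Gibbs sampling only inside that subspace. The following is a minimal illustration, not the authors' implementation; it assumes a standard spike-and-slab linear model y = W(s*z) + noise with Bernoulli spikes s and standard-normal slabs z, and uses a simple correlation score for the selection step (all parameter names are our own).

```python
import numpy as np

def select_and_sample(y, W, sigma=0.5, pi=0.1, H_sel=5, n_sweeps=50, seed=0):
    """Illustrative select-and-sample sketch for spike-and-slab sparse coding.

    y: (D,) data point; W: (D, H) dictionary; sigma: noise std;
    pi: prior spike probability; H_sel: size of the selected subspace.
    Returns a posterior-mean estimate of the codes s*z.
    """
    rng = np.random.default_rng(seed)
    D, H = W.shape
    # --- Select: keep only the H_sel latents whose dictionary columns
    # correlate most strongly with the data point.
    scores = np.abs(W.T @ y)
    sel = np.argsort(scores)[-H_sel:]
    # --- Sample: Gibbs sweeps over (s_h, z_h) restricted to the subspace.
    s = np.zeros(H, dtype=bool)
    z = np.zeros(H)
    samples = []
    for _ in range(n_sweeps):
        for h in sel:
            # Residual with latent h removed.
            r = y - W @ (s * z) + W[:, h] * (s[h] * z[h])
            a = 1.0 + W[:, h] @ W[:, h] / sigma**2   # posterior slab precision
            b = W[:, h] @ r / sigma**2
            # Log-odds for s_h = 1, marginalizing the slab value analytically.
            log_odds = np.log(pi / (1 - pi)) - 0.5 * np.log(a) + b**2 / (2 * a)
            p_on = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -50.0, 50.0)))
            s[h] = rng.random() < p_on
            z[h] = rng.normal(b / a, np.sqrt(1.0 / a)) if s[h] else 0.0
        samples.append((s * z).copy())
    return np.mean(samples, axis=0)
```

Because the Gibbs sweeps touch only H_sel latents instead of all H, the per-sweep cost is independent of the full latent dimensionality, which is what makes this style of approximation scalable.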
Proximal SCOPE for Distributed Sparse Learning
Distributed sparse learning with a cluster of multiple machines has attracted much attention in machine learning, especially for large-scale applications with high-dimensional data. One popular way to implement sparse learning is to use L1 regularization. In this paper, we propose a novel method, called proximal SCOPE (pSCOPE), for distributed sparse learning with L1 regularization.
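The proximal step that handles the non-smooth L1 term is soft thresholding. As context, the following is a minimal single-machine proximal gradient sketch for L1-regularized least squares — an illustration of the proximal operator that methods like pSCOPE build on, not the distributed algorithm itself (function names are our own).

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of lam * ||.||_1: shrink each coordinate toward zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def proximal_gd(X, y, lam=0.1, n_iters=300):
    # Proximal gradient descent on (1/2n)||Xw - y||^2 + lam * ||w||_1.
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n       # gradient of the smooth loss
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

The gradient step handles the smooth loss while the thresholding step produces exact zeros, which is why L1 regularization yields sparse solutions; in a distributed setting the gradient computation is what gets split across machines.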
Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses
Ananya Uppal, Shashank Singh, Barnabas Poczos
A long line of work has established convergence rates of the empirical distribution to the true distribution in spaces as general as unbounded metric spaces [54, 25, 45]. In the Euclidean setting, this is well understood [14, 2, 18], although, to the best of our knowledge, minimax lower bounds have been proven only recently [45]; this setting intersects with our work in the case σ_d = 1, σ_g = 0, p_d = , matching our minimax rate of n^(-1/D) + n^(-1/2).
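A two-term rate like n^(-1/D) + n^(-1/2) is governed by its slower-decaying term. The tiny check below (our illustration, not from the paper) makes this concrete: for D > 2 the dimension-dependent term n^(-1/D) dominates, while for D < 2 the parametric term n^(-1/2) does.

```python
def dominant_term(D, n=10**6):
    """Return which term of n^(-1/D) + n^(-1/2) is larger at sample size n."""
    return "n^(-1/D)" if n ** (-1.0 / D) > n ** (-0.5) else "n^(-1/2)"
```

At D = 2 the two terms coincide, which is the usual boundary between the parametric and curse-of-dimensionality regimes.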