dictionary learning
Alternating minimization for dictionary learning with random initialization
Our algorithm is a simple alternating minimization procedure that alternates between an $\ell_1$-minimization step and a gradient descent step. Dictionary learning, and alternating minimization algorithms for it in particular, are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses of this problem, we replace a condition on the operator norm (that is, the largest singular value) of the true underlying dictionary $A^*$ with a condition on the matrix infinity norm (that is, the entry of largest magnitude). This not only yields convergence rates for the error of the estimated dictionary measured in the matrix infinity norm, but also ensures that a random initialization provably converges to the global optimum. Our guarantees hold under a reasonable generative model that allows dictionaries with growing operator norms and an arbitrary level of overcompleteness, while the sparsity level is information-theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm.
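To make the scheme concrete, here is a minimal sketch of such an alternating procedure, assuming data of the form $Y \approx A^* X^*$ with sparse $X^*$. The step sizes, iteration counts, and helper names are illustrative and not the paper's exact algorithm.

```python
# A minimal sketch of alternating minimization for dictionary learning:
# an l1-based sparse coding step (ISTA) followed by a gradient step on the
# dictionary, starting from a random column-normalized initialization.
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(Y, A, lam, n_ista=50):
    """Approximate argmin_X 0.5*||Y - A X||_F^2 + lam*||X||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_ista):
        grad = A.T @ (A @ X - Y)
        X = soft_threshold(X - grad / L, lam / L)
    return X

def dictionary_learning(Y, k, lam=0.1, eta=0.1, n_iters=100, seed=None):
    """Alternate l1 minimization (sparse coding) with gradient descent on A."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.standard_normal((d, k))
    A /= np.linalg.norm(A, axis=0)           # random unit-norm initialization
    for _ in range(n_iters):
        X = sparse_code(Y, A, lam)            # l1 minimization step
        A -= eta * (A @ X - Y) @ X.T / n      # gradient descent step on A
        A /= np.linalg.norm(A, axis=0)        # renormalize dictionary columns
    return A
```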
- North America > United States > Massachusetts > Hampshire County > Amherst Center (0.04)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Singapore (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- North America > Canada > Ontario > Toronto (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (3 more...)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Asia > South Korea > Daejeon > Daejeon (0.04)
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Asia > Middle East > Israel (0.04)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
OTLDA: A Geometry-Aware Optimal Transport Approach for Topic Modeling
We present an optimal transport framework for learning topics from textual data. While the celebrated Latent Dirichlet allocation (LDA) topic model and its variants have been applied to many disciplines, they mainly focus on word occurrences and neglect to incorporate semantic regularities in language.
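As a toy illustration of why a geometry-aware cost helps (this is not OTLDA's actual formulation), the sketch below computes an exact optimal transport distance between word histograms using a ground cost built from word embeddings; the tiny vocabulary and embeddings are invented for the example.

```python
# Optimal transport between word distributions with an embedding-based ground
# cost: semantically close words transport cheaply, unlike raw co-occurrence.
import numpy as np
from scipy.optimize import linprog

def ot_distance(p, q, C):
    """Exact OT cost between histograms p and q with ground cost C,
    solved as a linear program over the transport plan (row-major flattened)."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # row sums of the plan equal p
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # column sums of the plan equal q
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# Hypothetical 2-D "embeddings" for a tiny vocabulary.
emb = {"cat": [0.0, 1.0], "dog": [0.1, 0.9], "car": [1.0, 0.0]}
E = np.array(list(emb.values()))
C = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)   # pairwise ground cost

p = np.array([1.0, 0.0, 0.0])    # document concentrated on "cat"
q = np.array([0.0, 1.0, 0.0])    # document concentrated on "dog"
r = np.array([0.0, 0.0, 1.0])    # document concentrated on "car"
print(ot_distance(p, q, C))      # small: cat and dog are close in embedding space
print(ot_distance(p, r, C))      # large: cat and car are far apart
```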
- Oceania > Australia (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California (0.14)
- Africa > Senegal > Kolda Region > Kolda (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Health & Medicine (0.93)
- Leisure & Entertainment > Sports (0.69)
Generalized and Discriminative Few-Shot Object Detection via SVD-Dictionary Enhancement
Specifically, we propose a novel method, namely SVD-Dictionary enhancement, to build two separated spaces based on the sorted singular values. Concretely, the eigenvectors corresponding to larger singular values are used to build the generalization space in which localization is performed, as these eigenvectors generally suppress certain variations (e.g., the variation of styles) and contain intrinsic characteristics of objects.
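A minimal sketch of the subspace split described above, assuming the SVD of a feature dictionary is used to separate a generalization space (top singular vectors) from a complementary space; the names, dimensions, and the cutoff k are illustrative, not the paper's exact construction.

```python
# Split a feature dictionary's column space via SVD into a "generalization"
# subspace (large singular values) and its complement, then project features.
import numpy as np

def split_spaces(D, k):
    """Top-k left singular vectors of D (d x n) span the generalization space;
    the remaining ones span the complementary (assumed discriminative) space."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U[:, :k], U[:, k:]

def project(features, basis):
    """Project feature vectors (rows) onto the subspace spanned by `basis`."""
    return features @ basis @ basis.T

# Hypothetical dictionary and RoI features, just to exercise the split.
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 64))        # hypothetical feature dictionary
U_gen, U_rest = split_spaces(D, k=16)
feats = rng.standard_normal((10, 256))    # hypothetical RoI features
feats_gen = project(feats, U_gen)         # used for localization (per the abstract)
feats_rest = project(feats, U_rest)       # complementary projection
```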