Weakly Supervised Representation Learning with Sparse Perturbations Kartik Ahuja Jason Hartford
Neural Information Processing Systems
The theory of representation learning aims to build methods that provably invert the data-generating process with minimal domain knowledge or supervision. Most prior approaches require strong distributional assumptions on the latent variables and weak supervision (auxiliary information such as timestamps) to provide provable identification guarantees. In this work, we show that if one has weak supervision from observations generated by sparse perturbations of the latent variables, e.g.
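The abstract's setting of "sparse perturbations of the latent variables" can be illustrated with a minimal data-generation sketch. This is an assumption-laden toy, not the paper's method: the mixing function `mix`, the Gaussian latents, and the sparsity level `k` are all illustrative stand-ins for the unknown data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # latent dimension (illustrative)
k = 1  # number of perturbed latent coordinates (the sparsity level)

# Hypothetical fixed nonlinear mixing standing in for the unknown
# injective map from latents to observations.
A = rng.normal(size=(d, d))

def mix(z):
    return np.tanh(z @ A)

def sample_pair():
    """Return a weakly supervised pair (x, x_tilde) whose underlying
    latents differ in only k coordinates."""
    z = rng.normal(size=d)
    idx = rng.choice(d, size=k, replace=False)  # sparse perturbation support
    delta = np.zeros(d)
    delta[idx] = rng.normal(size=k)
    z_tilde = z + delta  # only k latent coordinates change
    return mix(z), mix(z_tilde), idx

x, x_tilde, idx = sample_pair()
```

A learner only ever sees the observation pairs `(x, x_tilde)`; the claim in the abstract is that, under suitable conditions, such sparsely perturbed pairs suffice to provably identify the latents.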