Weakly Supervised Representation Learning with Sparse Perturbations

Kartik Ahuja, Jason Hartford

Neural Information Processing Systems 

The theory of representation learning aims to build methods that provably invert the data-generating process with minimal domain knowledge or supervision. Most prior approaches require strong distributional assumptions on the latent variables together with weak supervision (auxiliary information such as timestamps) to provide provable identification guarantees. In this work, we show that one can obtain such guarantees from a different form of weak supervision: observations generated by sparse perturbations of the latent variables, e.g.
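To make the setup concrete, the following is a minimal illustrative sketch (not the authors' method) of the kind of paired data the abstract describes: each observation pair differs by a perturbation that is sparse in latent space but, after an unknown mixing map, dense in observation space. The linear mixing matrix `A`, the dimensions, and all variable names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5  # latent dimension (illustrative choice)
n = 4  # number of observation pairs

# Random invertible linear mixing as a stand-in for the true
# (unknown) generative map from latents to observations.
A = rng.normal(size=(d, d))

# Latents and their sparsely perturbed counterparts: each pair
# differs in exactly one latent coordinate (sparsity level 1).
z = rng.normal(size=(n, d))
idx = rng.integers(0, d, size=n)  # which coordinate is perturbed
delta = np.zeros((n, d))
delta[np.arange(n), idx] = 1.0 + rng.random(size=n)  # nonzero shifts
z_tilde = z + delta

# Observed pairs (x, x_tilde): only these are seen by the learner.
x = z @ A.T
x_tilde = z_tilde @ A.T

# The perturbation is 1-sparse in latent space but generally dense
# in observation space, which is what makes recovery non-trivial.
print(np.count_nonzero(z_tilde - z, axis=1))
print(np.count_nonzero(x_tilde - x, axis=1))
```

Here the sparse latent perturbation acts as the weak supervision signal: the learner sees only the pairs `(x, x_tilde)` and must exploit the sparsity of the latent change to identify the latents.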