Inferring Generative Model Structure with Static Analysis

Varma, Paroma, He, Bryan D., Bajaj, Payal, Khandwala, Nishith, Banerjee, Imon, Rubin, Daniel, Ré, Christopher

Neural Information Processing Systems 

Obtaining enough labeled data to robustly train complex discriminative models is a major bottleneck in the machine learning pipeline. A popular solution is combining multiple sources of weak supervision using generative models. The structure of these models affects the quality of the training labels, but is difficult to learn without any ground truth labels. We instead rely on weak supervision sources having some structure by virtue of being encoded programmatically. We present Coral, a paradigm that infers generative model structure by statically analyzing the code for these weak supervision heuristics, thus significantly reducing the amount of data required to learn structure.
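To make the idea concrete, here is a minimal sketch of the static-analysis intuition, using hypothetical heuristic functions and primitive names of my own invention (the paper's actual analysis and heuristics are more involved): heuristics that read the same input primitive are likely correlated, and inspecting their source reveals such shared dependencies without any labeled data.

```python
import ast
import inspect

# Two hypothetical weak-supervision heuristics (labeling functions).
# Each maps domain primitives to a noisy label in {-1, +1}.
def lf_large(area, perimeter):
    # Uses only the `area` primitive.
    return 1 if area > 100 else -1

def lf_round(area, perimeter):
    # Uses both `area` and `perimeter`.
    return 1 if perimeter ** 2 < 4 * 3.14159 * area * 1.5 else -1

def primitives_used(fn):
    """Statically determine which parameters (primitives) a heuristic reads,
    by walking its abstract syntax tree rather than executing it."""
    tree = ast.parse(inspect.getsource(fn))
    params = {a.arg for a in tree.body[0].args.args}
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return params & names

deps = {fn.__name__: primitives_used(fn) for fn in (lf_large, lf_round)}
# Heuristics sharing a primitive would get a dependency edge
# in the generative model over their outputs.
shared = deps["lf_large"] & deps["lf_round"]
print(shared)  # → {'area'}
```

The key point is that the dependency is discovered from code alone: no ground-truth labels and no samples of heuristic outputs are needed to propose the edge.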