Discrete Object Generation with Reversible Inductive Construction
Ari Seff, Wenda Zhou, Farhan Damani, Abigail Doyle, Ryan P. Adams
Neural Information Processing Systems
The success of generative modeling in continuous domains has led to a surge of interest in generating discrete data such as molecules, source code, and graphs. However, construction histories for these discrete objects are typically not unique and so generative models must reason about intractably large spaces in order to learn. Additionally, structured discrete domains are often characterized by strict constraints on what constitutes a valid object and generative models must respect these requirements in order to produce useful novel samples. Here, we present a generative model for discrete objects employing a Markov chain where transitions are restricted to a set of local operations that preserve validity. Building off of generative interpretations of denoising autoencoders, the Markov chain alternates between producing 1) a sequence of corrupted objects that are valid but not from the data distribution, and 2) a learned reconstruction distribution that attempts to fix the corruptions while also preserving validity.
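The sampling procedure described above can be sketched in a few lines. The following is a minimal, hypothetical Python sketch (not the authors' code): `local_edits` stands in for the set of validity-preserving local operations, and `reconstruction_model` for the learned reconstruction distribution; both are assumptions for illustration.

```python
import random

def corrupt(obj, local_edits, num_steps=5):
    """Apply a sequence of random validity-preserving local edits,
    producing a valid object that drifts away from the data distribution."""
    for _ in range(num_steps):
        edit = random.choice(local_edits)
        obj = edit(obj)  # each edit is assumed to map valid objects to valid objects
    return obj

def sample_chain(init_obj, local_edits, reconstruction_model, num_iters=100):
    """Run the Markov chain: alternately corrupt the current object and
    sample a repair from the learned reconstruction distribution."""
    obj = init_obj
    for _ in range(num_iters):
        corrupted = corrupt(obj, local_edits)
        obj = reconstruction_model.sample(corrupted)  # learned, validity-preserving repair
    return obj
```

Because both the corruption and reconstruction steps are restricted to validity-preserving local operations, every intermediate state of the chain remains a valid object.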