Deep Generative Models with Learnable Knowledge Constraints

Hu, Zhiting; Yang, Zichao; Salakhutdinov, Russ R.; Qin, Lianhui; Liang, Xiaodan; Dong, Haoye; Xing, Eric P.

Neural Information Processing Systems

The broad set of deep generative models (DGMs) has achieved remarkable advances. However, it is often difficult to incorporate rich structured domain knowledge into end-to-end DGMs. Posterior regularization (PR) offers a principled framework for imposing structured constraints on probabilistic models, but it has limited applicability to the diverse DGMs that may lack a Bayesian formulation or even explicit density evaluation. PR also requires constraints to be fully specified a priori, which is impractical or suboptimal for complex knowledge with learnable uncertain parts. In this paper, we establish a mathematical correspondence between PR and reinforcement learning (RL) and, based on this connection, expand PR to learn constraints as the extrinsic reward in RL. The resulting algorithm is model-agnostic, applying to any DGM, and is flexible enough to adapt arbitrary constraints jointly with the model. Experiments on human image generation and templated sentence generation show that models with knowledge constraints learned by our algorithm greatly improve over the base generative models.
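For readers unfamiliar with the objective behind this correspondence, the sketch below writes out a standard PR-style objective (in the spirit of Ganchev et al., 2010) with a parameterized constraint, as the abstract describes. The notation here is an assumption, not necessarily the paper's: p_theta is the generative model, f_phi the learnable constraint, q an auxiliary distribution, and alpha a weight.

% Minimal sketch of the PR-style objective the abstract refers to;
% symbols are assumed notation, not copied from the paper.
\begin{align*}
\mathcal{L}(q, \theta) &= \mathrm{KL}\big(q(x)\,\|\,p_\theta(x)\big) - \alpha\,\mathbb{E}_{q(x)}\big[f_\phi(x)\big],\\
q^*(x) &\propto p_\theta(x)\,\exp\big\{\alpha f_\phi(x)\big\}.
\end{align*}

Reading f_phi as a reward and p_theta as a policy turns this into an entropy-regularized policy-optimization objective, which is the PR-to-RL correspondence the abstract refers to; learning f_phi then amounts to learning an extrinsic reward, as in inverse RL.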


Reviews: Deep Generative Models with Learnable Knowledge Constraints

Neural Information Processing Systems

Summary: The paper proposes a way to incorporate constraints into the learning of generative models through posterior regularization. In doing so, the paper draws connections between posterior regularization and policy optimization. One of the key contributions of this paper is that the constraints are modeled as extrinsic rewards and learned through inverse reinforcement learning. The paper studies an interesting and very practical problem, and the contributions are substantial. The writing could definitely be made clearer in Sections 3 and 4, where the overloaded notation is often hard to follow. I have the following questions: 1.
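To make the review's description concrete, here is a minimal, hypothetical sketch (not the authors' code) of the alternating procedure it summarizes: the constraint is trained like a reward in maximum-entropy inverse RL, and the generator is then updated against that learned reward with a policy-gradient step. All interfaces (generator.sample, generator.sample_with_log_prob, constraint, alpha) are assumed placeholders.

# Hypothetical sketch of the alternating updates the review describes;
# `generator` and `constraint` are assumed torch.nn.Module-like objects
# with the interfaces noted in the comments.
import torch

def train_step(generator, constraint, gen_opt, con_opt, real_batch, alpha=1.0):
    """One alternating update of the constraint (reward) and generator (policy)."""
    n = real_batch.size(0)

    # (1) Constraint update, inverse-RL style: push f_phi up on real data
    # and down on current samples, so it behaves like a learned reward.
    fake = generator.sample(n).detach()                      # assumed interface
    con_loss = constraint(fake).mean() - constraint(real_batch).mean()
    con_opt.zero_grad()
    con_loss.backward()
    con_opt.step()

    # (2) Generator update, policy-optimization style: treat f_phi as an
    # extrinsic reward and apply a REINFORCE-style score-function gradient.
    samples, log_probs = generator.sample_with_log_prob(n)   # assumed interface
    reward = constraint(samples).detach()                    # per-sample rewards
    gen_loss = -(alpha * reward * log_probs).mean()
    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()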