
Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

Neural Information Processing Systems

Many important problems in science and engineering, such as drug design, involve optimizing an expensive black-box objective function over a complex, high-dimensional, and structured input space. Although machine learning techniques have shown promise in solving such problems, existing approaches substantially lack sample efficiency. We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model. In contrast to previous approaches, we actively steer the generative model to maintain a latent manifold that is highly useful for efficiently optimizing the objective. We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value. This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems.
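The loop described in the abstract — query the objective, weight the queried points by their objective value, and periodically retrain the generative model on the reweighted data — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; `train_generative_model`, `fit_surrogate`, and `suggest_latent` are hypothetical placeholders, and the rank-based weighting shown is just one plausible scheme for upweighting high-scoring points.

```python
import numpy as np

def rank_weights(scores, k=1e-3):
    """One plausible weighting: weight decays with rank, so the
    best-scoring points dominate the retraining objective."""
    ranks = np.argsort(np.argsort(-np.asarray(scores)))  # rank 0 = best score
    w = 1.0 / (k * len(scores) + ranks)
    return w / w.sum()

def latent_space_optimize(X0, y0, objective, n_rounds=10, queries_per_round=5):
    """Hypothetical outer loop: weighted retraining + latent-space queries."""
    X, y = list(X0), list(y0)
    for _ in range(n_rounds):
        w = rank_weights(y)
        # Retrain the deep generative model on the reweighted data so its
        # latent manifold concentrates on promising regions.
        model = train_generative_model(X, sample_weight=w)   # hypothetical
        h = fit_surrogate(model.encode(X), y)                # e.g. a GP on latents
        for _ in range(queries_per_round):
            z = suggest_latent(h)            # hypothetical acquisition step
            x = model.decode(z)
            X.append(x)
            y.append(objective(x))           # query the expensive black box
    best = int(np.argmax(y))
    return X[best], y[best]
```

The key design point is that retraining happens *inside* the optimization loop, so the latent space is actively steered toward high-objective regions rather than fixed after a single pretraining pass.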


Review for NeurIPS paper: Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

Neural Information Processing Systems

Weaknesses: Cons: In general I found the method section OK; however, some important parts are missing and need to be addressed. "Fit objective model h" (pseudo-algorithm line 6): what is h, and how is it fitted? You mention a Gaussian process for the ZINC dataset; why is that model appropriate, and how well does it actually fit the true objective function? "Suggest new latent z based on h" (pseudo-algorithm line 6): how do you find new latent-space samples? Some of this information can likely be found in the references or in the appendix; however, in my opinion it really needs to be explained and self-contained in the main paper. It would also strengthen the paper a lot if one more real-world example were included in the experimental results (currently two toy tasks and one real-world dataset).
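For readers with the same questions as the reviewer, the two steps at issue follow a standard latent-space Bayesian-optimization recipe: h is a surrogate regression model (e.g. a Gaussian process) fitted on (latent code, objective value) pairs, and new latents are proposed by maximizing the surrogate (or an acquisition function derived from it) over the latent space. The sketch below is an illustration of that standard recipe under these assumptions, not the paper's exact implementation; it uses a tiny RBF-kernel GP and a naive random-candidate search.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """RBF kernel between row-stacked latent codes A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def fit_gp(Z, y, noise=1e-6):
    """Fit a GP surrogate h on latents Z and objective values y;
    returns the posterior-mean function h(Zq)."""
    K = rbf(Z, Z) + noise * np.eye(len(Z))
    alpha = np.linalg.solve(K, y)
    return lambda Zq: rbf(Zq, Z) @ alpha

def suggest_latent(h, dim, n_candidates=1000, seed=0):
    """Naive proposal: best of random candidates under the surrogate mean.
    (A real system would optimize an acquisition function such as EI.)"""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(n_candidates, dim))
    return candidates[int(np.argmax(h(candidates)))]
```

With tiny observation noise, the GP posterior mean interpolates the training points, and `suggest_latent` returns the candidate the surrogate currently believes is best; this is the minimal version of "fit objective model h" and "suggest new latent z based on h".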


Review for NeurIPS paper: Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

Neural Information Processing Systems

This paper had 4 qualified reviewers, 3 of whom recommended acceptance and one who gave a 4 (updated from a 3 post-rebuttal). I think some of the complaints raised by the low-scoring review are technically correct, but I also don't feel that they are especially relevant to evaluating the scientific significance of this work, and I think the numerical score was too low given the text of that review. Given all of that, I am recommending acceptance for this paper.

