ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks

Omer Ronen, Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk, Bin Yu

arXiv.org Machine Learning 

We develop the Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO performs continuous optimization within the latent space of a Variational Autoencoder (VAE) and is known to be susceptible to over-exploration, which manifests in unrealistic solutions that reduce its practicality. ScaLES is an exact, theoretically motivated method that leverages the trained decoder's approximation of the data distribution. It can be computed with any existing decoder, e.g. from a VAE, without additional training, architectural changes, or access to the training data. Our evaluation across five LSO benchmark tasks and three VAE architectures demonstrates that ScaLES improves the quality of the solutions while maintaining high objective values, yielding gains over existing methods. We believe ScaLES' ability to identify out-of-distribution regions, together with its differentiability and computational tractability, opens new avenues for LSO.
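The abstract describes ScaLES only at a high level: a differentiable, training-free score derived from a pre-trained decoder, usable as a constraint or penalty during latent-space optimization. The sketch below illustrates that idea in PyTorch; the particular score used here (a log-determinant of the decoder's Jacobian Gram matrix) and all names (`toy_decoder`, `latent_score`, `penalized_objective`) are illustrative assumptions, not the exact ScaLES definition from the paper.

```python
import torch
import torch.nn as nn

# Toy decoder standing in for a pre-trained VAE decoder (hypothetical).
toy_decoder = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 10))


def latent_score(z: torch.Tensor) -> torch.Tensor:
    """Illustrative decoder-based score over the latent space.

    Uses the log-determinant of J^T J, where J is the decoder Jacobian at z.
    This is NOT the exact ScaLES formula, only a stand-in for a score that
    requires no extra training and is differentiable in z.
    """
    jac = torch.autograd.functional.jacobian(toy_decoder, z, create_graph=True)
    return -0.5 * torch.logdet(jac.T @ jac)


def penalized_objective(z, black_box, weight=1.0):
    """LSO objective regularized by the latent score to curb over-exploration."""
    return black_box(toy_decoder(z)) + weight * latent_score(z)


# Gradient ascent on the penalized objective (toy black-box: negative norm).
z = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(10):
    opt.zero_grad()
    loss = -penalized_objective(z, lambda x: -x.pow(2).sum())
    loss.backward()
    opt.step()
```

Because the score is a function of the decoder alone, the same pattern applies to any pre-trained decoder without retraining, which is the property the abstract emphasizes.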
