Gligorijević, Vladimir
LLMs are Highly-Constrained Biophysical Sequence Optimizers
Chen, Angelica, Stanton, Samuel D., Alberstein, Robert G., Watkins, Andrew M., Bonneau, Richard, Gligorijević, Vladimir, Cho, Kyunghyun, Frey, Nathan C.
Large language models (LLMs) have recently shown significant potential in various biological tasks such as protein engineering and molecule design. These tasks typically involve black-box discrete sequence optimization, where the challenge lies in generating sequences that are not only biologically feasible but also adhere to hard fine-grained constraints. However, LLMs often struggle with such constraints, especially in biological contexts where verifying candidate solutions is costly and time-consuming. In this study, we explore the possibility of employing LLMs as highly-constrained bilevel optimizers through a methodology we refer to as Language Model Optimization with Margin Expectation (LLOME). This approach combines both offline and online optimization, utilizing limited oracle evaluations to iteratively enhance the sequences generated by the LLM. We additionally propose a novel training objective - Margin-Aligned Expectation (MargE) - that trains the LLM to smoothly interpolate between the reward and reference distributions. Lastly, we introduce a synthetic test suite that bears strong geometric similarity to real biophysical problems and enables rapid evaluation of LLM optimizers without time-consuming lab validation. Our findings reveal that, in comparison to genetic algorithm baselines, LLMs achieve significantly lower regret solutions while requiring fewer test function evaluations. However, we also observe that LLMs exhibit moderate miscalibration, are susceptible to generator collapse, and have difficulty finding the optimal solution when no explicit ground truth rewards are available.
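As a rough illustration of the oracle-limited, iterate-and-refine setup this abstract describes (and not the authors' LLOME implementation), the Python sketch below runs a generate, screen, evaluate loop in which only a small shortlist of candidates consumes the expensive oracle budget each round. The function names (llm_propose, surrogate_score, oracle_score) and the toy scoring rules are hypothetical placeholders standing in for the LLM generator, constraint screens, and the black-box test function.

```python
# Minimal sketch of an iterative, oracle-limited sequence-optimization loop in the
# spirit of the abstract above. All functions here are hypothetical placeholders,
# not the LLOME API.
import heapq
import random

def llm_propose(seed_sequences, n_candidates):
    """Placeholder: an LLM would generate constrained edits of the seed sequences."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    out = []
    for _ in range(n_candidates):
        seq = list(random.choice(seed_sequences))
        pos = random.randrange(len(seq))
        seq[pos] = random.choice(alphabet)
        out.append("".join(seq))
    return out

def surrogate_score(seq):
    """Placeholder cheap screen (e.g., constraint checks or a learned proxy)."""
    return -abs(seq.count("C") - 2)  # toy constraint: prefer exactly two cysteines

def oracle_score(seq):
    """Placeholder for the expensive black-box test function or wet-lab assay."""
    return surrogate_score(seq) - 0.1 * len(set(seq))

def optimize(seeds, rounds=5, oracle_budget_per_round=8, n_candidates=64):
    pool = list(seeds)
    history = []
    for _ in range(rounds):
        candidates = llm_propose(pool, n_candidates)
        # Cheap inner screen; only the top few candidates consume oracle budget.
        shortlisted = heapq.nlargest(oracle_budget_per_round, candidates,
                                     key=surrogate_score)
        labeled = [(oracle_score(s), s) for s in shortlisted]
        history.extend(labeled)
        # In LLOME the LLM itself would also be updated on the labeled pairs here
        # (e.g., with a MargE-style objective); this sketch only re-seeds the pool.
        pool = [s for _, s in heapq.nlargest(4, labeled)]
    return max(history)

if __name__ == "__main__":
    print(optimize(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]))
```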
Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient
Tagasovska, Nataša, Gligorijević, Vladimir, Cho, Kyunghyun, Loukas, Andreas
Across scientific domains, generating new models or optimizing existing ones while meeting specific criteria is crucial. Traditional machine learning frameworks for guided design use a generative model and a surrogate model (discriminator), requiring large datasets. However, real-world scientific applications often have limited data and complex landscapes, making data-hungry models inefficient or impractical. We propose a new framework, PropEn, inspired by "matching", which enables implicit guidance without training a discriminator. By matching each sample with a similar one that has a better property value, we create a larger training dataset that inherently indicates the direction of improvement. Matching, combined with an encoder-decoder architecture, forms a domain-agnostic generative framework for property enhancement. We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution, allowing efficient design optimization. Extensive evaluations on toy problems and scientific applications, such as therapeutic protein design and airfoil optimization, demonstrate PropEn's advantages over common baselines. Notably, the protein design results are validated with wet lab experiments, confirming the competitiveness and effectiveness of our approach.
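The matching step described above can be made concrete with a short sketch. The Python below pairs each sample with nearby samples whose property value is strictly better, producing (x, x_matched) pairs on which an encoder-decoder could then be trained to map inputs toward improved designs. The distance threshold, improvement margin, and toy quadratic property are assumptions chosen for illustration, not PropEn's actual settings.

```python
# Minimal sketch of "matching": pair each sample with a similar sample that has a
# better property value, yielding (x, x_matched) training pairs. Thresholds and the
# toy property below are illustrative assumptions, not the PropEn implementation.
import numpy as np

def build_matched_dataset(X, y, x_threshold=1.0, y_margin=0.0):
    """Pair each point with every sufficiently close point that improves property y."""
    pairs = []
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            close_in_x = np.linalg.norm(X[i] - X[j]) <= x_threshold
            improves_y = y[j] > y[i] + y_margin
            if close_in_x and improves_y:
                pairs.append((X[i], X[j]))  # train a model to map X[i] -> X[j]
    return pairs

# Toy usage on a 2D quadratic "property"; an encoder-decoder trained on these pairs
# would learn to move inputs in the locally improving direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = -(X ** 2).sum(axis=1)
pairs = build_matched_dataset(X, y, x_threshold=0.5)
print(f"{len(pairs)} matched pairs from {len(X)} samples")
```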