 Wang, Xinyou


DPLM-2: A Multimodal Diffusion Protein Language Model

arXiv.org Artificial Intelligence

Proteins are essential macromolecules defined by their amino acid sequences, which determine their three-dimensional structures and, consequently, their functions in all living organisms. Therefore, generative protein modeling necessitates a multimodal approach to simultaneously model, understand, and generate both sequences and structures. However, existing methods typically use separate models for each modality, limiting their ability to capture the intricate relationships between sequence and structure. This results in suboptimal performance in tasks that require joint understanding and generation of both modalities. In this paper, we introduce DPLM-2, a multimodal protein foundation model that extends the discrete diffusion protein language model (DPLM) to accommodate both sequences and structures. To enable structural learning with the language model, 3D coordinates are converted to discrete tokens using a lookup-free quantization-based tokenizer. By training on both experimental and high-quality synthetic structures, DPLM-2 learns the joint distribution of sequence and structure, as well as their marginals and conditionals. We also implement an efficient warm-up strategy to exploit the connection between large-scale evolutionary data and structural inductive biases from pre-trained sequence-based protein language models. Empirical evaluation shows that DPLM-2 can simultaneously generate highly compatible amino acid sequences and their corresponding 3D structures, eliminating the need for a two-stage generation approach. Moreover, DPLM-2 demonstrates competitive performance in various conditional generation tasks, including folding, inverse folding, and scaffolding with multimodal motif inputs, as well as providing structure-aware representations for predictive tasks.
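
The structure tokenization step the abstract describes can be pictured with a small sketch. The snippet below is a hedged illustration of lookup-free quantization (LFQ), the codebook-free scheme referred to above: each continuous structure latent is binarized per dimension and the sign pattern is read off as an integer token id. The class name, the 13-bit code size, and all tensor shapes are illustrative assumptions, not details taken from DPLM-2.

import torch
import torch.nn as nn

class LookupFreeQuantizer(nn.Module):
    # Minimal lookup-free quantization (LFQ) sketch: each latent dimension
    # is binarized to {-1, +1}, and the resulting sign pattern is read as a
    # binary code, yielding a discrete token id without any codebook lookup.
    # The 13-bit code size (2**13 = 8192 tokens) and the shapes below are
    # illustrative assumptions, not values taken from DPLM-2.
    def __init__(self, code_dim: int = 13):
        super().__init__()
        self.code_dim = code_dim
        # Powers of two used to turn a sign pattern into an integer id.
        self.register_buffer("basis", 2 ** torch.arange(code_dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, residues, code_dim) continuous per-residue latents
        # produced by some upstream structure encoder (assumed).
        q = torch.where(z > 0, torch.ones_like(z), -torch.ones_like(z))
        # Straight-through estimator so gradients still reach the encoder.
        q = z + (q - z).detach()
        bits = (q > 0).long()                         # {-1, +1} -> {0, 1}
        token_ids = (bits * self.basis).sum(dim=-1)   # sign pattern -> integer id
        return q, token_ids

# Usage: quantize per-residue latents into discrete structure tokens.
lfq = LookupFreeQuantizer(code_dim=13)
latents = torch.randn(2, 128, 13)   # 2 proteins, 128 residues each
quantized, tokens = lfq(latents)
print(tokens.shape)                 # torch.Size([2, 128])

Because the token id is implicit in the signs, no embedding table has to be searched during quantization, which is what makes the scheme "lookup-free."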


Diffusion Language Models Are Versatile Protein Learners

arXiv.org Artificial Intelligence

This paper introduces the diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences. We first pre-train scalable DPLMs from evolutionary-scale protein sequences within a generative self-supervised discrete diffusion probabilistic framework, which generalizes language modeling for proteins in a principled way. After pre-training, DPLM exhibits the ability to generate structurally plausible, novel, and diverse protein sequences for unconditional generation. We further demonstrate that the proposed diffusion generative pre-training makes DPLM possess a better understanding of proteins, making it a superior [...]

Drawing inspiration from the remarkable progress in NLP achieved by language models (LMs; Devlin et al., 2019; Radford et al., 2018; OpenAI, 2023), thanks to the scalability of Transformers (Vaswani et al., 2017) and the existence of large-scale text data, recent explorations in proteins have also demonstrated the impressive capabilities of protein language models (Rives et al., 2019; Lin et al., 2022; Hu et al., 2022), learned from the universe of evolutionary-scale protein sequences. As a result, protein LMs have become one of the most important cornerstones in AI for protein research, serving a pivotal role not only in predictive tasks (e.g., probing functional properties, and predicting protein structures from single sequences without explicit evolutionary homologs) but also in generative tasks (e.g., redesigning sequences given protein backbone structures, or synthesizing completely new protein sequences).
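
To make the discrete diffusion framing concrete, here is a hedged sketch of one absorbing-state (mask-based) diffusion training step over amino acid tokens: a sampled timestep sets the corruption rate, corrupted positions are replaced by a mask token, and the model is trained to recover the original residues at those positions. The function name, linear mask schedule, and vocabulary layout are illustrative assumptions rather than the exact DPLM recipe.

import torch
import torch.nn.functional as F

# Hedged sketch of an absorbing-state discrete diffusion training step for
# protein sequences; schedule, loss weighting, and vocabulary layout are
# illustrative assumptions, not the exact DPLM objective.

AA_VOCAB = 20          # 20 standard amino acids
MASK_ID = AA_VOCAB     # extra absorbing "mask" token
VOCAB_SIZE = AA_VOCAB + 1

def diffusion_training_step(model, seqs, num_steps=100):
    # seqs: (batch, length) integer-encoded amino acid sequences.
    batch, length = seqs.shape
    # Sample a timestep per sequence; the mask rate grows with t so that
    # t = num_steps corresponds to a fully masked (pure noise) input.
    t = torch.randint(1, num_steps + 1, (batch, 1), device=seqs.device)
    mask_rate = t.float() / num_steps
    corrupt = torch.rand(batch, length, device=seqs.device) < mask_rate
    noisy = torch.where(corrupt, torch.full_like(seqs, MASK_ID), seqs)

    # `model` is any token-level network (e.g. a Transformer encoder)
    # returning logits of shape (batch, length, VOCAB_SIZE); the loss is
    # computed only on the corrupted positions, i.e. learn to denoise.
    logits = model(noisy)
    loss = F.cross_entropy(logits[corrupt], seqs[corrupt])
    return loss

Generation then runs the process in reverse: start from a fully masked sequence and iteratively unmask positions using the model's predictions until a complete amino acid sequence remains.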