Protein Representation Learning by Capturing Protein Sequence-Structure-Function Relationship
Eunji Ko, Seul Lee, Minseon Kim, Dongki Kim
The goal of protein representation learning is to extract knowledge from protein databases that can be applied to various protein-related downstream tasks. Although protein sequence, structure, and function are the three key modalities for a comprehensive understanding of proteins, existing methods for protein representation learning have utilized only one or two of these modalities, owing to the difficulty of capturing the asymmetric interrelationships between them. To account for this asymmetry, we introduce a novel asymmetric multi-modal masked autoencoder (AMMA). AMMA adopts (1) a unified multi-modal encoder to integrate all three modalities into a unified representation space and (2) asymmetric decoders to ensure that sequence latent features reflect structural and functional information. Experiments demonstrate that the proposed AMMA is highly effective in learning protein representations with well-aligned inter-modal relationships, which in turn makes it effective for various downstream protein-related tasks.

Proteins are generated in an organism in the form of a sequence, which is then folded into a three-dimensional structure; in this folded form, they become functional and fulfill their roles. This is the so-called protein sequence-structure-function paradigm (Liberles et al., 2012; Serçinoğlu & Ozbek, 2020). Of the three modalities (sequence, structure, and function), sequence information underlies many protein applications and is the most abundant, making it a popular choice for training neural networks.
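The encoder/decoder layout described in the abstract can be sketched in a few lines. The following PyTorch sketch is a minimal illustration, not the authors' implementation: the module names, dimensions, input formats (residue tokens, per-residue coordinates, a fixed-size function vector), and masking scheme are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class AMMASketch(nn.Module):
    """Illustrative AMMA-style model: one shared encoder, asymmetric decoders."""

    def __init__(self, d_model=128, n_heads=4, n_layers=2,
                 seq_vocab=21, struct_dim=3, func_dim=64, mask_ratio=0.15):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Modality-specific embeddings (assumed inputs: residue tokens,
        # per-residue 3D coordinates, and a function annotation vector).
        self.seq_embed = nn.Embedding(seq_vocab, d_model)
        self.struct_embed = nn.Linear(struct_dim, d_model)
        self.func_embed = nn.Linear(func_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        # (1) Unified multi-modal encoder: a single transformer over the
        # concatenated tokens of all three modalities.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # (2) Asymmetric decoders: structure and function are reconstructed
        # from the *sequence* latents alone, pushing structural and
        # functional information into the sequence representation.
        self.struct_decoder = nn.Linear(d_model, struct_dim)
        self.func_decoder = nn.Linear(d_model, func_dim)

    def forward(self, seq_tokens, coords, func_vec):
        seq_emb = self.seq_embed(seq_tokens)                  # (B, L, d)
        # Masked-autoencoder step: hide a random fraction of residues.
        mask = torch.rand(seq_emb.shape[:2], device=seq_emb.device) < self.mask_ratio
        seq_emb = torch.where(mask.unsqueeze(-1), self.mask_token, seq_emb)
        tokens = torch.cat([
            seq_emb,
            self.struct_embed(coords),                        # (B, L, d)
            self.func_embed(func_vec).unsqueeze(1),           # (B, 1, d)
        ], dim=1)
        latent = self.encoder(tokens)
        L = seq_tokens.size(1)
        seq_latent = latent[:, :L]                            # sequence positions
        struct_hat = self.struct_decoder(seq_latent)          # per-residue coords
        func_hat = self.func_decoder(seq_latent.mean(dim=1))  # pooled function
        return struct_hat, func_hat
```

The asymmetry lies in the decoding direction: structural and functional targets are reconstructed only from the sequence latents, so training forces the sequence features to carry information about the other two modalities, while the reverse reconstruction is not required.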
arXiv.org Artificial Intelligence
Apr-29-2024