BioBridge: Bridging Biomedical Foundation Models via Knowledge Graphs
Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N. Ioannidis, Huzefa Rangwala, Rishita Anubhai
arXiv.org Artificial Intelligence
Foundation models (FMs) learn from large volumes of unlabeled data to demonstrate superior performance across a wide range of tasks. However, FMs developed for biomedical domains have largely remained unimodal, i.e., independently trained and used for tasks on protein sequences alone, small molecule structures alone, or clinical data alone. To overcome this limitation, we present BioBRIDGE, a parameter-efficient learning framework that bridges independently trained unimodal FMs to establish multimodal behavior. BioBRIDGE achieves this by utilizing knowledge graphs (KGs) to learn transformations between one unimodal FM and another without fine-tuning any of the underlying unimodal FMs. Our results demonstrate that BioBRIDGE beats the best baseline KG embedding methods (by 76.3% on average) on cross-modal retrieval tasks. We also find that BioBRIDGE demonstrates out-of-domain generalization, extrapolating to unseen modalities or relations. Additionally, we show that BioBRIDGE serves as a general-purpose retriever that can aid biomedical multimodal question answering as well as enhance the guided generation of novel drugs.

Foundation models (Bommasani et al., 2021) trained on large volumes of data can be leveraged and adapted for different domains. In biomedicine, FMs are trained to ingest text corpora from scientific literature (Gu et al., 2021), protein data as sequences and 3D structures (Jumper et al., 2021), molecules as graphs and SMILES strings (Fabian et al., 2020), and protein-interaction data in the form of relational graphs. These pre-trained biomedical FMs have achieved significant gains over previous methods trained on smaller datasets (Qiu et al., 2023). Introducing multimodal data during training further boosts the performance of FMs, especially in few-shot/zero-shot prediction settings (Radford et al., 2021).
In the biomedical domain, multimodal data has been leveraged for drug-text (Edwards et al., 2022), protein-text (Liu et al., 2023), and drug-protein (Huang et al., 2021; Ioannidis et al., 2020) pairs by jointly optimizing unimodal encoders. However, this idea encounters key issues when scaling beyond two modalities: computational cost.
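The core idea described above can be illustrated with a toy sketch: with both unimodal FMs frozen, only a small transformation between their embedding spaces is trained, supervised by paired entities from KG triples. This is not the paper's implementation; the embedding dimensions, the linear form of the bridge, and the synthetic "protein"/"drug" embeddings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen unimodal FM outputs (purely synthetic).
dim_src, dim_tgt = 8, 6
protein_emb = rng.normal(size=(100, dim_src))   # "protein FM" embeddings
true_map = rng.normal(size=(dim_src, dim_tgt))  # hidden cross-modal relation
drug_emb = protein_emb @ true_map               # paired "drug FM" embeddings

# Parameter-efficient bridging: neither FM is fine-tuned; only the
# bridge matrix W is learned from (protein, relation, drug) pairs.
W = np.zeros((dim_src, dim_tgt))
lr = 0.1
for _ in range(500):
    pred = protein_emb @ W
    grad = protein_emb.T @ (pred - drug_emb) / len(protein_emb)
    W -= lr * grad  # plain gradient descent on squared error

mse = float(np.mean((protein_emb @ W - drug_emb) ** 2))
```

After training, `protein_emb @ W` lands in the drug embedding space, so cross-modal retrieval reduces to nearest-neighbor search there; in BioBRIDGE the transformation is additionally conditioned on the KG relation type, which this linear sketch omits.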
Jan-18-2024
- Country:
- North America > United States > Illinois (0.14)
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Health & Medicine
- Consumer Health (0.93)
- Pharmaceuticals & Biotechnology (1.00)
- Therapeutic Area
- Neurology (1.00)
- Psychiatry/Psychology > Mental Health (0.46)
- Immunology (1.00)
- Cardiology/Vascular Diseases (1.00)
- Gastroenterology (1.00)
- Dermatology (0.93)
- Oncology > Carcinoma (0.67)
- Rheumatology (0.68)
- Endocrinology > Diabetes (0.46)
- Infections and Infectious Diseases (1.00)