Biomedical Foundation Model: A Survey
Xiangrui Liu, Yuanyuan Zhang, Yingzhou Lu, Changchang Yin, Xiaoling Hu, Xiaoou Liu, Lulu Chen, Sheng Wang, Alexander Rodriguez, Huaxiu Yao, Yezhou Yang, Ping Zhang, Jintai Chen, Tianfan Fu, Xiao Wang
arXiv.org Artificial Intelligence
The term "foundation model" was introduced in 2021 to describe large-scale pre-trained models, such as large language models (LLMs) and vision-language models (VLMs), that learn from extensive unlabeled datasets through self-supervised or unsupervised methods and can therefore excel at diverse downstream tasks. Models such as GPT can be adapted to applications ranging from question answering to visual understanding, often outperforming task-specific AI models; this broad applicability across fields is what earns them the name. The development of biomedical foundation models marks a significant milestone in leveraging artificial intelligence (AI) to understand complex biological phenomena and to advance medical research and practice. This survey explores the potential of foundation models across diverse biomedical domains, including computational biology, drug discovery and development, clinical informatics, medical imaging, and public health, with the aim of inspiring ongoing research in the application of foundation models to health science.
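The pretrain-then-adapt paradigm the abstract describes can be reduced to a toy sketch. This is not from the survey: the PCA-style projection merely stands in for self-supervised pretraining of a foundation model, and the least-squares head stands in for downstream fine-tuning; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pretraining": learn a shared representation from unlabeled data ---
# A PCA-style projection learned from unlabeled vectors stands in for
# self-supervised pretraining of a large foundation model.
unlabeled = rng.normal(size=(500, 16))
unlabeled[:, 0] *= 5.0  # give the data one dominant direction of variation
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
encoder = vt[:4].T  # frozen "foundation" encoder: 16 dims -> 4 dims

def embed(x):
    """Map raw inputs into the frozen pretrained representation."""
    return (x - mean) @ encoder

# --- "Adaptation": fit a small task head on a few labeled examples ---
x_task = rng.normal(size=(40, 16))
x_task[:, 0] *= 5.0
y_task = (x_task[:, 0] > 0).astype(float)  # toy downstream labels
feats = embed(x_task)
# Least-squares linear head on top of the frozen embeddings.
w, *_ = np.linalg.lstsq(
    np.c_[feats, np.ones(len(feats))], y_task, rcond=None
)

def predict(x):
    """Downstream prediction: frozen encoder + lightweight task head."""
    f = embed(x)
    return (np.c_[f, np.ones(len(f))] @ w > 0.5).astype(float)
```

The key design point mirrored here is that the expensive representation (the encoder) is learned once from unlabeled data and then reused, while each downstream task only fits a small, cheap head on top of it.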
Mar-3-2025