Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches
Clément Christophe, Praveen K. Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, Bhargav Kanakiya, Charles Chen, Natalia Vassilieva, Boulbaba Ben Amor, Marco AF Pimentel, Shadab Khan
arXiv.org Artificial Intelligence
This study presents a comprehensive comparison of two predominant fine-tuning methodologies, full-parameter fine-tuning and parameter-efficient tuning, in the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question-answering capabilities. Our experiments systematically evaluate the effectiveness of these tuning strategies across well-known medical benchmarks. Notably, our medical LLM Med42 achieved an accuracy of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new performance standard for openly available medical LLMs. Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing to the advancement of AI-driven healthcare applications.
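As a minimal illustration of the two strategies compared in the abstract, the sketch below contrasts full-parameter fine-tuning with LoRA-style parameter-efficient tuning on a Llama-2 checkpoint, using the Hugging Face transformers and peft libraries. The checkpoint name, adapter rank, scaling factor, and target modules are illustrative assumptions, not the paper's reported configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Full-parameter fine-tuning: every weight receives gradients, so the
# optimizer state and memory footprint scale with the full model size.
for param in model.parameters():
    param.requires_grad = True

# Parameter-efficient tuning (LoRA): get_peft_model freezes the base
# weights and injects small trainable low-rank adapter matrices.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed value)
    lora_alpha=32,                        # scaling factor (assumed value)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically <1% of weights trainable
```

The practical trade-off the paper evaluates follows directly from this setup: full-parameter tuning updates all weights at much higher compute and memory cost, while LoRA trains only the injected adapters.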
Apr-23-2024
- Country:
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe (0.28)
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (1.00)