Fine-Tuning Medical Language Models for Enhanced Long-Contextual Understanding and Domain Expertise
Qimin Yang, Rongsheng Wang, Jiexin Chen, Runqi Su, Tao Tan
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have been widely applied across professional fields. Fine-tuning on domain-specific question-and-answer datasets substantially improves a model's domain knowledge and Q&A ability; for example, medical LLMs fine-tuned on doctor-patient Q&A data exhibit strong disease-diagnosis capabilities. However, we observed that despite these gains in domain knowledge, the long-context understanding of medical LLMs declines significantly, especially compared to general-purpose models with a similar parameter count. This study investigates this degradation of long-context understanding in medical LLMs. We designed a series of experiments that administer open-book professional-knowledge exams to all models to evaluate their ability to read long contexts. By adjusting the proportion and quantity of general and medical data during fine-tuning, we determine the data composition that best optimizes the specialized model, striking a balance between long-context performance and domain-specific knowledge.
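The data-mixing step the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual pipeline: the file names (general_qa.jsonl, medical_qa.jsonl), the dataset size, and the swept fractions are all hypothetical, standing in for whatever corpora and ratios the study actually uses.

    # Minimal sketch of mixing general and medical Q&A data at a
    # chosen ratio before fine-tuning. Paths and numbers below are
    # illustrative assumptions, not taken from the paper.
    import json
    import random

    def load_jsonl(path):
        """Load a JSONL file of {"question": ..., "answer": ...} records."""
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f]

    def mix_datasets(general, medical, medical_fraction, total, seed=0):
        """Sample a fine-tuning set with the given fraction of medical data."""
        rng = random.Random(seed)
        n_med = int(total * medical_fraction)
        n_gen = total - n_med
        mixed = rng.sample(medical, n_med) + rng.sample(general, n_gen)
        rng.shuffle(mixed)
        return mixed

    if __name__ == "__main__":
        general = load_jsonl("general_qa.jsonl")   # hypothetical path
        medical = load_jsonl("medical_qa.jsonl")   # hypothetical path
        # Sweep the medical-data proportion, the knob the study varies,
        # producing one fine-tuning mixture per setting.
        for frac in (0.25, 0.5, 0.75):
            subset = mix_datasets(general, medical, frac, total=10_000)
            print(f"medical fraction {frac}: {len(subset)} examples")

Each resulting mixture would then be used to fine-tune a copy of the base model, after which long-context performance and domain knowledge could be compared across settings.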
Jul-16-2024
- Country:
- Asia (0.14)
- Genre:
- Research Report > New Finding (0.47)
- Industry:
- Health & Medicine (1.00)