MedTVT-R1: A Multimodal LLM Empowering Medical Reasoning and Diagnosis
Yuting Zhang, Kaishen Yuan, Hao Lu, Yutao Yue, Jintai Chen, Kaishun Wu
–arXiv.org Artificial Intelligence
Accurate and interpretable multi-disease diagnosis remains a critical challenge in medical research, particularly when leveraging heterogeneous multimodal medical data. Current approaches often rely on single-modal data, limiting their ability to comprehensively understand complex diseases. To address this, we propose MedTVT-R1, a novel Multimodal Large Language Model (MLLM) framework designed to integrate clinical multimodal data for reasoning and diagnosing multiple diseases. We construct MedTVT-QA, a curated instruction dataset that provides question-answer pairs for physiological-level interpretations and disease-level diagnoses with a Chain of Evidence approach. MedTVT-R1 incorporates a modality perception layer to capture inter-modal dependencies and adaptively weight modality contributions. Additionally, we employ Group Relative Policy Optimization (GRPO)-based Reinforcement Fine-Tuning with a Jaccard Reward function to enhance diagnostic reasoning. Experimental results demonstrate MedTVT-R1's superiority in multimodal feature utilization and multi-disease diagnosis, offering significant potential for clinical applications such as diagnostic report generation and comorbidity reasoning. The dataset and code are available at https://github.com/keke-nice/MedTVT-R1.
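The abstract mentions a Jaccard Reward used during GRPO-based reinforcement fine-tuning. The paper's exact reward definition is not given here; a minimal sketch, assuming the reward scores a predicted set of disease labels against the ground-truth set by Jaccard similarity (the function name and signature are illustrative, not from the paper):

```python
def jaccard_reward(predicted, reference):
    """Jaccard similarity |P ∩ R| / |P ∪ R| between predicted and
    reference disease label sets, used as a scalar reward in [0, 1]."""
    pred, ref = set(predicted), set(reference)
    if not pred and not ref:
        # Both empty: treat as a perfect match.
        return 1.0
    return len(pred & ref) / len(pred | ref)


# Example: two of three distinct labels overlap -> reward 0.5
reward = jaccard_reward(["diabetes", "nephropathy"],
                        ["diabetes", "nephropathy", "anemia"])
```

Unlike exact-match rewards, a set-based Jaccard score gives partial credit for partially correct comorbidity predictions, which suits multi-disease diagnosis.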
Jun-25-2025