Fusion of Domain-Adapted Vision and Language Models for Medical Visual Question Answering
Ha, Cuong Nhat, Asaadi, Shima, Karn, Sanjeev Kumar, Farri, Oladimeji, Heimann, Tobias, Runkler, Thomas
Vision-language models, while effective in general domains and showing strong performance in diverse multi-modal applications such as visual question answering (VQA), struggle to maintain the same level of effectiveness in more specialized domains, e.g., the medical domain. We propose a medical vision-language model that integrates large vision and language models adapted for the medical domain. The model is trained in three parameter-efficient stages on three separate biomedical and radiology multi-modal visual and text datasets. The proposed model achieves state-of-the-art performance on the SLAKE 1.0 medical VQA (MedVQA) dataset with an overall accuracy of 87.5% and demonstrates strong performance on another MedVQA dataset, VQA-RAD, achieving an overall accuracy of 73.2%.
arXiv.org Artificial Intelligence
Apr-24-2024
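The abstract describes fusing a domain-adapted vision model with a domain-adapted language model and training the combination in parameter-efficient stages. The listing does not name the exact components, so the sketch below is a minimal illustration under assumptions: a frozen vision backbone that returns patch features, a frozen HuggingFace-style decoder language model that accepts `inputs_embeds`, and a single trainable linear projector standing in for the parameter-efficient fusion module. All class, argument, and dimension names are hypothetical.

```python
# Minimal sketch of vision-language fusion for MedVQA (illustrative, not the
# paper's actual architecture or training recipe).
import torch
import torch.nn as nn


class VisionLanguageFusion(nn.Module):
    """Projects frozen vision features into the language model's embedding
    space and prepends them as soft visual tokens to the question tokens."""

    def __init__(self, vision_encoder, language_model, vision_dim, lm_dim):
        super().__init__()
        self.vision_encoder = vision_encoder            # assumed domain-adapted ViT-like backbone
        self.language_model = language_model            # assumed domain-adapted decoder LM
        self.projector = nn.Linear(vision_dim, lm_dim)  # trainable fusion layer

        # Parameter-efficient training: freeze both large backbones and train
        # only the projector (LoRA adapters on the LM would serve the same
        # purpose in later stages).
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        for p in self.language_model.parameters():
            p.requires_grad = False

    def forward(self, pixel_values, question_embeddings):
        # Assumes the encoder returns patch features of shape
        # (batch, num_patches, vision_dim).
        visual_feats = self.vision_encoder(pixel_values)
        visual_tokens = self.projector(visual_feats)    # -> (batch, num_patches, lm_dim)
        # Prepend visual tokens to the embedded question and let the LM
        # score or generate the answer. The `inputs_embeds` keyword assumes a
        # HuggingFace-style interface.
        fused = torch.cat([visual_tokens, question_embeddings], dim=1)
        return self.language_model(inputs_embeds=fused)
```

In a staged setup such as the one the abstract outlines, the trainable parameters in each stage would typically be limited to the projector and, optionally, lightweight adapters (e.g., LoRA) inside the language model, while the pretrained backbones stay frozen.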
- Country:
  - Europe (1.00)
  - North America > Canada (0.28)
- Genre:
  - Research Report > New Finding (0.93)
- Industry:
  - Health & Medicine > Diagnostic Medicine > Imaging (0.92)
- Technology: