Multi-Task Learning for Visually Grounded Reasoning in Gastrointestinal VQA
Itbaan Safwan, Muhammad Annas Shaikh, Muhammad Haaris, Ramail Khan, Muhammad Atif Tahir
arXiv.org Artificial Intelligence
We present a multi-task framework for the MediaEval Medico 2025 challenge, leveraging a LoRA-tuned Florence-2 model for simultaneous visual question answering (VQA), explanation generation, and visual grounding. The proposed system integrates three curated datasets: (1) Kvasir-VQA-x1 for question-answer learning, (2) a synthetically enriched explanation dataset offering structured medical reasoning, and (3) text-to-region pairs linking visual features with segmentation masks. This multi-task setup enables the model to jointly learn visual grounding, reasoning, and interpretation, producing responses that are both accurate and interpretable. Extensive evaluation demonstrates that our approach substantially improves over single-task baselines in both answer accuracy and visual localization, highlighting the effectiveness of grounded multi-task learning for medical VQA applications.
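The abstract's central technique is LoRA tuning of Florence-2: the pretrained weights stay frozen while small low-rank matrices are trained. A minimal sketch of the LoRA update is below; the shapes, rank, and alpha are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of a LoRA-adapted linear layer (frozen W, trainable A/B).
# Rank r, alpha, and layer sizes here are illustrative assumptions.
import numpy as np

def lora_linear(x, W, A, B, alpha=16):
    """y = x (W + (alpha/r) * B A)^T — the standard LoRA forward pass."""
    r = A.shape[0]                      # LoRA rank
    delta = B @ A                       # low-rank update, same shape as W
    return x @ (W + (alpha / r) * delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

x = rng.normal(size=(1, d_in))
# With B zero-initialized, the adapted layer reproduces the frozen one,
# so training starts exactly from the pretrained model's behavior.
assert np.allclose(lora_linear(x, W, A, B), x @ W.T)
```

Only A and B (2 x (d_in + d_out) x r parameters per layer) are updated during fine-tuning, which is what makes jointly training the three tasks on one backbone cheap.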
Nov-7-2025