Democratizing MLLMs in Healthcare: TinyLLaVA-Med for Efficient Healthcare Diagnostics in Resource-Constrained Settings
Aya El Mir, Lukelo Thadei Luoga, Boyuan Chen, Muhammad Abdullah Hanif, Muhammad Shafique
arXiv.org Artificial Intelligence
Deploying MLLMs in healthcare is hindered by their high computational demands and significant memory requirements, which are particularly challenging for resource-constrained devices like the Nvidia Jetson Xavier. This problem is particularly evident in remote medical settings where advanced diagnostics are needed but resources are limited. In this paper, we introduce an optimization method for the general-purpose MLLM, TinyLLaVA, which we have adapted and renamed TinyLLaVA-Med. This adaptation involves instruction-tuning and fine-tuning TinyLLaVA on a medical dataset by drawing inspiration from the LLaVA-Med training pipeline.

These MLLMs integrate Large Language Models (LLMs) with Vision Encoders, thus possessing capabilities that extend beyond textual understanding and analysis to include image processing capabilities. This enables them to simultaneously interpret both textual data and medical images, facilitating more accurate and comprehensive diagnostics and decision-making in healthcare. By rapidly processing and synthesizing diverse data types, these models can significantly advance patient care, enabling quicker, more precise diagnoses and personalized treatment plans, thus transforming healthcare into a more efficient, effective, and patient-centered service [5] [6].
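The medical instruction-tuning described above consumes image-grounded question-answer pairs. As a minimal sketch, the snippet below builds one such record in the conversation layout used by LLaVA-family trainers; the field names (`conversations`, `from`, `value`, the `<image>` placeholder) follow that open-source convention, and the helper and sample values are illustrative assumptions, not code from this paper.

```python
# Hedged sketch: wrap a medical visual question-answer pair in a
# LLaVA-style instruction-tuning record. The schema mirrors the
# LLaVA data convention; it is not the authors' actual pipeline.

def build_instruction_record(image_file: str, question: str, answer: str) -> dict:
    """Return one instruction-tuning sample for a LLaVA-family trainer."""
    return {
        "image": image_file,
        "conversations": [
            # "<image>" marks where the vision encoder's features are
            # spliced into the language model's input sequence.
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "gpt", "value": answer},
        ],
    }

if __name__ == "__main__":
    record = build_instruction_record(
        "chest_xray_0001.png",  # hypothetical filename
        "Is there evidence of cardiomegaly in this radiograph?",
        "The cardiac silhouette appears enlarged, consistent with cardiomegaly.",
    )
    print(record["conversations"][0]["value"])
```

A medical fine-tuning set in this style is simply a list of such records serialized to JSON, with the trainer pairing each `image` path against its conversation turns.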
Sep-2-2024
- Country:
- Asia > Middle East
- UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States
- New York (0.04)
- Genre:
- Research Report (0.64)
- Industry:
- Health & Medicine
- Diagnostic Medicine > Imaging (0.89)
- Health Care Technology (1.00)