By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting
Hyungjun Yoon, Biniyam Aschalew Tolera, Taesik Gong, Kimin Lee, Sung-Ju Lee
arXiv.org Artificial Intelligence
Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show significant performance degradation when handling long sensor data sequences. We propose a visual prompting approach for sensor data using multimodal LLMs (MLLMs). We design a visual prompt that directs MLLMs to utilize visualized sensor data alongside the target sensory task descriptions. Additionally, we introduce a visualization generator that automates the creation of optimal visualizations tailored to a given sensory task, eliminating the need for prior task-specific knowledge. We evaluated our approach on nine sensory tasks involving four sensing modalities, achieving an average of 10% higher accuracy than text-based prompts and reducing token costs by 15.8x. Our findings highlight the effectiveness and cost-efficiency of visual prompts with MLLMs for various sensory tasks.
Jul-14-2024
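
To make the core idea concrete, here is a minimal, hypothetical Python sketch of the visual-prompting workflow the abstract describes: render a sensor time series as an image and pair it with a textual task description in a multimodal prompt. The plot settings, the synthetic signal, the task wording, and the `query_mllm` placeholder are all illustrative assumptions, not the authors' implementation or their visualization generator.

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # off-screen rendering; no display needed
import matplotlib.pyplot as plt
import numpy as np


def visualize_sensor_data(signal: np.ndarray, sampling_rate: int) -> bytes:
    """Render a 1-D sensor time series as PNG bytes for a visual prompt."""
    t = np.arange(len(signal)) / sampling_rate
    fig, ax = plt.subplots(figsize=(6, 2.5))
    ax.plot(t, signal, linewidth=0.8)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("amplitude")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150, bbox_inches="tight")
    plt.close(fig)
    return buf.getvalue()


# 3 s of synthetic accelerometer-like data at 50 Hz (stand-in for real readings).
signal = np.random.randn(150)
image_b64 = base64.b64encode(visualize_sensor_data(signal, 50)).decode("ascii")

# The visual prompt pairs the plot with the sensory task description.
prompt = (
    "The image shows a 3-second accelerometer trace sampled at 50 Hz. "
    "Classify the activity as one of: walking, running, sitting."
)
# response = query_mllm(prompt, image_b64)  # placeholder for any MLLM vision API
```

One motivation for this design, per the abstract, is cost: a rendered plot consumes far fewer tokens than the same sensor sequence serialized as text, which is where the reported 15.8x token savings comes from.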