LLaVA-SG: Leveraging Scene Graphs as Visual Semantic Expression in Vision-Language Models
Jingyi Wang, Jianzhong Ju, Jian Luan, Zhidong Deng
arXiv.org Artificial Intelligence
Recent large vision-language models (VLMs) typically employ vision encoders based on the Vision Transformer (ViT) architecture. ViT's division of images into patches yields a fragmented perception that hinders the visual understanding capabilities of VLMs. In this paper, we propose to address this limitation by introducing a Scene Graph Expression (SGE) module into VLMs. The module extracts the complex semantic information within images and expresses it structurally, improving the foundational perception and understanding abilities of VLMs. Extensive experiments demonstrate that integrating our SGE module significantly enhances VLM performance on vision-language tasks, indicating its effectiveness in preserving intricate semantic details and facilitating better visual understanding.
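The abstract does not specify how the SGE module represents scene graphs internally. As a purely illustrative sketch (not the paper's implementation), a scene graph is commonly modeled as a set of attributed objects plus subject-predicate-object relations, which can then be serialized into a structured textual expression for a language model to consume. All class and field names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneObject:
    """An object detected in the image, with optional attributes."""
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class Relation:
    """A subject-predicate-object triple between two objects."""
    subject: str
    predicate: str
    obj: str

@dataclass
class SceneGraph:
    objects: List[SceneObject]
    relations: List[Relation]

    def to_expression(self) -> str:
        """Serialize the graph into a flat textual expression."""
        parts = []
        for o in self.objects:
            # Prefix each object with its attributes, e.g. "brown dog".
            parts.append(" ".join(o.attributes + [o.name]))
        for r in self.relations:
            parts.append(f"{r.subject} {r.predicate} {r.obj}")
        return "; ".join(parts)

# Hypothetical graph for an image of a dog catching a frisbee.
graph = SceneGraph(
    objects=[SceneObject("dog", ["brown"]), SceneObject("frisbee")],
    relations=[Relation("dog", "catching", "frisbee")],
)
print(graph.to_expression())  # brown dog; frisbee; dog catching frisbee
```

In a VLM pipeline, an expression like this would be injected alongside the ViT patch features, giving the model an explicit, structured account of objects and their relations rather than only fragmented patch embeddings.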
Aug-29-2024
- Genre:
- Research Report (0.40)
- Industry:
- Leisure & Entertainment (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Natural Language
- Chatbot (0.46)
- Large Language Model (0.72)
- Vision (1.00)