Integrating Object Detection Modality into Visual Language Model for Enhanced Autonomous Driving Agent
Linfeng He, Yiming Sun, Sihao Wu, Jiaxu Liu, Xiaowei Huang
–arXiv.org Artificial Intelligence
In this paper, we propose a novel framework for enhancing visual comprehension in autonomous driving systems by integrating visual language models (VLMs) with an additional visual perception module specialised in object detection. We extend the Llama-Adapter architecture by incorporating a YOLOS-based detection network alongside the CLIP perception network, addressing limitations in object detection and localisation. Our approach introduces camera ID-separators to improve multi-view processing, which is crucial for comprehensive environmental awareness. Experiments on the DriveLM visual question answering challenge demonstrate significant improvements over baseline models, with enhanced performance in ChatGPT scores, BLEU scores, and CIDEr metrics, indicating closer alignment between model answers and the ground truth. Our method represents a promising step towards more capable and interpretable autonomous driving systems. Possible safety enhancements enabled by the detection modality are also discussed.
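The abstract describes extending Llama-Adapter with a YOLOS-based detection branch alongside the CLIP perception network, with camera ID-separator tokens marking each view of the multi-camera input. The sketch below illustrates one way such a fusion could be wired up; the module name, feature dimensions, projection layers, and six-camera setup are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: per-camera CLIP patch features and YOLOS detection
# features are projected into the language model's embedding space and
# concatenated, with a learned camera-ID "separator" embedding marking each
# view. All names and dimensions here are assumptions for illustration.
import torch
import torch.nn as nn


class MultiViewPerceptionFusion(nn.Module):
    def __init__(self, clip_dim=1024, yolos_dim=384, llm_dim=4096, num_cameras=6):
        super().__init__()
        self.clip_proj = nn.Linear(clip_dim, llm_dim)    # project CLIP patch tokens
        self.yolos_proj = nn.Linear(yolos_dim, llm_dim)  # project YOLOS detection tokens
        # one learned separator token per camera view (e.g. front, back, left, ...)
        self.camera_id_tokens = nn.Embedding(num_cameras, llm_dim)

    def forward(self, clip_feats, yolos_feats):
        """
        clip_feats:  (num_cameras, n_clip_tokens, clip_dim)
        yolos_feats: (num_cameras, n_det_tokens, yolos_dim)
        Returns a flattened multi-view visual prefix for the LLM adapter.
        """
        views = []
        for cam_id in range(clip_feats.size(0)):
            sep = self.camera_id_tokens(
                torch.tensor([cam_id], device=clip_feats.device))  # (1, llm_dim)
            clip_tok = self.clip_proj(clip_feats[cam_id])           # (n_clip, llm_dim)
            det_tok = self.yolos_proj(yolos_feats[cam_id])          # (n_det, llm_dim)
            views.append(torch.cat([sep, clip_tok, det_tok], dim=0))
        return torch.cat(views, dim=0)


# Toy usage with random tensors standing in for CLIP/YOLOS outputs.
fusion = MultiViewPerceptionFusion()
prefix = fusion(torch.randn(6, 257, 1024), torch.randn(6, 100, 384))
print(prefix.shape)  # torch.Size([2148, 4096]): 6 views * (1 + 257 + 100) tokens
```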
Nov-8-2024
- Country:
- Europe > United Kingdom > England (0.28)
- Genre:
- Research Report (0.64)
- Industry:
- Automobiles & Trucks (0.96)
- Information Technology > Robotics & Automation (0.86)
- Transportation > Ground > Road (0.86)
- Technology:
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.71)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.96)
- Information Technology > Artificial Intelligence > Vision (1.00)