Commands 4 Autonomous Vehicles (C4AV) Workshop Summary
Thierry Deruyttere, Simon Vandenhende, Dusan Grujicic, Yu Liu, Luc Van Gool, Matthew Blaschko, Tinne Tuytelaars, Marie-Francine Moens
The task of visual grounding requires locating the most relevant region or object in an image, given a natural language query. So far, progress on this task has mostly been measured on curated datasets, which are not always representative of human spoken language. In this work, we deviate from recent, popular task settings and consider the problem in an autonomous-vehicle scenario. In particular, we consider a setting where passengers can give free-form natural language commands to a vehicle, each of which can be associated with an object in the street scene. To stimulate research on this topic, we organized the Commands for Autonomous Vehicles (C4AV) challenge based on the recent Talk2Car dataset. This paper presents the results of the challenge. First, we compare the challenge benchmark against existing datasets for visual grounding. Second, we identify the aspects that make the top-performing models successful, relate them to existing state-of-the-art models for visual grounding, and detect potential failure cases by evaluating on carefully selected subsets. Finally, we discuss several possibilities for future work.
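The task formulation in the abstract lends itself to a simple retrieval view: embed the command and each candidate region in a shared space, score every region against the command, and return the best match. The sketch below illustrates only this generic formulation; the encoder functions, the 512-d embedding size, and the cosine-similarity scorer are illustrative assumptions, not the method of any C4AV participant (real Talk2Car systems use learned language and visual encoders).

```python
# Minimal sketch of visual grounding as retrieval over region proposals.
# Assumptions (not from the paper): encode_command / encode_region are
# stand-ins for learned encoders; EMB_DIM and cosine scoring are arbitrary.
import numpy as np

EMB_DIM = 512  # hypothetical shared embedding size


def encode_command(command: str) -> np.ndarray:
    """Placeholder language encoder: maps a free-form command to a vector."""
    rng = np.random.default_rng(abs(hash(command)) % (2**32))
    return rng.standard_normal(EMB_DIM)


def encode_region(region: np.ndarray) -> np.ndarray:
    """Placeholder visual encoder: maps region features/pixels to a vector."""
    rng = np.random.default_rng(int(region.sum()) % (2**32))
    return rng.standard_normal(EMB_DIM)


def ground(command: str, regions: list[np.ndarray]) -> int:
    """Return the index of the region that best matches the command,
    ranked by cosine similarity in the shared embedding space."""
    c = encode_command(command)
    c /= np.linalg.norm(c)
    scores = []
    for r in regions:
        v = encode_region(r)
        scores.append(float(c @ (v / np.linalg.norm(v))))
    return int(np.argmax(scores))


if __name__ == "__main__":
    # Toy usage: three fake region crops, pick the one matching the command.
    regions = [np.random.rand(32, 32, 3) for _ in range(3)]
    print(ground("pick up the person on the left", regions))
```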
arXiv.org Artificial Intelligence
Sep-18-2020
- Genre:
- Instructional Material > Course Syllabus & Notes (0.40)
- Overview (0.68)
- Research Report (0.71)
- Industry:
- Information Technology (0.46)
- Transportation > Passenger (0.48)
- Technology:
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)