Relative Drawing Identification Complexity is Invariant to Modality in Vision-Language Models
Diogo Freitas, Brigt Håvardstun, Cèsar Ferri, Darío Garigliotti, Jan Arne Telle, José Hernández-Orallo
arXiv.org Artificial Intelligence
Large language models have become multimodal, and many of them are said to integrate their modalities using common representations. If this were true, a drawing of a car as an image, for instance, should map to a similar area of the latent space as a textual description of the strokes that form the drawing. To explore this with only black-box access to these models, we propose the use of machine teaching, a theory that studies the minimal set of examples a teacher needs to choose so that the learner captures the concept. In this paper, we evaluate the complexity of teaching vision-language models a subset of objects in the Quick, Draw! dataset using two presentations: raw images as bitmaps and trace coordinates in TikZ format. The results indicate that image-based representations generally require fewer segments and achieve higher accuracy than coordinate-based representations. Surprisingly, however, the teaching size usually ranks concepts similarly across both modalities, even when controlling for (a human proxy of) concept priors, suggesting that the simplicity of concepts may be an inherent property that transcends modality representations.
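As a rough illustration of the coordinate modality described above, the sketch below serializes a Quick, Draw!-style drawing (a list of strokes, each a pair of x and y coordinate lists) into TikZ `\draw` commands. The function name and the toy drawing are assumptions for illustration, not the paper's actual pipeline.

```python
def strokes_to_tikz(strokes):
    """Serialize strokes as TikZ \\draw commands (coordinate modality).

    Each stroke is a pair (xs, ys) of equal-length coordinate lists,
    mirroring the simplified Quick, Draw! stroke format.
    """
    lines = []
    for xs, ys in strokes:
        # Join the points of one stroke into a single TikZ path.
        path = " -- ".join(f"({x},{y})" for x, y in zip(xs, ys))
        lines.append(f"\\draw {path};")
    return "\n".join(lines)

# A toy two-stroke "cross" drawing:
drawing = [([0, 10], [0, 10]), ([0, 10], [10, 0])]
print(strokes_to_tikz(drawing))
# → \draw (0,0) -- (10,10);
#   \draw (0,10) -- (10,0);
```

The same stroke list could instead be rasterized to a bitmap for the image modality; the paper's comparison hinges on presenting the identical underlying drawing in these two forms.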
Aug-29-2025