Caption-Driven Explorations: Aligning Image and Text Embeddings through Human-Inspired Foveated Vision
Zanca, Dario, Zugarini, Andrea, Dietz, Simon, Altstidl, Thomas R., Ndjeuha, Mark A. Turban, Schwinn, Leo, Eskofier, Bjoern
arXiv.org Artificial Intelligence
Understanding human attention is crucial for both vision science and AI. While many models exist for free-viewing, less is known about task-driven image exploration. To address this, we introduce CapMIT1003, a dataset of captions and click-contingent image explorations, to study human attention during the captioning task. We also present NevaClip, a zero-shot method for predicting visual scanpaths that combines CLIP models with the NeVA algorithm. NevaClip generates fixations that align the representations of foveated visual stimuli with those of the corresponding captions. The simulated scanpaths outperform existing human attention models in plausibility for both captioning and free-viewing tasks. This research enhances the understanding of human attention and advances scanpath prediction models.
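The core idea described above can be illustrated with a toy sketch: at each step, pick the fixation whose foveated view of the image has the embedding most similar to the caption embedding. The random projections `W_img`/`W_txt`, the 8x8 "image", and the greedy grid search are all stand-ins of my own; the actual NevaClip uses pretrained CLIP encoders and gradient-based fixation optimization from NeVA.

```python
import numpy as np

# Stand-in "encoders" (assumption): the real method uses pretrained CLIP
# image and text encoders mapping into a shared embedding space.
rng = np.random.default_rng(0)
W_img = rng.normal(size=(64, 16))  # toy image-encoder projection (8x8 -> 16-d)
W_txt = rng.normal(size=(32, 16))  # toy text-encoder projection (32-d -> 16-d)

def embed_image(img_vec):
    v = img_vec @ W_img
    return v / np.linalg.norm(v)

def embed_text(txt_vec):
    v = txt_vec @ W_txt
    return v / np.linalg.norm(v)

def foveate(image, fixations, sigma=2.0):
    """Weight an 8x8 'image' by Gaussians centred on the fixations so far,
    mimicking acuity falloff away from the fovea (revealed regions persist)."""
    ys, xs = np.mgrid[0:8, 0:8]
    mask = np.zeros((8, 8))
    for fy, fx in fixations:
        g = np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, g)
    return (image * mask).ravel()

def nevaclip_scanpath(image, caption_vec, n_fixations=3):
    """Greedy sketch: choose each next fixation to maximise cosine similarity
    between the foveated-image embedding and the caption embedding."""
    target = embed_text(caption_vec)
    scanpath = []
    for _ in range(n_fixations):
        best, best_sim = None, -np.inf
        for y in range(8):
            for x in range(8):
                sim = embed_image(foveate(image, scanpath + [(y, x)])) @ target
                if sim > best_sim:
                    best, best_sim = (y, x), sim
        scanpath.append(best)
    return scanpath
```

Because embeddings are unit-normalized, the dot product is cosine similarity; the accumulated Gaussian mask makes later fixations depend on what has already been "seen", loosely mirroring NeVA's history-aware foveation.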
Aug-19-2024