Uncovering and Mitigating Transient Blindness in Multimodal Model Editing
Xiaoqi Han, Ru Li, Ran Yi, Hongye Tan, Zhuomin Liang, Víctor Gutiérrez-Basulto, Jeff Z. Pan
arXiv.org Artificial Intelligence
Multimodal Model Editing (MMED) aims to correct erroneous knowledge in multimodal models. Existing evaluation methods, adapted from textual model editing, overstate editing success by relying on low-similarity or random inputs, obscuring overfitting. We propose a comprehensive locality evaluation framework covering three key dimensions: random-image locality, no-image locality, and consistent-image locality, operationalized through seven distinct data types, enabling a detailed and structured analysis of multimodal edits. We introduce De-VQA, a dynamic evaluation framework for visual question answering, which uncovers a phenomenon we term transient blindness: the edited model overfits to edit-similar text while ignoring visual input. Token-level analysis shows that edits disproportionately affect textual tokens. To address this, we propose locality-aware adversarial losses that balance cross-modal representations. Empirical results demonstrate that our approach consistently outperforms existing baselines, reducing transient blindness and improving locality by 17% on average.
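The abstract names a locality-aware adversarial objective but gives no formulation. Below is a minimal sketch of one plausible reading: combine the editing loss with KL terms that anchor the edited model to its pre-edit predictions, both on ordinary locality probes and on adversarial probes pairing edit-similar text with mismatched images. All names, weights, and term choices here are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def locality_aware_loss(edit_logits, edit_labels,
                        loc_logits, loc_logits_base,
                        adv_logits, adv_logits_base,
                        lambda_loc=1.0, lambda_adv=0.5):
    """Hypothetical sketch of a locality-aware adversarial objective.

    edit_*  : outputs/targets on the edit example itself
    loc_*   : post-edit vs. frozen pre-edit logits on locality probes
              (e.g., random-image / no-image / consistent-image inputs)
    adv_*   : logits on adversarial probes that pair edit-similar text
              with unrelated images
    """
    # Editing term: make the model produce the corrected answer.
    l_edit = F.cross_entropy(edit_logits, edit_labels)

    # Locality term: keep predictions on unrelated inputs unchanged
    # (KL between post-edit and frozen pre-edit distributions).
    l_loc = F.kl_div(F.log_softmax(loc_logits, dim=-1),
                     F.softmax(loc_logits_base, dim=-1),
                     reduction="batchmean")

    # Adversarial term: on edit-similar text with mismatched images,
    # the edited model should still follow pre-edit behaviour,
    # discouraging text-only overfitting (transient blindness).
    l_adv = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                     F.softmax(adv_logits_base, dim=-1),
                     reduction="batchmean")

    return l_edit + lambda_loc * l_loc + lambda_adv * l_adv
```

The adversarial KL term is what distinguishes this from a plain locality regularizer: it explicitly targets the failure mode the paper reports, where edit-similar wording alone triggers the edited answer regardless of the image.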
Nov-18-2025