MGA-VQA: Secure and Interpretable Graph-Augmented Visual Question Answering with Memory-Guided Protection Against Unauthorized Knowledge Use
Ahmad Mohammadshirazi, Pinaki Prasad Guha Neogi, Dheeraj Kulshrestha, Rajiv Ramnath
arXiv.org Artificial Intelligence
Document Visual Question Answering (DocVQA) requires models to jointly understand textual semantics, spatial layout, and visual features. Current methods struggle to model spatial relationships explicitly, process high-resolution documents efficiently, perform multi-hop reasoning, and provide interpretability. We propose MGA-VQA, a multi-modal framework that integrates token-level encoding, spatial graph reasoning, memory-augmented inference, and question-guided compression. Unlike prior black-box models, MGA-VQA introduces interpretable graph-based decision pathways and structured memory access for enhanced reasoning transparency. Evaluation across six benchmarks (FUNSD, CORD, SROIE, DocVQA, STE-VQA, and RICO) demonstrates superior accuracy and efficiency, with consistent improvements in both answer prediction and spatial localization.
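The abstract names a spatial graph reasoning stage but does not specify its construction. As an illustration only (not the authors' method), the kind of structure such a stage operates on can be sketched as a graph over OCR tokens, where nodes are tokens and edges link spatially adjacent bounding boxes; the threshold, box format, and function names below are assumptions for the sketch.

```python
# Hypothetical sketch: a spatial graph over OCR tokens of the kind a
# DocVQA graph-reasoning stage might consume. Not MGA-VQA's actual code.
from itertools import combinations

def box_center(box):
    """Center (x, y) of a bounding box given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def build_spatial_graph(tokens, max_dist=100.0):
    """tokens: list of (text, box) pairs. Returns an adjacency list
    keyed by token index; edges connect tokens whose box centers lie
    within max_dist (Euclidean distance) of each other."""
    graph = {i: [] for i in range(len(tokens))}
    centers = [box_center(box) for _, box in tokens]
    for i, j in combinations(range(len(tokens)), 2):
        (xi, yi), (xj, yj) = centers[i], centers[j]
        if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= max_dist:
            graph[i].append(j)
            graph[j].append(i)
    return graph

# Toy document: two text lines far apart on the page.
tokens = [
    ("Invoice", (10, 10, 80, 30)),
    ("#1042", (90, 10, 140, 30)),     # same line as "Invoice"
    ("Total:", (10, 300, 60, 320)),   # far below
    ("$97.50", (70, 300, 130, 320)),  # same line as "Total:"
]
graph = build_spatial_graph(tokens, max_dist=120.0)
# Tokens on the same line are linked; the two lines stay disconnected.
```

In a full pipeline, edge features (relative offsets, reading-order direction) would typically be attached to these links before graph neural network layers propagate information between nodes.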
Nov-25-2025