Agentic Moderation: Multi-Agent Design for Safer Vision-Language Models
Juan Ren, Mark Dras, Usman Naseem
arXiv.org Artificial Intelligence
Abstract: Agentic methods have emerged as a powerful and autonomous paradigm that enhances reasoning, collaboration, and adaptive control, enabling systems to coordinate and independently solve complex tasks. We extend this paradigm to safety alignment by introducing Agentic Moderation, a model-agnostic framework that leverages specialized agents to defend multimodal systems against jailbreak attacks. Unlike prior approaches that apply a static layer over inputs or outputs and provide only binary classifications (safe or unsafe), our method integrates dynamic, cooperative agents, including Shield, Responder, Evaluator, and Reflector, to achieve context-aware and interpretable moderation. Extensive experiments across five datasets and four representative large vision-language models (LVLMs) demonstrate that our approach reduces the Attack Success Rate (ASR) by 7-19%, maintains a stable Non-Following Rate (NF), and improves the Refusal Rate (RR) by 4-20%, achieving robust, interpretable, and well-balanced safety performance. By harnessing the flexibility and reasoning capacity of agentic architectures, Agentic Moderation provides modular, scalable, and fine-grained safety enforcement, highlighting the broader potential of agentic systems as a foundation for automated safety governance.

Large vision-language models (LVLMs) integrate visual and textual modalities, enabling richer multimodal reasoning and expanding their application scope. However, malicious users can exploit cross-modal interactions and the continuous nature of visual embedding spaces, which makes adversarial defense especially challenging. Cross-modality adversarial attacks exploit visual vulnerabilities and modality-induced shifts in semantic meaning.
Examples include pixel-level perturbations that embed harmful intent within images [1]-[3], malicious content rendered via typography or flowcharts [4], harmful behaviors that emerge only from the combination of benign-looking text and visual inputs, implicit cross-modal interactions that obscure adversarial intent [5], and hybrid or ensemble strategies that combine these mechanisms [6].
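The four-agent pipeline named in the abstract (Shield screening inputs, Responder drafting, Evaluator judging the draft, Reflector producing an interpretable refusal) can be sketched as a minimal control loop. This is an illustrative assumption, not the paper's released code: every function body below is a hypothetical stub standing in for an LLM-backed agent.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    safe: bool
    rationale: str  # kept so the refusal can explain itself (interpretability)

# All heuristics below are hypothetical stand-ins for LLM-backed agents.

def shield(image_desc: str, text: str) -> Verdict:
    """Pre-screen the combined multimodal input for jailbreak cues."""
    combined = (image_desc + " " + text).lower()
    harmful = any(cue in combined for cue in ("build a weapon", "bypass safety"))
    return Verdict(not harmful, "input screen flagged a harmful cue" if harmful
                   else "no cue found")

def responder(text: str) -> str:
    """Draft an answer; a real system would query the underlying LVLM here."""
    return f"[draft answer to: {text}]"

def evaluator(draft: str) -> Verdict:
    """Post-hoc check of the draft against a safety policy (stub)."""
    return Verdict("weapon" not in draft.lower(), "draft violated output policy")

def reflector(rationale: str) -> str:
    """Turn the failing agent's rationale into an interpretable refusal."""
    return f"Request declined: {rationale}. A safe alternative can be offered."

def moderate(image_desc: str, text: str) -> str:
    """Shield -> Responder -> Evaluator, with Reflector on either failure."""
    pre = shield(image_desc, text)
    if not pre.safe:
        return reflector(pre.rationale)
    draft = responder(text)
    post = evaluator(draft)
    return draft if post.safe else reflector(post.rationale)
```

The design point the sketch captures is that moderation fires at two stages (input and output) and that each rejection carries the rationale of the agent that triggered it, rather than a bare safe/unsafe label.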
Oct-30-2025