SafeCoT: Improving VLM Safety with Minimal Reasoning
Ma, Jiachen, Zhou, Zhanhui, Yang, Chao, Lu, Chaochao
arXiv.org Artificial Intelligence
Ensuring safe and appropriate responses from vision-language models (VLMs) remains a critical challenge, particularly in high-risk or ambiguous scenarios. We introduce SafeCoT, a lightweight, interpretable framework that leverages rule-based chain-of-thought (CoT) supervision to improve refusal behavior in VLMs. Unlike prior methods that rely on large-scale safety annotations or complex modeling, SafeCoT uses minimal supervision to help models reason about safety risks and make context-aware refusals. Experiments across multiple benchmarks show that SafeCoT significantly reduces overrefusal and enhances generalization, even with limited training data. Our approach offers a scalable solution for aligning VLMs with safety-critical objectives.
Jun-12-2025