VLM-Guard: Safeguarding Vision-Language Models via Fulfilling Safety Alignment Gap