VLM-Guard: Safeguarding Vision-Language Models via Fulfilling the Safety Alignment Gap