Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models