Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin, Muchao Ye
Neural Information Processing Systems
Vision-Language (VL) pre-trained models have shown their superiority on many multimodal tasks. However, the adversarial robustness of such models has not been fully explored. Existing approaches mainly study adversarial robustness under the white-box setting, which is unrealistic in practice. In this paper, we investigate a new yet practical task: crafting image and text perturbations using pre-trained VL models to attack black-box fine-tuned models on different downstream tasks.
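To make the notion of crafting an image perturbation concrete, here is a minimal, self-contained FGSM-style sketch. It is purely illustrative and is not the paper's method: the toy linear scorer `dot`, the weights `w`, the input `x`, and the helper `fgsm_step` are all hypothetical stand-ins for a real model and its input gradient.

```python
# Illustrative FGSM-style perturbation sketch (NOT the paper's method).
# We attack a toy linear "scorer" f(x) = w . x, whose gradient w.r.t.
# the input is simply w, and nudge each input element by eps in the
# sign of the gradient to increase the score we want to disrupt.

def dot(w, x):
    # Toy stand-in for a model's scalar output on input x.
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_step(x, grad, eps=0.1):
    # One fast-gradient-sign step: x' = x + eps * sign(grad).
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w = [0.5, -1.0, 0.25]   # hypothetical model weights
x = [0.2, 0.4, 0.8]     # hypothetical "image" input
# For f(x) = w . x, the input gradient is exactly w.
x_adv = fgsm_step(x, w, eps=0.1)
print(dot(w, x_adv) > dot(w, x))  # True: the perturbation raises the score
```

In a real black-box transfer attack, the gradient would instead come from a surrogate pre-trained VL model, and the resulting perturbation would be applied to the black-box fine-tuned target.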