On Evaluating Adversarial Robustness of Large Vision-Language Models
– Neural Information Processing Systems
To this end, we propose evaluating the robustness of open-source large VLMs in the most realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning the targeted responses.
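In the black-box setting described above, the adversary can only query the deployed model and observe its responses; no gradients or weights are available. The paper's own pipeline combines transfer-based and query-based attacks, but as a minimal illustrative sketch of the query-only idea, the loop below runs a random-search targeted attack against a stand-in scoring function. The `black_box_score` function and all its internals are hypothetical placeholders for a real VLM API, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(image, target_text):
    # Hypothetical stand-in for a deployed VLM API: in the real setting this
    # would return how close the model's response to `image` is to the
    # adversary's target text. Here a toy proxy: closeness of the image mean
    # to a fixed target value, so the example is self-contained and runnable.
    target_value = 0.8
    return -abs(image.mean() - target_value)

def random_search_attack(image, target_text, eps=8 / 255, queries=200):
    """Query-only targeted attack: propose a small random perturbation and
    keep it only if the black-box score moves toward the target response.
    No gradient information from the model is ever used."""
    best = image.copy()
    best_score = black_box_score(best, target_text)
    for _ in range(queries):
        step = rng.uniform(-eps / 4, eps / 4, size=image.shape)
        # Project back into the L-infinity ball of radius eps around the
        # clean image, then into the valid pixel range [0, 1].
        candidate = np.clip(best + step, image - eps, image + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        score = black_box_score(candidate, target_text)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

clean = rng.uniform(0.0, 1.0, size=(3, 32, 32))
adv, adv_score = random_search_attack(clean, "a photo of a dog")
```

The accept-if-better loop guarantees the adversarial score never falls below the clean score, while the projection step keeps the perturbation imperceptibly small, which is the constraint that makes such black-box attacks realistic and high-risk for deployed VLMs.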