On Evaluating Adversarial Robustness of Large Vision-Language Models

Neural Information Processing Systems 

To this end, we propose evaluating the robustness of open-source large VLMs in the most realistic and high-risk setting, where adversaries have only black-box access to the system and seek to deceive the model into returning targeted responses.
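To make the black-box targeted threat model concrete, the following is a minimal, hypothetical sketch: a query-only random-search loop that perturbs an image within an L-infinity budget to raise a target-response score. The `query_vlm` function here is a toy stand-in for the deployed model's scoring, not the paper's actual attack or any real API.

```python
import random

def query_vlm(image):
    """Hypothetical black-box oracle: returns a score in [0, 1] for how
    closely the model's response matches the attacker's target text.
    A real evaluation would query the deployed VLM instead."""
    mean = sum(image) / len(image)
    return 1.0 - abs(mean - 0.5)  # toy: target response favored near 0.5

def random_search_attack(image, budget=200, eps=0.05, seed=0):
    """Query-only random search: keep small perturbations that increase
    the target-response score, projected to an L-inf ball of radius eps."""
    rng = random.Random(seed)
    best, best_score = list(image), query_vlm(image)
    for _ in range(budget):
        cand = [
            min(1.0, max(0.0,                       # valid pixel range
                min(orig + eps, max(orig - eps,      # L-inf projection
                    px + rng.uniform(-0.01, 0.01)))))
            for px, orig in zip(best, image)
        ]
        score = query_vlm(cand)
        if score > best_score:          # greedy: keep only improvements
            best, best_score = cand, score
    return best, best_score

clean = [0.9] * 16                      # toy "image" as a flat pixel list
adv, adv_score = random_search_attack(clean)
```

Because the loop only keeps improving candidates, `adv_score` is never below the clean image's score, and every adversarial pixel stays within `eps` of the original.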
