FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning

Khurram Khalil, Khaza Anuarul Hoque

arXiv.org Artificial Intelligence 

Abstract--Generative Artificial Intelligence models like Large Language Models (LLMs) and Vision-Language Models (VLMs) exhibit state-of-the-art performance across a wide range of tasks but remain vulnerable to hardware-based threats, specifically bit-flip attacks (BFAs), posing a serious risk to their security in safety-critical applications. Existing BFA discovery methods--gradient-based, static analysis, and search-based--lack generalizability and struggle to scale, often failing to analyze the vast parameter space and complex interdependencies of modern foundation models in a reasonable time. This paper proposes FlipLLM, an architecture-agnostic reinforcement learning (RL) framework that formulates BFA discovery as a sequential decision-making problem. FlipLLM combines sensitivity-guided layer pruning with Q-learning to efficiently identify minimal, high-impact bit sets capable of inducing catastrophic failure. We demonstrate the effectiveness and generalizability of FlipLLM by applying it to a diverse set of models, including prominent text-only LLMs (GPT-2 Large, LLaMA 3.1 8B, and DeepSeek-V2 7B) and VLMs such as LLaVA 1.6, across datasets such as MMLU, MMLU-Pro, VQA v2, and TextVQA. Our results show that FlipLLM can identify critical bits that are vulnerable to BFAs up to 2.5x faster than SOTA methods. We demonstrate that flipping the FlipLLM-identified bits causes the accuracy of LLaMA 3.1 8B to plummet from 69.9% to 0.2%, and LLaVA's VQA score to drop from 78% to almost 0%, by flipping as few as 5 and 7 bits, respectively. Further analysis shows that applying standard hardware protection mechanisms, such as ECC SECDED, to the FlipLLM-identified bit locations completely mitigates the BFA impact, demonstrating the practical value of our framework for guiding hardware-level defenses.
FlipLLM offers the first scalable and adaptive methodology for exploring the BFA vulnerability of both language and multimodal foundation models, paving the way for comprehensive hardware-security evaluation.

Generative Artificial Intelligence models like Large Language Models (LLMs) [1] and Vision-Language Models (VLMs) represent a transformative advancement in artificial intelligence, finding integration into mission-critical systems spanning healthcare, finance, and autonomous navigation [2], [3]. Their effective deployment mandates reliable and secure operation across diverse hardware infrastructures, from expansive cloud accelerators to resource-constrained edge devices.
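The severity of a bit-flip attack comes from floating-point encoding: toggling one exponent bit can change a weight's magnitude by many orders of magnitude. The sketch below is a minimal, hypothetical illustration of this effect on a single float32 value; it is not FlipLLM's search procedure, and the helper name `flip_bit` is our own.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB, 31 = sign) of a float32 value."""
    # Reinterpret the float's 32-bit pattern as an unsigned integer.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit  # XOR toggles exactly the chosen bit.
    (corrupted,) = struct.unpack("<f", struct.pack("<I", bits))
    return corrupted

w = 0.01
# Bit 30 is the most significant exponent bit: flipping it scales a
# small weight by roughly 2^128, enough to corrupt downstream activations.
print(flip_bit(w, 30))  # enormous magnitude
# Bit 0 is the mantissa LSB: the weight is almost unchanged.
print(flip_bit(w, 0))
```

This asymmetry is why a handful of well-chosen flips (5-7 in the paper's experiments) suffices to destroy accuracy, and why protecting only the identified bit locations with ECC SECDED is effective.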
