A Survey: Towards Privacy and Security in Mobile Large Language Models

Honghui Xu, Kaiyang Li, Wei Chen, Danyang Zheng, Zhiyuan Li, Zhipeng Cai

arXiv.org Artificial Intelligence 

Mobile Large Language Models (LLMs) are revolutionizing diverse fields such as healthcare, finance, and education with their ability to perform advanced natural language processing tasks on the go. However, deploying these models in mobile and edge environments introduces significant privacy and security challenges due to their resource-intensive nature and the sensitivity of the data they process. This survey provides a comprehensive overview of privacy and security issues associated with mobile LLMs, systematically categorizing existing solutions such as differential privacy, federated learning, and prompt encryption. Furthermore, we analyze vulnerabilities unique to mobile LLMs, including adversarial attacks, membership inference, and side-channel attacks, offering an in-depth comparison of their effectiveness and limitations. To bridge the remaining gaps, we propose potential applications, discuss open challenges, and suggest future research directions, paving the way for the development of trustworthy, privacy-compliant, and scalable mobile LLM systems.

The advent of mobile Large Language Models (LLMs) represents a significant milestone in the evolution of AI, enabling advanced natural language processing capabilities to be deployed in mobile and edge environments [1]-[3]. By bringing powerful AI tools closer to end-users, mobile LLMs are revolutionizing industries such as healthcare [4], finance [5], and education [6] with real-time, on-device applications.
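Among the solutions the abstract lists, differential privacy is the most self-contained to illustrate. The sketch below shows the classic Laplace mechanism, which adds noise calibrated to a query's sensitivity before a value leaves the device; the function name and parameters are illustrative assumptions, not an API from the survey, and a real deployment would also need privacy-budget accounting across queries.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Perturb `value` with Laplace noise scaled to sensitivity/epsilon,
    the standard construction for epsilon-differential privacy.
    (Illustrative sketch only, not the survey's implementation.)"""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# E.g., privatize a per-user token count before reporting it off-device.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon values inject more noise (stronger privacy, lower utility), which is exactly the privacy-utility trade-off that makes such mechanisms challenging on resource-constrained mobile devices.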