PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach
Lin, Zhihao, Ma, Wei, Zhou, Mingyi, Zhao, Yanjie, Wang, Haoyu, Liu, Yang, Wang, Jun, Li, Li
In recent years, Large Language Models (LLMs) have gained widespread use, raising concerns about their security. Traditional jailbreak attacks often rely on the victim model's internal information or are limited in their ability to explore its unsafe behaviors, reducing their general applicability. In this paper, we introduce PathSeeker, a novel black-box jailbreak method inspired by the game of rats escaping a maze. We posit that each LLM has its own unique "security maze", and that attackers attempt to find the exit by learning from received feedback and accumulated experience in order to compromise the target LLM's security defenses. Our approach leverages multi-agent reinforcement learning, where smaller models collaborate to guide the main LLM in performing mutation operations that achieve the attack objectives. By progressively modifying inputs based on the model's feedback, our system induces richer, harmful responses. During our manual jailbreak attempts, we observed that the vocabulary of the target model's responses gradually became richer before harmful responses were eventually produced. Based on this observation, we also introduce a reward mechanism that exploits the expansion of vocabulary richness in LLM responses to weaken security constraints. Our method outperforms five state-of-the-art attack techniques when tested across 13 commercial and open-source LLMs, achieving high attack success rates, especially against commercial models with strong safety alignment such as GPT-4o-mini, Claude-3.5, and GLM-4-air. This study aims to improve the understanding of LLM security vulnerabilities, and we hope it can contribute to the development of more robust defenses.
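The vocabulary-richness reward described in the abstract could be sketched as follows. This is a hypothetical illustration only: the function names (`vocab_richness`, `richness_reward`) and the distinct-word-count proxy are assumptions for clarity, not the paper's actual reward formulation.

```python
# Hypothetical sketch of a vocabulary-richness reward signal.
# Assumption: richness is proxied by the number of distinct words in a
# response; the paper's actual metric and reward design may differ.

def vocab_richness(text: str) -> float:
    """Count distinct lowercase words in a response (a simple richness proxy)."""
    return float(len(set(text.lower().split())))

def richness_reward(prev_response: str, new_response: str) -> float:
    """Reward is positive when the victim model's vocabulary expands,
    which the authors observed tends to precede harmful outputs."""
    return vocab_richness(new_response) - vocab_richness(prev_response)

# A terse refusal followed by a wordier reply would yield a positive reward,
# nudging the RL agents toward mutations that elicit richer responses.
```

Under this sketch, the reinforcement learner would favor input mutations whose responses score a positive `richness_reward`, progressively steering the target model away from short refusals.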
Hallmarks of AI Success in the Enterprise
We're in the midst of a rapid uptake of AI in the enterprise across the board, but there are big differences in the results and workflows of these AI practices. For its latest "State of AI in the Enterprise" report, Deloitte looked for the commonalities that mark successful AI practices, as well as the practices associated with lower achievement. For its fourth annual AI report, titled "Becoming an AI-fueled organization," Deloitte surveyed 2,875 executives from 11 countries in the Americas, EMEA, and APAC to determine how they're using AI, what kinds of results they're getting, and their underlying practices. Deloitte grouped the companies into four main groups based on the volume of AI projects and their success rate. Transformers, which accounted for 28% of survey respondents, were characterized by high outcomes and a high number of AI deployments, while Pathseekers, which accounted for 26% of respondents, reported high outcomes but a low number of deployments.