Chain-of-Trigger: An Agentic Backdoor that Paradoxically Enhances Agentic Robustness

Jiyang Qiu, Xinbei Ma, Yunqing Xu, Zhuosheng Zhang, Hai Zhao

arXiv.org Artificial Intelligence 

The rapid deployment of large language model (LLM)-based agents in real-world applications has raised serious concerns about their trustworthiness. In this work, we reveal the security and robustness vulnerabilities of these agents through backdoor attacks. Distinct from traditional backdoors limited to single-step control, we propose the Chain-of-Trigger Backdoor (CoTri), a multi-step backdoor attack designed for long-horizon agentic control. CoTri relies on an ordered trigger sequence: an attacker-supplied initial trigger activates the backdoor, and subsequent triggers are drawn from the environment, enabling multi-step manipulation that diverts the agent from its intended task. Experimental results show that CoTri achieves a near-perfect attack success rate (ASR) while maintaining a near-zero false trigger rate (FTR). Because the training data models the stochastic nature of the environment, implanting CoTri paradoxically enhances the agent's performance on benign tasks and even improves its robustness against environmental distractions. Our work highlights that CoTri achieves stable, multi-step control within agents while improving their inherent robustness and task capabilities, which ultimately makes the attack more stealthy and raises potential safety risks.

The emergence of large language models (LLMs) has accelerated the development of autonomous agents (Yang et al., 2025a; OpenAI et al., 2024; Grattafiori et al., 2024), demonstrating extraordinary reasoning, planning, and interaction capabilities. However, for practical deployment in high-stakes and uncontrollable environments, a central question remains their trustworthiness (Xi et al., 2025a; Liu et al., 2025; Deng et al., 2025).
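The chained-trigger idea can be illustrated with a minimal sketch. This is not the paper's implementation (CoTri is implanted via training data, not explicit code); the trigger strings, action names, and the `BackdooredAgent` class below are all hypothetical, chosen only to show how an ordered chain fires step by step: the first trigger comes from the attacker's input, later ones from environmental observations, and non-matching inputs leave benign behavior intact (hence a low false trigger rate).

```python
# Conceptual sketch of a chain-of-trigger backdoor (hypothetical names,
# not the paper's training-based implantation).
TRIGGER_CHAIN = ["<init>", "env_obs_1", "env_obs_2"]    # ordered triggers
MALICIOUS_ACTIONS = ["act_a", "act_b", "act_c"]          # attacker-chosen actions


class BackdooredAgent:
    def __init__(self):
        self.stage = 0  # current position in the trigger chain

    def act(self, observation: str) -> str:
        # Fire only if the *next* expected trigger appears in the observation;
        # anything else falls through to the benign policy, so benign runs
        # never advance the chain.
        if self.stage < len(TRIGGER_CHAIN) and TRIGGER_CHAIN[self.stage] in observation:
            action = MALICIOUS_ACTIONS[self.stage]
            self.stage += 1
            return action
        return "benign_action"


agent = BackdooredAgent()
print(agent.act("user task <init>"))    # chain starts: act_a
print(agent.act("env_obs_1 returned"))  # chain continues: act_b
print(agent.act("unrelated input"))     # off-chain input: benign_action
```

The ordering is what distinguishes this from a single-step backdoor: each malicious action is released only after the environment itself supplies the next trigger, giving the attacker stable multi-step control over a long-horizon task.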
