Temporal Context Awareness: A Defense Framework Against Multi-turn Manipulation Attacks on Large Language Models

arXiv.org Artificial Intelligence

Prashant Kulkarni (ORCID: 0009-0004-2344-4840), Mountain View, CA
Assaf Namer (ORCID: 0009-0008-5579-0544), Mountain View, CA

Abstract -- Many Large Language Models (LLMs) today are vulnerable to multi-turn manipulation attacks, where adversaries gradually build context through seemingly benign conversational turns to elicit harmful or unauthorized responses. These attacks exploit the temporal nature of dialogue to evade single-turn detection methods, posing a significant risk to the safe deployment of LLMs. This paper introduces the Temporal Context Awareness (TCA) framework, a novel defense mechanism designed to address this challenge by continuously analyzing semantic drift, cross-turn intention consistency, and evolving conversational patterns. The TCA framework integrates dynamic context embedding analysis, cross-turn consistency verification, and progressive risk scoring to detect and mitigate manipulation attempts effectively. Preliminary evaluations on simulated adversarial scenarios demonstrate the framework's potential to identify subtle manipulation patterns often missed by traditional detection techniques, offering a much-needed layer of security for conversational AI systems. In addition to outlining the design of TCA, we analyze diverse attack vectors and their progression across multi-turn conversations, providing valuable insights into adversarial tactics and their impact on LLM vulnerabilities. Our findings underscore the pressing need for robust, context-aware defenses in conversational AI systems and highlight the TCA framework as a promising direction for securing LLMs while preserving their utility in legitimate applications.

Index Terms -- LLM security, multi-turn attacks, prompt security, obfuscation, prompt injection, trustworthy AI, jailbreak

I. INTRODUCTION

Large Language Models (LLMs) have become integral to modern digital infrastructure, powering applications from customer service to healthcare assistance [Chen et al., 2023] [3].
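The abstract's combination of turn embeddings, cross-turn drift measurement, and progressive risk scoring can be illustrated with a minimal sketch. This is not the paper's implementation: the `toy_embed` function is a deterministic character-sum stand-in for a real sentence-embedding model, and the single `drift_threshold` parameter and additive risk accumulation are simplifying assumptions chosen only to make the idea concrete.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def toy_embed(text, dim=64):
    # Placeholder embedding: each token increments one bucket chosen by a
    # character sum. A real system would use a sentence-embedding model;
    # this stand-in only gives the sketch something vector-shaped to compare.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    return vec

def progressive_risk(turns, drift_threshold=0.5):
    # Accumulate a risk score across a conversation: each consecutive pair
    # of turns contributes when its semantic drift (1 - cosine similarity)
    # exceeds the threshold, so gradual topic steering raises the total.
    risk, prev = 0.0, None
    for turn in turns:
        emb = toy_embed(turn)
        if prev is not None:
            drift = 1.0 - cosine(prev, emb)
            if drift > drift_threshold:
                risk += drift
        prev = emb
    return risk
```

Under this sketch, a conversation that stays on topic accrues no risk, while abrupt cross-turn shifts accumulate a growing score that a deployment could compare against an escalation threshold; the paper's full framework additionally verifies intention consistency rather than relying on drift alone.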