MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?
Wahed, Muntasir, Zhou, Xiaona, Nguyen, Kiet A., Yu, Tianjiao, Diwan, Nirav, Wang, Gang, Hakkani-Tür, Dilek, Lourentzou, Ismini
–arXiv.org Artificial Intelligence
Recent advancements in Large Language Models (LLMs) have significantly enhanced their code generation capabilities. However, their robustness against adversarial misuse, particularly through multi-turn malicious coding prompts, remains underexplored. In this work, we introduce code decomposition attacks, where a malicious coding task is broken down into a series of seemingly benign subtasks across multiple conversational turns to evade safety filters. To facilitate systematic evaluation, we introduce \benchmarkname{}, a large-scale benchmark designed to evaluate the robustness of code LLMs against both single-turn and multi-turn malicious prompts. Empirical results across open- and closed-source models reveal persistent vulnerabilities, especially under multi-turn scenarios. Fine-tuning on MOCHA improves rejection rates while preserving coding ability, and importantly, enhances robustness on external adversarial datasets with up to 32.4% increase in rejection rates without any additional supervision.
arXiv.org Artificial Intelligence
Jul-29-2025
- Country:
- Asia > Middle East
- Jordan (0.04)
- North America > United States
- Illinois > Champaign County > Urbana (0.04)
- Asia > Middle East
- Genre:
- Research Report (1.00)
- Industry:
- Government > Military (0.93)
- Information Technology > Security & Privacy (1.00)
- Technology: