Beyond Trusting Trust: Multi-Model Validation for Robust Code Generation

McDanel, Bradley

arXiv.org Artificial Intelligence 

UMBC CODEBOT '25 Workshop, Columbia, MD, 25-26 February 2025

Bradley McDanel, Franklin and Marshall College (bmcdanel@fandm.edu)

1 Introduction

Ken Thompson's 1984 essay "Reflections on Trusting Trust" demonstrated that even carefully reviewed source code can hide malicious behavior when the compiler itself is compromised, because the malicious code exists only in the compiled binary, not in any source [1]. Today, large language models (LLMs) used as code generators [2, 3] present an even more opaque security challenge than classical compilers. While a compiler binary can be analyzed for malicious behavior, LLMs operate through vast matrices of weights combined in non-linear ways, making it difficult to develop robust methods for identifying embedded behaviors [4, 5]. This paper revisits Thompson's analogy in the context of LLM-based code generation. We show how malicious behavior might be subtly embedded into a widely used model and argue that direct inspection of the model's parameters is currently infeasible.
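The multi-model validation named in the title can be illustrated with a minimal sketch: query several independently trained code generators with the same prompt and accept an output only when enough of them agree, on the assumption that a behavior embedded in one compromised model is unlikely to be reproduced verbatim by the others. The `cross_validate` function, the stub models, and the agreement threshold below are all hypothetical illustrations, not the paper's actual method.

```python
from collections import Counter

def cross_validate(generators, prompt, min_agreement=2):
    """Query several independent code generators and accept an output
    only if at least min_agreement of them produce the same code."""
    outputs = [gen(prompt) for gen in generators]
    # Normalize whitespace before comparing; a real system would compare
    # behavior (e.g., test outcomes), not raw text.
    normalized = [" ".join(out.split()) for out in outputs]
    best, votes = Counter(normalized).most_common(1)[0]
    if votes >= min_agreement:
        return best, True    # consensus reached
    return outputs, False    # divergence: escalate to human review

# Hypothetical stand-ins for independent LLM backends.
model_a = lambda p: "def add(a, b):\n    return a + b"
model_b = lambda p: "def add(a, b):\n    return a + b"
model_c = lambda p: "def add(a, b):\n    return a - b"  # a deviating model

result, ok = cross_validate([model_a, model_b, model_c], "write add(a, b)")
```

Here the deviating third model is outvoted, so the consensus implementation is accepted; if all three had disagreed, the call would flag the prompt for human review instead.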
