Explaining Software Vulnerabilities with Large Language Models
Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy, Vaibhav Chaudhari, Rohith Kumar Shanmuganathan, Eric Bodden
arXiv.org Artificial Intelligence
Abstract: The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability detection. Nevertheless, these tools frequently exhibit usability limitations: their generic warning messages do not sufficiently communicate important information to developers, resulting in misunderstandings or oversight of critical findings. In light of recent developments in Large Language Models (LLMs) and their text generation capabilities, our work investigates a hybrid approach that uses LLMs to tackle the SAST explainability challenges. In this paper, we present SAFE, an Integrated Development Environment (IDE) plugin that leverages GPT-4o to explain the causes, impacts, and mitigation strategies of vulnerabilities detected by SAST tools. Our expert user study findings indicate that the explanations generated by SAFE can significantly assist beginner to intermediate developers in understanding and addressing security vulnerabilities, thereby improving the overall usability of SAST tools.

With the rise in software security vulnerabilities such as those in the Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Weaknesses list [1], many companies resort to static application security testing (SAST) tools for the detection of software vulnerabilities.
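The abstract describes a pipeline in which a SAST finding is handed to an LLM, which then explains the vulnerability's cause, impact, and mitigation. A minimal sketch of that kind of pipeline is shown below; the `SastFinding` schema and `build_explanation_prompt` helper are illustrative assumptions for this note, not SAFE's actual implementation or prompt design.

```python
from dataclasses import dataclass

@dataclass
class SastFinding:
    """A single SAST warning (fields are illustrative, not SAFE's schema)."""
    cwe_id: str   # e.g. "CWE-89" (SQL injection)
    file: str     # file the warning points at
    line: int     # reported line number
    message: str  # the tool's generic warning text

def build_explanation_prompt(finding: SastFinding) -> str:
    """Assemble a prompt asking an LLM for the cause, impact, and
    mitigation of the detected vulnerability, mirroring the three
    aspects the abstract says SAFE explains."""
    return (
        f"A static analysis tool reported {finding.cwe_id} "
        f"at {finding.file}:{finding.line}: {finding.message}\n"
        "Explain to a developer:\n"
        "1. The cause of this vulnerability.\n"
        "2. Its potential impact.\n"
        "3. Concrete mitigation strategies."
    )

# Hypothetical finding, as a SAST tool might emit it:
finding = SastFinding("CWE-89", "UserDao.java", 42,
                      "SQL query built from unsanitized user input")
prompt = build_explanation_prompt(finding)
print(prompt)
```

In a real plugin, the resulting prompt would be sent to the model (the paper uses GPT-4o) and the response rendered inline in the IDE next to the warning.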
Nov-7-2025