Veracode's 2025 GenAI Code Security Report: AI Code Vulnerabilities
Veracode, a prominent name in application risk management, has released its 2025 GenAI Code Security Report, shedding light on the security challenges posed by AI-generated code. The study evaluated 80 curated coding tasks across more than 100 large language models (LLMs), finding that while AI can produce functional code, it introduces security vulnerabilities in nearly half of cases.
The findings highlight a troubling pattern: when faced with secure and insecure coding options, GenAI models chose the insecure path 45 percent of the time. Despite advancements in generating syntactically correct code, the security aspect has not seen similar progress, remaining stagnant over time.
Jens Wessling, Veracode's Chief Technology Officer, remarked, "The emergence of vibe coding, where developers rely on AI for code generation without explicitly defining security requirements, signifies a fundamental shift in software development. Our research indicates that GenAI models make incorrect security decisions almost half the time, and this trend shows no signs of improvement."
AI is empowering attackers to identify and exploit security vulnerabilities more efficiently. AI-driven tools can thoroughly scan systems, pinpoint weaknesses, and generate exploit code with minimal human intervention. This reduces the entry barrier for less-skilled attackers and enhances the speed and sophistication of attacks, posing a significant threat to traditional security defenses.
LLMs and Common Security Vulnerabilities
To assess the security properties of LLM-generated code, Veracode crafted 80 code-completion tasks with known potential for security vulnerabilities, based on the MITRE Common Weakness Enumeration (CWE) system. These tasks prompted over 100 LLMs to auto-complete code blocks in either a secure or an insecure manner; the resulting completions were then analyzed with Veracode Static Analysis. In 45 percent of test cases, LLMs introduced vulnerabilities classified within the OWASP Top 10, the list of the most critical web application security risks.
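To illustrate the kind of choice such a completion task presents (this SQL-lookup example is hypothetical, not drawn from Veracode's actual task set), a model finishing a database query can take either a parameterized, secure path or an injectable string-concatenation path (CWE-89):

```python
import sqlite3

def find_user_insecure(conn, username):
    # Insecure completion: string concatenation lets crafted input
    # rewrite the query (CWE-89, SQL injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Secure completion: a parameterized query treats the input
    # strictly as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo: the classic "' OR '1'='1" payload dumps every row from the
# insecure version but matches nothing in the secure one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
payload = "' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # 2 rows: injection succeeded
print(len(find_user_secure(conn, payload)))    # 0 rows: payload treated as data
```

Both completions are syntactically valid and pass a functional test with benign input, which is precisely why a static analyzer, rather than the model's own judgment, is needed to tell them apart.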
Java emerged as the riskiest language for AI code generation, with a security failure rate exceeding 70 percent. Other major languages, such as Python, C#, and JavaScript, also presented significant risks, with failure rates ranging from 38 percent to 45 percent. The research revealed that LLMs failed to secure code against cross-site scripting and log injection in 86 percent and 88 percent of cases, respectively.
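The two worst-performing weakness classes both have well-known, mechanical defenses. A minimal Python sketch (illustrative only, not Veracode's test code) of output escaping against cross-site scripting (CWE-80) and newline stripping against log injection (CWE-117):

```python
import html

def render_greeting(name: str) -> str:
    # XSS defense: escape user input before embedding it in HTML,
    # so "<script>" renders as literal text, not markup (CWE-80).
    return "<p>Hello, " + html.escape(name) + "</p>"

def safe_log_line(user_input: str) -> str:
    # Log-injection defense: strip CR/LF so attacker-controlled input
    # cannot forge additional log entries (CWE-117).
    sanitized = user_input.replace("\r", "").replace("\n", " ")
    return "login attempt: " + sanitized

print(render_greeting("<script>alert(1)</script>"))
print(safe_log_line("admin\nFAKE ENTRY: login succeeded"))
```

The fixes are one-liners, which underscores the report's point: the models fail these cases not because the mitigations are hard, but because nothing in a bare completion prompt requires them.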
"Despite advances in AI-assisted development, security hasn't kept pace," Wessling stated. "Our research indicates models are improving in coding accuracy but not in security. Larger models don't perform significantly better than smaller ones, suggesting a systemic issue rather than a scaling problem."
Managing Application Risks in the AI Era
While GenAI development practices like vibe coding enhance productivity, they also amplify risks. Veracode stresses the importance of a comprehensive risk management program that prevents vulnerabilities before they reach production by integrating code quality checks and automated fixes directly into the development workflow.
Veracode recommends the following proactive measures to ensure security:
- Integrate AI-powered tools like Veracode Fix into developer workflows to remediate security risks in real time.
- Leverage Static Analysis to detect flaws early and automatically, preventing vulnerable code from advancing through development pipelines.
- Embed security in agentic workflows to automate policy compliance and ensure AI agents enforce secure coding standards.
- Use Software Composition Analysis (SCA) to ensure AI-generated code does not introduce vulnerabilities from third-party dependencies and open-source components.
- Adopt bespoke AI-driven remediation guidance to empower developers with precise fix instructions and train them to use the recommendations effectively.
- Deploy a Package Firewall to automatically detect and block malicious packages, vulnerabilities, and policy violations.
"AI coding assistants and agentic workflows represent the future of software development, and they will continue to evolve rapidly," Wessling concluded. "The challenge for every organization is ensuring security evolves alongside these new capabilities. Security cannot be an afterthought if we want to prevent the accumulation of massive security debt."
Veracode stands as a global leader in Application Risk Management for the AI era. With a platform informed by trillions of lines of scanned code and a proprietary AI-assisted remediation engine, Veracode is trusted by organizations worldwide to build and maintain secure software from code creation to cloud deployment.