Speed is the promise of AI-assisted development. You describe what you want, the model builds it, and something that used to take days takes hours. For development teams under pressure to ship, that acceleration is hard to resist.
But speed without oversight is a liability. And right now, most organisations are moving fast without asking the questions that matter.
The vibe coding problem
Vibe coding, the practice of generating code from natural language prompts with AI tools, has become standard across development teams.
Over 97% of developers report using AI coding tools on their own, often ahead of any official company policy, and well before security review processes have caught up.
The risk isn’t theoretical. Veracode’s 2025 GenAI Code Security Report tested 100 leading LLMs across 80 curated tasks and found they produced insecure code 45% of the time, with no real improvement across newer or larger models.
The core issue isn’t that AI writes bad code. It’s that developers trust it without checking. Code that runs is not the same as code that is safe: hardcoded credentials, weak authentication logic, and insecure dependencies all compile and execute just fine. And when the pressure is to ship by Friday, the code review gets skipped.
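To make the hardcoded-credential category concrete, here is a minimal sketch of the risky pattern and a safer alternative. The function name and environment variable are illustrative, not taken from any real codebase.

```python
import os

# Risky pattern often seen in generated code: a credential baked into
# the source, where it ends up in version control and every clone.
# API_KEY = "sk-live-abc123"  # hardcoded secret -- avoid

# Safer pattern: read the secret from the environment at runtime, and
# fail loudly if it is missing rather than falling back silently.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not set; refusing to start")
    return key
```

The point is not the specific mechanism (a secrets manager would be better still) but that the insecure version runs exactly as well as the secure one, which is why review has to look beyond "does it work".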
That, as experienced infrastructure and security consultant John Boero puts it on the Cyber Security in Focus podcast, is the lost art of code review. Getting something built quickly is impressive. Getting something built quickly and securely is the job.
What your AI tool is doing with your data
There is a second risk that gets far less attention: what happens to the data you share with AI tools in the first place.
LLMs can memorise portions of training data and reproduce it in responses, and when an AI model processes everything from customer chats to internal memos, there is a very real risk of unintended disclosure through the model itself.
Most enterprise teams assume that paying for a subscription means their data stays private. That assumption is often wrong. LLMs often collect and process sensitive data without clear user consent, creating privacy and compliance risks. And once data enters an LLM, it is effectively permanent: data retention and training policies are often unclear or risky.
For organisations handling sensitive client data, proprietary code, or regulated information, this is not an abstract concern. Every prompt sent to a public AI model is a potential data exposure event. The thumbs-up, thumbs-down feedback loop present on almost every major platform is actively used to train the next version of the model. Your inputs are their most valuable resource.
The practical response is straightforward: know what your teams are putting into these tools, establish clear policies about what data is permissible, and for the most sensitive workloads, consider running a local model. Tools like Microsoft’s enterprise AI offerings have made data privacy a core selling point, with enterprise customers able to ensure their prompt history is not used for model training. The option exists. Most organisations simply haven’t made the decision.
The DevSecOps tension
Speed and security have always pulled in opposite directions. Development teams are under pressure to ship. Security teams are under pressure to slow things down and check. In most organisations, that tension never fully resolves; it just produces friction.
The underlying issue is structural. When security is bolted on at the end of a development cycle, it becomes a blocker. When it is embedded from the start, when developers and security professionals are working toward the same outcome rather than against each other, the friction disappears and the output is better.
This isn’t a new insight. But vibe coding has made it urgent. When a developer can build and deploy an application in an afternoon, the traditional model of staged security review breaks down entirely. The only viable response is to move security into the act of creation: agentic security must become a native companion to AI coding assistants, embedded directly inside AI-first development environments, not bolted on downstream.
The organisations that will manage this well are those where dev and security teams share a common goal, communicate in plain language, and treat code review not as a bureaucratic hurdle but as a professional standard.
The old ways still matter
There is a tendency in technology to treat everything that came before as obsolete. New frameworks, new languages, new platforms: the industry moves quickly, and the assumption is that moving with it means leaving the old behind.
That assumption is wrong, particularly in security.
The vulnerabilities that exist in AI-generated code today are often not new. They are the same categories of weakness (injection flaws, poor authentication, insecure dependencies) that experienced developers learned to watch for decades ago. A generation of engineers who understand assembly, C, and the fundamentals of how hardware and software interact are not outdated. They are exactly the people who can identify what an AI-generated codebase is quietly getting wrong.
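The injection flaw is a good illustration of how old the weaknesses are, and how old the fix is. A hypothetical sketch, with illustrative function names:

```python
import sqlite3

# Classic injection flaw: building SQL by string interpolation.
# A crafted username like "x' OR '1'='1" rewrites the query itself.
def find_user_unsafe(conn, username):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"  # vulnerable
    ).fetchall()

# The decades-old fix still applies: parameterised queries keep the
# data separate from the query structure, so input stays input.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Reviewers who learned this pattern in the 2000s will spot it instantly in generated code; nothing about the AI provenance changes the defect or the remedy.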
Full-stack understanding, from the hardware layer up through the operating system, platform, and application, remains one of the most valuable and undervalued capabilities in security. The ability to ask not just “does this work?” but “how does this work, and where could it break?” is still a human skill. AI cannot reliably substitute for it yet.
What security and development teams should do now
The practical starting point is not complex. Treat AI-generated code as untrusted by default. Review it with the same rigour applied to third-party libraries. Establish clear policies on what data can be shared with which AI tools. Test your code before it reaches production using static and dynamic analysis, not instead of review, but in addition to it.
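As a sketch of what "untrusted by default" can mean in practice, here is a hypothetical minimal pre-merge check that flags obvious secret-like assignments before human review. A real pipeline would use a dedicated scanner; the pattern and function name here are illustrative only.

```python
import re

# Very rough heuristic for secret-like assignments in source text.
# Real scanners use entropy checks and provider-specific key formats.
SECRET_PATTERN = re.compile(
    r"(password|api_key|secret|token)\s*=\s*['\"][^'\"]+['\"]",
    re.IGNORECASE,
)

def flag_suspect_lines(source: str) -> list[str]:
    """Return stripped source lines that look like hardcoded secrets."""
    return [
        line.strip()
        for line in source.splitlines()
        if SECRET_PATTERN.search(line)
    ]
```

The value of a gate like this is not that it catches everything, but that it makes review of generated code the default path rather than an optional extra.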
For security leaders, the conversation with development teams should not be adversarial. It should be practical: here is what the risk looks like, here is how we manage it together, and here is what we will not compromise on regardless of the deadline.
The speed that AI offers is real and genuinely valuable. The risks it introduces are equally real. Managing both is not a choice between innovation and security. It is the job.
Hear the full conversation
In the latest episode of Cyber Security in Focus, Katie Watson speaks with John Boero, a consultant with over 20 years of experience in infrastructure, security, and cloud consulting across multiple continents, about vibe coding risks, AI data privacy, the DevSecOps relationship, and why the engineers who understand the old ways are still the most valuable people in the room.
Listen to Vibe Coding & Security Risk with John Boero on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts.

