Estimated reading time: 3 minutes
New research from Harness warns that shadow AI is creating security blind spots, and DevSecOps teams are struggling to keep up.
Software teams are losing control of where and how AI tools and models are being used, according to new research from Harness, which warns that a surge in “shadow AI” is leaving organizations dangerously exposed.
The company’s State of AI-Native Application Security 2025 report finds that as AI-native applications multiply, security teams can no longer see which models, APIs, and data sources are active, highlighting significant limitations with traditional DevSecOps pipelines.
Based on a survey of 500 security practitioners across the US, UK, France, and Germany, the report paints a picture of a development culture racing ahead of governance.
Nearly two-thirds (62%) of respondents admitted they have no visibility into where large language models are used in their organizations, while three-quarters said shadow AI will eclipse the risks once caused by shadow IT.
Real-world consequences are already appearing. More than three-quarters of enterprises reported at least one prompt-injection incident, where an attacker manipulates an LLM’s input to exfiltrate data or trigger malicious actions. Two-thirds said they’ve faced exploits involving vulnerable LLM code, and a similar proportion reported jailbreaks – cases where the AI is tricked into ignoring safety rules.
Your inbox, upgraded.
Receive weekly engineering insights to level up your leadership approach.
A new enterprise blind spot
Developers are building AI into everything, from chatbots and internal copilots to customer-facing features, but few are stopping to ask how those models are secured or monitored. Harness found that 61% of new applications now include AI components, often connecting to third-party LLMs such as OpenAI, Anthropic, or Hugging Face outside of established review, approval, or monitoring processes.
That uncontrolled sprawl is creating a vast new attack surface: 82% of respondents said AI-native apps are becoming the next frontier for cybercriminals due to the speed and opacity of the integrations. What’s more, 63% believe they’re more vulnerable than traditional software, which Harness attributes to a lack of real-time visibility into how AI components behave.
“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, field CTO at Harness. “Traditional security tools were built for static code and predictable systems – not for adaptive, learning models that evolve daily. Security has to live across the entire software lifecycle – before, during, and after code – so teams can move fast without losing visibility or control.”
More like this
Can observability keep up?
The report claims that most enterprises lack the real-time observability required to detect these attacks. More than half of security leaders said they have no continuous insight into model training data, LLM prompts, or even the API traffic linking AI components to core systems. As one section of the report notes, “It seems like a new API connects an LLM to sensitive data every day,” leaving defenders scrambling to keep up.
At the same time, collaboration between developers and security teams remains fractured. Only 34% of developers notify security before starting an AI project, and just over half (53%) before going live. 14% admit they only loop security in after an app is deployed – or even once something goes wrong. Developers cite lack of time and expertise as key reasons, with 62% saying they don’t have the training to implement comprehensive AI security.
This divide has serious implications for how software engineers approach DevSecOps in the age of AI. Three quarters of security leaders say developers still see security as a blocker, while 75% report that AI applications evolve faster than security can keep up. That speed-versus-safety imbalance, the report warns, is turning shadow AI into “a gaping chasm” in cyber defences.
Redrawing the attack surface
The findings echo a wider industry trend: AI is redrawing the attack surface faster than governance models can adapt. Developers now need to secure not only code and APIs, but also model behaviour, training datasets, and AI-generated outputs – all of which can change with each model update.
Arellano sees the solution in embedding security into every stage of software delivery. “Where teams once monitored code and APIs, they now must secure model behavior, training data, and AI-generated connections. The only way forward is for security and development to operate as one – embedding governance directly into the software delivery process.”
Harness recommends a return to DevSecOps fundamentals: building security in from day one, automatically discovering new AI components, and monitoring them continuously for anomalies such as unusual API calls or prompt behaviour. The company also advocates dynamic testing against AI-specific threats, like model poisoning or data leakage, before production release.
For developers, it’s no longer enough to ship fast and fix later. As AI systems grow and change on their own, maintaining visibility has become the new discipline of secure engineering.

Deadline: January 4, 2026
Call for Proposals for London 2026 is open!