Key takeaways:
- Reframe security as performance resilience: shift the mindset from performance versus security to performance through security.
- Embed security into existing performance workflows: security checks can be integrated directly into performance testing and CI/CD automation.
- Drive cultural adoption through shared ownership and automation: align teams around common goals, promote cross-team learning, and automate security validation.
Performance testing can sometimes take precedence over security testing, especially when teams are measured by throughput, latency, and uptime.
However, this performance-centric mindset can inadvertently create security gaps.
Recognizing the cultural gap
The initial challenge was not technical but cultural. The issue became apparent after a release that met all performance benchmarks but later required an urgent security remediation. Although the system performed well under load, a missed security misconfiguration forced a hotfix and operational disruption. That experience exposed a gap in how we defined quality.
Performance engineers were focused on ensuring the system could handle peak load, while security testing was viewed as a separate responsibility – something handled by penetration testers or compliance teams later in the release cycle. This siloed approach led to missed vulnerabilities, late-stage fixes, and friction between performance and security teams.
The first step was to reframe security not as a constraint, but as an enabler of performance stability and reliability.
Building the foundation of security culture
To build a culture of security testing, I began by identifying shared goals between performance and security engineers. Both groups cared deeply about system reliability, data integrity, and availability.
Framing security tests as mechanisms to prevent performance degradation during attacks – such as Denial of Service (DoS), malformed packet floods, or Transport Layer Security (TLS) renegotiation abuse – helped establish a shared sense of ownership.
Next, we embedded security checks into our existing performance validation framework. This meant adding automated TLS validation, authentication checks, and encryption verification directly into the same scripts used for throughput and latency testing, rather than running them in a separate pipeline.
As security validation became part of the same workflow engineers already used, it felt like a natural extension of performance testing, rather than an extra task.
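As an illustration of this kind of embedded check, the sketch below separates the pure gate logic from the network probe, using only Python's standard `ssl` and `socket` modules. The thresholds and the split into two functions are my own assumptions, not a prescription; the idea is that the probe runs once before the load test starts and fails the run if the endpoint negotiates a legacy protocol or a weak cipher.

```python
import socket
import ssl

MIN_CIPHER_BITS = 128
ALLOWED_VERSIONS = {"TLSv1.2", "TLSv1.3"}

def tls_config_ok(version: str, cipher_bits: int) -> bool:
    """Pure gate logic: accept only modern protocol versions and
    ciphers with at least MIN_CIPHER_BITS bits of strength."""
    return version in ALLOWED_VERSIONS and cipher_bits >= MIN_CIPHER_BITS

def probe_tls(host: str, port: int = 443) -> tuple[str, int]:
    """Handshake once and report the negotiated TLS version and
    cipher strength; call before starting the load generators."""
    context = ssl.create_default_context()
    # Refuse legacy protocol versions outright at the client side.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            _cipher_name, _protocol, bits = tls.cipher()
            return tls.version(), bits
```

In a performance script this becomes a one-line precondition, e.g. `assert tls_config_ok(*probe_tls(target_host))`, so a misconfigured environment aborts before any load is generated.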
Integrating security into performance workflows
We focused on three levels of integration:
- Pre-deployment testing: automated scripts validated TLS configurations, cipher strength, and Application Programming Interface (API) authentication before load testing began. Misconfigurations were flagged early, ensuring performance tests were run only on hardened environments.
- Runtime security validation: during performance runs, we introduced traffic patterns mimicking real-world attacks (such as excessive session creation, random payload fuzzing, and malformed requests) to observe system resilience. These tests provided insights into both performance bottlenecks and security weaknesses under stress.
- Post-test analysis: beyond latency and throughput graphs, we began reviewing logs for unusual patterns such as connection resets, failed authentications, or CPU spikes caused by encryption overhead. These findings often led to both performance optimizations and improved security controls.
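The post-test log review lends itself to simple automation. The sketch below is a minimal version of that idea: the log format and regex patterns are illustrative placeholders, to be adapted to whatever your own services emit, but the shape (tally security-relevant anomalies across a run's logs, then review anything non-zero) is the point.

```python
import re
from collections import Counter

# Anomalies worth reviewing after each performance run. The patterns
# here are illustrative; match them to your own log format.
PATTERNS = {
    "failed_auth": re.compile(r"authentication failed", re.IGNORECASE),
    "conn_reset": re.compile(r"connection reset by peer", re.IGNORECASE),
    "tls_error": re.compile(r"tls handshake (error|failure)", re.IGNORECASE),
}

def scan_logs(lines):
    """Tally security-relevant anomalies in performance-test logs."""
    counts = Counter()
    for line in lines:
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

sample = [
    "10:01:02 INFO request ok 200 12ms",
    "10:01:03 WARN authentication failed for user=loadgen-7",
    "10:01:04 ERROR connection reset by peer during burst",
    "10:01:05 WARN authentication failed for user=loadgen-7",
]
print(scan_logs(sample))  # Counter({'failed_auth': 2, 'conn_reset': 1})
```

A non-empty tally does not prove a vulnerability, but it tells you which runs deserve a closer look alongside the latency and throughput graphs.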
Driving team buy-in
Technical integration was easier than cultural adoption. To drive engagement, I emphasized shared ownership rather than compliance. Developers and testers were encouraged to view vulnerabilities as performance risks.

We created small internal knowledge sessions where findings were shared across teams, not to assign blame, but to improve understanding.
Gradually, engineers began adding security checks to their own performance scripts, creating a multiplier effect. The turning point came when a security-related configuration issue was detected early through these integrated tests, preventing a performance regression in staging.
Engineers saw firsthand that early security validation reduced rework and last-minute firefighting. Within two quarters, more than half of new performance scripts included at least one embedded security assertion. We tracked this through code reviews and CI metrics, which showed an increasing percentage of builds validating both performance and security criteria.
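A CI metric like this can be computed with very little machinery. The sketch below counts how many performance scripts contain at least one security assertion; the marker names are hypothetical stand-ins for whatever helpers your own framework exposes, not our actual API.

```python
# Markers that count as an embedded security assertion. These names
# are illustrative placeholders, not a real framework's API.
SECURITY_MARKERS = ("check_tls", "assert_auth", "verify_encryption")

def security_coverage(script_texts):
    """Fraction of performance scripts containing at least one
    security assertion; reported as a CI metric each build."""
    if not script_texts:
        return 0.0
    with_security = sum(
        any(marker in text for marker in SECURITY_MARKERS)
        for text in script_texts
    )
    return with_security / len(script_texts)

scripts = [
    "run_load(); check_tls(host)",          # has a security assertion
    "run_load(); report_latency()",         # performance only
    "run_load(); verify_encryption(conn)",  # has a security assertion
]
print(f"{security_coverage(scripts):.0%}")  # 67%
```

Publishing the trend of this number per build, rather than a pass/fail gate, kept the metric informative without turning it into a compliance stick.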
Outcomes and learnings
Over time, integrating security into performance testing reduced late-stage security issues by nearly 40%. Mean time to resolution for vulnerabilities improved by nearly half, as issues were detected earlier in the development cycle rather than during release validation.
Engineers began to see security testing as part of their ‘definition of done’ – the set of acceptance criteria a change must satisfy before it is considered complete.
The once distinct boundary between performance and security engineering blurred, replaced by a unified focus on resilient performance. The key lessons from this journey include:
- Cultural alignment precedes process change.
- Automation sustains adoption.
- Metrics must reflect both speed and safety.
- Collaboration across roles drives continuous improvement.
Build a culture of security testing
Building a culture of security testing in a performance-driven environment requires persistence, empathy, and a shared vision of reliability. It is not merely a process shift; it is an evolution in how teams define quality.
By embedding security principles into existing performance workflows, we created a system that was not only fast but also resilient. The experience reinforced a simple truth: performance without security is temporary; secure performance is sustainable.