Embedding security across design, build, test, deploy, and operate so that security is built in, not bolted on — not added as a final gate before release.
Every development team intends to do security. The typical plan: build the feature, get it working, then have security review it before release. It sounds reasonable. In practice, it produces a predictable failure mode: the security review happens the week before launch, finds 12 issues, and the team — under deadline pressure — fixes 2 critical ones and ships the rest as "known issues" or future tickets that never get resolved.
The "bolt-on" failure pattern
Features built without security requirements get retrofitted. Retrofitting breaks things. Teams feel friction, security feels like a blocker, and the adversarial dynamic begins. Developers route around security reviews; security adds more gates; developers route around those. The result: security theater without actual security.
A Secure SDLC breaks this cycle by making security part of every phase — not a separate phase at the end. When security requirements are written alongside functional requirements, threat models are done before a line of code is written, and SAST runs on every commit, the cost of each security issue drops dramatically and the developer experience improves: fast, specific, actionable feedback instead of a list of findings two days before launch.
The phases of a Secure SDLC — and what security looks like at each one
IBM Cost-of-Fixing Curve — relative cost to fix the same vulnerability
In 1994, IBM's System Science Institute published research on defect costs across the SDLC. The exact multipliers vary from study to study, but the shape of the finding has held up: fixing a defect costs exponentially more the later in the lifecycle it is found.
| Phase found | Relative cost to fix | Why it costs more |
|---|---|---|
| Design | 1× | Change a document or diagram. No code exists yet. |
| Develop | 5× | Refactor code, update tests, redo review. |
| Test / QA | 10× | Code is integrated, tests assume the flawed behavior, regression risk. |
| Deploy / staging | 15× | Config is baked in, artifacts are built, deploy pipeline must re-run. |
| Production (pre-breach) | 30× | Emergency patch, user communication, audit trail, compliance review. |
| Production (post-breach) | 100×+ | Incident response, forensics, regulatory fines, customer notification, legal fees, reputational damage. |
This is the core business case for a Secure SDLC. Security is not a cost center that slows development — it is a cost-reduction program. Spending $500 on a threat modeling session prevents a $50,000 incident response. The math is not close.
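The business case can be made explicit as an expected-value calculation. A minimal sketch using the $500 session and $50,000 incident figures from above; the incident probability and risk-reduction factor are illustrative assumptions, not measured data:

```python
# Expected-value sketch of the threat-modeling business case.
# The probability and reduction factor are assumptions for illustration.
threat_model_cost = 500      # one design-phase session (figure from the text)
incident_cost = 50_000       # one production incident response (figure from the text)
p_incident_without = 0.10    # assumed yearly chance of a design-flaw incident
risk_reduction = 0.50        # assumed fraction of such incidents prevented

expected_savings = incident_cost * p_incident_without * risk_reduction
net_benefit = expected_savings - threat_model_cost
print(f"Expected savings: ${expected_savings:,.0f}, net: ${net_benefit:,.0f}")
```

Even with deliberately conservative assumptions, the session pays for itself several times over; with the realistic numbers from the text the margin is far larger.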
The right metric: Mean Time to Detect (MTTD) in the pipeline
Measure how long it takes from a vulnerability being introduced to it being detected and reported to the developer. With SAST on every commit, MTTD can be minutes. Without it, the same issue may not surface for months — in a pen test, in a DAST scan, or in a production alert.
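Computing this metric needs only two timestamps per finding: when the vulnerable commit landed and when a tool first reported it. A minimal sketch with hypothetical finding records:

```python
from datetime import datetime, timedelta

# Each finding: (commit timestamp, first-detection timestamp). Hypothetical data.
findings = [
    (datetime(2024, 3, 1, 10, 0), datetime(2024, 3, 1, 10, 4)),    # SAST on the PR: 4 min
    (datetime(2024, 3, 2, 9, 30), datetime(2024, 3, 2, 9, 33)),    # secret scan: 3 min
    (datetime(2024, 1, 15, 14, 0), datetime(2024, 3, 20, 11, 0)),  # found in a pen test: ~65 days
]

def mean_time_to_detect(findings):
    """Average gap between a vulnerability being introduced and detected."""
    deltas = [detected - introduced for introduced, detected in findings]
    return sum(deltas, timedelta()) / len(deltas)

print(mean_time_to_detect(findings))  # dominated by the one slow detection
```

Note how a single finding that waited for a pen test drags the mean from minutes to weeks; tracking the distribution per tool, not just the mean, shows which gates are doing the work.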
Threat modeling is the highest-leverage security activity in the SDLC. It asks four questions: What are we building? What can go wrong? What are we going to do about it? Did we do a good job? It happens in the design phase — before code exists, when changes are cheapest.
How to run a threat model (STRIDE method)
1. Draw a data flow diagram of the feature: actors, processes, data stores, and trust boundaries.
2. For each element and flow, apply STRIDE: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege.
3. For each threat: assess likelihood and impact, then define a mitigation (a control that prevents or detects it).
4. Document threats and mitigations in the design doc; add mitigations as security requirements for development.
5. Schedule a follow-up after implementation to verify the mitigations were implemented correctly.
The output of a threat model is a list of security requirements for the feature — specific, testable, and tied to real threats. These flow directly into the development phase as acceptance criteria.
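These requirements can live as structured data next to the design doc. A sketch of what one STRIDE finding might look like once turned into a testable acceptance criterion; the threat, element names, and mitigation are invented examples:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    stride: str       # STRIDE category
    element: str      # DFD element or flow it applies to
    description: str  # what can go wrong
    likelihood: str   # e.g. low / medium / high
    impact: str
    mitigation: str   # control that prevents or detects it

def to_requirement(t: Threat) -> str:
    """Turn a threat-model entry into an acceptance-criterion string."""
    return f"[{t.stride}] {t.element}: {t.mitigation} (mitigates: {t.description})"

# Hypothetical finding from a checkout-service threat model
t = Threat(
    stride="Tampering",
    element="client -> order API",
    description="order total can be modified in transit",
    likelihood="medium",
    impact="high",
    mitigation="server recomputes the total from line items; client value ignored",
)
print(to_requirement(t))
```

The point is the traceability: every requirement points back at a specific threat, so a reviewer can check the mitigation against the thing it is supposed to stop.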
Make threat modeling lightweight and routine
Teams that treat threat modeling as a 2-hour structured meeting do it rarely. Teams that make it a 30-minute design phase checklist ("here are the 5 questions we answer for every new feature") do it consistently. Consistency beats depth. A lightweight model done every sprint beats a comprehensive model done annually.
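One way to make it routine is a short template checked into the repo and filled in for every design doc. A hypothetical sketch — the five questions are one possible set, not a standard, and all field values are invented examples:

```yaml
# threat-model.yml — filled in during design review for each new feature
feature: checkout-discount-codes          # example values throughout
data_flow_diagram: docs/dfd/discounts.png
questions:
  what_new_data: "discount codes, redemption counts"
  who_can_call_it: "any authenticated user"
  trust_boundaries_crossed: "browser -> API -> billing service"
  worst_case_abuse: "brute-force valid codes; redeem a code twice"
  mitigations: "rate-limit code attempts; single-use enforcement in a DB transaction"
follow_up_review: 2024-07-01
```

Because it is a file in the repo, its presence can even be checked by CI for new feature directories, turning the habit into a cheap, visible gate.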
Once code is being written, the primary security mechanism is the CI/CD pipeline. Every commit triggers a set of automated checks; findings are reported immediately to the developer while the context is fresh.
| Gate | When it runs | What it catches | Block threshold | Example tool |
|---|---|---|---|---|
| Secret scanning | Pre-commit + CI | API keys, tokens, passwords in code | Any secret found | Gitleaks, TruffleHog |
| SAST | CI on every PR | Injection flaws, path traversal, unsafe deserialization | Critical / High | Semgrep, CodeQL, SonarQube |
| Dependency scanning | CI on every build | Known CVEs in open-source libraries | Critical / High | Snyk, Dependabot, OWASP Dependency-Check |
| Container image scan | CI post-build | CVEs in base image and installed packages | Critical / High | Trivy, Snyk Container, Anchore |
| DAST | Post-deploy to staging | Runtime flaws: XSS, IDOR, auth bypass, misconfig | Critical / High | OWASP ZAP, Burp Enterprise |
| IaC policy scan | CI before apply | Misconfigs in Terraform, K8s manifests | Policy violation | OPA/Conftest, Checkov, tfsec |
Common mistake: too many blocking gates → developers bypass them
If every finding blocks the build, developers will mark findings as "false positive," get exceptions, or disable the tool. Block on critical/high severity and clear policy violations. Track medium/low in a security backlog with defined SLAs. The goal is fast feedback and clear ownership — not a gauntlet that slows every commit.
```yaml
# GitHub Actions — all core security gates on every PR
# Blocks merge on critical findings; reports medium/low as warnings
name: Secure CI Pipeline

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  # Gate 1: Secret scanning — block any secrets committed
  secret-scan:
    name: Secret Scanning
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for git-log-based scanning
      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # Fails the job if any secrets are found

  # Gate 2: SAST — static analysis on every commit
  sast:
    name: Static Analysis (SAST)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/secrets
            p/python
            p/javascript
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
        # Blocks on critical/high; reports medium/low as warnings

  # Gate 3: Dependency scanning
  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Snyk
        uses: snyk/actions/node@master
        with:
          args: --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        # --severity-threshold=high: only fail on high/critical CVEs

  # Gate 4: Container image scanning (runs after build)
  container-scan:
    name: Container Image Scan
    runs-on: ubuntu-latest
    needs: [secret-scan, sast]
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:pr-${{ github.run_number }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:pr-${{ github.run_number }}
          format: sarif
          output: trivy-results.sarif
          severity: CRITICAL,HIGH
          exit-code: 1  # Fail on critical/high
      - name: Upload Trivy results to Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: trivy-results.sarif
```
The deploy phase introduces a distinct class of vulnerabilities: infrastructure misconfigurations, over-privileged deployment roles, and hardcoded secrets in environment variables. The operate phase is where you find out what you missed.
Secure deploy checklist
Run IaC policy scans before `terraform apply`; block on policy violations (no public S3 buckets, RDS encryption enabled, security groups reviewed).

Operate phase: the feedback loop that improves everything upstream
Production alerts and incidents are the most valuable feedback in the Secure SDLC. A production incident caused by an IDOR should immediately trigger: (1) a new SAST/DAST rule to catch the same pattern, (2) an update to the threat modeling checklist, and (3) a security requirement added to the coding standards. The incident makes the entire SDLC more robust for the next feature.
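Step (1) is concrete with most SAST tools: the incident pattern becomes a custom rule. A sketch of what such a Semgrep rule might look like; the function names `db.get_order` and `current_user_owns` stand in for hypothetical application code, and the incident ID is a placeholder:

```yaml
rules:
  - id: order-fetch-without-ownership-check
    languages: [python]
    severity: ERROR
    message: >
      Order fetched by a client-supplied ID without an ownership check.
      Rule added after an IDOR incident in the orders API.
    patterns:
      - pattern: db.get_order($ORDER_ID)
      - pattern-not-inside: |
          if current_user_owns($ORDER_ID):
              ...
```

Once this runs in CI, the vulnerability class that caused the incident is blocked at commit time for every future feature, which is exactly the feedback loop described above.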
```rego
# OPA policy — run with Conftest before terraform apply
# Blocks any plan that would create a public S3 bucket
package terraform.aws.s3

import future.keywords.if
import future.keywords.contains

# Deny any S3 bucket that does not have public access block configured
deny contains msg if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    resource.change.actions[_] == "create"
    not has_public_access_block(resource.address)
    msg := sprintf(
        "S3 bucket '%v' must have aws_s3_bucket_public_access_block configured",
        [resource.address]
    )
}

# Also deny if public access block is configured but allows public access
deny contains msg if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket_public_access_block"
    resource.change.after.block_public_acls == false
    msg := sprintf(
        "S3 bucket public access block '%v' must set block_public_acls = true",
        [resource.address]
    )
}

has_public_access_block(bucket_address) if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket_public_access_block"
    resource.change.after.bucket == bucket_address
}
```
The most common Secure SDLC failure mode at mature organizations is over-gating. When every finding blocks, developers build up immunity: they seek exceptions, mark things as false positives, or route around the tools entirely. The pipeline becomes security theater: it runs, but nobody acts on the output.
| Finding type | Recommended response | Rationale |
|---|---|---|
| Critical CVE in dependency | Block build | Known exploit, high impact, fix is available (version bump) |
| High CVE in dependency | Block build | Exploitable, fix typically available, risk is real |
| Secret in code | Block commit (pre-commit hook) + block build | A leaked secret must be rotated and replaced; there is no partial fix, so it must be caught immediately |
| Critical SAST finding | Block PR merge | High confidence, severe impact — developer fixes with context fresh |
| High SAST finding | Block PR merge with override path | Require security team acknowledgment if overridden; creates audit trail |
| Medium SAST / DAST finding | Create ticket, SLA 30 days | Real risk but not urgent. Dashboards show ticket age to prevent accumulation |
| Low SAST finding | Report only, no SLA | Informational. Include in security tech debt review quarterly |
| IaC policy violation | Block apply | Config-as-code violations should never reach production — same threshold as code |
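The table above amounts to a policy function the pipeline can evaluate on every finding. A minimal sketch mirroring the table; the category names are illustrative, and the handling of severities the table leaves unspecified (e.g. medium dependency CVEs) is an assumption:

```python
def gate_decision(finding_type: str, severity: str) -> str:
    """Map a pipeline finding to a response, mirroring the policy table.
    Returns one of: 'block', 'block-with-override', 'ticket-30d', 'report-only'."""
    if finding_type == "secret":
        return "block"                    # rotate-and-replace; catch immediately
    if finding_type == "iac-policy":
        return "block"                    # config violations never reach production
    if finding_type == "dependency-cve":
        # Assumption: medium/low CVEs go to the backlog (the table only
        # specifies critical/high).
        return "block" if severity in ("critical", "high") else "ticket-30d"
    if finding_type in ("sast", "dast"):
        if severity == "critical":
            return "block"
        if severity == "high":
            return "block-with-override"  # security-lead sign-off, audit trail
        if severity == "medium":
            return "ticket-30d"           # 30-day SLA
        return "report-only"              # low: quarterly tech-debt review
    return "ticket-30d"                   # unknown types default to tracked work

print(gate_decision("secret", "low"))     # block
print(gate_decision("sast", "medium"))    # ticket-30d
```

Encoding the policy as code (rather than per-tool settings scattered across CI) keeps the thresholds reviewable in one place and makes overrides auditable.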
The override mechanism is as important as the gate
Every gate needs a clear, audited override path for legitimate edge cases. "Block but allow override with written justification and security lead sign-off" is better than an unconfigurable hard block that developers route around. The override creates an audit trail; the routing-around creates a blind spot.
Teams starting a Secure SDLC program face a common mistake: trying to implement everything at once, then burning out or triggering developer backlash. The key is to sequence the rollout, starting with high-impact, low-friction controls.
Realistic Secure SDLC rollout — quarter by quarter
1. Q1 — Visibility: Add secret scanning and dependency scanning to CI (informational, no blocks yet). Create a security findings dashboard. Hold a kick-off with developers: here is what we found, here is what we are going to do about it.
2. Q2 — Block the obvious: Enable blocking on secrets in code and critical CVEs in dependencies. These have near-zero false positive rates and clear remediation paths. Add basic security requirements to the PR template.
3. Q3 — SAST and threat modeling: Add SAST with blocking on critical/high for greenfield code; informational for legacy. Introduce threat modeling as a lightweight design-phase checklist for new features.
4. Q4 — DAST and IaC: Add DAST to the staging pipeline post-deploy. Add IaC policy scanning (Conftest/Checkov) to infrastructure changes. Begin tracking a security tech debt backlog.
5. Year 2 — Maturity: Full pipeline coverage, security champions in each team, incident-informed improvements, and an OWASP SAMM assessment to benchmark maturity and prioritize gaps.
The single most important principle
Make the secure path the easy path. If the secure default requires extra work, developers will take shortcuts. Pre-configured SAST, automatic secret scanning on commit, one-click threat model templates — the less friction, the more consistent the adoption. Security is a UX problem as much as a technical one.
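As a concrete example of low friction, secret scanning can run locally on every commit with a few lines of pre-commit configuration. A sketch using the Gitleaks pre-commit hook; the `rev` pin is illustrative, so check the current release before copying:

```yaml
# .pre-commit-config.yaml — runs Gitleaks locally before each commit,
# so secrets are caught before they ever reach the remote.
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative pin; use the latest tagged release
    hooks:
      - id: gitleaks
```

A developer installs it once with `pre-commit install`; after that the secure behavior happens automatically, which is precisely the point.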
This topic appears in interviews for senior DevSecOps, Platform Security, and Security Engineering roles. It is often asked as "tell me about the secure development process at your last job" — interviewers want to hear about specific gates, tools, and how you handled friction with developers. It also appears in "design a security program from scratch" system design questions.
Strong answer: Mentions the IBM cost curve or shift-left economics without prompting. Describes specific pipeline gates (e.g. "SAST fails build on critical findings, creates tickets for medium"). Talks about developer experience — making secure practices the path of least resistance. Has an opinion on gates vs. guidance trade-offs based on team maturity.
Red flags: Says "we had a security team that did a pen test before release" (a sign of bolt-on security theater). Cannot explain the difference between SAST and DAST. Thinks a secure SDLC means slowing down. Has never participated in a threat modeling session. Confuses authentication with authorization.
Key takeaways
Why does the same vulnerability cost exponentially more to fix as it moves through the SDLC?
Each phase embeds the vulnerability more deeply — more code is written on top of it, tests assume the flawed behavior, deployments bake it in. Fixing it requires unwinding all of that. A design flaw fixed in design changes a document; the same flaw fixed after production breach requires patching code, redeploying, incident response, user notification, and regulatory filing.
What is the difference between a security gate and security guidance?
A gate blocks progress — the build fails, the merge is rejected, the deploy is stopped. Use gates for must-not-ship issues: critical CVEs, secrets in code, failing security tests. Guidance informs but does not block — SAST medium findings create tickets but don't fail the build. Too many gates cause developers to work around them; too few let critical issues through.
At which phase of the SDLC does threat modeling happen, and what does it produce?
Threat modeling happens in the design phase, before code is written. It produces: a list of threats (what could go wrong), their likelihood and impact, and mitigations (controls that prevent or detect the threat). The output feeds into security requirements for the develop phase.
From the books
DevSecOps: A Leader's Guide to Producing Secure Software
Chapter 2: Embedding Security in the Development Lifecycle
The book frames the secure SDLC as a continuous feedback system, not a phase. Security activities at each stage feed back into earlier stages — a production incident informs the next threat model; a SAST finding pattern becomes a new coding standard. This framing shifts the goal from "pass the gate" to "reduce the vulnerability class over time."
💡 Analogy
Building a house with security in mind vs. adding locks after construction. If you plan the alarm system, reinforced doors, and window locks during the blueprint phase, they fit perfectly and cost little. If you retrofit them after the house is built — cutting through walls, moving wiring — the same protection costs 10× more and works less well. The Secure SDLC is the blueprint approach for software.
⚡ Core Idea
Security work costs an order of magnitude more for every phase it is deferred. A design flaw fixed in the design phase costs 1 unit. The same flaw discovered in production and exploited can cost 30–100 units — in engineering time, customer trust, regulatory fines, and breach response. The Secure SDLC makes security part of every phase gate, not a separate "security phase" at the end.
🎯 Why It Matters
Most breaches exploit known vulnerability classes that would have been caught by routine security activities (threat modeling, SAST, dependency scanning). The gap is not technology — it's process. Teams that embed security gates in their pipeline catch 80%+ of vulnerabilities before production, at a fraction of the remediation cost.