© 2026 TheSimplifiedTech. All rights reserved.

Secure Software Development Lifecycle (SDLC)

Embedding security across design, build, test, deploy, and operate, so that security is built in rather than bolted on as a final gate before release.

🎯Key Takeaways
🏗 Design phase is where the highest-leverage security work happens — threat modeling a feature costs hours and prevents weeks of incident response
💰 IBM research: the same vulnerability costs 1× to fix in design, 5× in development, 10× in testing, 15× at deploy, and 30–100× after a production breach
🚦 Gates block; guidance informs. Block on critical/high findings and policy violations. Track medium/low in tickets with defined SLAs
🤝 Security is cross-functional — developers write secure code, ops secures infrastructure, security teams provide tooling and standards. No single team "owns" it
🔄 The feedback loop is the product: SAST in CI catches issues in minutes; without it, the same findings surface in a quarterly pen test or a production incident
📋 Runbooks are security infrastructure — incident response plans written in advance, tested in drills, living in version control alongside the code


Why this matters

Why "security at the end" always fails

Every development team intends to do security. The typical plan: build the feature, get it working, then have security review it before release. It sounds reasonable. In practice, it produces a predictable failure mode: the security review happens the week before launch, finds 12 issues, and the team — under deadline pressure — fixes 2 critical ones and ships the rest as "known issues" or future tickets that never get resolved.

The "bolt-on" failure pattern

Features built without security requirements get retrofitted. Retrofitting breaks things. Teams feel friction, security feels like a blocker, and the adversarial dynamic begins. Developers route around security reviews; security adds more gates; developers route around those. The result: security theater without actual security.

A Secure SDLC breaks this cycle by making security part of every phase — not a separate phase at the end. When security requirements are written alongside functional requirements, threat models are done before a line of code, and SAST runs on every commit, the cost of each security issue is dramatically lower and the developer experience is better: fast, specific, actionable feedback instead of a list of findings 2 days before launch.

The phases of a Secure SDLC — and what security looks like at each one

  • Design — Threat modeling, security requirements, architecture review — before any code is written
  • Develop — Secure coding standards, SAST in IDE/CI, secret scanning, security-aware code review
  • Test — DAST against running app, dependency scanning, container scanning, security regression tests
  • Deploy — Secrets from vault, IaC policy checks, signed artifact verification, least-privilege deploy roles
  • Operate — Security monitoring, alert triage, incident response runbooks, patch management


Design Phase (fix cost if a vulnerability is found here: 1×)

⚠ Risk if skipped: missing requirements lead to design flaws that cost 30× to fix later.

  • Threat modeling
  • Security requirements
  • Architecture review
  • Data classification

IBM cost-of-fixing curve (relative cost to fix the same vulnerability): Design 1× → Develop 5× → Test 10× → Deploy 15× → Operate 30×+

The economics: IBM's cost-of-fixing curve

In 1994, IBM's System Science Institute published research on defect costs across the SDLC, and similar ratios have been reported many times since. The core finding: fixing a defect costs exponentially more the later in the lifecycle it is found.

| Phase found | Relative cost to fix | Why it costs more |
|---|---|---|
| Design | 1× | Change a document or diagram. No code exists yet. |
| Develop | 5× | Refactor code, update tests, redo review. |
| Test / QA | 10× | Code is integrated, tests assume the flawed behavior, regression risk. |
| Deploy / staging | 15× | Config is baked in, artifacts are built, deploy pipeline must re-run. |
| Production (pre-breach) | 30× | Emergency patch, user communication, audit trail, compliance review. |
| Production (post-breach) | 100×+ | Incident response, forensics, regulatory fines, customer notification, legal fees, reputational damage. |

This is the core business case for a Secure SDLC. Security is not a cost center that slows development — it is a cost-reduction program. Spending $500 on a threat modeling session prevents a $50,000 incident response. The math is not close.
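A back-of-the-envelope version of that math, using the $500 design-phase fix as the baseline (all dollar figures are illustrative):

```python
# Relative fix costs from the IBM curve, normalized to design = 1x
FIX_COST = {"design": 1, "develop": 5, "test": 10, "deploy": 15,
            "production": 30, "post_breach": 100}

DESIGN_FIX = 500  # illustrative cost ($) of fixing the flaw during design

def fix_cost(phase: str) -> int:
    """Dollar cost of fixing the same flaw when it is found in `phase`."""
    return DESIGN_FIX * FIX_COST[phase]

print(fix_cost("design"))       # → 500
print(fix_cost("post_breach"))  # → 50000
```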

The right metric: Mean Time to Detect (MTTD) in the pipeline

Measure how long it takes from a vulnerability being introduced to it being detected and reported to the developer. With SAST on every commit, MTTD can be minutes. Without it, the same issue may not surface for months — in a pen test, in a DAST scan, or in a production alert.
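As a sketch of how this metric could be computed (the finding records and field names here are hypothetical, not any scanner's output format):

```python
from datetime import datetime, timedelta
from statistics import mean

def pipeline_mttd(findings: list[dict]) -> timedelta:
    """Mean time from a vulnerability being introduced (commit time)
    to it being reported to the developer (detection time)."""
    gaps = [
        (f["detected_at"] - f["introduced_at"]).total_seconds()
        for f in findings
    ]
    return timedelta(seconds=mean(gaps))

# With SAST on every commit, detection follows the commit within minutes:
findings = [
    {"introduced_at": datetime(2024, 5, 1, 9, 0),
     "detected_at": datetime(2024, 5, 1, 9, 6)},   # caught by CI SAST
    {"introduced_at": datetime(2024, 5, 1, 14, 0),
     "detected_at": datetime(2024, 5, 1, 14, 4)},
]
print(pipeline_mttd(findings))  # → 0:05:00
```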

Phase 1 — Design: threat modeling before code exists

Threat modeling is the highest-leverage security activity in the SDLC. It asks four questions: What are we building? What can go wrong? What are we going to do about it? Did we do a good job? It happens in the design phase — before code, when changes cost nothing.

How to run a threat model (STRIDE method)

  1. Draw a data flow diagram of the feature: actors, processes, data stores, and trust boundaries.
  2. For each element and flow, apply STRIDE: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege.
  3. For each threat: assess likelihood and impact, then define a mitigation (a control that prevents or detects it).
  4. Document threats and mitigations in the design doc; add mitigations as security requirements for development.
  5. Schedule a follow-up after implementation to verify the mitigations were implemented correctly.


The output of a threat model is a list of security requirements for the feature — specific, testable, and tied to real threats. These flow directly into the development phase as acceptance criteria.

Make threat modeling lightweight and routine

Teams that treat threat modeling as a 2-hour structured meeting do it rarely. Teams that make it a 30-minute design phase checklist ("here are the 5 questions we answer for every new feature") do it consistently. Consistency beats depth. A lightweight model done every sprint beats a comprehensive model done annually.
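One way to keep it lightweight is to capture each threat as a small structured record whose mitigation maps directly to a security requirement. The schema below is illustrative, not a standard:

```python
from dataclasses import dataclass

STRIDE = ("Spoofing", "Tampering", "Repudiation", "Information Disclosure",
          "Denial of Service", "Elevation of Privilege")

@dataclass
class Threat:
    element: str      # DFD element or flow the threat applies to
    category: str     # one of the STRIDE categories
    likelihood: str   # "low" / "medium" / "high"
    impact: str
    mitigation: str   # the control that prevents or detects it

    def to_requirement(self) -> str:
        """Turn the mitigation into a testable security requirement."""
        assert self.category in STRIDE
        return f"[{self.category}] {self.element}: {self.mitigation}"

t = Threat(
    element="POST /api/orders",
    category="Tampering",
    likelihood="medium",
    impact="high",
    mitigation="validate order totals server-side; reject client-supplied prices",
)
print(t.to_requirement())
# → [Tampering] POST /api/orders: validate order totals server-side; reject client-supplied prices
```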

Phases 2–3 — Develop and Test: automated pipeline gates

Once code is being written, the primary security mechanism is the CI/CD pipeline. Every commit triggers a set of automated checks; findings are reported immediately to the developer while the context is fresh.

| Gate | When it runs | What it catches | Block threshold | Example tool |
|---|---|---|---|---|
| Secret scanning | Pre-commit + CI | API keys, tokens, passwords in code | Any secret found | Gitleaks, TruffleHog |
| SAST | CI on every PR | Injection flaws, path traversal, unsafe deserialization | Critical / High | Semgrep, CodeQL, SonarQube |
| Dependency scanning | CI on every build | Known CVEs in open-source libraries | Critical / High | Snyk, Dependabot, OWASP Dependency-Check |
| Container image scan | CI post-build | CVEs in base image and installed packages | Critical / High | Trivy, Snyk Container, Anchore |
| DAST | Post-deploy to staging | Runtime flaws: XSS, IDOR, auth bypass, misconfig | High / Critical | OWASP ZAP, Burp Enterprise |
| IaC policy scan | CI before apply | Misconfigs in Terraform, K8s manifests | Policy violation | OPA/Conftest, Checkov, tfsec |

Common mistake: too many blocking gates → developers bypass them

If every finding blocks the build, developers will mark findings as "false positive," get exceptions, or disable the tool. Block on critical/high severity and clear policy violations. Track medium/low in a security backlog with defined SLAs. The goal is fast feedback and clear ownership — not a gauntlet that slows every commit.
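That calibration can be expressed as a tiny policy function. This is a sketch of the idea, not any scanner's real configuration:

```python
from enum import Enum

class Action(Enum):
    BLOCK = "fail the build"
    TICKET = "create backlog ticket with SLA"
    REPORT = "report only"

# Illustrative thresholds: block on critical/high, ticket medium, report low.
# Secrets are special-cased: any secret blocks, regardless of severity.
def gate_action(severity: str, finding_type: str = "sast") -> Action:
    if finding_type == "secret":
        return Action.BLOCK
    if severity in ("critical", "high"):
        return Action.BLOCK
    if severity == "medium":
        return Action.TICKET  # e.g. 30-day SLA
    return Action.REPORT

assert gate_action("critical") is Action.BLOCK
assert gate_action("medium") is Action.TICKET
assert gate_action("low", "secret") is Action.BLOCK  # any secret blocks
```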

.github/workflows/secure-pipeline.yml

```yaml
# GitHub Actions — all core security gates on every PR
# Blocks merge on critical findings; reports medium/low as warnings
name: Secure CI Pipeline

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  # Gate 1: Secret scanning — block any secrets committed
  secret-scan:
    name: Secret Scanning
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history for git-log-based scanning
      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # Fails the job if any secrets are found

  # Gate 2: SAST — static analysis on every commit
  sast:
    name: Static Analysis (SAST)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/secrets
            p/python
            p/javascript
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
        # Blocks on critical/high; reports medium/low as warnings

  # Gate 3: Dependency scanning
  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Snyk
        uses: snyk/actions/node@master
        with:
          args: --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        # --severity-threshold=high: only fail on high/critical CVEs

  # Gate 4: Container image scanning (runs after build)
  container-scan:
    name: Container Image Scan
    runs-on: ubuntu-latest
    needs: [secret-scan, sast]
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:pr-${{ github.run_number }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:pr-${{ github.run_number }}
          format: sarif
          output: trivy-results.sarif
          severity: CRITICAL,HIGH
          exit-code: 1 # Fail on critical/high
      - name: Upload Trivy results to Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: trivy-results.sarif
```
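To make Gate 1 concrete, here is a deliberately naive sketch of pattern-based secret detection. Real scanners such as Gitleaks ship hundreds of rules plus entropy analysis, so treat this as an illustration of the idea only:

```python
import re

# Two illustrative patterns; real scanners ship far richer rule sets
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic api key": re.compile(r"""(?i)api[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]"""),
}

def scan(text: str) -> list[str]:
    """Return the names of any secret patterns found in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

clean = "api_url = 'https://example.com'"
leaked = "aws_key = 'AKIAABCDEFGHIJKLMNOP'"
print(scan(clean))   # → []
print(scan(leaked))  # → ['AWS access key']
```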

Phases 4–5 — Deploy and Operate: secure pipelines and monitoring

The deploy phase introduces a distinct class of vulnerabilities: infrastructure misconfigurations, over-privileged deployment roles, and hardcoded secrets in environment variables. The operate phase is where you find out what you missed.

Secure deploy checklist

  • Secrets from vault only — No secrets in env vars baked into images or config files. Pull from HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault at runtime
  • IaC policy scan before apply — Run OPA/Conftest or Checkov against Terraform plan before terraform apply. Block on policy violations (no public S3, RDS encrypted, security groups reviewed)
  • Signed artifact verification — Only deploy artifacts signed by your CI pipeline (cosign). Reject unsigned or tampered images at deploy time
  • Least-privilege deploy role — The deploy pipeline role has only the permissions needed to deploy — not admin. Rotate credentials after each deploy in ephemeral environments
  • Immutable infrastructure — Never SSH into production to fix things. Patch by replacing the instance/container with a new build. This forces all changes through the secure pipeline
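As an illustration of the first item, a service can resolve secrets at runtime from a file mounted by a Vault agent or secrets-manager sidecar rather than from values baked into the image; the mount path and key names below are hypothetical:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical mount point written by a Vault agent / CSI secrets driver
SECRETS_FILE = Path("/run/secrets/app.json")

def load_secret(name: str, secrets_file: Path = SECRETS_FILE) -> str:
    """Read a secret at runtime from the mounted secrets file.
    Nothing is baked into the image or committed to the repo."""
    secrets = json.loads(secrets_file.read_text())
    return secrets[name]

# Usage sketch, with a temp file standing in for the real mount:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"db_password": "injected-at-deploy"}, f)
print(load_secret("db_password", Path(f.name)))  # → injected-at-deploy
```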

Operate phase: the feedback loop that improves everything upstream

Production alerts and incidents are the most valuable feedback in the Secure SDLC. A production incident caused by an IDOR should immediately trigger: (1) a new SAST/DAST rule to catch the same pattern, (2) an update to the threat modeling checklist, and (3) a security requirement added to the coding standards. The incident makes the entire SDLC more robust for the next feature.

terraform/policy/no-public-s3.rego

```rego
# OPA policy — run with Conftest before terraform apply
# Blocks any plan that would create a public S3 bucket
package terraform.aws.s3

import future.keywords.if
import future.keywords.contains

# Deny any S3 bucket that does not have public access block configured
deny contains msg if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    resource.change.actions[_] == "create"
    not has_public_access_block(resource.address)
    msg := sprintf(
        "S3 bucket '%v' must have aws_s3_bucket_public_access_block configured",
        [resource.address]
    )
}

# Also deny if public access block is configured but allows public access
deny contains msg if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket_public_access_block"
    resource.change.after.block_public_acls == false
    msg := sprintf(
        "S3 bucket public access block '%v' must set block_public_acls = true",
        [resource.address]
    )
}

has_public_access_block(bucket_address) if {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket_public_access_block"
    resource.change.after.bucket == bucket_address
}
```
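The same "no public S3 bucket" logic can be sketched in plain Python against the JSON from `terraform show -json`, which makes it easy to unit-test. The plan structure shown is a minimal illustrative subset, and buckets are matched by name here rather than by resource address:

```python
def deny_public_s3(plan: dict) -> list[str]:
    """Flag newly created S3 buckets with no matching public access block."""
    changes = plan.get("resource_changes", [])
    # Bucket names that have an aws_s3_bucket_public_access_block attached
    blocked = {c["change"]["after"].get("bucket")
               for c in changes
               if c["type"] == "aws_s3_bucket_public_access_block"}
    msgs = []
    for c in changes:
        if (c["type"] == "aws_s3_bucket"
                and "create" in c["change"]["actions"]
                and c["change"]["after"].get("bucket") not in blocked):
            msgs.append(f"S3 bucket '{c['address']}' must have "
                        "aws_s3_bucket_public_access_block configured")
    return msgs

plan = {"resource_changes": [
    {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
     "change": {"actions": ["create"], "after": {"bucket": "logs"}}},
]}
print(deny_public_s3(plan))
# → ["S3 bucket 'aws_s3_bucket.logs' must have aws_s3_bucket_public_access_block configured"]
```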

Gates vs. guidance: the art of calibration

The most common Secure SDLC failure mode at mature organizations is over-gating. When every finding blocks, developers become desensitized: they seek exceptions, mark things as false positives, or route around the tools entirely. The pipeline becomes security theater: it runs, but nobody acts on the output.

| Finding type | Recommended response | Rationale |
|---|---|---|
| Critical CVE in dependency | Block build | Known exploit, high impact, fix is available (version bump) |
| High CVE in dependency | Block build | Exploitable, fix typically available, risk is real |
| Secret in code | Block commit (pre-commit hook) + block build | Secrets rotate-and-replace, no partial fix. Must be caught immediately |
| Critical SAST finding | Block PR merge | High confidence, severe impact; developer fixes with context fresh |
| High SAST finding | Block PR merge with override path | Require security team acknowledgment if overridden; creates audit trail |
| Medium SAST / DAST finding | Create ticket, SLA 30 days | Real risk but not urgent. Dashboards show ticket age to prevent accumulation |
| Low SAST finding | Report only, no SLA | Informational. Include in quarterly security tech debt review |
| IaC policy violation | Block apply | Config-as-code violations should never reach production; same threshold as code |

The override mechanism is as important as the gate

Every gate needs a clear, audited override path for legitimate edge cases. "Block but allow override with written justification and security lead sign-off" is better than an unconfigurable hard block that developers route around. The override creates an audit trail; the routing-around creates a blind spot.

Building a Secure SDLC at your organization: a realistic roadmap

Teams starting a Secure SDLC program face a common mistake: trying to implement everything at once and burning out or triggering developer backlash. The key is sequencing by impact and low friction.

Realistic Secure SDLC rollout — quarter by quarter

  1. Q1 — Visibility: Add secret scanning and dependency scanning to CI (informational, no blocks yet). Create a security findings dashboard. Hold a kick-off with developers: here is what we found, here is what we are going to do about it.
  2. Q2 — Block the obvious: Enable blocking on secrets in code and critical CVEs in dependencies. These have near-zero false positive rates and clear remediation paths. Add basic security requirements to the PR template.
  3. Q3 — SAST and threat modeling: Add SAST with blocking on critical/high for greenfield code; informational for legacy. Introduce threat modeling as a lightweight design-phase checklist for new features.
  4. Q4 — DAST and IaC: Add DAST to the staging pipeline post-deploy. Add IaC policy scanning (Conftest/Checkov) to infrastructure changes. Begin tracking a security tech debt backlog.
  5. Year 2 — Maturity: Full pipeline coverage, security champions in each team, incident-informed improvements, and an OWASP SAMM assessment to benchmark maturity and prioritize gaps.


The single most important principle

Make the secure path the easy path. If the secure default requires extra work, developers will take shortcuts. Pre-configured SAST, automatic secret scanning on commit, one-click threat model templates — the less friction, the more consistent the adoption. Security is a UX problem as much as a technical one.

How this might come up in interviews

Relevant for Senior DevSecOps, Platform Security, and Security Engineering roles. Interviewers often ask "tell me about the secure development process at your last job": they want to hear about specific gates, tools, and how you handled friction with developers. The topic also appears in "design a security program from scratch" system design questions.

Common questions:

  • Walk me through what a secure SDLC looks like at an organization you've worked at. What was missing?
  • How do you balance security gates with developer velocity? When do you block vs. inform?
  • A developer pushes code with a critical SAST finding but the release deadline is in 2 hours. What do you do?
  • What is threat modeling and when in the SDLC should it happen?
  • Describe the difference between a security gate and security guidance. Give examples of each.
  • How would you introduce a Secure SDLC at a startup that has never done security reviews?
  • What metrics would you use to measure the health of a Secure SDLC program?

Strong answer: Mentions the IBM cost curve or shift-left economics without prompting. Describes specific pipeline gates (e.g. "SAST fails build on critical findings, creates tickets for medium"). Talks about developer experience — making secure practices the path of least resistance. Has an opinion on gates vs. guidance trade-offs based on team maturity.

Red flags: Says "we had a security team that did a pen test before release," which is a sign of security theater. Cannot explain the difference between SAST and DAST. Thinks a secure SDLC means slowing down. Has never participated in a threat modeling session. Confuses authentication with authorization.


Before you move on: can you answer these?

Why does the same vulnerability cost exponentially more to fix as it moves through the SDLC?

Each phase embeds the vulnerability more deeply — more code is written on top of it, tests assume the flawed behavior, deployments bake it in. Fixing it requires unwinding all of that. A design flaw fixed in design changes a document; the same flaw fixed after production breach requires patching code, redeploying, incident response, user notification, and regulatory filing.

What is the difference between a security gate and security guidance?

A gate blocks progress — the build fails, the merge is rejected, the deploy is stopped. Use gates for must-not-ship issues: critical CVEs, secrets in code, failing security tests. Guidance informs but does not block — SAST medium findings create tickets but don't fail the build. Too many gates cause developers to work around them; too few let critical issues through.

At which phase of the SDLC does threat modeling happen, and what does it produce?

Threat modeling happens in the design phase, before code is written. It produces: a list of threats (what could go wrong), their likelihood and impact, and mitigations (controls that prevent or detect the threat). The output feeds into security requirements for the develop phase.

From the books

DevSecOps: A Leader's Guide to Producing Secure Software

Chapter 2: Embedding Security in the Development Lifecycle

The book frames the secure SDLC as a continuous feedback system, not a phase. Security activities at each stage feed back into earlier stages — a production incident informs the next threat model; a SAST finding pattern becomes a new coding standard. This framing shifts the goal from "pass the gate" to "reduce the vulnerability class over time."

🧠Mental Model

💡 Analogy

Building a house with security in mind vs. adding locks after construction. If you plan the alarm system, reinforced doors, and window locks during the blueprint phase, they fit perfectly and cost little. If you retrofit them after the house is built — cutting through walls, moving wiring — the same protection costs 10× more and works less well. The Secure SDLC is the blueprint approach for software.

⚡ Core Idea

Security work costs an order of magnitude more for every phase it is deferred. A design flaw fixed in the design phase costs 1 unit. The same flaw discovered in production and exploited can cost 30–100 units — in engineering time, customer trust, regulatory fines, and breach response. The Secure SDLC makes security part of every phase gate, not a separate "security phase" at the end.

🎯 Why It Matters

Most breaches exploit known vulnerability classes that would have been caught by routine security activities (threat modeling, SAST, dependency scanning). The gap is not technology — it's process. Teams that embed security gates in their pipeline catch 80%+ of vulnerabilities before production, at a fraction of the remediation cost.
