© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

SAST and DAST: Static and Dynamic Security Testing

How SAST scans source code for vulnerabilities before runtime, how DAST attacks the running application to find what SAST misses, and how to run both in a DevSecOps pipeline.


~10 min read
What you'll learn
  • 🔍 SAST reads source code without running it — fast, fits in CI, catches injection, secrets, unsafe patterns. Runs in minutes on every commit
  • 🎯 DAST sends real attack payloads to a live app — catches IDOR, misconfig, broken auth, business logic flaws that have no code signature
  • 🚫 Neither scanner catches everything: IDOR and misconfigs are DAST-only; hardcoded secrets are SAST-only. Run both
  • ⚡ Pipeline rule: SAST on every PR commit → build → deploy to staging → DAST → promote to prod
  • 🎛 Tune before you block: new SAST rules need a false-positive review period before they block builds — developer trust depends on signal quality
  • 📊 Track MTTD (Mean Time to Detect): SAST should find issues in minutes; DAST in hours. Anything else is a pipeline gap


Why this matters

The fundamental difference: code vs. runtime

SAST and DAST test security in completely different ways, at completely different points in the pipeline — and that is exactly the point. They are designed to catch different classes of vulnerabilities.

How they compare:

  • What it analyzes: SAST examines source code, bytecode, or binaries; DAST probes the running application (HTTP requests/responses)
  • When it runs: SAST runs in CI on every commit, no app needed; DAST runs post-deploy to staging and requires a live app
  • Speed: SAST takes seconds to minutes; DAST takes minutes to hours
  • What it finds: SAST finds injection, secrets, unsafe patterns, and data-flow bugs; DAST finds IDOR, misconfiguration, broken auth, and runtime behavior flaws
  • What it misses: SAST misses business logic flaws, misconfigurations, and IDOR; DAST misses secrets in code and code-level injection patterns
  • False positive rate: higher for SAST (static analysis has limitations); lower for DAST (it tests actual behavior)

The complementary principle

Every major security framework (OWASP, NIST, BSIMM) recommends running both SAST and DAST. Teams that run only one scanner have a systematic blind spot. The fastest path to comprehensive automated security testing is: SAST on every PR + DAST on every staging deploy.

SAST vs DAST: What Each Scanner Finds


🔍SAST

Analyzes source code — no running app needed. Fast, catches issues on every commit.

🎯DAST

Tests the running application by sending real attack payloads. Catches runtime and config flaws.

SQL Injection

User input flows directly into a SQL query without parameterization.

critical
query = "SELECT * FROM users WHERE id = " + request.id

🔍 SAST detection

✅ SAST traces data flow from request.id to SQL query and flags the concatenation.

🎯 DAST detection

✅ DAST sends payloads like `1 OR 1=1` and detects query manipulation in responses.
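The detection above can be sketched as a boolean-based probe: send a condition that is always true and one that is always false, then compare the responses. This is an illustration, not a real scanner; `probeSqlInjection`, `httpGet`, and `vulnerableGet` are simulated stand-ins for real HTTP traffic.

```javascript
// Boolean-based SQL injection probe sketch: if injecting a truthy vs. falsy
// condition changes the response, the parameter likely reaches a query.
function probeSqlInjection(httpGet, url, param) {
  const truthy = httpGet(`${url}?${param}=1 OR 1=1`);
  const falsy = httpGet(`${url}?${param}=1 AND 1=2`);
  return truthy.body !== falsy.body;
}

// Simulated vulnerable endpoint: behaves as if the parameter were concatenated
// into its SQL query, so the injected condition changes the result set.
const vulnerableGet = (u) =>
  u.includes("1 OR 1=1") ? { body: "all rows" } : { body: "no rows" };

console.log(probeSqlInjection(vulnerableGet, "/users", "id")); // true
```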

✅ Fix

Use parameterized queries: db.query("SELECT * FROM users WHERE id = $1", [id])
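A minimal runnable sketch of the vulnerable and fixed versions, assuming the node-postgres style `{ text, values }` query shape; `unsafeUserQuery` and `buildUserQuery` are hypothetical helper names.

```javascript
// Vulnerable: attacker-controlled id is concatenated into the SQL string.
// This is the pattern SAST data-flow analysis flags.
function unsafeUserQuery(id) {
  return "SELECT * FROM users WHERE id = " + id;
}

// Safe: the query text is constant; the value travels separately as a
// parameter, so a payload can never change the query's structure.
function buildUserQuery(id) {
  return { text: "SELECT * FROM users WHERE id = $1", values: [id] };
}

const payload = "1 OR 1=1";
console.log(unsafeUserQuery(payload));      // SELECT * FROM users WHERE id = 1 OR 1=1
console.log(buildUserQuery(payload).text);  // SELECT * FROM users WHERE id = $1
```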


Neither scanner catches everything. Run both — SAST in CI on every commit, DAST post-deploy to staging on every release.

SAST in depth: how it finds vulnerabilities in code

SAST tools analyze source code using several techniques, each with different trade-offs between accuracy and speed:

SAST analysis techniques

  • Pattern matching — Regex or AST-level patterns — fast, low false positives. Finds: hardcoded secrets (AWS key pattern), banned functions (strcpy in C), obvious SQL concatenation
  • Data-flow analysis — Traces how data moves from source (user input) to sink (SQL query, file write). Finds: injection flaws, path traversal, XSS. Higher accuracy than pattern matching but slower
  • Control-flow analysis — Analyzes execution paths — finds: null pointer dereference, use-after-free, unreachable code with security implications
  • Taint analysis — Marks user-controlled data as "tainted" and tracks it through the codebase — flags when tainted data reaches a dangerous operation without sanitization
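Taint analysis can be illustrated with a toy wrapper. Real SAST engines reason over the AST or intermediate representation statically, not over runtime values, so treat this purely as a model of the idea; `taint`, `sanitize`, and `sink` are invented names.

```javascript
// Mark user-controlled data as tainted.
function taint(value) {
  return { value, tainted: true };
}

// Hypothetical sanitizer: escaping/validation would happen here; for the
// model it just clears the taint flag.
function sanitize(wrapped) {
  return { value: wrapped.value, tainted: false };
}

// A "sink" models a dangerous operation (SQL query, file write, eval):
// tainted data reaching it without sanitization produces a finding.
function sink(wrapped) {
  if (wrapped.tainted) {
    return { ok: false, finding: "tainted data reached sink without sanitization" };
  }
  return { ok: true };
}

const userInput = taint("1 OR 1=1");       // source: user-controlled
console.log(sink(userInput).ok);           // false: flagged
console.log(sink(sanitize(userInput)).ok); // true: sanitizer cleared the taint
```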

Secret scanning is a distinct SAST category

Tools like Gitleaks and TruffleHog are specialized SAST tools focused exclusively on finding credentials in code. Run them as a separate gate from general SAST — they're faster, have near-zero false positives, and the remediation (revoke + rotate the secret) is clear and immediate. Block on any secret found, no exceptions.
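In the same spirit, a toy secret scanner is little more than a set of high-confidence regexes run over file contents. The rule list here is illustrative, not the Gitleaks or TruffleHog rule set; the AWS key in the example is Amazon's documented placeholder key.

```javascript
// Minimal secret-scanning sketch: match credential patterns in file contents.
const SECRET_RULES = [
  { id: "aws-access-key", pattern: /AKIA[0-9A-Z]{16}/g },
  { id: "private-key-header", pattern: /-----BEGIN (RSA |EC )?PRIVATE KEY-----/g },
];

function scanForSecrets(filename, content) {
  const findings = [];
  for (const rule of SECRET_RULES) {
    for (const match of content.matchAll(rule.pattern)) {
      findings.push({ file: filename, rule: rule.id, match: match[0] });
    }
  }
  return findings;
}

const hits = scanForSecrets(
  "config.js",
  'const key = "AKIAIOSFODNN7EXAMPLE";' // AWS's documented example key
);
console.log(hits.length); // 1, so block the commit
```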

.semgrep/custom-rules.yml

# Custom Semgrep rules for common vulnerability patterns
# Run: semgrep --config .semgrep/ src/

rules:
  # Rule 1: Detect SQL injection via string concatenation
  - id: sql-injection-string-concat
    patterns:
      - pattern: |
          $QUERY = "..." + $INPUT
          $DB.query($QUERY, ...)
      # Allowlist safe patterns
      - pattern-not: |
          $QUERY = "..." + $SAFE_VALUE
    message: |
      SQL injection risk: user input concatenated into query string.
      Use parameterized queries: db.query("SELECT ... WHERE id = $1", [id])
    severity: ERROR
    languages: [javascript, typescript]
    metadata:
      cwe: CWE-89
      owasp: A03:2021

  # Rule 2: Detect hardcoded AWS credentials
  - id: hardcoded-aws-key
    patterns:
      - pattern-regex: 'AKIA[0-9A-Z]{16}'
    message: |
      Hardcoded AWS access key detected. Remove from code immediately.
      Load credentials from environment variables or AWS IAM roles.
    severity: ERROR
    languages: [javascript, typescript, python, go, java]
    metadata:
      cwe: CWE-798

  # Rule 3: Detect use of eval() with user input
  - id: eval-with-user-input
    patterns:
      - pattern: eval($INPUT)
      - pattern-not: eval("literal string")
    message: |
      eval() with dynamic input is a code injection risk.
      Refactor to avoid eval() entirely.
    severity: WARNING
    languages: [javascript, typescript]
    metadata:
      cwe: CWE-95

DAST in depth: testing the running application

DAST tools interact with the application the same way an attacker would — by sending HTTP requests and analyzing responses. They are most effective when configured with an authenticated session so they can reach protected endpoints.

How DAST scans work

1. Spider / crawl: discover all endpoints by following links, parsing JavaScript, and consuming the API spec (OpenAPI/Swagger if available)

2. Passive scan: analyze responses for security headers, cookie flags, and information disclosure without sending attack payloads

3. Active scan: send attack payloads (SQL injection strings, XSS vectors, path traversal) to each parameter and analyze responses for signs of exploitation

4. Authenticated scan: repeat all of the above with an authenticated session — discovers endpoints only accessible to logged-in users

5. Report: categorize findings by severity, include request/response pairs as evidence, generate remediation guidance
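The passive-scan step can be approximated as a response-header audit. The required-header list below is a common baseline, not any particular tool's rule set, and `passiveHeaderScan` is an invented name.

```javascript
// Passive-scan sketch: inspect response headers without attack payloads.
const REQUIRED_HEADERS = [
  "strict-transport-security",
  "content-security-policy",
  "x-content-type-options",
];

function passiveHeaderScan(headers) {
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return REQUIRED_HEADERS.filter((h) => !present.has(h)).map((h) => ({
    severity: "medium",
    finding: `missing security header: ${h}`,
  }));
}

const findings = passiveHeaderScan({
  "Content-Type": "text/html",
  "Strict-Transport-Security": "max-age=31536000",
});
console.log(findings.length); // 2: CSP and X-Content-Type-Options are missing
```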


DAST against production: use with extreme care

DAST sends real attack payloads — SQL injection strings, large payloads, automated form submissions. Running aggressive DAST against a production application can: corrupt data, trigger real transactions, send emails to real users, or cause outages. Always run automated DAST against staging. For production, use passive scanning only (observation without attack payloads).

DAST-only vulnerability classes, how DAST detects each, and why SAST misses them:

  • IDOR: DAST logs in as User A, changes a resource ID to User B's, and checks whether data returns. SAST misses it because there is no code pattern; authorization is a business logic check, not a syntax rule
  • Broken auth: DAST tries expired tokens, manipulates JWT signatures, and tests session fixation. SAST misses it because auth bypass depends on server config and session state, which are invisible to static analysis
  • Security misconfig: DAST checks response headers, tests default credentials, and probes admin endpoints. SAST misses it because config values live in env vars or server settings, not in source code
  • Insecure deserialization: DAST sends crafted serialized payloads and looks for command execution. SAST misses it because the runtime behavior of deserialization libraries depends on actual data and server state
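The IDOR check translates into a simple probe, sketched here with simulated endpoints in place of real authenticated HTTP calls; `probeIdor`, `fetchResource`, and both fake endpoints are invented for illustration.

```javascript
// DAST-style IDOR probe sketch: request User B's resource using User A's
// session and flag the endpoint if data comes back.
function probeIdor(fetchResource, sessionA, userBResourceId) {
  const res = fetchResource(sessionA, userBResourceId);
  return res.status === 200 && res.body != null
    ? { vulnerable: true, evidence: `session for ${sessionA.user} read ${userBResourceId}` }
    : { vulnerable: false };
}

// Simulated vulnerable endpoint: returns the record without checking ownership.
const vulnerableFetch = (session, id) => ({ status: 200, body: { id, owner: "user-b" } });

// Simulated fixed endpoint: enforces ownership and returns 403 otherwise.
const fixedFetch = (session, id) => ({ status: 403, body: null });

console.log(probeIdor(vulnerableFetch, { user: "user-a" }, "order-42").vulnerable); // true
console.log(probeIdor(fixedFetch, { user: "user-a" }, "order-42").vulnerable);      // false
```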
.github/workflows/dast-staging.yml

# Run OWASP ZAP DAST scan against staging after deploy
# Blocks promotion to production on high/critical findings
name: DAST Staging Scan

on:
  workflow_call:
    inputs:
      staging-url:
        required: true
        type: string

jobs:
  dast-scan:
    name: OWASP ZAP DAST Scan
    runs-on: ubuntu-latest
    steps:
      - name: ZAP Baseline Scan (passive — no attack payloads)
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: ${{ inputs.staging-url }}
          rules_file_name: .zap/rules.tsv
          cmd_options: '-a'  # Include alpha-quality passive rules

      - name: ZAP Full Scan (active — sends attack payloads)
        uses: zaproxy/action-full-scan@v0.9.0
        with:
          target: ${{ inputs.staging-url }}
          rules_file_name: .zap/rules.tsv
          fail_action: true  # Fail workflow on high/critical findings
          cmd_options: '-z "-config scanner.threadPerHost=5"'

      - name: Upload ZAP report
        if: always()  # Upload even if the scan fails
        uses: actions/upload-artifact@v4
        with:
          name: zap-report
          path: report_html.html

Running both in a complete security pipeline

The full security testing pipeline combines SAST and DAST at the optimal points:

Complete pipeline with SAST + DAST

1. Pre-commit: Secret scan (Gitleaks pre-commit hook) — blocks any commit containing a credential

2. PR/CI: SAST (Semgrep, CodeQL) — blocks merge on critical/high; reports medium/low as comments on the PR

3. CI: Dependency scan (Snyk, Dependabot) — blocks on critical CVEs in open-source libraries

4. CI: Container image scan (Trivy) — blocks on critical CVEs in base image and installed packages

5. Deploy to staging: Run automated DAST (OWASP ZAP) — baseline passive scan + active scan

6. DAST gate: Block promotion to production on DAST high/critical findings

7. Production: Scheduled passive DAST + monitoring (SIEM, WAF alerts) — no active payloads
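The DAST gate in step 6 reduces to a small severity filter. A minimal sketch, assuming findings carry a `severity` field; the blocking severity names are illustrative and should match whatever your scanner actually reports.

```javascript
// Promotion-gate sketch: block the production deploy when any finding from
// the staging DAST run is high or critical.
const BLOCKING = new Set(["critical", "high"]);

function canPromote(findings) {
  const blockers = findings.filter((f) => BLOCKING.has(f.severity));
  return { promote: blockers.length === 0, blockers };
}

const result = canPromote([
  { id: "missing-csp", severity: "medium" },
  { id: "idor-orders-api", severity: "high" },
]);
console.log(result.promote); // false: the IDOR finding blocks promotion
```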


Publish all findings to one security dashboard

SAST findings, DAST findings, dependency CVEs, and image scan results should all flow into a single security dashboard (Defect Dojo, GitHub Security tab, or your SIEM). Developers should have one place to see their security backlog, not six different tool UIs. Unified visibility = faster remediation.
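One way to sketch that ingestion path: normalize each tool's raw finding into a shared shape before it reaches the dashboard. The field names here are illustrative, not Defect Dojo's actual import schema, and `normalize` is an invented helper.

```javascript
// Dashboard-ingestion sketch: map scanner-specific fields to one shared shape
// so developers see a single backlog instead of six tool UIs.
function normalize(source, raw) {
  return {
    source,                               // "sast" | "dast" | "deps" | "image"
    id: raw.ruleId ?? raw.alert ?? raw.cve,
    severity: (raw.severity ?? "unknown").toLowerCase(),
    location: raw.file ?? raw.url ?? raw.package,
  };
}

const unified = [
  normalize("sast", { ruleId: "sql-injection", severity: "HIGH", file: "api/users.js" }),
  normalize("dast", { alert: "missing-csp", severity: "Medium", url: "/login" }),
  normalize("deps", { cve: "CVE-2021-23337", severity: "high", package: "lodash" }),
];
console.log(unified.map((f) => f.severity)); // [ 'high', 'medium', 'high' ]
```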

Tuning for signal quality: reducing false positives

The biggest failure mode with SAST is over-blocking on false positives until developers disable or bypass the tool. Signal quality is not optional — it is the foundation of developer trust.

Practical false positive management

  • Start in warn-only mode — Run new rules as informational for 2-4 weeks. Review findings with the team to classify true positives vs. false positives before enabling blocking
  • Tune rule confidence — Most SAST tools let you configure which rules block vs. warn. Block only on high-confidence rules (low false positive rate). Warn on everything else
  • Create allowlists for known safe patterns — If a rule flags a pattern that your codebase uses safely (e.g., eval() in a template engine that is always sandboxed), add it to the allowlist with a comment explaining why it is safe
  • Track false positive rate per rule — Any rule with >20% false positives should be tuned or demoted to warn. Developers will trust the tool when it is right >90% of the time
  • Require written justification for overrides — When a developer marks a finding as "false positive" or gets an exception, require a comment in the code and a ticket. This creates an audit trail and discourages dismissing real issues
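The per-rule false-positive tracking described above might look like this in code, assuming triage decisions are recorded as simple verdicts; `triageRules` is an invented helper, and the 20% threshold mirrors the guideline in the list.

```javascript
// False-positive tracking sketch: compute the FP rate per rule from triage
// decisions and demote any rule above the threshold from "block" to "warn".
function triageRules(decisions, threshold = 0.2) {
  const byRule = new Map();
  for (const d of decisions) {
    const s = byRule.get(d.rule) ?? { total: 0, falsePositives: 0 };
    s.total += 1;
    if (d.verdict === "false-positive") s.falsePositives += 1;
    byRule.set(d.rule, s);
  }
  const out = {};
  for (const [rule, s] of byRule) {
    const fpRate = s.falsePositives / s.total;
    out[rule] = { fpRate, action: fpRate > threshold ? "warn" : "block" };
  }
  return out;
}

const report = triageRules([
  { rule: "sql-injection", verdict: "true-positive" },
  { rule: "sql-injection", verdict: "true-positive" },
  { rule: "eval-user-input", verdict: "false-positive" },
  { rule: "eval-user-input", verdict: "true-positive" },
]);
console.log(report["eval-user-input"].action); // warn (50% false positives)
```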
How this might come up in interviews

This topic targets DevSecOps Engineer and Application Security Engineer roles, most often in "how would you implement a security testing pipeline?" questions. It also appears in senior developer interviews as "how do you ensure code security before it reaches production?"

Common questions:

  • What is the difference between SAST and DAST? Give examples of vulnerability types each catches.
  • Why can't SAST catch IDOR vulnerabilities? Why can't DAST catch hardcoded secrets?
  • Where does SAST run in your CI/CD pipeline? Where does DAST run? Why is the placement different?
  • You have a SAST tool reporting 200 findings on a legacy codebase. How do you decide what to block on and what to track as tech debt?
  • A developer argues that SAST has too many false positives and wants to disable it. How do you respond?
  • What is IAST and how does it differ from SAST and DAST?
  • Name 3 specific vulnerability types that DAST finds that SAST typically misses.

Strong answer: Immediately gives the IDOR or misconfig example for "DAST-only." Explains pipeline placement with the reasoning (SAST has no app to connect to; DAST needs a deployed app). Talks about false positive rates and how to tune rules. Mentions IAST or RASP as advanced approaches. Has a view on blocking vs. informing thresholds.

Red flags: Thinks SAST and DAST do the same thing. Cannot name a specific vulnerability DAST catches that SAST misses. Says "we use SAST so we're covered." Does not know what a false positive is or how to tune for it. Has never integrated DAST into a CI/CD pipeline.

Quick check · SAST and DAST: Static and Dynamic Security Testing


Which vulnerability types can SAST detect that DAST typically cannot?

Key takeaways

  • 🔍 SAST reads source code without running it — fast, fits in CI, catches injection, secrets, unsafe patterns. Runs in minutes on every commit
  • 🎯 DAST sends real attack payloads to a live app — catches IDOR, misconfig, broken auth, business logic flaws that have no code signature
  • 🚫 Neither scanner catches everything: IDOR and misconfigs are DAST-only; hardcoded secrets are SAST-only. Run both
  • ⚡ Pipeline rule: SAST on every PR commit → build → deploy to staging → DAST → promote to prod
  • 🎛 Tune before you block: new SAST rules need a false-positive review period before they block builds — developer trust depends on signal quality
  • 📊 Track MTTD (Mean Time to Detect): SAST should find issues in minutes; DAST in hours. Anything else is a pipeline gap
Before you move on: can you answer these?

A developer hardcodes an AWS access key in a configuration file. Which scanner — SAST or DAST — detects this, and why can't the other?

SAST detects it — secret scanners (a form of SAST) scan source code and match patterns like the AKIA prefix of AWS access keys. DAST cannot detect it because DAST tests a running application by sending HTTP requests — it has no visibility into source code or the static files and environment variables inside the application.

An API endpoint returns data for any user ID provided, without checking that the authenticated user owns that data (IDOR). Which scanner detects this?

DAST detects it. DAST logs in as User A, records their user ID, then substitutes User B's ID and checks if data is returned. SAST cannot detect IDOR because there is no code-level pattern — the code correctly reads the ID parameter and queries the database; the missing piece is a business logic check that SAST has no way to reason about.

In a CI/CD pipeline, why does SAST run earlier (on every commit) while DAST runs later (post-deploy to staging)?

SAST needs only source code — no running application, no network, no deploy. It can run in seconds on every commit, giving immediate feedback. DAST needs a fully deployed application to send requests to — it cannot run until the app exists in a test environment. The timing reflects what each tool requires to function.

From the books

DevSecOps: A Leader's Guide to Producing Secure Software

Chapter 5: Testing for Security — Beyond Unit Tests

The book distinguishes "code-level" security testing (SAST, secret scanning) from "behavior-level" testing (DAST, pen testing) and argues that organizations mature their security programs by adding behavior-level testing after establishing code-level practices. Most teams start with SAST because it is fast and CI-friendly; DAST comes later because it requires a running environment. The goal is both running continuously.

🧠Mental Model

💡 Analogy

SAST is a code inspector who reads blueprints and flags structural flaws before the building is constructed. DAST is a penetration tester who shows up after the building is built and tries to break in — testing the locks, probing the windows, and discovering that the alarm system was accidentally left off. You need both: the inspector catches design flaws early and cheaply; the pen tester finds what survived the inspection and what the building's actual environment introduced.

⚡ Core Idea

SAST and DAST are complementary, not competing. SAST runs on source code with no application running — fast, integrates into CI on every commit, finds injection flaws, secrets, and unsafe patterns by tracing data flow. DAST sends real attack payloads to a live application — finds runtime misconfigurations, IDOR, broken access control, and business logic flaws that have no code signature. Each has a blind spot the other covers.

🎯 Why It Matters

Teams that run only SAST miss runtime and configuration vulnerabilities (IDOR, misconfig, business logic). Teams that run only DAST find issues too late — after code is merged and deployed, when fixes are expensive. The pipeline pattern of SAST on every commit + DAST on every staging deploy catches the broadest vulnerability surface at the point where fixes cost the least.

