How SAST scans source code for vulnerabilities before runtime, how DAST attacks the running application to find what SAST misses, and how to run both in a DevSecOps pipeline.
SAST and DAST test security in completely different ways, at completely different points in the pipeline — and that is exactly the point. They are designed to catch different classes of vulnerabilities.
| | SAST | DAST |
|---|---|---|
| What it analyzes | Source code, bytecode, or binaries | Running application (HTTP requests/responses) |
| When it runs | In CI on every commit — no app needed | Post-deploy to staging — requires live app |
| Speed | Seconds to minutes | Minutes to hours |
| What it finds | Injection, secrets, unsafe patterns, data-flow bugs | IDOR, misconfig, broken auth, runtime behavior |
| What it misses | Business logic flaws, misconfigs, IDOR | Secrets in code, code-level injection patterns |
| False positive rate | Higher (static analysis has limitations) | Lower (tests actual behavior) |
The complementary principle
Every major security framework (OWASP, NIST, BSIMM) recommends running both SAST and DAST. Teams that run only one scanner have a systematic blind spot. The fastest path to comprehensive automated security testing is: SAST on every PR + DAST on every staging deploy.
Worked example: SQL injection, as seen by each scanner. SAST analyzes source code with no running app needed, so it is fast and catches issues on every commit; DAST tests the running application by sending real attack payloads, so it catches runtime and configuration flaws.
User input flows directly into a SQL query without parameterization.
🔍 SAST detection
✅ SAST traces data flow from request.id to SQL query and flags the concatenation.
🎯 DAST detection
✅ DAST sends payloads like `1 OR 1=1` and detects query manipulation in responses.
✅ Fix
Use parameterized queries: `db.query("SELECT * FROM users WHERE id = $1", [id])`
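To make the contrast concrete, here is a minimal sketch of what reaches the database in each case. The query shapes follow the node-postgres style used in the fix above; no real database client is involved.

```javascript
// VULNERABLE: user input becomes part of the SQL text itself.
function buildConcatenated(userInput) {
  return "SELECT * FROM users WHERE id = " + userInput;
}

// SAFE: the SQL text is fixed; the input travels separately as a bound value.
function buildParameterized(userInput) {
  return { text: "SELECT * FROM users WHERE id = $1", values: [userInput] };
}

const payload = "1 OR 1=1";

// The concatenated query now matches every row: the payload changed the SQL.
console.log(buildConcatenated(payload));
// "SELECT * FROM users WHERE id = 1 OR 1=1"

// The parameterized query text is unchanged; "1 OR 1=1" is just data,
// an id value that will simply fail to match.
console.log(buildParameterized(payload).text);
// "SELECT * FROM users WHERE id = $1"
```

This is exactly the distinction both scanners probe from opposite sides: SAST flags the concatenation in the first function, while DAST observes the behavioral difference the second function prevents.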
Coverage summary
Neither scanner catches everything. Run both — SAST in CI on every commit, DAST post-deploy to staging on every release.
SAST tools analyze source code using several techniques, each with different trade-offs between accuracy and speed:
SAST analysis techniques
Secret scanning is a distinct SAST category
Tools like Gitleaks and TruffleHog are specialized SAST tools focused exclusively on finding credentials in code. Run them as a separate gate from general SAST — they're faster, have near-zero false positives, and the remediation (revoke + rotate the secret) is clear and immediate. Block on any secret found, no exceptions.
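At its core, this kind of scanning is pattern matching over file contents. The sketch below uses a small illustrative sample of patterns; real tools like Gitleaks and TruffleHog add entropy analysis, git-history scanning, and hundreds of rules.

```javascript
// Minimal sketch of a secret scanner: match high-signal credential
// patterns against source text. The pattern list is illustrative only.
const SECRET_PATTERNS = [
  { id: "aws-access-key", regex: /AKIA[0-9A-Z]{16}/g },
  { id: "github-pat", regex: /ghp_[A-Za-z0-9]{36}/g },
  { id: "private-key-header", regex: /-----BEGIN (RSA |EC )?PRIVATE KEY-----/g },
];

function scanForSecrets(source) {
  const findings = [];
  for (const { id, regex } of SECRET_PATTERNS) {
    for (const match of source.matchAll(regex)) {
      findings.push({ rule: id, match: match[0] });
    }
  }
  return findings;
}

// A hardcoded key (AWS's documented fake example key) is caught;
// loading from the environment is not.
console.log(scanForSecrets('const key = "AKIAIOSFODNN7EXAMPLE";').length); // 1
console.log(scanForSecrets("const key = process.env.AWS_ACCESS_KEY_ID;").length); // 0
```

Note why this gate has near-zero false positives: the patterns are so specific (fixed prefixes, fixed lengths) that a match is almost always a real credential.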
```yaml
# Custom Semgrep rules for common vulnerability patterns
# Run: semgrep --config .semgrep/ src/

rules:
  # Rule 1: Detect SQL injection via string concatenation
  - id: sql-injection-string-concat
    patterns:
      - pattern: |
          $QUERY = "..." + $INPUT
          $DB.query($QUERY, ...)
      - pattern-not: |
          $QUERY = "..." + $SAFE_VALUE  # allowlist safe patterns
    message: |
      SQL injection risk: user input concatenated into query string.
      Use parameterized queries: db.query("SELECT ... WHERE id = $1", [id])
    severity: ERROR
    languages: [javascript, typescript]
    metadata:
      cwe: CWE-89
      owasp: A03:2021

  # Rule 2: Detect hardcoded AWS credentials
  - id: hardcoded-aws-key
    patterns:
      - pattern-regex: 'AKIA[0-9A-Z]{16}'
    message: |
      Hardcoded AWS access key detected. Remove from code immediately.
      Load credentials from environment variables or AWS IAM roles.
    severity: ERROR
    languages: [javascript, typescript, python, go, java]
    metadata:
      cwe: CWE-798

  # Rule 3: Detect use of eval() with user input
  - id: eval-with-user-input
    patterns:
      - pattern: eval($INPUT)
      - pattern-not: eval("literal string")
    message: |
      eval() with dynamic input is a code injection risk.
      Refactor to avoid eval() entirely.
    severity: WARNING
    languages: [javascript, typescript]
    metadata:
      cwe: CWE-95
```
DAST tools interact with the application the same way an attacker would — by sending HTTP requests and analyzing responses. They are most effective when configured with an authenticated session so they can reach protected endpoints.
How DAST scans work
1. Spider / crawl: discover all endpoints by following links, parsing JavaScript, and consuming the API spec (OpenAPI/Swagger if available)
2. Passive scan: analyze responses for security headers, cookie flags, and information disclosure without sending attack payloads
3. Active scan: send attack payloads (SQL injection strings, XSS vectors, path traversal) to each parameter and analyze responses for signs of exploitation
4. Authenticated scan: repeat all of the above with an authenticated session — discovers endpoints only accessible to logged-in users
5. Report: categorize findings by severity, include request/response pairs as evidence, generate remediation guidance
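The passive-scan step can be sketched as a pure function over response headers. The required-header baseline below is illustrative (drawn from common hardening guidance), not any specific scanner's rule set.

```javascript
// Sketch of a DAST passive check: inspect response headers for missing
// security headers and unsafe cookie flags. No attack payloads are sent.
const REQUIRED_HEADERS = [
  "strict-transport-security",
  "x-content-type-options",
  "content-security-policy",
];

function passiveScan(headers) {
  // Normalize header names, since HTTP headers are case-insensitive.
  const normalized = Object.fromEntries(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v])
  );
  const report = [];
  for (const name of REQUIRED_HEADERS) {
    if (!(name in normalized)) {
      report.push(`missing security header: ${name}`);
    }
  }
  const cookie = normalized["set-cookie"] || "";
  if (cookie && !/;\s*httponly/i.test(cookie)) {
    report.push("session cookie missing HttpOnly flag");
  }
  return report;
}

console.log(passiveScan({
  "Content-Type": "text/html",
  "Set-Cookie": "session=abc123; Path=/",
}));
// flags the three missing headers plus the cookie issue
```

Because this kind of check only observes responses, it is the one DAST mode that is safe to run on a schedule against production.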
DAST against production: use with extreme care
DAST sends real attack payloads — SQL injection strings, large payloads, automated form submissions. Running aggressive DAST against a production application can: corrupt data, trigger real transactions, send emails to real users, or cause outages. Always run automated DAST against staging. For production, use passive scanning only (observation without attack payloads).
| DAST vulnerability class | How DAST detects it | Why SAST misses it |
|---|---|---|
| IDOR | Log in as User A, change resource ID to User B's, check if data returns | No code pattern — authorization is a business logic check, not a syntax rule |
| Broken auth | Try expired tokens, manipulate JWT signatures, test session fixation | Auth bypass depends on server config and session state, invisible to static analysis |
| Security misconfig | Check response headers, test default credentials, probe admin endpoints | Config values are env vars or server settings — not visible in source code |
| Insecure deserialization | Send crafted serialized payloads and look for command execution | Runtime behavior of deserialization libraries depends on actual data and server state |
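The IDOR probe in the first row can be sketched as a simple differential test. Here `vulnerableApi` and `fixedApi` are simulated endpoints standing in for real HTTP calls, so the logic of the check is visible without a live server.

```javascript
// Sketch of a DAST-style IDOR check: authenticated as User A, request
// A's own resource, then B's, and flag the endpoint if both succeed.
function checkIdor(fetchResource, sessionA, idA, idB) {
  const own = fetchResource(sessionA, idA);
  const other = fetchResource(sessionA, idB);
  // Vulnerable if User A's session can read User B's resource.
  return own.status === 200 && other.status === 200;
}

const records = { 1: { owner: "A", data: "A's orders" }, 2: { owner: "B", data: "B's orders" } };

// Simulated API with NO ownership check: any valid session reads any id.
// Note the code "correctly" reads the id and queries the store, which is
// exactly why SAST sees nothing wrong here.
function vulnerableApi(session, id) {
  return records[id] ? { status: 200, body: records[id].data } : { status: 404 };
}

// Simulated fixed API: the business-logic ownership check is added.
function fixedApi(session, id) {
  const rec = records[id];
  const user = session === "session-A" ? "A" : "B";
  return rec && rec.owner === user ? { status: 200, body: rec.data } : { status: 403 };
}

console.log(checkIdor(vulnerableApi, "session-A", 1, 2)); // true: IDOR present
console.log(checkIdor(fixedApi, "session-A", 1, 2));      // false: B's data refused
```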
```yaml
# Run OWASP ZAP DAST scan against staging after deploy
# Blocks promotion to production on high/critical findings
name: DAST Staging Scan

on:
  workflow_call:
    inputs:
      staging-url:
        required: true
        type: string

jobs:
  dast-scan:
    name: OWASP ZAP DAST Scan
    runs-on: ubuntu-latest
    steps:
      - name: ZAP Baseline Scan (passive — no attack payloads)
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: ${{ inputs.staging-url }}
          rules_file_name: .zap/rules.tsv
          cmd_options: '-a'  # Include alpha-quality passive rules

      - name: ZAP Full Scan (active — sends attack payloads)
        uses: zaproxy/action-full-scan@v0.9.0
        with:
          target: ${{ inputs.staging-url }}
          rules_file_name: .zap/rules.tsv
          fail_action: true  # Fail workflow on high/critical findings
          cmd_options: '-z "-config scanner.threadPerHost=5"'

      - name: Upload ZAP report
        uses: actions/upload-artifact@v4
        with:
          name: zap-report
          path: report_html.html
        if: always()  # Upload even if scan fails
```
The full security testing pipeline combines SAST and DAST at the optimal points:
Complete pipeline with SAST + DAST
1. Pre-commit: Secret scan (Gitleaks pre-commit hook) — blocks any commit containing a credential
2. PR/CI: SAST (Semgrep, CodeQL) — blocks merge on critical/high; reports medium/low as comments on the PR
3. CI: Dependency scan (Snyk, Dependabot) — blocks on critical CVEs in open-source libraries
4. CI: Container image scan (Trivy) — blocks on critical CVEs in base image and installed packages
5. Deploy to staging: Run automated DAST (OWASP ZAP) — baseline passive scan + active scan
6. DAST gate: Block promotion to production on DAST high/critical findings
7. Production: Scheduled passive DAST + monitoring (SIEM, WAF alerts) — no active payloads
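The blocking thresholds above can be sketched as a single policy function. Stage names and thresholds mirror the pipeline list; in practice this policy lives in CI configuration rather than application code, so treat this as an illustration of the logic only.

```javascript
// Sketch of the pipeline's blocking policy: each stage blocks on a
// different minimum severity. Thresholds mirror the pipeline stages above.
const SEVERITY = { critical: 4, high: 3, medium: 2, low: 1 };

const GATES = {
  "secret-scan": "low",        // block on ANY secret, no exceptions
  "sast": "high",              // block merge on high/critical
  "dependency-scan": "critical",
  "image-scan": "critical",
  "dast": "high",              // block promotion on high/critical
};

function gateDecision(stage, findings) {
  const threshold = SEVERITY[GATES[stage]];
  const blocking = findings.filter((f) => SEVERITY[f.severity] >= threshold);
  return { allow: blocking.length === 0, blocking };
}

console.log(gateDecision("sast", [{ id: "sqli", severity: "high" }]).allow);        // false
console.log(gateDecision("sast", [{ id: "todo", severity: "medium" }]).allow);      // true
console.log(gateDecision("secret-scan", [{ id: "aws-key", severity: "low" }]).allow); // false
```

Encoding the thresholds in one place makes the policy auditable: anyone can answer "what blocks a release?" by reading five lines instead of six tool configs.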
Publish all findings to one security dashboard
SAST findings, DAST findings, dependency CVEs, and image scan results should all flow into a single security dashboard (Defect Dojo, GitHub Security tab, or your SIEM). Developers should have one place to see their security backlog, not six different tool UIs. Unified visibility = faster remediation.
The biggest failure mode with SAST is over-blocking on false positives until developers disable or bypass the tool. Signal quality is not optional — it is the foundation of developer trust.
Practical false positive management
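One practical pattern is a reviewed suppression list with mandatory reasons and expiry dates, so a finding is silenced deliberately and temporarily rather than by disabling the tool. The sketch below shows the triage logic; the field names are illustrative, not any specific tool's schema.

```javascript
// Sketch of false-positive triage: suppress only findings that match a
// reviewed, unexpired suppression entry. Everything else stays visible.
const suppressions = [
  {
    ruleId: "sql-injection-string-concat",
    path: "test/fixtures/queries.js",
    reason: "test fixture, never executed",
    expires: "2026-06-30",  // forces periodic re-review
  },
];

function triage(findings, suppressionList, today = new Date()) {
  const active = suppressionList.filter((s) => new Date(s.expires) > today);
  return findings.filter(
    (f) => !active.some((s) => s.ruleId === f.ruleId && s.path === f.path)
  );
}

const findings = [
  { ruleId: "sql-injection-string-concat", path: "test/fixtures/queries.js" },
  { ruleId: "sql-injection-string-concat", path: "src/users.js" },
];

// Only the unsuppressed production finding survives triage.
console.log(triage(findings, suppressions, new Date("2026-01-01")).length); // 1
```

The expiry date is the key design choice: suppressions decay instead of accumulating silently, so the signal-to-noise ratio developers see stays defensible.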
This topic comes up in interviews for DevSecOps Engineer and Application Security Engineer roles, often as "how would you implement a security testing pipeline?" It also appears in senior developer interviews as "how do you ensure code security before it reaches production?"
Common questions:
Strong answer: Immediately gives the IDOR or misconfig example for "DAST-only." Explains pipeline placement with the reasoning (SAST has no app to connect to; DAST needs a deployed app). Talks about false positive rates and how to tune rules. Mentions IAST or RASP as advanced approaches. Has a view on blocking vs. informing thresholds.
Red flags: Thinks SAST and DAST do the same thing. Cannot name a specific vulnerability DAST catches that SAST misses. Says "we use SAST so we're covered." Does not know what a false positive is or how to tune for it. Has never integrated DAST into a CI/CD pipeline.
Quick check · SAST and DAST: Static and Dynamic Security Testing
A developer hardcodes an AWS access key in a configuration file. Which scanner — SAST or DAST — detects this, and why can't the other?
SAST detects it — secret scanners (a form of SAST) scan source code and match patterns like the AKIA prefix of AWS access keys. DAST cannot detect it because DAST tests a running application by sending HTTP requests — it has no visibility into source code or the static files and environment variables inside the application.
An API endpoint returns data for any user ID provided, without checking that the authenticated user owns that data (IDOR). Which scanner detects this?
DAST detects it. DAST logs in as User A, records their user ID, then substitutes User B's ID and checks if data is returned. SAST cannot detect IDOR because there is no code-level pattern — the code correctly reads the ID parameter and queries the database; the missing piece is a business logic check that SAST has no way to reason about.
In a CI/CD pipeline, why does SAST run earlier (on every commit) while DAST runs later (post-deploy to staging)?
SAST needs only source code — no running application, no network, no deploy. It can run in seconds on every commit, giving immediate feedback. DAST needs a fully deployed application to send requests to — it cannot run until the app exists in a test environment. The timing reflects what each tool requires to function.
From the books
DevSecOps: A Leader's Guide to Producing Secure Software
Chapter 5: Testing for Security — Beyond Unit Tests
The book distinguishes "code-level" security testing (SAST, secret scanning) from "behavior-level" testing (DAST, pen testing) and argues that organizations mature their security programs by adding behavior-level testing after establishing code-level practices. Most teams start with SAST because it is fast and CI-friendly; DAST comes later because it requires a running environment. The goal is to have both running continuously.
💡 Analogy
SAST is a code inspector who reads blueprints and flags structural flaws before the building is constructed. DAST is a penetration tester who shows up after the building is built and tries to break in — testing the locks, probing the windows, and discovering that the alarm system was accidentally left off. You need both: the inspector catches design flaws early and cheaply; the pen tester finds what survived the inspection and what the building's actual environment introduced.
⚡ Core Idea
SAST and DAST are complementary, not competing. SAST runs on source code with no application running — fast, integrates into CI on every commit, finds injection flaws, secrets, and unsafe patterns by tracing data flow. DAST sends real attack payloads to a live application — finds runtime misconfigurations, IDOR, broken access control, and business logic flaws that have no code signature. Each has a blind spot the other covers.
🎯 Why It Matters
Teams that run only SAST miss runtime and configuration vulnerabilities (IDOR, misconfig, business logic). Teams that run only DAST find issues too late — after code is merged and deployed, when fixes are expensive. The pipeline pattern of SAST on every commit + DAST on every staging deploy catches the broadest vulnerability surface at the point where fixes cost the least.