The Simplified Tech

© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Compliance as Code

Expressing security and compliance rules as code so they are automated, auditable, and consistent — replacing manual checklists with policy engines that run in pipelines, admission controllers, and continuous monitors.

🎯Key Takeaways
Compliance as code replaces manual audit checklists with machine-readable policy rules that run in pipelines, admission webhooks, and continuous monitors
Three layers catch different things: pipeline (before creation), admission (before scheduling), continuous (runtime drift)
OPA/Rego is the most powerful option for IaC policy but has the steepest learning curve; Kyverno is the best Kubernetes-native choice
Every policy rule should map to a compliance framework control — the code IS the audit evidence
Audit mode first, then enforce: establish a baseline of existing violations before blocking new ones
Continuous compliance catches configuration drift — changes made via the console or outside of your pipeline


~8 min read


From Audit Spreadsheets to Policy Engines

Traditional compliance runs on annual or quarterly audit cycles: a spreadsheet of controls, a human checking each one, evidence collected by email. By the time an auditor finds an unencrypted database, it has been running in production for months.

Compliance as code (also called policy as code) replaces the spreadsheet with machine-readable rules that run continuously — in your CI pipeline, in Kubernetes admission webhooks, and as scheduled checks against live infrastructure. A violation that would have taken weeks to discover in a manual audit is caught in seconds, before it ever reaches production.

Mental model

Think of compliance as code as turning an audit checklist into unit tests. Instead of a human checking "is this S3 bucket public?" once a quarter, the policy runs on every Terraform plan and every hour against live resources. It either passes or it fails — automatically, consistently, with a paper trail.
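The checklist-as-unit-tests idea fits in a few lines of Python. This is an illustrative sketch, not a real policy engine: the field names and bucket records are invented for the example.

```python
# Illustrative sketch: an audit-checklist item expressed as a
# repeatable pass/fail check over a resource description.
def s3_bucket_is_private(bucket: dict) -> bool:
    """The question 'is this S3 bucket public?' as a unit test."""
    return bucket.get("block_public_acls") is True

# Instead of a quarterly human review, the same check runs on every
# plan and every scheduled scan, giving the same answer every time.
buckets = [
    {"name": "logs", "block_public_acls": True},
    {"name": "data", "block_public_acls": False},
]
violations = [b["name"] for b in buckets if not s3_bucket_is_private(b)]
print(violations)  # ['data']
```

The check either passes or fails; there is no "looks fine to me" in the loop.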

The four core benefits

  • Consistency — The same rule runs in every environment, every time. No human interpretation variance.
  • Speed — Violations caught in the pipeline (seconds) instead of audits (months). The cost of fixing at apply-time is near zero; the cost at audit-time is high.
  • Auditability — The policy code IS the evidence. Auditors can read the rules, run them, and see results — no manual evidence collection.
  • Shift left — Developers get feedback at PR time ("this S3 bucket will fail the public-access-block policy") not during a production incident or audit finding.
📜 Policy Lifecycle: Compliance as Code

  • OPA / Rego (cloud APIs): deny if input.resource.public == true
  • Kyverno (Kubernetes): pattern requiring spec.securityContext.runAsNonRoot: true
  • Conftest (IaC / Terraform): deny[msg] { not input.encrypted; msg := "disk must be encrypted" }
  • AWS Config (runtime drift): managed rule s3-bucket-public-read-prohibited

⚡ Live Policy Simulator: terraform plan | conftest test

main.tf (NON-COMPLIANT):

resource "aws_s3_bucket" "data" {
  bucket = "my-insecure-bucket"
  # missing: public_access_block
  # missing: tags
}

resource "aws_db_instance" "main" {
  identifier     = "prod-db"
  engine         = "postgres"
  instance_class = "db.t3.micro"
  # missing: storage_encrypted
  # missing: tags
}

Policy checks (3 violations):

  • S3 public access block (requires public_access_block = true): ✗ FAIL
  • RDS encryption (requires storage_encrypted = true): ✗ FAIL
  • Required tags (requires tags including "Environment" and "Team"): ✗ FAIL

❌ Pipeline blocked — 3 policy violations.

Fix and re-run.


The Three Enforcement Layers

Compliance as code runs at three distinct layers, each catching different classes of violation at different stages of the software lifecycle.

Layer: Pipeline (pre-apply)
  When it runs: before terraform apply or helm upgrade
  What it evaluates: infrastructure plan (JSON), Helm values, Kubernetes manifests
  Tools: Conftest + OPA/Rego, Checkov, tfsec, Terrascan
  Enforcement action: block the pipeline — the change is never applied

Layer: Admission (pre-schedule)
  When it runs: before every kubectl apply / create
  What it evaluates: Kubernetes resources (Pods, Deployments, Services, Ingress)
  Tools: OPA Gatekeeper, Kyverno
  Enforcement action: reject the resource — the pod is never scheduled

Layer: Continuous (runtime)
  When it runs: scheduled or event-driven, against live resources
  What it evaluates: live cloud resources (EC2, S3, RDS, IAM) and K8s workloads
  Tools: AWS Config, Cloud Custodian, Prowler
  Enforcement action: alert, auto-remediate, or create a ticket

Use all three layers — they catch different things

Pipeline catches new violations before they are created. Admission catches runtime misconfigurations in Kubernetes. Continuous catches drift: things that were compliant when created but changed later (console click, config drift, new CVE). You need all three for full coverage.
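One way to see why the layers complement each other: they can share a single rule predicate, so every layer agrees on what "compliant" means. A hypothetical Python sketch (the function names and resource shapes are invented for illustration):

```python
# Hypothetical sketch: one rule, three enforcement layers.
def violates_encryption_rule(resource: dict) -> bool:
    """Shared predicate: RDS instances must have storage encrypted."""
    return (resource.get("type") == "aws_db_instance"
            and resource.get("storage_encrypted") is not True)

def pipeline_gate(planned: list) -> bool:
    """Pre-apply: fail CI if the plan contains any violation."""
    return not any(violates_encryption_rule(r) for r in planned)

def admission_decision(request: dict) -> str:
    """Pre-schedule: allow or reject one submitted resource."""
    return "Reject" if violates_encryption_rule(request) else "Allow"

def continuous_scan(live: list) -> list:
    """Runtime: report drifted resources for alerting or remediation."""
    return [r["name"] for r in live if violates_encryption_rule(r)]
```

The pipeline sees planned changes, admission sees whatever actually reaches the API server, and the continuous scan sees what is running now. Same rule, three different vantage points.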

Writing Policies: OPA/Rego for Cloud Infrastructure

Open Policy Agent (OPA) is the most widely used policy engine for infrastructure. Policies are written in Rego, a declarative query language. OPA evaluates Rego policies against JSON data (Terraform plans, Kubernetes manifests, API request payloads).

The key insight: a Terraform plan is just JSON. A Kubernetes manifest is just JSON. A Dockerfile is just text. All can be evaluated by a policy engine that reads structured data.
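That insight can be demonstrated directly. The sketch below evaluates a hand-written Terraform plan fragment as plain JSON, mirroring the RDS encryption rule in the Rego policy; the resource_changes / change.after structure matches the output of terraform show -json.

```python
import json

# Sketch: evaluate a Terraform plan the way a policy engine does,
# as plain structured data. The plan fragment here is hand-written
# for the example, not real terraform output.
plan = json.loads("""
{
  "resource_changes": [
    {"type": "aws_db_instance", "name": "main",
     "change": {"after": {"storage_encrypted": false}}}
  ]
}
""")

violations = [
    f"RDS instance '{rc['name']}' must have storage_encrypted = true"
    for rc in plan["resource_changes"]
    if rc["type"] == "aws_db_instance"
    and rc["change"]["after"].get("storage_encrypted") is not True
]
print(violations)
```

A real engine adds a policy language, a rule registry, and reporting, but the core operation is exactly this: walk the JSON, test conditions, collect messages.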

policies/aws/s3.rego

# OPA/Rego policy: S3 buckets must have public access blocked
# Run with: conftest test plan.json --policy policies/
# (Conftest/OPA uses the package path to organize and reference policies)
package terraform.aws.s3

import future.keywords.if
import future.keywords.in

# Deny any S3 bucket that lacks a public access block resource.
# Any message added to this deny set causes the policy check to fail.
deny[msg] if {
    # Find every aws_s3_bucket resource in the plan
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"

    bucket_name := resource.name

    # Check if there is a matching public access block resource
    not has_public_access_block(bucket_name)

    msg := sprintf(
        "S3 bucket '%v' must have aws_s3_bucket_public_access_block with block_public_acls=true",
        [bucket_name]
    )
}

# Helper: returns true if a matching public access block exists.
# A helper rule keeps the main deny rule readable — Rego is declarative, not procedural.
has_public_access_block(bucket_name) if {
    block := input.resource_changes[_]
    block.type == "aws_s3_bucket_public_access_block"
    block.change.after.bucket == bucket_name
    block.change.after.block_public_acls == true
    block.change.after.block_public_policy == true
    block.change.after.restrict_public_buckets == true
}

# Deny RDS instances without encryption.
# Each deny rule is independent — one failure does not prevent others from running.
deny[msg] if {
    resource := input.resource_changes[_]
    resource.type == "aws_db_instance"
    resource.change.after.storage_encrypted != true

    # Embed the compliance framework reference in the message — it tells developers why
    msg := sprintf(
        "RDS instance '%v' must have storage_encrypted = true (PCI-DSS Req 3.5, CIS 2.3.1)",
        [resource.name]
    )
}

# Deny EC2 instances without required tags
deny[msg] if {
    resource := input.resource_changes[_]
    resource.type == "aws_instance"

    required_tags := {"Environment", "Team", "CostCenter"}
    actual_tags := {k | resource.change.after.tags[k]}
    missing := required_tags - actual_tags
    count(missing) > 0

    msg := sprintf(
        "EC2 instance '%v' is missing required tags: %v",
        [resource.name, missing]
    )
}
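The required-tags rule is plain set arithmetic, which translates one-for-one into Python, a quick way to sanity-check the logic before writing it in Rego (the tag values here are invented for the example):

```python
# Same set arithmetic as the Rego required-tags rule:
# required - actual = missing; any missing tag is a violation.
required_tags = {"Environment", "Team", "CostCenter"}
actual_tags = {"Environment"}  # tags present on the planned instance
missing = required_tags - actual_tags
if missing:
    print(f"missing required tags: {sorted(missing)}")
    # missing required tags: ['CostCenter', 'Team']
```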

Kubernetes Admission: Policy as a Gatekeeping Webhook

In Kubernetes, admission controllers intercept API server requests before they are persisted. OPA Gatekeeper and Kyverno both run as admission webhooks — when you kubectl apply a pod, the API server sends it to the webhook, which evaluates policies and either allows or rejects it.

The difference between the pipeline layer and admission layer: pipeline catches what you are about to create; admission catches what you are actually submitting to the cluster, including changes made outside of your pipeline (kubectl apply directly, Helm releases from a local machine, operator-created resources).

kyverno-compliance-policies.yaml

# Kyverno policies for compliance-as-code in Kubernetes
# Deploy with: kubectl apply -f kyverno-compliance-policies.yaml

---
# Policy 1: Containers must not run as root
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-root-containers
  annotations:
    policies.kyverno.io/category: Pod Security
    policies.kyverno.io/controls: "CIS 5.1.6, SOC2 CC6.1"
spec:
  # Enforce blocks the request. Use Audit during rollout to baseline before enforcing.
  validationFailureAction: Enforce
  rules:
    - name: no-root-user
      match:
        any:
          - resources:
              kinds: [Pod]
              # Namespace scoping: only enforce in production and staging —
              # gives dev teams flexibility while protecting what matters.
              namespaces: [production, staging]
      validate:
        # Include the compliance framework control in the error message —
        # it helps developers fix the right thing.
        message: >
          Containers must not run as root (runAsUser: 0).
          Set securityContext.runAsNonRoot: true and runAsUser >= 1000.
          Controls: CIS 5.1.6, SOC2 CC6.1
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true
                  runAsUser: ">=1000"

---
# Policy 2: Resource limits must be set (prevents noisy-neighbour attacks)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
  annotations:
    policies.kyverno.io/category: Resource Management
    policies.kyverno.io/controls: "CIS 5.1.3"
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-resource-limits
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [production, staging]
      validate:
        message: >
          All containers must define CPU and memory limits.
          Controls: CIS 5.1.3
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"

---
# Policy 3: Disallow hostPath volumes (container escape risk).
# hostPath volumes allow containers to mount host filesystem paths —
# a common container escape technique.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
  annotations:
    policies.kyverno.io/controls: "CIS 5.1.2, PCI-DSS Req 6.2"
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-hostpath
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "hostPath volumes are not allowed — use PersistentVolumeClaims instead."
        deny:
          conditions:
            any:
              - key: "{{ request.object.spec.volumes[].hostPath | length(@) }}"
                operator: GreaterThan
                value: 0

Mapping Policies to Compliance Frameworks

The real leverage of compliance as code is framework mapping: each policy rule maps to one or more compliance controls. When an auditor asks "how do you ensure S3 buckets are not public?", you show them the Rego rule, the CI pipeline run results, and the continuous compliance check history. No spreadsheet needed.
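One lightweight way to keep that mapping honest is to store it as data alongside the rules and generate the auditor-facing references from it. A hypothetical sketch: the policy names and control mapping mirror the table below, but this is not any real tool's schema.

```python
# Hypothetical sketch: policy -> framework-control mapping kept as data,
# so the evidence references are generated, never hand-maintained.
POLICY_CONTROLS = {
    "s3-public-access-block": {"CIS": "2.1.5", "PCI-DSS": "Req 6.4", "SOC 2": "CC6.1"},
    "rds-encryption-at-rest": {"CIS": "2.3.1", "PCI-DSS": "Req 3.5", "SOC 2": "CC6.7"},
}

def controls_for(policy: str) -> str:
    """Render the control references for a policy, e.g. for an error message."""
    refs = POLICY_CONTROLS.get(policy, {})
    return ", ".join(f"{fw} {ref}" for fw, ref in refs.items())

print(controls_for("rds-encryption-at-rest"))
# CIS 2.3.1, PCI-DSS Req 3.5, SOC 2 CC6.7
```

The same mapping can feed violation messages, annotations on Kyverno policies, and the evidence table handed to auditors, so all three stay in sync.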

Policy Rule               | CIS Benchmark | PCI-DSS | SOC 2 | HIPAA
S3 public access block    | CIS 2.1.5     | Req 6.4 | CC6.1 | §164.312
RDS encryption at rest    | CIS 2.3.1     | Req 3.5 | CC6.7 | §164.312(a)(2)(iv)
No root containers        | CIS 5.1.6     | Req 6.2 | CC6.1 | N/A
Resource limits set       | CIS 5.1.3     | Req 6.2 | A1.2  | N/A
No hostPath volumes       | CIS 5.1.2     | Req 6.2 | CC6.1 | N/A
Required resource tags    | —             | —       | CC2.1 | §164.308(a)(1)
TLS 1.2+ on ALB listeners | CIS 3.9       | Req 4.1 | CC6.7 | §164.312(e)(2)(ii)

Compliance report generation

Tools like Prowler generate compliance reports by running hundreds of checks mapped to CIS, PCI-DSS, SOC 2, and HIPAA controls. Output is JSON/CSV/HTML — directly shareable with auditors. This replaces weeks of manual evidence collection with a single command: prowler aws --compliance cis_aws_2.0.

How this might come up in interviews

Compliance as code questions appear in senior DevSecOps, platform engineering, and cloud security architect interviews. Expect both conceptual questions ("explain policy as code") and practical ones ("write a Rego rule that denies S3 buckets without encryption"). Companies with SOC 2, PCI-DSS, or FedRAMP requirements will weight this heavily.

Common questions:

  • What is the difference between compliance as code and security scanning?
  • How would you enforce that all production Kubernetes pods run as non-root?
  • What is OPA/Rego and where does it fit in a DevSecOps pipeline?
  • How do you handle compliance drift — when a resource was compliant at creation but changed later?
  • How do you map policy rules to compliance frameworks like PCI-DSS or SOC 2?

Strong answer: mentions the gap between pipeline and continuous checks, and why you need both; knows that Kyverno can also mutate resources (adding security contexts automatically); understands that compliance as code produces audit evidence natively; names Prowler for AWS compliance report generation.

Red flags: treating compliance as code as the same thing as vulnerability scanning; not knowing what an admission controller is; treating compliance as an annual audit activity; conflating writing policies with enforcing them (enforcement requires admission controllers or pipeline gates, not just code).

Quick check · Compliance as Code


A developer submits a Terraform PR that adds an RDS instance without storage_encrypted = true. What is the EARLIEST point in the DevOps lifecycle where compliance as code can block this?

Before you move on: can you answer these?

A new developer asks why you run both OPA Conftest in the CI pipeline AND Kyverno in Kubernetes. Explain why both are needed.

Conftest catches violations before terraform apply or helm install — blocking the creation of bad resources. Kyverno catches resources applied directly to Kubernetes outside of the pipeline (kubectl from a laptop, operator-created resources, Helm releases that bypass CI). The two layers are complementary: pipeline = before creation, admission = before scheduling, regardless of how the resource was submitted.

How do you handle the transition from no policies to enforced policies without causing production outages?

Roll out in audit mode first. Deploy the admission controller with validationFailureAction: Audit — it logs all violations but allows resources. Review the audit log to find all existing violations. Fix them (update deployments, add security contexts, etc.). Only then switch to Enforce mode. This prevents the scenario where flipping to Enforce immediately breaks existing workloads.

From the books

Hacking the Cloud — The Art of Cloud Security (Springer, 2023)

Covers OPA policy-as-code patterns for AWS and Kubernetes environments, including framework mapping to CIS Benchmarks and PCI-DSS controls.

🧠Mental Model

💡 Analogy

Automated building code inspections

⚡ Core Idea

Building codes define what is allowed (load-bearing walls, electrical standards, fire egress). Inspectors check compliance. Compliance as code does the same thing for cloud infrastructure — but instead of annual inspections, every commit and every deployment is checked automatically.

🎯 Why It Matters

Manual compliance audits find violations months after they are introduced. By then, fixing them requires downtime, data migrations, or emergency patches. Compliance as code catches violations at the point of introduction — when a developer adds a misconfigured resource — making fixes cheap and fast.
