The Simplified Tech

© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Network Security in Pipelines

How to apply network-level security controls — VPC segmentation, firewall rules, mTLS, egress filtering, and private networking — to CI/CD pipelines and cloud infrastructure.

🎯 Key Takeaways
CI/CD runners are high-value targets — they hold secrets, cloud credentials, and access to deployment pipelines.
Isolate CI/CD VPCs completely from production — deployments go via scoped API calls, not direct network paths.
Implement egress allowlisting — runners should only reach known package registries, SCM, and cloud APIs.
Use VPC endpoints for cloud provider APIs to keep traffic off the public internet.
Log all proxy/firewall denials to your SIEM — unexpected blocked connections are incident signals.
Use ephemeral Kubernetes runners — each job gets a fresh pod, eliminating persistent runner state.
mTLS for runner-to-service authentication eliminates reliance on long-lived tokens.
Administrative interfaces (K8s Dashboard, Argo CD) must be behind VPN — never publicly accessible.

~9 min read
Why this matters

CI/CD runners, build agents, and cloud services are often over-privileged on the network — they can reach the internet freely, connect to any internal service, and accept connections from anywhere.

Without this knowledge

A compromised build agent can exfiltrate secrets, pivot to production databases, download malware, or communicate with attacker C2 infrastructure — all over unrestricted network paths.

With this knowledge

Network controls constrain what CI/CD infrastructure can reach. Even if a runner is compromised, it cannot exfiltrate data or pivot because egress is restricted, internal services are not reachable, and lateral movement is blocked.

Why CI/CD Pipelines Are High-Value Network Targets

CI/CD runners sit at the intersection of your source code, secrets, cloud credentials, and production environment. They are among the most privileged entities on your network.

What a compromised build agent can do without network controls

  • Exfiltrate secrets — POST injected env vars (AWS keys, Vault tokens) to attacker server over outbound HTTPS
  • Download malware — Fetch and execute malicious binaries or scripts from attacker-controlled URLs
  • Pivot to production — If runners share a VPC with prod, they can directly query internal databases or APIs
  • C2 communication — Establish reverse shell or beacon to Command and Control server over allowed ports (80/443)
  • Cryptomining — Use build agent compute/network for cryptocurrency mining
  • Supply chain poisoning — Push malicious artifacts to package registries or container repositories

The Codecov attack exploited unrestricted outbound access

In the 2021 Codecov supply-chain attack, the tampered bash uploader exfiltrated CI environment variables (including secrets) via HTTP POST to an attacker-controlled server. It worked because the affected CI runners had unrestricted outbound internet access. Egress filtering would have blocked the exfiltration even after the script ran.

VPC Architecture for CI/CD

The foundation of network security in pipelines is placing CI/CD infrastructure in a properly segmented VPC with clear boundaries between build, staging, and production networks.


  ┌─────────────────────────────────────────────────────────────┐
  │  AWS Account / GCP Project                                  │
  │                                                             │
  │  ┌──────────────────────┐      ┌──────────────────────┐     │
  │  │  CI/CD VPC           │      │  Production VPC      │     │
  │  │  10.10.0.0/16        │      │  10.20.0.0/16        │     │
  │  │                      │      │                      │     │
  │  │  ┌────────────────┐  │      │  ┌────────────────┐  │     │
  │  │  │ Build Subnet   │  │      │  │ App Subnet     │  │     │
  │  │  │ 10.10.1.0/24   │  │      │  │ 10.20.1.0/24   │  │     │
  │  │  │ (runners)      │  │      │  │ (services)     │  │     │
  │  │  └───────┬────────┘  │      │  └────────────────┘  │     │
  │  │          │           │      │                      │     │
  │  │  ┌───────▼────────┐  │      │  ┌────────────────┐  │     │
  │  │  │ Artifact Subnet│  │      │  │ DB Subnet      │  │     │
  │  │  │ 10.10.2.0/24   │  │      │  │ 10.20.2.0/24   │  │     │
  │  │  │ (registries)   │  │      │  │ (databases)    │  │     │
  │  │  └────────────────┘  │      │  └────────────────┘  │     │
  │  │                      │      │                      │     │
  │  │  Egress: allowlist   │      │  NO direct peering   │     │
  │  │  only known URLs     │      │  to CI/CD VPC        │     │
  │  └──────────────────────┘      └──────────────────────┘     │
  │                                                             │
  │  ❌ No VPC peering between CI/CD and Prod                   │
  │  ✅ Deployments go via API (kubectl, AWS API) with IAM/RBAC │
  └─────────────────────────────────────────────────────────────┘

The CI/CD VPC is completely isolated from production. Deployments are made via API calls with scoped credentials — not via direct network access.
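The "deploy via API, not network path" pattern can be sketched in Terraform. This is an illustration only: the role name, account ID, and `repo:company/app` subject are placeholder assumptions, following the standard GitHub Actions OIDC federation setup for AWS.

```hcl
# Sketch: a deploy role that GitHub Actions assumes via OIDC.
# The runner never needs a network route into production — deployment
# is an authenticated, audited STS API call with a narrowly scoped role.
# Account ID, repo, and names below are illustrative placeholders.

resource "aws_iam_role" "cicd_deployer" {
  name = "cicd-deployer"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = "arn:aws:iam::111111111111:oidc-provider/token.actions.githubusercontent.com"
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          # Only the main branch of one repo may assume this role
          "token.actions.githubusercontent.com:sub" = "repo:company/app:ref:refs/heads/main"
        }
      }
    }]
  })
}

# Attach a deploy-only policy (e.g. ecs:UpdateService on a single service)
# rather than broad admin permissions.
```

Scoped IAM on this role replaces the network path that VPC peering would otherwise have provided — the blast radius of a compromised runner is whatever the role allows, nothing more.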

Key VPC segmentation rules

  • No peering between CI/CD and prod VPCs — Deployments happen via cloud APIs (kubectl, AWS SDK) with scoped IAM roles — not via direct network paths
  • Build subnets have no inbound rules — Runners initiate all connections — nothing should be connecting to them
  • Artifact subnet only reachable from build subnet — Internal registries (Nexus, JFrog, ECR) are not internet-accessible
  • Use private endpoints for cloud services — S3, ECR, Secrets Manager, KMS — use VPC endpoints so traffic never leaves the AWS network
  • NAT Gateway for controlled egress — All outbound internet traffic routes via NAT Gateway with logging — never directly from runner IPs
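Two of the rules above — private endpoints for cloud services and no-inbound build subnets — might look like the following Terraform sketch. The VPC/subnet resource names and the region are assumptions, not a prescribed layout.

```hcl
# Interface endpoint: ECR API calls stay on the AWS network instead of
# traversing the NAT Gateway and the public internet. Names are placeholders.
resource "aws_vpc_endpoint" "ecr_api" {
  vpc_id              = aws_vpc.cicd.id         # hypothetical CI/CD VPC
  service_name        = "com.amazonaws.us-east-1.ecr.api"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.build.id]   # the 10.10.1.0/24 build subnet
  private_dns_enabled = true
}

# Runner security group: no ingress blocks at all — nothing may connect
# to a runner. Outbound 443 is still narrowed further by the egress
# firewall/proxy; the security group is only the coarse first layer.
resource "aws_security_group" "runners" {
  name   = "cicd-runners"
  vpc_id = aws_vpc.cicd.id

  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```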

Egress Filtering: Limiting What Pipelines Can Reach

Egress filtering restricts what outbound connections your CI/CD runners are allowed to make. This is one of the most effective controls for limiting the blast radius of a compromised build.

Implementing egress allowlisting

  1. Identify all legitimate outbound destinations: npm registry, PyPI, Docker Hub, GitHub, your cloud provider APIs, internal artifact server
  2. Deploy a forward proxy (Squid, Envoy, AWS Network Firewall) in the CI/CD VPC
  3. Configure runners to route all HTTP/HTTPS traffic through the proxy
  4. Set the proxy to allowlist only the identified domains — deny everything else
  5. Log all proxy requests to your SIEM — alert on denied requests (they indicate attempted exfiltration or a malicious download)
  6. Review and update the allowlist quarterly as dependencies change

Egress Control              | Mechanism                                        | Granularity            | Best For
AWS Network Firewall        | Managed stateful firewall with domain filtering  | Domain + IP + port     | AWS-native, large scale
Squid Proxy                 | Open-source forward proxy with ACLs              | Domain + URL pattern   | Self-managed, any cloud
Security Group rules        | IP/CIDR + port allowlist                         | IP/port only (no DNS)  | Simple port-level blocking
Firewall policies (GCP)     | VPC firewall rules + Cloud Armor                 | IP/CIDR + port         | GCP-native
Calico/Cilium NetworkPolicy | Kubernetes-native network policy                 | Pod label + port       | K8s-based runners
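For the last row — K8s-based runners — a pod-level egress policy expressed via the Terraform kubernetes provider could look like this sketch. The namespace, pod labels, and the proxy's port 3128 are assumptions.

```hcl
# Runner pods may only resolve DNS and talk to the egress proxy.
# Once an Egress policy selects a pod, all other outbound traffic
# (including direct internet access) is denied by default.
# Labels and namespace below are placeholders.
resource "kubernetes_network_policy" "runner_egress" {
  metadata {
    name      = "runner-egress"
    namespace = "ci-runners"
  }

  spec {
    pod_selector {
      match_labels = { app = "ci-runner" }
    }
    policy_types = ["Egress"]

    # DNS resolution
    egress {
      ports {
        port     = "53"
        protocol = "UDP"
      }
    }

    # HTTP/HTTPS only via the forward proxy
    egress {
      to {
        pod_selector {
          match_labels = { app = "egress-proxy" }
        }
      }
      ports {
        port     = "3128"
        protocol = "TCP"
      }
    }
  }
}
```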

Log all denied egress — they tell a story

Every blocked outbound connection from a runner is a signal. Expected denied connections (typos, misconfigured tools) cluster around known patterns. Denied connections to IP addresses you have never seen, or to domains resembling C2 infrastructure, are incidents. Route proxy logs to your SIEM with alerting.

network-firewall-cicd.tf

```hcl
# AWS Network Firewall — domain-based egress allowlist for the CI/CD VPC
# (Terraform configuration)

resource "aws_networkfirewall_rule_group" "cicd_egress_allowlist" {
  capacity = 100
  name     = "cicd-egress-allowlist"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      rules_source_list {
        # ALLOWLIST = deny all except the listed domains
        generated_rules_type = "ALLOWLIST"

        # TLS_SNI inspects the SNI header — works for HTTPS without decryption
        target_types = ["HTTP_HOST", "TLS_SNI"]

        targets = [
          # Package registries
          "registry.npmjs.org",
          "pypi.org",
          "files.pythonhosted.org",
          "proxy.golang.org",

          # Container registries
          "registry-1.docker.io",
          "auth.docker.io",
          "ghcr.io",

          # Source control
          "github.com",
          "api.github.com",

          # AWS APIs (use VPC endpoints for these instead)
          "ecr.us-east-1.amazonaws.com",
          "secretsmanager.us-east-1.amazonaws.com",

          # Internal artifact server
          "artifacts.internal.company.com",
        ]
      }
    }
  }
}

# All other outbound traffic is dropped by the default deny rule.
# Denied connections appear in CloudWatch Logs for alerting.
```
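To make the logged denials actionable, a metric filter plus alarm is one option. This is a hedged sketch: the log group name, the filter pattern, and the `security_alerts` SNS topic are assumptions that depend on how firewall alert logging is configured in your account.

```hcl
# Count "blocked" events in the firewall's alert log group and alarm on
# any occurrence. Log group, pattern, and topic below are placeholders.
resource "aws_cloudwatch_log_metric_filter" "egress_denied" {
  name           = "cicd-egress-denied"
  log_group_name = "/aws/network-firewall/cicd"   # assumed alert log group
  pattern        = "{ $.event.alert.action = \"blocked\" }"

  metric_transformation {
    name      = "CicdEgressDenied"
    namespace = "CICD/Security"
    value     = "1"
  }
}

resource "aws_cloudwatch_metric_alarm" "egress_denied" {
  alarm_name          = "cicd-egress-denied"
  namespace           = "CICD/Security"
  metric_name         = "CicdEgressDenied"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  alarm_actions       = [aws_sns_topic.security_alerts.arn]  # hypothetical topic
}
```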

mTLS and Service-to-Service Authentication

Mutual TLS (mTLS) ensures both parties in a connection authenticate with certificates. In pipelines, this means your runners prove their identity to internal services, and vice versa.

Where mTLS matters in CI/CD

  • Runner → Vault — Runner presents a client cert to Vault instead of (or in addition to) a token — prevents token theft pivot attacks
  • Runner → Internal artifact server — Nexus/JFrog requires client cert from runners — prevents rogue runners from publishing artifacts
  • Runner → Kubernetes API — Kubeconfig uses TLS client auth — standard for kubectl-based deployments
  • Service mesh (Istio/Linkerd) — All service-to-service traffic in the cluster uses mTLS automatically — no code changes required

Use short-lived certificates for CI/CD runners

Issue runner certificates with a 1-hour TTL using Vault's PKI engine. The runner requests a cert at job start, uses it throughout the job, and it expires automatically. No long-lived credentials to rotate or revoke — the cert is valid only for the duration of the build.
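Provisioning that pattern with Vault's Terraform provider might look like the following sketch; the mount path, role name, and domain are assumptions.

```hcl
# PKI mount whose leases can never exceed one hour.
resource "vault_mount" "pki_cicd" {
  path                  = "pki-cicd"   # assumed mount path
  type                  = "pki"
  max_lease_ttl_seconds = 3600
}

# Role the runner uses to request a client cert at job start; the cert
# expires with the build. The domain below is a placeholder.
resource "vault_pki_secret_backend_role" "runner" {
  backend          = vault_mount.pki_cicd.path
  name             = "ci-runner"
  ttl              = "1h"
  max_ttl          = "1h"
  allowed_domains  = ["runners.internal.company.com"]
  allow_subdomains = true
  client_flag      = true    # client auth only —
  server_flag      = false   # these certs cannot impersonate servers
}
```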

Private Runners and Self-Hosted Agents

GitHub-hosted runners and public CI services share compute with other customers and have broad internet access. For sensitive workloads, self-hosted private runners in your own VPC are essential.

Consideration     | Public CI runners                    | Private self-hosted runners
Network isolation | ❌ Shared environment, broad egress  | ✅ Your VPC, controlled egress
Secret access     | ⚠️ Secrets injected as env vars      | ✅ Can use Vault agent sidecar, instance profiles
Compliance        | ❌ Data leaves your environment      | ✅ Data stays in your environment
Ephemeral         | ✅ Always clean                      | ✅ Use ephemeral runners (K8s pod per job)
Cost              | ✅ No infra to manage                | ⚠️ EC2/K8s costs + operational overhead
Audit logging     | ⚠️ Limited                           | ✅ Full VPC Flow Logs, CloudTrail, proxy logs

Use ephemeral Kubernetes runners for every job

Tools like actions-runner-controller (GitHub) and GitLab Runner on Kubernetes spin up a fresh pod for each CI job and destroy it when done. There is no persistent state on the runner, no build residue, no leaked environment from a previous job. Each job starts with a clean, minimal container.
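With the legacy actions-runner-controller CRDs, the ephemeral behaviour is declared on the runner spec. A sketch via the kubernetes provider's `kubernetes_manifest` resource — the repository, namespace, and replica count are placeholders, and newer ARC scale sets configure this differently:

```hcl
# One fresh pod per job: ARC deletes the pod after its single job and
# replaces it, so no state survives between builds.
# Repository and namespace below are illustrative placeholders.
resource "kubernetes_manifest" "ephemeral_runners" {
  manifest = {
    apiVersion = "actions.summerwind.dev/v1alpha1"
    kind       = "RunnerDeployment"
    metadata = {
      name      = "ephemeral-runners"
      namespace = "ci-runners"        # assumed namespace
    }
    spec = {
      replicas = 3
      template = {
        spec = {
          repository = "company/app"  # placeholder repo
          ephemeral  = true           # pod is discarded after one job
        }
      }
    }
  }
}
```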

How this might come up in interviews

Network security in pipelines appears in platform engineering and DevSecOps architect interviews. Expect scenario questions about blast radius reduction and defence-in-depth for build infrastructure.

Common questions:

  • How would you secure a CI/CD runner from network-level attacks?
  • What is the risk of a CI/CD VPC being peered with production?
  • How do you implement egress filtering for build agents?
  • A compromised runner is trying to reach an external IP. What controls would catch or block this?

Strong answer: Mentions VPC isolation between CI/CD and prod, egress allowlisting, ephemeral runners, VPC endpoints for cloud APIs, and logging proxy denials to SIEM.

Red flags: Not knowing what VPC peering is, thinking security groups alone are sufficient for egress control, or suggesting that private GitHub-hosted runners solve all network security problems.

Before you move on: can you answer these?

Your CI/CD VPC is peered with production. Why is this a security risk even if the runners have no production credentials?

VPC peering creates a network path. Even without credentials, a compromised runner can perform network reconnaissance (port scans, service discovery), exploit unauthenticated internal services, or act as a pivot point if it obtains credentials via lateral movement. Network access and credential access are separate attack vectors — you must restrict both. The principle of least privilege applies to network access, not just IAM.

What is the difference between using a Security Group egress rule and a forward proxy for CI/CD egress filtering?

Security Group egress rules filter by IP address and port — they cannot filter by domain name. An attacker can bypass them by pointing a malicious server at an allowed IP, or by using a CDN IP shared with legitimate services. A forward proxy (Squid, AWS Network Firewall) operates at the application layer and filters by domain name (HTTP Host header / TLS SNI), making it much harder to bypass with IP tricks. Proxy logs also show domain names, making anomaly detection far more effective.

From the books

Hacking Kubernetes (Andrew Martin & Michael Hausenblas, O'Reilly)

Chapter 3: Container and Pod-Level Security

Covers Kubernetes attack vectors including network policy bypass, lateral movement, and cluster escape. Essential reading for anyone securing K8s-based CI/CD.

🧠 Mental Model

💡 Analogy

A clean-room for sensitive work

⚡ Core Idea

Just as a pharmaceutical clean-room controls what goes in and out to prevent contamination, your CI/CD network controls what traffic can reach runners (nothing inbound) and what runners can reach (allowlisted destinations only). Compromise is contained by the network perimeter.

🎯 Why It Matters

Network controls are the last line of defence after application-layer controls fail. A compromised runner with unrestricted network access becomes a jumping-off point for your entire infrastructure. The same runner with allowlisted egress and no prod peering is a dead end for an attacker.
