Network Policies: Pod-to-Pod Firewalling

NetworkPolicy resources define ingress and egress rules for pod traffic. Without them, all pods can reach all pods. With a misconfigured policy, legitimate traffic is silently blocked with no error logs. Getting them right requires understanding label selectors and namespace isolation.

Relevant for: Mid-level, Senior, Staff
Why this matters at your level
Mid-level

Understand the default-allow model and what "deny all" means. Know how podSelector and namespaceSelector work in NetworkPolicy specs.
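A minimal sketch of how the two selectors interact (all names here are illustrative, not from the lesson): the top-level podSelector picks the pods the policy applies TO, while selectors under `from` pick which peers may send traffic.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-frontends      # illustrative name
  namespace: backend             # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: api                   # policy applies to pods labeled app=api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:         # combined in ONE entry with the podSelector
        matchLabels:             # below, so both must match: the peer must
          team: web              # be in a namespace labeled team=web ...
      podSelector:
        matchLabels:
          app: frontend          # ... AND itself be labeled app=frontend
```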

Senior

Design namespace isolation with default-deny + explicit allow rules. Always include DNS egress. Test policies with network connectivity verification (not just "kubectl apply works").

Staff

Implement cluster-wide zero-trust with Calico GlobalNetworkPolicy or Cilium ClusterwideNetworkPolicy. Design policy templates for microservice namespaces. Integrate with GitOps for policy-as-code enforcement.
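As a hedged sketch of what such a cluster-wide baseline might look like in Calico (the policy name and order value are illustrative; 169.254.169.254 is the standard cloud metadata endpoint):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-metadata-endpoint   # illustrative name
spec:
  order: 10                      # lower order = evaluated earlier
  selector: all()                # applies to every workload in the cluster
  types:
  - Egress
  egress:
  - action: Deny
    destination:
      nets:
      - 169.254.169.254/32       # block the cloud metadata endpoint (IMDS)
  - action: Allow                # everything else remains allowed
```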

Case Study: Production Outage -- NetworkPolicy Blocks DNS (2021)
T+0 -- deny-all NetworkPolicy (ingress and egress) applied to the application namespace

T+2m -- DNS lookups start timing out across all pods in the namespace

T+3m -- Service-to-service HTTP calls fail with "failed to resolve host" errors

T+20m -- Team confirms Services and Endpoints are correct; suspects the NetworkPolicy

T+25m -- Egress allow rule for port 53 to kube-system added; DNS restored

  • Impact: full service disruption
  • Root cause: the one rule everyone forgets -- DNS egress
  • Error logs mentioning NetworkPolicy: none

The question this raises

What does a default-deny NetworkPolicy actually block, and what implicit traffic does every namespace need to function that is easy to forget?

Test your assumption first

You apply a NetworkPolicy that allows ingress from podSelector: {app: frontend}. A pod with labels {app: frontend, version: v2} tries to connect. Is it allowed?
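For concreteness, the policy described in this check would look roughly like this (the policy name and target podSelector are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend    # assumed name
spec:
  podSelector: {}              # assumed target: all pods in the namespace
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # selects peers by label
```

Keep in mind that matchLabels matching is subset-based: a peer pod only needs to carry the listed labels, and extra labels do not exclude it.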

Lesson outline

What Network Policies Solve

Default Allow is a Security Anti-Pattern

By default, every pod in a Kubernetes cluster can reach every other pod. A single compromised pod can reach your database, other services, and cloud metadata endpoints. NetworkPolicy resources implement pod-to-pod firewalling -- but only if your CNI enforces them (Calico, Cilium) and only if you design them correctly.

Default deny + explicit allow

Apply a deny-all-ingress + deny-all-egress policy to a namespace, then add specific allow rules for known communication paths. This is the zero-trust baseline. Must include DNS egress to kube-system port 53, or pods cannot resolve service names.

Namespace isolation

Block cross-namespace traffic by default: only allow ingress from pods in the same namespace (a from entry with a podSelector and no namespaceSelector). Microservice team namespaces become security boundaries.
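A sketch of that same-namespace-only rule (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: team-a        # hypothetical team namespace
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector: {}      # no namespaceSelector: peers must be in THIS namespace
```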

Database tier isolation

Allow ingress to database pods only from specific app pods (e.g. a podSelector matching app=myapp); block all other access. Reduces the blast radius if any other pod is compromised.
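A sketch of the database-tier rule (the namespace, labels, and PostgreSQL port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-myapp-only
  namespace: production      # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: db                # policy applies to the database pods
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp         # only the app tier may connect
    ports:
    - port: 5432             # assumed PostgreSQL port
      protocol: TCP
```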

The System View: Policy Matching Model

Namespace: production

Pod: frontend (app=frontend)    Pod: api (app=api)    Pod: db (app=db)

NetworkPolicy: allow-frontend-to-api
  podSelector: {app: api}          <- this policy applies TO api pods
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector: {app: frontend} <- allow FROM frontend pods only
  Result: ONLY frontend can send ingress to api; all others blocked

NetworkPolicy: deny-all-egress (on frontend)
  podSelector: {app: frontend}
  policyTypes: [Egress]
  # no egress rules = block ALL egress
  MISSING: egress to kube-dns port 53 = DNS broken!

Fix -- add DNS egress:
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - port: 53
      protocol: UDP

NetworkPolicy applies TO the selected pods and controls who can reach them; forgetting DNS egress silently breaks service discovery

Common NetworkPolicy Mistakes

Situation: Deny-all egress without DNS allow
Before: All pods in the namespace lose DNS resolution; service-to-service calls fail with "failed to resolve host"; logs never mention NetworkPolicy.
After: An explicit egress rule allows UDP/TCP port 53 to the kube-system namespace; DNS works; all other egress stays blocked.

Situation: namespaceSelector without namespace labels
Before: The NetworkPolicy's namespaceSelector matches on labels, but the target namespace has no matching labels, so the policy never matches and traffic is blocked unexpectedly.
After: Label the namespace (e.g. kubectl label namespace monitoring team=monitoring) or rely on the automatic kubernetes.io/metadata.name label (Kubernetes 1.21+).

How NetworkPolicy Rules Are Evaluated

1. A packet arrives at a pod (ingress) or leaves a pod (egress).

2. The CNI dataplane checks whether any NetworkPolicy objects select this pod.

3. If no policies select the pod, all traffic is allowed (default allow).

4. If any policy selects the pod, only traffic matching at least one allow rule is permitted.

5. Ingress and egress are evaluated independently -- a pod can have ingress policies but no egress policies.

6. Within a rule, multiple from/to entries are OR-ed; selectors within a single from/to entry are AND-ed.
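The OR/AND distinction in the last step is easy to get wrong in YAML. These two spec fragments differ only in whether the selectors sit in one `from` entry or two (the namespace and labels are illustrative):

```yaml
# Two entries under "from" = OR: allow peers from the monitoring
# namespace OR pods labeled app=frontend in this namespace.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
  - podSelector:
      matchLabels:
        app: frontend

# One entry combining both selectors = AND: the peer must be in the
# monitoring namespace AND carry the app=frontend label.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
    podSelector:
      matchLabels:
        app: frontend
```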

namespace-isolation.yaml
# Step 1: Deny all ingress and egress by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # empty podSelector selects ALL pods in the namespace
  policyTypes:
  - Ingress
  - Egress
---
# Step 2: Allow DNS egress (always required). Apply this before or
# alongside the deny-all policy; forgetting it blocks all service discovery.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP

What Breaks in Production: Blast Radius

NetworkPolicy failure modes

  • Deny-all blocks DNS (silent) — DNS lookups fail silently as "connection timeout." App logs show "failed to resolve" not "NetworkPolicy blocked." Always add DNS egress (port 53 UDP+TCP to kube-system) when applying deny-all egress policies.
  • Policy with wrong label selector — A typo in a podSelector label matches no pods -- NetworkPolicy silently does nothing (or blocks everything, depending on polarity). Always test with kubectl exec + nc/curl from blocked and allowed pods after applying.
  • Health probe traffic blocked — kubelet sends probes to pod IP directly. Some CNI implementations exempt localhost/node traffic; others enforce NetworkPolicy on probes. Verify liveness/readiness probes still work after adding ingress policies.
  • CNI not enforcing (Flannel) — NetworkPolicy objects exist but CNI (Flannel) ignores them. Team believes they have zero-trust; they do not. Verify with a test that explicitly confirms traffic is blocked where it should be.

Deny-all egress without DNS allow breaks all service discovery

Bug
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes: [Egress]
  # No egress rules = deny ALL egress
  # DNS (port 53) also blocked -> service discovery broken
  # All HTTP calls fail with "failed to resolve host"
Fix
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress-except-dns
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # Add more specific egress rules below

Any deny-all egress policy MUST include DNS egress as the first rule. Without it, all pods lose service discovery immediately. Add specific service egress rules after the DNS rule.

Decision Guide: Network Policy Strategy

Do you need pod-to-pod network segmentation (compliance, zero-trust)?
  Yes: Apply a default-deny policy per namespace, add explicit allow rules, and verify with connectivity tests.
  No: No NetworkPolicy needed for simple trusted internal clusters.

Do you need cluster-wide baseline policies (block the metadata endpoint, allow monitoring)?
  Yes: Use Calico GlobalNetworkPolicy or Cilium ClusterwideNetworkPolicy -- these apply across all namespaces.
  No: Per-namespace NetworkPolicy is sufficient for most use cases.

Do you need L7 policies (allow GET /api but not DELETE /api)?
  Yes: NetworkPolicy cannot do L7; use Istio AuthorizationPolicy with mTLS for HTTP-level access control.
  No: Standard L3/L4 NetworkPolicy is sufficient.
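As a hedged sketch of the L7 branch using Istio's AuthorizationPolicy (requires Istio sidecars; the workload label, paths, and policy name are illustrative):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: api-read-only        # illustrative name
  namespace: production
spec:
  selector:
    matchLabels:
      app: api               # applies to the api workload's sidecars
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]     # allow reads only
        paths: ["/api/*"]
  # With an ALLOW policy present, requests matching no rule
  # (e.g. DELETE /api) are denied.
```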

Cost and Complexity: Policy Patterns

Pattern | Security posture | Operational cost | Risk | When to use
No policies (default) | No segmentation | None | Full lateral movement | Dev/local clusters only
Ingress-only isolation | Controls who can call each service | Low | Egress unrestricted | Basic microservice protection
Default deny + explicit allows | Zero-trust pod networking | High (enumerate all flows) | Misconfigured rules cause outages | Production with compliance requirements
Calico GlobalNetworkPolicy | Cluster-wide baseline | Medium | Misconfigured global rules block everything | Multi-tenant clusters
Cilium L7 policy | HTTP/gRPC method-level control | High | Complexity of L7 rule management | Service-mesh-level zero trust

Exam Answer vs. Production Reality


Default allow vs deny

📖 What the exam expects

By default, Kubernetes allows all pod-to-pod traffic. A NetworkPolicy with an empty podSelector, policyTypes: [Ingress], and no ingress rules denies all ingress to every pod in the namespace.


How this might come up in interviews

Security architecture questions about Kubernetes network segmentation and debugging questions about mysterious connectivity failures after policy changes.

Common questions:

  • What is the default network policy in a Kubernetes namespace?
  • What does an empty podSelector in a NetworkPolicy mean?
  • How would you implement namespace isolation in Kubernetes?
  • What traffic does a deny-all policy accidentally block that you must remember to allow?

Strong answer: Mentions Calico GlobalNetworkPolicy for cluster-wide baseline rules, testing policies with kubectl exec and nc/curl before applying to production, and using Cilium Hubble for visualizing blocked flows.

Red flags: Applying a deny-all policy without testing connectivity, or not knowing that NetworkPolicy requires a supporting CNI.

Related concepts

Explore topics that connect to this one.

  • CNI Plugins: How Pods Get Their IPs
  • Network Segmentation & Zero Trust in Kubernetes
  • Kubernetes Authentication

Suggested next

Often learned after this topic.

DNS & Service Discovery: CoreDNS and the 5x Query Problem



Continue learning

DNS & Service Discovery: CoreDNS and the 5x Query Problem
