NetworkPolicy resources define ingress and egress rules for pod traffic. Without them, all pods can reach all pods. With a misconfigured policy, legitimate traffic is silently blocked with no error logs. Getting them right requires understanding label selectors and namespace isolation.
Understand the default-allow model and what "deny all" means. Know how podSelector and namespaceSelector work in NetworkPolicy specs.
Design namespace isolation with default-deny + explicit allow rules. Always include DNS egress. Test policies with network connectivity verification (not just "kubectl apply works").
Implement cluster-wide zero-trust with Calico GlobalNetworkPolicy or Cilium CiliumClusterwideNetworkPolicy. Design policy templates for microservice namespaces. Integrate with GitOps for policy-as-code enforcement.
deny-all ingress NetworkPolicy applied to application namespace
DNS lookups start timing out across all pods in the namespace
Service-to-service HTTP calls fail with "failed to resolve host" errors
Team confirms Services and Endpoints are correct; suspects NetworkPolicy
egress allow rule for port 53 to kube-system added; DNS restored
The question this raises
What does a default-deny NetworkPolicy actually block, and what implicit traffic does every namespace need to function that is easy to forget?
You apply a NetworkPolicy that allows ingress from podSelector: {app: frontend}. A pod with labels {app: frontend, version: v2} tries to connect. Is it allowed?
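The quiz above hinges on how matchLabels works: it is a subset test, so extra labels on the connecting pod are irrelevant. A tiny illustrative model of that matching rule (a hypothetical helper, not Kubernetes source code):

```python
# Hypothetical sketch of matchLabels semantics: a selector matches a pod
# if every selector key/value pair is present on the pod. Extra pod
# labels do not prevent a match.
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    """Return True if every selector label is present on the pod."""
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

policy_selector = {"app": "frontend"}
pod = {"app": "frontend", "version": "v2"}  # extra label: version=v2
print(selector_matches(policy_selector, pod))  # True: the connection is allowed
```

So the answer is yes: the v2 pod is allowed, because {app: frontend} is a subset of its labels.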
Lesson outline
Default Allow is a Security Anti-Pattern
By default, every pod in a Kubernetes cluster can reach every other pod. A single compromised pod can reach your database, other services, and cloud metadata endpoints. NetworkPolicy resources implement pod-to-pod firewalling -- but only if your CNI enforces them (Calico, Cilium) and only if you design them correctly.
Default deny + explicit allow
Use for: Apply a deny-all-ingress + deny-all-egress policy to a namespace, then add specific allow rules for known communication paths. Zero-trust baseline. Must include DNS egress to kube-system:53 or pods cannot resolve service names.
Namespace isolation
Use for: Block cross-namespace traffic by default. Only allow ingress from pods in the same namespace (no namespaceSelector). Microservice team namespaces become security boundaries.
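A sketch of such a same-namespace isolation policy (the namespace name is an illustrative assumption):

```yaml
# Illustrative sketch: allow ingress only from pods in the same namespace.
# An empty podSelector inside 'from' matches every pod in THIS namespace;
# with no namespaceSelector, cross-namespace traffic is not admitted.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a          # hypothetical team namespace
spec:
  podSelector: {}            # applies to every pod in team-a
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}    # any pod in this namespace only
```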
Database tier isolation
Use for: Allow ingress to database pods only from specific app pods (podSelector: app=myapp). Block all other access. Reduces blast radius if any other pod is compromised.
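The database-tier rule described above might look like this (names and the port are illustrative assumptions):

```yaml
# Sketch: only app=myapp pods may reach app=db pods, and only on the
# database port. All other ingress to the db tier is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-myapp-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: db                # this policy applies TO the database pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: myapp     # only the application tier may connect
      ports:
        - port: 5432         # PostgreSQL here; adjust for your database
          protocol: TCP
```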
Namespace: production
Pod: frontend (app=frontend) Pod: api (app=api) Pod: db (app=db)
NetworkPolicy: allow-frontend-to-api
podSelector: {app: api} <- this policy applies TO api pods
policyTypes: [Ingress]
ingress:
  - from:
      - podSelector: {app: frontend} <- allow FROM frontend pods only
Result: ONLY frontend can send ingress to api; all others blocked
NetworkPolicy: deny-all-egress (on frontend)
podSelector: {app: frontend}
policyTypes: [Egress]
# no egress rules = block ALL egress
MISSING: egress to kube-dns port 53 = DNS broken!
Fix -- add DNS egress:
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
    ports:
      - port: 53
        protocol: UDP
NetworkPolicy applies TO the selected pods and controls who can reach them; forgetting DNS egress silently breaks service discovery.
Common NetworkPolicy Mistakes
Deny-all egress without DNS allow
Symptom: All pods in the namespace lose DNS resolution; service-to-service calls fail with "failed to resolve host"; logs contain no mention of NetworkPolicy.
Fix: An explicit egress rule allows UDP/TCP port 53 to the kube-system namespace; DNS works; all other egress stays blocked.
namespaceSelector without namespace labels
Symptom: The NetworkPolicy namespaceSelector matches on labels, but the target namespace carries none of them; the policy never matches and traffic is blocked unexpectedly.
Fix: Add a custom label to the namespace (e.g. kubectl label namespace monitoring team=platform) and select on it, or match the kubernetes.io/metadata.name label that Kubernetes sets automatically on every namespace since 1.21.
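For example, a namespaceSelector can match the automatic label without any manual labeling (rule fragment only; this would sit inside a full NetworkPolicy spec):

```yaml
# Sketch: since Kubernetes 1.21, every namespace automatically carries
# kubernetes.io/metadata.name=<namespace-name>, so this selector always
# has a label to match.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring
```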
NetworkPolicy evaluation logic
1. A packet arrives at a pod (ingress) or leaves a pod (egress)
2. CNI dataplane checks: are there any NetworkPolicy objects that select this pod?
3. If NO policies select this pod: all traffic allowed (default allow)
4. If ANY policies select this pod: ONLY traffic matching at least one allow rule is permitted
5. Ingress and Egress are evaluated independently -- a pod can have ingress policies but no egress policies
6. Within a rule, multiple from/to entries are OR-ed; items within one from/to are AND-ed
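The OR/AND distinction in step 6 is the one most often gotten wrong. It can be modeled with a small sketch (simplified model, an assumption for illustration: it ignores ipBlock, ports, and matchExpressions):

```python
# Illustrative model of how 'from' entries combine: separate entries in
# the list are OR-ed, while podSelector + namespaceSelector inside ONE
# entry are AND-ed together.
def entry_allows(entry, src_pod_labels, src_ns_labels):
    # Conditions within a single from-entry are AND-ed.
    checks = []
    if "podSelector" in entry:
        checks.append(all(src_pod_labels.get(k) == v
                          for k, v in entry["podSelector"].items()))
    if "namespaceSelector" in entry:
        checks.append(all(src_ns_labels.get(k) == v
                          for k, v in entry["namespaceSelector"].items()))
    return all(checks)

def rule_allows(from_entries, src_pod_labels, src_ns_labels):
    # Separate entries in the 'from' list are OR-ed.
    return any(entry_allows(e, src_pod_labels, src_ns_labels)
               for e in from_entries)

# Two separate entries (OR): app=frontend pods, OR any pod in a
# namespace labeled team=payments.
or_rule = [{"podSelector": {"app": "frontend"}},
           {"namespaceSelector": {"team": "payments"}}]

# One combined entry (AND): app=frontend pods IN team=payments namespaces.
and_rule = [{"podSelector": {"app": "frontend"},
             "namespaceSelector": {"team": "payments"}}]

pod, ns = {"app": "frontend"}, {"team": "billing"}
print(rule_allows(or_rule, pod, ns))   # True  (matches the first entry)
print(rule_allows(and_rule, pod, ns))  # False (namespace check fails)
```

A stray `-` in YAML turns an AND rule into an OR rule, which is why indentation mistakes in `from` lists silently widen access.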
# Step 1: Deny all ingress and egress by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # empty podSelector selects ALL pods in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Step 2: Allow DNS egress (always required)
# Apply this before or alongside deny-all; forgetting it blocks all service discovery
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
NetworkPolicy failure modes
Deny-all egress without DNS allow breaks all service discovery
# BROKEN: deny-all egress with no DNS allowance
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes: [Egress]
  # No egress rules = deny ALL egress
  # DNS (port 53) also blocked -> service discovery broken
  # All HTTP calls fail with "failed to resolve host"
---
# FIXED: same deny-all, but with DNS egress allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress-except-dns
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
  # Add more specific egress rules below

Any deny-all egress policy MUST include DNS egress as the first rule. Without it, all pods lose service discovery immediately. Add specific service egress rules after the DNS rule.
| Pattern | Security posture | Operational cost | Risk | When to use |
|---|---|---|---|---|
| No policies (default) | No segmentation | None | Full lateral movement | Dev/local clusters only |
| Ingress-only isolation | Controls who can call each service | Low | Egress unrestricted | Basic microservice protection |
| Default deny + explicit allows | Zero-trust pod networking | High (enumerate all flows) | Misconfigured rules cause outages | Production with compliance requirements |
| Calico GlobalNetworkPolicy | Cluster-wide baseline | Medium | Misconfigured global rules block everything | Multi-tenant clusters |
| Cilium L7 policy | HTTP/gRPC method-level control | High | Complexity of L7 rule management | Service mesh-level zero trust |
Default allow vs deny
📖 What the exam expects
By default, Kubernetes allows all pod-to-pod traffic. A NetworkPolicy with an empty podSelector, policyTypes: [Ingress], and no ingress rules denies all ingress to all pods in the namespace.
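That exam-style deny-all-ingress policy in full (a minimal sketch; apply it to the namespace you want to lock down):

```yaml
# Empty podSelector selects every pod in the namespace; listing Ingress
# with no ingress rules means no ingress traffic is allowed at all.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes: [Ingress]
```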
This topic appears in security architecture questions about Kubernetes network segmentation and in debugging questions about mysterious connectivity failures after policy changes.
Common questions:
Strong answer: Mentions Calico GlobalNetworkPolicy for cluster-wide baseline rules, testing policies with kubectl exec and nc/curl before applying to production, and using Cilium Hubble for visualizing blocked flows.
Red flags: Applying a deny-all policy without testing connectivity, or not knowing that NetworkPolicy requires a supporting CNI.
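The "test connectivity with kubectl exec and nc/curl" habit mentioned above can be sketched as commands (namespace, Service, pod, and port names here are illustrative assumptions; substitute your own):

```shell
# 1. Probe TCP reachability from a client pod that SHOULD be allowed:
kubectl -n production exec deploy/frontend -- nc -zv -w 3 api 8080

# 2. Confirm DNS egress still works after applying deny-all policies:
kubectl -n production exec deploy/frontend -- \
  nslookup api.production.svc.cluster.local

# 3. Probe from a throwaway pod that SHOULD be blocked, to verify the
#    policy actually denies traffic (expect a timeout, not a connect):
kubectl -n production run np-probe --rm -it --image=busybox:1.36 \
  --restart=Never -- nc -zv -w 3 api 8080
# (If your image's nc lacks -z, use wget with a short timeout instead.)
```

Running both a should-pass and a should-fail probe catches the two failure modes this lesson covers: policies that block too much and policies that never match.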
Related concepts
DNS & Service Discovery: CoreDNS and the 5x Query Problem