Multi-tenant clusters serve multiple teams or customers on shared infrastructure. Soft multi-tenancy (namespace isolation) is good enough for trusted teams. Hard multi-tenancy (untrusted tenants) requires node isolation, strict RBAC, admission controls, and network segmentation.
- Understand soft vs hard multi-tenancy. Design namespace-per-tenant with RBAC, NetworkPolicy, ResourceQuota, and LimitRange. Disable auto-mounted SA tokens for customer workloads.
- Evaluate when to use separate clusters vs shared clusters for tenants. Design virtual clusters per tenant with Cluster API or vCluster. Architect node pools with taints for tenant isolation at the OS layer.
1. Attacker uses an SSRF vulnerability to call the K8s API with an auto-mounted SA token
2. Node listing reveals cluster topology and adjacent tenant namespaces
3. A misconfigured ClusterRole allows reading Secrets from an adjacent tenant namespace
4. Adjacent tenant database credentials exposed; GDPR breach notification required
5. All SA tokens disabled; hard multi-tenancy investigation begins; enterprise customer churned
The question this raises
What is the difference between soft and hard multi-tenancy in Kubernetes, and when is namespace-level isolation insufficient for security boundaries?
You are building a SaaS platform where customer workloads run in your Kubernetes cluster. What is the minimum isolation you should implement for each customer?
Lesson outline
Two Very Different Isolation Models
Soft multi-tenancy (trusted internal teams) and hard multi-tenancy (external customers or untrusted workloads) require fundamentally different approaches. Soft multi-tenancy uses namespace-scoped RBAC and NetworkPolicy. Hard multi-tenancy requires per-tenant node pools, disabled SA tokens, strict admission control, and potentially virtual clusters (vCluster).
Soft: Namespace-per-team
Use for: Internal teams in a single organization. RBAC prevents team-A from modifying team-B resources. NetworkPolicy blocks cross-team traffic by default. ResourceQuota prevents one team from consuming all cluster resources. Trust assumption: all teams are employees.
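The namespace-scoped RBAC this implies can be sketched roughly as follows (the role and group names such as `team-a-dev` are illustrative, not from the lesson):

```yaml
# A Role (not ClusterRole) scoped to team-a's namespace: members can
# manage workloads there but cannot see anything outside their namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-dev
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-binding
  namespace: team-a
subjects:
- kind: Group
  name: team-a               # illustrative group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-dev
  apiGroup: rbac.authorization.k8s.io
```

Because a Role can only grant permissions within its own namespace, this is what makes the namespace an authorization boundary: team-A simply has no verbs on team-B's resources.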
Hard: Dedicated node pool per tenant
Use for: External customers or untrusted workloads. Each tenant gets dedicated nodes with taints + tolerations. No pod co-location between tenants at node level. Prevents kernel exploit lateral movement across tenant boundary.
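The taint-plus-toleration pairing could look like this sketch (pool names, labels, and the image are illustrative; the taint itself is applied at node-pool creation or via `kubectl taint nodes`):

```yaml
# Nodes in pool A carry the taint tenant=a:NoSchedule and the label tenant=a.
# Tenant A's pod template tolerates that taint AND selects the pool:
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app
  namespace: tenant-a
spec:
  tolerations:
  - key: "tenant"
    operator: "Equal"
    value: "a"
    effect: "NoSchedule"
  nodeSelector:
    tenant: "a"              # nodes must also be labeled tenant=a
  containers:
  - name: app
    image: registry.example.com/tenant-a/app:latest   # illustrative image
```

Note the toleration alone only *permits* scheduling onto tainted nodes; the nodeSelector (or node affinity) is what actually pins the pod to the tenant's pool, and an admission policy should enforce that tenants cannot add other tenants' tolerations.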
vCluster: Virtual cluster per tenant
Use for: Each tenant gets a virtual K8s API server in their namespace. Full RBAC, CRD support, standard tooling. Host cluster controls actual node scheduling. Better isolation than namespace, lower cost than separate full clusters.
```
Soft Multi-Tenancy (Internal Teams):

  Cluster
    Namespace: team-a                      Namespace: team-b
      RBAC: team-a SA                        RBAC: team-b SA
      NetworkPolicy: deny cross-ns           NetworkPolicy: deny cross-ns
      ResourceQuota: cpu: 10, mem: 20Gi      ResourceQuota: cpu: 10, mem: 20Gi
    Shared nodes (co-located pods)

Hard Multi-Tenancy (External Customers):

  Cluster
    Tenant A namespace                     Tenant B namespace
      Node pool A (taint: tenant=a)          Node pool B (taint: tenant=b)
      [pods only on A nodes]                 [pods only on B nodes]
      No SA token auto-mount                 No SA token auto-mount
      Deny-all NetworkPolicy                 Deny-all NetworkPolicy
    Node-level kernel isolation (different physical nodes)

vCluster:

  Host Namespace: tenant-a
    vCluster A (virtual API server, virtual etcd)
      Syncs only Pods/Services to host namespace
      Tenant gets: full K8s API, own RBAC, own CRDs
```

Each tier adds more isolation at the cost of operational complexity; choose based on tenant trust level.
Multi-Tenancy Design Evolution
1. SaaS platform with namespace-only isolation
   - Risk: SSRF in a customer workload + auto-mounted SA token -> K8s API access -> cross-tenant namespace enumeration -> adjacent tenant Secret read
   - Mitigation: `automountServiceAccountToken: false`; deny-all NetworkPolicy; Kyverno blocks listing cross-namespace resources; vCluster per tenant considered for API server isolation
2. Shared node pool for all tenants
   - Risk: Tenant A kernel exploit (e.g., container escape) can pivot to Tenant B pods on the same node via the shared kernel; host-level persistence affects all tenants
   - Mitigation: Dedicated node pools per tenant with taints; no cross-tenant pod co-location; kernel exploit limited to the attacker's own nodes
Setting up a tenant namespace with soft multi-tenancy controls
1. Create the namespace: kubectl create ns tenant-xyz
2. Apply a ResourceQuota: limit total CPU, memory, and pod count for the namespace
3. Apply a LimitRange: default requests/limits for pods without explicit settings
4. Create a ServiceAccount with automountServiceAccountToken: false
5. Create a Role + RoleBinding: the tenant SA gets only needed permissions within its namespace
6. Apply deny-all-ingress + deny-all-egress + DNS-egress NetworkPolicies
7. Label the namespace with PodSecurity: enforce: baseline (minimum)
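Step 6's NetworkPolicies could look like this sketch (it assumes kube-dns runs in kube-system with the standard `k8s-app: kube-dns` label, and that the cluster is recent enough to auto-apply the `kubernetes.io/metadata.name` namespace label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: tenant-xyz
spec:
  podSelector: {}                      # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]   # no rules listed -> deny all traffic
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: tenant-xyz
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:                     # AND-ed with the namespaceSelector
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

NetworkPolicies in the same namespace are additive, so the DNS-egress policy punches exactly one hole in the deny-all baseline; any further allowed destinations need their own explicit policies.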
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-xyz
  labels:
    # PSS baseline prevents privileged pods and container escape fields
    pod-security.kubernetes.io/enforce: baseline
    tenant: xyz
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-xyz
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"
---
# automountServiceAccountToken: false -- prevents the SA token
# from becoming a pivot to the K8s API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant-app
  namespace: tenant-xyz
automountServiceAccountToken: false
```
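The LimitRange from step 3 is not shown above; a minimal sketch (the default values here are illustrative, not from the lesson):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-defaults
  namespace: tenant-xyz
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container omits resource requests
      cpu: "100m"
      memory: "128Mi"
    default:               # applied when a container omits resource limits
      cpu: "500m"
      memory: "512Mi"
```

Without a LimitRange, pods that omit requests/limits are unconstrained individually even though the ResourceQuota caps the namespace total; worse, once a quota sets limits, pods without explicit limits are rejected unless a LimitRange supplies defaults.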
Multi-tenancy failure modes
Namespace isolation without disabling SA token auto-mount
```yaml
# BAD: namespace isolation without disabling SA token auto-mount
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-xyz
# No SA token disable -- the default SA auto-mounts a token in every pod.
# SSRF vulnerability in a tenant app:
#   curl https://kubernetes.default.svc/api/v1/namespaces \
#     -H "Authorization: Bearer $(cat /var/run/secrets/.../token)"
#   -> Returns a list of ALL namespaces visible to the SA
#   -> Cross-tenant namespace discovery
```

The fix:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: tenant-xyz
automountServiceAccountToken: false   # patch the default SA
---
# For pods that DO need API access: explicit mount with minimal permissions
spec:
  serviceAccountName: tenant-app-specific
  automountServiceAccountToken: true  # only on SAs that need it
```

Disabling token auto-mount on the default ServiceAccount prevents any pod in the namespace from accidentally getting K8s API access. Explicitly enable it only on the specific SAs that need it, with the minimum required permissions.
| Model | Isolation strength | Cost | Operational overhead | When to use |
|---|---|---|---|---|
| Namespace only | Low (no security boundary) | Lowest | None | Never for production multi-tenancy |
| Namespace + RBAC + NetPol + Quota | Medium (soft) | Low | Low | Internal teams, trusted users |
| Dedicated node pools per tenant | High (kernel-separate) | Medium-high | Medium | Untrusted workloads, compliance |
| vCluster per tenant | High (virtual API server) | Medium | Medium | SaaS with K8s-native customers |
| Separate clusters per tenant | Highest | Very high | Very high | Maximum compliance, largest enterprise customers |
Soft vs hard multi-tenancy
📖 What the exam expects
Soft: namespace isolation for trusted internal teams. Hard: stronger isolation for untrusted external customers (different nodes, strict RBAC, no SA tokens, NetworkPolicy deny-all).
Where you'll see this: SaaS architecture design questions, and platform engineering interview rounds about shared cluster design.
Strong answer: Mentions vCluster for virtual cluster isolation, dedicated node pools with taints for node-level separation, and Cluster API for programmatic multi-cluster management at scale.
Red flags: Thinking namespaces provide security isolation by default, or not knowing about auto-mounted SA tokens as a cross-tenant pivot vector.