© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Kubernetes Multi-Tenancy: Sharing Clusters Safely

Multi-tenant clusters serve multiple teams or customers on shared infrastructure. Soft multi-tenancy (namespace isolation) is good enough for trusted teams. Hard multi-tenancy (untrusted tenants) requires node isolation, strict RBAC, admission controls, and network segmentation.

Relevant for: Senior, Staff
Why this matters at your level
Senior

Understand soft vs hard multi-tenancy. Design namespace-per-tenant with RBAC, NetworkPolicy, ResourceQuota, LimitRange. Disable auto-mount tokens for customer workloads.

Staff

Evaluate when to use separate clusters vs shared clusters for tenants. Design Cluster API or vCluster for virtual cluster per tenant. Architect node pools with taints for tenant isolation at the OS layer.

~3 min read
Incident timeline: Tenant Escape -- Shared Kubernetes Cluster -- SaaS Platform -- 2021

T+0

Attacker uses SSRF vulnerability to call K8s API with auto-mounted SA token

T+10m

Node listing reveals cluster topology and adjacent tenant namespaces

T+20m

Misconfigured ClusterRole allows reading Secrets from adjacent tenant namespace

T+1h

Adjacent tenant database credentials exposed; GDPR breach notification required

T+1d

All SA tokens disabled; hard multi-tenancy investigation begins; enterprise customer churned

  • Breach notification required
  • Initial foothold leading to cross-tenant access
  • Auto-mounted token enabled the API pivot

The question this raises

What is the difference between soft and hard multi-tenancy in Kubernetes, and when is namespace-level isolation insufficient for security boundaries?

Test your assumption first

You are building a SaaS platform where customer workloads run in your Kubernetes cluster. What is the minimum isolation you should implement for each customer?

Lesson outline

What Multi-Tenancy Solves

Two Very Different Isolation Models

Soft multi-tenancy (trusted internal teams) and hard multi-tenancy (external customers or untrusted workloads) require fundamentally different approaches. Soft multi-tenancy uses namespace-scoped RBAC and NetworkPolicy. Hard multi-tenancy requires per-tenant node pools, disabled SA tokens, strict admission control, and potentially virtual clusters (vCluster).

Soft: Namespace-per-team

Use for: Internal teams in a single organization. RBAC prevents team-A from modifying team-B resources. NetworkPolicy blocks cross-team traffic by default. ResourceQuota prevents one team from consuming all cluster resources. Trust assumption: all teams are employees.

Hard: Dedicated node pool per tenant

Use for: External customers or untrusted workloads. Each tenant gets dedicated nodes with taints + tolerations. No pod co-location between tenants at node level. Prevents kernel exploit lateral movement across tenant boundary.
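As a sketch, dedicated node pools are usually enforced with a taint on the pool plus a matching toleration and nodeSelector on tenant pods. The node label, taint key, namespace, and image name below are illustrative assumptions, not values from this lesson:

```yaml
# Assumes nodes in tenant A's pool were tainted and labeled, e.g.:
#   kubectl taint nodes <node> tenant=a:NoSchedule
#   kubectl label nodes <node> tenant=a
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app
  namespace: tenant-a
spec:
  nodeSelector:
    tenant: a                # schedule only onto tenant A's labeled nodes
  tolerations:
    - key: "tenant"
      operator: "Equal"
      value: "a"
      effect: "NoSchedule"   # permit scheduling despite the pool's taint
  containers:
    - name: app
      image: registry.example.com/tenant-a/app:1.0
```

Note that the taint alone only keeps other tenants out; the nodeSelector is what keeps tenant A's pods in. Both are needed for strict pool assignment.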

vCluster: Virtual cluster per tenant

Use for: Each tenant gets a virtual K8s API server in their namespace. Full RBAC, CRD support, standard tooling. Host cluster controls actual node scheduling. Better isolation than namespace, lower cost than separate full clusters.

The System View: Isolation Layers

Soft Multi-Tenancy (Internal Teams):
Cluster
  Namespace: team-a    Namespace: team-b
  RBAC: team-a SA      RBAC: team-b SA
  NetworkPolicy:       NetworkPolicy:
    deny cross-ns        deny cross-ns
  ResourceQuota:       ResourceQuota:
    cpu: 10, mem: 20Gi   cpu: 10, mem: 20Gi
  Shared nodes (co-located pods)

Hard Multi-Tenancy (External Customers):
Cluster
  Tenant A namespace           Tenant B namespace
  Node pool A (taint: tenant=a) Node pool B (taint: tenant=b)
    [pods only on A nodes]       [pods only on B nodes]
  No SA token auto-mount       No SA token auto-mount
  Deny-all NetworkPolicy       Deny-all NetworkPolicy
  Node-level kernel isolation  (different physical nodes)

vCluster:
  Host Namespace: tenant-a
    vCluster A (virtual API server, virtual etcd)
    Syncs only Pods/Services to host namespace
    Tenant gets: full K8s API, own RBAC, own CRDs

Each tier adds more isolation at the cost of operational complexity; choose based on tenant trust level

Multi-Tenancy Design Evolution

Situation: SaaS platform with namespace-only isolation

Before: SSRF in customer workload + auto-mounted SA token -> K8s API access -> cross-tenant namespace enumeration -> adjacent tenant Secret read

After: automountServiceAccountToken: false; deny-all NetworkPolicy; Kyverno blocks listing cross-namespace resources; vCluster per tenant considered for API server isolation

Situation: Shared node pool for all tenants

Before: Tenant A kernel exploit (e.g., container escape) can pivot to Tenant B pods on the same node via the shared kernel; host-level persistence affects all tenants

After: Dedicated node pools per tenant with taints; no cross-tenant pod co-location; kernel exploit limited to the attacker's own nodes

How Namespace Isolation Is Configured

Setting up a tenant namespace with soft multi-tenancy controls:

1. Create the namespace: kubectl create ns tenant-xyz
2. Apply a ResourceQuota: limit total CPU, memory, and pod count for the namespace
3. Apply a LimitRange: default requests/limits for pods without explicit settings
4. Create a ServiceAccount with automountServiceAccountToken: false
5. Create a Role + RoleBinding: the tenant SA gets only the permissions it needs within its namespace
6. Apply deny-all-ingress + deny-all-egress + DNS-egress NetworkPolicy
7. Label the namespace with Pod Security admission: enforce: baseline (minimum)
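Step 3's LimitRange is not shown in tenant-namespace.yaml. A minimal sketch could look like the following; the name and default values are illustrative assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-defaults      # illustrative name
  namespace: tenant-xyz
spec:
  limits:
    - type: Container
      default:               # applied as the limit when a container sets none
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:        # applied as the request when a container sets none
        cpu: "100m"
        memory: "128Mi"
```

Without a LimitRange, pods that omit requests/limits are only bounded by the namespace ResourceQuota in aggregate, not per container.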

tenant-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-xyz
  labels:
    # PSS baseline prevents privileged pods and container escape fields
    pod-security.kubernetes.io/enforce: baseline
    tenant: xyz
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-xyz
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "20"
---
# automountServiceAccountToken: false -- prevents the SA token from becoming a pivot to the K8s API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant-app
  namespace: tenant-xyz
automountServiceAccountToken: false
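The deny-all NetworkPolicy from step 6 is not part of tenant-namespace.yaml. A sketch might look like the following; the policy names are illustrative assumptions:

```yaml
# Default-deny: an empty podSelector matches every pod in the namespace,
# and listing a policyType with no rules denies that direction entirely.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all     # illustrative name
  namespace: tenant-xyz
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# DNS exception: allow egress to kube-system on port 53 so pods can resolve names.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress     # illustrative name
  namespace: tenant-xyz
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

NetworkPolicies are additive, so the DNS policy punches a hole in the deny-all baseline without otherwise weakening it.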

What Breaks in Production: Blast Radius

Multi-tenancy failure modes

  • Auto-mounted SA token enables K8s API pivot — Default SA token in every pod. SSRF or RCE in customer workload -> uses token to call K8s API -> enumerates cluster topology -> finds misconfigurations -> cross-tenant access. Always disable auto-mount for customer workloads.
  • ResourceQuota not set -- noisy neighbor — One tenant deploys 200 pods consuming all cluster CPU. Other tenants see degraded performance. ResourceQuota prevents one tenant from exhausting cluster resources. Set at namespace creation, not reactively after incidents.
  • Shared nodes allow kernel-level lateral movement — Container escape CVE gives attacker code execution on the node host. All tenants co-located on that node are potentially accessible. For hard multi-tenancy, never co-locate tenant workloads on the same nodes.
  • ClusterRole visible to tenant SA — Tenant SA with "list nodes" permission reveals cluster topology including other tenant namespace names. RBAC should grant only namespace-scoped roles. No ClusterRole access for tenant SAs.
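A namespace-scoped alternative to a ClusterRole is sketched below; the role name and resource list are illustrative assumptions. Because the Role lives inside tenant-xyz, the SA cannot list nodes or read anything in another tenant's namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-app-role      # illustrative name
  namespace: tenant-xyz
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]  # only what the app needs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-app-binding   # illustrative name
  namespace: tenant-xyz
subjects:
  - kind: ServiceAccount
    name: tenant-app
    namespace: tenant-xyz
roleRef:
  kind: Role                 # Role, not ClusterRole: permissions stop at the namespace boundary
  name: tenant-app-role
  apiGroup: rbac.authorization.k8s.io
```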

Namespace isolation without disabling SA token auto-mount

Bug
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-xyz
# No SA token disable
# Default SA auto-mounts token in every pod
# SSRF vulnerability in tenant app:
#   -> curl https://kubernetes.default.svc/api/v1/namespaces
#      -H "Authorization: Bearer $(cat /var/run/secrets/.../token)"
# -> Returns list of ALL namespaces visible to SA
# -> Cross-tenant namespace discovery
Fix
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: tenant-xyz
automountServiceAccountToken: false  # patch default SA

---
# For pods that DO need API access: explicit mount with minimal permissions
spec:
  serviceAccountName: tenant-app-specific
  automountServiceAccountToken: true  # only on SAs that need it

Disabling token auto-mount on the default ServiceAccount prevents any pod in the namespace from accidentally getting K8s API access. Explicitly enable it only on the specific SAs that need it with the minimum required permissions.

Decision Guide: Choosing an Isolation Model

Are tenants internal trusted teams (same organization)?
  Yes -> Soft multi-tenancy: namespace + RBAC + NetworkPolicy + ResourceQuota is sufficient
  No -> Continue to the next question for untrusted/external tenants

Do tenants need full Kubernetes API access (create CRDs, manage RBAC)?
  Yes -> vCluster per tenant: virtual K8s API server with full feature parity; the host controls node scheduling
  No -> A dedicated namespace with hard restrictions is sufficient if the tenant does not need the K8s API

Is kernel-level isolation required (high compliance, hostile tenants)?
  Yes -> Dedicated node pool per tenant with taints; no cross-tenant pod co-location; potentially separate clusters
  No -> Shared nodes with namespace + RBAC + NetworkPolicy are cost-effective for a medium trust level

Cost and Complexity: Multi-Tenancy Models

Model | Isolation strength | Cost | Operational overhead | When to use
Namespace only | Low (no security boundary) | Lowest | None | Never for production multi-tenancy
Namespace + RBAC + NetPol + Quota | Medium (soft) | Low | Low | Internal teams, trusted users
Dedicated node pools per tenant | High (kernel-separate) | Medium-high | Medium | Untrusted workloads, compliance
vCluster per tenant | High (virtual API server) | Medium | Medium | SaaS with K8s-native customers
Separate clusters per tenant | Highest | Very high | Very high | Maximum compliance, largest enterprise customers

Exam Answer vs. Production Reality


Soft vs hard multi-tenancy

📖 What the exam expects

Soft: namespace isolation for trusted internal teams. Hard: stronger isolation for untrusted external customers (different nodes, strict RBAC, no SA tokens, NetworkPolicy deny-all).


How this might come up in interviews

SaaS architecture design questions, platform engineering interview rounds about shared cluster design.

Common questions:

  • What is the difference between soft and hard multi-tenancy in Kubernetes?
  • What are the security limitations of namespace-based isolation?
  • How would you design a Kubernetes platform for running untrusted customer workloads?
  • What is vCluster and when would you use it?

Strong answer: Mentions vCluster for virtual cluster isolation, dedicated node pools with taints for node-level separation, and Cluster API for programmatic multi-cluster management at scale.

Red flags: Thinking namespaces provide security isolation by default, or not knowing about auto-mounted SA tokens as a cross-tenant pivot vector.

Related concepts

Explore topics that connect to this one.

  • Cluster Hardening & CIS Benchmark
  • RBAC & Service Accounts: Identity and Authorization in Kubernetes
  • Network Policies: Pod-to-Pod Firewalling
