Interactive Explainer

VPC Peering & Transit Gateways

How to connect multiple VPCs — the mechanics and limits of direct VPC peering, why the n*(n-1)/2 connection problem forces a Transit Gateway at scale, and when PrivateLink is the right tool instead.

🎯Key Takeaways
VPC peering is non-transitive: A↔B and B↔C does not mean A can reach C
VPC peering scales as O(n²) connections — manageable for 2–5 VPCs, unmanageable for 10+
Transit Gateway provides transitive hub-and-spoke routing with centralised route management
Route tables must be manually updated on both sides of a peering connection — peering alone does not create routes
Plan CIDR blocks from an IPAM system before creating VPCs — overlapping CIDRs cannot be peered, ever


VPC Peering: Point-to-Point Connections

VPC peering creates a direct private network connection between two VPCs. Traffic flows over the AWS backbone network, never touching the public internet. The connection is non-transitive: if VPC A peers with VPC B, and VPC B peers with VPC C, VPC A cannot reach VPC C through B. Each pair needs its own peering connection.

VPC peering facts that matter operationally

  • Cross-account and cross-region peering both work — Peering works across AWS accounts (common for microservices in separate accounts) and across regions. Cross-region peering incurs data transfer charges ($0.01–0.02/GB depending on the region pair).
  • CIDR blocks must not overlap — If VPC A uses 10.0.0.0/16 and VPC B uses 10.0.0.0/16, you cannot peer them. Ever. Plan your IP address schema at the organisation level before creating VPCs — fixing overlapping CIDRs later requires recreating VPCs.
  • Route tables must be updated manually on both sides — Creating a peering connection does not automatically add routes. You must add a route in VPC A's route table pointing to VPC B's CIDR via the peering connection ID, and vice versa. This is a common source of "peering is up but I cannot connect" bugs.
  • Non-transitive by design — Traffic cannot hop through a peered VPC to reach a third VPC. This is a fundamental property, not a configuration option. If you need transitive routing, you need a Transit Gateway.

The n*(n-1)/2 connection explosion

To fully connect 10 VPCs with peering, you need 10*(10-1)/2 = 45 peering connections. Each requires route table entries on both sides. For 20 VPCs: 190 connections, 380 route table entries. For 50 VPCs: 1,225 connections. This is not manageable. AWS route tables also have a default quota of 50 routes per table (raisable, but hitting it is a sign you have outgrown peering). VPC peering works for connecting 2–5 VPCs; beyond that, you need Transit Gateway.
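The growth rates above are easy to check with a little shell arithmetic — this is a standalone sketch, independent of any AWS API:

```shell
# Full-mesh peering needs n*(n-1)/2 connections, each with a route entry on
# both sides; a Transit Gateway needs only n attachments for the same n VPCs.
mesh_connections() { echo $(( $1 * ($1 - 1) / 2 )); }

for n in 5 10 20 50; do
  c=$(mesh_connections "$n")
  echo "$n VPCs: $c peering connections, $(( c * 2 )) route entries vs $n TGW attachments"
done
```

At n=5 the mesh is 10 connections — tolerable; by n=20 it is 190, which is why the cutover point sits in the single digits.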

vpc-peering.sh

# Create a VPC peering connection between two VPCs in the same account
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0abc123 \
  --peer-vpc-id vpc-0def456 \
  --peer-region us-west-2   # omit for same-region peering

# Accept the peering request. Acceptance is always an explicit step — for
# cross-account peering, the accepter account must run this.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0xyz789

# CRITICAL: add routes on BOTH sides — peering does not do this automatically.
# Forgetting this step is the #1 cause of "peering is established but
# connectivity fails".

# In VPC A's route table, route to VPC B's CIDR:
aws ec2 create-route \
  --route-table-id rtb-0aaabbbccc \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0xyz789

# In VPC B's route table, route to VPC A's CIDR:
aws ec2 create-route \
  --route-table-id rtb-0dddeeefff \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0xyz789

Transit Gateway: Hub-and-Spoke at Scale

AWS Transit Gateway (TGW) is a central hub that connects VPCs and on-premises networks in a hub-and-spoke model. Instead of each VPC needing a direct connection to every other VPC, each VPC connects to the TGW once. The TGW provides transitive routing — traffic from VPC A can reach VPC C via the TGW without VPC A and C having a direct connection.

Characteristic | VPC Peering | Transit Gateway
--- | --- | ---
Topology | Full mesh (n*(n-1)/2 connections) | Hub-and-spoke (n attachments)
Transitive routing | No — each pair needs a direct connection | Yes — VPCs route via the TGW
Route management | Manual route table entries per VPC | Centralised TGW route tables
Bandwidth | No limit (uses VPC networking) | Up to 50 Gbps per attachment (burstable)
Cross-account | Yes (request/accept flow) | Yes (shared via Resource Access Manager)
Data transfer cost | $0.01–0.02/GB cross-region | $0.02/GB processed + $0.05/hr per attachment
Best for | 2–5 VPCs, simple connectivity | 6+ VPCs, complex routing policies
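As a concrete sketch of the hub-and-spoke side, the CLI calls below create a TGW and wire one VPC into it. All IDs (tgw-0abc123, vpc-0abc123, the subnet and route table IDs) are placeholders; this is an illustrative fragment, not a complete production setup:

```shell
# Create the Transit Gateway (the hub). Disabling the default route table
# association/propagation lets you manage segmented route tables yourself.
aws ec2 create-transit-gateway \
  --description "org-hub" \
  --options DefaultRouteTableAssociation=disable,DefaultRouteTablePropagation=disable

# Attach a VPC to the TGW — one attachment per VPC, one subnet per AZ in use.
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0abc123 \
  --vpc-id vpc-0abc123 \
  --subnet-ids subnet-0aaa111 subnet-0bbb222

# The VPC's own route tables still need a route pointing at the TGW —
# just as with peering, this is not automatic.
aws ec2 create-route \
  --route-table-id rtb-0aaabbbccc \
  --destination-cidr-block 10.0.0.0/8 \
  --transit-gateway-id tgw-0abc123
```

Note the shape of the scaling win: adding an 11th VPC is one attachment and one route, not 10 new peering connections.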

Use TGW route tables to segment traffic domains

A Transit Gateway supports multiple route tables. A common pattern: create a "Production" TGW route table that allows Production VPCs to talk to each other and to on-premises (via VPN/Direct Connect). Create a "Development" route table that allows Dev VPCs to talk to each other but not to Production. Create a "Shared Services" route table for DNS, directory services, and monitoring VPCs reachable from all domains. This gives you network segmentation at the organisation level without complex NACLs.
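The segmentation pattern above looks roughly like this in the CLI — a hedged sketch in which the TGW, route table, and attachment IDs are placeholders assumed to exist already:

```shell
# One TGW route table per traffic domain. Association controls which table an
# attachment's traffic consults; propagation controls whose routes appear in it.
aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id tgw-0abc123 \
  --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=prod}]'

# Associate the Production VPC attachment with the prod route table...
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id tgw-rtb-0prod111 \
  --transit-gateway-attachment-id tgw-attach-0prod111

# ...and propagate the Shared Services attachment's routes into it, while
# never propagating the Dev attachments — Dev stays unreachable from prod.
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id tgw-rtb-0prod111 \
  --transit-gateway-attachment-id tgw-attach-0shared1
```

The same three calls repeat for the Development and Shared Services domains; the isolation comes entirely from which propagations you enable.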

PrivateLink: the third option for service-specific access

AWS PrivateLink (VPC Endpoints for services) exposes a specific service endpoint privately without peering the VPCs. If Team A has a payment API in VPC A and Team B needs to call it from VPC B, you can create a PrivateLink endpoint — Team B gets a private IP in their VPC that proxies to Team A's API, without the VPCs being able to reach each other broadly. PrivateLink is the right choice when you need service-to-service access without full network access between VPCs.
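A minimal sketch of the producer/consumer pair, assuming the payment API sits behind a Network Load Balancer (PrivateLink endpoint services front an NLB or Gateway Load Balancer); the ARN, service name, and IDs are placeholders:

```shell
# Producer side (Team A, VPC A): publish the API's NLB as an endpoint service.
# --acceptance-required means Team A approves each consumer connection.
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/payments/abc123 \
  --acceptance-required

# Consumer side (Team B, VPC B): create an interface endpoint to that service.
# Team B gets private IPs in its own subnets — and nothing else in VPC A.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0def456 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0ccc333 \
  --security-group-ids sg-0abc123
```

Unlike peering or TGW, no route tables change and no CIDR overlap check applies — the endpoint is just an ENI with a private IP in the consumer's subnet.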

Choosing Between Peering, TGW, and PrivateLink

The right connectivity pattern depends on the number of VPCs, whether transitive routing is needed, and whether you want network-level or service-level access.

Decision rules

  • Use VPC Peering: connecting 2–5 VPCs with simple, static routing needs — Lower cost than TGW at small scale. Suitable for connecting a few environments (dev/staging/prod) or a production VPC to a shared services VPC. Becomes unmanageable past 5–6 VPCs.
  • Use Transit Gateway: 6+ VPCs, multi-account architectures, or on-premises connectivity — The operational cost of managing peering connections and routes at scale far exceeds the TGW hourly cost. TGW also handles VPN and Direct Connect attachments in the same routing fabric.
  • Use PrivateLink: exposing a service to consumers without full VPC network access — Team isolation with service access. Marketplace-style service publishing. Cross-account service consumption where you do not want the consumer VPC to have any network access to the producer VPC beyond a single endpoint.

TGW is not free — model the cost before migrating peering

Transit Gateway costs $0.05/hr per attachment (~$36/month per VPC attached) plus $0.02/GB of data processed. For 10 VPCs passing 500 GB/month each: 10 * $36 = $360/month in attachment fees + 10 * 500 * $0.02 = $100/month in data processing = $460/month. Compare this to VPC peering for the same topology: 10*(10-1)/2 = 45 peering connections at $0.01/GB cross-AZ. If your cross-VPC traffic is low volume, peering may be cheaper despite the complexity.
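The arithmetic above can be sketched as a back-of-envelope shell calculation (integer dollars/cents, using the approximate us-east-1-style rates quoted in the text):

```shell
# Back-of-envelope TGW monthly cost model
vpcs=10           # number of attached VPCs
gb_per_vpc=500    # GB/month processed through the TGW per VPC
attach_fee=36     # ~$0.05/hr per attachment ≈ $36/month
data_cents=2      # $0.02/GB processed, expressed in cents

attach_total=$(( vpcs * attach_fee ))                   # attachment fees
data_total=$(( vpcs * gb_per_vpc * data_cents / 100 ))  # data processing
echo "TGW monthly: \$${attach_total} attachments + \$${data_total} data = \$$(( attach_total + data_total ))"
```

Swap in your own VPC count and traffic volume; at low cross-VPC volumes the $360 fixed attachment cost dominates, which is exactly when peering can win.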

How this might come up in interviews

This topic appears in cloud infrastructure design interviews and solutions architect assessments, often as "we have multiple accounts and VPCs that need to communicate — how would you design this?" Expect to draw a diagram and justify cost and operational trade-offs.

Common questions:

  • Explain why VPC peering is non-transitive and what the implications are at scale.
  • You have 15 VPCs that all need to communicate. Would you use VPC peering or Transit Gateway? Why?
  • A VPC peering connection shows "Active" status but resources cannot communicate. What would you check?
  • What is AWS PrivateLink and when would you use it instead of VPC peering?
  • How would you design the network topology for a 50-VPC multi-account AWS organisation?

Clarifying questions to ask before answering: How many VPCs are involved? Do all VPCs need to communicate with all others, or only with shared services? Is there an on-premises network to connect? What are the data transfer volumes (this drives the TGW-vs-peering cost model)?

Strong answer: Proposes Transit Gateway immediately for multi-account architectures. Mentions CIDR planning and IPAM unprompted. Distinguishes PrivateLink for service isolation from TGW for network connectivity.

Red flags: Recommends full mesh peering for 20+ VPCs. Does not know that route tables must be updated manually after peering. Confuses PrivateLink with VPC peering.
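For the "Active but no connectivity" troubleshooting question, a diagnostic sketch with the AWS CLI — the pcx/rtb/sg IDs are placeholders for your own resources:

```shell
# 1. Is the peering connection really active?
aws ec2 describe-vpc-peering-connections \
  --vpc-peering-connection-ids pcx-0xyz789 \
  --query 'VpcPeeringConnections[0].Status.Code'

# 2. Do BOTH VPCs' route tables have a route via the pcx? (the most common miss)
aws ec2 describe-route-tables \
  --route-table-ids rtb-0aaabbbccc \
  --query 'RouteTables[0].Routes[?VpcPeeringConnectionId==`pcx-0xyz789`]'

# 3. Do security groups (and NACLs) allow the traffic in both directions?
aws ec2 describe-security-groups \
  --group-ids sg-0abc123 \
  --query 'SecurityGroups[0].IpPermissions'
```

Checking in that order mirrors the packet's path: connection state, then routing, then the security layer.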

🧠Mental Model

💡 Analogy

VPC peering is like building a private bilateral road between two cities. City A and City B have a direct road. City A and City C have a direct road. But there is no through-traffic: you cannot drive from A to C via B — you must use the A-to-C road directly. If you have 20 cities, you need 190 roads. Transit Gateway is like building a central highway interchange. Every city builds one on-ramp to the interchange, and from there you can reach any city. 20 cities need only 20 on-ramps. PrivateLink is a dedicated courier service — City A sends a courier to City B's specific address, but City B cannot explore City A's streets.

⚡ Core Idea

VPC peering is non-transitive and scales as O(n²). Transit Gateway is transitive and scales as O(n). Use peering for a handful of VPCs; use TGW when you have many VPCs or need centralised routing policy. Use PrivateLink when you want service-level access without network-level access.

🎯 Why It Matters

VPC connectivity architecture decisions are expensive to reverse. Overlapping CIDR blocks prevent peering and require VPC recreation. A peering-based architecture that grows to 20+ VPCs becomes unmaintainable. Getting the topology right early — planning non-overlapping CIDRs, choosing TGW for multi-account designs — saves months of painful networking remediation.
