How to connect multiple VPCs — the mechanics and limits of direct VPC peering, why the n*(n-1)/2 connection problem forces a Transit Gateway at scale, and when PrivateLink is the right tool instead.
VPC peering creates a direct private network connection between two VPCs. Traffic flows over the AWS backbone network, never touching the public internet. The connection is non-transitive: if VPC A peers with VPC B, and VPC B peers with VPC C, VPC A cannot reach VPC C through B. Each pair needs its own peering connection.
VPC peering facts that matter operationally
The n*(n-1)/2 connection explosion
To fully connect 10 VPCs with peering, you need 10*(10-1)/2 = 45 peering connections. Each requires route table entries on both sides. For 20 VPCs: 190 connections, 380 route table entries. For 50 VPCs: 1,225 connections. This is not manageable. AWS route tables also have a default limit of 50 routes per table. VPC peering works for connecting 2–5 VPCs; beyond that, you need Transit Gateway.
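The quadratic growth is easy to check with a quick shell loop applying the n*(n-1)/2 formula from above (each connection also needs a route entry on both sides, so route entries grow as n*(n-1)):

```shell
# Full-mesh peering connections needed for n VPCs: n*(n-1)/2.
# Route table entries needed (one per side per connection): n*(n-1).
for n in 5 10 20 50; do
  conns=$(( n * (n - 1) / 2 ))
  routes=$(( n * (n - 1) ))
  echo "$n VPCs -> $conns peering connections, $routes route entries"
done
```

At n=50 the 1,225 connections alone dwarf the default 50-routes-per-table quota.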
```shell
# Create a VPC peering connection between two VPCs in the same account
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0abc123 \
  --peer-vpc-id vpc-0def456 \
  --peer-region us-west-2   # omit for same-region

# Accept the peering request (requester auto-accepts same-account, same-region).
# A cross-account peering connection must be explicitly accepted — it is not
# automatic.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0xyz789

# CRITICAL: Add routes on BOTH sides — peering does not do this automatically.
# Forgetting to add routes is the #1 cause of "peering is established but
# connectivity fails".

# In VPC A's route table, route to VPC B's CIDR:
aws ec2 create-route \
  --route-table-id rtb-0aaabbbccc \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0xyz789

# In VPC B's route table, route to VPC A's CIDR:
aws ec2 create-route \
  --route-table-id rtb-0dddeeefff \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0xyz789
```
AWS Transit Gateway (TGW) is a central hub that connects VPCs and on-premises networks in a hub-and-spoke model. Instead of each VPC needing a direct connection to every other VPC, each VPC connects to the TGW once. The TGW provides transitive routing — traffic from VPC A can reach VPC C via the TGW without VPC A and C having a direct connection.
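A minimal sketch of the hub-and-spoke setup with the AWS CLI. All resource IDs (`tgw-…`, `vpc-…`, `subnet-…`, `rtb-…`) are placeholders:

```shell
# Create the Transit Gateway (the hub)
aws ec2 create-transit-gateway --description "org-hub"

# Attach each spoke VPC once — one attachment per VPC, not per pair
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0aaa111 \
  --vpc-id vpc-0abc123 \
  --subnet-ids subnet-0sub111

# In each VPC's route table, point other-VPC CIDRs (or a summary route
# covering them) at the TGW instead of at individual peering connections
aws ec2 create-route \
  --route-table-id rtb-0aaabbbccc \
  --destination-cidr-block 10.0.0.0/8 \
  --transit-gateway-id tgw-0aaa111
```

Note the summary route: because the TGW is transitive, one `10.0.0.0/8` route per VPC can replace dozens of per-peer routes.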
| Characteristic | VPC Peering | Transit Gateway |
|---|---|---|
| Topology | Mesh (N*(N-1)/2 connections) | Hub-and-spoke (N connections) |
| Transitive routing | No — each pair needs direct connection | Yes — VPCs connect via TGW |
| Route management | Manual route table entries per VPC | Centralised TGW route tables |
| Bandwidth | No limit (uses VPC networking) | Up to 50 Gbps per attachment (burstable) |
| Cross-account | Yes (accepter must approve the request) | Yes (shared via Resource Access Manager) |
| Data transfer cost | $0.01–0.02/GB cross-region | $0.02/GB processed + $0.05/hr per attachment |
| Best for | 2–5 VPCs, simple connectivity | 6+ VPCs, complex routing policies |
Use TGW route tables to segment traffic domains
A Transit Gateway supports multiple route tables. A common pattern: create a "Production" TGW route table that allows Production VPCs to talk to each other and to on-premises (via VPN/Direct Connect). Create a "Development" route table that allows Dev VPCs to talk to each other but not to Production. Create a "Shared Services" route table for DNS, directory services, and monitoring VPCs reachable from all domains. This gives you network segmentation at the organisation level without complex NACLs.
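The segmentation pattern above can be sketched with the CLI — route-table and attachment IDs are placeholders:

```shell
# Create separate TGW route tables per traffic domain
aws ec2 create-transit-gateway-route-table --transit-gateway-id tgw-0aaa111  # "prod"
aws ec2 create-transit-gateway-route-table --transit-gateway-id tgw-0aaa111  # "dev"

# Associate each VPC attachment with its domain's route table
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id tgw-rtb-0prod01 \
  --transit-gateway-attachment-id tgw-attach-0vpca01

# Propagate the Shared Services attachment's routes into the dev table
# (repeat for prod) — but never propagate prod attachments into the dev
# table; that omission is what enforces the segmentation
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id tgw-rtb-0dev001 \
  --transit-gateway-attachment-id tgw-attach-0shared1
```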
PrivateLink: the third option for service-specific access
AWS PrivateLink (VPC Endpoints for services) exposes a specific service endpoint privately without peering the VPCs. If Team A has a payment API in VPC A and Team B needs to call it from VPC B, you can create a PrivateLink endpoint — Team B gets a private IP in their VPC that proxies to Team A's API, without the VPCs being able to reach each other broadly. PrivateLink is the right choice when you need service-to-service access without full network access between VPCs.
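The Team A/Team B scenario maps to two CLI calls — one per side. The NLB ARN, service name, and IDs below are placeholders:

```shell
# Provider side (VPC A): expose the payment API, fronted by a Network Load
# Balancer, as an endpoint service
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/payments/abc123 \
  --acceptance-required

# Consumer side (VPC B): create an interface endpoint to that service.
# Team B gets private IPs in their own subnets that reach only the API —
# no route to the rest of VPC A exists.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0def456 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0pay0001 \
  --subnet-ids subnet-0sub222 \
  --security-group-ids sg-0sg0333
```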
The right connectivity pattern depends on the number of VPCs, whether transitive routing is needed, and whether you want network-level or service-level access.
Decision rules
TGW is not free — model the cost before migrating peering
Transit Gateway costs $0.05/hr per attachment (~$36/month per VPC attached) plus $0.02/GB of data processed. For 10 VPCs passing 500 GB/month each: 10 * $36 = $360/month in attachment fees + 10 * 500 * $0.02 = $100/month in data processing = $460/month. Compare this to VPC peering for the same topology: 10*(10-1)/2 = 45 peering connections at $0.01/GB cross-AZ. If your cross-VPC traffic is low volume, peering may be cheaper despite the complexity.
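The worked numbers above can be reproduced in a few lines of shell. Prices come from the text; the arithmetic is done in whole cents to stay within integer shell arithmetic:

```shell
# TGW cost model: $0.05/hr/attachment (~$36/month) + $0.02/GB processed
vpcs=10
gb_per_vpc=500

attach_cents=$(( vpcs * 3600 ))           # ~$36/month per attachment
data_cents=$(( vpcs * gb_per_vpc * 2 ))   # $0.02/GB = 2 cents/GB
total_cents=$(( attach_cents + data_cents ))

echo "attachments: \$$(( attach_cents / 100 ))/month"   # $360
echo "data:        \$$(( data_cents / 100 ))/month"     # $100
echo "total:       \$$(( total_cents / 100 ))/month"    # $460
```

Swap in your own VPC count and traffic volume before deciding; at low traffic, 45 peering connections can still undercut $460/month.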
This topic comes up in cloud infrastructure design interviews and solutions architect assessments, often as "we have multiple accounts and VPCs that need to communicate — how would you design this?" Expect to draw a diagram and justify cost and operational trade-offs.
Common questions:
Clarifying questions to ask: How many VPCs are involved? Do all VPCs need to communicate with all others, or only with shared services? Is there an on-premises network to connect? What are the data transfer volumes (this drives the cost model for TGW vs peering)?
Strong answer: Proposes Transit Gateway immediately for multi-account architectures. Mentions CIDR planning and IPAM unprompted. Distinguishes PrivateLink for service isolation from TGW for network connectivity.
Red flags: Recommends full mesh peering for 20+ VPCs. Does not know that route tables must be updated manually after peering. Confuses PrivateLink with VPC peering.
Key takeaways
💡 Analogy
VPC peering is like building a private bilateral road between two cities. City A and City B have a direct road. City A and City C have a direct road. But there is no through-traffic: you cannot drive from A to C via B — you must use the A-to-C road directly. If you have 20 cities, you need 190 roads. Transit Gateway is like building a central highway interchange. Every city builds one on-ramp to the interchange, and from there you can reach any city. 20 cities need only 20 on-ramps. PrivateLink is a dedicated courier service — City A sends a courier to City B's specific address, but City B cannot explore City A's streets.
⚡ Core Idea
VPC peering is non-transitive and scales as O(n²). Transit Gateway is transitive and scales as O(n). Use peering for a handful of VPCs; use TGW when you have many VPCs or need centralised routing policy. Use PrivateLink when you want service-level access without network-level access.
🎯 Why It Matters
VPC connectivity architecture decisions are expensive to reverse. Overlapping CIDR blocks prevent peering and require VPC recreation. A peering-based architecture that grows to 20+ VPCs becomes unmaintainable. Getting the topology right early — planning non-overlapping CIDRs, choosing TGW for multi-account designs — saves months of painful networking remediation.