© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Terraform deep dive

How Terraform manages infrastructure as code using state, plans, modules, and providers. The tool that turns "Brent logs into the console" into a repeatable, peer-reviewed, version-controlled workflow.

🎯 Key Takeaways

  • Terraform makes infrastructure declarative, version-controlled, and reviewable. The workflow is: write HCL → terraform plan (review the diff) → terraform apply (execute the plan).
  • State is the source of truth for what Terraform manages. Store it remotely with locking (S3 + DynamoDB). Never edit it manually.
  • `terraform plan` is your safety net — always read the plan carefully, especially for `-` (destroy) and `-/+` (destroy-recreate) operations.
  • Modules are the unit of reuse — write once, call with parameters. Pin module versions.
  • Add `lifecycle { prevent_destroy = true }` to all stateful production resources as a non-negotiable baseline.
  • Partial Terraform adoption (some resources in Terraform, some manual) is the most dangerous state — commit to importing everything or accept the risks.

~8 min read

Lesson outline

Why Terraform in a DevOps value stream

Without Terraform, infrastructure is created by hand in cloud consoles. Every click is a snowflake: hard to repeat, impossible to review, and easy to forget. The DevOps Handbook calls this the deployment equivalent of a hero manually deploying code from their laptop — it works until it doesn't, and when it breaks no one knows why.

Terraform describes infrastructure in HCL (HashiCorp Configuration Language) files that you version control, review in pull requests, and apply through CI/CD pipelines. That turns 'an engineer clicked through 47 console screens' into `git log --oneline` showing every infrastructure change with author, timestamp, and reason.
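A minimal sketch of what those HCL files look like (the resource names, region, and AMI ID below are illustrative, not from a real project):

```hcl
# Pin the provider so upgrades are deliberate, not accidental
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # illustrative region
}

# Declarative: describe the instance you want, not the clicks to create it
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t3.micro"

  tags = {
    Name      = "web-server"
    ManagedBy = "terraform"
  }
}
```

Every change to this file goes through a pull request, so the diff in the PR is the diff in the infrastructure.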

Terraform workflow: write → plan → apply

1. Write: declare desired infrastructure in `.tf` files.
2. Plan: `terraform plan` shows a diff of what will change — resources to add (+), modify (~), or destroy (-).
3. Apply: `terraform apply` executes the plan.

In CI/CD, plan runs on the PR and apply runs on merge to main.

Terraform workflow at a glance

Code (HCL files) → Plan → Apply → Remote state

State: the map between code and reality

Terraform maintains a state file (terraform.tfstate) that maps every resource in your HCL config to a real cloud resource (its ID, attributes, region, dependencies). State is what enables `terraform plan` to know the difference between 'create this VPC' and 'this VPC already exists and needs an additional subnet.'

Never store state locally in a team

Local state (default) means your laptop is the source of truth. If two engineers apply simultaneously, state corrupts. If your laptop is lost, state is gone. Use remote state (S3 + DynamoDB lock, Terraform Cloud, GCS) from day one. Treat the state file as production data.

State management best practices

  • Remote backend — Store state in S3 with DynamoDB locking (AWS), GCS with state locking (GCP), or Terraform Cloud. Never local.
  • State encryption — Enable server-side encryption on the S3 bucket or GCS bucket. State can contain sensitive resource attributes.
  • State isolation — Use separate state files per environment (dev/staging/prod) and per logical component. One giant state file means a terraform plan for a small change evaluates thousands of resources.
  • Never edit state manually — Use terraform state mv, terraform state rm, and terraform import to manipulate state. Manual edits corrupt it.
  • State backups — Enable versioning on the S3 bucket. S3 versioning means every state file mutation is recoverable. This has saved teams from catastrophic accidents.
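A backend block following these practices might look like the following sketch (bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state"          # versioned, encrypted bucket (placeholder name)
    key            = "prod/network/terraform.tfstate"  # isolated state per environment and component
    region         = "eu-west-1"
    encrypt        = true                              # server-side encryption for state contents
    dynamodb_table = "terraform-locks"                 # DynamoDB table for state locking (placeholder name)
  }
}
```

Each environment/component pair gets its own `key`, which gives you the state isolation described above without extra tooling.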

Plans, change reviews, and CI/CD integration

The `terraform plan` command is Terraform's killer feature for teams. It shows the exact proposed diff — resources to add (+), modify (~), or destroy (-) — before any change touches the cloud. No more 'I wonder what this will do.'
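An abbreviated, illustrative plan output showing all three symbols (real output expands every attribute; resource names here are invented):

```
Terraform will perform the following actions:

  # aws_security_group_rule.ingress_https will be created
  + resource "aws_security_group_rule" "ingress_https" { ... }

  # aws_instance.web will be updated in-place
  ~ resource "aws_instance" "web" {
      ~ instance_type = "t3.micro" -> "t3.small"
    }

  # aws_db_instance.legacy will be destroyed
  - resource "aws_db_instance" "legacy" { ... }

Plan: 1 to add, 1 to change, 1 to destroy.
```

The last line is the one to read first: any nonzero "to destroy" count deserves a close look before apply.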

Mature Terraform CI/CD pipeline pattern

  • On every PR — terraform fmt -check (format validation), terraform validate (syntax check), terraform plan — plan output posted as a PR comment. Reviewers see exact infra changes alongside code changes.
  • On merge to main — terraform apply runs against the saved plan file from the PR (applying a saved plan skips the interactive approval prompt). Never apply from a fresh plan; that way the apply executes exactly what was reviewed.
  • Plan file workflow — terraform plan -out=tfplan saves the plan. terraform apply tfplan applies exactly that plan. Prevents drift between review and apply.
  • Drift detection — Run terraform plan on a schedule (daily cron). If it shows changes when none were expected, someone modified infra outside Terraform. Alert the team.

Watch for destroy operations in plans

A terraform plan showing - (destroy) on a database, VPC, or stateful resource is a STOP sign. Terraform sometimes destroys and recreates resources when in-place updates are not possible (e.g. renaming an RDS instance). Always understand WHY a destroy is planned before applying. Use lifecycle { prevent_destroy = true } on critical resources.
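The guardrail above, applied to a hypothetical production database (most required arguments omitted for brevity):

```hcl
resource "aws_db_instance" "prod" {
  identifier     = "prod-app-db" # illustrative name
  engine         = "postgres"
  instance_class = "db.t3.medium"
  # ... other required arguments omitted for brevity

  lifecycle {
    # Any plan that would destroy this resource now fails with an
    # error at plan time instead of queuing a destroy for apply.
    prevent_destroy = true
  }
}
```

With this in place, a rename that forces a destroy-recreate becomes an explicit error you must consciously work around, not a line buried in plan output.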

Modules: reusable infrastructure patterns

Modules are Terraform's unit of reuse. Instead of copy-pasting 200 lines of VPC configuration across 10 projects, you write a VPC module once and call it with parameters. This is infrastructure DRY (Don't Repeat Yourself).

Module design principles

  • Single responsibility — One module does one thing: a VPC module, a database module, an EKS module. Avoid mega-modules that provision entire stacks.
  • Clear inputs/outputs — Variables are the API of your module. Name them clearly, add descriptions and validation. Outputs expose only what callers need.
  • Version pinning — When using public modules (Terraform Registry), always pin versions: version = "~> 5.1". Unpinned modules break on major version upgrades.
  • Module testing — Use Terratest (Go) or Terraform's native test command to write integration tests that provision real infrastructure, assert on outputs, and destroy.
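A sketch of these principles, using a hypothetical in-repo VPC module plus a pinned call to a public registry module:

```hcl
# modules/vpc/variables.tf — variables are the module's API
variable "cidr_block" {
  type        = string
  description = "CIDR range for the VPC"

  validation {
    condition     = can(cidrhost(var.cidr_block, 0))
    error_message = "cidr_block must be a valid CIDR range."
  }
}

# modules/vpc/main.tf
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

# modules/vpc/outputs.tf — expose only what callers need
output "vpc_id" {
  value = aws_vpc.this.id
}

# Elsewhere: calling a public registry module with a pinned version
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1" # pinned: major upgrades become opt-in
  name    = "prod-vpc"
  cidr    = "10.0.0.0/16"
}
```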

Workspaces, environments, and multi-account patterns

A common Terraform question: how do you manage dev/staging/prod environments without copy-pasting configuration? There are three main patterns:

Environment management patterns

  • Directory per environment — environments/dev/, environments/staging/, environments/prod/ — each with its own .tfvars and remote state. Preferred by most teams: complete isolation, separate state, explicit diffs between environments.
  • Workspaces — terraform workspace new staging creates an isolated state namespace within the same backend. Simpler but less isolation — the same code runs against all workspaces. Better for ephemeral environments (feature branches).
  • AWS Organizations + multi-account — Each environment is a separate AWS account. The Terraform AWS provider assumes a role in each account. Total blast-radius isolation: a mistake in dev cannot touch prod.
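The multi-account pattern is typically wired up with the AWS provider's assume_role block; the role ARN below is a placeholder:

```hcl
provider "aws" {
  region = "eu-west-1"

  assume_role {
    # Same code, different target account per environment — the
    # pipeline supplies the ARN for dev, staging, or prod.
    role_arn = "arn:aws:iam::123456789012:role/terraform-deployer" # placeholder ARN
  }
}
```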

Use tfvars files for environment configuration

Define all environment-specific values in environments/prod.tfvars, environments/staging.tfvars. Apply with terraform apply -var-file=environments/prod.tfvars. This keeps the HCL logic identical across environments and makes diffs between environments explicit.
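A sketch of the pattern (variable names and values are illustrative):

```hcl
# environments/prod.tfvars
instance_type = "m5.large"
min_size      = 3

# The matching declarations live in the shared variables.tf:
#   variable "instance_type" { type = string }
#   variable "min_size"      { type = number }
```

Applied with `terraform apply -var-file=environments/prod.tfvars`; a staging.tfvars with smaller values reuses the identical HCL logic.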

How to practise

Start with a small, destroyable target in a sandbox AWS account: provision a single VPC, a public subnet, an EC2 instance, and a security group. Run `terraform plan`, study the output, then `terraform apply`. Destroy it and recreate it three times. Muscle memory for plan → review → apply → destroy is the foundation.

Next challenge: import an existing manually-created resource into Terraform state using `terraform import`, then manage it going forward. This simulates the most common real-world scenario: infrastructure that exists before Terraform was adopted.
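Since Terraform 1.5, imports can also be declared in HCL with an `import` block, which `terraform plan` turns into an import operation (the bucket name below is invented for illustration):

```hcl
# Adopt a manually created bucket into Terraform management
import {
  to = aws_s3_bucket.assets
  id = "my-legacy-assets-bucket" # illustrative bucket name
}

resource "aws_s3_bucket" "assets" {
  bucket = "my-legacy-assets-bucket"
}
```

Terraform 1.5+ can also generate the resource block for you via `terraform plan -generate-config-out=generated.tf`, which is worth trying as part of this exercise.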

Advanced: build a module, write a CI/CD pipeline that posts plan output to GitHub PR comments, and set up `lifecycle { prevent_destroy = true }` on a simulated production database.

How this might come up in interviews

Terraform interview questions range from conceptual (what is state, why does it matter) to practical (how do you handle a plan that wants to destroy a database). Know the `lifecycle` meta-arguments: `prevent_destroy`, `create_before_destroy`, `ignore_changes`. Understand the difference between `terraform import` and writing a resource from scratch. Be ready to explain when you would use workspaces vs separate directories for environments (and why most teams prefer directories). Have a story about a Terraform incident or a complex module you built.

Quick check · Terraform deep dive


A `terraform plan` shows a `-/+` (destroy and recreate) on your production RDS database. What does this mean and what should you do?


From the books

The DevOps Handbook

Part III

Infrastructure as code turns manual, snowflake environments into repeatable, version-controlled configurations. Every environment can be recreated from source control. Drift between environments becomes visible and actionable.

🧠Mental Model

💡 Analogy

Terraform is like an architect's blueprint system for buildings. Before blueprints, each builder constructed buildings from memory and verbal instructions — every building was unique, reconstruction was guesswork, and no one could review the design before breaking ground. Blueprints (HCL files) describe exactly what the building looks like. The building permit process (terraform plan) reviews the blueprint before any construction starts. The construction company (terraform apply) executes the blueprint exactly. The as-built drawings (state file) record what was actually built — the gap between blueprints and as-built is tracked automatically. Future renovations start from the current as-built, not from memory.

⚡ Core Idea

Terraform makes infrastructure declarative: you describe the desired state, Terraform computes the diff against current state, and applies only the necessary changes. The three primitives are: HCL config (desired state), state file (current state), and plan (the diff). Everything else — modules, workspaces, providers — is composition on top of these three.

🎯 Why It Matters

Manual infrastructure is invisible — no one knows what was clicked, when, or why. Terraform makes infrastructure visible, reviewable, and reproducible. A new team member can read the HCL and understand the entire infrastructure. An auditor can see every change via git log. A disaster recovery scenario becomes: check out the repo, run terraform apply — instead of spending weeks reconstructing from memory and screenshots.

Related concepts

Explore topics that connect to this one.

  • Infrastructure as Code: Terraform & CloudFormation
  • GitOps Principles
  • Configuration Management

Suggested next

Often learned after this topic.

GitOps Principles
