// HashiCorp Terraform & IaC

Terraform Study Guide

30 QUESTIONS 6 DOMAINS IAC & OPS
Core concepts & HCL 5 questions
01 What is Terraform, and how does it differ from configuration management tools like Ansible?
Terraform is an infrastructure as code (IaC) tool focused on provisioning and managing cloud resources through declarative configuration and a dependency graph. It maintains state to map config to real infrastructure.

Ansible (and similar CM tools) emphasizes converging machine configuration (packages, files, services) over SSH or agents. It can provision VMs but is not centered on multi-cloud resource graphs and state the way Terraform is.

Common pattern: Terraform creates the network, compute, and managed services; Ansible (or cloud-init) configures software on those instances.
Interview framing: Terraform = "what infrastructure exists"; CM = "what is installed and how it behaves on that infrastructure."
02 Describe the standard Terraform CLI workflow.
  • init — Downloads providers and modules, configures the backend, prepares .terraform/
  • plan — Compares desired configuration to state and produces an execution plan (no changes applied)
  • apply — Executes the plan (creates, updates, destroys resources as needed)
  • destroy — Removes resources managed by this configuration (use carefully in shared state)
Variants: plan -out then apply on the saved plan for reviewed, repeatable applies.
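A minimal sketch of that reviewed-apply variant (the plan filename is arbitrary):

```shell
terraform init              # install providers/modules, configure the backend
terraform plan -out=tfplan  # save the exact plan for review
terraform show tfplan       # inspect what will change
terraform apply tfplan      # apply exactly the reviewed plan, nothing else
```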
03 What are the main top-level block types in a Terraform configuration?
Common blocks include:
  • terraform — Backend, required providers/versions, experiments
  • provider — Plugin configuration (region, credentials source, features)
  • resource — Managed infrastructure objects
  • data — Read-only lookups of existing infrastructure
  • variable — Input parameters
  • output — Exported values after apply
  • module — Reusable configuration packages
  • locals — Named local expressions to reduce repetition
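An illustrative configuration tying most of these blocks together (resource names and the bucket naming scheme are made up):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}

variable "region" {
  type    = string
  default = "us-east-1"
}

locals {
  common_tags = { managed_by = "terraform" }
}

data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "logs" {
  bucket = "logs-${data.aws_caller_identity.current.account_id}"
  tags   = local.common_tags
}

output "bucket_name" {
  value = aws_s3_bucket.logs.bucket
}
```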
04 What is the difference between arguments and attributes in a resource block?
Arguments are what you set in configuration to describe the desired state (inputs to the provider). They appear inside the resource block body.

Attributes are values the provider exposes after the resource exists — often read-only or computed (IDs, ARNs, generated endpoints). You reference them as resource_type.name.attribute in expressions.

Some fields are both: you can set an argument at create time, and Terraform also exposes it as an attribute for references elsewhere.
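A small sketch of the distinction (bucket name is illustrative):

```hcl
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets"          # argument: you set it to describe desired state
}

output "assets_arn" {
  value = aws_s3_bucket.assets.arn   # attribute: computed by the provider after create
}
```

Here `bucket` is an example of a field that is both: set as an argument, and also readable elsewhere as `aws_s3_bucket.assets.bucket`.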
05 When do you use locals versus variable blocks?
Variables are inputs from outside the module — operators pass them via CLI, .tfvars, or CI. They define the module's contract and should stay minimal and stable.

Locals are internal derived values — computed from variables, resources, or other locals. Use them to DRY up repeated expressions, name complex calculations, and keep variable surfaces small.

Rule of thumb: if callers should not need to set it, compute it with a local.
Too many variables is a smell; often several can collapse into locals with sensible defaults inside the module.
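A sketch of the split (names are illustrative):

```hcl
variable "env" {
  type = string                       # input: part of the module's contract
}

locals {
  # internal derived value; callers never need to set this
  name_prefix = "myapp-${var.env}"
}

resource "aws_s3_bucket" "data" {
  bucket = "${local.name_prefix}-data"
}
```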
State & backends 5 questions
06 What is Terraform state, and why does Terraform need it?
State is a JSON snapshot mapping each resource address in your configuration to real infrastructure IDs and attributes. Terraform uses it to:
  • Know which remote object corresponds to each resource block
  • Plan updates vs creates vs destroys accurately
  • Track dependencies and store metadata (e.g. for modules)
State is required for correct operation; it is not optional metadata. Remote state enables teams to share one source of truth.
07 What problems does a remote backend solve compared to local terraform.tfstate?
Local state breaks down for teams: no shared locking, merge conflicts in Git, no secure central store. Remote backends typically provide:
  • Shared state — Everyone and CI use the same state file
  • State locking — Prevents concurrent applies corrupting state
  • Encryption & access control — IAM or cloud policies on the bucket/account
  • Optional versioning — S3 versioning, etc., for recovery
08 How does state locking work, and what happens if a lock cannot be acquired?
Backends that support locking acquire a lock before modifying state (e.g. DynamoDB lock table with S3, or native locking in Terraform Cloud). Another process cannot start a conflicting apply until the lock is released.

If a run crashes, a stale lock may remain; operators use terraform force-unlock only after confirming no active apply is running — otherwise you risk state corruption.
Never force-unlock casually in production; verify the holding process is dead.
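A typical S3-plus-DynamoDB locking backend looks like this (bucket, key, and table names are illustrative; the DynamoDB table must have a string primary key named `LockID`):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks"   # lock table; primary key must be "LockID"
    encrypt        = true
  }
}
```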
09 What is terraform import, and what does it not do for you?
terraform import associates an existing cloud resource with a resource address in state. Use it when infrastructure was created outside Terraform or after state loss.

It does not generate configuration for you: you write the resource block yourself (Terraform 1.5+ can assist via import blocks and plan -generate-config-out). Import also does not change remote infrastructure; it only updates state.

After import, run plan and reconcile until the plan is clean (config must match reality).
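A minimal import sequence (the resource address and bucket name are illustrative; the resource block must already exist in configuration):

```shell
terraform import aws_s3_bucket.assets my-existing-bucket
terraform plan   # reconcile config against reality until the plan is clean
```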
10 How do you migrate state to a new backend safely?
1. Add or change the backend block in the Terraform configuration.

2. Run terraform init -migrate-state (or follow prompts on re-init). Terraform copies state from the old backend to the new one.

3. Verify the new remote has the file, locking works, and IAM/policies are correct.

4. Keep a backup of the old state until a successful apply from the new backend.

For large moves (splitting state), prefer terraform state mv / state rm with extreme care and read-only validation plans first.
Modules & composition 5 questions
11 What is a Terraform module, and what is the "root module"?
A module is a directory of .tf files with optional inputs (variable) and outputs (output). It encapsulates a reusable piece of infrastructure (VPC, EKS cluster, RDS instance).

The root module is the working directory where you run Terraform — the top-level configuration that calls child modules with module blocks. Everything else is a child module.
12 How do you pass data out of a child module to the root module?
Declare output blocks in the child module. In the root, reference them as module.module_name.output_name. Those values can feed other modules or root-level outputs.

Inputs flow into modules via module "x" { ... } arguments mapped to variable blocks inside the child.
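A sketch of both directions of data flow (module path, variable, and resource names are illustrative):

```hcl
# modules/network/outputs.tf (child module)
output "vpc_id" {
  value = aws_vpc.main.id
}

# root module
module "network" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"        # input: mapped to a variable in the child
}

resource "aws_security_group" "app" {
  vpc_id = module.network.vpc_id    # child output consumed in the root
}
```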
13 How do you version modules from Git versus the Terraform Public/Private Registry?
Git/source: Use source = "git::https://..." with a ?ref= tag, branch, or commit SHA. Prefer immutable refs (tags or SHAs) for production — not main without pinning.

Registry: Use source = "namespace/name/provider" with a version constraint in the module block. Semver ranges give controlled upgrades; the registry resolves compatible versions.

Private modules mirror the same patterns with registry tokens or SSH for Git.
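Side by side, the two source styles (the Git URL is illustrative; the registry module is the public terraform-aws-modules VPC module):

```hcl
# Git source pinned to an immutable tag
module "vpc_git" {
  source = "git::https://example.com/org/terraform-aws-vpc.git?ref=v2.3.1"
}

# Registry source with a semver constraint
module "vpc_registry" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"
}
```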
14 When should you use explicit depends_on instead of implicit dependencies?
Terraform infers order from references between resources (implicit dependencies). Use depends_on when there is a real ordering requirement that is not visible in the configuration — for example, a resource must exist before another API becomes consistent, or IAM propagation delays.

Overusing depends_on hides the data flow and can slow applies; prefer references when possible.
If you can express the relationship with an attribute reference, do that instead of depends_on.
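A sketch of a legitimate depends_on, assuming an `aws_iam_role.app` declared elsewhere (AMI ID and policy file are placeholders):

```hcl
resource "aws_iam_role_policy" "app" {
  role   = aws_iam_role.app.id    # implicit dependency via the reference
  policy = file("policy.json")
}

resource "aws_instance" "app" {
  ami           = "ami-0abc1234"  # placeholder
  instance_type = "t3.micro"
  # Boot scripts need the role's permissions, but nothing here references
  # the policy resource, so the ordering must be stated explicitly:
  depends_on = [aws_iam_role_policy.app]
}
```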
15 Compare count and for_each for multiple resource instances.
count — Integer index [0], [1]. Simple but reordering the list can cause destroy/recreate of the "wrong" index. Use for homogeneous sets where order is stable or you accept replacement.

for_each — Map or set of strings; each instance's address uses the key. Identities stay stable as the collection changes: instances are added or removed by key rather than shifted by position (rename keys deliberately). Generally preferred for collections of named objects.

You cannot use both on the same block.
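A sketch of the identity difference (names and bucket prefix are illustrative):

```hcl
variable "names" {
  type    = list(string)
  default = ["a", "b", "c"]
}

# count: positional identity; removing "b" shifts "c" into index 1,
# so Terraform plans a destroy/recreate for the shifted instance
resource "aws_s3_bucket" "by_count" {
  count  = length(var.names)
  bucket = "demo-${var.names[count.index]}"
}

# for_each: key-based identity; removing "b" affects only that instance
resource "aws_s3_bucket" "by_key" {
  for_each = toset(var.names)
  bucket   = "demo-${each.key}"
}
```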
Resources & lifecycle 5 questions
16 What does a lifecycle block control? Name common meta-arguments.
lifecycle changes how Terraform manages a specific resource instance. Common meta-arguments:
  • create_before_destroy — Create replacement before destroying old (reduces downtime when supported)
  • prevent_destroy — Fail plan/apply if destroy is proposed
  • ignore_changes — Do not update for listed attributes (drift or external updates)
  • replace_triggered_by — Force replace when other resources change (1.2+)
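Two illustrative uses (resource bodies elided; the tag key is made up):

```hcl
resource "aws_lb" "app" {
  # ...
  lifecycle {
    create_before_destroy = true                  # replacement comes up first
    ignore_changes        = [tags["LastAudited"]] # updated outside Terraform
  }
}

resource "aws_db_instance" "main" {
  # ...
  lifecycle {
    prevent_destroy = true  # fail any plan that proposes destroying this database
  }
}
```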
17 What does create_before_destroy do, and when is it useful?
On replacement, Terraform creates the new resource first, updates dependents, then destroys the old instance. Useful when a brief outage from "destroy then create" is unacceptable — e.g. load-balanced compute, zero-downtime certs, or resources where names must overlap during cutover.

The provider must support overlapping identifiers (e.g. unique names) during the swap; some resource types cannot use create_before_destroy meaningfully.
18 Why are provisioners discouraged, and what patterns replace them?
Provisioners run arbitrary commands during apply (local-exec, remote-exec, etc.). They complicate state, error handling, idempotency, and drift — and they tangle imperative steps into declarative config.

Prefer: user_data / cloud-init, custom data scripts, configuration management triggered outside Terraform, immutable images (Packer), or provider-native features (Lambda on deploy, ECS task definitions, etc.).

HashiCorp documentation treats provisioners as a last resort.
19 What is configuration drift, and how does Terraform detect it?
Drift is when real infrastructure differs from what state + configuration expect (manual console changes, failed applies, autoscaling outside Terraform, etc.).

On plan, Terraform refreshes state from providers (unless refresh is disabled) and compares to desired config, proposing updates to reconcile. terraform plan -refresh-only focuses on refreshing state without other changes.
20 When do you use a data source instead of a resource?
Use data when you need to read existing infrastructure you do not manage in this configuration (shared VPC, AMI lookup, current AWS account ID, secrets metadata). Data sources never create or destroy remote objects.

Use resource when this Terraform project should own the lifecycle of that object.
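A common example: looking up the latest Amazon Linux 2023 AMI rather than hard-coding an ID (the name filter pattern is illustrative):

```hcl
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.al2023.id  # read by Terraform, not managed by it
  instance_type = "t3.micro"
}
```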
Providers, CLI & workspaces 5 questions
21 How do you configure multiple instances of the same provider (e.g. multiple AWS regions or accounts)?
Declare multiple provider blocks with the alias meta-argument and different settings (region, assume_role, etc.). Pass provider = aws.west (or map of providers) into modules/resources that should use that instance.

The default (unaliased) provider is the one without alias.
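A sketch of aliased providers for two regions (regions, bucket name, and module path are illustrative):

```hcl
provider "aws" {
  region = "us-east-1"   # default (unaliased) provider
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west    # use the aliased instance
  bucket   = "example-replica"
}

module "dr" {
  source = "./modules/dr"
  providers = {
    aws = aws.west       # hand the aliased provider to a child module
  }
}
```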
22 What are Terraform workspaces, and what are they not a substitute for?
Workspaces are multiple named state files inside the same backend (e.g. env:/dev, env:/prod prefixes). They let one configuration switch state with workspace select.

They are not full environment isolation by themselves — same code, same backend credentials risks human error. Many teams prefer separate directories, backends, or repos per environment, or Terraform Cloud workspaces with RBAC.

Workspaces do not change variable values automatically; you still need var files or CI parameters per env.
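A minimal CLI sketch (the tfvars filename is illustrative):

```shell
terraform workspace new staging            # creates state under env:/staging
terraform workspace select staging
terraform apply -var-file=staging.tfvars   # per-env values still come from you
```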
23 What is the variable definition precedence order (highest wins first)?
From highest to lowest precedence (later definitions in Terraform's evaluation order override earlier ones):
  • CLI: -var and -var-file flags, in the order given on the command line
  • Auto-loaded files: *.auto.tfvars / *.auto.tfvars.json, in lexical filename order
  • terraform.tfvars.json, then terraform.tfvars
  • Environment variables: TF_VAR_name
  • Default values in variable blocks (used only when nothing else sets the variable)
Note that TF_VAR_ environment variables rank below tfvars files, a common point of confusion; check the docs for your version when debugging surprises.
24 What is .terraform.lock.hcl, and should it be committed?
The dependency lock file records the exact provider plugin versions and hashes Terraform selected during init. It ensures reproducible installs across laptops and CI.

Yes, commit it for applications and shared modules (unless your org standard says otherwise). It prevents "works on my machine" provider drift and supply-chain surprises.
After upgrading providers, run init and commit the updated lock file with the code change.
25 When is -target acceptable, and what are the risks?
-target limits the plan/apply graph to specific resources — useful for emergency remediation, breaking circular dependency deadlocks during migration, or incremental import workflows.

Risks: partial applies leave state and reality out of sync with the full configuration; dependencies may be skipped; subsequent full plans may show large unexpected diffs. It is not a substitute for proper module boundaries or smaller state files.

HashiCorp recommends returning to a full apply as soon as possible.
Security & platform 5 questions
26 How should you handle secrets in Terraform?
Never commit plaintext secrets to VCS. Prefer:
  • Vault, AWS Secrets Manager, SSM Parameter Store with data sources or external data providers
  • Environment / CI secret stores injected as TF_VAR_ environment variables or short-lived workspace variables
  • Terraform Cloud/Enterprise sensitive variables (encrypted at rest)
Remember: state files often contain secret values in plain text — protect backend access, enable encryption, restrict IAM, and use remote backends with tight ACLs.
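For example, reading a secret at plan time from SSM Parameter Store (the parameter path is illustrative; the value still lands in state, so the backend must be protected):

```hcl
data "aws_ssm_parameter" "db_password" {
  name            = "/myapp/prod/db_password"  # illustrative path
  with_decryption = true
}

resource "aws_db_instance" "main" {
  # ...
  password = data.aws_ssm_parameter.db_password.value
}
```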
27 What does sensitive = true on a variable or output do?
It prevents Terraform from printing the value in CLI plan/apply output (redacted as <sensitive>). It does not encrypt values in state or on disk — state remains sensitive. Use for reducing accidental exposure in logs; combine with proper secret storage and backend controls.
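A minimal sketch (names and the connection string format are illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true   # redacted in CLI output, NOT encrypted in state
}

output "connection_string" {
  value     = "postgres://app:${var.db_password}@db.internal/app"
  sensitive = true   # outputs derived from sensitive values must be marked too
}
```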
28 Compare local CLI applies with Terraform Cloud remote operations.
Local: Terraform runs on your machine or CI worker; credentials live on that runner; state can be S3, etc.

Remote (TFC/TFE): Plans and applies run on HashiCorp-managed or private agents — centralized RBAC, audit logs, private registry, policy enforcement (Sentinel/OPA), and consistent runner environments. VCS-driven workflows trigger runs on merge.

Trade-off: remote adds latency and cost but improves governance for larger orgs.
29 What are check blocks (Terraform 1.5+), and how do they relate to Sentinel or OPA?
Check blocks are native HCL assertions evaluated during plan (and optionally continuously in some workflows). They express validations tied to resources/data — failing checks surface as actionable plan diagnostics without a separate policy language.

Sentinel (HashiCorp's policy-as-code framework in Terraform Cloud/Enterprise) and OPA are external policy engines that inspect JSON plan output — stronger for org-wide guardrails, cross-workspace rules, and integration with approval flows.

Use checks for module-level self-validation; use Sentinel/OPA for centralized compliance at scale.
30 Describe a solid pattern for multi-account AWS infrastructure with Terraform.
Common patterns:
  • One state per environment or blast-radius unit — e.g. networking stack vs app stack, or separate accounts for dev/stage/prod
  • Provider assumption — Use assume_role in the provider from a central tooling account; CI role with least privilege per account
  • Shared services — Read shared VPC or org data via the terraform_remote_state data source or SSM parameter lookups
  • Control Tower / Organizations — Terraform manages accounts, OUs, SCPs from a management or security account with tight state backend permissions
Avoid one giant state for the entire org — parallel applies, permissions, and blast radius become unmanageable.
Be ready to explain how you split state and how CI authenticates to each account.