Terraform Interview Questions - PART 1

Nidhi Ashtikar
11 min read · May 4, 2024


1. What is Terraform in AWS?

Terraform is an "Infrastructure as Code" tool that allows you to create, update, and version your infrastructure through code instead of manual processes. On AWS, it uses the AWS provider to manage services such as EC2, S3, and VPC.

2. What are the most useful Terraform commands?

  • terraform init — Initializes the working directory; plugin installation, module installation, backend initialization, and version checking happen as part of this step.
  • terraform plan — Generates an execution plan showing what Terraform will change, before anything is applied.
  • terraform apply — Applies the changes described in your configuration files to your infrastructure.
  • terraform destroy — Destroys all the resources defined in your Terraform configuration.
  • terraform refresh — Compares the real state of your infrastructure with the state recorded in the state file and updates the state file accordingly (newer versions prefer terraform apply -refresh-only).
  • terraform output — Retrieves the values of output variables defined in your Terraform configuration.
  • terraform graph — Creates a DOT-formatted graph of the resource dependency tree.

Scenario-Based Interview Questions

🔹Question 1: You have an existing infrastructure on AWS, and you need to use Terraform to manage it. How would you import these resources into your Terraform configuration?

We can use the terraform import command. A matching resource block (even an empty placeholder) must already exist in the configuration before importing:

terraform import [options] ADDRESS ID

terraform import aws_instance.localname i-abcd123
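
A minimal sketch of that placeholder block, matching the command above. Terraform does not generate the arguments for you; after the import, copy them from the imported state:

resource "aws_instance" "localname" {
  # Arguments are filled in after the import, e.g. by copying from:
  #   terraform state show aws_instance.localname
}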

🔹 Question 2: You are working with multiple environments (e.g., dev, prod) and want to avoid duplicating code. How would you structure your Terraform configurations to achieve code reuse?

OR

Your team wants to ensure that the infrastructure is consistently provisioned across multiple environments. How would you implement a consistent environment configuration?

We use modules to avoid duplicating code, combined with workspaces so that each environment keeps its own state:

terraform workspace list # List all workspaces

terraform workspace new dev
terraform workspace new prod

#Switching Between Workspaces

terraform workspace select <workspace_name>
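
For the reuse itself, a shared module is called from each environment with only its inputs varying. A minimal sketch, assuming a local module at ./modules/network and an illustrative cidr_block input:

module "network" {
  source      = "./modules/network"
  environment = terraform.workspace # "dev" or "prod", from the selected workspace
  cidr_block  = var.cidr_block      # assumed per-environment input
}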

🔹 Question 3: Describe a situation where you might need to use the terraform remote backend, and what advantages does it offer in state management?

The terraform remote backend allows you to store Terraform state files in a centralized location, such as an object storage service like Amazon S3.

Benefits: shared state across the team, state locking to prevent concurrent writes, and secure, durable state storage.

terraform {
  backend "s3" {
    bucket         = "<your_bucket_name>"
    key            = "terraform.tfstate"
    region         = "us-west-2"       # Update with your region
    dynamodb_table = "terraform_locks" # Optional: use DynamoDB for state locking
  }
}
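
If DynamoDB-based locking is wanted, the lock table must exist with a string hash key named LockID, which is what the S3 backend expects. A minimal sketch of that table (the name matches the backend block above):

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform_locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key name Terraform's S3 backend expects

  attribute {
    name = "LockID"
    type = "S"
  }
}

After adding or changing a backend block, run terraform init (or terraform init -migrate-state when moving existing state) to initialize it.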

🔹 Question 4: You need to create a highly available architecture in AWS using Terraform. Explain how you would implement an Auto Scaling Group with load balancing.

1. Define Load Balancer Resources: If you don’t already have a load balancer in your infrastructure, define the load balancer itself, a target group, and any necessary listeners and listener rules.

resource "aws_lb" "example" {
name = "example-lb"
internal = false
load_balancer_type = "application"

security_groups = ["${aws_security_group.example.id}"]

subnets = ["${aws_subnet.example.id}"]
}


resource "aws_lb_target_group" "example" {
name = "example-tg"
port = 80
protocol = "HTTP"
target_type = "instance"

health_check {
path = "/"
interval = 30
timeout = 5
healthy_threshold = 2
unhealthy_threshold = 2
}

vpc_id = "${aws_vpc.example.id}"
}

2. Define Auto Scaling Group: Define an Auto Scaling Group, referencing the existing or newly created launch configuration and specifying the load balancer target group to distribute traffic.

resource "aws_autoscaling_group" "example" {
name = "example-asg"
min_size = 2
max_size = 5
desired_capacity = 2
vpc_zone_identifier = ["${aws_subnet.example.id}"]

target_group_arns = ["${aws_lb_target_group.example.arn}"]
}
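
The launch configuration referenced above might look like the following minimal sketch; the AMI ID and instance type are placeholders. (On current AWS providers, aws_launch_template is the preferred replacement for launch configurations.)

resource "aws_launch_configuration" "example" {
  name_prefix     = "example-lc-"
  image_id        = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type   = "t3.micro"              # placeholder instance type
  security_groups = [aws_security_group.example.id]

  lifecycle {
    create_before_destroy = true # replacements must be created before the old one is removed
  }
}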

3. Attach ASG to Load Balancer: Ensure that the ASG is attached to the load balancer target group so that instances launched by the ASG are automatically registered with the load balancer. Note that target_group_arns on the ASG (step 2) already performs this registration; the separate aws_autoscaling_attachment resource is an alternative for when the target group is managed outside the ASG definition.

resource "aws_autoscaling_attachment" "example" {
autoscaling_group_name = "${aws_autoscaling_group.example.name}"
alb_target_group_arn = "${aws_lb_target_group.example.arn}"
}

🔹 Question 5: Your team is adopting a multi-cloud strategy, and you need to manage resources on both AWS and Azure using Terraform. How would you structure your Terraform code to handle this?

Organize Directories by Cloud Provider: Create separate directories for AWS and Azure within your Terraform project to keep the code organized and maintainable.

terraform_project/
├── aws/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── azure/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── modules/
    ├── aws_module/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── azure_module/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf

Manage Provider Configurations: Declare provider configurations separately for AWS and Azure within their respective directories. This ensures that Terraform applies the correct provider settings for each cloud environment.

# AWS provider configuration in aws/main.tf
provider "aws" {
  region = "us-west-2"
}

# Azure provider configuration in azure/main.tf
provider "azurerm" {
  features {}
}

# Example of conditional resource creation driven by a variable
# (a variable named "cloud" is assumed here)
resource "aws_instance" "example" {
  count = var.cloud == "aws" ? 1 : 0
  # AWS-specific configuration
}

resource "azurerm_virtual_machine" "example" {
  count = var.cloud == "azure" ? 1 : 0
  # Azure-specific configuration
}

🔹 Question 6: You want to run specific scripts after provisioning resources with Terraform. How would you achieve this, and what provisioners might you use?

Running specific scripts after provisioning resources with Terraform can be achieved using provisioners.

Provisioners are a Terraform feature that allows you to execute scripts or commands on local or remote machines as part of resource creation or destruction. HashiCorp recommends them as a last resort, since they act outside Terraform's declarative model.

  1. Inline Provisioners: The remote-exec provisioner with an inline list runs commands directly on the new resource. It needs a connection block to reach the instance.

resource "aws_instance" "example" {
  # Instance configuration

  connection {
    type        = "ssh"
    user        = "ubuntu"              # adjust for your AMI
    private_key = file("~/.ssh/id_rsa") # adjust key path
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo Hello from provisioner",
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "sudo systemctl start nginx"
    ]
  }
}

2. File Provisioners: The file provisioner uploads files from your local machine to the remote resource; it uses the same connection block as remote-exec.

resource "aws_instance" "example" {
  # Instance configuration (connection block as in the previous example)

  provisioner "file" {
    source      = "local/path/to/script.sh"
    destination = "/remote/path/to/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /remote/path/to/script.sh",
      "/remote/path/to/script.sh"
    ]
  }
}

3. Local Execution: The local-exec provisioner runs scripts or commands on the machine running Terraform, rather than on the provisioned resource, which is useful for scripts kept outside the managed infrastructure.

resource "aws_instance" "example" {
  # Instance configuration

  provisioner "local-exec" {
    command = "bash external_script.sh"
  }
}

🔹 Question 7: You are dealing with sensitive information, such as API keys, in your Terraform configuration. What approach would you take to manage these securely?

  1. Use Environment Variables: Store sensitive information as environment variables on your local machine or in your CI/CD environment. Terraform reads variables prefixed with TF_VAR_ at runtime without exposing them in the configuration files (see the export example after the variable blocks below).
  2. Utilize Terraform Variables: Declare the secrets as variables, mark them sensitive, and give them empty-string or placeholder defaults. Then reference these variables throughout your configuration.

variable "aws_access_key" {
  description = "AWS access key"
  default     = ""
  sensitive   = true # keeps the value out of plan output (Terraform 0.14+)
}

variable "aws_secret_key" {
  description = "AWS secret key"
  default     = ""
  sensitive   = true
}
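
For approach 1, values can then be supplied through TF_VAR_-prefixed environment variables, which Terraform maps onto the declared variables at runtime:

export TF_VAR_aws_access_key="your-access-key"
export TF_VAR_aws_secret_key="your-secret-key"
terraform plan # picks the values up from the environment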

3. Use Input Variables or TFVar Files:

variable "aws_access_key" {
description = "AWS access key"
}

variable "aws_secret_key" {
description = "AWS secret key"
}

Then, create a terraform.tfvars file with the actual values (keep this file out of version control, e.g. via .gitignore):

aws_access_key = "your-access-key"
aws_secret_key = "your-secret-key"

Or input the values directly when running Terraform commands:

terraform apply -var="aws_access_key=your-access-key" -var="aws_secret_key=your-secret-key"

🔹 Question 8: Describe a scenario where you might need to use Terraform workspaces, and how would you structure your project to take advantage of them?

Workspaces are useful when the same configuration must be deployed several times with separate state, for example once per environment (dev, staging, production). A layout that combines per-environment directories with shared modules, using one workspace per environment, might look like this:

Project Structure:

terraform_project/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── staging/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── production/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── ec2/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── rds/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── global_variables.tf
├── backend.tf
└── provider.tf
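
Within the shared configuration, the built-in terraform.workspace value can vary settings per environment. A minimal sketch, assuming an ami_id variable:

locals {
  instance_type = terraform.workspace == "production" ? "t3.large" : "t3.micro"
}

resource "aws_instance" "app" {
  ami           = var.ami_id # assumed variable
  instance_type = local.instance_type

  tags = {
    Environment = terraform.workspace # tags each instance with its workspace
  }
}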

🔹 Question 9: You’ve made changes to your Terraform configuration, and now you want to preview the execution plan before applying the changes. How would you do this?

To preview the execution plan before applying changes to your Terraform configuration, use the terraform plan command. It shows which resources will be created, changed, or destroyed without modifying any infrastructure.

terraform plan
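
To guarantee that what was reviewed is exactly what gets applied, the plan can be saved to a file and passed to apply:

terraform plan -out=tfplan   # save the reviewed plan
terraform apply tfplan       # apply exactly that plan, with no re-planning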

🔹 Question 10: Your team has decided to adopt GitOps practices for managing infrastructure with Terraform. How would you integrate Terraform with version control systems like Git?

1. Choose a Git Repository

Select a Git repository hosting platform (e.g., GitHub, GitLab, Bitbucket).

Create a repository to store Terraform configurations.

#Create a new repository named terraform-aws on GitHub.
#Steps:
#Go to GitHub and create a new repository.
#Clone the repository to your local machine:


git clone https://github.com/your-username/terraform-aws.git

2. Commit Terraform Configurations

Commit Terraform configuration files (*.tf) to the repository.

Include main configuration files, variable definitions, output definitions, and modules.

#Commit Terraform configuration files to the repository. 

#Create a directory structure for Terraform configurations:

terraform-aws/
├── main.tf
├── variables.tf
├── outputs.tf
└── modules/
    ├── vpc/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── ec2/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
#Add and commit Terraform configuration files to the repository:

git add .
git commit -m "Initial Terraform configurations"
git push origin main

3. Use Git Branches for Environments

Utilize Git branches to represent different environments (e.g., dev, staging, production).

Each environment branch contains environment-specific Terraform configurations.

#Example: Create separate branches for development, staging, and production environments.
#Steps:
#Create a new branch for the dev environment:

git checkout -b dev

#Commit environment-specific Terraform configurations to the dev branch.
#Repeat these steps for the staging and production branches.

4. Automate Pull Requests and Reviews

Implement a workflow requiring pull requests (PRs) and code reviews for Terraform changes.

Ensure changes are reviewed and approved by team members before merging.

#Configure GitHub to require pull requests and code reviews for Terraform changes.
#Steps:
#Go to the repository settings on GitHub.
#Under "Branch protection rules", enable required pull request reviews and specify the number of reviewers required.

5. Leverage Git Hooks

Utilize Git hooks to trigger actions automatically based on Git events.

Use pre-commit hooks for Terraform validation or formatting checks, such as the sketch below.
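
A minimal pre-commit hook sketch, saved as .git/hooks/pre-commit (the checks chosen here are illustrative):

#!/bin/sh
# Abort the commit if formatting or validation fails
terraform fmt -check -recursive || exit 1
terraform validate || exit 1   # requires an initialized working directory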

#Make the script executable:

chmod +x .git/hooks/pre-commit

6. Continuous Integration/Continuous Deployment (CI/CD)

Set up CI/CD pipelines to automate Terraform workflows.

Perform tasks such as validation, plan generation (terraform plan), and applying changes (terraform apply).

7. Manage Secrets Securely

Avoid storing sensitive information directly in Terraform configurations.

Utilize environment variables, HashiCorp Vault integration, or other secrets management solutions.

8. State Management

Store Terraform state files remotely in a secure backend.

Options include Terraform Cloud, AWS S3, Azure Blob Storage, etc.

9. Version Control Infrastructure Changes

Treat infrastructure changes as code by managing them in version control.

Track changes over time, revert to previous versions, and collaborate effectively.

🔹 Question 11: You need to manage infrastructure secrets, such as database passwords, in your Terraform configuration. What method or provider might you use?

AWS Secrets Manager
HashiCorp Vault provider
Environment variables
AWS Systems Manager Parameter Store
External data sources
Third-party plugins
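
As one concrete sketch, a database password stored in AWS Systems Manager Parameter Store can be read with a data source; the parameter name is illustrative:

data "aws_ssm_parameter" "db_password" {
  name            = "/prod/db/password" # illustrative parameter name
  with_decryption = true                # decrypt a SecureString value
}

# Reference it where needed, e.g.:
# password = data.aws_ssm_parameter.db_password.value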

🔹 Question 12: You are tasked with migrating your existing infrastructure from Terraform v0.11 to v0.12. What considerations and steps would you take?

  1. Review Documentation: Understand the v0.12 changes (new expression syntax, first-class types).
  2. Back Up State: Copy state files and pin provider versions before upgrading.
  3. Update Configuration Files: Run the built-in upgrade tool (below) and modify syntax and features as needed.
  4. Resolve Errors: Address remaining syntax errors and deprecations.
  5. Test and Iterate: Validate changes with terraform plan and fix issues.

terraform 0.12upgrade

🔹 Question 13: Explain a situation where you might need to use terraform taint and what effect it has on resources.

Terraform’s taint command is used to mark a resource for recreation during the next terraform apply.
This is particularly useful when you want to force the recreation of a resource due to configuration drift, updates, or issues encountered during provisioning.

Scenario:

Imagine an AWS EC2 instance provisioned with Terraform ends up in a bad state, for example its bootstrap script failed partway through. Nothing in the configuration changed, so Terraform detects no difference, but you want the instance rebuilt from scratch.

Steps:

  1. Identify the Resource: Determine the resource you want to mark for recreation. In this case, it’s the AWS EC2 instance.
  2. Taint the Resource: Use the terraform taint command to mark the resource as tainted:
terraform taint aws_instance.localname 

3. Reapply Changes: Run terraform apply to reapply the Terraform configuration, which will recreate the tainted resource:

terraform apply
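
On Terraform v0.15.2 and later, the same effect is achieved in one step with the -replace option, which is now the recommended approach:

terraform apply -replace="aws_instance.localname"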

🔹 Question 14: Your team is adopting GitLab CI/CD for automating Terraform workflows. Describe how you would structure your CI/CD pipeline for Terraform, including key stages.

1. Initialization: (terraform init)

Initialize the pipeline and set up necessary environment variables or configurations.

Actions:

  • Configure pipeline triggers and variables.
  • Set up authentication credentials for accessing cloud providers or other services.
  • Initialize Terraform and any required dependencies.

2. Plan: (terraform plan)

Generate an execution plan to preview changes before applying them.

Actions:

  • Run terraform plan to generate a plan for the proposed changes.
  • Capture and store the plan output for review and validation.
  • Optionally, perform syntax validation or linting on Terraform configurations.

3. Validate: (terraform validate)

Validate the Terraform configuration and review the execution plan.

Actions:

  • Run terraform validate to check the configuration's syntax and internal consistency (this can also run before the plan stage).
  • Review the generated execution plan for errors or warnings and verify it aligns with the desired infrastructure changes and policies.
  • Fail the pipeline if validation checks fail or errors are detected.

4. Apply (Manual Stage): (terraform apply)

Apply the changes to provision or update infrastructure.

Actions:

  • Triggered manually after reviewing and approving the execution plan.
  • Run terraform apply to apply the proposed changes to the infrastructure.
  • Prompt for confirmation before proceeding with the deployment.

5. Destroy (Optional Manual Stage):

Destroy infrastructure resources as needed.

Actions:

  • Triggered manually when infrastructure resources need to be destroyed.
  • Run terraform destroy to tear down the infrastructure provisioned by Terraform.
  • Prompt for confirmation before proceeding with the destruction.

6. Cleanup:

Perform cleanup tasks and finalize the pipeline execution.

Actions:

  • Clean up temporary files or artifacts generated during the pipeline execution.
  • Handle any post-deployment tasks or notifications.
  • Optionally, trigger additional actions or pipelines based on the pipeline outcome.

Additional Considerations:

  • Parallelism: Utilize GitLab CI/CD’s parallel job execution feature to optimize pipeline performance and speed up the provisioning process by running concurrent stages.
  • Environment-specific Pipelines: Create separate pipelines or stages for different environments (e.g., development, staging, production) to maintain isolation and manage environment-specific configurations.
  • Pipeline Triggers: Set up triggers to automatically start the pipeline on code commits, merge requests, or other events to ensure continuous integration and deployment.
  • Error Handling: Implement error handling and notifications to alert stakeholders of pipeline failures or issues during execution.

By structuring your GitLab CI/CD pipeline for Terraform with these key stages and considerations, you can automate the provisioning and management of infrastructure effectively while maintaining control and visibility over the deployment process.
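
A minimal .gitlab-ci.yml sketch of these stages; the image tag, job names, and manual rule are illustrative rather than a complete setup:

stages:
  - validate
  - plan
  - apply

image:
  name: hashicorp/terraform:1.5 # assumed image tag
  entrypoint: [""]              # override the image entrypoint so shell commands run

before_script:
  - terraform init -input=false

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan # hand the saved plan to the apply job

apply:
  stage: apply
  script:
    - terraform apply -input=false tfplan
  when: manual # require human approval before applying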

For teams on Jenkins instead, an equivalent declarative Jenkins pipeline script covering the same stages:

pipeline {
    agent any

    environment {
        // Define environment variables here
    }

    stages {
        stage('Initialization') {
            steps {
                // Initialize pipeline and set up environment
                // Example: sh 'terraform init'
            }
        }

        stage('Plan Dev Environment') {
            when {
                expression { params.ENVIRONMENT == 'dev' }
            }
            steps {
                // Generate Terraform execution plan for the dev environment
                // Example: sh 'terraform plan -out=tfplan_dev'
            }
        }

        stage('Plan Prod Environment') {
            when {
                expression { params.ENVIRONMENT == 'prod' }
            }
            steps {
                // Generate Terraform execution plan for the prod environment
                // Example: sh 'terraform plan -out=tfplan_prod'
            }
        }

        stage('Apply Dev Environment') {
            when {
                expression { params.ENVIRONMENT == 'dev' && params.APPLY_DEV == 'true' }
            }
            steps {
                // Apply changes to the dev environment
                // Example: sh 'terraform apply tfplan_dev'
            }
        }

        stage('Apply Prod Environment') {
            when {
                expression { params.ENVIRONMENT == 'prod' && params.APPLY_PROD == 'true' }
            }
            steps {
                // Apply changes to the prod environment
                // Example: sh 'terraform apply tfplan_prod'
            }
        }

        stage('Cleanup') {
            steps {
                // Clean up temporary files or artifacts
                // Example: sh 'rm -rf tfplan_dev tfplan_prod'
            }
        }
    }

    post {
        always {
            // Perform cleanup or finalization tasks
            // Example: echo 'Pipeline completed'
        }
    }
}

If you found this guide helpful, do click the 👏 button.

Follow for more learning like this 😊

If there’s a specific topic you’re curious about, feel free to drop a personal note or comment. I’m here to help you explore whatever interests you!

Thanks for spending your valuable time here enhancing your knowledge!
