Mastering Terraform: Scenario-Based Interview Questions & Solutions | Terraform Interview Mastery

DGR Uploads
8 Feb 2024 · 15:22

Summary

TLDR: This video script offers a comprehensive guide to 15 scenario-based interview questions for Terraform, a popular infrastructure as code tool. It covers essential topics such as importing existing infrastructure, leveraging Terraform modules for code reusability, utilizing remote backends for state management, and implementing auto-scaling groups for high availability. The script also addresses multi-cloud management, sensitive information handling, workspaces, version control integration, and CI/CD pipeline structuring for Terraform. It's a valuable resource for those preparing for DevOps interviews or looking to enhance their Terraform expertise.

Takeaways

  • 📝 Use the `terraform import` command to integrate existing infrastructure into Terraform management.
  • 🔄 Leverage Terraform modules for code reusability and maintainability across multiple environments.
  • 🗄 Utilize Terraform remote backends for centralized state management, facilitating collaboration and state locking.
  • 🛡️ Implement auto-scaling groups and load balancers in AWS for a highly available architecture using Terraform.
  • 🌐 Handle multicloud infrastructure with Terraform by defining multiple provider blocks for different cloud platforms.
  • 📜 Execute scripts post-provisioning with Terraform using local and remote exec provisioners within resource blocks.
  • 🔒 Securely manage sensitive information in Terraform by using environment variables, external files, or secret managers.
  • 🌐 Terraform workspaces allow for the use of a single configuration file across multiple environments with separate state files.
  • 📉 Preview changes with `terraform plan` to understand the impact of Terraform configurations before applying them.
  • 🔄 Integrate Terraform with version control systems like Git for version management and GitOps practices.
  • 🔑 Manage infrastructure secrets using external data sources or secret managers, avoiding hardcoded secrets in the configuration file.
  • 🔄 Ensure consistent environment configuration using Terraform modules to promote code consistency across different environments.
  • 🚀 When migrating Terraform versions, update syntax, address deprecations, and handle breaking changes with the `terraform 0.12upgrade` command.
  • 🛑 Use `terraform taint` to force the destruction and recreation of a resource when necessary, such as when attributes cannot be changed in place.
  • 🔧 Structure CI/CD pipelines for Terraform with stages for initialization, planning, and applying changes, including manual approval steps for security.

Q & A

  • How can you import existing AWS infrastructure into Terraform for management?

    -You can use the `terraform import` command to import existing resources. First, you need to write a dummy configuration file and then run the `terraform import` command with the resource type and your local name, followed by the instance ID of the resource you want to import. Terraform will update the state file with this information and start managing the resource.
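
A minimal sketch of that flow, assuming an EC2 instance; the resource name `legacy_web` and the instance ID are placeholders, not values from the video:

```hcl
# Placeholder ("dummy") block for an instance that already exists in AWS.
resource "aws_instance" "legacy_web" {
  ami           = "ami-0123456789abcdef0" # corrected to the real values after
  instance_type = "t3.micro"              # inspecting the imported state
}

# Import the existing instance into this address, then reconcile the arguments:
#   terraform import aws_instance.legacy_web i-0abcd1234efgh5678
```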

  • What is the purpose of Terraform modules and how do they help with code reusability?

    -Terraform modules are used to promote code reusability and maintainability. They allow you to write the configuration once and call it multiple times with different parameters for different environments, thus avoiding code duplication and making the infrastructure management more efficient.
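
A short illustration of calling one module per environment with different parameters; the module path `./modules/web_server` and its variables are assumptions for this sketch:

```hcl
# Same module source, different parameters per environment.
module "web_server_dev" {
  source        = "./modules/web_server"
  environment   = "dev"
  instance_type = "t3.micro"
}

module "web_server_prod" {
  source        = "./modules/web_server"
  environment   = "prod"
  instance_type = "m5.large"
}
```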

  • Why might you use a Terraform remote backend for state management, and what are its advantages?

    -A Terraform remote backend is used to store state files in a remote location, which is beneficial for collaboration among multiple team members. It offers advantages such as shared state file access, state file locking to prevent concurrent operations, and enhanced security by not storing sensitive state information locally.
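
For example, a remote backend on S3 with DynamoDB-based state locking might look like the following sketch; the bucket, key, and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"        # shared bucket for the team
    key            = "prod/network/terraform.tfstate" # path to this configuration's state
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"          # enables state locking
    encrypt        = true
  }
}
```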

  • How can you create a highly available architecture in AWS using Terraform?

    -You can create a highly available architecture by using Terraform to provision auto scaling groups and load balancers. The auto scaling group ensures that multiple instances are running, and the load balancer distributes traffic efficiently among these instances.
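
A condensed sketch of the pattern; the launch template, subnet, and VPC variables are assumed to be defined elsewhere in the configuration:

```hcl
resource "aws_lb" "app" {
  name               = "app-lb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 6
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids
  target_group_arns   = [aws_lb_target_group.app.arn] # instances register behind the LB

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
```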

  • How can you structure Terraform code to manage resources on both AWS and Azure in a multicloud strategy?

    -In Terraform, you can define multiple provider blocks in the same configuration file for different cloud platforms like AWS and Azure. This allows you to manage resources across multiple clouds using a single Terraform configuration.
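
A minimal multicloud sketch with both providers in one configuration; the region, location, and resource names are placeholders:

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# Each resource is created by the provider it belongs to.
resource "aws_s3_bucket" "logs" {
  bucket = "example-multicloud-logs"
}

resource "azurerm_resource_group" "app" {
  name     = "example-app-rg"
  location = "East US"
}
```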

  • What are provisioners in Terraform and how can they be used to run scripts after provisioning resources?

    -Provisioners in Terraform are used to execute scripts or commands on local or remote machines after the resources have been provisioned. You can use `local-exec` for local machine scripts and `remote-exec` for scripts on remote resources like EC2 instances within your Terraform configuration blocks.
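
A hedged sketch of a `remote-exec` provisioner with its connection details; the AMI, key pair, and script path are placeholders, and the script is assumed to already be on the instance (for example, copied by a `file` provisioner):

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
  key_name      = "example-key"

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/bootstrap.sh", # grant execute permission
      "/tmp/bootstrap.sh",          # then run the script
    ]
  }

  # Terraform uses this connection information to reach the instance.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/example-key.pem")
    host        = self.public_ip
  }
}
```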

  • How should you manage sensitive information like API keys in Terraform configurations securely?

    -Sensitive information should not be hardcoded in the Terraform configuration files. Instead, use environment variables, external files, or centralized secret management tools like HashiCorp Vault or AWS Secrets Manager to securely store and access sensitive data.
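
One common pattern, sketched under the assumption that the key is supplied outside version control, is to declare the value as a sensitive variable and feed it through an environment variable:

```hcl
variable "api_key" {
  type      = string
  sensitive = true # redacted from plan/apply output
}

# Supplied outside the code base, e.g.:
#   export TF_VAR_api_key="..."
# or via a git-ignored *.tfvars file.
```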

  • What are Terraform workspaces and how can they be used for multiple environments?

    -Terraform workspaces allow you to use a single configuration for multiple environments. Each workspace shares the same configuration files but maintains its own state file, enabling you to apply the same configuration separately to environments like Dev, QA, and Prod.
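
Workspaces are created and selected from the CLI (`terraform workspace new dev`, `terraform workspace select dev`), and the configuration can branch on the current workspace name. A small sketch, with placeholder AMI and sizing values:

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace # e.g. "dev", "qa", "prod"
  }
}
```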

  • How can you preview the execution plan before applying changes in Terraform?

    -You can use the `terraform plan` command to review the execution plan, which provides a detailed overview of the changes Terraform will apply when you execute the configuration. This helps in understanding and verifying the impact of the changes before they are applied.

  • How can you integrate Terraform with version control systems like Git for GitOps practices?

    -You can maintain Terraform configuration files in a version control system like Git, using it to manage different versions of the code and leveraging branching strategies for various environments. This aligns with GitOps practices, allowing for a workflow that includes code review, branching, and merging for infrastructure changes.

  • What is the recommended method for managing infrastructure secrets like database passwords in Terraform?

    -It is recommended to use external data sources or secret managers to manage infrastructure secrets securely. Avoid hardcoding secrets in the Terraform configuration file to prevent exposure if the code is pushed to a public repository.
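
As an illustration of the data-source approach on AWS (the secret name and database settings are placeholders for this sketch):

```hcl
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "example/db-password"
}

resource "aws_db_instance" "app" {
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "appuser"
  password            = data.aws_secretsmanager_secret_version.db_password.secret_string
  skip_final_snapshot = true
}
```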

  • How can you ensure consistent environment configuration across multiple environments using Terraform?

    -Terraform modules can be used to create consistent environment configurations. By calling the same module with different variables for each environment, you can ensure that the infrastructure setup is consistent across Dev, UAT, and Prod environments.
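
The definition side of such a module might look like the sketch below (names and defaults are assumptions); every environment gets the same resources, and only the variable values differ:

```hcl
variable "environment" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "ami_id" {
  type    = string
  default = "ami-0123456789abcdef0" # placeholder
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name        = "web-${var.environment}"
    Environment = var.environment
  }
}
```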

  • What considerations and steps should be taken when migrating from Terraform version 0.11 to version 0.12?

    -When upgrading Terraform versions, you need to update the syntax in the configuration files, address any deprecated features, and handle any breaking changes. The `terraform 0.12upgrade` command can be utilized to automatically handle some of these updates.
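
A typical syntax change, sketched with assumed variable names: 0.12 drops the interpolation wrappers that 0.11 required and uses first-class expressions.

```hcl
# Terraform 0.11 style:
#   instance_type = "${var.instance_type}"
#   tags          = "${merge(var.common_tags, map("Name", "app"))}"

# Terraform 0.12 equivalent:
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = var.instance_type
  tags          = merge(var.common_tags, { Name = "app" })
}
```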

  • What is the purpose of the `terraform taint` command and when should it be used?

    -The `terraform taint` command is used when you want to destroy and recreate a resource, such as when an EC2 instance is corrupted. It marks the resource as tainted, and the next `terraform apply` will replace the tainted resource with a new one.

  • How can you structure a CI/CD pipeline for Terraform in GitLab, including key stages?

    -A CI/CD pipeline for Terraform in GitLab should include stages for `init`, `plan`, and `apply`. The `init` stage initializes Terraform configuration files, `plan` generates a preview of the actions to be taken, and `apply` executes the plan. It's also important to use environment-specific variables, protect sensitive data, and implement manual approval steps for critical changes.
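
A minimal `.gitlab-ci.yml` sketch of those stages; the image tag, backend configuration, and protected variables are assumptions, and the manual gate on `apply` provides the approval step:

```yaml
image:
  name: hashicorp/terraform:1.5
  entrypoint: [""] # let GitLab run shell commands instead of the terraform entrypoint

stages:
  - init
  - plan
  - apply

init:
  stage: init
  script:
    - terraform init -input=false

plan:
  stage: plan
  script:
    - terraform init -input=false
    - terraform plan -input=false -out=tfplan
  artifacts:
    paths:
      - tfplan

apply:
  stage: apply
  script:
    - terraform init -input=false
    - terraform apply -input=false tfplan
  when: manual # manual approval before changes are applied
  dependencies:
    - plan
```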

Outlines

00:00

📘 Terraform Interview Questions Overview

This paragraph introduces a session focused on 15 scenario-based interview questions related to Terraform. The speaker emphasizes the importance of these questions for those preparing for Terraform interviews and provides a brief overview of what to expect, including real-world scenarios. The session encourages viewers to subscribe and promises a deep dive into Terraform's practical applications.

05:01

🔄 Importing Existing AWS Infrastructure with Terraform

The speaker discusses how to integrate existing AWS infrastructure into Terraform management using the 'terraform import' command. They explain the process of writing a dummy configuration file and executing the import command with the correct syntax, including resource type and local name. The paragraph highlights the ability to manage previously manually created resources through Terraform, ensuring individual resource import and state file updates.

10:01

🛠 Structuring Terraform Configurations for Multiple Environments

This section addresses the challenge of avoiding code duplication across multiple environments like Dev, Prod, etc. The speaker introduces Terraform modules as a solution for code reusability and maintainability. They explain how modules can be parameterized for different environments, promoting efficient and organized Terraform configuration management.

15:02

🗄️ Terraform Remote Backends for State Management

The paragraph delves into the use of Terraform remote backends for state file management, offering advantages such as collaboration and state file locking. The speaker outlines various options for remote backends, including S3 buckets and Azure storage, and discusses the benefits of centralized state file storage and access control for multiple users.

🚀 Creating Highly Available Architectures with Terraform

The speaker explains how to create a highly available architecture in AWS using Terraform, specifically focusing on the implementation of auto-scaling groups and load balancing. They provide an example code snippet for creating an auto-scaling group and setting up a load balancer, ensuring efficient traffic distribution and high availability.

🌐 Managing Multicloud Infrastructure with Terraform

This paragraph covers the structure of Terraform code for managing resources on multiple cloud platforms, such as AWS and Azure. The speaker describes the use of multiple provider blocks within the same configuration file and the organization of resources within each provider block, highlighting Terraform's support for multicloud strategies.

🛠️ Running Scripts Post-Provisioning with Terraform

The speaker discusses the use of provisioners in Terraform to execute scripts or commands after resource provisioning. They differentiate between local and remote exec provisioners and provide an example of how to specify provisioners within a resource block, including connectivity information for Terraform to execute commands on remote machines.

🔑 Managing Sensitive Information in Terraform Configurations

The paragraph addresses the secure management of sensitive information such as API keys in Terraform configurations. The speaker advises against hardcoding sensitive data and recommends using environment variables, external files, or centralized secret management tools like HashiCorp Vault or AWS Secrets Manager.

🌿 Using Terraform Workspaces for Multiple Environments

The speaker introduces Terraform workspaces for managing multiple environments with a single configuration file. They explain how workspaces allow for the execution of the same configuration file in different environments, each maintaining its own state file, and how this approach promotes efficient environment management.

📋 Previewing Terraform Execution Plans

This paragraph explains how to preview the execution plan before applying changes in Terraform using the 'terraform plan' command. The speaker highlights the importance of reviewing the detailed overview of changes that Terraform will apply, ensuring a clear understanding of the impact of configuration updates.

🔄 Integrating Terraform with Version Control Systems

The speaker discusses the adoption of GitOps practices for managing infrastructure with Terraform, focusing on the integration with version control systems like Git. They describe maintaining Terraform configuration files on platforms like GitHub, utilizing branching strategies for different environments, and following a GitOps workflow for changes.

🗝️ Managing Infrastructure Secrets with Terraform

The paragraph addresses the management of infrastructure secrets such as database passwords in Terraform configurations. The speaker reiterates the importance of not hardcoding sensitive data and suggests using external data sources or secret managers to securely maintain sensitive information.

🔍 Ensuring Consistent Environment Configurations with Terraform

The speaker explains how to implement consistent environment configurations across multiple setups using Terraform modules. They discuss the benefits of code reusability and consistency, emphasizing the use of modules to launch resources like EC2 instances in different environments with variable parameters.

🛑 Upgrading Terraform Versions and Best Practices

This paragraph covers the considerations and steps for migrating infrastructure from Terraform version 0.11 to version 0.12. The speaker advises updating configuration syntax, addressing deprecated features, and handling breaking changes, while also mentioning the 'terraform 0.12upgrade' command to assist with automatic updates.

📌 Using Terraform Taint for Resource Replacement

The speaker explains the use of 'terraform taint' for situations where a resource needs to be destroyed and recreated, such as when an EC2 instance is corrupted. They detail how tainting a resource signals Terraform to replace it during the next 'terraform apply', facilitating the recreation of non-functional resources.

🤖 Structuring CI/CD Pipelines for Terraform with GitLab

The paragraph outlines how to structure CI/CD pipelines for Terraform using GitLab, including key stages such as 'init', 'plan', and 'apply'. The speaker recommends using environment-specific variables, protecting sensitive data, and implementing manual approval steps to ensure safe and controlled Terraform executions.

👍 Conclusion and Call to Action

The speaker concludes the session by summarizing the covered content and encouraging viewers to like, subscribe, and engage with the channel for more insights on DevOps and Terraform. They highlight the value of the discussed interview questions for those preparing for DevOps roles involving Terraform.

Keywords

💡Terraform

Terraform is an infrastructure as code (IaC) tool developed by HashiCorp that enables the creation, modification, and versioning of infrastructure through code. It is pivotal in the video's theme, as the script discusses various scenarios related to its use in managing cloud resources. For instance, it mentions using Terraform to import existing AWS infrastructure and to manage resources across multiple cloud platforms.

💡Scenario-based Interview Questions

Scenario-based interview questions are hypothetical situations presented to candidates to assess their problem-solving skills and knowledge. In the context of the video, these questions are focused on Terraform and its application in real-world infrastructure management. The script outlines 15 such questions, demonstrating how to handle different challenges in Terraform-based environments.

💡Infrastructure as Code (IaC)

Infrastructure as Code (IaC) is a DevOps practice where infrastructure is provisioned and managed using code and software development practices. The video script emphasizes IaC through the use of Terraform, showcasing how to script and automate the deployment of infrastructure resources.

💡Terraform Import

The 'terraform import' command is a feature in Terraform that allows existing infrastructure to be brought under Terraform's management. The script provides an example of how to use this command to import an EC2 instance into Terraform's state file, thus enabling Terraform to manage it.

💡Terraform Modules

Terraform modules are reusable units of infrastructure that can be shared and used across different projects. The script highlights their importance in promoting code reusability and maintainability, as well as parameterizing them for different environments to avoid code duplication.

💡Remote Backend

A remote backend in Terraform is a storage location for the state file, which keeps track of the infrastructure's current state. The script discusses using remote backends like S3 buckets for state management to enable collaboration, state file sharing, and locking, which is crucial for teams working on the same infrastructure.

💡Auto Scaling Groups

Auto Scaling Groups are a feature in cloud platforms like AWS that allow applications to scale based on demand. The video script explains how to implement auto scaling groups with load balancing using Terraform to ensure high availability of applications.

💡Multicloud Strategy

A multicloud strategy involves using more than one cloud service provider to build and deploy applications. The script addresses structuring Terraform code to manage resources across multiple cloud platforms like AWS and Azure, demonstrating Terraform's versatility in a multicloud environment.

💡Terraform Provisioners

Terraform provisioners are used to execute scripts or commands on local or remote machines after provisioning resources. The script describes using local exec and remote exec provisioners to run scripts on EC2 instances, showing how to integrate post-provisioning tasks into Terraform workflows.

💡Sensitive Information

Sensitive information refers to data that requires a higher level of security, such as API keys or database passwords. The script advises against hardcoding such information in Terraform configurations, instead recommending the use of environment variables, external files, or secret management tools like AWS Secrets Manager.

💡Terraform Workspaces

Terraform workspaces allow a single configuration file to be used for multiple environments, such as development, QA, and production. The script explains how to structure a project to take advantage of workspaces, enabling the management of different environments without duplicating configuration code.

💡Terraform Plan

The 'terraform plan' command is used to preview the execution plan of a Terraform configuration, showing the intended changes before they are applied. The script emphasizes the importance of using 'terraform plan' to review and understand the impact of changes on infrastructure.

💡GitOps

GitOps is an operational framework for managing infrastructure and application delivery using Git as a single source of truth. The script discusses integrating Terraform with GitOps practices, including the use of version control systems like Git for Terraform configuration files, and adopting a branching strategy for different environments.

💡Terraform Taint

Terraform taint is used to mark a resource for destruction and recreation. The script explains how to use 'terraform taint' when a resource, such as an EC2 instance, needs to be replaced, illustrating its use in scenarios where attributes cannot be changed in place.

💡CI/CD Pipeline

CI/CD stands for Continuous Integration/Continuous Delivery or Deployment, which are practices in software development to automate the integration and delivery of code changes. The script outlines structuring a CI/CD pipeline for Terraform, including stages like 'init', 'plan', and 'apply', and emphasizes the importance of manual approval steps for critical changes.

Highlights

Introduction to 15 scenario-based interview questions for Terraform.

Importing existing AWS infrastructure into Terraform using the 'terraform import' command.

Using dummy configuration files for the initial import process.

Structuring Terraform configurations for code reusability with multiple environments using Terraform modules.

Parameterizing modules for different environments to promote code maintainability.

Advantages of using Terraform remote backends for state management, including collaboration and state file locking.

Creating highly available architectures in AWS with Terraform through auto-scaling groups and load balancing.

Managing multicloud resources with Terraform by defining multiple provider blocks.

Executing scripts post-provisioning with Terraform using local and remote exec provisioners.

Securing sensitive information in Terraform configurations by avoiding hardcoding and using environment variables or external files.

Utilizing Terraform workspaces for managing multiple environments with a single configuration file.

Previewing execution plans with 'terraform plan' before applying changes in Terraform.

Integrating Terraform with version control systems like Git for managing infrastructure as code.

Managing infrastructure secrets securely without hardcoding in Terraform configurations.

Ensuring consistent environment configuration across multiple environments using Terraform modules.

Migrating infrastructure from Terraform version 0.11 to 0.12, addressing syntax updates and deprecated features.

Using 'terraform taint' to destroy and recreate resources when necessary.

Structuring CI/CD pipelines for Terraform in GitLab, including key stages like init, plan, and apply.

Recommendations for securing sensitive data and implementing manual approval steps in Terraform CI/CD pipelines.

Conclusion summarizing the importance of these interview questions for Terraform in DevOps roles.

Transcripts

[00:05] Hello and welcome back to my channel. In today's session we will be looking at 15 scenario-based interview questions that you can expect for Terraform. Whether you're preparing for an interview or presenting Terraform experience, you can definitely expect scenario-based questions, and the ones in this session are real-world questions. Before I start off with the session, please don't forget to hit that subscribe button. Let's get started.

[00:46] The first scenario-based question: you have existing infrastructure on AWS and you want to use Terraform to manage it — how would you import these resources into your Terraform configuration? Basically, we already have some infrastructure, let's say it was created manually, and now we want to start managing it with Terraform as well. For that we can use the `terraform import` command, which lets us import existing resources so Terraform can start managing them. Ideally you'll have to write a dummy configuration file first and then run `terraform import`. The syntax takes the resource type and your local name — which will be in the configuration file you wrote in advance — followed by the instance ID. So let's say you're importing an EC2 instance: you pass that instance ID, Terraform updates the state file with this information, and it starts managing the resource for us. With `terraform import` we have to import resources individually; we cannot import multiple resources in a single command.

[02:07] The next scenario-based question: you're working with multiple environments — let's say Dev, Prod and so on — and you want to avoid duplicating your code. How would you structure your Terraform configurations to achieve code reusability? This is where we can make use of Terraform modules. Modules mainly help with code reusability: you write the configuration once and then call it any number of times, which promotes reusability as well as maintainability. When calling the modules we can parameterize them for the different environments — if you're executing for Dev you pass the parameters accordingly, and if you're executing for Prod you pass those parameters instead. Terraform modules are what we can implement for this.

[03:02] The next scenario-based question: describe a situation where you might need to use a Terraform remote backend, and what advantages does it offer for state management? We know that Terraform maintains a state file, which is basically the information about all the resources it manages. We can use remote backends to store this state file in a remote location — instead of keeping it on the local machine, we push it to a common location that multiple people can access. There are lots of options available: S3 buckets, Azure Storage, or the HashiCorp-provided option, Consul. What advantages does it provide? One, collaboration, so multiple people can work with it. It also provides state file sharing and state locking: when one person is performing operations, the state file is locked and no operations are allowed from other users.

[04:09] The next scenario-based question: you need to create a highly available architecture in AWS using Terraform — explain how you would implement auto scaling groups with load balancing. With this we are basically creating resource blocks with the respective resource types. If you want an auto scaling group, `aws_autoscaling_group` is the resource type we'll use, and then we fill in the details. For load balancing we create a load balancer with the `aws_lb` resource type, and we also have to make sure the instances we create are part of the load balancer and the auto scaling group, which ensures traffic is distributed efficiently. Whenever we talk about making applications highly available, auto scaling groups and load balancers are what we have, and we can use Terraform to create these resources.

[05:07] The next question: your team is adopting a multicloud strategy and you need to manage resources on both AWS and Azure using Terraform — how do you structure your Terraform code to handle this? We know Terraform supports multiple cloud platforms, so we can use it to create infrastructure on several clouds, in this case AWS and Azure. We can provide multiple provider blocks in the same configuration file — for example a provider block for AWS, one for Azure, one for Google Cloud — and then define the resources accordingly against each provider. If I want to create a resource on AWS I define it under the AWS provider; for Azure I define it under the Azure provider, and so on.

[06:00] The next question: you want to run specific scripts after provisioning your resources with Terraform — how would you achieve this, and which provisioners might you use? When we talk about provisioners in Terraform, we have `local-exec` and `remote-exec`. We can use these to execute scripts or commands on the local machine as well as on remote machines — say you're launching an EC2 instance and want to run some commands, we can use `remote-exec` for that. We generally specify provisioners within the resource block. For example, within the resource block we define a `remote-exec` provisioner running some inline commands — first granting execute permission and then executing the script — and we also provide the connection details. Terraform needs to establish connectivity, so it uses that connection information to connect to the instance and execute the commands for us.

[07:05] The next question: you're dealing with sensitive information such as API keys in your Terraform configuration — what approach would you take to manage this securely? It is always recommended not to hardcode any sensitive information in the configuration file. We can make use of environment variables or external files to store the sensitive data; we should never keep it in the configuration files and should always keep it in a secure location. We can also consider HashiCorp Vault for centralized secret management, and if you're on AWS you can definitely consider Secrets Manager, where we store all our secrets and then fetch that information in Terraform using a data source.

[07:52] The next question: describe a scenario where you might need to use Terraform workspaces, and how would you structure a project to take advantage of them? Workspaces can be used whenever you want to use a single configuration file for multiple environments. Let's say we have one config file and I want to execute it for my different environments — Prod, Dev, QA, SIT and UAT. I want one single file, but I want to execute it environment by environment; that's where workspaces come in. For each environment we create a workspace, and each workspace works from the same configuration but maintains its own state file, so when I execute the config file in the respective workspace it gets applied to the respective environment.

[08:50] The next question: you have made changes to your Terraform configuration and now you want to preview the execution plan before applying the changes — how would you do this? Terraform provides a command for this: `terraform plan`, which we can use to review the execution plan — what exactly Terraform is changing, or what actions it is going to take when I execute the configuration file. It provides a detailed overview of the changes Terraform will apply when I execute that configuration code.

[09:27] The next question: your team has decided to adopt GitOps practices for managing infrastructure with Terraform — how would you integrate Terraform with version control systems like Git? Just like any other code, we can maintain our Terraform configuration files in a version control system, in this case Git or GitHub. We can maintain different versions of the code and start managing it there, leverage a branching strategy for the different environments, and follow a GitOps workflow for changes — pushing our code to GitHub and maintaining whatever branching strategy fits the environments we are working on.

[10:18] The next question: you need to manage infrastructure secrets, such as database passwords, in your Terraform configuration — what method or provider might you use? As we already discussed, it is always recommended not to keep sensitive data in the Terraform config file. We have to use an external data source or a secrets manager to maintain the sensitive data — Secrets Manager is the service we have in AWS, or you can use HashiCorp Vault. Avoid hardcoding your secrets in the configuration; it is never recommended and it's always a risk: if the code gets exposed on a public repo, anyone can see that sensitive data.

[11:14] The next question: your team wants to ensure that infrastructure is consistently provisioned across multiple environments — how would you implement a consistent environment configuration? Again, we can make use of Terraform modules, which help us make our code reusable. Let's say you have an EC2 instance that needs to be launched in your Dev, UAT and Prod environments. We can use the same code by calling the Terraform module to execute the configuration in the respective environment. A module abstracts the complexity and mainly promotes code consistency: we have the same piece of code, the variables change based on the environment we are executing against, but the main configuration remains the same, and that way we can ensure all environments have the same consistency in their infrastructure setup.

[12:09] The next question: you're tasked with migrating your existing infrastructure from Terraform version 0.11 to version 0.12 — what considerations and steps would you take? Whenever we upgrade Terraform from one version to another, we have to make sure we update the syntax in the configuration files accordingly, address any deprecated features, and handle any breaking changes. We can also utilize the `terraform 0.12upgrade` command to automatically handle some of these updates for us. These are a few of the things to keep in mind whenever upgrading from one version to another.

[12:57] The next question: explain a situation where you might need to use `terraform taint`, and what effect it has on resources. `terraform taint` can be used whenever you want to destroy and recreate a resource. Let's say you have an EC2 instance and it is corrupted — I want to destroy it and launch a new instance, so we can use `terraform taint` for that. It could be for any reason the server is no longer working as expected; we can destroy and recreate it. With this you mark the resource as tainted, so the next time I run `terraform apply`, it knows the resource has been tainted and replaces it with a new one for us. Use it when a resource needs to be replaced, such as when updating attributes that cannot be changed in place.

[13:53] The next question: your team is adopting GitLab CI/CD for automating Terraform workflows — describe how you would structure your CI/CD pipeline for Terraform, including key stages. Essentially, the CI/CD stages are init, plan and apply. Init is where we initialize the Terraform configuration, plan generates a preview of the actions Terraform is going to take, and apply executes that plan for us. Beyond this, it is also recommended to use environment-specific variables, protect sensitive data, and implement manual approval steps — do not auto-approve `terraform apply`; always have a manual approval for any critical changes. These are some of the recommendations to keep in mind when setting up CI/CD for Terraform execution.

[14:50] So there you have it — we have covered 15 scenario-based interview questions that you can expect for Terraform, something you can definitely expect in a DevOps interview on the Terraform tool. If you found the video helpful, give it a thumbs up, don't forget to like the video and subscribe to the channel for more insights on DevOps. Until next time, happy learning!


Related Tags
Terraform · Interview · DevOps · AWS · Infrastructure · Scenarios · Code Reusability · State Management · Multicloud · CI/CD · Secrets