1. What is Terraform and how is it different from other IaC tools?
Terraform is an Infrastructure as Code (IaC) tool developed by HashiCorp, designed to automate and manage the provisioning of infrastructure in a declarative, version-controlled manner. Unlike many other IaC tools, Terraform is cloud-agnostic: the same workflow and configuration language (HCL) can manage resources across multiple cloud providers as well as on-premises infrastructure.
2. How do you call a main.tf module?
The main.tf file in your working directory belongs to the root module, which Terraform loads automatically when you run commands such as terraform init, terraform plan, and terraform apply. To call that directory as a child module from another configuration, reference it with a module block and run terraform init so Terraform can resolve the module and download any required providers.
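A minimal sketch of calling a local module; the directory path, module name, and input variable are illustrative:
module "network" {
  source = "./modules/network"

  # Input variable expected by the module (hypothetical name)
  vpc_cidr = "10.0.0.0/16"
}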
3. What exactly is Sentinel? Can you provide a few examples where we can use Sentinel policies?
Sentinel is a policy-as-code framework integrated with Terraform Cloud and Terraform Enterprise. It allows teams to define and enforce policies on infrastructure provisioning before changes are applied. For instance, you can write policies that restrict deployments to approved regions, enforce resource naming conventions, or require security and compliance standards such as mandatory tags or encryption. Sentinel policies are a powerful governance tool for ensuring infrastructure adheres to organizational standards.
4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
To create multiple instances of the same resource, you can use the count or for_each argument. For example:
resource "aws_instance" "example" {
count = 3
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
This will create three instances of the specified AMI and instance type.
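When each instance needs a distinct, stable identity, for_each is often a better fit than count. A minimal sketch, with the instance names purely illustrative:
resource "aws_instance" "example" {
  for_each = toset(["app", "worker", "batch"])

  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = each.key # each.key is the current element of the set
  }
}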
5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
A. Set the environment variable TF_LOG=TRACE
B. Set verbose logging for each provider in your Terraform configuration
C. Set the environment variable TF_VAR_log=TRACE
D. Set the environment variable TF_LOG_PATH
Answer: A. Set the environment variable TF_LOG=TRACE
Setting TF_LOG=TRACE will enable debug messages, providing detailed information about Terraform's internal operations, including provider loading paths.
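A quick sketch of enabling this in a shell session; the log file path is illustrative:
export TF_LOG=TRACE                 # most verbose log level, includes provider loading details
export TF_LOG_PATH=./terraform.log  # optional: write logs to a file instead of stderr
terraform init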
6. The terraform destroy command will destroy everything that has been created in the infrastructure. How would you save a particular resource while destroying the rest?
The terraform destroy command destroys all resources defined in your configuration. To preserve a specific resource, a common approach is to remove it from the state with terraform state rm before destroying; Terraform then no longer manages that resource and will leave it untouched. Alternatively, the -target flag limits the destroy to only the resources you specify. For example:
terraform destroy -target=aws_instance.example
This destroys only the specified AWS instance and leaves every other resource intact.
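A sketch of the preserve-then-destroy workflow, assuming a hypothetical resource address:
terraform state rm aws_instance.keep_me  # stop tracking the resource; it keeps running in the cloud
terraform destroy                         # destroys everything still recorded in the state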
7. Which module is used to store .tfstate file in S3?
Remote state storage is configured with a backend rather than a module. To store the .tfstate file in an S3 bucket, use the s3 backend inside the terraform block:
terraform {
  backend "s3" {
    bucket = "your-s3-bucket-name"
    key    = "path/to/your/terraform.tfstate"
    region = "your-region"
  }
}
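In practice the S3 backend is usually hardened with encryption and state locking; a sketch, with the DynamoDB table name purely illustrative:
terraform {
  backend "s3" {
    bucket         = "your-s3-bucket-name"
    key            = "path/to/your/terraform.tfstate"
    region         = "your-region"
    encrypt        = true                     # encrypt the state object at rest in S3
    dynamodb_table = "terraform-state-locks"  # enable state locking and consistency checks
  }
}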
8. How do you manage sensitive data in Terraform, such as API keys or passwords?
Sensitive data such as API keys or passwords can be protected by marking input variables (and outputs) with sensitive = true, which redacts their values in Terraform CLI output. Additionally, you can use tools like HashiCorp Vault to retrieve secrets dynamically during a Terraform run instead of hard-coding them in configuration or committing them to version control.
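A minimal sketch of a sensitive variable; the variable name is illustrative:
variable "db_password" {
  description = "Database administrator password"
  type        = string
  sensitive   = true # value is redacted in plan and apply output
}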
9. You are working on a Terraform project that needs to provision an S3 bucket and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
To achieve this, you would use the aws_s3_bucket resource to create the S3 bucket and the aws_iam_user resource to create the user. Additionally, you would need to define an IAM policy and attach it to the user, granting the necessary permissions for read and write access to the S3 bucket.
resource "aws_s3_bucket" "example_bucket" {
bucket = "example-bucket-name"
# other bucket configurations
}
resource "aws_iam_user" "example_user" {
name = "example-user"
# other user configurations
}
resource "aws_iam_user_policy_attachment" "example_user_policy" {
user = aws_iam_user.example_user.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
Ensure that you replace placeholder values with your specific configurations.
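Note that AmazonS3FullAccess grants access to every bucket in the account. To scope permissions to this bucket only, an inline policy can be attached instead; a sketch with a deliberately minimal statement:
resource "aws_iam_user_policy" "example_user_rw" {
  name = "example-bucket-rw"
  user = aws_iam_user.example_user.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      Resource = [
        aws_s3_bucket.example_bucket.arn,        # bucket-level actions such as ListBucket
        "${aws_s3_bucket.example_bucket.arn}/*", # object-level actions such as GetObject/PutObject
      ]
    }]
  })
}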
10. Who maintains Terraform providers?
Terraform providers are published on the Terraform Registry and fall into three broad tiers: official providers maintained by HashiCorp (such as aws, azurerm, and google), partner providers maintained by the vendors of the underlying platforms, and community providers maintained by individual contributors. Each maintainer is responsible for implementing and updating the resources specific to their platform, which is what gives Terraform its broad support for different infrastructure services.
11. How can we export data from one module to another?
To export data from one module to another in Terraform, you can use output variables. For example, if you have a module named moduleA and you want to use its output in another module named moduleB, you can define an output variable in moduleA:
output "example_output" {
value = aws_instance.example.id
}
In the calling configuration, you reference this output through the module and pass it into moduleB as an input variable:
module "moduleA" {
  source = "./path/to/moduleA"
}

module "moduleB" {
  source      = "./path/to/moduleB"
  instance_id = module.moduleA.example_output
}
This way, you can pass information between modules using output variables.
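For this to work, moduleB must declare a matching input variable; a minimal sketch, with the name instance_id purely illustrative:
variable "instance_id" {
  description = "ID of the instance created by moduleA"
  type        = string
}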
Thank you for reading!