1. What is Terraform and how is it different from other IaC tools?
Terraform is an open-source infrastructure-as-code (IaC) tool from HashiCorp that lets you describe infrastructure declaratively in HashiCorp Configuration Language (HCL) and then create, change, and destroy it through a plan-and-apply workflow. One of the main differences between Terraform and other IaC tools is that Terraform is provider-agnostic: it can manage infrastructure across multiple cloud providers (such as AWS, Azure, and Google Cloud Platform) as well as on-premises infrastructure, using the same configuration language and syntax.
Terraform’s plan command provides a “dry run” that lets you preview changes before applying them, reducing the risk of unintended changes or downtime. Additionally, Terraform supports state management: it tracks the current state of your infrastructure and the changes made to it over time, which makes it easier to manage complex infrastructure configurations and changes.
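To make the workflow concrete, here is a typical plan-and-apply sequence (the plan file name tfplan is just an example):
terraform plan -out=tfplan   # preview the proposed changes and save the reviewed plan
terraform apply tfplan       # apply exactly the plan that was reviewed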
2. How do you call a main.tf module?
In Terraform, main.tf is, by convention, the primary configuration file that defines a module’s resources and their properties. It is typically located in the root directory of a Terraform module (Terraform itself loads every .tf file in the directory; the name main.tf is simply the convention).
To use a Terraform module that contains a main.tf file, you reference it from your calling configuration with a module block. Within this block, you specify the module’s source, either as a local file path or a remote address such as a registry or Git URL, and give the module a unique name that you can use to reference its outputs elsewhere in your Terraform code.
module "my_module" {
source = "./my_module_directory"
}
In this example, my_module is a unique name for the module, and ./my_module_directory is the path to the directory containing the main.tf file of the module. This tells Terraform to use the resources defined in the main.tf file of the module.
3. What exactly is Sentinel? Can you provide a few examples that we can use for Sentinel policies?
Sentinel is HashiCorp’s policy-as-code framework, integrated with Terraform Cloud and Terraform Enterprise, that allows you to define and enforce policies for your infrastructure-as-code (IaC) deployments. It helps ensure that your infrastructure deployments are secure, compliant, efficient, and cost-effective. You can use Sentinel policies to enforce a variety of rules, covering areas such as compliance, security, naming conventions, resource limits, and cost optimization.
Here are a few examples of Sentinel policies:
Compliance: A policy that checks whether a VM is deployed in a specific region to comply with data residency regulations.
Security: A policy that ensures that storage accounts are encrypted using a specific encryption method.
Naming conventions: A policy that checks whether a resource group name includes a specific prefix.
Resource limits: A policy that prevents users from creating more than a certain number of virtual machines in a specific subscription.
Cost optimization: A policy that checks whether resources are tagged with a specific label that indicates their business purpose.
4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
To create multiple instances of the same resource in Terraform, you can use either the count or the for_each meta-argument in your resource definition. The count argument lets you specify the number of identical instances to create, while for_each creates one instance per element of a map or set of strings. Using these meta-arguments avoids duplicating code and keeps your Terraform configuration file concise, as shown in the sketch below.
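A minimal sketch of both approaches; the AMI variable (var.ami_id) and instance sizes are placeholders rather than values from the question:
resource "aws_instance" "web" {
  count         = 3           # three identical instances: web[0], web[1], web[2]
  ami           = var.ami_id  # placeholder input variable
  instance_type = "t3.micro"
}

resource "aws_instance" "app" {
  for_each      = toset(["dev", "staging", "prod"])  # one instance per environment
  ami           = var.ami_id
  instance_type = "t3.micro"

  tags = {
    Name = "app-${each.key}"
  }
}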
5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
A. Set the environment variable TF_LOG=TRACE
B. Set verbose logging for each provider in your Terraform configuration
C. Set the environment variable TF_VAR_log=TRACE
D. Set the environment variable TF_LOG_PATH
Ans. A. Set the environment variable TF_LOG=TRACE
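For example, on a Unix-like shell you could enable trace logging (the log file name is a placeholder) before running Terraform:
export TF_LOG=TRACE          # most verbose log level
export TF_LOG_PATH=trace.log # optional: write logs to a file instead of stderr
terraform init               # the trace output shows where providers are searched for and loaded from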
6. Below command will destroy everything that is being created in the infrastructure. Tell us how would you save any particular resource while destroying the complete infrastructure.
terraform destroy
When running terraform destroy, Terraform will attempt to destroy every resource tracked in the state. To preserve a particular resource, two common approaches are to use the -target option, which limits destruction to the resource addresses you list (so anything you do not target is left alone), or to remove the resource from state with terraform state rm so that Terraform no longer manages it and therefore will not destroy it (see the sketch below).
For example, the following command destroys only the resource aws_instance.example and leaves all other resources in the configuration intact:
terraform destroy -target=aws_instance.example
Conversely, to keep aws_instance.example while destroying the rest of the infrastructure, you would either pass -target for every other resource or take the state-removal approach sketched below.
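A minimal sketch of the state-removal approach, reusing the aws_instance.example address from the example above:
terraform state rm aws_instance.example   # stop tracking the resource in state
terraform destroy                         # destroys everything still in state; the instance survives, now unmanaged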
7. Which module is used to store .tfstate file in S3?
Terraform does not use a module for this; remote state storage is configured through a backend. To store the .tfstate file in S3, you declare an s3 backend inside the terraform block of your configuration (optionally with a DynamoDB table for state locking). When you run terraform init, Terraform initializes the backend and stores the state in the specified bucket.
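A minimal backend configuration sketch; the bucket, key, and DynamoDB table names are placeholders, and the bucket is assumed to already exist:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder bucket name
    key            = "prod/terraform.tfstate"  # path of the state object in the bucket
    region         = "ap-south-1"
    encrypt        = true                      # server-side encryption for the state object
    dynamodb_table = "terraform-locks"         # optional: placeholder table used for state locking
  }
}
Run terraform init after adding the block so Terraform can initialize (and, if needed, migrate) the state to S3.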
8. How do you manage sensitive data in Terraform, such as API keys or passwords?
In Terraform, sensitive data such as API keys or passwords can be managed using the following techniques:
Use Terraform input variables: Define input variables for secrets, mark them as sensitive so Terraform redacts their values in plan and apply output, and supply the values at runtime instead of hard-coding them (see the sketch after this list).
Store sensitive data in environment variables: Export secrets as TF_VAR_<name> environment variables (or provider-specific variables such as AWS credentials) so Terraform can read them without their ever being written to your configuration files.
Use external secret management tools: Use external secret management tools like Vault or AWS Secrets Manager to store and manage sensitive data.
Protect the state file: sensitive values end up in the Terraform state in plain text, so store state in a backend that encrypts it at rest (such as S3 with server-side encryption or Terraform Cloud), restrict who can read it, and, for local files, consider encrypting them with tools like sops or age.
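As a minimal sketch of the input-variable technique mentioned above (the variable name db_password is a placeholder):
variable "db_password" {
  type      = string
  sensitive = true   # Terraform redacts this value in plan and apply output
}
The value can then be supplied at runtime, for example with export TF_VAR_db_password="..." before running terraform plan.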
9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
To provision an S3 bucket and a user with read and write access to the bucket, I would use the following Terraform resources:
aws_s3_bucket: This resource would define the S3 bucket and its configuration options such as its name, access control, and versioning (the region comes from the provider configuration).
aws_iam_user: This resource would define the user and its configuration options such as its name and permissions.
aws_iam_access_key: This resource would create access keys for the user, which would be used to authenticate the user when accessing the S3 bucket.
aws_s3_bucket_policy: This resource would define the permissions that the user would have on the S3 bucket.
Here is an example Terraform configuration file that creates an S3 bucket, an IAM user, access keys, and a bucket policy:
provider "aws" {
region = "ap-south-1"
}
resource "aws_s3_bucket" "example_bucket" {
bucket = "example-bucket"
acl = "private"
}
resource "aws_iam_user" "example_user" {
name = "example-user"
}
resource "aws_iam_access_key" "example_access_key" {
user = aws_iam_user.example_user.name
}
resource "aws_s3_bucket_policy" "example_policy" {
bucket = aws_s3_bucket.example_bucket.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = {
AWS = aws_iam_user.example_user.arn
}
Action = [
"s3:GetObject",
"s3:PutObject"
]
Resource = "${aws_s3_bucket.example_bucket.arn}/*"
}
]
})
}
This configuration file defines an S3 bucket with a private access control list, an IAM user named “example-user”, access keys for the user, and a bucket policy that allows the user to read and write objects in the bucket.
10. Who maintains Terraform providers?
Terraform providers are maintained by HashiCorp, by technology partners, and by the open-source community. HashiCorp maintains the official providers (such as those for Amazon Web Services, Microsoft Azure, and Google Cloud Platform), often in collaboration with the cloud vendors; partner providers are maintained by the companies that own the underlying technology; and community providers are maintained by individual contributors who submit code, bug reports, and feature requests through each provider’s GitHub repository. Providers are versioned and released separately from the core Terraform project, which keeps the architecture modular and extensible.
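As an illustration of that separate versioning, a configuration pins the providers it needs independently of the Terraform version; the constraint below is only an example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # example constraint; providers follow their own release schedules
    }
  }
}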
11. How can we export data from one module to another?
In Terraform, you can export data from one module to another using outputs. Outputs allow you to expose values from one module that can be consumed by another module or referenced outside of the module.
To export data from a module, you define an output (conventionally in the module’s outputs.tf file, though any .tf file works) and assign a value to it. For example:
output "my_output" {
value = "some value"
}
Then, in the calling configuration, you can reference this output using the syntax module.<module_name>.<output_name> and pass it into another module as an input variable.
For example:
module "my_module" {
source = "./my_module"
my_output = module.other_module.my_output
}
This passes the value of the my_output output from other_module into an input variable named my_output in my_module; for this to work, my_module must declare a matching input variable, as sketched below.
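For completeness, a minimal sketch of the declaration the receiving module needs (inside ./my_module, for example in a variables.tf file), assuming the value is a string:
variable "my_output" {
  type = string   # must be declared in the child module to accept the value passed in
}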
Thank you for reading!