Streamlining Multi-Environment Infrastructure in Azure with Jenkins, Terragrunt, Tfsec, Tflint, and Terraform-docs.


Introduction

Setting up and managing resources in cloud computing projects can be a big job that takes a lot of time. To make this easier, we use tools like Terraform, which lets us create and control cloud resources using configuration files.

This article is all about making the process smoother. We want to manage different environments more efficiently, meaning we don't want to write the same code over and over (we call this the "Don't Repeat Yourself" rule). We also want to handle different backends for each environment. This is where Terragrunt comes in – it solves these challenges!

In the spotlight is Jenkins, which does some important jobs for us. It helps create documentation using Terraform-docs, checks our Terraform code for issues with TFlint and tfsec, and helps set up our infrastructure in various environments. All of this ensures that our work with Cloud infrastructure is easy and effective.

We will try to create the following architecture:

Tools / Technologies

To demonstrate the solution, I use the following tools/technologies:

  • Terraform - Terraform is an open-source infrastructure as code software tool created by HashiCorp.

  • Terragrunt - Terragrunt acts as a thin wrapper for Terraform, providing additional features such as remote state management, locking, and configurations for multiple environments. It helps address challenges related to code reuse and managing multiple backends.

  • Terraform-docs - Terraform-docs generates documentation for Terraform modules in various formats, providing clear and organized insights into the purpose and structure of your infrastructure code.

  • Tfsec - Tfsec is a security scanner for your Terraform code. It ensures that your infrastructure configurations adhere to best practices and security standards, helping to identify and mitigate potential vulnerabilities.

  • TFlint - TFlint is a Terraform linter that analyzes your Terraform configurations for potential errors, providing early feedback and ensuring adherence to coding standards.

  • Jenkins - Jenkins is an open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery.

  • Visual Studio 2022 - A fully featured, extensible, free IDE for creating modern applications for Android, iOS, and Windows, as well as web applications and cloud services.

  • Azure - Microsoft Azure is a cloud computing service operated by Microsoft for application management via Microsoft-managed data centers.

  • GitHub - GitHub, Inc. is a provider of Internet hosting for software development and version control using Git.

Project Structure

This is what our project structure will look like:

|-- modules/
    |-- app_services/
        |-- main.tf
        |-- outputs.tf
        |-- variables.tf
    |-- network/
        |-- main.tf
        |-- outputs.tf
        |-- variables.tf
    |-- app_gateway/
        |-- main.tf
        |-- variables.tf
    |-- traffic_manager/
        |-- main.tf
        |-- variables.tf
    |-- resource_grp/
        |-- main.tf
        |-- variables.tf
        |-- outputs.tf
|--Terragrunt_files
    ├── dev
       ├── east_app_gateway
       │   └── terragrunt.hcl
       ├── east_rg
       │   └── terragrunt.hcl
       ├── east_vnet
       │   └── terragrunt.hcl
       ├── east_webapp
       │   └── terragrunt.hcl
       ├── traffic_manager_rg
       │   └── terragrunt.hcl
       ├── traffic_manager
       │   └── terragrunt.hcl
       ├── west_app_gateway
       │   └── terragrunt.hcl
       ├── west_rg
       │   └── terragrunt.hcl
       ├── west_vnet
       │   └── terragrunt.hcl
       ├── west_webapp
       │   └── terragrunt.hcl
       ├── terragrunt.hcl

    ├── preprod
       ├── east_app_gateway
       │   └── terragrunt.hcl
       ├── east_rg
       │   └── terragrunt.hcl
       ├── east_vnet
       │   └── terragrunt.hcl
       ├── east_webapp
       │   └── terragrunt.hcl
       ├── traffic_manager_rg
       │   └── terragrunt.hcl
       ├── traffic_manager
       │   └── terragrunt.hcl
       ├── west_app_gateway
       │   └── terragrunt.hcl
       ├── west_rg
       │   └── terragrunt.hcl
       ├── west_vnet
       │   └── terragrunt.hcl
       ├── west_webapp
       │   └── terragrunt.hcl
       ├── terragrunt.hcl

    ├── prod
        ├── east_app_gateway
        │   └── terragrunt.hcl
        ├── east_rg
        │   └── terragrunt.hcl
        ├── east_vnet
        │   └── terragrunt.hcl
        ├── east_webapp
        │   └── terragrunt.hcl
        ├── traffic_manager_rg
        │   └── terragrunt.hcl
        ├── traffic_manager
        │   └── terragrunt.hcl
        ├── west_app_gateway
        │   └── terragrunt.hcl
        ├── west_rg
        │   └── terragrunt.hcl
        ├── west_vnet
        │   └── terragrunt.hcl
        ├── west_webapp
        │   └── terragrunt.hcl
        ├── terragrunt.hcl
|--Jenkinsfile

Let's start by creating Terraform modules

Create modules


We will use a modular structure to help us in organizing and maintaining our Terraform configuration effectively. The structure of the modules is as follows:

|-- modules/
    |-- app_services/
        |-- main.tf
        |-- outputs.tf
        |-- variables.tf
    |-- network/
        |-- main.tf
        |-- outputs.tf
        |-- variables.tf
    |-- app_gateway/
        |-- main.tf
        |-- variables.tf
    |-- traffic_manager/
        |-- main.tf
        |-- variables.tf
    |-- resource_grp/
        |-- main.tf
        |-- variables.tf
        |-- outputs.tf


Resource Group

First, we will create a module for the Azure resource group.

Create a folder resource_grp and add the following files:

#main.tf
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}
#variables.tf
variable "rg_name" {
  description = "Name for the resource group"
  type        = string
}

variable "location" {
  description = "Location for the resource group"
  type        = string
}
#outputs.tf
output "rg_name" {
  value = azurerm_resource_group.rg.name
}

output "location" {
  value = azurerm_resource_group.rg.location
}

This Terraform configuration creates an Azure Resource Group and provides outputs for the name and location of the created resource group.

This module will be used to create the three resource groups of each environment, as shown in the architecture of the project.
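
For illustration, here is a minimal sketch of how a Terragrunt configuration could consume this module (the folder path and input values are hypothetical; the real files are covered in the Terragrunt section below):

```hcl
# e.g. Terragrunt_files/dev/east_rg/terragrunt.hcl -- values are illustrative
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules/resource_grp"
}

inputs = {
  rg_name  = "east_rg_dev"  # hypothetical name
  location = "East US"
}
```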

Azure Web Apps

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

Create a web app

Start Visual Studio 2022 and select Create a new project.

In the Create a new project dialog, select ASP.NET Core Web App, and then select Next.

In the Configure your new project dialog, name your project, and then select Next.

After successfully creating the project, modify the Index.cshtml file under the Pages folder and put "Welcome From Web App West".

Now we will create a Git repository for the app that we will use later.

Do the same for the East region Web App, just modify Index.cshtml with "Welcome From Web App East".

Web App Services module

Create a folder named app_services and add the following files.

#main.tf
# Create the Linux App Service Plan
resource "azurerm_service_plan" "appserviceplan" {
  name                = var.app_service_plan_name
  location            = var.location
  resource_group_name = var.rg_name
  os_type             = "Linux"
  sku_name            = "B1"
}
# Create the web app
resource "azurerm_linux_web_app" "webapp" {
  name                = var.app_service_name
  location            = var.location
  resource_group_name = var.rg_name
  service_plan_id     = azurerm_service_plan.appserviceplan.id
  https_only          = true
  site_config {
    minimum_tls_version = "1.2"
  }
}

#  Deploy code from GitHub repo
resource "azurerm_app_service_source_control" "sourcecontrol" {
  app_id             = azurerm_linux_web_app.webapp.id
  repo_url           = var.repo_url
  branch             = var.branch
  use_manual_integration = true
  use_mercurial      = false
}
#variables.tf
variable "rg_name" {
  description = "Name for the resource group"
  type        = string
}

variable "location" {
  description = "Location for the resource group"
  type        = string
}

variable "app_service_plan_name" {
  description = "Name for the App Service Plan"
  type        = string
}

variable "app_service_name" {
  description = "Name for the App Service"
  type        = string
}

variable "repo_url" {
  description = "URL of the Git repository"
  type        = string
}

variable "branch" {
  description = "Branch of the Git repository"
  type        = string
}
#outputs.tf
output "webapp_name" {
  value       = azurerm_linux_web_app.webapp.default_hostname
  description = "The default hostname of the web app."
}
output "id" {
  value       = azurerm_linux_web_app.webapp.id
  description = "The ID of the web app."
}

In the main.tf file of the App Services module, the code creates the infrastructure needed to host the application: a Linux App Service Plan (B1 SKU) that defines the underlying compute, and a Linux Web App that enforces HTTPS with a minimum TLS version of 1.2. The azurerm_app_service_source_control resource integrates the web app with a Git repository for source control, using manual integration.

In the variables.tf file, essential variables like the resource group name (rg_name), location (location), App Service Plan name (app_service_plan_name), App Service name (app_service_name), Git repository URL (repo_url), and Git branch (branch) are declared.

The outputs.tf file outputs the default hostname (webapp_name) and the ID (id) of the Azure App Service, which will be used by other modules, especially the Application Gateway.
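
As a sketch, a Terragrunt configuration wiring this module to a resource group could look like the following (the plan/app names and repository URL are hypothetical placeholders; substitute your own):

```hcl
# e.g. Terragrunt_files/dev/east_webapp/terragrunt.hcl -- values are illustrative
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules/app_services"
}

dependency "east_rg" {
  config_path = "../east_rg"
}

inputs = {
  rg_name               = dependency.east_rg.outputs.rg_name
  location              = dependency.east_rg.outputs.location
  app_service_plan_name = "asp-eastus-dev"      # hypothetical
  app_service_name      = "webapp-eastus-dev"   # hypothetical
  repo_url              = "https://github.com/<user>/<web-app-repo>"
  branch                = "master"
}
```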

Network

Create a folder network and add the following files

#main.tf
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  resource_group_name = var.rg_name
  location            = var.location
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "subnet" {
  name                 = "default"
  resource_group_name = var.rg_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes    = ["10.0.0.0/24"]
}

resource "azurerm_public_ip" "ip" {
  name                = var.public_ip_name
  resource_group_name = var.rg_name
  location            = var.location
  sku                 = "Standard"
  allocation_method   = "Static"
  domain_name_label   =  var.domain_name
}
#outputs.tf
output "public_ip_id" {
    value = azurerm_public_ip.ip.id
}

output "subnet_id" {
    value = azurerm_subnet.subnet.id
}
#variables.tf
variable "rg_name" {
  description = "Name for the resource group"
  type        = string
}

variable "location" {
  description = "Location for the resource group"
  type        = string
}

variable "vnet_name" {
  description = "Name for the virtual network"
  type        = string
}

variable "public_ip_name" {
  description = "Name for the public IP"
  type        = string
}

variable "domain_name" {
  description = "Domain name for the public IP"
  type        = string
}

This prepares the necessary infrastructure to be used by the Application Gateway.

Application Gateway

Azure Application Gateway is a load balancing service for web traffic, that helps make your applications more scalable and highly available. The Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities.

We will create a terraform module to create the Azure Application Gateway. We will be adding the Web Application Firewall (OWASP 3.0).

Application Gateway requires several other services — namely:

  • Virtual Network (VNET)

  • Subnet

  • Public IP

Therefore we will use the module for Network Services created previously

Create an app_gateway folder and add the following files

#main.tf
resource "azurerm_application_gateway" "app_gateway" {
  name                = var.name
  resource_group_name = var.rg_name
  location            = var.location

  sku {
    name     = "WAF_v2"
    tier     = "WAF_v2"
    capacity = 2
  }

  waf_configuration {
    enabled          = true
    firewall_mode    = "Detection"
    rule_set_type    = "OWASP"
    rule_set_version = "3.0"
  }

  gateway_ip_configuration {
    name      = "subnet"
    subnet_id = var.vnet_subnet_id
  }

  frontend_port {
    name = "http"
    port = 80
  }

  frontend_ip_configuration {
    name                 = "frontend"
    public_ip_address_id = var.public_ip_id
  }

  backend_address_pool {
    name   = "AppService"
    fqdns  = [var.app_service_fqdn]
  }

  http_listener {
    name                           = "http"
    frontend_ip_configuration_name = "frontend"
    frontend_port_name             = "http"
    protocol                       = "Http"
  }

  probe {
    name                = "probe"
    protocol            = "Http"
    path                = "/"
    host                = var.app_service_fqdn
    interval            = 30
    timeout             = 30
    unhealthy_threshold = 3
  }

  backend_http_settings {
    name                  = "http"
    cookie_based_affinity = "Disabled"
    port                  = 80
    protocol              = "Http"
    request_timeout       = 1
    probe_name            = "probe"
    pick_host_name_from_backend_address = true
  }

  request_routing_rule {
    name                       = "http"
    rule_type                  = "Basic"
    http_listener_name         = "http"
    backend_address_pool_name  = "AppService"
    backend_http_settings_name = "http"
    priority                   = 100
  }
}
#variables.tf
variable "name" {
  description = "Name for the Application Gateway"
  type        = string
}
variable "rg_name" {
  description = "Name for the Resource Group"
  type        = string
}

variable "location" {
  description = "Location for the Application Gateway"
  type        = string
}

variable "vnet_subnet_id" {
  description = "ID of the VNet Subnet for the Application Gateway"
  type        = string
}

variable "public_ip_id" {
  description = "ID of the Public IP for the Application Gateway"
  type        = string
}

variable "app_service_fqdn" {
  description = "FQDN of the App Service to be used in the Application Gateway configuration"
  type        = string
}

In the main.tf file of the Application Gateway module, the azurerm_application_gateway resource block specifies the gateway's name, resource group, and location. It also configures the gateway with Azure Web Application Firewall (WAF) capabilities.

The Application Gateway is designed to route traffic efficiently. It connects to a specified VNet subnet, uses a designated public IP, and directs traffic to a backend App Service using the provided Fully Qualified Domain Name (FQDN). Various settings ensure the proper functioning of the gateway, including probe configuration, backend HTTP settings, and request routing rules.

In the accompanying variables.tf file, essential variables like the gateway name, resource group name, location, VNet subnet ID, Public IP ID, and App Service FQDN are declared. These variables allow us to easily customize the Application Gateway configuration based on our specific needs, promoting integration with Terragrunt files.

The next step is to set up Traffic Manager to route traffic to the gateways.

Traffic Manager

Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to distribute traffic to your public facing applications across the global Azure regions. Traffic Manager also provides your public endpoints with high availability and quick responsiveness.

We will create a module for Traffic Manager as well.

Create a folder named traffic_manager and add the following files

#main.tf
resource "azurerm_traffic_manager_profile" "traffic_profile" {
  name                   = var.profile_name
  resource_group_name    = var.rg_name
  traffic_routing_method = "Priority"

  dns_config {
    relative_name = var.name
    ttl           = var.ttl
  }

  monitor_config {
    protocol                     = var.monitor_protocol
    port                         = var.monitor_port
    path                         = var.monitor_path
    interval_in_seconds          = var.monitor_interval
    timeout_in_seconds           = var.monitor_timeout
    tolerated_number_of_failures = var.monitor_failures
  }
}

resource "azurerm_traffic_manager_azure_endpoint" "primary_endpoint" {
  name               = "primary-endpoint"
  profile_id         = azurerm_traffic_manager_profile.traffic_profile.id
  priority           = 1
  weight             = 100
  target_resource_id = var.primary_target_resource_id
}

resource "azurerm_traffic_manager_azure_endpoint" "secondary_endpoint" {
  name               = "secondary-endpoint"
  profile_id         = azurerm_traffic_manager_profile.traffic_profile.id
  priority           = 2
  weight             = 100
  target_resource_id = var.secondary_target_resource_id
}
#variables.tf
variable "name" {
  description = "Relative name for DNS configuration"
  type        = string
}

variable "rg_name" {
  description = "Name of the resource group"
  type        = string
}

variable "profile_name" {
  description = "Name of the Traffic Manager profile"
  type        = string
}

variable "ttl" {
  description = "Time to live for DNS records"
  type        = number
}

variable "monitor_protocol" {
  description = "Protocol used for monitoring"
  type        = string
}

variable "monitor_port" {
  description = "Port used for monitoring"
  type        = number
}

variable "monitor_path" {
  description = "Path used for monitoring"
  type        = string
}

variable "monitor_interval" {
  description = "Interval in seconds for monitoring"
  type        = number
}

variable "monitor_timeout" {
  description = "Timeout in seconds for monitoring"
  type        = number
}

variable "monitor_failures" {
  description = "Number of tolerated failures for monitoring"
  type        = number
}

variable "primary_target_resource_id" {
  description = "ID of the primary target resource (e.g., App Service)"
  type        = string
}

variable "secondary_target_resource_id" {
  description = "ID of the secondary target resource (e.g., App Service)"
  type        = string
}

In the main.tf, the azurerm_traffic_manager_profile resource defines the main Traffic Manager profile. It includes configurations such as the profile name, resource group name, and traffic routing method (in this case, "Priority"). DNS configuration includes the relative name and time-to-live (TTL) for DNS records. The monitor_config block specifies settings for health monitoring, such as the protocol, port, path, interval, timeout, and tolerated failures.

Two azurerm_traffic_manager_azure_endpoint resources define the primary and secondary endpoints. Endpoints represent the Azure resources to which traffic is directed, with settings like priority, weight, and the target resource's ID. The target_resource_id is the Azure Resource Manager (ARM) ID of the target resource, such as an Application Gateway in our case.

The variables.tf file declares input variables with descriptions and types. Variables like name, rg_name, profile_name, and others parameterize the configuration, making it more flexible and reusable. We can customize these variables to our specific requirements in the Terragrunt files.

Terragrunt


To deploy our infrastructure to multiple environments (Dev, PreProd, Prod) we will use Terragrunt, because it simplifies the organization and management of Terraform code, enhancing clarity and maintainability by centralizing configuration logic while avoiding duplication across environments.

Terragrunt acts as a smart coordinator for your Terraform modules. Instead of manually configuring each environment in your Terraform code, you create terragrunt.hcl files to define which modules to use and which parameters to pass. This way, Terragrunt separates the logical structure (Terraform modules) from the specific configurations needed in different environments.

First, create a folder Terragrunt_files and create three folders within it, one for each environment (dev, preprod, prod).

For each environment folder:

  • We will put the folder of each resource that we will create within it, and create a terragrunt.hcl file inside each resource folder.

  • We will create a root terragrunt.hcl file, which allows us to manage global configurations for the environment and dynamically generate a backend.

Our folder structure should look like this

|--Terragrunt_files
    ├── dev
       ├── east_app_gateway
       │   └── terragrunt.hcl
       ├── east_rg
       │   └── terragrunt.hcl
       ├── east_vnet
       │   └── terragrunt.hcl
       ├── east_webapp
       │   └── terragrunt.hcl
       ├── traffic_manager_rg
       │   └── terragrunt.hcl
       ├── traffic_manager
       │   └── terragrunt.hcl
       ├── west_app_gateway
       │   └── terragrunt.hcl
       ├── west_rg
       │   └── terragrunt.hcl
       ├── west_vnet
       │   └── terragrunt.hcl
       ├── west_webapp
       │   └── terragrunt.hcl
       ├── terragrunt.hcl

    ├── preprod
       ├── east_app_gateway
       │   └── terragrunt.hcl
       ├── east_rg
       │   └── terragrunt.hcl
       ├── east_vnet
       │   └── terragrunt.hcl
       ├── east_webapp
       │   └── terragrunt.hcl
       ├── traffic_manager_rg
       │   └── terragrunt.hcl
       ├── traffic_manager
       │   └── terragrunt.hcl
       ├── west_app_gateway
       │   └── terragrunt.hcl
       ├── west_rg
       │   └── terragrunt.hcl
       ├── west_vnet
       │   └── terragrunt.hcl
       ├── west_webapp
       │   └── terragrunt.hcl
       ├── terragrunt.hcl

    ├── prod
        ├── east_app_gateway
        │   └── terragrunt.hcl
        ├── east_rg
        │   └── terragrunt.hcl
        ├── east_vnet
        │   └── terragrunt.hcl
        ├── east_webapp
        │   └── terragrunt.hcl
        ├── traffic_manager_rg
        │   └── terragrunt.hcl
        ├── traffic_manager
        │   └── terragrunt.hcl
        ├── west_app_gateway
        │   └── terragrunt.hcl
        ├── west_rg
        │   └── terragrunt.hcl
        ├── west_vnet
        │   └── terragrunt.hcl
        ├── west_webapp
        │   └── terragrunt.hcl
        ├── terragrunt.hcl

Let's take the terragrunt.hcl file of east_app_gateway as an example:

include {
  path = find_in_parent_folders()
}
terraform {
    source = "../../../modules/app_gateway"
}
dependency "east_network" {
    config_path = "../east_vnet"
    mock_outputs = {
      subnet_id    = "id_mock"
      public_ip_id = "pubid_mock"
    }
}
dependency "east_webapp" {
    config_path = "../east_webapp"
    mock_outputs = {
      webapp_name = "webapp_mock"
    }
}
dependency "east_rg" {
    config_path = "../east_rg"
    mock_outputs = {
      rg_name  = "east_rg_dev_mock"
      location = "East US"
    }
}
inputs = {
  name                 = "app-gateway-eastus-dev"
  rg_name              = dependency.east_rg.outputs.rg_name
  location             = dependency.east_rg.outputs.location
  vnet_subnet_id       = dependency.east_network.outputs.subnet_id
  public_ip_id         = dependency.east_network.outputs.public_ip_id
  app_service_fqdn     = dependency.east_webapp.outputs.webapp_name
}

The include block specifies the location of the root Terraform configuration file using find_in_parent_folders().

The terraform block defines the source location for the Terraform configuration. In this case, it points to the app_gateway module located at the specified relative path.

These dependency blocks specify dependencies on other Terraform configurations (modules)(web app, resource group, and network). In the context of Terragrunt, when running commands like terragrunt init or terragrunt apply, Terragrunt uses the information in dependency blocks to determine the correct order in which to apply changes to different modules. It ensures that dependencies are resolved before applying the changes, creating a modular and manageable infrastructure deployment process.

Mock outputs are provided for each dependency, simulating the outputs of the referenced modules during Terragrunt planning.

The inputs block provides input values for the variables of the referenced module (../../../modules/app_gateway). These inputs include a name, resource group name, location, subnet ID, public IP ID, and App Service FQDN.

With the same logic, we create the other Terragrunt configuration files.
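
Once the files are in place, you can exercise a single environment from the command line before wiring it into CI. These are essentially the same commands the Jenkins pipeline runs later (terragrunt and az cli must be installed and authenticated):

```
# From the repository root: initialize and preview the dev environment
cd Terragrunt_files/dev
terragrunt run-all init
terragrunt run-all plan

# Apply (or destroy) everything in dependency order, without prompts
terragrunt run-all apply --terragrunt-non-interactive
```

run-all walks every subfolder containing a terragrunt.hcl and applies them in the order implied by the dependency blocks.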

My GitHub Repo

The root terragrunt.hcl file will look like this:

# Generate the provider configuration
generate "provider" {
  path = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents = <<EOF
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~>3.0"
    }
  }
}

provider "azurerm" {
  features {}
}
EOF
}

# Backend storage configuration
generate "backend" {
  path = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = <<EOF
terraform {
  backend "azurerm" {
      resource_group_name  = "tfstate"
      storage_account_name = "tfstate24429"
      container_name       = "tfstate"
      key                  = "dev/${path_relative_to_include()}/terraform.tfstate"
  }
}
EOF
}

This file sets up the provider configuration for Azure (azurerm) and the backend configuration for storing the Terraform state in an Azure Storage Account. The backend configuration includes a directory structure, placing each state file under the dev directory.

The generate "provider" block produces the provider.tf file with Azure and random provider configurations, while the generate "backend" block generates the backend.tf file with the Azure Storage Account backend configuration. Terragrunt uses these generated files to manage the backend and provider settings dynamically based on the environment.

NB: we will create the storage account for the backend configuration in the Jenkins section don't worry :)


Jenkins

Now it's time for automation. We will use Jenkins to generate documentation with terraform-docs and commit the updated documentation to the main branch if changes are detected. Next, linting and security testing are run on the Terraform modules using tflint and tfsec. The pipeline then executes Terraform commands through Terragrunt for each environment, letting us choose between applying or destroying infrastructure changes. The Jenkins pipeline leverages tool installations, credentials, and parameters to orchestrate the entire process, providing a streamlined and automated approach to infrastructure deployment, documentation, and testing.


Tools to install on the machine running Jenkins:

-Install Terragrunt https://terragrunt.gruntwork.io/docs/getting-started/install/

-Install az cli https://learn.microsoft.com/en-us/cli/azure/install-azure-cli

-Install Terraform-docs https://terraform-docs.io/user-guide/installation/

Jenkins Configuration

Set up the Terraform installation:

Install the Terraform plugin.

Once the Terraform plugin is installed, go to Manage Jenkins → Global Tool Configuration → click on Terraform Installations → enable the Install automatically checkbox.

GitHub Credentials:

Now we'll set up the personal access token that Jenkins will use to interact with our repository. In your GitHub account, go to Settings > Developer Settings > Personal Access Tokens > Generate New Token.

Next, copy that token and go to Manage Jenkins > Manage Credentials > Add Credentials. Choose Username and Password and give it an ID.

Azure Credentials:
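
The pipeline below logs in with an Azure service principal stored under the credential ID Azure_credentials (an Azure Service Principal credential, provided by the Azure Credentials plugin). If you don't have a service principal yet, one way to create it is with the Azure CLI; the name here is a hypothetical placeholder, and you substitute your own subscription ID:

```
# Create a service principal with Contributor rights on the subscription.
# Replace <subscription-id> with your own subscription ID.
az ad sp create-for-rbac --name jenkins-terraform \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>
```

The command prints an appId, password, and tenant; add these in Manage Jenkins > Manage Credentials as an Azure Service Principal credential with the ID Azure_credentials.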

Store terraform.tfstate remotely


Configure remote state storage account

This storage account will be used by Terraform backend configurations. Backends define where Terraform's state snapshots are stored.

#!/bin/bash

RESOURCE_GROUP_NAME=tfstate
STORAGE_ACCOUNT_NAME=tfstate$RANDOM
CONTAINER_NAME=tfstate

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location eastus

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME

Run this command and copy the access key:

ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)
echo $ACCOUNT_KEY

Create Azure Storage Credentials:

We need to add Azure storage credentials for Terraform executions:

  1. Select the drop-down option Add Credentials from the global item;

  2. Fill in the form:

  • Kind: Secret text

  • Secret: <your copied access key>

  • ID: ARM_ACCESS_KEY

  • Description: Azure Storage Access Key

Pipeline

Create a new pipeline Job

Pipeline - Jenkinsfile

Our Jenkinsfile will look like this:

pipeline {
    parameters {
        choice(
            name: 'ACTION',
            choices: ['apply', 'destroy'],
            description: 'Select the action you want to perform'
        )
    }
    agent any
    tools {
        terraform "terraform"
    }
    environment {
        ARM_ACCESS_KEY = credentials('ARM_ACCESS_KEY')
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    checkout scmGit(branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[credentialsId: 'GitHubcredentials', url: 'https://github.com/Selmouni-Abdelilah/Terragrunt_Jenkins.git']])
                }
            }
        }
        stage('Azure login'){
            steps{
                withCredentials([azureServicePrincipal('Azure_credentials')]) {
                    sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
                }
            }
        }
        stage("Generate Documentation") {
            steps {
                script {
                    sh "terraform-docs markdown . --recursive --output-file README.md"
                    // drop the root-level README so only the per-module docs are committed
                    sh "rm -f README.md"
                    sh "git add ."
                    def changes = sh(script: 'git status --porcelain', returnStdout: true).trim()
                    if (changes) {
                        sh "git commit -m 'Add terraform documentation from Jenkins'"
                    }
                    def currentBranch = sh(script: 'git rev-parse --abbrev-ref HEAD', returnStdout: true).trim()
                    if (currentBranch != 'main') {
                            sh 'git checkout -B main'
                    }
                }
            }
        }
        stage("Push to Git Repository") {
            steps {
                withCredentials([gitUsernamePassword(credentialsId: 'GitHubcredentials', gitToolName: 'Default')]) {
                    sh "git push -u origin main"
                }
            }
        }
        stage('Tflint Testing') {
            steps {
                script {
                    sh 'curl -s https://raw.githubusercontent.com/terraform-linters/tflint/master/install_linux.sh | sudo bash'
                    dir('modules') {
                        sh 'tflint --init --recursive'
                        sh 'tflint --module --recursive --force'
                    }
                    }
                }
            }    
        }
        stage('Tfsec Testing') {
            steps {
                script {
                    sh 'curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | sudo bash'
                    dir('modules') {
                        // -s (--soft-fail): report findings but always exit 0
                        sh 'tfsec -s'
                    }
                }
            }    
        }
        stage('Dev Environment') {
            steps {
                script {
                    dir('Terragrunt_files/dev') {
                        sh 'terragrunt run-all init'
                        sh "terragrunt run-all ${params.ACTION} --terragrunt-non-interactive"
                    }
                }
            }
        }
        stage('PreProd Environment') {
            steps {
                script {
                    dir('Terragrunt_files/preprod') {
                        sh 'terragrunt run-all init'
                        sh "terragrunt run-all ${params.ACTION} --terragrunt-non-interactive"
                    }
                }
            }
        }
        stage('Prod Environment') {
            steps {
                script {
                    dir('Terragrunt_files/prod') {
                        sh 'terragrunt run-all init'
                        sh "terragrunt run-all ${params.ACTION} --terragrunt-non-interactive"
                    }
                }
            }
        }
    }
    post {
        // Clean after build
        always {
            cleanWs(cleanWhenNotBuilt: false,
                    deleteDirs: true,
                    disableDeferredWipeout: true,
                    notFailBuild: true,
                    patterns: [[pattern: '**/*', type: 'INCLUDE']]
            )
        }
    }
}
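
One detail worth noting before the stage-by-stage breakdown: the documentation stage commits only when git status --porcelain reports pending changes, which avoids failing on an empty commit. The same guard as a standalone sketch, run in a throwaway repository (paths and the commit message are illustrative):

```shell
#!/bin/sh
set -e
# Create a disposable repo to demonstrate the guard
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "ci@example.com"
git config user.name "ci"

echo "# generated docs" > README.md
git add .

# Commit only if the working tree actually changed
if [ -n "$(git status --porcelain)" ]; then
  git commit -q -m "Add terraform documentation from Jenkins"
  echo "committed"
else
  echo "no changes to commit"
fi
```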

Let's break down the key stages of the pipeline:

  1. Checkout:

    • The pipeline begins by checking out the specified branch (main) of the Git repository containing Terraform and Terragrunt configurations.
  2. Azure Login:

    • The Azure login stage uses the provided Azure Service Principal credentials to authenticate with Azure.
  3. Generate Documentation:

    • The terraform-docs tool is employed to generate Markdown documentation for Terraform modules.

    • The root-level README is then removed, and the per-module documentation is committed if changes are detected.

  4. Push to Git Repository:

    • This stage pushes the committed changes to the main branch of the Git repository.
  5. Tflint Testing:

    • The pipeline performs linting of Terraform code using tflint to catch potential errors and enforce best practices.

    • The linting process is applied recursively to all Terraform modules.

  6. Tfsec Testing:

    • Security testing is conducted using tfsec to identify potential security vulnerabilities or misconfigurations.

    • This stage focuses on scanning the Terraform modules.

  7. Dev, PreProd, and Prod Environments:

    • These stages are responsible for deploying or destroying infrastructure changes using Terragrunt in each respective environment.

    • The ACTION parameter allows users to choose between applying or destroying infrastructure changes.

  8. Post-Build Cleanup:

    • Finally, the post-build stage ensures workspace cleanliness by deleting all files and directories.
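
The repository's Terragrunt layout itself isn't reproduced in this article. As an illustration of how run-all picks up every module under an environment folder, here is a minimal sketch of two hypothetical files (all names and backend values are assumptions, not the repository's actual configuration):

```hcl
# Terragrunt_files/terragrunt.hcl -- shared root config (hypothetical)
remote_state {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    # One state file per environment/module, derived from the folder path
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}

# Terragrunt_files/dev/network/terragrunt.hcl -- one module instance (hypothetical)
include {
  path = find_in_parent_folders()
}

terraform {
  # Reuse the shared module; '//' marks the repo-internal subfolder
  source = "../../../modules//network"
}

inputs = {
  environment = "dev"
}
```

With a layout like this, terragrunt run-all init executed in Terragrunt_files/dev initializes each module folder beneath it against its own azurerm state key, which is how the DRY and per-environment-backend goals from the introduction are met.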

Pipeline Execution

Step 1: terraform apply

The Build with parameters option will not appear the first time. Hit Build, abort the run, then refresh the page and Build with parameters will show up.

Hit Build with parameters and choose apply as the action.

Results

If everything is OK, your build should succeed.

Terraform-docs

terraform-docs is a utility to generate documentation from Terraform modules in various output formats.

In this stage, terraform-docs generates Markdown documentation recursively for all modules in the current directory and its subdirectories.

Navigate to your repository and inspect the README.md files generated by terraform-docs through Jenkins for each module.
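
The generated READMEs follow terraform-docs' standard Markdown layout of per-section tables; an illustrative fragment (the provider, variable, and output names here are made up, not taken from the repository):

```markdown
## Providers

| Name | Version |
|------|---------|
| azurerm | >= 3.0 |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| location | Azure region for the resources | `string` | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| vnet_id | The ID of the virtual network |
```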

TFlint

TFLint is a popular open-source linter and static analysis tool designed explicitly for Terraform. It performs automated checks on Terraform configurations to identify potential issues, errors, and violations of best practices. TFLint helps maintain code quality, consistency, and reliability in Terraform projects.

To configure TFLint, add a .tflint.hcl file to each module and enable the rules relevant to that module.

Our .tflint.hcl looks like this (feel free to adjust the rules to your needs):

config {
  # Enables module inspection
  module = true
  force  = false
}

plugin "azurerm" {
    enabled = true
    version = "0.25.1"
    source  = "github.com/terraform-linters/tflint-ruleset-azurerm"
}

# Disallow deprecated (0.11-style) interpolation
rule "terraform_deprecated_interpolation" {
  enabled = true
}

# Disallow legacy dot index syntax.
rule "terraform_deprecated_index" {
  enabled = true
}

# Disallow variables, data sources, and locals that are declared but never used.
rule "terraform_unused_declarations" {
  enabled = true
}

# Disallow // comments in favor of #.
rule "terraform_comment_syntax" {
  enabled = false
}

# Disallow output declarations without description.
rule "terraform_documented_outputs" {
  enabled = true
}

# Disallow variable declarations without description.
rule "terraform_documented_variables" {
  enabled = true
}

# Disallow variable declarations without type.
rule "terraform_typed_variables" {
  enabled = true
}

# Disallow specifying a git or mercurial repository as a module source without pinning to a version.
rule "terraform_module_pinned_source" {
  enabled = true
}

# Enforce naming conventions
rule "terraform_naming_convention" {
  enabled = true

  # Require a specific naming structure
  variable {
    format = "snake_case"
  }

  locals {
    format = "snake_case"
  }

  output {
    format = "snake_case"
  }

  # Allow any format
  resource {
    format = "none"
  }

  module {
    format = "none"
  }

  data {
    format = "none"
  }
}

# Disallow invalid sizes for Linux virtual machines.
rule "azurerm_linux_virtual_machine_invalid_size" {
  enabled = true
}

# Disallow invalid Traffic Manager profile settings.
rule "azurerm_traffic_manager_profile_invalid_traffic_routing_method" {
  enabled = true
}
rule "azurerm_traffic_manager_profile_invalid_profile_status" {
  enabled = true
}

# Ensure that a module complies with the Terraform Standard Module Structure
rule "terraform_standard_module_structure" {
  enabled = true
}

# terraform.workspace should not be used with a "remote" backend with remote execution.
rule "terraform_workspace_remote" {
  enabled = true
}

In this stage of the pipeline, TFLint is first initialized with tflint --init --recursive to download the required plugins. The main linting command, tflint --module --recursive --force, then lints the Terraform files across the module structure. The --force flag makes TFLint exit with code 0 even when issues are found, so the pipeline can continue.

You can see the issues it found in the console output, for example missing descriptions for some outputs.

Tfsec

tfsec is a static analysis security scanner for your Terraform code. Designed to run locally and in your CI pipelines, developer-friendly output and fully documented checks mean detection and remediation can take place as quickly and efficiently as possible. tfsec takes a developer-first approach to scanning your Terraform templates; using static analysis and deep integration with the official HCL parser, it ensures that security issues can be detected before your infrastructure changes take effect.

In this stage, Jenkins installs tfsec by fetching the installation script from its GitHub repository and executing it. It then changes the working directory to 'modules' and runs tfsec with the -s flag, short for --soft-fail: tfsec still reports any findings, but exits with code 0 so the pipeline is not aborted. This helps surface potential security issues or misconfigurations in your Terraform code without blocking the build.
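
The -s (--soft-fail) behaviour (report findings but exit 0) can be emulated for any scanner with a small wrapper; a sketch, where `false` stands in for a scanner run that found issues:

```shell
#!/bin/sh
# Run a command, log a warning on failure, but never propagate the failure
run_soft() {
  if ! "$@"; then
    echo "scan reported issues, continuing anyway"
  fi
}

run_soft false   # a failing scan: warning is printed, exit status stays 0
run_soft true    # a clean scan: nothing is printed
```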

Results:

Fortunately, tfsec didn't find any vulnerabilities in the code, and the stage passed.

Provision Infrastructure

The stages (Dev Environment, PreProd Environment, Prod Environment) in the Jenkins pipeline are responsible for deploying the infrastructure across the different environments using Terragrunt.

Navigate to your Azure subscription and inspect the resources created.

At this point, everything is in order.

Step 2: terraform destroy

Click Build with parameters and choose the same environment, but this time with the destroy action.

Results

The output shows that the resources were destroyed successfully.

Go to your Azure portal and check if the resources have been deleted.


Thank you for reading! Feel free to explore the code used in this article on my GitHub repository.