Exploring Azure Architectures: Automation, Security, and Scalability
Introduction
In this blog, we're going to tackle the creation of a full-fledged cloud system step by step. Our focus will be on thorough testing to ensure each component functions optimally. Through this process, we'll become well-versed in cloud architectures, learning how to design modern, secure, and dependable cloud applications. Furthermore, we'll automate tasks such as infrastructure setup and application deployment using tools like Terraform and Azure Pipelines, making our system more efficient and scalable.
Here's how our architecture in Azure will look:
Step 1: Setting Up Infrastructure Provisioning with Terraform and Azure DevOps
Setting Up Terraform
In our infrastructure provisioning process on Azure, Terraform plays a pivotal role. By using Terraform, we ensure a robust, safe, and efficient method for building, modifying, and versioning our infrastructure. Here's how we configure Terraform for our Azure environment:
- Terraform configuration files: All Terraform configuration files are stored in our repository at this link. These files define the infrastructure components to be created. Customize the resource names in the variables.tf file as required for your setup.
- Backend configuration: In provider.tf, point the backend block to the storage account where the Terraform state file will reside (a sketch of this block follows the list). Additionally, in sqldb.tf, replace object_id with your Microsoft Entra account's object ID.
- Resource group: Create a resource group named azarch-resource-group, as we'll reference it in our Terraform configuration as a data resource.
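For reference, here's a minimal sketch of what the backend block in provider.tf and the resource group data source might look like. The backend values below match the ones used in the pipeline later in this step; treat this as an illustration rather than the repository's exact code, and point it at your own storage account.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }

  # Remote state backend; values taken from the pipeline's backend settings.
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfstate24429"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

# Existing resource group referenced as a data resource.
data "azurerm_resource_group" "azarch" {
  name = "azarch-resource-group"
}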
Setting up Azure DevOps
To integrate Terraform into our deployment pipeline, we configure Azure DevOps accordingly:
Azure DevOps Pipeline Configuration
Begin by creating a new pipeline within Azure DevOps and add the necessary steps to download and execute Terraform. You can locate the pipeline within the same repository.
trigger:
- main
- feature/*

pool:
  vmImage: ubuntu-latest

stages:
- stage: TerraformPlan
  jobs:
  - job: Plan
    steps:
    - task: TerraformInstaller@1
      inputs:
        terraformVersion: 'latest'
    - task: TerraformTaskV4@4
      inputs:
        provider: 'azurerm'
        command: 'init'
        commandOptions: '-upgrade'
        backendServiceArm: 'tfstate-svc-conn'
        backendAzureRmResourceGroupName: 'tfstate'
        backendAzureRmStorageAccountName: 'tfstate24429'
        backendAzureRmContainerName: 'tfstate'
        backendAzureRmKey: 'terraform.tfstate'
    - task: TerraformTaskV4@4
      inputs:
        provider: 'azurerm'
        command: 'plan'
        commandOptions: '-lock=false --var azarch-vm_username=$(Vm_Username) --var azarch-vm_passwd=$(Vm_Passwd) --var db_admin_login=$(Db_Username) --var db_admin_password=$(Db_Passwd) -out $(Build.ArtifactStagingDirectory)/tfplan'
        environmentServiceNameAzureRM: 'svc-conn-to-azure'
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'tfplan'
        publishLocation: 'Container'

- stage: TerraformApply
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  dependsOn: TerraformPlan
  jobs:
  - job: Apply
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'current'
        artifactName: 'tfplan'
        targetPath: '$(System.DefaultWorkingDirectory)'
    - task: TerraformInstaller@1
      inputs:
        terraformVersion: 'latest'
    - task: TerraformTaskV4@4
      inputs:
        provider: 'azurerm'
        command: 'init'
        commandOptions: '-upgrade'
        backendServiceArm: 'tfstate-svc-conn'
        backendAzureRmResourceGroupName: 'tfstate'
        backendAzureRmStorageAccountName: 'tfstate24429'
        backendAzureRmContainerName: 'tfstate'
        backendAzureRmKey: 'terraform.tfstate'
    - task: TerraformTaskV4@4
      inputs:
        provider: 'azurerm'
        command: 'apply'
        commandOptions: '-lock=false --auto-approve $(System.DefaultWorkingDirectory)/tfplan'
        environmentServiceNameAzureRM: 'svc-conn-to-azure'
The pipeline follows the typical Terraform workflow of planning changes before applying them. It also ensures that deployment only occurs after a successful plan and only on the main branch, maintaining a controlled deployment process.

Service Connection Setup
Before running the pipeline, ensure you have set up a service connection to access your Azure subscription:
Navigate to Project Settings in Azure DevOps, then to Pipelines > Service Connections > New service connection. Choose Azure Resource Manager > Service Principal (automatic), select your Azure subscription, name the connection ('svc-conn-to-azure'), and save it.
Similarly, create another service connection for the backend storage account where you want to store your terraform.tfstate and name it 'tfstate-svc-conn'.
Creating and Running the Pipeline
With the service connection in place, follow these steps to create and run the pipeline:
Create a new pipeline, select the GitHub/Azure Repos repository where the code is stored, and choose the "Existing Azure Pipelines YAML file" option.
Configure variables such as Vm_Username, Vm_Passwd (as a secret), Db_Username, and Db_Passwd (as a secret) within the pipeline settings, matching the $(...) references in the pipeline. Then run the pipeline to initiate the deployment process, using the Azure service connection for authentication and Terraform for resource deployment.
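For reference, the variables passed to terraform plan with --var in the pipeline would be declared in variables.tf roughly as follows; the actual declarations in the repository may differ.

variable "azarch-vm_username" {
  type        = string
  description = "Admin username for the catalog VM"
}

variable "azarch-vm_passwd" {
  type        = string
  description = "Admin password for the catalog VM"
  sensitive   = true
}

variable "db_admin_login" {
  type        = string
  description = "Administrator login for the Azure SQL server"
}

variable "db_admin_password" {
  type        = string
  description = "Administrator password for the Azure SQL server"
  sensitive   = true
}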
This setup ensures a structured and controlled deployment process for managing infrastructure on Azure using Terraform and Azure DevOps.
Before proceeding to the next steps, please make sure to securely save the following credentials as we will need them later on.
Connection strings for: Azure SQL, Azure Cache for Redis, the Event Grid storage account, and Cosmos DB.
Step 2: Setting Up the Shopping Cart App on AKS
In this phase, we'll deploy the shopping cart application to the AKS (Azure Kubernetes Service) cluster created previously with Terraform, ensuring secure handling of sensitive credentials and of the connection to the SQL database.
Create Secrets for Sensitive Credentials
To securely manage sensitive information used by our application, we'll create Kubernetes secrets objects for storing connection strings:
kubectl create secret generic menudb --from-literal=connection-string="<sqldb connection string>"
kubectl create secret generic storageconnectionstring --from-literal=connection-string="<eventgridstorageacc-connection-string>"
kubectl create secret generic redis --from-literal=connection-string="<Redis Connection String>"
Private Endpoint to SQL Database
If you completed the first step, Terraform already created a private endpoint for the SQL database and disabled public access, enhancing the security of our architecture.
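For illustration, the Terraform for such a setup might look roughly like the sketch below; the server, subnet, and resource names here are assumptions, not necessarily those used in the repository.

# Disable public access on the SQL server (sketch).
resource "azurerm_mssql_server" "sql" {
  name                          = "azarch-sql-server"            # assumed name
  resource_group_name           = data.azurerm_resource_group.azarch.name
  location                      = data.azurerm_resource_group.azarch.location
  version                       = "12.0"
  administrator_login           = var.db_admin_login
  administrator_login_password  = var.db_admin_password
  public_network_access_enabled = false
}

# Private endpoint so the database is reachable only from within the VNet.
resource "azurerm_private_endpoint" "sql_pe" {
  name                = "azarch-sql-private-endpoint"
  resource_group_name = data.azurerm_resource_group.azarch.name
  location            = data.azurerm_resource_group.azarch.location
  subnet_id           = azurerm_subnet.private_endpoints.id      # assumed subnet

  private_service_connection {
    name                           = "sql-privatelink"
    private_connection_resource_id = azurerm_mssql_server.sql.id
    subresource_names              = ["sqlServer"]
    is_manual_connection           = false
  }
}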
Clone Application Repository
Find the code for the shopping cart, along with the Kubernetes manifest, at this link. In the manifest.yaml file, you'll find an environment variable mapping block. This block ensures that the application fetches sensitive information from the Kubernetes secrets:
env:
- name: MenusDB
valueFrom:
secretKeyRef:
name: menudb
key: connection-string
- name: StorageConnectionString
valueFrom:
secretKeyRef:
name: storageconnectionstring
key: connection-string
- name: Redis
valueFrom:
secretKeyRef:
name: redis
key: connection-string
Setup Azure DevOps
To enable continuous integration and continuous deployment (CI/CD), we'll configure Azure DevOps with the necessary service connections:
Create Service Connections: In Azure DevOps, navigate to Project Settings > Pipelines > Service Connections > New service connection. Create two service connections:
Docker registry > Service Principal (automatic) for ACR (Azure Container Registry) named 'azarchcontainerregistry'.
Kubernetes > Azure Subscription for AKS (Azure Kubernetes Service) named 'kubernetesServiceConnection'.
Azure Pipelines Configuration
Here's the azure-pipelines.yaml file for the pipeline:
trigger:
- main
- feature/*
pool:
vmImage: ubuntu-latest
steps:
- task: Docker@2
inputs:
containerRegistry: 'azarchcontainerregistry'
repository: 'cart'
command: 'buildAndPush'
Dockerfile: '**/Dockerfile'
tags: 'latest'
- task: KubernetesManifest@1
inputs:
action: 'deploy'
connectionType: 'kubernetesServiceConnection'
kubernetesServiceConnection: 'azarch-aks1-default'
manifests: 'manifest.yaml'
Creating and Running the Pipeline
Now, with all service connections set up, create a new pipeline in Azure DevOps using this configuration file. This pipeline will handle building the Docker image, pushing it to ACR, and deploying it to AKS automatically.
The shopping cart app is ready to upload orders to the storage account for upcoming processing.
Step 3: Setting Up Order Processing Using an Event-Driven Approach
In this section, we'll discuss setting up order processing using an event-driven approach. This method allows for a more responsive and scalable system by triggering actions based on events occurring within the system. Specifically, each time an order is placed in the shopping cart, it will be uploaded to the storage account. Event Grid will handle this event, sending blob information to Azure Functions, which will then fetch the order content and store it in Cosmos DB. Let's get started!
Integration between Storage Account, Event Grid, and Azure Functions
- If you've followed the first step and deployed the infrastructure through Terraform, you'll have the integration between the storage account and Event Grid topic already set up. Now, we'll focus on integrating Azure Functions with the Event Grid topic.
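As a rough illustration, the storage-to-Event-Grid piece could be expressed in Terraform like this; the topic and storage account names are assumptions.

# System topic that surfaces blob events from the orders storage account (sketch).
resource "azurerm_eventgrid_system_topic" "orders" {
  name                   = "azarch-orders-topic"
  resource_group_name    = data.azurerm_resource_group.azarch.name
  location               = data.azurerm_resource_group.azarch.location
  source_arm_resource_id = azurerm_storage_account.orders.id     # assumed storage account
  topic_type             = "Microsoft.Storage.StorageAccounts"
}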
Deploy Azure Function to Azure Functions App using Azure DevOps
First, find the code for the Azure Function in this [repository link]. The function is responsible for handling event triggers and fetching the content of the blob, then storing it in Cosmos DB.
Next, automate the deployment of the function to the Azure Functions app using Azure Pipelines. Take a look at the azure-pipelines.yaml file in the same repository for the deployment configuration. This pipeline ensures continuous building of the function and continuous deployment to the Azure Functions app.
Running the Pipeline
Before running the pipeline, set up environment variables as secrets that refer to the connection strings to be used (e.g., CosmosDBConnection and StorageConnectionString); this enhances the security of our architecture. After a successful run of the pipeline, navigate to your Azure Function app. You'll see that your function is ready to receive events.
Configure Event Grid Integration with Azure Function App
- Define an event subscription within Event Grid to capture order placement events. Configure the subscription to trigger your Azure Function whenever such an event occurs.
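If you'd rather manage this subscription as code instead of through the portal, a Terraform sketch might look like the following; the function name ProcessOrder and the referenced function app are assumptions.

# Event subscription that forwards BlobCreated events to the Azure Function (sketch).
resource "azurerm_eventgrid_system_topic_event_subscription" "order_created" {
  name                 = "order-created-to-function"
  system_topic         = azurerm_eventgrid_system_topic.orders.name
  resource_group_name  = data.azurerm_resource_group.azarch.name
  included_event_types = ["Microsoft.Storage.BlobCreated"]

  azure_function_endpoint {
    # Assumed function app and function name.
    function_id = "${azurerm_linux_function_app.orders.id}/functions/ProcessOrder"
  }
}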
Event-Driven Flow Testing:
Upload an order in JSON format to the storage account. Event Grid will detect the event and invoke your configured Azure Function.
Check the logs to ensure that the Azure Function retrieved the order details from the blob using the provided blob information and processed them accordingly.
Verify that the order is successfully stored in Cosmos DB.
Step 4: Setting Up the Inventory Management App Using App Service
In this step, we'll deploy the inventory application to the App Service created previously using Terraform and ensure secure handling of connections to the SQL database.
Grant Database Access to Microsoft Entra User
- Enable Microsoft Entra authentication on the SQL database by assigning a Microsoft Entra user as the admin of the server. Replace the object_id in the sqldb.tf file with your actual object ID in Microsoft Entra ID.
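For illustration, the relevant excerpt of the azurerm_mssql_server resource might look like this; the login name is a placeholder.

resource "azurerm_mssql_server" "sql" {
  # ... other arguments as in the Step 2 private endpoint sketch ...

  # Microsoft Entra admin for the server.
  azuread_administrator {
    login_username = "your-entra-user@yourtenant.onmicrosoft.com"  # placeholder
    object_id      = "<your-entra-object-id>"                      # replace with your object ID
  }
}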
Use Managed Identity Connectivity
- Configure your App Service app to connect to the SQL Database with a system-assigned managed identity. Managed identity was enabled on the App Service in the first step using Terraform.
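As a sketch, enabling the system-assigned identity in Terraform looks roughly like this; the app name matches the one deployed by the pipeline later in this step, while the plan and stack settings are assumptions.

resource "azurerm_linux_web_app" "inventory" {
  name                = "azarchinventorywebapp"
  resource_group_name = data.azurerm_resource_group.azarch.name
  location            = data.azurerm_resource_group.azarch.location
  service_plan_id     = azurerm_service_plan.inventory.id   # assumed App Service plan

  # System-assigned managed identity used to authenticate to SQL.
  identity {
    type = "SystemAssigned"
  }

  site_config {
    application_stack {
      dotnet_version = "6.0"
    }
  }
}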
Grant Permissions to Managed Identity
Connect to the database using sqlcmd or the query editor in the portal and run the following commands to grant permissions:

CREATE USER [<appservice-name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<appservice-name>];
ALTER ROLE db_datawriter ADD MEMBER [<appservice-name>];
GO
Connection String
Your connection string should look like:
Server=tcp:<sql-server-name>.database.windows.net,1433;Initial Catalog=azarch-sql-database;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication="Active Directory Managed Identity";
- Save this connection string as it will be used later.
Deploy to App Service through Azure Pipelines
- Clone the repository containing the app code and azure-pipelines.yaml from this link.
Azure Pipelines Configuration
The pipeline provided ensures automatic build and deployment of our app to the App Service. Create a pipeline in Azure DevOps using this configuration file and run it accordingly.
trigger:
- main
- feature/*

pool:
  vmImage: windows-latest

stages:
- stage: Build
  displayName: 'Build Stage'
  jobs:
  - job: Build
    displayName: 'Build Job'
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: 'sdk'
        version: '6.0.x'
    - task: DotNetCoreCLI@2
      displayName: 'Dotnet restore'
      inputs:
        command: 'restore'
        projects: '**/*.csproj'
        feedsToUse: 'config'
        nugetConfigPath: 'NuGet.config'
    - task: DotNetCoreCLI@2
      displayName: 'Dotnet build'
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(BuildConfiguration) --no-restore'
    - task: DotNetCoreCLI@2
      displayName: Publish
      inputs:
        command: 'publish'
        publishWebProjects: false
        projects: '**/*.csproj'
        arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-restore'
    - task: PublishPipelineArtifact@1
      displayName: Publish pipeline artifacts
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'package'
        publishLocation: 'pipeline'

- stage: Deploy
  displayName: 'Deploy Stage'
  dependsOn: Build
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: Deploy
    displayName: 'Deploy Job'
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'current'
        artifactName: 'package'
        targetPath: '$(System.DefaultWorkingDirectory)'
    - task: AzureRmWebAppDeployment@4
      inputs:
        ConnectionType: 'AzureRM'
        azureSubscription: 'svc-conn-to-azure'
        appType: 'webAppLinux'
        WebAppName: 'azarchinventorywebapp'
        packageForLinux: '$(System.DefaultWorkingDirectory)/*.zip'
        RuntimeStack: 'DOTNETCORE|6.0'
Create Connection String in App Service
Navigate to your app service environment variables and create a new connection string:
Name: MenusDB
Value: Connection string saved previously
Type: SQL
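If you'd rather manage this connection string in Terraform than through the portal, a sketch of the equivalent configuration on the web app would be:

resource "azurerm_linux_web_app" "inventory" {
  # ... other arguments as in the managed identity sketch above ...

  connection_string {
    name  = "MenusDB"
    type  = "SQLAzure"
    value = "Server=tcp:<sql-server-name>.database.windows.net,1433;Initial Catalog=azarch-sql-database;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication=\"Active Directory Managed Identity\";"
  }
}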
Our inventory management app is now deployed to the App Service with secure connectivity to the SQL database.
Step 5: Setting Up the Catalog App
In this step, we'll set up the catalog app inside the VM created using Terraform, so let's start.
1. Configure IIS web server
Connect to your VM using RDP and run this command in PowerShell as administrator. This will install and configure the IIS web server.
Install-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features
2. Storing Secrets in Azure Key Vault
Since we've already created an Azure Key Vault using Terraform, the next step is to create secrets to store our connection strings for SQL and Redis.
Navigate to the Secrets tab in Azure Key Vault and create the necessary secrets.
Allow the VM to use these secrets by assigning a role to its identity. Since we've enabled managed identity for the VM, navigate to Access control (IAM) > Add role assignment > choose Key Vault Secrets User > select Managed identity for members > choose the VM's identity > click Save (a Terraform sketch of this assignment follows this list).
Modify the appsettings.json file with your actual Azure Key Vault URL.
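For illustration, the secrets and the role assignment could also be expressed in Terraform roughly as follows; the secret names, variables, and resource references are assumptions.

# Connection strings stored as Key Vault secrets (sketch).
resource "azurerm_key_vault_secret" "sql_connection" {
  name         = "SqlConnectionString"
  value        = var.sql_connection_string
  key_vault_id = azurerm_key_vault.azarch.id        # assumed Key Vault resource
}

resource "azurerm_key_vault_secret" "redis_connection" {
  name         = "RedisConnectionString"
  value        = var.redis_connection_string
  key_vault_id = azurerm_key_vault.azarch.id
}

# Let the VM's system-assigned identity read the secrets.
resource "azurerm_role_assignment" "vm_kv_secrets_user" {
  scope                = azurerm_key_vault.azarch.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_windows_virtual_machine.catalog.identity[0].principal_id  # assumed VM resource
}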
3. Build Pipeline
You can find the code for the application in the following link: [link].
Using the existing azure-pipelines.yaml file, create a new pipeline in Azure DevOps and run it accordingly.
trigger:
- main
- feature/*
pool:
vmImage: windows-latest
stages:
- stage: Build
displayName: 'Build Stage'
jobs:
- job: Build
displayName: 'Build Job'
steps:
- task: UseDotNet@2
inputs:
packageType: 'sdk'
version: '6.0.x'
- task: DotNetCoreCLI@2
displayName: 'Dotnet restore'
inputs:
command: 'restore'
projects: '**/*.csproj'
feedsToUse: 'config'
nugetConfigPath: 'NuGet.config'
- task: DotNetCoreCLI@2
displayName: 'Dotnet build'
inputs:
command: 'build'
projects: '**/*.csproj'
arguments: '--configuration $(BuildConfiguration) --no-restore'
- task: DotNetCoreCLI@2
displayName: Publish
inputs:
command: 'publish'
publishWebProjects: false
projects: '**/*.csproj'
arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory) --no-restore'
- task: PublishPipelineArtifact@1
displayName: Publish pipeline artifacts
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)'
artifact: 'vmpackage'
publishLocation: 'pipeline'
This pipeline will build and publish the code to be used in the release pipeline.
4. Release pipeline
To create the release pipeline for the project, follow the Microsoft Learn guide on creating release pipelines.
For the artifact, use the 'vmpackage' artifact published by the build pipeline.
Step 6: Setting Up the Application Gateway
In this final step, we'll configure the Application Gateway. This step ensures secure access to our Catalog VM and Inventory App Service, even though we've disabled public access to them. The Application Gateway serves as a Layer 7 load balancer, providing a single point of access without exposing these resources to the public. It's important to note that this secure configuration was already set up using Terraform in the first step of this blog. We utilized service endpoints for the App Service and VNet peering for the VM, ensuring a secure and controlled access mechanism.
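To make that more concrete, here's a heavily condensed Terraform sketch of the kind of Application Gateway configuration involved. The names, SKU, backend targets, and the single routing rule are assumptions for illustration; the repository's actual configuration will differ and wires up both backends.

resource "azurerm_application_gateway" "azarch" {
  name                = "azarch-appgateway"
  resource_group_name = data.azurerm_resource_group.azarch.name
  location            = data.azurerm_resource_group.azarch.location

  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 1
  }

  gateway_ip_configuration {
    name      = "gateway-ip-config"
    subnet_id = azurerm_subnet.appgw.id                 # assumed dedicated subnet
  }

  frontend_port {
    name = "http"
    port = 80
  }

  frontend_ip_configuration {
    name                 = "public-frontend"
    public_ip_address_id = azurerm_public_ip.appgw.id   # assumed public IP
  }

  # App Service reached over its public FQDN, restricted via service endpoint.
  backend_address_pool {
    name  = "inventory-pool"
    fqdns = ["azarchinventorywebapp.azurewebsites.net"]
  }

  # Catalog VM reached over its private IP through VNet peering (assumed address).
  backend_address_pool {
    name         = "catalog-pool"
    ip_addresses = ["10.1.0.4"]
  }

  backend_http_settings {
    name                                = "inventory-https"
    port                                = 443
    protocol                            = "Https"
    cookie_based_affinity               = "Disabled"
    pick_host_name_from_backend_address = true
    request_timeout                     = 30
  }

  http_listener {
    name                           = "public-listener"
    frontend_ip_configuration_name = "public-frontend"
    frontend_port_name             = "http"
    protocol                       = "Http"
  }

  request_routing_rule {
    name                       = "route-to-inventory"
    priority                   = 100
    rule_type                  = "Basic"
    http_listener_name         = "public-listener"
    backend_address_pool_name  = "inventory-pool"
    backend_http_settings_name = "inventory-https"
  }
}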
Conclusion
In this blog post, we emphasized the importance of automation, security, and scalability in modern cloud infrastructure. By using Azure services, best practices, and automation tools like Terraform and Azure DevOps, organizations can build robust, secure, and scalable infrastructure to host their applications effectively in the cloud.