Azure and Terraform, Round Two

I recently blogged about using Terraform to manage resources in Azure. To be honest, that implementation was okay, but it had plenty of room for improvement. This post covers how I've since refined the structure and usage of Terraform within my projects.

Project Structure

On any given project that has Terraform resources, my folder structure looks like this:

project
│   .gitignore
│   azure-pipelines.yml
│   create-storage.sh  
│
└───terraform
    │   data.tf
    │   locals.tf
    │   main.tf
    │   provider.tf
    │   variables.tf
    │   versions.tf
    │
    └───environments
        ├───development
        │       terraform.tfvars
        ├───staging
        │       terraform.tfvars
        └───production
                terraform.tfvars

.gitignore

Pretty standard .gitignore file here. I use JetBrains IDEs, so I pull in the IntelliJ-standard entries, plus a few more. Go here for the exact .gitignore I use.
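
For reference, the Terraform-specific entries are the usual suspects, something like:

# Local Terraform plugin caches and state should never be committed
.terraform/
*.tfstate
*.tfstate.backup
crash.log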

azure-pipelines.yml

As my resources are in Azure, it makes sense to use Azure DevOps for build and deploy pipelines. The build pipeline is explicitly defined with Azure Pipelines' YAML schema. The release pipeline, unfortunately, can currently only be defined within the web UI of Azure Pipelines (it's really just a terraform apply at the end of the day, anyway).

Generally speaking, the Terraform bits in my azure-pipelines.yml are the same from project to project. Note that I've truncated the file to include only the development environment; the other environments are basically the same, just with different variable values.

variables:
  AZURE_SUBSCRIPTION: 'xxx'
  BASE_ENVIRONMENT_PATH: 'terraform/environments/$(ENVIRONMENT_NAME)'
  CONTAINER_NAME: $(ENVIRONMENT_PREFIX)terraform
  KEYVAULT_NAME: 'product-terraform-kv'
  KEYVAULT_SECRET_NAME: $(ENVIRONMENT_PREFIX)-storage-account-key
  LOCATION: 'eastus'
  STORAGE_ACCOUNT_NAME: 'productterraform'
  TERRAFORM_PATH: 'terraform'
  TERRAFORM_STATE: 'product_infrastructure.tfstate'
  TERRAFORM_VERSION: '0.12.3'
  TF_IN_AUTOMATION: 'true'

stages:
- stage: Setup
  jobs:
  - job: SetupDevelopmentStorage
    variables:
      ENVIRONMENT_PREFIX: 'd'
      ENVIRONMENT_NAME: 'development'
    displayName: 'Setup Development Storage'
    steps:
    - task: AzureCLI@1
      displayName: 'Run Setup Script'
      inputs:
        azureSubscription: $(AZURE_SUBSCRIPTION)
        scriptPath: './create-storage.sh'

- stage: Test
  dependsOn: Setup
  jobs:
  - job: TestDevelopmentTerraform
    variables:
      ENVIRONMENT_PREFIX: 'd'
      ENVIRONMENT_NAME: 'development'
    displayName: 'Test Development Terraform'
    steps:
    - task: AzureKeyVault@1
      displayName: 'Azure Key Vault: $(KEYVAULT_NAME)'
      inputs:
        azureSubscription: '$(AZURE_SUBSCRIPTION)'
        KeyVaultName: '$(KEYVAULT_NAME)'
        SecretsFilter: '$(KEYVAULT_SECRET_NAME)'

    - task: charleszipp.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-installer.TerraformInstaller@0
      displayName: 'Use Terraform $(TERRAFORM_VERSION)'
      inputs:
        terraformVersion: $(TERRAFORM_VERSION)

    - task: charleszipp.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-cli.TerraformCLI@0
      displayName: 'terraform init'
      inputs:
        command: init
        workingDirectory: '$(BASE_ENVIRONMENT_PATH)'
        commandOptions: '-backend-config="access_key=$(d-storage-account-key)" -backend-config="storage_account_name=$(STORAGE_ACCOUNT_NAME)" -backend-config="container_name=$(ENVIRONMENT_PREFIX)terraform" -backend-config="key=$(TERRAFORM_STATE)"'

    - task: charleszipp.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-cli.TerraformCLI@0
      displayName: 'terraform validate'
      inputs:
        command: validate
        workingDirectory: '$(BASE_ENVIRONMENT_PATH)'
        commandOptions: '-var-file="./environments/$(ENVIRONMENT_NAME)/terraform.tfvars"'

- stage: Package
  dependsOn: Test
  jobs:
  - job: PackageTerraform
    displayName: 'Package Terraform'
    steps:
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Terraform Artifacts'
      inputs:
        pathToPublish: '$(TERRAFORM_PATH)'
        artifactName: tf

That's a lot of configuration, so let me break it down. The pipeline is split into three separate stages: Setup, Test, and Package.

Setup Stage

The Setup stage solves what I call "The Chicken and Egg Problem": Terraform needs Azure resources in which to store its state, but we cannot create those resources via Terraform because it doesn't know where to store the state yet. Instead of relying on Terraform to create them, the Setup stage sets a few environment variables and then calls out to a shell script kept in source control: create-storage.sh. The contents of this script are below.

#!/bin/sh

RESOURCE_GROUP_NAME=product-terraform-rg
STORAGE_ACCOUNT_NAME=productterraform
KEYVAULT_NAME=product-terraform-kv

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location ${LOCATION}

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

# Create Key Vault
az keyvault create --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP_NAME --location ${LOCATION}

# Store account key in secret
az keyvault secret set --name ${KEYVAULT_SECRET_NAME} --vault-name $KEYVAULT_NAME --value $ACCOUNT_KEY

# Create blob container
az storage container create --name ${CONTAINER_NAME} --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

The script itself is pretty straightforward. It ensures that a standard resource group exists for the given product. Within that resource group, it creates a storage account, a key vault, a key vault secret, and a blob container. The script pulls the storage account's key via the Azure CLI and stores it in the key vault secret. This key will be used in future terraform init calls. The blob container will hold the Terraform state files created later in the process.

If the application being deployed to Azure requires a database, I have a slightly altered version of the script that generates a random database password and stores it within the same key vault, in a separate secret. That version can be seen below.

#!/bin/bash

RESOURCE_GROUP_NAME=product-terraform-rg
STORAGE_ACCOUNT_NAME=productterraform
KEYVAULT_NAME=product-terraform-kv

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location $LOCATION > /dev/null

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob > /dev/null

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)

# Create Key Vault
az keyvault create --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP_NAME --location $LOCATION > /dev/null

# Store account key in secret
az keyvault secret set --name $KEYVAULT_SECRET_NAME --vault-name $KEYVAULT_NAME --value $ACCOUNT_KEY > /dev/null

# Check whether the database password secret already exists (capture the CLI error output if not)
DB_SECRET_INFO=$(az keyvault secret show --name $DB_PASSWORD_SECRET_NAME --vault-name $KEYVAULT_NAME 2>&1)

# Create the database password if it doesn't exist
# (the quoted pattern makes this a literal substring match on the az error output)
if [[ $DB_SECRET_INFO =~ "(SecretNotFound)" ]]; then
  NEW_UUID=$(openssl rand -base64 24)
  az keyvault secret set --name $DB_PASSWORD_SECRET_NAME --vault-name $KEYVAULT_NAME --value $NEW_UUID > /dev/null
fi

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY > /dev/null

As you can see, this is mostly the same script, but with a small random-password generator for when the database password hasn't been created yet. There are a number of ways to generate a random string, but openssl rand -base64 24 was the most straightforward (and it worked on the Azure Linux worker machines).

Test Stage

The Test stage installs a specific version of Terraform, runs terraform init using the storage account key retrieved from the previously created key vault, and then runs terraform validate.

You'll notice that the terraform init step uses the $(d-storage-account-key) variable. The Azure Key Vault task that runs before it pulls the secret's value out of the key vault and into that variable. Unfortunately, I haven't discovered a way to double-reference a variable, so I have to keep the hard-coded reference. I'd much rather have something like $($(KEYVAULT_SECRET_NAME)), but that doesn't seem to be possible currently.

The terraform validate step’s details are important: it points directly to the environment-specific terraform.tfvars. This is how I accomplish multi-environment releases with a single codebase.

Package Stage

The Package stage is the simplest in the pipeline: it just runs an out-of-the-box PublishBuildArtifacts task, pointed at the terraform directory, dropping its contents into the tf artifact. This artifact is used later in the release pipeline.

Terraform Artifacts

I’ve broken down the Terraform artifacts into a number of files for ease of use.

data.tf

For infrastructure-only repositories, this file is very straightforward:

data "azurerm_subscription" "this" {
}

However, if the given repository is building off another repository (e.g., an application-specific repository building on top of an infrastructure-specific repository), there will obviously be other data blocks here. A sample one can be seen below.

data "azurerm_resource_group" "this" {
  name = local.resource_group_name
}

data "azurerm_app_service_plan" "this" {
  name                = local.app_service_plan_name
  resource_group_name = data.azurerm_resource_group.this.name
}

data "azurerm_storage_account" "this" {
  name                = local.storage_account_name
  resource_group_name = data.azurerm_resource_group.this.name
}

data "azurerm_sql_server" "this" {
  name                = local.sql_server_name
  resource_group_name = data.azurerm_resource_group.this.name
}

locals.tf

I typically use the locals.tf file to define aggregated resource names that I’m going to be using in a number of places.

locals {
  resource_group_name   = "${var.environment_prefix}-${var.application_name}-rg"
  app_service_plan_name = "${var.environment_prefix}-${var.application_name}-plan"
  scope                 = "/subscriptions/${var.subscription_id}/resourceGroups/${azurerm_resource_group.this.name}"
}

main.tf

My main.tf is where I create the Azure resources themselves. There's very little interesting or unique about this file, except that I'm generally not creating my own modules to group resources; I simply haven't had a good reason to at this point.

It's worth pointing out that each repository has exactly one main.tf. This is important: it means every environment is built from the same set of Azure resources. Everything is variable-driven, so the resources can be configured differently per environment, but each environment ends up with the same resources in total.
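
To give a feel for it, here is a minimal sketch of what a main.tf might contain, wired up to the variables and locals shown in this post. The actual resources vary per product, so treat this as illustrative rather than my exact file:

resource "azurerm_resource_group" "this" {
  name     = local.resource_group_name
  location = var.location
  tags     = var.tags
}

resource "azurerm_app_service_plan" "this" {
  name                = local.app_service_plan_name
  location            = azurerm_resource_group.this.location
  resource_group_name = azurerm_resource_group.this.name

  sku {
    tier = var.app_service_plan_sku_tier
    size = var.app_service_plan_sku_size
  }

  tags = var.tags
}

# One role assignment per entry in the role_assignments variable
resource "azurerm_role_assignment" "this" {
  count                = length(var.role_assignments)
  scope                = local.scope
  principal_id         = var.role_assignments[count.index].object_id
  role_definition_name = var.role_assignments[count.index].role_definition
}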

provider.tf

Nothing crazy here.

terraform {
  backend "azurerm" {
  }
}

provider "azurerm" {
  version = "~>1.30.1"
}

I try to make it a point to upgrade my provider and Terraform versions as often as possible, but I'm typically working across 10-15 repositories at a time, so once I get all the repositories onto a single version, I'll stick with that version for a while.

variables.tf

Again, nothing special here. Fancy new Terraform v0.12 usage in the role_assignments variable below!

variable "environment_prefix" {
}

variable "application_name" {
}

variable "location" {
}

variable "subscription_id" {
}

variable "app_service_plan_sku_tier" {
}

variable "app_service_plan_sku_size" {
}

variable "tags" {
  type = map(string)
}

variable "role_assignments" {
  type = list(object({ username = string, object_id = string, role_definition = string }))
}

versions.tf

I like to explicitly define what version of Terraform to support for a given repository. This is where that’s done.

terraform {
  required_version = ">= 0.12"
}

terraform.tfvars

Each environment has its own terraform.tfvars file. This is where values for the variables defined in variables.tf above are passed in, as long as they're free to be exposed publicly. Secret values are instead stored in a key vault and pulled in during the release pipeline, similar to the storage account key above.

environment_prefix        = "d"
application_name          = "product"
location                  = "eastus"
subscription_id           = "xxx"
app_service_plan_sku_tier = "Shared"
app_service_plan_sku_size = "D1"

tags = {
  "terraform"   = "true",
  "environment" = "Development",
  "application" = "Product"
}

role_assignments = [
  {
    username        = "xxx"
    object_id       = "xxx",
    role_definition = "Owner"
  }
]

Release Pipeline

As stated previously, Azure DevOps has a limitation in that Release Pipelines can only be edited through the in-browser UI. This sucks, but I've come to live with it.

The Release Pipeline for any given project generally looks the same:

  1. Pull secrets from Azure Key Vault
  2. Install Terraform
  3. Run terraform init
  4. Run terraform apply

Then, if the pipeline requires it, and there’s an application to deploy:

  1. Set Terraform outputs to Azure Pipeline variables
  2. Deploy application to Azure App Services
  3. Set values from pipeline variables as necessary

This section is intentionally light on details, as there's not really much to talk about, but the gist of the Terraform half is sketched below.
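
Condensed into plain CLI calls, it looks roughly like the following for development. Note that STORAGE_ACCOUNT_KEY, DB_PASSWORD, and the app_service_name output are placeholders for illustration, not names from the actual pipeline:

# Wire the azurerm backend up to the storage created by create-storage.sh
terraform init \
  -backend-config="access_key=$STORAGE_ACCOUNT_KEY" \
  -backend-config="storage_account_name=productterraform" \
  -backend-config="container_name=dterraform" \
  -backend-config="key=product_infrastructure.tfstate"

# Apply with the environment's tfvars; secret values come in as -var flags
terraform apply \
  -var-file="environments/development/terraform.tfvars" \
  -var "database_password=$DB_PASSWORD" \
  -auto-approve

# Promote a Terraform output to an Azure Pipelines variable for later tasks
echo "##vso[task.setvariable variable=APP_SERVICE_NAME]$(terraform output app_service_name)"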

tl;dr

All in all, my approach to Terraform on Azure has changed pretty heavily in the past seven-ish months. Instead of defining resources separately for each environment, I've consolidated resource creation into a single main.tf and set the variables in each environment's directory instead. Again, this is explicitly because I don't have a use case that requires different resources per environment.

In addition to the project structure changes, the "Chicken and Egg Problem" is now solved within the Azure Pipeline itself. Instead of having to manually create resources before running Terraform for the first time, I can rely on the pipeline to manage the backing state storage. This has been my biggest improvement to how I run pipelines in Azure DevOps.

As always, if there’s something you want to chat about more directly, hit me up on Twitter, as that’s where I’m most active.