Cloud

Access Secrets in AKV using Managed identities for AKS

Purpose of this post

The purpose of this post is to show you how to access secrets stored in Azure Key Vault from an AKS cluster.

In one of my previous blog posts, I showed how to access keys from Key Vault in Azure DevOps, where I configured the release pipeline to fetch the secret from Key Vault and substitute it at runtime.

There are many other ways of accessing keys in Key Vault from the Azure resources we deploy. Using managed identities is one of the most secure and straightforward ways to keep our app secure.

What are Managed Identities?

There are plenty of posts that help you understand what a managed identity is. If you go to the Microsoft docs, this is the definition of managed identities you will find.

“Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like Azure Key Vault where developers can store credentials in a secure manner or to access storage accounts.”
Definition credits: Microsoft docs

In simple words, any Azure resource that supports Azure AD authentication can have a managed identity. Once we enable a managed identity for an Azure resource, a service principal is created in Azure Active Directory on behalf of that resource.
With this, you can grant the resource that has the managed identity enabled access to the target Azure resource you want it to reach.

For example, if you want a webapp to access your key vault, all you need to do is enable a managed identity on the webapp and grant that identity access in the key vault's access policies.

Without managed identities, in the above scenario you would need to create a service principal and a client secret for your application (the webapp in the scenario above), and that service principal has to be granted permission on the target Azure resource (the key vault in the scenario above). You then need to configure your webapp to use the client ID and secret to make calls to Key Vault to fetch the secrets.

You would have to manage the client ID and secret yourself. If the service principal credentials are compromised, you need to rotate the secret and update the application code to consume the new one. This is not only less secure, but also tedious when client secrets are used in multiple places.

Managed identities to the rescue

With managed identities you no longer have to create a service principal for your app. When the feature is enabled on the Azure resource, it not only creates a service principal for you but also manages credential rotation by itself. You no longer need to keep the client ID and client secret of a service principal in your source code to access the target resource.

Note that this only removes the burden of maintaining service principal credentials in your code.
You still need the appropriate libraries and code to access the target resource. For example, if your app is going to access Key Vault and runs on a webapp with managed identity enabled, you no longer need to pass service principal credentials to call the Key Vault API endpoint. You can call the Key Vault API directly from your webapp, because its managed identity has been granted permission on the key vault.

With that, let's dive into a demo.

Here are the steps we are going to follow:

  1. Create an AKS cluster
  2. Enable managed identity to AKS cluster
  3. Create a key vault with a secret in it.
  4. Enable access to managed identity of AKS via access policies in key vault.
  5. Access the secret in the key vault from a Pod in AKS.

We are going to create 2 resources in this demo.

  1. AKS Cluster
  2. Azure Key Vault

In this demo, I created a sample AKS cluster using the following commands, after logging in to Azure from the Azure CLI:

az group create --name yourresourcegroupname --location uksouth 
az aks create -g yourresourcegroupname -n MyAKS --location uksouth --generate-ssh-keys

As we are discussing managed identities and not AKS itself, the above should suffice for creating an AKS cluster.

Once the AKS cluster is created, you should see a new resource group whose name starts with "MC_"; this holds the underlying resources your AKS cluster needs to function.

Once it's created, click on the VMSS that was created for your AKS cluster.

Alt Text

Once in the VMSS blade, click on Identity and notice that we have the option for a system-assigned managed identity.

Alt Text

Enable it by clicking "On".

Alt Text

Once it's enabled, you should see a new managed identity resource created in the "MC_" resource group.

Alt Text

Next, create an Azure Key Vault and a secret in it.

az group create --name "yourresourcegroupname" -l "locationofyourresorucegroup"
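
If you'd like to script the key vault and secret as well, the commands look roughly like this (the vault, group and secret names are just examples, and key vault names must be globally unique):

az keyvault create --name mydemokv123 --resource-group yourresourcegroupname --location uksouth
az keyvault secret set --vault-name mydemokv123 --name demosecret --value "my-secret-value"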

I have created below key vault and a secret.

Alt Text

As of now, we have created an AKS cluster, enabled system assigned managed identity and created a Key Vault with a new secret in it.

Next, we are going to grant AKS permission to access the key vault. To do so, go to the Access policies section of the key vault and click the "Add access policy" option.

Alt Text

Select "Secret Management" in the "Configure from template" option.
Note that I have selected "Secret Management" for the sake of this POC. In a real production environment, the Get and List permissions should be enough.

Alt Text

In the "Select principal" option, click "None selected", choose the AKS service principal's object ID, and click "Add".

Alt Text

You should see the access policy added to the list of access policies; click 'Save'.

Alt Text

Once done, connect to the AKS cluster using the command below:

az aks get-credentials --resource-group yourresourcegroupname --name youraksclustername --overwrite-existing

Once done, spin up an nginx pod using the commands below.

Alt Text
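
If you would rather copy a command than read it off a screenshot, a minimal equivalent is something like this (the pod name nginx and the public nginx image are just examples):

kubectl run nginx --image=nginx
kubectl get pods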

Use the following command to log in to the pod interactively:

kubectl exec -i -t nginx --container nginx -- /bin/bash

To access the secret in Azure Key Vault, we need to hit the identity endpoint to obtain an access token, as described in this document.

Alt Text
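
For reference, the token request from inside the pod goes to the Azure Instance Metadata Service endpoint and looks roughly like this (the api-version shown is the commonly documented one; check the linked doc for the current value):

# Request an access token for Key Vault from the instance metadata endpoint (run inside the pod)
curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H 'Metadata: true'
# The JSON response contains an "access_token" field; that value is used as the Bearer token in the next command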

Once the token is obtained, you can access the secret value in the key vault using the command below.

curl 'https://<your-key-vault-name>.vault.azure.net/secrets/<your-secret-name>?api-version=2016-10-01' -H "Authorization: Bearer <access-token>"

Alt Text

This way we can access the values in the key vault from AKS with managed identities enabled.

In this blog post, we have seen in a nutshell what managed identities are, how to enable a managed identity for an AKS cluster, and how to access the key vault from the AKS cluster with the help of an access policy granted to the cluster's managed identity.

System Assigned managed identity lives as long as the resource is in Azure. Once the resource is deleted, the corresponding managed identity and its service principal are also deleted from Azure AD.

We also have what's called a user-assigned identity, which exists even after a resource is deleted and can be assigned to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it, and you are responsible for cleaning it up after use.

Hope you enjoyed reading this blog post.

Thanks for reading!!

Cloud

Using Azure Keyvault for Azure Webapps with Azure DevOps

Purpose of this post

The purpose of this post is to show you how we can use Azure Key Vault to secure secrets of a webapp and call them from Azure DevOps using Variable groups. This is one of the ways to handle secrets for your deployments. One of the other ways is to use Managed Identities which is more secure. I’ll cover that in a different blog post.

What are secrets and why is secret management important?

Secrets management is the process of securely and efficiently managing the safe usage of credentials by authorized applications. In a way, secrets management can be seen as an enhanced version of password management. While the scope of managed credentials is larger, the goal is the same: to protect critical assets from unauthorized access.

For managing sensitive application configuration like DB connection strings, API keys and other application-related secrets, it is recommended to use Azure Key Vault or another secret management solution. Azure Key Vault is a cloud service for securely storing and accessing secrets like connection strings, account keys, or the passwords for PFX files (private keys). Azure Key Vault can be used with commonly used services like Azure Web Apps, Azure Kubernetes Service, Azure Virtual Machines and many other Azure services.

Data like connection strings, API tokens, client IDs and passwords is considered sensitive information, and handling it poorly may not only lead to security incidents but may also compromise your entire system.

Here are a couple of poorly handled secret management practices.

  • Maintaining secrets in source code repository in a settings or environment config file
  • Having same password/keys for all the environments
  • Secrets are shared across all the team members
  • Teams using service accounts to connect to the database or a server

Avoiding the above would be the first step for an effective secret management.

Using Azure KeyVault for App Services

In Azure DevOps, all sensitive data such as connection strings, secrets, API keys, and any other values you categorize as sensitive can be fetched directly from Azure Key Vault, instead of being configured in the pipeline.

Let’s take an example of configuring DB Connection string for an Azure WebApp using Azure KeyVault.

Let's create a Key Vault along with a secret in it. Notice that the value is stored as a secret.

image

Similarly, let's create one more for the UAT DB connection. Once created, the key vault will show the secrets as in the below screenshot.

Alt Text
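
If you prefer the CLI over the portal, the equivalent commands look roughly like this (the vault and resource group names are placeholders, and the secret names are examples matching the variables referenced later in the pipeline):

az keyvault create --name my-devops-kv --resource-group my-resource-group --location uksouth
az keyvault secret set --vault-name my-devops-kv --name Dev-DBConnectionString --value "<dev db connection string>"
az keyvault secret set --vault-name my-devops-kv --name UAT-DBConnectionString --value "<uat db connection string>"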

Now in Azure DevOps, create a new variable group under the library section of pipelines.

Alt Text
  1. Give variable group a name
  2. Make sure to select the options “Allow access to all pipelines”, “Link secrets from Azure KeyVault”.
  3. Choose KeyVault name and authorize.
  4. Click on “Add” and select secrets for using them in the pipeline.

Below is the screenshot for reference.

Alt Text

Once done, in the pipeline, go to the Variables section, click 'Variable groups', and click 'Link variable group' to choose the variable group that was created.

In the stages, select the environments and click the Link option.

Alt Text

Now the next step is to configure the task for applying the DB connection string to the app service.

Add and configure the "Azure App Service Settings" task, and in the connection strings setting configure the JSON value for applying the DB connection string. The value here is $(Dev-DBConnectionString), which is stored in Azure Key Vault and picked up by the pipeline during execution.

Alt Text

Below are the execution logs for the pipeline. They show that the pipeline is able to fetch the value, and since it is a sensitive parameter, the DB connection string value is hidden in the logs.

Alt Text

In the webapp, under configuration->Database connection strings, we will be able to see the actual value.

Alt Text

Once we click on the ‘show values’ we can see the value of connection string.

Alt Text

For configuring other application settings that are non-sensitive, we can use the 'App settings' section of the "Azure App Service Settings" task. Similar to DB connection strings, we can use values from the key vault here as well.

Alt Text

During the execution, we can see the application key that is configured in the above setting.

Alt Text

The other way to manage secrets without Key Vault is to use a pipeline variable with the padlock option to lock the value, as shown in the below screenshots.

Alt Text
Alt Text

This way the secret is not visible to anyone, but if you ever need to read the value back, you have to handle that some other way. The suggested approach is to implement a solution like Azure Key Vault with the right access policies.

This brings us to the end of this blog post and we have seen how to use Azure Key Vault for Azure Web Apps with Azure DevOps and various options available to handle secrets in Azure DevOps using Variable groups and Variables.

Hope you enjoyed reading it. Happy Learning!!

Cloud

DevSecOps with Azure DevOps

Purpose of this post

The purpose of this blog post is to give you a high-level overview of what DevSecOps is, and to show how security can be integrated into your Azure DevOps pipeline with the help of readily available tasks for some commonly used security scanning tools in build and release pipelines.

Continuous Integration, Deployment and Delivery

If you are reading this article, I'm assuming that you have encountered the terms CI and CD by now and have a fair understanding of them.

Let’s recap on what we mean by each of these terms.

Continuous integration

Continuous integration is the practice of automatically building and testing the code whenever someone on the team commits to source control. This ensures that a particular set of unit tests runs and that the build compiles successfully, without any issues. If the build fails, the person committing the code is notified to fix the issues encountered. It is a software engineering practice where feedback on newly developed code is provided to developers immediately, through different types of tests.

Continuous Delivery Vs Deployment

Both continuous delivery and continuous deployment are interesting terms. Continuous delivery is the ability, once CI is done and the code is integrated into your source repository, to deploy that code automatically and seamlessly to the various stages of the pipeline and make sure it is production ready. However, in continuous delivery the code is not deployed to production automatically; a manual approval is required.

In continuous deployment, by contrast, every build or change that is integrated passes all the quality checks and deployment gates and gets deployed from the lower environments all the way to production automatically, without any human intervention.

CI & CD help you deliver code faster. Great!!!

But what about security?

DevSecOps is no longer a buzzword, or maybe it still is, but a lot of organizations are shifting gears towards including security in their software development lifecycle.

What is DevSecOps?

Security needs to shift from an afterthought to being evaluated at every step of the process. Securing applications is a continuous process that encompasses secure infrastructure, designing an architecture with layered security, continuous security validation, and monitoring for attacks.

In simple terms, the key focus of DevSecOps is making sure that the product you are developing is secure right from the time you start coding it, that security best practices are applied at every stage of your pipeline, and that this is an ongoing practice. In other words, security should be treated as a key element from the initial phase of the development cycle, rather than looking at security aspects only at the end, at product sign-off/deployment. This is also called the 'shift-left' strategy for security. It's about injecting security into your pipeline at each stage.

How can we achieve security at the various stages of the pipeline?

There are multiple stages involved in getting your code deployed to your servers or cloud-hosted solutions, right from developers writing the code through the pipeline to deployment.

Let's now look at a few of those stages and how we can integrate security into our pipelines at each of them.

Precommit–Hooks/IDE Plugins:

Precommit hooks/IDE Plugins are usually used to find and remediate issues quickly in the code even before a developer commits the code to the remote repository. Some of the common issues that can be found or eliminated are credentials exposed in code like SQL connection strings, AWS Secret keys, Azure storage account keys, API Keys, etc. When these are found in the early stage of the development cycle, it helps in preventing accidental damage.
There are multiple tools/plugins available that can be integrated into a developer's IDE. A developer can still get around these and commit code bypassing the pre-commit hooks; they are just the first line of defense and are not meant to be your full-fledged solution for identifying major security vulnerabilities. Some pre-commit hook tools include Git-Secret and Talisman. Some IDE plugins include .NET Security Guard, 42Crunch, etc. You can find more about other tools here:

https://owasp.org/www-community/Source_Code_Analysis_Tools
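
To make the idea concrete, here is a deliberately minimal sketch of a shell pre-commit hook that blocks commits containing likely credentials. The real tools listed above are far more thorough, and the patterns below are only illustrative examples:

#!/bin/sh
# .git/hooks/pre-commit - abort the commit if staged changes look like they contain secrets
if git diff --cached -U0 | grep -E -i 'password[[:space:]]*=|accountkey=|AKIA[0-9A-Z]{16}'; then
  echo "Possible secret detected in staged changes. Commit aborted."
  exit 1
fi
exit 0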

Secrets Management:

Using secrets management for your entire code base is one of the best practices. You may already have a secrets management tool such as Azure Key Vault, AWS Secrets Manager or HashiCorp Vault built into your pipeline for accessing secure credentials. The same secrets management should be used by your entire code base, not just the DevOps pipelines.

Software Composition Analysis:

As the name indicates, SCA is all about analyzing the software/code to determine the vulnerable open-source components and third-party libraries your code depends on.
In the majority of software development, only a small portion of the code is written in-house and the rest is imported from, or depends on, external libraries.

SCA focuses not only on determining vulnerable open-source components, but also shows you whether any outdated components are present in your repo and highlights issues with open-source licensing. WhiteSource Bolt is a lightweight tool that scans the code, integrates with Azure DevOps, and shares the vulnerabilities and their fixes in a report.

SAST (Static analysis security testing):

While SCA is focused on issues in the open-source/third-party components used in our code, it doesn't actually analyze the code we write ourselves.
That is what SAST does. Some common issues it can find are SQL injection, cross-site scripting, insecure library usage, etc. Using these tools requires collaboration with security personnel, as the initial reports they generate can be quite intimidating and you may encounter false positives. Checkmarx is one of the SAST tools.

DAST (Dynamic Analysis Security Testing):

The key difference between SAST and DAST is that while SAST can find vulnerabilities in the code and its libraries, it doesn't actually scan the deployed site itself. There are additional vulnerabilities that can't be determined until the application is deployed to one of the lower environments, like PreProd, and scanned by providing the target site URL. You can run DAST in a passive or an aggressive mode; while a passive test runs fairly quickly, aggressive tests take more time.

In general, a manual pen test/DAST takes longer and is done by a pentester from the security team. A manual test can't be done every time you check in or deploy code, as pen testing itself takes a fair amount of time.

I have worked on cloud migrations to Azure & AWS, and we usually raise a request for DAST & pen testing to the security team in the last leg of the migration lifecycle and get a sign-off from them after all the identified vulnerabilities are fixed. The security team usually takes a week, sometimes more, to complete the report; they run scripts, test data, and try to break the application to see whether the application we migrated is secure enough. Once the vulnerability report is out, we look at the critical & high issues reported and start working on fixing them. Most of the time, the delivery timelines got extended based on the amount of work we had to do to remediate the issues raised. With DAST testing using the ZAP Auto Scanner task in Azure DevOps, we can identify and fix the issues even before they become a bottleneck later.

And security doesn't just mean DAST/pen testing or code quality alone; the infrastructure you deploy should also be secure. With your environment deployed on Azure, you have Azure Policies/Initiatives that help you govern and put guard rails around your infrastructure by auditing & enforcing the rules you specify. You can enforce policies to make sure that your infrastructure meets your desired state. For example, using Azure Policies, you can enforce the use of managed Azure disks only, ensure storage accounts are not publicly accessible, ensure subnets in a particular VNet don't allow inbound internet traffic, ensure SQL Server firewalls don't allow internet traffic, etc. These are just a few of the things you can achieve using Azure Policies. We will take a look at how Azure Policies work in another blog post; enabling effective monitoring and alerting is another key aspect.

Azure DevOps supports integration of multiple open source and licensed tools for scanning your application as a part of your CI & CD process.

In this blog post, we’ll see how to achieve security in our Azure DevOps pipeline using following tools:

  1. WhiteSource Bolt extension for Scanning Vulnerability for SCA
  2. Sonarcloud for code quality testing
  3. OWASP ZAP Scanner for passive DAST testing


1.WhiteSource Bolt:

Integrating WhiteSource Bolt into your pipeline is pretty straightforward. In this blog post, I'm going to use a repo from one of my previous blog posts.

If you would like to follow along, feel free to clone/import it into your Azure DevOps repo; the steps are in that previous blog post too.

To install WhiteSource Bolt in your Azure DevOps pipeline, search for “WhiteSource Bolt” from Marketplace and install it. You’ll go through a series of steps to get it installed in your organization.

It’s all straight forward.

I’m jumping straight ahead to the build pipelines, in which we are going to integrate WhiteSource Bolt.

Log in to Azure DevOps, click Pipelines -> Build pipelines, and edit your build pipeline. You can import the complete project and pipelines from my Git repo; the steps are mentioned in my previous blog post, so please refer to the link above.

Once in the build pipeline, to add the tasks click on “+” icon and search for “WhiteSource bolt” in Marketplace.

Alt Text

Back in your build pipeline, click “+” and add “WhiteSource Bolt” task

Alt Text

Leave the default settings; by default, it scans your root directory.

Alt Text

Save and kick-off a new build.

Alt Text

In your build pipeline, you can see the logs of the task

Alt Text

In the build pipeline section, you will see a new tab for WhiteSource Bolt; you can click on it to view the results after the build pipeline completes the build.

Alt Text

You can also see the results in the build pipeline results and the report tab.

Alt Text

Notice that it not only shows the vulnerabilities, but also shows the fixes for each of them. Note that this has only scanned the third party libraries and open source components in the code but not the deployed code on the target infrastructure.

This can be achieved via DAST testing in release pipeline using ZAP Auto Scanner. We’ll see that as well in this blog post.

2.Sonarcloud

Now, let us see how to integrate SonarCloud into an Azure DevOps pipeline. Prior to adding the tasks in Azure DevOps, we need to import our Azure DevOps project into SonarCloud.

You need a SonarCloud account to integrate it into the pipeline. Log in to https://sonarcloud.io/ with your Azure DevOps account and choose your organization.

Alt Text

Select import projects from Azure.

Alt Text

Create a personal access token in Azure DevOps, copy the token and paste it somewhere, we need it later

Alt Text

Back in Sonarcloud site, provide the personal access token to import the projects, choose defaults to continue.

Alt Text

Generate a token in SonarCloud that will be used in Azure DevOps. Once logged in to SonarCloud, go to My Account > Security > Generate Tokens, then copy the token and paste it somewhere safe; we need it later.

Alt Text

Select the application project and click 'Administration' -> 'Update Key' to find the key for the project.

Alt Text

Now, back in Azure DevOps, we need to add the SonarCloud tasks. Go to the build pipeline and install the SonarCloud extension from the marketplace. Just like WhiteSource Bolt, search for SonarCloud and install it in your Azure DevOps organization.

Alt Text

Unlike WhiteSource bolt, we need to add three tasks for analyzing the code with SonarCloud.

Note that the project I’m trying to analyze is .NET Core, but the process of including the steps doesn’t vary much for any of the other technologies.

Add the ‘Prepare analysis on SonarCloud’ task before Build task.

Alt Text

Provide following details for the task:

  1. SonarCloud service endpoint: create a new service endpoint by clicking 'New', paste in the token generated earlier, give the service connection a name, then save and verify.
  2. Select the organization
  3. Add Project key generated from SonarCloud earlier.

Below screenshot shows how to add a new service connection after clicking on ‘new’ in step 1.

Alt Text

Add ‘Run code Analysis’ and ‘Publish Quality Gate Result’ tasks and save it and create a build.

Alt Text

Publish Quality Gate Result task is optional, but it can be added to publish the report link and quality gate result status to the pipeline.

Save and initiate a build. Once you run it, you should see the logs as below:

Alt Text

In the build summary, under extensions tab, you can see the link to view the results.

Alt Text

In the above screen, the quality gate status shows as None. The reason is that in SonarCloud, the initial quality gate status shows as "Not computed" for the project we imported.

Alt Text

To fix it, under the Administration tab, choose "Previous Version" and notice that it says 'changes will take effect after the next analysis'.

Alt Text

Now, the status in overview shows that “Next scan will generate a Quality Gate”

Alt Text

Back in Azure DevOps, trigger another build and wait for it to complete.

Alt Text

Now, under the Extensions tab of the build summary, it should show the result status along with a link to view the bugs, vulnerabilities, etc. Click on "Detailed SonarCloud Report" to view the results.

Alt Text
Alt Text
Alt Text

The beauty of SonarCloud is that you can integrate it into your branch policies for any new pull requests raised, and also use it as one of the deployment gates so that only quality code is deployed to your environments.

3. ZAP Auto Scanner:

One tool to consider for penetration testing is OWASP ZAP. OWASP is a worldwide not-for-profit organization dedicated to helping improve the quality of software. ZAP is a free penetration testing tool for beginners to professionals. ZAP includes an API and a weekly docker container image that can be integrated into your deployment process.

Definition credits: owasp.org

With the ZAP scanner you can run either a passive or an active test. During a passive test, the target site is not manipulated to expose additional vulnerabilities; these tests usually run pretty fast and are a good candidate for the CI process. An active scan, on the other hand, simulates many techniques that hackers commonly use to attack websites.
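
Outside of the Azure DevOps task, a quick way to try a passive scan locally is the baseline script shipped in the ZAP project's Docker image (image and script names are as documented by the ZAP project at the time of writing, and the target URL is a placeholder that must be a site you own):

docker run -t owasp/zap2docker-stable zap-baseline.py -t https://your-preprod-site.example.com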

In your release pipeline, click on add to add a new stage after PreProd stage.

Alt Text

Create a new stage with ‘Empty Job’.

Alt Text

Rename it to DAST Testing.

Alt Text

Click on 'Add tasks' and get the 'OWASP Zap Scanner' task from the marketplace.

Alt Text

Once done, add following tasks one after the other.

OWASP Zap Scanner:

  1. Leave Aggressive mode unchecked.
  2. Set the failure threshold to 1500 or greater. This is to make sure that the test doesn't fail if your site has a higher number of alerts; the default is 50.
  3. Root URL to begin crawling: provide the URL that the scan needs to run against.

Word of caution: don't provide any site URL in the above step that you don't own. Crawling sites that you don't own is considered hacking.

4. Port: the default is 80. If your site is running on a secure port, provide 443; otherwise you can leave it as port 80.

Alt Text

NUnit template task: this is mainly used to install a template that the ZAP scanner uses to produce a report.

Alt Text

The inline script used is available in the description of the tool in the Azure Marketplace.

https://marketplace.visualstudio.com/items?itemName=CSE-DevOps.zap-scanner

Generate NUnit type file task: this is used to write the test results in XML format to the owaspzap directory under the default working directory.

Alt Text

Publish Test Results task: this is mainly used to publish the test results from the previous task.

Alt Text

Make sure that you select the agent pool as ‘ubuntu-18.04’

Alt Text

Once everything is done, kick off a release. Make sure that the PreProd stage is deployed and the environment is ready before running the DAST Testing stage.

Alt Text

Once a release is complete, you should be able to see the results in the tests tab of the release you created.

Alt Text

With this, We have seen how to integrate security testing using WhiteSource Bolt, SonarCloud and OWASP ZAP Scanner in our DevOps pipeline at various stages of build and release.

This brings us to the end of this blog post.

Just like DevOps, DevSecOps also needs cultural shift. It needs collaboration from all departments of an organization to achieve security at each level.

Hope you enjoyed reading it. Happy Learning!!

Couple of references I used for writing this blog post:

https://docs.microsoft.com/en-us/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops

Cloud

Terraform and Azure DevOps

Purpose of this article

The main purpose of this article is to show you how to deploy your infrastructure using Terraform on Azure DevOps and deploy a sample application on multiple environments.

I've been working with Terraform for a while now, and as part of my learning process I thought I should write a blog post to show how to work with Terraform on Azure DevOps and deploy an application into multiple environments.

In this post, we'll spin up our infrastructure on Azure by setting up the build & release pipelines, and we'll also take a look at what each of the tasks in those pipelines does.

Things you need to follow along

If you would like to do this on your own, following are the prerequisites you need:

  • Azure Subscription
  • Azure DevOps Account

Assumptions

This blog assumes that you have a fair understanding of Azure, Azure DevOps & Terraform. Initially, we'll go through the setup required, and then I'll discuss each of the pipeline steps in detail.

Ok, lets dive right in.

As you may already know, Terraform is one of the infrastructure-as-code tools that enables us to deploy landing zones in cloud environments like Azure, AWS, GCP, and so on.

Terraform is considered one of the tools in the DevOps toolset.

So, we’ll take a look at how we can deploy our landing zone to different environments using Azure DevOps and deploy a sample application to it.

I've taken Microsoft's demo application PartsUnlimited and added my Terraform code to it.

It also contains the build and release pipeline json files you can import to follow along and replicate the same in your own subscription.

Here are the steps that we'll follow as part of this implementation:

  1. Import the code from my github repo to Azure DevOps
  2. Setup build pipeline
  3. Setup release pipeline
  4. Access the application in Dev
  5. Deploy the application to PreProd, Prod
  6. Walk-Through of terraform code, tasks in build & release pipelines

Code import from GitHub & Project Setup

Login to Azure DevOps and create a new project.

Alt Text

Click 'Repos' -> Files and import the code. Click the third option, Import.

Alt Text
Alt Text

Copy/Paste the following URL in clone URL https://github.com/vivek345388/PartsUnlimited.git and click on import

Alt Text

Once it’s done, it will show that the code is now imported and you will be able to see the repo with code.

Alt Text

In the above folder,

  1. The Infra.Setup folder contains the Terraform files that we will be using to deploy our infrastructure.
  2. The Pipeline.Setup folder contains the build & release pipeline JSON files. Download both JSON files, Build Pipeline & Release Pipeline, to a local folder.
Alt Text

Repeat the same step to download release pipeline json file from the code ReleasePipeline->PartsUnlimitedE2E_Release.json as well to your local folder.

Build Pipeline Setup

Now, let's set up the build pipeline. Click Pipelines -> Pipelines.

Alt Text

Click on ‘Import Pipeline’

Alt Text

Click on browse and select the downloaded build Json file.

Alt Text
Alt Text

Once the import is successful, you will see the below screen, where it says some settings need attention.

For the agent pool, choose 'Azure Pipelines'.

Alt Text

In the agent specification, choose ‘vs2017-win2016’

Alt Text

Click on ‘Save & queue’ to queue a new build.

Alt Text

Choose the defaults and click on ‘save and run’

Alt Text

Once it's complete, you should be able to see the pipeline run and its results.

Alt Text

We can also see the published artifacts in the results.

Alt Text

Now this completes the build pipeline setup. Let’s also configure release pipeline.

Release Pipeline Configuration

Click on ‘releases’ and click on ‘New pipeline’

Alt Text
Alt Text

Quick note: at the time of writing this article, there is no option to import an existing pipeline from the new release pipeline page when you don't have any release pipelines yet. Hence we have to create a new empty pipeline first, to get to the screen where we can import the downloaded release pipeline JSON file.

Choose ‘empty job’ and click on ‘save’

Alt Text
Alt Text

Now, come back to the Releases page one more time and choose the import pipeline option.

Alt Text

Choose the release pipeline JSON that you downloaded earlier.

Alt Text

It will look like the below after the pipeline has been imported. Click on the 'Dev' stage to configure the settings.

Alt Text

Quick note: you need to have the following tasks installed from the Azure Marketplace. If you don't have them in your organization, please get them from here.

  1. Replace tokens
  2. Terraform

Click on ‘Azure cli’ & ‘App service deploy’ tasks and choose the subscription to authorize.

Quick Note: I’m not using service principals/connections here to keep it simple for the purpose of this blog post.

Alt Text
Alt Text

Repeat the same steps for the rest of the stages, 'PreProd' & 'Prod'. Once you complete all the tasks that need attention, click Save at the top of the screen to save the pipeline.
Here is how the pipeline should look after you complete everything.

Alt Text

After you have saved everything, click on ‘Create release’ in above screen.

Alt Text

Click on ‘logs’ option to view the logs for each of the tasks.

Alt Text

After successful deployment to Dev, it would look like this.

Alt Text

Once everything is done, you will see that the code is deployed successfully to Dev, and you can browse the page by accessing the webapp link.

Go to your Azure portal and grab your webapp link and access it.

Alt Text
Alt Text

Back in your Azure DevOps release pipeline, as continuous deployment is enabled, the code is deployed to each environment one after the other, as soon as the previous deployment succeeds.

Alt Text

Now let’s take a minute to examine what each of the files in our Infra.Setup folder does.

Alt Text

I’ve used the concept of modules in terraform to isolate each of the components we are deploying. This is similar to linked templates in ARM templates.

Every Terraform configuration (a directory of .tf files) that we author is considered a module.

In a simple Terraform configuration with only one root module, we create a flat set of resources and use Terraform's expression syntax to describe the relationships between these resources:

resource "azurerm_app_service_plan" "serviceplan" {
  name                = var.spName
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  sku {
    tier = var.spTier
    size = var.spSKU
  }
}

resource "azurerm_app_service" "webapp" {
  name                = var.webappName
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.serviceplan.id
}

In the above code block, we declare two resources, an app service plan and an app service, in a single file, and the app service references the app service plan in that same file. While this approach is fine for smaller deployments, as the infrastructure grows it becomes challenging to maintain these files.

When we introduce module blocks, our configuration becomes hierarchical rather than flat: each module contains its own set of resources, and possibly its own child modules, which can potentially create a deep, complex tree of resource configurations.

However, in most cases Terraform strongly recommends keeping the module tree flat, with only one level of child modules, and using a technique similar to the above, with expressions describing the relationships between the modules.

module "appServicePlan" {
  source = "./modules/appServicePlan"
  spName = var.spName
  region = var.region
  rgName = var.rgName
  spTier = var.spTier
  spSKU  = var.spSKU
}

module "webApp" {
  source         = "./modules/webApp"
  name           = var.webAppName
  rgName         = var.rgName
  location       = var.region
  spId           = module.appServicePlan.SPID
  appinsightskey = module.appInsights.instrumentation_key
}

Here you can see that both the app service plan and the app service are called as modules from the main.tf file.

> Definition Credits: Terraform.io

Benefits of module-based templates

Modules, or linked templates, give us the following benefits:
1. You can reuse the individual components for other deployments.
2. For small to medium solutions, a single template is easier to understand and maintain, since you can see all the resources and values in one file. For advanced scenarios, modules or linked templates enable you to break the solution down into targeted components.
3. You can easily add new resources in a new template and call them via the main template.

Following are the resources that we deployed as a part of this blog post.

1.    App service plan – To host the Webapp
2.    App Service – Webapp to host the application.
3.    Application insights – To enable monitoring.

Its hierarchy looks like this.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/nrwruxhd501rsgfy2byb.png)

In general when we have a single file for deployments, we pass the variables in the same file or use a .tfvars file to pass the variables.

> Variables are same as Parameters in ARM templates

In the above file structure, each individual template, for example webapp.tf, declares the variables it needs. The values have to be passed in when the module is called. Remember that each Terraform configuration we create under the modules folder is treated as a module.

variable "name" {}
variable "location" {}
variable "rgName" {}
variable "spId" {}
variable "appinsightskey" {}

resource "azurerm_app_service" "webApp" {
  name                = var.name
  location            = var.location
  resource_group_name = var.rgName
  app_service_plan_id = var.spId

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = var.appinsightskey
  }
}

resource "azurerm_app_service_slot" "webApp" {
  name                = "staging"
  app_service_name    = azurerm_app_service.webApp.name
  location            = azurerm_app_service.webApp.location
  resource_group_name = azurerm_app_service.webApp.resource_group_name
  app_service_plan_id = azurerm_app_service.webApp.app_service_plan_id

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = var.appinsightskey
  }
}

Now let's see how the values are passed and how the modules are called from the individual templates.

There are two main files that control the entire deployment.

1.  main.tf     -  Contains the code to call all the individual resources.
2.  main.tfvars -  Contains variables that are consumed by main.tf file

In the main.tf file, each of the modules is called as follows:

module "appServicePlan" {
  source = "./modules/appServicePlan"
  spName = var.spName
  region = var.region
  rgName = var.rgName
  spTier = var.spTier
  spSKU  = var.spSKU
}

module "webApp" {
  source         = "./modules/webApp"
  name           = var.webAppName
  rgName         = var.rgName
  location       = var.region
  spId           = module.appServicePlan.SPID
  appinsightskey = module.appInsights.instrumentation_key
}

The variables are declared in the same file, in the variables section:

variable "region" {}
variable "rgName" {}
variable "spName" {}
variable "spTier" {}
variable "spSKU" {}
variable "webAppName" {}
variable "appInsightsname" {}
variable "AIlocation" {}

The values for above variables will be passed from main.tfvars file.

We use the same templates for deployment to all the environments, so how does Azure DevOps handle deployments to different environments?

We keep place holders `#{placeholdername}#` for each of these values passed in our main.tfvars file.

region = #{region}#
rgName = #{ResouceGroupName}#
spName = #{spName}#
spSKU = #{spSKU}#
spTier = #{spTier}#
webAppName = #{webAppName}#
appInsightsname = #{appInsightsname}#
AIlocation = #{AIlocation}#

When we use the same templates for deploying to multiple environments, we use the 'Replace Tokens' task in Azure DevOps to substitute the respective values for each environment. This lets us choose different values per environment.

For example, the value for #{webAppName}# will be different per environment.

app-dev-webapp for dev
app-ppd-webapp for preprod
app-prd-webapp for prod

While the main.tfvars file has a place holder #{webAppName}# for this, we declare the values for it in our variables section of release pipeline

Alt Text

The 'Replace Tokens' task has options for the token prefix and suffix that surround the placeholder values in the files we would like to replace them in. In the target files field, we list the files we want targeted for this replacement. Here we gave **/*.tf and **/*.tfvars as the targets, as these files have the placeholder content.

Alt Text

Build Pipeline

The build pipeline is mostly self-explanatory, as the first couple of tasks compile the application and publish the code.

Take a look at the Publish Artifact: Artifacts, Publish Artifact: Infra.Setup tasks

Publish Artifact: Artifacts : publishes the compiled code to Azure Pipelines for consumption by release pipelines

Alt Text

Publish Artifact: Infra.Setup: publishes the Terraform templates to Azure Pipelines for consumption by the release pipeline. As we don't need to compile them, we can directly choose them from the repo as the path to publish.

Alt Text

At the end of the build pipeline, it would publish the artifacts as below:

Alt Text

These will be consumed in our release pipeline for deployment.

Release Pipeline

You can see that the source artifacts are from our build pipeline.

Alt Text

Now let's take a look at each of the release tasks.

1. Create Resource Group and Storage Account: creates a storage account for storing the .tfstate file, in which Terraform keeps the state of our deployment.

Alt Text
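
Under the hood, this Azure CLI task runs something along these lines (the resource group, account and container names are placeholders, and storage account names must be globally unique):

az group create --name rg-tfstate --location uksouth
az storage account create --name sttfstatedemo123 --resource-group rg-tfstate --location uksouth --sku Standard_LRS
az storage container create --name tfstate --account-name sttfstatedemo123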

2.Obtain access Key and assign to pipeline variable: Retrieves the storage account key and assigns it to a variable in Azure Pipelines.

Alt Text

3. Replace tokens in **/*.tf, **/*.tfvars:

Remember that we kept placeholders so the values can be replaced per environment; this task is responsible for that. The values for each of the placeholders in the main.tfvars file are defined in the variables section of each stage.

Alt Text

4.Install Terraform 0.13.4: Installs terraform on the release agent.

Alt Text

5. Terraform: init: initializes the Terraform configuration; we have also specified the resource group and the storage account where the .tfstate file will be placed.

Alt Text
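
The command the task runs is roughly equivalent to the following, pointing the azurerm backend at the storage account created earlier (all values are placeholders matching the sketch above):

terraform init \
  -backend-config="resource_group_name=rg-tfstate" \
  -backend-config="storage_account_name=sttfstatedemo123" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=partsunlimited.tfstate"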

6.Terraform: plan : Runs terraform deployment in dry-run mode

Alt Text

7.Terraform: apply -auto-approve: Applies the configuration based on the dry-run mode in step 6.

Alt Text
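
On the command line, tasks 6 and 7 correspond roughly to the following, with the values in main.tfvars having already been token-replaced by the earlier task:

terraform plan -var-file="main.tfvars"
terraform apply -auto-approve -var-file="main.tfvars"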

8. Retrieve Terraform Outputs: this task retrieves each of the outputs produced after terraform apply completes so that they can be consumed by the 'Azure App Service Deploy' task. For ARM deployments there is an ARM Outputs task readily available; here we need to write a small script to get the outputs.

Alt Text
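
A minimal sketch of such a script, assuming an output named webAppName is defined in the Terraform configuration and that jq is available on the agent:

# Read a terraform output and expose it as a pipeline variable for later tasks
webAppName=$(terraform output -json | jq -r '.webAppName.value')
echo "##vso[task.setvariable variable=webAppName]$webAppName"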

9.Azure App Service Deploy: Deploys the application code into the webapp.

Alt Text

Conclusion

This brings us to the end of the blog post.

Hope this helps you learn, practice and deploy your infrastructure using Terraform via Azure DevOps!!

Thanks for reading this blog post & Happy Learning..

Cloud

End-to-end correlation across Logic Apps

An excellent article on correlating multiple Logic Apps which I find very useful. I am re-blogging this post written by Toon Vanhoutte.

Toon Vanhoutte

When using a generic and decoupled integration design, your integrations often span multiple Logic Apps.  For troubleshooting purposes, it's important to be able to correlate these separate Logic Apps with each other.  Recently, a new feature has been introduced to improve this.

Existing correlation functionality

  • Let’s create a Logic App with a simple request trigger.

Correlation-01

  • Invoke the Logic App.  In the run history details, you will notice a correlation id.

Correlation-02

  • Now, update the Logic App to call another Logic App.

Correlation-03

  • Invoke the parent Logic App.  In their run history details, you will notice that they both share the same id. This is because Logic Apps automatically adds the x-ms-client-tracking-id HTTP header, when calling the child Logic App.  If this header is present, that’s the value taken for the correlation id.

Correlation-04

  • Your client application can also provide this HTTP header, so a custom correlation id is used.

Correlation-05

  • This results in these…

View original post 566 more words

azure, Azure DevOps, Powershell

Azure : Add IP restriction rule to App Service using powershell script

Overview:

Azure App Service has a feature that enables you to restrict access to a web application using IP restrictions. You can allow or deny access to a set of IPs for your web app using this feature.

You can find this option by clicking Networking > Configure Access Restrictions.

NetworkingAppService

WhitelistIP

By default, it will apply the same restriction to the scm website as well.

Use Cases :

Below are a few use cases where you need to add IP restrictions.

  1. Release agent IPs: when you automate the provisioning of Azure resources through pipelines, you sometimes need to whitelist the IP of the agent (the VM on which the tasks are running) for certain tasks (e.g. a health check of the website).
  2. Outbound IPs: when you have a web job associated with your web app, all its calls are made from the outbound IP list available in the app service. In that case you need to whitelist the outbound IPs as well (turning on the Allow Azure IPs option also works for this scenario).
  3. User IPs: if you want to provide access to different developers and testers, you need to whitelist their IPs.

You can run this PowerShell in your release pipeline once the web app is created; it will whitelist the IPs that you provide. You can add one pipeline variable containing all the comma-separated IP addresses and use that variable in the PowerShell task to pass the IPs to the script.
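
As an aside, if you are on a recent Azure CLI, a single rule can also be added without PowerShell; here is a sketch with placeholder names and an example IP:

az webapp config access-restriction add --resource-group MyResourceGroup --name MyWebApp \
  --rule-name WhitelistIP1 --action Allow --ip-address 203.0.113.10/32 --priority 1001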

Parameters:

  1. RGName : Name of the Resource Group
  2. WebAppName : Name of the App Service
  3. priority :  Priority of the IP restriction(e.g: 1001)
  4. IPList : list of comma separated IPs which needs to be white listed

Script:

Script 1 :

Param
(
# Name of the resource group that contains the App Service.
[Parameter(Mandatory=$true)]
$RGName,

# Name of your Web or API App.
[Parameter(Mandatory=$true)]
$WebAppName,

# priority value.
[Parameter(Mandatory=$true)]
$priority,

# WhitelistIp values.
[Parameter(Mandatory=$true)]
$IPList,

# rule to add.
[PSCustomObject]$rule


)
function Add-AzureIpRestrictionRule
{
Param
(
[Parameter(Mandatory=$true)] $ResourceGroupName,
[Parameter(Mandatory=$true)] $AppServiceName,
[Parameter(Mandatory=$true)] [PSCustomObject]$rule
)
$ApiVersions = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web | 
Select-Object -ExpandProperty ResourceTypes |
Where-Object ResourceTypeName -eq 'sites' |
Select-Object -ExpandProperty ApiVersions

$LatestApiVersion = $ApiVersions[0]

$WebAppConfig = Get-AzureRmResource -ResourceType 'Microsoft.Web/sites/config' -ResourceName $AppServiceName -ResourceGroupName $ResourceGroupName -ApiVersion $LatestApiVersion

$WebAppConfig.Properties.ipSecurityRestrictions = $WebAppConfig.Properties.ipSecurityRestrictions + @($rule) | 
Group-Object name | 
ForEach-Object { $_.Group | Select-Object -Last 1 }

Set-AzureRmResource -ResourceId $WebAppConfig.ResourceId -Properties $WebAppConfig.Properties -ApiVersion $LatestApiVersion -Force 
}
$IPList= @($IPList-split ",")
Write-Host "IPList found "$IPList"."
$increment = 1
foreach ($element in $IPList)
{
if ($element -eq "" -OR $element -eq " ") {continue}
else
{
$element=$element.Trim()
$rule = [PSCustomObject]@{
ipAddress = "$($element)/32"
action = "Allow"
priority = "$priority"
name = "WhitelistIP"+ $increment}
$increment++
Add-AzureIpRestrictionRule -ResourceGroupName "$RGName" -AppServiceName "$WebAppName" -rule $rule
}
}
$OutboundIP = @(Get-AzureRmWebApp -Name "$WebAppName" -ResourceGroupName "$RGName").possibleOutboundIPAddresses -split ","
$increment = 1
foreach ($element in $OutboundIP)
{
$rule = [PSCustomObject]@{
ipAddress = "$($element)/32"
action = "Allow"
priority = "$priority"
name = "OutboundIP"+ $increment}
$increment++
Add-AzureIpRestrictionRule -ResourceGroupName "$RGName" -AppServiceName "$WebAppName" -rule $rule
}

This script is useful when you want to add a small number of individual IP addresses.

If you have a large number of IP addresses, you can put them in a PowerShell script file (script 3) as a variable and have it call the PowerShell file below (script 2), which will add those IPs to the settings.

The reason we are using two PowerShell scripts is that script 3 may vary from one environment to another, as it contains the list of IPs. So you may need to create multiple copies of script 3 based on the environment (dev, test or prod). All those files can call script 2 to add the IPs to the access restriction rules. However, if the list of IPs does not change between environments, you can put the "rules" custom object in script 2 itself and use it there.

Script 2:

Param(
[string]$WebAppName,
[string]$RGName,
[string]$resourceType='Microsoft.Web/sites/config',
[PSCustomObject] $rules
)

function AddRules($rules) {

# Build the restriction objects from the supplied list, incrementing the priority for each rule
$newRules = @()
$priority = 100
foreach ($item in $rules) {

$rule = [PSCustomObject]@{ ipAddress = $item.ipAddress ; priority = $priority }

$newRules += $rule
$priority = $priority + 100
}
return $newRules
}

$ApiVersions = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web |
Select-Object -ExpandProperty ResourceTypes |
Where-Object ResourceTypeName -eq 'sites' |
Select-Object -ExpandProperty ApiVersions

$LatestApiVersion = $ApiVersions[0]

# Registering IPs
Write-Host 'Registering IPs...'

$Settings = Get-AzureRMResource -ResourceName $WebAppName -ResourceType $resourceType -ResourceGroupName $RGName -ApiVersion $LatestApiVersion

$Settings.Properties.ipSecurityRestrictions = AddRules -rules $rules

Set-AzureRmResource -ResourceId $Settings.ResourceId -Properties $Settings.Properties -ApiVersion $LatestApiVersion -Force

Write-Host -ForegroundColor "Green" "IP range Added successfully!"

Script 3:

Param
(
# Name of the resource group that contains the App Service.
[Parameter(Mandatory=$true)]
$RGName,

# Name of your Web or API App.
[Parameter(Mandatory=$true)]
$WebAppName,

# Path to the script 2 file that adds the rules.
[Parameter(Mandatory=$true)]
$InputFile
)
[PSCustomObject]$rules = 
@{ipAddress = "XXX.XX.2.XXX/32"},`
@{ipAddress = "XXX.XX.2.XXX/32"},`
@{ipAddress = "XX.XXX.XX.XXX/32"},`
@{ipAddress = "XXX.X.3X.22X/32"},`
@{ipAddress = "2X3.22X.XXX.XXX/32"},`
@{ipAddress = "2X3.22X.XXX.XXX/32"},`
@{ipAddress = "XXX.XX.2.XX/32"},`
@{ipAddress = "2X3.22X.XXX.XXX/32"},`
@{ipAddress = "2XX.XXX.XXX.XXX/32"},`
@{ipAddress = "2XX.X2X.XXX.2XX/32"},`
@{ipAddress = "XX.X2.XX.XX/32"},`
@{ipAddress = "XX.3X.X2X.2X2/32"},`
@{ipAddress = "XX.33.2XX.XX3/32"},`
@{ipAddress = "XX.XX.XX.2X3/32"},`
@{ipAddress = "X3.XX.X.XXX/32"},`
@{ipAddress = "2X2.XXX.X2.XX/32"},`
@{ipAddress = "2XX.X2X.XX.XXX/32"},`
@{ipAddress = "XX.XXX.X2X.XX/32"}

Write-Host 'PRODUCTION SLOT IPs...'
& $InputFile -WebAppName $WebAppName -RGName $RGName -rules $rules
Write-Host 'STAGING SLOT IPs...'
& $InputFile -WebAppName "$WebAppName/Staging" -RGName $RGName -rules $rules -resourceType Microsoft.Web/sites/slots/config

This is just a demo. You may need to change your script based on your need. Hope it helps. Happy learning. 🙂 

 

 

azure, Azure DevOps

Azure DevOps : Build and publish your project to userdefined folder using MSBuild task in build pipeline

Overview :

In most cases, your application's solution contains more than one project, for example one or more web apps, function apps, WCF services or web APIs. When you build the solution, you get a predefined folder structure in the artifact. But sometimes you might need to publish different projects to a user-defined folder structure rather than the default one.

In this article we will discuss how you can have complete control over where you want your projects to be placed while publishing the code.

Solution:

  • Add an MSBuild task to your build pipeline if you don't have one already, or if your only MSBuild task builds the whole solution.
  • Select the necessary settings in the new task as shown below. In the Project field, select the project that you want to build separately and publish to a different folder.

 

1

 

  • Use $(BuildConfiguration) in the Configuration section. 

2

  • BuildConfiguration can be configured in the variable section as per the need. 

3

  • The MSBuild Arguments field is the one that lets you publish your code to the folders you want. Put the below settings in the MSBuild Arguments section.
/p:WebProjectOutputDir="$(Build.ArtifactStagingDirectory)\$(BuildConfiguration)\WebAPIs\WebAPIExample1"
/p:OutputPath="$(Build.ArtifactStagingDirectory)\$(BuildConfiguration)\WebAPIs\WebAPIExample1\bin"

Below is the screenshot for your reference.

4

  • Build.ArtifactStagingDirectory is the predefined build variable which stores the local path on the build agent where any artifacts are copied to before being pushed to their destination. For example: c:\agent_work\1\a 

You can publish different projects into different user-defined folders by following the steps mentioned above. But what happens when the files you want are not part of the solution? For example, if your SQL scripts or PowerShell scripts are not part of the solution but you need them in your artifact, you can achieve that as well; it is discussed in detail in my next article.

Happy learning . 🙂

Cloud

Azure questions and answers

This blog contains some questions and answers about Azure. It is a running document and is updated now and then with new questions. Hope you find it useful.

Question 1:

Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?

Answer:

The resource group stores metadata about the resources. Therefore, when you specify a location for the resource group, you’re specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.

If the resource group’s region is temporarily unavailable, you can’t update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can’t update them.

Question 2 :

What is the difference between azure service and azure resource ?

Answer:

In general there is no strict difference between an Azure service and an Azure resource, but you can say that an Azure service is a service provided over the internet by Azure, and an Azure resource is an instance of that service (or of its components). When you pay for a service and use it for something, it becomes a 'resource' for you.

You can also see the difference in the Azure portal when you click 'Azure services' (it lists what Azure can provide) and 'All resources' (it lists what you already have).

Question 3:

What is the difference between Azure WebJobs and timer-triggered Azure Functions?

Answer:

WebJobs are associated with App Services. So if you have another web app deployed on the same App Service plan, its performance can be affected by the WebJob. A WebJob is not an independent component.

Function apps, on the other hand, are independent resources and are easier to code and deploy than a WebJob. A function app will not impact other services the way a WebJob might.

Question 4:

What happens to a workflow that is already running when you stop the logic app?

Answer:

Stopping the logic app only disables the trigger, which means no new workflow run will be started even if the trigger conditions are met. However, the workflows that have already been triggered will complete their execution, even after the logic app is turned off.

Ref : https://docs.microsoft.com/en-us/azure/logic-apps/manage-logic-apps-with-azure-portal

Question 5:

Can you resubmit a run if the workflow is disabled?

Answer:

Yes, even when the workflow is disabled, you can still resubmit runs.

Ref : https://docs.microsoft.com/en-us/azure/logic-apps/manage-logic-apps-with-azure-portal

Question 6:

How can you check the run history of a stateless workflow in a Standard logic app?

Answer:

You can see the run history of a stateless workflow in debug mode.

To enable debug mode, you need to add the following key-value pair to the configuration (application settings) of the logic app.
Key : Workflows.<workflow_name>.OperationOptions
Value : WithStatelessRunHistory
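
For example, if the workflow were named StatelessWF (a placeholder name), the setting in the logic app project's local.settings.json could look roughly like the sketch below; in the Azure portal, the same key-value pair goes into the logic app's application settings. The storage and runtime values shown are only illustrative.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "Workflows.StatelessWF.OperationOptions": "WithStatelessRunHistory"
  }
}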

However, it will have some impact on performance compared to a stateless workflow running without debug mode, since additional calls have to be made to the storage account to write the state of the workflow, similar to a stateful workflow.

Ref : https://docs.microsoft.com/en-us/azure/logic-apps/create-single-tenant-workflows-visual-studio-code

Question 7:

What is the main drawback of a stateless workflow?

Answer:

A stateless workflow supports managed connector actions but does not support managed connector triggers. So you will have to use only built-in triggers when you choose a stateless workflow.

It is suitable only for short runs (around 5 minutes maximum) that process small chunks of data (under 64 KB), where you don’t have to store the state, inputs, and outputs. That is also why you cannot resubmit a stateless workflow run.
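
As a rough sketch, the workflow.json of a stateless workflow in a Standard logic app declares its kind as Stateless and uses a built-in trigger. The Request trigger and Response action below are just illustrative choices, not the only ones available.

{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {}
        }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": {
          "statusCode": 200
        },
        "runAfter": {}
      }
    },
    "outputs": {}
  },
  "kind": "Stateless"
}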

Ref : https://docs.microsoft.com/en-us/azure/logic-apps/single-tenant-overview-compare

azure

Azure Resource Manager and ARM templates

Azure Resource Manager:

Azure Resource Manager is the deployment and management service for Azure. You can think of it as a management layer that enables you to create, update, delete, and organize the resources in your Azure subscription.

You can create Azure resources using one of the following methods.

  1. Azure portal
  2. Azure PowerShell
  3. Azure CLI
  4. REST API (via a REST client)

When you create a resource using any of the above-mentioned methods, the Azure Resource Manager API handles your request in the background, which is why you get consistent results no matter which method you use to create a resource.

ARM Template:

Azure Resource Manager allows you to provision your resources using a declarative template. In a single template, you can deploy multiple resources along with their dependencies.

An ARM template is a JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group or subscription. The template can be used to deploy the resources consistently and repeatedly.

The benefit of using ARM templates in your project is mainly three-fold.

  1. Re-usability: Once you create the ARM template for a resource, you can use it multiple times to create the same kind of resources with just a few changes in the parameters.
  2. Consistency: By using a template, you can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are deployed in a consistent state.
  3. Change tracking: Since ARM templates are JSON files, you can add them to the source control of your project so that you can leverage all the features of source control like change logs, history of changes, author of the changes, etc.

Structure and Syntax of ARM templates:

An ARM template has the following structure.

{
  "$schema": "",
  "contentVersion": "",
  "apiProfile": "",
  "parameters": { },
  "variables": { },
  "functions": [ ],
  "resources": [ ],
  "outputs": { }
}

The elements are described below.

  • $schema (required): Location of the JSON schema file that describes the version of the template language.
  • contentVersion (required): Version of the template (such as 1.0.0.0). You can provide any value for this element. Use this value to document significant changes in your template.
  • apiProfile (optional): An API version that serves as a collection of API versions for resource types. Use this value to avoid having to specify API versions for each resource in the template. For more information, see Track versions using API profiles.
  • parameters (optional): Values that are provided when deployment is executed to customize resource deployment. They come in handy in a Continuous Deployment pipeline.
  • variables (optional): Values that are used as JSON fragments in the template to simplify template language expressions.
  • functions (optional): User-defined functions that are available within the template.
  • resources (required): Resource types that are deployed or updated in a resource group or subscription.
  • outputs (optional): Values that are returned after deployment. They come in handy when the ARM template is used in a Continuous Deployment pipeline.
  • There are many functions that you can use in ARM templates. You can find more details here. And if you wish to know the syntax of each element, you can go through the links mentioned in the list above.
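
To see how these elements fit together, here is a small illustrative sketch that uses a parameter, a variable built with the concat and uniqueString functions, and an output. The storagePrefix parameter, the SKU, and the API version are just placeholder choices for the example.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storagePrefix": {
      "type": "string",
      "defaultValue": "demo"
    }
  },
  "variables": {
    "storageAccountName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[variables('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {}
    }
  ],
  "outputs": {
    "storageAccountId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
    }
  }
}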

Types of ARM templates:

  1. Nested Templates
  2. Linked Templates

Nested Templates:

You can deploy multiple resources using one template by defining the resources one after another. Here you will have one main template file that contains the definitions for all the resources, and only one parameter file that contains all the parameters given as input to the template.

This is advisable if you have a small number of resources and not much customization is involved in your ARM template. Here is an example of a nested ARM template with parameters.

Limitation: The template file can be at most 1 MB in size.
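
As a rough illustration, a single template that declares an App Service plan and a web app depending on it might look like the following sketch. The parameter names, default values, SKU, and API versions are just placeholder choices for the example.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "svcPlanName": { "type": "string", "defaultValue": "demo-plan" },
    "webAppName": { "type": "string", "defaultValue": "demo-webapp" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2018-02-01",
      "name": "[parameters('svcPlanName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "S1" }
    },
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2018-11-01",
      "name": "[parameters('webAppName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', parameters('svcPlanName'))]"
      ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('svcPlanName'))]"
      }
    }
  ]
}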

Linked Templates:

Linked templates come in handy when:

  1. You have a large number of services to deploy: When you have a large number of services to deploy, there is a small chance that your template file size may go beyond 1 MB if you use a nested template. It is unlikely to go beyond 1 MB, since these are JSON files, but using linked templates will improve the readability of your code and make your ARM templates easier to manage.
  2. You want to reuse the template for multiple deployments: When you want to reuse the template of a resource for multiple deployments, you can leverage a linked template. All you need to do is have a separate parameter file for each deployment, containing the parameters that you want to use for that deployment.

Here is an example of a linked template.

A few tricks and tips:

The basic syntax of the template is JSON. However, below are a few points about how to use different expressions in the template.

  • Expressions start and end with brackets: [ and ], respectively.

For example :

"parameters": {
  "location": {
    "type": "string",
    "defaultValue": "[resourceGroup().location]"
  }
},
  • You can use the functions that Resource Manager provides within a template. Function calls are formatted as functionName(arg1, arg2, arg3). The syntax .location retrieves one of the properties (i.e. the location) from the object returned by that function.
  • Template functions and their parameters are case-insensitive. For example, Resource Manager resolves variables('var1') and VARIABLES('VAR1') as the same. Unless the function explicitly modifies case (such as toUpper or toLower), the function preserves the case.
  • "demoVar1": "[[test value]" is interpreted as [test value], but "demoVar2": "[test] value" is interpreted as [test] value.
  • To pass a string value as a parameter to a function, use single quotes. For example: "name": "[concat('storage', uniqueString(resourceGroup().id))]"
  • To escape double quotes in an expression, such as adding a JSON object in the template, use the backslash. For example:
"tags": {
    "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
},
  • You can use the concat function to merge strings.
  • Sometimes, you would need to use some properties of a resource while creating other resources. For example, you would need the resource ID of the server farm (Microsoft.Web/serverFarms/<app service plan>) while creating the scale-out and scale-in settings. You can use the below syntax to get the resource ID of the server farm: [resourceId('Microsoft.Web/serverFarms/', parameters('svcPlanName'))]
  • reference:

When you need to get the properties of a resource, you need to use the reference function.

  • For example:
     
    > To get the instrumentation key of application insight :
    [reference(resourceId('microsoft.insights/components/', parameters('AppInsights_name')), '2015-05-01').InstrumentationKey]
    > To get the app id of application insight : 
    [reference(resourceId('Microsoft.Insights/components/', parameters('AppInsights_name')), '2015-05-01').AppId]
    > To get the web app url of a deployment slot in app service :
    [concat('https://', reference(resourceId('Microsoft.Web/Sites/Slots', parameters('webAppName'),variables('SlotName')),'2018-11-01').defaultHostName)]
    > To get the web app url in app service :
    [concat('https://', reference(resourceId('Microsoft.Web/Sites', parameters('webAppName')),'2018-11-01').defaultHostName)]
  • You can also use the reference function to consume values from other linked templates. You can find an example of this here. In that example, you can see that the output values of the Application Insights linked template are used in the main template via the reference function.
  • Condition: When you must decide during deployment whether to create a resource, use the condition element. The value for this element resolves to true or false. When the value is true, the resource is created. When the value is false, the resource isn't created. The value can only be applied to the whole resource. Typically, you use this value when you want to create a new resource or use an existing one. For example, to specify whether a new storage account is deployed or an existing storage account is used, use:
{
  "condition": "[equals(parameters('newOrExisting'),'new')]",
  "type": "Microsoft.Storage/storageAccounts",
  "name": "[variables('storageAccountName')]",
  "apiVersion": "2017-06-01",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "[variables('storageAccountType')]"
  },
  "kind": "Storage",
  "properties": {}
}
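
  • Copy: When you need to deploy more than one instance of a resource, use the copy element on the resource. For example, the template below creates three storage accounts: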
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "apiVersion": "2016-01-01",
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[concat('storage', copyIndex(1))]",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "Storage",
      "properties": {},
      "copy": {
        "name": "storagecopy",
        "count": 3
      }
    }
  ],
  "outputs": {}
}

Note: In the above example, the copyIndex value starts from 1 because of the offset passed in copyIndex(1), so the storage account names will be storage1, storage2, and storage3. If you do not mention the offset value, it will start from 0, which means the storage names would be storage0, storage1, and storage2.

> To create WebApp1Storage, FunctionApp1Storage, WebApp2Storage, FunctionApp2Storage

"parameters": { 
  "apps": { 
    "type": "array", 
    "defaultValue": [ 
      "WebApp1", 
      "FunctionApp1", 
      "WebApp2",
      "FunctionApp2" 
    ] 
  }
}, 
"resources": [ 
  { 
    "name": "[concat('storage', parameters('apps')[copyIndex()])]", 
    "copy": { 
      "name": "storagecopy", 
      "count": "[length(parameters('apps'))]" 
      "mode": "serial",
      "batchSize": 2
    }, 
    ...
  } 
]

By default, Resource Manager creates the resources in parallel. The order in which they’re created isn’t guaranteed. However, you may want to specify that the resources are deployed in sequence. For example, when updating a production environment, you may want to stagger the updates so only a certain number are updated at any one time.

To serially deploy more than one instance of a resource, set mode to serial and batchSize to the number of instances to deploy at a time. With serial mode, Resource Manager creates a dependency on earlier instances in the loop, so it doesn’t start one batch until the previous batch completes.

You can find more details about copy here.

I have tried to note down all the basic information one would need while creating an ARM template. However, there is much more to ARM templates than this. Feel free to suggest any feature or functionality you want me to add to this blog.

Happy learning 🙂

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#resource-iteration

https://github.com/Azure/azure-quickstart-templates/tree/master/monitor-autoscale-webappserviceplan-simplemetricbased#start-of-content

https://docs.microsoft.com/en-us/azure/templates/microsoft.web/2018-02-01/serverfarms

https://docs.microsoft.com/en-in/azure/azure-resource-manager/resource-group-template-functions

 

Cloud

Example of Linked ARM template

Main Template:

It has the usual template structure with the following sections.

  1. schema
  2. contentVersion
  3. parameters
  4. resources
  5. outputs

You can find the description of these in my previous blog about ARM templates.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
   //main template parameters
    "StorageContainerSASToken": {
      "type": "string",
      "metadata": {
        "description": "SASToken here Dynamically generated by AzureFileCopyTask"
      }
    },
    "StorageContainerURI": {
      "type": "string",
      "metadata": {
        "description": "Retrieve Azure Storage Container URI"
      }
    },
    "applicationinsight-template-file": {
      "type": "string",
      "metadata": {
        "description": "Specifies Application Insight ARM template file name"
      }
    },
    //Application Insights template parameters
    "AppInsights_name": {
      "type": "string",
      "metadata": {
        "description": "Name of Application Insight"
      }
    },
    "type": {
      "type": "string",
      "metadata": {
        "description": "Type of Application Insight"
      }
    },
    "location": {
      "type": "string",
      "metadata": {
        "description": "location of Application Insight"
      }
    }
  },
  "variables": {
  },
  "resources": [
    {
      "apiVersion": "2015-05-01",
      "name": "linkedTemplate_ApplicationInsight",
      "dependsOn": [
      ],
      "properties": {
        "templateLink": {
          "contentVersion": "1.0.0.0",
          "uri": "[concat(parameters('StorageContainerURI'), parameters('applicationinsight-template-file'), parameters('StorageContainerSASToken'))]"
        },
        "parameters": {
          "type": {
            "value": "[parameters('type')]"
          },
          "AppInsights_name": {
            "value": "[parameters('AppInsights_name')]"
          },
          "location": {
            "value": "[parameters('location')]"
          }
        },
        "mode": "Incremental"
      },
      "type": "Microsoft.Resources/deployments"
    }
  ],
  "outputs": {
    "APPINSIGHTS_INSTRUMENTATIONKEY": {
      "value": "[reference('linkedTemplate_ApplicationInsight').outputs.APPINSIGHTS_INSTRUMENTATIONKEY.value]",
      "type": "string"
    },
    "App_Id": {
      "type": "string",
      "value": "[reference('linkedTemplate_ApplicationInsight').outputs.App_Id.value]"
    }
  }
}

Note: I will write about the storage container SAS token and storage container URI parameters soon in my next blog.

Main Parameter:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    //common parameters
    "StorageContainerURI": {
      "value": ""   
    },
    "applicationinsight-template-file": {
      "value": "NameOfApplicationInsightTemplateFile"
    },
    "location": {
      "value": "West Europe"
    },
    //application insight parameters
    "AppInsights_name": {
      "value": "ExampleOfLinkedTemplate"
    },
    "type": {
      "value": "other"
    }
  }
}

Linked Template:

Below is the template for creating application insight.

{
 "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
 "contentVersion": "1.0.0.0",
 "parameters": {
  "AppInsights_name": {
   "type": "string",
   "defaultValue": "YourApplicationInsightName"
   },
  "location": {
   "type": "string"
   },
  "type": {
   "type": "string"
   }
  },
  "variables": {
  },
  "resources": [
  {
    "name": "[parameters('AppInsights_name')]",
    "type": "microsoft.insights/components",
    "location": "[parameters('location')]",
    "apiVersion": "2015-05-01",
    "properties": {
    "ApplicationId": "[parameters('AppInsights_name')]",
    "Application_Type": "[parameters('type')]",
    "Flow_Type": "Redfield",    
    }
   }
  ],
  "outputs": {
   "APPINSIGHTS_INSTRUMENTATIONKEY": {
    "value": "[reference(resourceId('microsoft.insights/components/', parameters('AppInsights_name')), '2015-05-01').InstrumentationKey]",
    "type": "string"
    },
   "App_Id": {
    "type": "string",
    "value": "[reference(resourceId('Microsoft.Insights/components/', parameters('AppInsights_name')), '2015-05-01').AppId]"
   }
  }
 }