Create an Application with Web Template Studio

Creating new applications is something I do a fair amount of. Most of the time they are throwaway projects used to test something out or demo projects used for this blog. With all the project creation going on I try and keep an eye out for tools that make the process easier. This post is going to cover a tool I came across a few weeks ago from Microsoft called Web Template Studio (WebTS).

What is WebTS?

From the project’s GitHub README:

Microsoft Web Template Studio (WebTS) is a Visual Studio Code Extension that accelerates the creation of new web applications using a wizard-based experience. WebTS enables developers to generate boilerplate code for a web application by choosing between different front-end frameworks, back-end frameworks, pages and cloud services. The resulting web app is well-formed, readable code that incorporates cloud services on Azure while implementing proven patterns and best practices.

For the front-end, we have the options of Angular, React, or Vue, and on the back-end, there are options for ASP.NET Core, Flask, Moleculer, or Node.

Installation

We are assuming you already have a recent version of Visual Studio Code (VSCode) installed, but if not you can download it from here. Open VSCode and from the left side of the screen select Extensions. In the search box at the top enter Web Template Studio and then from the list select Web Template Studio. In the detail page that opens click the green Install button at the top.

Project Creation

Now that we have the extension installed press Ctrl + Shift + P to open the VSCode command palette and enter Web Template Studio: Launch (or as much as needed for the option to show) and then select Web Template Studio: Launch.

This will launch the Web Template Studio project creation process. The first page that shows gives you a very basic set of options, but enough for our example. To see the full array of options Web Template Studio provides, hit Next on this first screen instead of Create Project like we do in this example. Also, note that if you want an ASP.NET Core back-end you need to select it on this summary screen as it isn’t yet an option later in the process. Back to our example: on this page, you will need to enter a Project Name, pick a Front-end Framework, and pick a Back-end Framework before finally clicking Create Project.

After the process finishes a dialog will show letting us know it is complete. Click Open Project to load the created project in a new instance of VSCode.

Running the Project

Before getting to a runnable state we need to run a couple of terminal commands. This can be done from VSCode’s built-in terminal or from an external terminal of your choice. In VSCode, if you don’t see a terminal you can use Ctrl + Shift + ` to open a new one, or from the menu select Terminal > New Terminal. The following are the two commands that need to be run using npm, but they can be adjusted to use yarn instead if you prefer it. The second command may be specific to an ASP.NET Core back-end, so check the project’s README.md to verify.

npm install
npm run restore-packages

Now the project can be run with the following command.

npm start

The resulting application will look something like this.

Wrapping Up

WebTS seems to be a great tool for quickly getting a project up and running. I do recommend going through the full project creation wizard as it has options to set up Azure integration as well as the ability to add up to 20 different pages to the generated application. Also, keep in mind that the ASP.NET back-end framework option is pretty new so I wouldn’t be surprised to see some changes as it progresses through the preview stage. Make sure and check out the project’s GitHub repo.

Migrations in Transactions with DbUp

This post will be continuing our exploration of DbUp with the addition of transactions as part of migration execution. If you are new to DbUp check out the following post to catch up on this series.

Database Migrations with DbUp
Code-based Database Migrations with DbUp
Always Run Migrations with DbUp
Logging Script Output with DbUp

Transaction Types and Restrictions

DbUp has three different options for transactions. The default is no transaction, which is what we have been using so far. The other two options are a transaction per migration and a single transaction for all the migrations run by a single upgrader. As far as I can tell, there is no way to share a transaction across multiple upgraders, which means in our sample application our normal migrations could run correctly and get committed while an issue occurs in our always run migrations. The other thing to keep in mind is that some database providers don’t support transactions when changing the structure of the database. As pointed out by the DbUp docs, the Postgres docs have a good summary of transactional DDL support across database providers.

Adding Transactions

In the following example, we have changed the upgrader to use a single transaction. This is the full code for the upgrader; the change is the WithTransaction() call.

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(),
                                                       f => !f.Contains(".AlwaysRun."))
                 .LogToConsole()
                 .WithTransaction()
                 .Build();

The other options are WithTransactionPerScript for, surprise, a transaction per migration, and WithoutTransaction for no transactions (or just leave it out since this is the default). The following is a sample of the output with transactions on and two different upgraders in play; note the two Beginning transaction lines, and see the per-script sketch after the output.

Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
No new scripts need to be executed - completing.
Beginning transaction
Beginning database upgrade
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.01-EverytimeNoPrints.sql'
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.02-EverytimeNoPrintsNoCountOn.sql'
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.03-EverytimePrints.sql'
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.04-EverytimePrintNoCountOn.sql'
Upgrade successful
Success!
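
For completeness, here is what the per-script option looks like. This is a minimal sketch mirroring the single-transaction upgrader above, with just the transaction method swapped.

// Same setup as above, but each migration runs in its own transaction.
var perScriptUpgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(),
                                                       f => !f.Contains(".AlwaysRun."))
                 .LogToConsole()
                 .WithTransactionPerScript()
                 .Build();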

Wrapping Up

DbUp makes adding transactions to your migrations super simple so this was a really short post. It would be nice to have an option to use a transaction across multiple upgraders. The official DbUp docs on transactions can be found here.

Logging Script Output with DbUp

In today’s post, we will be going over how to log script output during a DbUp run. If you are new to this series of posts or DbUp in general you might find the following post helpful to review.

Database Migrations with DbUp
Code-based Database Migrations with DbUp
Always Run Migrations with DbUp

Enable Script Output Logging

I will be using the always run migrations we covered last week to simplify testing, but the concepts are the same for normal migrations. The change to enable script output logging is a single line added to our upgrader setup, the LogScriptOutput() call in the following code.

var alwaysRunUpgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(), 
                                                       f => f.Contains(".AlwaysRun."))
                 .JournalTo(new NullJournal())
                 .LogToConsole()
                 .LogScriptOutput()
                 .Build();

Running the application will result in something like the following. Notice the RecordsAffected line, which is the output from our migration.

Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
No new scripts need to be executed - completing.
Beginning database upgrade
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.Everytime.sql'
RecordsAffected: 0

Upgrade successful
Success!

Improve the Script Output

So the above is great for a simple migration that has a single statement, but for more complex migrations it might be helpful to use PRINT statements to provide more information. Let’s tweak our script to print the start and end times of the execution. The following is the sample script (and yes I know it would never actually update anything).

PRINT CONVERT(VarChar, GETDATE(), 121)  + ' Start every time';

UPDATE Application.Cities SET CityName = 'Nothing' WHERE 1 = 0;

PRINT CONVERT(VarChar, GETDATE(), 121)  + ' End every time';

The above migration results in the following output.

Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
No new scripts need to be executed - completing.
Beginning database upgrade
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.Everytime.sql'
2020-08-05 06:15:21.877 Start every time

2020-08-05 06:15:21.877 End every time

RecordsAffected: 0

Upgrade successful
Success!

The prints are helpful, but the records affected is noise in this case, especially since it is the records affected for the last statement and it prints per SQL statement. To suppress the records affected message we can use SET NOCOUNT ON in our script like the following.

SET NOCOUNT ON;

PRINT CONVERT(VarChar, GETDATE(), 121)  + ' Start every time with NoCount ON';

UPDATE Application.Cities SET CityName = 'Nothing' WHERE 1 = 0;

PRINT CONVERT(VarChar, GETDATE(), 121)  + ' End every time with NoCount ON';

SET NOCOUNT OFF;

And as you can see in the following we got a cleaner output.

Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
No new scripts need to be executed - completing.
Beginning database upgrade
Executing Database Server script 'DbupTest.Scripts.AlwaysRun.Everytime.sql'
2020-08-05 06:38:18.863 Start every time

2020-08-05 06:38:18.863 End every time

Upgrade successful
Success!

Wrapping Up

Logging the output of scripts can be really helpful at times, but if you are going to use it as we saw above it is important to make sure and plan what that output is going to look like. Having 500 instances of the number of records affected isn’t necessarily helpful.

Always Run Migrations with DbUp

In this post, we are going to walk through how to use DbUp to run specific migrations on every run instead of just once. If you are new to DbUp check out the following post to see how the sample project got to its current state.

Database Migrations with DbUp
Code-based Database Migrations with DbUp


Differentiating for Always Run

One of the keys to getting migrations that should run every time is being able to identify them separately from migrations that should only run once. This can be done at either the filename level or by placing all the always run files in a specific directory. This sample is going with the directory method, but the filename method would work basically the same way. In the sample project under the existing Scripts directory, I added an AlwaysRun directory. I also added an example script in this directory so we can verify that it is getting executed on every run. In the sample, the file is named Everytime.sql.

Based on the above when DbUp is processing it will see Everytime.sql as DbupTest.Scripts.AlwaysRun.Everytime.sql.
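If you would rather use the filename convention, the filters shown later in this post can key off a suffix instead of a directory. A quick sketch, where the ".Everytime.sql" suffix is a hypothetical convention of your choosing, not something DbUp requires:

using System;

// Hypothetical convention: always run scripts end in ".Everytime.sql".
Func<string, bool> isAlwaysRun =
    f => f.EndsWith(".Everytime.sql", StringComparison.OrdinalIgnoreCase);

// The normal upgrader would filter with: f => !isAlwaysRun(f)
// The always run upgrader would filter with: isAlwaysRun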

Filtering Out Always Run

Now that we have a way to identify the migrations we want to always run, we need to filter them out of the existing upgrade process so that they don’t get included in the log of already executed migrations. In DbUp this log is referred to as the journal. The extension methods that set up how migrations will be located, either WithScriptsEmbeddedInAssembly or WithScriptsAndCodeEmbeddedInAssembly in our examples, can be provided with a filter, which we will use to exclude migrations that are in the AlwaysRun directory. The following code is the upgrader with the filter in place.

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(),
                                                       f => !f.Contains(".AlwaysRun."))
                 .LogToConsole()
                 .Build();

var result = upgrader.PerformUpgrade();

Executing Always Run Migrations

Now that we have adjusted our original upgrader to exclude migrations in the AlwaysRun directory, we need something to execute the always run migrations. To accomplish this we add a second upgrader that is filtered to just the AlwaysRun scripts. The really important bit about this second upgrader is that it uses a NullJournal. The NullJournal is what keeps the execution of the scripts from being logged, which results in them always getting run. The following code was inserted after the above code; note the filter and the null journal.

if (result.Successful)
{
    var alwaysRunUpgrader =
        DeployChanges.To
                     .SqlDatabase(connectionString)
                     .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(),
                                                           f => f.Contains(".AlwaysRun."))
                     .JournalTo(new NullJournal())
                     .LogToConsole()
                     .Build();

    result = alwaysRunUpgrader.PerformUpgrade();
}

The following is the full Main function just in case the above code didn’t give enough context on how the code should fit together.

static int Main(string[] args)
{
    var connectionString =
        args.FirstOrDefault()
        ?? "Server=localhost; Database=WideWorldImporters; Trusted_connection=true";

    var upgrader =
        DeployChanges.To
                     .SqlDatabase(connectionString)
                     .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(),
                                                           f => !f.Contains(".AlwaysRun."))
                     .LogToConsole()
                     .Build();

    var result = upgrader.PerformUpgrade();

    if (result.Successful)
    {
        var alwaysRunUpgrader =
            DeployChanges.To
                         .SqlDatabase(connectionString)
                         .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly(), 
                                                               f => f.Contains(".AlwaysRun."))
                         .JournalTo(new NullJournal())
                         .LogToConsole()
                         .Build();

        result = alwaysRunUpgrader.PerformUpgrade();
    }

    if (!result.Successful)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine(result.Error);
        Console.ResetColor();
#if DEBUG
        Console.ReadLine();
#endif                
        return -1;
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();

    return 0;
}

Wrapping Up

With the above changes we now have a project that can run normal migrations, code-based migrations, and always run migrations. No matter what style of database change management you decide on, having the changes in some sort of source control system will be super helpful.

For more information on DbUp’s journalling check out the official docs.

Code-based Database Migrations with DbUp

In today’s post, we are going to check out the code-based migration feature of DbUp, which allows code to run as part of the migration process instead of just SQL-based scripts. The ability to run code as part of the migration provides a ton of flexibility in the migration process. This builds on last week’s post, Database Migrations with DbUp, so make sure and check it out if you are new to DbUp.

Script Provider Change

In our example application from last week’s post in the Program class when setting up our upgrader we used the following setup.

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                 .LogToConsole()
                 .Build();

The above only picks up embedded SQL type files. In order to also pick up code-based migrations we have to change from the WithScriptsEmbeddedInAssembly script provider to WithScriptsAndCodeEmbeddedInAssembly. The following is the upgrader with the new script provider in place.

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsAndCodeEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                 .LogToConsole()
                 .Build();

Code-based Migrations

To add a code-based migration, add a new class and set it to implement DbUp’s IScript interface. The script provider we changed to above will pick up any non-abstract class that implements IScript, as well as SQL script files. The results are combined and sorted by name to determine execution order, so make sure whatever naming convention you pick will sort properly for both your SQL scripts and your code.
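
To make the ordering concrete, here is roughly how the names from this series’ samples would sort together (the SQL script name comes from the embedded resource, the code script name from the type name plus ".cs"):

// Combined and sorted by name, so these execute in this order:
//   DbupTest.Scripts.0001-TestScript.sql   <- SQL script
//   DbupTest.Scripts.Code0002Test.cs       <- IScript implementation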

The IScript interface defines a single function, ProvideScript, which takes a factory function for creating database commands and returns a string. Depending on the need, you can return a string with the SQL to be executed, or you can create a command and execute whatever you need directly against the database, or some combination of both.
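
For reference, the interface itself is tiny; it looks like this (reproduced here for convenience, matching the signature used in the example below):

using System;
using System.Data;

namespace DbUp.Engine
{
    public interface IScript
    {
        // Return SQL to execute, or string.Empty if the work was done
        // directly via commands from the factory.
        string ProvideScript(Func<IDbCommand> dbCommandFactory);
    }
}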

To test out code-based migrations, I added the following new class in the Scripts folder. It doesn’t actually change any data but instead logs some information to the console. Even without changing data it still conveys how code-based migrations work.

using System;
using System.Data;
using DbUp.Engine;

namespace DbupTest.Scripts
{
    public class Code0002Test : IScript
    {
        public string ProvideScript(Func<IDbCommand> dbCommandFactory)
        {
            using (var cmd = dbCommandFactory())
            {
                cmd.CommandText = "SELECT DeliveryMethodName FROM Application.DeliveryMethods";

                using (var dr = cmd.ExecuteReader())
                {
                    Console.WriteLine("  Delivery Methods");

                    while (dr.Read())
                    {
                        Console.WriteLine($"    {dr["DeliveryMethodName"]}");
                    }
                }
            }

            return string.Empty;
        }
    }
}

The following is the console output from running the application.

Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
  Delivery Methods
    Air Freight
    Chilled Van
    Courier
    Customer Collect
    Customer Courier to Collect
    Delivery Van
    Post
    Refrigerated Air Freight
    Refrigerated Road Freight
    Road Freight
Executing Database Server script 'DbupTest.Scripts.Code0002Test.cs'
Checking whether journal table exists..
Upgrade successful
Success!

Wrapping Up

The ability to run code as part of the database migration process is an awesome bit of functionality. I would advise sticking with SQL-based migrations as the default and only using code when SQL doesn’t provide a way to do what is needed, or when doing the migration in code is much more understandable. Make sure and check out the official DbUp docs on this feature for more information.

Database Migrations with DbUp

Managing database schemas can be a challenging problem. It is also an area where there is no one-size-fits-all solution (not sure that is ever really true for anything). Solutions range from manual script management, to database vendor-specific options (such as DACPAC for Microsoft SQL Server), to migrations. This post will be looking at one of the options for a migration-based approach using DbUp.

Sample Database

I needed a sample database to start off with so I dug up one of my posts for Getting a Sample SQL Server Database to guide me through getting Microsoft’s WideWorldImporters sample database downloaded and restored.

DbUp Console Application

There are quite a few ways DbUp can be used; for this example, we are going to use it from a .NET Core console application. From a terminal, use the following command to create a new console application in the directory you want the application in.

dotnet new console

Now we can use the following command to add the DbUp NuGet package to the sample project. In this case, we are using the SQL Server package, but there are packages for quite a few database providers so install the one that is appropriate for you.

dotnet add package dbup-sqlserver

Next, open the project in Visual Studio (or any editor, but part of how this example is set up is easier in Visual Studio). In the Program class replace all the code with the following. We will look at a couple of the big pieces of this code below.

using System;
using System.Linq;
using System.Reflection;
using DbUp;

namespace DbupTest
{
    class Program
    {
        static int Main(string[] args)
        {
            var connectionString =
                args.FirstOrDefault()
                ?? "Server=localhost; Database=WideWorldImporters; Trusted_connection=true";

            var upgrader =
                DeployChanges.To
                             .SqlDatabase(connectionString)
                             .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                             .LogToConsole()
                             .Build();

            var result = upgrader.PerformUpgrade();

            if (!result.Successful)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(result.Error);
                Console.ResetColor();
#if DEBUG
                Console.ReadLine();
#endif                
                return -1;
            }

            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Success!");
            Console.ResetColor();

            return 0;
        }
    }
}

The bit below is trying to pull the connection string for SQL Server out of the first argument passed from the terminal to the application, and if no arguments were passed in it falls back to a hardcoded value. This works great for our sample, but I would advise against the fallback value for production use. It is always a bad day when you think your migrations have run successfully but they ran against the wrong database because of a fallback value.

var connectionString = 
    args.FirstOrDefault() 
    ?? "Server=localhost; Database=WideWorldImporters; Trusted_connection=true";

This next section is where all the setup happens for which database to deploy to, where to find the scripts to run, and where to log. There are a lot of options provided by DbUp in this area and I recommend checking out the docs under the More Info section for the details.

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                 .LogToConsole()
                 .Build();

Finally, the following is where the scripts are actually executed against the database.

var result = upgrader.PerformUpgrade();

The rest of the function is dealing with displaying to the console the results of the scripts running.

Adding Scripts

Keep in mind that we are using the Embedded Scripts option so DbUp is going to find all the embedded files that end with ‘.sql’. I have added a Scripts directory to the project so the scripts don’t clutter the root of the project, but that isn’t a requirement. Do note that how the files are named controls the order in which the scripts get executed, so make sure you establish a good naming convention up front. For our sample, I added a file named 0001-TestScript.sql to the Scripts directory. Since we are using the embedded scripts provider it is critical that after adding the file we go to its properties and set the Build Action to Embedded resource.

The script file itself doesn’t do anything in this case except select a value.

Trying it out

At this point, we can hit Run in Visual Studio and it will execute our new script. The output will look something like the following.

If we were to run the application again it would tell us that no new scripts need to be executed. DbUp, like all migration-based solutions I have encountered, uses a table in the database to keep a record of which migrations have been run. The default table for DbUp is SchemaVersions and it consists of Id, ScriptName, and Applied columns. Given this structure, it is important to keep in mind that if you rename a script that has already been executed on a database, DbUp will see it as a different script and execute it again.
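
As an aside, if the default journal location doesn’t work for you, the SQL Server package can point it at a custom schema and table. A minimal sketch, assuming the JournalToSqlTable extension from the dbup-sqlserver package ("MigrationHistory" is just an example name):

var upgrader =
    DeployChanges.To
                 .SqlDatabase(connectionString)
                 .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                 .JournalToSqlTable("dbo", "MigrationHistory") // custom journal table
                 .LogToConsole()
                 .Build();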

Wrapping Up

In the docs for DbUp there is a philosophy section and one of the points is that a database is a result of a set of transitions, not just its new state vs. its old state. This point is key to why I am a fan of a migration based approach.

Check out the DbUp docs and GitHub repo for more information.

Summer Ramblings

Welcome to the summer edition of ramblings. I tend to do this type of post once a quarter so I have added a ramblings category to keep them better separated from my normal tech style posts.

COVID-19

COVID-19 is still raging across the world. Here in the United States, the case count seems to be growing at a faster and faster rate as the push for reopening continues. Keep in mind I’m writing this in mid-July so things may have changed drastically by the time this publishes. We have multiple at-risk family members and the longer this goes on the more the stress builds up. I’m sure that is true for most people. All we can do is continue to be as safe as we can and hope others are doing the same.

Working from home

Working from home continues to go well. I do miss the random interactions of the office and have a much more limited set of people I interact with on a daily basis, mostly based on what project I’m working on. I think this has added to my feelings of isolation. I would say my interactions these days are 80% to 90% Microsoft Teams based with the rest being Zoom. It is amazing to me how much Teams has changed since this pandemic started, especially the video features. I highly recommend giving Teams a shot if you haven’t before.

The lack of a commute is still the best part of working from home. I also tend to get more focus time than I do in the office. It seems people are more likely to schedule a meeting for something vs just popping in.

Blogging

Not a lot has changed on the blogging front since the Spring Ramblings post. Posts have continued to be focused around Azure DevOps with a little bit of GitHub Actions mixed in. I want to get back to more programming topics. .NET 5 is fast approaching and I need to dig in and see if I can find some topics there. I have also been kicking around giving Python a shot and if I do I’m sure I will end up with some posts from that.

Wrapping Up

Here’s to hoping that by the time I do another post like this, things will have improved. Thanks for reading!

Azure DevOps Multi-Stage Pipelines: Require Stage Approval

In last week’s post, we covered taking our existing build pipeline and making it a multi-stage Pipeline with a build stage and a deploy stage. This week we are going to add another stage to our pipeline for production. Since we don’t want the production stage deployed before it has been through QA we will need to hold the stage until it is verified ready, which is what this post is going to be about. If you haven’t read last week’s post, Azure DevOps Pipelines: Multi-Stage Pipelines, you might want to start there before reading the rest of this post if you are new to multi-stage pipelines.


Add an Environment

To require approval on a stage, you associate the stage with an environment and add the approval requirement to the environment. In Azure DevOps under Pipelines select Environments and then click the Create environment button.

On the New environment dialog fill in a Name. If you had actual resources associated with the environment they can be added to provide traceability, but in this example, we are going to stick with the None option.

Require Approval for an Environment

Now that the environment has been created, on its details page we can use the three dots to open the menu and click Approvals and checks.

On the next screen click the + button in the upper right corner, then from the list of checks select Approvals and click Next. As you can see from the partial list in the screenshot, the range of checks available is massive.

The next dialog is used to select the users or groups who should be able to perform approvals for the environment. When your approvers are set, click Create to finish.

Use an Environment in a Pipeline

If you are following along from a previous post, note that the Deploy stage has been renamed to QA to make the Pipeline results clearer.

Before:
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
      - script: echo Fake deploying code

After:
- stage: QA
  jobs:
  - job: DeployQa
    steps:
      - script: echo Deploying to QA

Now we are going to add a new stage for our production environment. Notice that instead of a normal job we are using a deployment job which enables us to specify our desired environment. Deployment jobs have a ton more features than we are using so make sure and check out the docs to see what other options are available.

- stage: Production
  jobs:
  - deployment: DeployProduction
    environment: 'Production'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to Production

Save and run the Pipeline and we will look at how this new stage presents differently than the Build and QA stages.

Pipeline Results

As you can see in the following screenshot, the results of the Pipeline run have a section notifying us that the Production stage can’t run until it has been reviewed. Also, notice in the Stages section that the Production stage shows a Waiting status.

Clicking the Review button will show a dialog that will allow you to approve or reject the stage that is waiting.

Wrapping Up

The extra layer of options provided by environments enables most of the scenarios that I missed when I first started playing with multi-stage Pipelines. Having the build and release steps for an app in source control, and the added ability to vary them by branch, makes this worth it even if I do have to deal with more YAML.

Azure DevOps Pipelines: Multi-Stage Pipelines

The last couple of posts have been dealing with Releases managed from the Releases area under Azure Pipelines. This week we are going to take what we were doing in that separate area of Azure DevOps and instead make it part of the YAML that currently builds our application. If you need some background on how the project got to this point check out the following posts.

Getting Started with Azure DevOps
Pipeline Creation in Azure DevOps
Azure DevOps Publish Artifacts for ASP.NET Core
Azure DevOps Pipelines: Multiple Jobs in YAML
Azure DevOps Pipelines: Reusable YAML
Azure DevOps Pipelines: Use YAML Across Repos
Azure DevOps Pipelines: Conditionals in YAML
Azure DevOps Pipelines: Naming and Tagging
Azure DevOps Pipelines: Manual Tagging
Azure DevOps Pipelines: Depends On with Conditionals in YAML
Azure DevOps Pipelines: PowerShell Task
Azure DevOps Releases: Auto Create New Release After Pipeline Build
Azure DevOps Releases: Auto Create Release with Pull Requests

Recap

The current setup uses a YAML-based Azure Pipeline to build a couple of ASP.NET Core web applications. Then on the Release side, we have basically a dummy release that doesn’t actually do anything but serves as a demo of how to configure a continuous deployment type release. The following is the current YAML for our Pipeline for reference.

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:      
  repositories: 
  - repository: Shared
    name: Playground/Shared
    type: git 
    ref: master #branch name

trigger: none

variables:
  buildConfiguration: 'Release'

jobs:
- job: WebApp1
  displayName: 'Build WebApp1'
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\'

  - template: build.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1

- job: WebApp2
  displayName: 'Build WebApp2'
  condition: and(succeeded(), eq(variables['BuildWebApp2'], 'true'))
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - template: build.yml
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp2.csproj
      artifactName: WebApp2
      
- job: DependentJob
  displayName: 'Build Dependent Job'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2

  steps:
  - template: build.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1Again

- job: TagSources
  displayName: 'Tag Sources'
  pool:
    vmImage: 'ubuntu-latest'

  dependsOn:
  - WebApp1
  - WebApp2
  - DependentJob
  condition: |
    and
    (
      eq(dependencies.WebApp1.result, 'Succeeded'),
      in(dependencies.WebApp2.result, 'Succeeded', 'Skipped'),
      in(dependencies.DependentJob.result, 'Succeeded', 'Skipped')
    )
 
  steps:
  - checkout: self
    persistCredentials: true
    clean: true
    fetchDepth: 1

  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        $env:GIT_REDIRECT_STDERR = '2>&1'
        $tag = "manual_$(Build.BuildNumber)".replace(' ', '_')
        git tag $tag
        Write-Host "Successfully created tag $tag" 

        git push --tags
         Write-Host "Successfully pushed tag $tag"     

      failOnStderr: false

The above setup works great, but in April of this year Azure Pipelines got the concept of multi-stage Pipelines, which gives us the ability to manage the Release side of things in the same YAML as our builds and allows releases to be source controlled and varied per branch in the same way that YAML builds can be.

Simplified Build YAML

The above is the full YAML for our sample builds, which is a lot of code. The following is a pared-down version, which we will use for the rest of this post, that only builds WebApp1 and should help the changes stand out.

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:      
  repositories: 
  - repository: Shared
    name: Playground/Shared
    type: git 
    ref: master #branch name

trigger: none

variables:
  buildConfiguration: 'Release'

jobs:
- job: WebApp1
  displayName: 'Build WebApp1'
  pool:
    vmImage: 'ubuntu-latest'

  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\'

  - template: build.yml@Shared
    parameters:
      buildConFiguration: $(buildConfiguration)
      project: WebApp1.csproj
      artifactName: WebApp1

Adding Stages

Stages are an extra layer of grouping that helps divide a Pipeline, similar to how jobs work but at a higher level. Jobs are a group of steps, and stages are a group of jobs. In the following YAML, you can see that our existing jobs have been grouped under a Build stage and a new Deploy stage has been added.

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

resources:      
  repositories: 
  - repository: Shared
    name: Playground/Shared
    type: git 
    ref: master #branch name

trigger: none

variables:
  buildConfiguration: 'Release'

stages:
- stage: Build
  jobs:
  - job: WebApp1
    displayName: 'Build WebApp1'
    pool:
      vmImage: 'ubuntu-latest'

    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: 'Get-ChildItem -Path Env:\'

    - template: build.yml@Shared
      parameters:
        buildConFiguration: $(buildConfiguration)
        project: WebApp1.csproj
        artifactName: WebApp1

- stage: Deploy
  jobs:
  - job: Deploy
    steps:
      - script: echo Fake deploying code

When adding stages, watch your whitespace; it is easy to get the spacing of your existing code wrong when wrapping it in stages.

Results

After running the Pipeline with the above changes you will see on the Pipeline’s summary page that it will display the results of each stage.

In the detailed view of a specific Pipeline run, there will now be a Stages tab that shows the results by stage. If you hit the expander on a stage it will also give you an option to rerun a stage if you ever have that need.

Wrapping Up

Hopefully, this will help you get a jump start on setting up your own multi-stage Pipelines. While I’m still not in love with YAML it is nice to have builds and releases in source control with the ability to vary by branch when you have the need. This setup works great if you want all your stages to run every time. A follow-up post will look at how to make a stage that requires approval.

Azure DevOps Releases: Auto Create Release with Pull Requests

Last week we covered auto-creating a Release when a build completes. This week we are going to cover how to create a release when a build from a pull request completes. This setup is helpful for verifying changes before they actually make it into a releasable branch. The following posts will help you catch up if you’re new to the series.

Getting Started with Azure DevOps
Pipeline Creation in Azure DevOps
Azure DevOps Publish Artifacts for ASP.NET Core
Azure DevOps Pipelines: Multiple Jobs in YAML
Azure DevOps Pipelines: Reusable YAML
Azure DevOps Pipelines: Use YAML Across Repos
Azure DevOps Pipelines: Conditionals in YAML
Azure DevOps Pipelines: Naming and Tagging
Azure DevOps Pipelines: Manual Tagging
Azure DevOps Pipelines: Depends On with Conditionals in YAML
Azure DevOps Pipelines: PowerShell Task
Azure DevOps Releases: Auto Create New Release After Pipeline Build

Build Validation Branch Policy

Before we can have a Release created with a pull request, we have to make sure that the pull request process does a build. I’m going to review how to do this quickly; for more info see my Branch Policies post. To do this we are going to head over to the Repos section of Azure DevOps. In the Branches section, on the branch we want pull request builds for, select the three dots and then click Branch policies.

On the repo settings page scroll down to the Build Validation section and click the + button to add a build to the pull request process.

The Add build policy dialog has a few options, but we are taking the defaults. Do note that if you have multiple Build pipelines, make sure to adjust that option to the correct build. Click Save when you’re done.

Release Changes to allow Pull Request Trigger

Based on how we set up our Release to trigger when a build completes, you might think that using the same build to validate a pull request would automatically trigger a new Release, but that isn’t the case. In most cases, this is actually a good thing since you wouldn’t want to deploy changes before they have been reviewed. In the case we are trying to cover, the release is to a QA environment so the requested changes can be verified before they make it into a releasable branch.

To enable a Release to be created from a pull request we need to head over to the Pipeline > Release area in Azure DevOps. Once there with the release in question selected click the Edit button.

In the Artifacts section, click the lightning bolt to edit the continuous deployment triggers.

Near the middle of the dialog, we want to Enable the Pull request trigger. Doing this will also require you to enter Target Branch Filters, which are the branches that will be allowed to trigger a release when they are the target of a pull request.

Next, we need to enable our sample stage to be deployed for releases based on pull requests. Click the lightning bolt on the left side of the stage to edit the pre-deployment conditions.

On the dialog that shows, Enable the Pull request deployment setting.

After closing the dialog make sure and Save the release.

Results

To show the results I created a new branch with a small change and created a PR into the master branch. From the Pull Request, we can click the View all checks button to see the status of the required build.

On the Checks dialog, you can see at the bottom that our sample release ran successfully.

If you click on the release you can see in the artifacts section that the files being used are from the pull request’s merge branch, not the branch being PRed or the target branch.

Wrapping Up

Building and releasing on a pull request opens up a lot of options, especially around making sure your code is verified before making it into a release branch.