Azure Pipelines: Release to Azure App Service

This post uses the same Azure DevOps project from last week's Azure Repos and Azure Pipelines post, which already has a build pipeline, and adds a release pipeline that deploys to an Azure App Service.

This walkthrough assumes you have an Azure account already set up using the same email address as your Azure DevOps account. If you don't have an Azure account, sign up for an Azure Free Account.

Build Changes

The build set up in last week's post proved that the code built, but it didn't actually do anything with the results of that build. To use the results of the build, we need to publish the resulting files, which will be needed later when setting up the release pipeline.

To get the results we are looking for, a few steps must be added to our build. All of the changes are made in azure-pipelines.yml. The following is my full YAML file with the new tasks.

pool:
  vmImage: 'Ubuntu 16.04'

variables:
  buildConfiguration: 'Release'

steps:
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/EfSqlite.csproj'
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'

As you can see in the above YAML, this build now has three different steps. The build (equivalent to what the post from last week was doing) and publish (which gets all the files in the right places and, by default, zips the output into the artifact staging directory) tasks are both handled using the DotNetCoreCLI@2 task. Finally, the PublishBuildArtifacts@1 task takes the contents of the artifact staging directory and uploads them as a build artifact that can be used in a release pipeline.

Create an Azure App Service

Feel free to skip this section if you have an existing App Service to use. To create a new App Service open the Azure Portal and select App Services from the left navigation.

Next, click the Add button.

On the next page, we are going to select Web App.

Now hit the Create button.

On the next page, you will need to enter an App name and select the OS and Runtime Stack before hitting the Create button. The OS and Runtime Stack should match the target of your application.
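
If you prefer the command line over the portal, the Azure CLI can create the same kind of resources. The following is a rough sketch rather than exact steps from this walkthrough: the resource group, plan, and app names are placeholders, and the runtime string should be verified with az webapp list-runtimes --linux.

# Resource group to hold everything (name and location are examples)
az group create --name PipelinesExample --location eastus

# Linux App Service plan (pick the SKU that fits your needs)
az appservice plan create --name PipelinesExamplePlan --resource-group PipelinesExample --sku B1 --is-linux

# The web app itself, targeting the .NET Core 2.1 runtime
az webapp create --name YourAppName --resource-group PipelinesExample --plan PipelinesExamplePlan --runtime "DOTNETCORE|2.1"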

Create a Release Pipeline

In your project, from the left navigation select Pipelines > Releases and then click the New pipeline button. If you already have a release pipeline set up, this page will look different.

The next page has a list of templates you can select from. In this example, we will select Azure App Service deployment and then click the Apply button.

Artifact Set Up

After clicking Apply you will land on the pipeline overview page, which has two sets of information. The first is the Artifacts section, which for us is going to be the results of the build pipeline we set up in last week's post. Click the Add an artifact box.

The Add an artifact dialog is where you select where the release will get its build artifact from. We will be using the source type of Build and selecting our existing build pipeline.

Once you select your build pipeline as the Source, a few more controls will show up. I took the defaults on all of them. Take note of the box highlighted in the screenshot, as it will give you a warning if your build is missing artifacts. Click the Add button to complete.

Stage Setup

Above we selected the Azure App Service template, which is now in our pipeline as Stage 1. Notice the red exclamation mark, which means the stage has some issues that need to be addressed before it can be used. Click on the stage box to open it.

As you can see in the following screenshot, the settings that are missing are highlighted in red on the Deploy Azure App Service task. On Stage 1, click the Unlink all button and confirm. Doing this means there is more setup on the Deploy Azure App Service task, but it is the only way to use .NET Core 2.1 at the time of this writing. For some reason, the highest version available at the Stage level for Linux is .NET Core 2.0.

After clicking Unlink all, the options other than the name of the stage will be removed. Next, click on the Deploy Azure App Service task, which handles the bulk of the work for this pipeline. There are a lot of settings on this task. Here is a screenshot of my setup; I will call out the important bits after.

First, select your Azure subscription. You may be required to authorize your account, so if you see an Authorize button, click it and go through the sign-in steps for your Azure account.

Take special note of App type. In this sample, we are using Linux, so it is important to select Linux App from the drop-down instead of just Web App.

With your Azure subscription and App type selected, the App Service name drop-down should only let you select Linux-based App Services that exist on your subscription.

For Image Source, I went with the Built-in Image, but there is also the option to use a container from a registry if the built-in image doesn't meet your needs.

For Package or folder, the default should work if you only have a single project. Since I have two projects, I used the file browser (the … button) to select the specific zip file I want to deploy.

Runtime Stack needs to be .NET Core 2.1 for this application.

Startup command needs to be set to tell the .NET CLI to run the assembly that is the entry point for the application. In this example, that works out to be dotnet EfSqlite.dll.

After all the settings have been entered hit the Save button in the top right of the screen.

Execute Release Pipeline

Navigate back to Pipelines > Releases and select the release pipeline you want to run. Then click the Create a release button.

On the next page, all you have to do is select the Artifact you want to deploy and then click the Create button.

Clicking Create will start the release process and send you back to the main release page. There will be a link at the top of the release page that you can click to see the status of the release.

The following is the result of my release after it completed.

Wrapping Up

It took me more trial and error to get this setup going than I would have hoped, but once all the pieces are set up the results are awesome. At the very minimum, I recommend taking the time to set up a build that is triggered when code is checked into your repos. Having the feedback that a build is broken as soon as the code hits the repo, versus finding out when you need a deliverable, will do wonders for your peace of mind.

Negative Responses

After 3.5 years I finally had a post that resulted in a negative reaction from a fair number of people. The morning after the .NET Parameterized Queries Issues with SQL Server Temp Tables post went live I woke up with 4 comments waiting for approval. Before I read them this was exciting. I thought that I had hit on an issue that a lot of people had faced. I was wrong.

The Negative

Three of the four comments were about that specific person's view of best practices and how my post was a poor example, something no seasoned developer would ever dare use.

I replied to all the comments and tried to clarify the points of misunderstanding. The post was based on a real issue we faced, one where the cause was not at all clear. The post was meant to be simple in order to show the point, and I did my best to communicate that to the commenters. To say the least, this was a hard day for me.

The post now has something like 33 comments, which is way more than any of my other posts. I think that, for the most part, the commenters are now clearer on what I was going for. The straight-up negativity around the post in question is something I hope to mostly avoid in the future.

The Positive

On the plus side, on the day in question I hit a new record for single-day views on my blog. The previous record was from 3 years ago when I had a post featured on the ASP.NET homepage.

While less visible on my blog, I also got a lot of support from people letting me know that they understood my point and thought it was clear. The post has also appeared in a blog and newsletter that I highly respect. I really appreciate everyone who took the time to reach out and remind me that this type of thing will happen and to stay positive.

Future Strategies

If something like this happens again I will do my best to keep it in perspective. I want honest feedback on my posts. I welcome people posting issues and us working through them. It results in better information for my readers which at the end of the day is why I spend so much time doing this blog every week.

I think it is human nature for negative comments to stand out. It will be important for me to temper that initial negative reaction with the positive side of things.

Azure Repos and Azure Pipelines

Last week's post looked at using GitHub with Azure Pipelines. This week I'm going to take the same project and walk through adding it to Azure Repos and setting up a build using Azure Pipelines. I will be using the code from this GitHub repo, minus the azure-pipelines.yml file that was added in last week's post.

Creating a Project

I’m not going to walk you through the sign-up process, but if you don’t have an account you can sign up for Azure DevOps here. Click the Create project button in the upper right corner.

On the dialog that shows enter a project name. I’m using the same name that was on the GitHub repo that the code came from. Click Create to continue.

Adding to the Repo

After the project finishes the creation process use the menu on the left and select Repos.

Since the project doesn’t currently have any files the Repos page lists a number of options for getting files added. I’m going to use the Clone in VS Code option and then copy the files from my GitHub repo. If I weren’t trying to avoid including the pipeline yaml file an easier option would be to use the Import function and clone the repo directly.

I'm not going to go through the details of using one of the above methods to get the sample code into Azure Repos. It is a Git-based repo, and the options for getting code uploaded are outlined on the page above.

Set up a build

Now that we have code in a repo, you should see the view change to something close to the following screenshot. Hit the Set up build button to start the process of creating a build pipeline for the new repo. This should be pretty close to the last half of last week's post, but I want to include it here so this post can stand alone.

On the next page select Azure Repos as the source of code.

Next, select the repo that needs to be built for the new pipeline.

Template selection is next. Based on your code a template will be suggested, but don’t just take the default. For whatever reason, it suggested a .NET Desktop template for my sample which is actually ASP.NET Core based. Select your template to move on to the next step.

The next screen will show you the YAML that will be used to build your code. My repo contains two projects, so I had to tweak the YAML to tell it which project to build, but otherwise the default would have worked. After you have made any changes that your project needs, click Save and run.

The last step before the actual build is to commit the build YAML file to your Azure Repo. Make any changes you need on the dialog and then click Save and run to start the first build of your project.

The next page will show you the status of the build in real time. When the build is complete you should see something like the following with the results.

Wrapping Up

As expected, using Azure Repos with Azure Pipelines works great. If you haven't yet, give Azure DevOps a try. Microsoft has a vast offering with this set of products, and it is consistently getting better and better.

GitHub and Azure Pipelines

A few weeks ago Microsoft announced that Visual Studio Team Services was being replaced/rebranded by a collection of services under the brand Azure DevOps. One of the services that make up Azure DevOps is Azure Pipelines which provides a platform for continuous integration and continuous delivery for a huge number of languages on Windows, Linux, and Mac.

As part of this change, Azure Pipelines is now available on the GitHub marketplace. In this post, I am going to pick one of my existing repos and see if I can get it building from GitHub using Azure Pipelines. I’m sure Microsoft or GitHub has documentation, but I’m attempting this without outside sources.

GitHub Marketplace

Make sure you have a GitHub account with a repo you want to build. For this post, I’m going to be using my ASP.NET Core Entity Framework repo. Now that you have the basic prep out of the way head over to the GitHub Marketplace and search for Azure Pipelines or click here.

Scroll to the bottom of the page to the Pricing and setup section. The paid option is selected by default, so click the Free option and then click Install it for free.

On the next page, you will get a summary of your order. Click the Complete order and begin installation button.

On the next page, you can select which repos to apply the installation to. For this post, I’m going to select a single repo. After making your choice on repos click the Install button.

Azure DevOps

After clicking install you will be taken into the account authorization/creation process with Microsoft. After getting authorized, you will reach the first step in the setup process with Azure. You will need to select an organization and a project to continue. If you don't have these set up yet, there are options to create them.

After the process completes, you will land on the New pipeline creation process where you need to select the repo to use. Clicking the repo you want to use will move you to the next step.

The next step is template selection. My sample is an ASP.NET Core application, so I selected the ASP.NET Core template. Selecting a template will move you to the next step.

The next page will show you a YAML file based on the template you selected. Make any changes your project requires (my repo has two projects, so I had to change the build to point to the project I wanted to build, as in the sketch below).
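
For reference, the tweak boils down to pointing the build task's projects input at the specific project file. A minimal sketch of the relevant step, using the project from my repo (yours will differ):

- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/EfSqlite.csproj'
    arguments: '--configuration $(BuildConfiguration)'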

Next, you will be prompted to commit the yaml file to source control. Select your options and click Save and run.

After your configuration gets saved a build will be queued. If all goes well you will see your app being built. If everything works you will see something like this build results page.

Wrapping Up

GitHub and Microsoft have done a great job on this integration. I was surprised at how smooth the setup was. It was also neat to see a project that I created on Windows being built on Linux.

If you have a public repo on GitHub and need a way to build it, give Azure Pipelines a try.

Entity Framework Core: Logging

The other day I had to dig into some performance issues around a process that is using Entity Framework Core. As part of the process, I needed to see the queries generated by Entity Framework Core to make sure they were not the source of the issue (they were not). I'm going to be making these changes using the Contacts project from my ASP.NET Core Basics repo if you want to see where I started from.

First, we will cover adding a logging provider. Next, I’m going to show you what I came up with and then I will show you the method suggested by the Microsoft docs (which I didn’t find until later).

Logging Providers

First, we need to pick how we want the information logged. A good starting place is one of the logging providers Microsoft supplies, which can be found on NuGet. Right-click on the project you want to add the logging to and click Manage NuGet Packages.

In the search box enter Microsoft.Extensions.Logging for a good list of logging options. For this post, we will be using the console logger provided by Microsoft. Select Microsoft.Extensions.Logging.Console and then click the Install button on the upper right side of the screen.
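
If you prefer the command line, the same package can be added from the project directory with the .NET CLI.

dotnet add package Microsoft.Extensions.Logging.Console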

First Go

For my first try at this, all the changes are in the ConfigureServices function of the Startup class. The following is the code I added at the end of the function that will log all the queries to the console window (if you are using IIS Express then use the Debug logger instead).

var scopeFactory = services.BuildServiceProvider()
                           .GetRequiredService<IServiceScopeFactory>();

using (var scope = scopeFactory.CreateScope())
{
    using (var context = scope.ServiceProvider
                              .GetRequiredService<ContactsContext>())
    {
        var loggerFactory = context.GetInfrastructure()
                                   .GetService<ILoggerFactory>();
        loggerFactory.AddProvider(new ConsoleLoggerProvider((_, __) => true, true));
    }
}

This code creates a scope to get an instance of the ContactsContext and then uses the context to get its associated logger factory and add a console logger to it. This isn't the cleanest approach in the world, but it gets the job done, especially if this is just for a quick debug session and not something that will stay.

Microsoft Way

While the above works, I ended up finding a logging page in the Entity Framework Core docs. After undoing the changes made above, open the ContactsContext (or whatever your DbContext is) and add a class-level static variable for a logger factory. This class-level variable is used to prevent memory and performance issues that would be caused by creating a new instance of the logging classes every time a context is created.

public static readonly LoggerFactory LoggerFactory = 
       new LoggerFactory(new[] {new ConsoleLoggerProvider((_, __) => true, true)});

Next, add/update an override of OnConfiguring to use the logger factory defined above. The following is the full function in my case.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    base.OnConfiguring(optionsBuilder);

    optionsBuilder.UseLoggerFactory(LoggerFactory);
}

The Output

Either way, the following is an example of the output you will get with logging on.

The query is highlighted in the red box above. As you can see, there is a lot of output, but there are options for filtering, which are detailed in the docs; a sketch of one such filter follows.
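
As a quick example of that filtering, the filter lambda passed to ConsoleLoggerProvider can be changed to only log the SQL commands Entity Framework Core sends to the database. This is a sketch based on the Microsoft Way setup above, not the only way to filter:

public static readonly LoggerFactory LoggerFactory =
       new LoggerFactory(new[]
       {
           // Only show the generated SQL (the Database.Command category) at Information level
           new ConsoleLoggerProvider((category, level) =>
               category == DbLoggerCategory.Database.Command.Name &&
               level == LogLevel.Information, true)
       });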

Wrapping Up

Entity Framework Core does a great job, but the above gives you an option to check in on what it is doing. If you are using SQL Server you could also get the queries using SQL Server Profiler.

Back from Disney World

Today is my first day back from being at Disney World for a week with my family. We had a wonderful time. It was awesome to see our 6-year-old light up seeing all the sights and rides for the first time.

This trip was also the first full week of vacation I have had in years. There was a lot going on at work and I found it really hard to unplug. I can't tell you how many times I checked my email and Teams while waiting in line for one thing or another. I'm sure this amounted to less than 1% of the time I was off, but I feel like even that level of work is more than enough to lose some of the mental recovery that vacation should provide.

Future Strategies

I think the biggest thing I could do in the future is different timing. This trip ended up being during a client go-live, which greatly added to the feeling that I needed to stay connected in case something went wrong.

Second, I need to take more time to ensure I'm not the single point of knowledge on any subject. The one piece of work I did end up having to do was because I forgot to convey some information about how our QA team was running a service.

Third, I just need to take more time off. As I said above this is the first time in years that I have taken a full week off. This has been due to family medical issues and it has gotten me in the habit of saving all the time off I can to cover times I need to be out to help out the family. Now that things have calmed down I am going to have to adjust to being able to have time off and disconnecting.

Wrapping Up

I’m still in the process of getting back into the swing of things and this topic was on my mind. I would love to hear how you all deal with this issue.

.NET Parameterized Queries Issues with SQL Server Temp Tables

In the last few weeks at work, I have had multiple people run into issues using a parameterized query in .NET that involved a temp table. It took a little bit of digging, but we finally tracked down the issue. This post is going to cover the cause of the issue as well as a couple of ways to fix it. The database used in this post is the Wide World Importers sample database from Microsoft. For instructions on setting it up, check out my Getting a Sample SQL Server Database post from last week.

Sample Project Creation

To keep things as simple as possible I am using a console application created using the following .NET CLI command.

dotnet new console

Followed by this command to add in the SQL Client from Nuget.

dotnet add package System.Data.SqlClient

The following is the full Program class with the sample code that will result in the exception this post is dealing with. Yes, I am aware this isn't the best way to structure this type of code, so please don't judge it from that aspect. It is meant to be simple to demonstrate the issue.

using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Running sample");

        using (var connection = 
                  new SqlConnection(@"Data Source=YourServer;
                                      Initial Catalog=YourDatabase;
                                      Integrated Security=SSPI;"))
        {
            connection.Open();

            using (var command = connection.CreateCommand())
            {
                SqlTest(command);
            }
        }

        Console.ReadLine();
    }

    private static void SqlTest(SqlCommand command)
    {
        command.CommandText = @"SELECT OrderId
                                      ,CustomerId
                                      ,SalespersonPersonID
                                      ,BackorderOrderId
                                      ,OrderDate
                                INTO #backorders
                                FROM Sales.Orders
                                WHERE BackorderOrderID IS NOT NULL
                                  AND OrderDate > @OrderDateFilter";

        command.Parameters.Add("@OrderDateFilter", 
                                SqlDbType.DateTime)
                          .Value = DateTime.Now.AddYears(-1);
        command.ExecuteNonQuery();

        command.CommandText = "SELECT OrderId FROM #backorders";

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(reader["OrderId"]);
            }
        }
    }
}

The Error

Running the application as it exists above will result in the following error.

Invalid object name ‘#backorders’.

Strange error, since we just created the #backorders temp table. Let's give it a try without the filter. The query now looks like the following.

command.CommandText = @"SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        INTO #backorders
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL";

command.ExecuteNonQuery();

Now the application runs without any issues. What if we try adding back the filter, but without using the command parameter?

command.CommandText = @"SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        INTO #backorders
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL
                          AND OrderDate > '2018-01-01'";

command.ExecuteNonQuery();

Again the application runs without any issues.

The Reason

Why is it that adding a command parameter causes our temp table to disappear? I discovered the issue by using SQL Server Profiler (in SQL Server Management Studio it can be found in Tools > SQL Server Profiler). With the code back to the original version with the command parameter, and Profiler connected to the same server as the sample application, running the sample application shows the following command received by SQL Server.

exec sp_executesql N'SELECT OrderId 
                     FROM #backorders',
                   N'@OrderDateFilter datetime',
                   @OrderDateFilter='2017-08-28 06:41:37.457'

It turns out that when you use command parameters in .NET, the query gets executed on SQL Server using the sp_executesql stored procedure. This was the key bit of information I was missing. Since parameterized queries are executed in the scope of a stored procedure, the temp table created in our first query only exists within that scope and is gone by the time the second query runs. You can reproduce the same behavior directly in Management Studio, as shown below.
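
Here is a quick way to see the same scoping behavior for yourself (the table name is just for illustration):

-- The temp table only exists inside this sp_executesql call
EXEC sp_executesql N'SELECT 1 AS Id INTO #scope_test';

-- Back in the outer batch the table is gone
SELECT * FROM #scope_test; -- Invalid object name '#scope_test'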

Options to Fix

The first option is to not use parameters on your initial data pull. I don’t recommend this option. Parameters provide a level of protection that we don’t want to lose.

The second option, and the way we addressed this issue, is to create the temp table first. Because the temp table is created outside of a stored procedure, it is scoped to the connection, which then allows us to insert the data using parameters. The following code is our sample using this strategy.

command.CommandText = @"CREATE TABLE #backorders
                        (
                           OrderId int
                          ,CustomerId int
                          ,SalespersonPersonID int
                          ,BackorderOrderID int
                          ,OrderDate date
                        )";

command.ExecuteNonQuery();

command.CommandText = @"INSERT INTO #backorders
                        SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL
                          AND OrderDate > @OrderDateFilter";

command.Parameters.Add("@OrderDateFilter", 
                       SqlDbType.DateTime)
                  .Value = DateTime.Now.AddYears(-1);
command.ExecuteNonQuery();

Wrapping Up

I hope this saves someone some time. Once I understood what was going on the issue made sense. How .NET deals with a SQL command with parameters is one of those things that just always worked and I never had the need to dig into until now. Always something new to learn which is one of the reasons I love what I do.

Getting a Sample SQL Server Database

As tends to happen, this isn't the post I set out to write today. We hit an issue at work the other day that I want to write about, but to do so I need a sample database (I can't use the data from work). I wanted something that is more fleshed out than my normal single-table contact database. I googled for AdventureWorks, since that is the sample database I have always seen in examples.

Options

It turns out that AdventureWorks isn't the only SQL Server sample database option in town these days. Microsoft has a SQL Server Samples repo that has three different options depending on your needs. The following is a description of, and links to, all three options as they stand right now.

wide-world-importers

The new sample database for SQL Server 2016 and Azure SQL Database. It illustrates the core capabilities of SQL Server 2016 and Azure SQL Database, for transaction processing (OLTP), data warehousing and analytics (OLAP) workloads, as well as hybrid transaction and analytics processing (HTAP) workloads.

contoso-data-warehouse

Sample data warehouse that illustrates loading data into Azure SQL Data Warehouse.

AdventureWorks

Sample databases and Analysis Services models for use with SQL Server.

Since I’m not dealing with a data warehouse the middle option is out. AdventureWorks is still a valid option, but I just can’t pass up WideWorldImporters which is the newest sample used to show off SQL Server 2016.

Getting Started

Make sure you have both SQL Server and SQL Server Management Studio installed. For SQL Server we will be using the on-premises version. I recommend grabbing the Developer edition, as it provides the full set of features for free as long as it isn't used in production. Once you have both of the above installed, it is time to download the database-related files.

As of this writing, this GitHub release page contains the latest bits to get started with, and yes it is from June of 2016. There are a lot of options listed, but the file we are interested in is WideWorldImporters-Full.bak. If you are looking to explore some of the new features of SQL Server 2016 you should also grab sample-scripts.zip to play with, but this isn’t going to be covered in this post.

Restoring the Database

Now that we have the backup of the database we are going to restore, it is time to open up SQL Server Management Studio. In the connection dialog, connect to the server you want the database to end up on. In this case, I'm connecting to a SQL Server instance running on my local PC. When you are all set hit the Connect button.

Once connected you should see the selected server in the Object Explorer window.

Right-click on the Databases folder (or node if you prefer) and click Restore Database.

This will launch the Restore Database dialog. This dialog is full of stuff, but we are going to focus on the minimum needed to restore a database from a backup file. First, select the Device option and then click the … button.

This will show the Select backup devices dialog. We want to use the File type and then click the Add button.

On the next screen, you need to enter the path to your backup file. I’m not sure why, but this isn’t the easiest dialog for finding a file. I found it easier to use Windows Explorer to find the directory the backup file is in and copy it to the Backup File Location. Once you have the right directory select the backup file and click OK.

Back on the Select backup devices dialog click the OK button to continue. This will land you back on the Restore Database dialog which will now display information from the backup that was selected. Click the OK button to start the restore process.

After a minute or so the restore process should complete. If you expand the Databases node back in the Object Explorer window you should see the restored database listed.
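
If you would rather script the restore than click through the dialogs, a plain T-SQL restore looks something like the following. The backup path is an assumption; if the data and log paths stored in the backup don't exist on your machine you will also need the MOVE option, and RESTORE FILELISTONLY will show the logical file names to use with it.

-- Adjust the path to wherever you saved the backup file
RESTORE DATABASE WideWorldImporters
FROM DISK = N'C:\Backups\WideWorldImporters-Full.bak'
WITH RECOVERY;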

Data Generation

The WideWorldImporters database comes with a stored procedure that will generate current data for you to work with. This isn’t a fast process so be prepared to wait if you decide to run this. To start open a new query window for the WideWorldImporters database by right-clicking and selecting New Query.

If you run the following query it will start the data generation process.

EXECUTE DataLoadSimulation.PopulateDataToCurrentDate
    @AverageNumberOfCustomerOrdersPerDay = 60,
    @SaturdayPercentageOfNormalWorkDay = 50,
    @SundayPercentageOfNormalWorkDay = 0,
    @IsSilentMode = 1,
    @AreDatesPrinted = 1;

The official docs have more details on data generation. From the docs, data generation will take about 10 minutes per year of data and starts from 2016, so you are looking at a 30-minute minimum runtime (and based on my test, that 10-minutes-per-year number is much too low). Do note that the backup comes with data, so this process is only needed if you want recent (date-wise) data.

Wrapping Up

The above process wasn't hard, but if you are like me and haven't worked much with database backups, hopefully you found this post helpful. I'm sure you will find the sample database helpful for trying out some of the newer features of SQL Server or as a source of data that is OK to use publicly.

Make sure you check out the official docs pages for Wide World Importers. They have a lot more information as well as instructions for other workload use cases such as data warehousing.

Controlling .NET Core’s SDK Version

Recently I was building a sample application and noticed a build warning that I was using a preview version of the .NET Core SDK.

You can see from the screenshot that the build shows zero warnings and zero errors, but if you read the full build text you will notice the following message.

NETSDK1057: You are working with a preview version of the .NET Core SDK. You can define the SDK version via a global.json file in the current project. More at https://go.microsoft.com/fwlink/?linkid=869452 [C:\sdkTest\sdkTest.csproj]

That will teach me not to just look at the end results of a build. I didn't explicitly install the preview version I have been accidentally using, but I'm pretty sure it got installed with the Visual Studio Preview I have installed.

Setting the SDK version for a project

Following the link in the message from above will take you to the docs page for global.json which will allow you to specify what version of the SDK you want to use. Using the following command will give you a list of the SDK versions you have installed.

dotnet --list-sdks

On my machine, I have the following 5 SDKs installed.

2.1.201 [C:\Program Files\dotnet\sdk]
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.302 [C:\Program Files\dotnet\sdk]
2.1.400-preview-009063 [C:\Program Files\dotnet\sdk]
2.1.400-preview-009171 [C:\Program Files\dotnet\sdk]

For this project, I really want to use version 2.1.302 which is the newest non-preview version. Using the following .NET CLI command will create a global.json file targeting the version we want.

dotnet new globaljson --sdk-version 2.1.302

If you open the new global.json file you will see the following which has the SDK version set to the value we specified. If you use the above command without specifying a version it will use the latest version installed on your machine, including preview versions.

{
  "sdk": {
    "version": "2.1.302"
  }
}

Now if you run the build command you will see that the warning is gone.

global.json Location

So far we have been using global.json from within a project directory, but that isn't the only option for its location. For example, if I moved the global.json from the C:\sdkTest directory to C:\, then any .NET Core-based application on the C drive would use the version specified in that top-level global.json unless it specified its own version (or had another global.json somewhere up its folder structure before reaching the root of the C drive). A quick way to see which SDK a directory resolves to is shown below.
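
Running the following command from a directory reports the SDK version that will be used there, honoring whichever global.json the matching rules find. With the global.json above in place, it should print 2.1.302.

dotnet --version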

For the full details see the matching rules in the official docs.

Wrapping Up

I'm not sure if anyone else is in this boat, but until I saw the build warning from above, the need to control the SDK version never crossed my mind. Thankfully the teams at Microsoft have made the ability to lock to a version of the SDK simple. I could see this being especially useful if you are required to stick with a long-term support version of the SDK, which would currently require you to be on version 1.0 or 1.1.

Entity Framework Core 2.1: Data Seeding

I am taking some time to explore some of the new features that came out with the .NET Core 2.1 release and this post is going to be a continuation of that process. The following are links to the other posts in this same vein.

Host ASP.NET Core Application as a Windows Service
ASP.NET Core 2.1: ActionResult<T>

Today we are going to be looking at a new feature added to Entity Framework to allow for data seeding. I am using the official docs for this feature as a reference.

Sample Application

We are going to use the .NET CLI to create a new application using the MVC template with individual authentication, since it is one of the templates that comes with Entity Framework already set up. If you have an existing project and need to add Entity Framework you can check out this post. The following is the command I used to create my project.

dotnet new mvc --auth Individual

Model

Before we get to data seeding we need to create an entity to seed. In this example, we will be creating a contact entity (surprise!). In the Models directory, I created a Contacts.cs file with the following contents.

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

Next, in the Data directory, we are going to open the ApplicationDbContext class and add a DbSet for our new Contact entity. Add the following property to the class.

public DbSet<Contact> Contacts { get; set; }

Now that we have our DbContext set up, let's add a migration for this new Contact entity using the following .NET CLI command.

dotnet ef migrations add Contacts -o Data/Migrations

Finally, run the following command to create/update the database for this application.

dotnet ef database update

Data Seeding

Now that our project is set up, we can move on to actual data seeding. In Entity Framework Core, data seeding is done in the OnModelCreating function of your DbContext class. In this example that is the ApplicationDbContext class. The following example shows using the new HasData method to add seed data for the Contact entity.

protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    builder.Entity<Contact>().HasData(
        new Contact 
        {
            Id = 1,
            Name = "Eric",
            Address = "100 Main St",
            City = "Hometown",
            State = "TN",
            Zip = "153789"
        }
    );
}

Data seeding is handled via migrations in Entity Framework Core, which is a big difference from previous versions. In order to get our seed data to show up, we will need to create a migration which can be done using the following command.

dotnet ef migrations add ContactSeedData -o Data/Migrations

Then apply the migration to your database using the following command.

dotnet ef database update

Looking at the code that the migration created you can see that it is just inserting the data.

public partial class ContactSeedData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.InsertData(
            table: "Contacts",
            columns: new[] { "Id", "Address", "City", "Name", "State", "Zip" },
            values: new object[] { 1, "100 Main St", "Hometown", "Eric", "TN",
                                   "153789" });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DeleteData(
            table: "Contacts",
            keyColumn: "Id",
            keyValue: 1);
    }
}

The above works great on new databases or new tables but can cause issues if you are trying to add seed data to an existing database. Check out Rehan’s post on Migrating to Entity Framework Core Seed Data for an option on how to deal with this.

Wrapping Up

I am very happy to see that we have a way to prepopulate data in Entity Framework Core; it will make some scenarios, such as mostly static data, much easier to deal with. One downside I see to the migration approach is the inability to have some built-in test data, since the migrations will always be applied to your databases.
