.NET Core

Azure Pipelines: Release to Azure App Service

This post is going to use the same Azure DevOps project from last week’s Azure Repos and Azure Pipelines post, which already has a build pipeline, and add a release pipeline that deploys to an Azure App Service.

This walkthrough assumes you have an Azure account already set up using the same email address as your Azure DevOps account. If you don’t have an Azure account, sign up for an Azure Free Account.

Build Changes

The build set up in last week’s post proved that the code built, but it didn’t actually do anything with the results of that build. In order to use the results of the build, we need to publish the resulting files. This will be needed later when setting up the release pipeline.

In order to get the results we are looking for, a few steps must be added to our build. All the changes are made in azure-pipelines.yml. The following is my full YAML file with the new tasks.

pool:
  vmImage: 'Ubuntu 16.04'

variables:
  buildConfiguration: 'Release'

steps:
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '**/EfSqlite.csproj'
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'

As you can see in the above YAML, this build now has three different steps. The build task (equivalent to what last week’s post was doing) and the publish task (which gets all the files in the right places) are both handled using the DotNetCoreCLI@2 task. Finally, the PublishBuildArtifacts@1 task takes the results of the publish and zips them to the artifact staging directory where they can be used in a release pipeline.

Create an Azure App Service

Feel free to skip this section if you have an existing App Service to use. To create a new App Service open the Azure Portal and select App Services from the left navigation.

Next, click the Add button.

On the next page, we are going to select Web App.

Now hit the Create button.

On the next page you will need to enter an App name and select the OS and Runtime Stack before hitting the Create button. The OS and Runtime Stack should match the target of your application.

Create a Release Pipeline

From your project’s left navigation select Pipelines > Releases and then click the New pipeline button. If you already have a release pipeline set up this page will look different.

The next page has a list of templates you can select from. In this example, select Azure App Service deployment and then click the Apply button.

Artifact Set Up

After clicking Apply you will land on the pipeline overview page with two sets of information. The first is the Artifacts section, which for us is going to be the results of the build pipeline we set up in last week’s post. Click the Add an artifact box.

The Add an artifact dialog is where you select where the release will get its build artifact from. We will be using the Build source type and selecting our existing build pipeline.

Once you select your build pipeline as the Source, a few more controls will show up. I took the defaults on all of them. Take note of the box highlighted in the screenshot as it will give you a warning if your build is missing artifacts. Click the Add button to complete the artifact setup.

Stage Setup

Above we selected the Azure App Service deployment template, which is now in our pipeline as Stage 1. Notice the red exclamation mark, which means the stage has some issues that need to be addressed before it can be used. Click on the stage box to open it.

As you can see in the following screenshot, the settings that are missing are highlighted in red on the Deploy Azure App Service task. On Stage 1 click the Unlink all button and confirm. Doing this means there is more setup on the Deploy Azure App Service task, but it is the only way to use .NET Core 2.1 at the time of this writing. For some reason, the highest version available at the stage level for Linux is .NET Core 2.0.

After clicking Unlink all, the options other than the name of the stage will be removed. Next, click on the Deploy Azure App Service task, which handles the bulk of the work for this pipeline. There are a lot of settings on this task. Here is a screenshot of my setup, and I will call out the important bits after.

First, select your Azure subscription. You may be required to authorize your account, so if you see an Authorize button click it and go through the sign-in steps for your Azure account.

Take special note of App type. In this sample, we are using Linux, so it is important to select Linux App from the drop-down instead of just using Web App.

With your Azure subscription and App type selected, the App Service name drop-down should only let you select Linux-based App Services that exist in your subscription.

For Image Source, I went with the Built-in Image, but there is the option to use a container from a registry if the built-in image doesn’t meet your needs.

For Package or folder, the default should work if you only have a single project. Since I have two projects, I used the file browser (the … button) to select the specific zip file I wanted to deploy.

Runtime Stack needs to be .NET Core 2.1 for this application.

Startup command needs to be set up to tell the .NET CLI to run the assembly that is the entry point for the application. In the example, this works out to be dotnet EfSqlite.dll.

After all the settings have been entered hit the Save button in the top right of the screen.

Execute Release Pipeline

Navigate back to Pipelines > Releases and select the release pipeline you want to run. Then click the Create a release button.

On the next page, all you have to do is select the Artifact you want to deploy and then click the Create button.

Create will start the release process and send you back to the main release page. There will be a link at the top of the release page that you can click to see the status of the release.

The following is the result of my release after it completed.

Wrapping Up

It took me more trial and error to get this setup going than I would have hoped, but once all the pieces are set up the results are awesome. At the very minimum, I recommend taking the time to at least set up a build that is triggered when code is checked into your repos. Having the feedback that a build is broken as soon as the code hits the repo, versus finding out when you need a deliverable, will do wonders for your peace of mind.


Azure Repos and Azure Pipelines

Last week’s post looked at using GitHub with Azure Pipelines. This week I’m going to take the same project and walk through adding it to Azure Repos and setting up a build using Azure Pipelines. I will be using the code from this GitHub repo, minus the azure-pipelines.yml file that was added in last week’s post.

Creating a Project

I’m not going to walk you through the sign-up process, but if you don’t have an account you can sign up for Azure DevOps here. Once signed in, click the Create project button in the upper right corner.

On the dialog that shows enter a project name. I’m using the same name that was on the GitHub repo that the code came from. Click Create to continue.

Adding to the Repo

After the project finishes the creation process use the menu on the left and select Repos.

Since the project doesn’t currently have any files the Repos page lists a number of options for getting files added. I’m going to use the Clone in VS Code option and then copy the files from my GitHub repo. If I weren’t trying to avoid including the pipeline yaml file an easier option would be to use the Import function and clone the repo directly.

I’m not going to go through the details of using one of the above methods to get the sample code into Azure Repos. It is a git based repo and the options for getting code uploaded are outlined on the page above.

Set up a build

Now that there is code in the repo, you should see the view change to something close to the following screenshot. Hit the Set up build button to start the process of creating a build pipeline for the new repo. This should be pretty close to the last half of last week’s post, but I want to include it here so this post can stand alone.

On the next page select Azure Repos as the source of code.

Next, select the repo that needs to be built for the new pipeline.

Template selection is next. Based on your code a template will be suggested, but don’t just take the default. For whatever reason, it suggested a .NET Desktop template for my sample which is actually ASP.NET Core based. Select your template to move on to the next step.

The next screen will show you the YAML that will be used to build your code. My repo contains two projects so I had to tweak the YAML to tell it which project to build, but otherwise, the default would have worked. After you have made any changes that your project needs click Save and run.

The last step before the actual build is to commit the build YAML file to your Azure Repo. Make any changes you need on the dialog and then click Save and run to start the first build of your project.

The next page will show you the status of the build in real time. When the build is complete you should see something like the following with the results.

Wrapping Up

As expected, using Azure Repos with Azure Pipelines works great. If you haven’t yet, give Azure DevOps a try. Microsoft has a vast offering with this set of products, and they are consistently getting better and better.


GitHub and Azure Pipelines

A few weeks ago Microsoft announced that Visual Studio Team Services was being replaced/rebranded by a collection of services under the brand Azure DevOps. One of the services that make up Azure DevOps is Azure Pipelines which provides a platform for continuous integration and continuous delivery for a huge number of languages on Windows, Linux, and Mac.

As part of this change, Azure Pipelines is now available on the GitHub marketplace. In this post, I am going to pick one of my existing repos and see if I can get it building from GitHub using Azure Pipelines. I’m sure Microsoft or GitHub has documentation, but I’m attempting this without outside sources.

GitHub Marketplace

Make sure you have a GitHub account with a repo you want to build. For this post, I’m going to be using my ASP.NET Core Entity Framework repo. Now that you have the basic prep out of the way head over to the GitHub Marketplace and search for Azure Pipelines or click here.

Scroll to the bottom of the page to the Pricing and setup section. A paid option is selected by default. Click the Free option and then click Install it for free.

On the next page, you will get a summary of your order. Click the Complete order and begin installation button.

On the next page, you can select which repos to apply the installation to. For this post, I’m going to select a single repo. After making your choice on repos click the Install button.

Azure DevOps

After clicking install you will be taken through the account authorization/creation process with Microsoft. After getting authorized you will land on the first step of the setup process on the Azure DevOps side. You will need to select an organization and a project to continue. If you don’t have these set up yet there are options to create them.

After the process completes you will land on the New pipeline creation page where you need to select the repo to use. Clicking the repo you want to use will move you to the next step.

The next step is template selection. My sample is an ASP.NET Core application, so I selected the ASP.NET Core template. Selecting a template will move you to the next step.

The next page will show you a YAML file based on the template you selected. Make any changes your project requires (my repo has two projects so I had to change the build to point to the project I wanted to build).

Next, you will be prompted to commit the YAML file to source control. Select your options and click Save and run.

After your configuration is saved a build will be queued. If all goes well you will see your app being built and, when it finishes, something like this build results page.

Wrapping Up

GitHub and Microsoft have done a great job on this integration. I was surprised at how smooth the setup was. It was also neat to see a project that I created on Windows being built on Linux.

If you have a public repo on GitHub and need a way to build it, give Azure Pipelines a try.


Entity Framework Core: Logging

The other day I was having to dig into some performance issues around a process that is using Entity Framework Core. As part of the investigation, I needed to see the queries generated by Entity Framework Core to make sure they were not the source of the issue (they were not). I’m going to be making these changes using the Contacts project from my ASP.NET Core Basics repo if you want to see where I started from.

First, we will cover adding a logging provider. Next, I’m going to show you what I came up with and then I will show you the method suggested by the Microsoft docs (which I didn’t find until later).

Logging Providers

First, we need to pick how we want the information logged. A good starting place is one of the providers Microsoft supplies, which can be found on NuGet. Right-click on the project you want to add the logging to and click Manage NuGet Packages.

In the search box enter Microsoft.Extensions.Logging for a good list of logging options. For this post, we will be using the console logger provided by Microsoft. Select Microsoft.Extensions.Logging.Console and then click the Install button on the upper right side of the screen.
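If you prefer the command line, the same package can be added with the .NET CLI by running something like the following from the project’s directory.

dotnet add package Microsoft.Extensions.Logging.Console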

First Go

For my first try at this, all the changes are in the ConfigureServices function of the Startup class. The following is the code I added at the end of the function to log all the queries to the console window (if you are using IIS Express then use the debug logger instead).

var scopeFactory = services.BuildServiceProvider()
                           .GetRequiredService<IServiceScopeFactory>();

using (var scope = scopeFactory.CreateScope())
{
    using (var context = scope.ServiceProvider
                              .GetRequiredService<ContactsContext>())
    {
        var loggerFactory = context.GetInfrastructure()
                                   .GetService<ILoggerFactory>();
        loggerFactory.AddProvider(new ConsoleLoggerProvider((_, __) => true, true));
    }
}

This code creates a scope to get an instance of the ContactsContext, uses the context to get its associated logger factory, and adds a console logger to it. This isn’t the cleanest code in the world, but it gets the job done, especially if this is just for a quick debug session and not something that will stay.

Microsoft Way

While the above works, I ended up finding a logging page in the Entity Framework Core docs. After undoing the changes made above, open the ContactsContext (or whatever your DbContext is) and add a class-level static variable for a logger factory. This class-level variable prevents the memory and performance issues that would be caused by creating a new instance of the logging classes every time a context is created.

public static readonly LoggerFactory LoggerFactory = 
       new LoggerFactory(new[] {new ConsoleLoggerProvider((_, __) => true, true)});

Next, add or update an override of OnConfiguring to use the logger factory defined above. The following is the full function in my case.

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    base.OnConfiguring(optionsBuilder);

    optionsBuilder.UseLoggerFactory(LoggerFactory);
}

The Output

Either way, the following is an example of the output you will get with logging on.

The query is highlighted in the red box above. As you can see there is a lot of output, but there are options for filtering which are detailed in the docs.
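For example, the filter passed to the ConsoleLoggerProvider can be tightened so that only the SQL commands Entity Framework Core sends to the database are logged. The following is a minimal sketch along the lines of what the docs describe, using the same static LoggerFactory field from the Microsoft Way section above.

public static readonly LoggerFactory LoggerFactory =
       new LoggerFactory(new[]
       {
           // Only pass through command-level messages (the generated SQL)
           new ConsoleLoggerProvider((category, level) =>
               category == DbLoggerCategory.Database.Command.Name &&
               level == LogLevel.Information, true)
       });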

Wrapping Up

Entity Framework Core does a great job, but the above gives you an option to check in on what it is doing. If you are using SQL Server you could also get the queries using SQL Server Profiler.


.NET Parameterized Queries Issues with SQL Server Temp Tables

In the last few weeks at work, I have had multiple people run into issues using a parameterized query in .NET that involved a temp table. It took a little bit of digging, but we finally tracked down the issue. This post is going to cover the cause of the issue as well as a couple of ways to fix it. The database used in this post is the Wide World Importers sample database from Microsoft. For instructions on setting it up check out my Getting a Sample SQL Server Database post from last week.

Sample Project Creation

To keep things as simple as possible I am using a console application created using the following .NET CLI command.

dotnet new console

Followed by this command to add the SQL client package from NuGet.

dotnet add package System.Data.SqlClient

The following is the full Program class with the sample code that will result in the exception this post is dealing with. Yes, I am aware this isn’t the best way to structure this type of code, so please don’t judge it from that aspect. It is meant to be simple to demonstrate the issue.

using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Running sample");

        using (var connection = 
                  new SqlConnection(@"Data Source=YourServer;
                                      Initial Catalog=YourDatabase;
                                      Integrated Security=SSPI;"))
        {
            connection.Open();

            using (var command = connection.CreateCommand())
            {
                SqlTest(command);
            }
        }

        Console.ReadLine();
    }

    private static void SqlTest(SqlCommand command)
    {
        command.CommandText = @"SELECT OrderId
                                      ,CustomerId
                                      ,SalespersonPersonID
                                      ,BackorderOrderId
                                      ,OrderDate
                                INTO #backorders
                                FROM Sales.Orders
                                WHERE BackorderOrderID IS NOT NULL
                                  AND OrderDate > @OrderDateFilter";

        command.Parameters.Add("@OrderDateFilter", 
                                SqlDbType.DateTime)
                          .Value = DateTime.Now.AddYears(-1);
        command.ExecuteNonQuery();

        command.CommandText = "SELECT OrderId FROM #backorders";

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(reader["OrderId"]);
            }
        }
    }
}

The Error

Running the application as it exists above will result in the following error.

Invalid object name ‘#backorders’.

Strange error since we just created the #backorders temp table. Let’s give it a try without the filter. The query now looks like the following.

command.CommandText = @"SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        INTO #backorders
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL";

command.ExecuteNonQuery();

Now the application runs without any issues. What if we try adding back the filter, but without using the command parameter?

command.CommandText = @"SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        INTO #backorders
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL
                          AND OrderDate > '2018-01-01'";

command.ExecuteNonQuery();

Again the application runs without any issues.

The Reason

Why does adding a command parameter cause our temp table to disappear? I discovered the issue by using SQL Server Profiler (in SQL Server Management Studio it can be found under Tools > SQL Server Profiler). With the code back to the original version with the command parameter, and with Profiler connected to the same server as the sample application, running the sample application shows the following command received by SQL Server.

exec sp_executesql N'SELECT OrderId 
                     FROM #backorders',
                   N'@OrderDateFilter datetime',
                   @OrderDateFilter='2017-08-28 06:41:37.457'

It turns out that when you use command parameters in .NET, the command gets executed on SQL Server via the sp_executesql stored procedure. This was the key bit of information I was missing. Since parameterized queries are executed in the scope of a stored procedure, the temp table created in our first query is limited to the scope of the stored procedure that created it and is gone by the time the second query runs.

Options to Fix

The first option is to not use parameters on your initial data pull. I don’t recommend this option since parameters provide a level of protection (against SQL injection, for example) that we don’t want to lose.

The second option, and the way we addressed this issue, is to create the temp table first. Since the temp table is then created outside of a stored procedure, it is scoped to the connection, which allows us to insert the data using parameters. The following code is our sample using this strategy.

command.CommandText = @"CREATE TABLE #backorders
                        (
                           OrderId int
                          ,CustomerId int
                          ,SalespersonPersonID int
                          ,BackorderOrderID int
                          ,OrderDate date
                        )";

command.ExecuteNonQuery();

command.CommandText = @"INSERT INTO #backorders
                        SELECT OrderId
                              ,CustomerId
                              ,SalespersonPersonID
                              ,BackorderOrderId
                              ,OrderDate
                        FROM Sales.Orders
                        WHERE BackorderOrderID IS NOT NULL
                          AND OrderDate > @OrderDateFilter";

command.Parameters.Add("@OrderDateFilter", 
                       SqlDbType.DateTime)
                  .Value = DateTime.Now.AddYears(-1);
command.ExecuteNonQuery();

Wrapping Up

I hope this saves someone some time. Once I understood what was going on the issue made sense. How .NET deals with a SQL command with parameters is one of those things that just always worked and I never had the need to dig into until now. Always something new to learn which is one of the reasons I love what I do.


Controlling .NET Core’s SDK Version

Recently I was building a sample application and noticed a build warning that I was using a preview version of the .NET Core SDK.

You can see from the screenshot that the build shows zero warnings and zero errors, but if you read the full build text you will notice the following message.

NETSDK1057: You are working with a preview version of the .NET Core SDK. You can define the SDK version via a global.json file in the current project. More at https://go.microsoft.com/fwlink/?linkid=869452 [C:\sdkTest\sdkTest.csproj]

That will teach me not to just look at the end result of a build. I didn’t explicitly install the preview version I have been accidentally using, but I’m pretty sure it came along with the Visual Studio Preview I have installed.

Setting the SDK version for a project

Following the link in the message from above will take you to the docs page for global.json which will allow you to specify what version of the SDK you want to use. Using the following command will give you a list of the SDK versions you have installed.

dotnet --list-sdks

On my machine, I have the following 5 SDKs installed.

2.1.201 [C:\Program Files\dotnet\sdk]
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.302 [C:\Program Files\dotnet\sdk]
2.1.400-preview-009063 [C:\Program Files\dotnet\sdk]
2.1.400-preview-009171 [C:\Program Files\dotnet\sdk]

For this project, I really want to use version 2.1.302 which is the newest non-preview version. Using the following .NET CLI command will create a global.json file targeting the version we want.

dotnet new globaljson --sdk-version 2.1.302

If you open the new global.json file you will see the following which has the SDK version set to the value we specified. If you use the above command without specifying a version it will use the latest version installed on your machine, including preview versions.

{
  "sdk": {
    "version": "2.1.302"
  }
}

Now if you run the build command you will see that the warning is gone.
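You can also check which SDK version a directory resolves to by running the following command from that directory. With the global.json in place it should report 2.1.302.

dotnet --version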

global.json Location

So far we have been using global.json from within a project directory, but that isn’t the only option for its location. For example, if I moved the global.json from the C:\sdkTest directory to C:\, then any .NET Core based application on the C drive would use the version specified in that top-level global.json unless it specified its own version (or had another global.json somewhere up the project’s folder structure before reaching the root of the C drive).

For the full details see the matching rules in the official docs.

Wrapping Up

I’m not sure if anyone else is in this boat, but until I saw the build warning above the need to control the SDK version never crossed my mind. Thankfully the teams at Microsoft have made locking to a version of the SDK simple. I could see this being especially useful if you are required to stick with a long-term support version of the SDK, which would currently require you to be on version 1.0 or 1.1.


Entity Framework Core 2.1: Data Seeding

I am taking some time to explore some of the new features that came out with the .NET Core 2.1 release and this post is going to be a continuation of that process. The following are links to the other posts in this same vein.

Host ASP.NET Core Application as a Windows Service
ASP.NET Core 2.1: ActionResult<T>

Today we are going to be looking at a new feature added to Entity Framework to allow for data seeding. I am using the official docs for this feature as a reference.

Sample Application

We are going to use the .NET CLI to create a new application using the MVC template with individual authentication since it is one of the templates that comes with Entity Framework already set up. If you have an existing project and need to add Entity Framework you can check out this post. The following is the command I used to create my project.

dotnet new mvc --auth Individual

Model

Before we get to data seeding we need to create an entity to seed. In this example, we will be creating a contact entity (surprise!). In the Models directory, I created a Contacts.cs file with the following contents.

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

Next, in the Data directory, we are going to open the ApplicationDbContext class and add a DbSet for our new Contact entity. Add the following property to the class.

public DbSet<Contact> Contacts { get; set; }

Now that we have our DbContext set up, let’s add a migration for this new Contact entity using the following .NET CLI command.

dotnet ef migrations add Contacts -o Data/Migrations

Finally, run the following command to create/update the database for this application.

dotnet ef database update

Data Seeding

Now that our project is set up, we can move on to the actual data seeding. In Entity Framework Core, data seeding is done in the OnModelCreating function of your DbContext class. In this example that is the ApplicationDbContext class. The following example shows using the new HasData method to add seed data for the Contact entity.

protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    builder.Entity<Contact>().HasData(
        new Contact 
        {
            Id = 1,
            Name = "Eric",
            Address = "100 Main St",
            City = "Hometown",
            State = "TN",
            Zip = "153789"
        }
    );
}
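One thing to note is that HasData requires the primary key value to be supplied for each entity, and multiple records can be seeded by passing more instances to the same call. The following is a quick sketch with made-up contacts just to show the shape of the call.

builder.Entity<Contact>().HasData(
    new Contact
    {
        Id = 2,
        Name = "Second Contact",
        Address = "200 Main St",
        City = "Hometown",
        State = "TN",
        Zip = "153789"
    },
    new Contact
    {
        Id = 3,
        Name = "Third Contact",
        Address = "300 Main St",
        City = "Hometown",
        State = "TN",
        Zip = "153789"
    }
);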

Data seeding is handled via migrations in Entity Framework Core, which is a big difference from previous versions. In order to get our seed data to show up, we will need to create a migration which can be done using the following command.

dotnet ef migrations add ContactSeedData -o Data/Migrations

Then apply the migration to your database using the following command.

dotnet ef database update

Looking at the code that the migration created you can see that it is just inserting the data.

public partial class ContactSeedData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.InsertData(
            table: "Contacts",
            columns: new[] { "Id", "Address", "City", "Name", "State", "Zip" },
            values: new object[] { 1, "100 Main St", "Hometown", "Eric", "TN",
                                   "153789" });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DeleteData(
            table: "Contacts",
            keyColumn: "Id",
            keyValue: 1);
    }
}

The above works great on new databases or new tables but can cause issues if you are trying to add seed data to an existing database. Check out Rehan’s post on Migrating to Entity Framework Core Seed Data for an option on how to deal with this.

Wrapping up

I am very happy to see that we have a way to prepopulate data in Entity Framework Core; it will make some scenarios, such as mostly static data, much easier to deal with. One downside I see to the migration approach is the inability to have seed data that is only for testing, since the migrations will always be applied to your databases.


ASP.NET Core 2.1: ActionResult

This post is going to take the Contacts API from my ASP.NET Basics set of posts and move it from using IActionResult to ActionResult<T>, which was introduced with the 2.1 release. The changes are really simple, but if you are using OpenAPI/Swagger I have a callout later in the post about something I noticed. The code before any changes can be found here.

IActionResult vs ActionResult<T>

The official docs explain the three different ways to return data in an API which are a specific type, IActionResult type, or ActionResult<T> type.

A specific type is great if you don’t have to do any sort of validation or the like, but as soon as you need to return an HTTP status other than OK it is no longer sufficient. This is where you would have to move to IActionResult.
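For reference, a specific type version of a read action might look something like the following. This is a hypothetical listing action based on the sample’s context, not code from the existing project.

public async Task<IEnumerable<Contact>> GetContacts()
{
    // No validation needed, so the list can be returned directly
    return await _context.Contact.ToListAsync();
}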

IActionResult allows different HTTP statuses to be returned. In the following example, NotFound is returned if a contact with the supplied ID isn’t found or OK(contact) if a contact is found.

public async Task<IActionResult> GetContact([FromRoute] int id)
{
     var contact = await _context.Contact
                                 .SingleOrDefaultAsync(m => m.Id == id);

     if (contact == null)
     {
        return NotFound();
     }
    
     return Ok(contact);
}

The advantage of ActionResult<T> is that the return type of the function is clear. You can see in the following example, where GetContact has been changed to use ActionResult<T>, that if all goes well you will be dealing with a Contact object in the end without the need to wrap the result in an OK.

public async Task<ActionResult<Contact>> GetContact([FromRoute] int id)
{
     var contact = await _context.Contact
                             .SingleOrDefaultAsync(m => m.Id == id);

     if (contact == null)
     {
        return NotFound();
    }

    return contact;
}

OpenAPI/Swagger

If you are using OpenAPI/Swagger in your project, a function with the following definition will automatically have its return type picked up once you switch to ActionResult<T>.

public async Task<ActionResult<Contact>> GetContact([FromRoute] int id)

The above function results in the following in OpenAPI/Swagger UI.

This is awesome and saves you from having to add ProducesResponseType attributes to your API functions. Just note that as soon as you do add a ProducesResponseType for, say, a NotFound response, you will still need to include a response for OK with the proper type or you will lose the return type in the OpenAPI/Swagger UI.
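For example, something along the lines of the following (the attributes are illustrative, not part of the existing sample) keeps the Contact type in the UI while also documenting the 404.

[ProducesResponseType(typeof(Contact), 200)]
[ProducesResponseType(404)]
public async Task<ActionResult<Contact>> GetContact([FromRoute] int id)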

I’m calling that last bit out because I spent time trying to figure out why the return type was automatically picked up in all the samples I saw, but not in my sample application.

Wrapping Up

I’m a huge fan of ActionResult<T> mostly because of the clarity it adds to API function definitions. The fact that OpenAPI/Swagger can pick up on it in the simple cases is an added bonus.

If you are looking for more info check out the Exploring ActionResult<T> in ASP.NET Core 2.1 post by Joonas Westlin in which there is more info on how the functionality is actually implemented. If you didn’t already make sure and check out the Controller action return types in ASP.NET Core Web API page in the official docs for a detailed comparison of the return type options for APIs.

The completed code can be found here.


Host ASP.NET Core Application as a Windows Service

.NET Core 2.1 had a ton of cool stuff in it. David Fowler did a bunch of tweets a while back on some of the hidden gems in this release and one that really jumped out at me was the ability to host an ASP.NET Core application in a Windows Service. This post is going to walk through creating a new ASP.NET Core application and then making the changes needed for it to run as a Windows Service. I pulled most of the information I needed from the official docs.

Project Creation

We will be using the .NET CLI to create the project, but you can also use Visual Studio if you like. Open a command prompt in the directory where you want the project created and run the following commands.

dotnet new razor --no-https
dotnet new sln
dotnet sln add WindowsServiceHosted.csproj

Open the new solution in Visual Studio.

Project File Changes

Right-click on your project file and select Edit.

The first step is to add a runtime identifier. The docs are using win7-x64 so we are going to use the same. I did try using win and win7, but they don’t work since there isn’t a specific runtime associated with them. In this same step, we are going to add a reference to the Microsoft.AspNetCore.Hosting.WindowsServices NuGet package.

Before:
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>

</Project>

After:
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <RuntimeIdentifier>win7-x64</RuntimeIdentifier>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.AspNetCore.Hosting.WindowsServices" Version="2.1.1" />
  </ItemGroup>

</Project>

Program Class Changes

Open the project’s Program class. In the Main function, the Run call on the web host needs to change to RunAsService.

Before:
CreateWebHostBuilder(args).Build().Run();

After:
CreateWebHostBuilder(args).Build().RunAsService();

Next, in the CreateWebHostBuilder function, we need to change the content root to be the directory of the application. We are using the Process class to pull the filename and using that to get the directory the process is running in.

Before:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

After:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseContentRoot(Path
                        .GetDirectoryName(Process
                                          .GetCurrentProcess()
                                          .MainModule
                                          .FileName))
        .UseStartup<Startup>();

Publish

In the pre-core versions this step could have been skipped, but for .NET Core the only way to get an actual exe out of your project that can be installed is to publish the application. You can either use the .NET CLI or Visual Studio to publish the application. I’m going to use the following .NET CLI command run from the same directory as the project file.

dotnet publish

Since I didn’t specify a configuration value the project was built in debug and ended up in the bin\Debug\netcoreapp2.1\win7-x64\publish directory.
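For an actual deployment you would most likely want a release build instead, which should just be a matter of passing the configuration to the publish command (the output then lands under bin\Release instead).

dotnet publish -c Release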

Installation

Open a command prompt in admin mode and run the following command to create a Windows service. The binPath needs to be the full path to your exe or your service will fail to start even though it is created successfully.

sc create WindowsServiceHosted binPath= "C:\WindowsServiceHosted\bin\Debug\netcoreapp2.1\win7-x64\publish\WindowsServiceHosted.exe"

Also, note that the space after binPath= and before the exe name is needed.

Service Management

Now that the service is installed run the following command to start it.

sc start WindowsServiceHosted

After the service is started you can open a browser and go to http://localhost:5000/ and see your application running.

To check the state of your service use the following command.

sc query WindowsServiceHosted

To stop your service use the following command.

sc stop WindowsServiceHosted

Finally, to uninstall your service use the following command.

sc delete WindowsServiceHosted

Debugging

While debugging a Windows Service can be done, it is a pain. Thankfully the docs walk us through a way to run the application normally when a debugger is attached, but this does require more changes in the Program class.

First, change the CreateWebHostBuilder back to its original state.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>();

Next, in the Main function, we have to decide if we are running as a service or not and if so switch around how we start the application.

public static void Main(string[] args)
{
    var runAsService = !Debugger.IsAttached;
    var builder = CreateWebHostBuilder(args);

    if (runAsService)
    {
        builder.UseContentRoot(Path
                               .GetDirectoryName(Process
                                                 .GetCurrentProcess()
                                                 .MainModule.FileName));
    }

    var host = builder.Build();

    if (runAsService)
    {
        host.RunAsService();
    }
    else
    {
        host.Run();
    }
}

For this example, we are going to run as a service only if the debugger isn’t attached. You can see how the rest of the code uses this runAsService boolean to switch between the setup needed for a service and that of a normal web application host.

Wrapping Up

I’m very happy to have the ability to host an ASP.NET Core application as a Windows Service. This seems to be a case I run into a lot more than one would think.

I hope to see tools that simplify Windows Services, like Topshelf, add support for .NET Core 2.1 (this issue has links to all the issues related to 2.1 if you want to check on the progress). It will be nice to have .NET Core Windows Services with the same level of support as the previous versions of .NET.


ASP.NET Core Basics: Blazor

Blazor is a single page application framework that uses .NET running in the browser through the magic of WebAssembly and Mono. Check out How Blazor runs .NET in the browser for more information. You can think of Blazor like you do Angular or React, but written in .NET.

Please make note that this project is still an unsupported experiment and shouldn’t be used for production applications. This is part of the reason I held off on trying it until now.

I will be adding the Blazor project to my ASP.NET Core Basics repo. The code before any changes can be found here.

Getting Started

For an experiment, Blazor already has some really good docs. A lot of this post is going to mirror the Get started with Blazor docs page, but I try and document as I go. First, make sure you have the latest version of .NET Core 2.1.x SDK installed.

If you are going to be using Visual Studio you will need at least version 15.7. In addition, you will need to install the Blazor Language Services extension. From within Visual Studio, this can be done from the Tools > Extensions and Updates menu. Select Online and search for Blazor. It should come up with just one item and you can click Download to start the install process.

Project Creation

We are going to use the .NET CLI to install the Blazor templates and add the resulting project to the ASP.NET Core Basics solution. First, open a command prompt and use the following command to get the Blazor template installed.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

If you are following along with the sample code, I am running the commands from a new Blazor directory inside the existing src directory. Then run the following command to create the Blazor project.

dotnet new blazor

The next command will add the new project to the existing solution. The path to the solution file isn’t required if the command is being run from the same directory as the solution file, but in our case, we are running the commands two levels down from where the solution file is located.

dotnet sln "../../ASP.NET Core Basics.sln" add Blazor.csproj

At this point, you can use the following command to run the application.

dotnet run

Adding the Contact List

As with the other posts in the ASP.NET Basics category, we are going to add a contact list page that pulls a list of contacts from an API. For this example, the API is part of the Contacts project that already exists in the sample solution. All the changes in this section take place in the Blazor project we created above.

In the Pages directory add a ContactList.cshtml file with the following contents. A breakdown of some of the parts of this file will follow.

@page "/contacts"
@inject HttpClient Http

<h1>Contact List</h1>

@if (contacts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>ID</th>
                <th>Name</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var contact in contacts)
            {
                <tr>
                    <td>@contact.Id</td>
                    <td>@contact.Name</td>
                </tr>
            }
        </tbody>
    </table>
}

@functions {
    Contact[] contacts;

    protected override async Task OnInitAsync()
    {
        contacts = await Http.GetJsonAsync<Contact[]>("http://localhost:13322/api/contactsApi/");
    }

    class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string PostalCode { get; set; }
        public string Phone { get; set; }
        public string Email { get; set; }
    }
}

This file has a lot going on, so we are going to start with the functions section, which is essentially a way to block off the code needed by the page. It can contain functions, classes, or whatever other C# constructs the page may need. The following is just the functions section of the above page.

@functions {
    Contact[] contacts;

    protected override async Task OnInitAsync()
    {
        contacts = await Http.GetJsonAsync<Contact[]>("http://localhost:13322/api/contactsApi/");
    }

    class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string PostalCode { get; set; }
        public string Phone { get; set; }
        public string Email { get; set; }
    }
}

In our sample, we are using the functions section to define a Contact class to hold the data we get back from the API call made in the OnInitAsync function. OnInitAsync is one of the lifecycle functions provided by Blazor components. We aren’t going to dive into components here; I recommend you check out the component docs for more information.

The @page "/contacts" directive defines the route the page will be served at. Next, @inject HttpClient Http shows how to use ASP.NET Core’s dependency injection to get an instance of HttpClient.

The remaining bit of the file is normal Razor. Everything defined in the functions section is available for use in your markup. For example, we are using the list of contacts filled by OnInitAsync to loop over and create a table that displays our contact list.

<h1>Contact List</h1>

@if (contacts == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>ID</th>
                <th>Name</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var contact in contacts)
            {
                <tr>
                    <td>@contact.Id</td>
                    <td>@contact.Name</td>
                </tr>
            }
        </tbody>
    </table>
}

Wrapping Up

Blazor is a very interesting concept. I hope it is able to make it through the experimental phase and become a supported project, as it seems like it would be a great option for .NET developers. It was a lot of fun to finally try out. Go give it a try and make sure to provide feedback to Microsoft so they can make an informed decision on how to proceed with the project.

The finished code can be found here.
