In software, there are many ways to accomplish the same thing, and one of the metrics we tend to use to decide on a course of action is how we feel one set of code will perform relative to another. The trouble is that determining which code actually performs better is tricky, and I feel that in general people make the choice based on a gut feeling rather than actual evidence. Even when a developer does have evidence, if it wasn't collected properly it isn't actually helpful. For example, measuring a piece of code while running in debug mode isn't a good indicator of how it will actually perform.
What is a developer to do? Well, this is where BenchmarkDotNet comes in. Here is how the project describes itself.
Benchmarking is really hard (especially microbenchmarking), you can easily make a mistake during performance measurements. BenchmarkDotNet will protect you from the common pitfalls (even for experienced developers) because it does all the dirty work for you: it generates an isolated project per each benchmark method, does several launches of this project, run multiple iterations of the method (include warm-up), and so on. Usually, you even shouldn’t care about a number of iterations because BenchmarkDotNet chooses it automatically to achieve the requested level of precision.
The rest of this post is going to cover creating a sample project using BenchmarkDotNet.
Sample Project
We will be using a new .NET Core console application which can be created using the following .NET CLI command.
dotnet new console
Next, run the following command to add the BenchmarkDotNet NuGet package.
dotnet add package BenchmarkDotNet
Now in the Main function of the Program class, we need to tell the application to run the benchmarks we are interested in. In this example, we are telling it to run the benchmarks in the Strings class.
// BenchmarkRunner comes from the BenchmarkDotNet.Running namespace.
public static void Main(string[] args)
{
    BenchmarkRunner.Run<Strings>();
}
Now in the Strings class, we have two functions marked with the Benchmark attribute, which is how the package identifies which functions to measure. For this example, we will be measuring the performance of two different ways to do case-insensitive string comparisons.
using System;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

public class Strings
{
    private readonly Dictionary<string, string> _stringsToTest = new Dictionary<string, string>
    {
        { "Test", "test" },
        { "7", "7" },
        { "A long string", "Does not match" },
        { "Testing", "Testing" },
        { "8", "2" }
    };

    // Case-insensitive comparison by lower-casing both strings and using ==.
    [Benchmark]
    public bool EqualsOperator()
    {
        var result = false;

        foreach (var (key, value) in _stringsToTest)
        {
            result = key.ToLower() == value.ToLower();
        }

        return result;
    }

    // Case-insensitive comparison using string.Equals with OrdinalIgnoreCase.
    [Benchmark]
    public bool EqualsFunction()
    {
        var result = false;

        foreach (var (key, value) in _stringsToTest)
        {
            result = string.Equals(key, value, StringComparison.OrdinalIgnoreCase);
        }

        return result;
    }
}
I’m sure there is a better way to set up data for test runs, but the above works for my first go at it.
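As a side note, BenchmarkDotNet does provide a GlobalSetup attribute for initializing data before any measurements start. The sketch below is just an illustration of what moving the dictionary into a setup method could look like; it is not how the code above is written.

using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

public class Strings
{
    private Dictionary<string, string> _stringsToTest;

    // Runs once before any benchmark iterations are measured,
    // keeping the data setup out of the measured code.
    [GlobalSetup]
    public void Setup()
    {
        _stringsToTest = new Dictionary<string, string>
        {
            { "Test", "test" },
            { "7", "7" },
            { "A long string", "Does not match" },
            { "Testing", "Testing" },
            { "8", "2" }
        };
    }

    // The Benchmark methods would stay the same as above.
}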
Results
Run the application in release mode and BenchmarkDotNet will print a summary table with the measured results for each benchmark.
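If you are using the .NET CLI, a release-mode run can be started with the following command (the -c switch is short for --configuration).

dotnet run -c Release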
Wrapping Up
Having a tool that takes the guesswork out of how operations perform is very valuable. This is one of those tools I really wish I had found years ago. The project is open source and can be found on GitHub.