
Performance testing

pmaytak edited this page Dec 10, 2020 · 24 revisions

The Microsoft.Client.Test.Performance project uses the BenchmarkDotNet library for performance testing of MSAL methods. AcquireTokenForClientLargeCacheTests.cs contains a benchmark for the AcquireTokenForClient method, run against a ConfidentialClientApplication with 100k items in its token cache.

This performance test project is a console app. Behind the scenes, when the project is run, BenchmarkDotNet builds and outputs the test project into a temporary working directory, then spawns a separate process in which all benchmark measurements are taken.

BenchmarkDotNet is highly customizable. Benchmarks are set up similarly to unit tests, using attributes, and can be parameterized. Global and iteration setup and cleanup methods can be used to prepare the environment before the actual measurements run. The number of times a benchmark runs can be customized, although the defaults are recommended, since BenchmarkDotNet does its own pre-processing to find the optimal number of runs. The How it works guide describes the steps BenchmarkDotNet takes to run the benchmarks. BenchmarkDotNet supports running tests on multiple frameworks.
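The attribute-based setup described above might look like the following sketch, loosely modeled on AcquireTokenForClientLargeCacheTests.cs. The class, member names, and setup body are illustrative assumptions, not the repository's actual code; only the `[Params]`, `[GlobalSetup]`, and `[Benchmark]` attributes are standard BenchmarkDotNet API.

```csharp
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Microsoft.Identity.Client;

public class AcquireTokenForClientCacheBenchmark
{
    private IConfidentialClientApplication _cca;

    // Each value yields a separate row in the summary table.
    [Params(100, 1000, 10000, 100000)]
    public int TokenCacheSize { get; set; }

    // Runs once per parameter value, before any measurements are taken.
    [GlobalSetup]
    public void GlobalSetup()
    {
        // Build the app and pre-populate its token cache with
        // TokenCacheSize entries (details omitted in this sketch).
    }

    // The method whose execution time is measured.
    [Benchmark]
    public async Task AcquireTokenForClientTestAsync()
    {
        await _cca.AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
            .ExecuteAsync()
            .ConfigureAwait(false);
    }
}
```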

Running tests

There are two ways to run the tests. The first:

  • Build Microsoft.Client.Test.Performance in Release mode.
  • Go to the {project directory}/bin/Release/{framework directory}/ and run the project executable.

The second:

  • Go to the project directory.
  • Run `dotnet run -c Release` in the console window.

A BenchmarkDotNet.Artifacts folder with the exported results will be created in the directory from which the executable was run.
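The two options above can be sketched as shell commands. The framework directory and executable name are illustrative; the actual names depend on the target framework and the project's output settings.

```shell
# Option 1: build in Release mode, then run the produced executable.
dotnet build -c Release
cd bin/Release/netcoreapp3.1          # framework directory name will vary
./Microsoft.Client.Test.Performance   # or the .exe on Windows

# Option 2: build and run in one step from the project directory.
dotnet run -c Release
```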

The test project can be run multiple times using the methods above and the results aggregated manually. Alternatively, call the WithLaunchCount(this Job job, int count) extension method in Program.cs when setting up the BenchmarkDotNet job; this specifies how many times BenchmarkDotNet will launch the benchmark process.
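A minimal Program.cs sketch of the WithLaunchCount approach is shown below. The config API used (DefaultConfig, AddJob, Job.Default, BenchmarkRunner.Run) is standard BenchmarkDotNet; the benchmark class name and launch count are assumptions, and the repository's actual Program.cs may be set up differently.

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Launch the benchmark process 3 times; results are
        // consolidated across all launches in the final summary.
        var config = DefaultConfig.Instance
            .AddJob(Job.Default.WithLaunchCount(3));

        BenchmarkRunner.Run<AcquireTokenForClientCacheBenchmark>(config);
    }
}
```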

Testing code changes

  • Build and run the perf project to establish baseline numbers.
  • Make desired MSAL code changes.
  • Build and run the perf project again.
  • Compare the results between the runs.
  • Include the before and after results in the pull request that includes these changes. Also mention the PR and the improvements in the Improvements and test results section below.

Viewing results

Sample table with summary results:

| Method | TokenCacheSize | Mean | Error | StdDev |
| --- | ---: | ---: | ---: | ---: |
| AcquireTokenForClientTestAsync | 100 | 62.14 μs | 0.934 μs | 0.873 μs |
| AcquireTokenForClientTestAsync | 1000 | 383.90 μs | 7.596 μs | 9.876 μs |
| AcquireTokenForClientTestAsync | 10000 | 5,111.33 μs | 97.121 μs | 103.918 μs |
| AcquireTokenForClientTestAsync | 100000 | 98,313.18 μs | 783.933 μs | 733.292 μs |

Results are consolidated across all iterations and launches. They are written to the console at the end of the run and, by default, also exported to .md, .csv, and .html files in the BenchmarkDotNet.Artifacts folder. Results are grouped by benchmark method and parameters. The main data point is the mean value; compare it across runs, before and after code changes. Other potentially interesting exported data points include the median, min, max, skewness, kurtosis, and confidence interval. The run log, which records how many times the benchmarks were executed along with general debug information, is exported to the same folder.

Improvements and test results

PR #2261 includes improvements for AcquireTokenForClient method, especially when an internal token cache is large (100k+ items). Testing showed 10% - 30% speed improvement. Released in MSAL 4.24.0.
