Benchmarks

Benchmarks are crucial for understanding whether ZITADEL fulfills your expected workload and what resources it needs to do so.

This document explains the process and goals of load-testing ZITADEL in a cloud environment.

The results can be found on the subpages.

Goals

The primary goal is to assess whether ZITADEL can scale to the required proportions. The goals may change over time as ZITADEL matures. At the moment, the goal is to assess how the application's performance scales. There are some concrete goals we have to meet:

  1. https://github.com/zitadel/zitadel/issues/8352 defines 1000 JWT profile authentications per second (see the sketch after this list).
  2. https://github.com/zitadel/zitadel/issues/4424 defines 1200 logins per second.
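
To make the first goal concrete, the following is a minimal k6 sketch of a rate-based scenario aiming at 1000 token requests per second. It is an illustration, not the actual test from the zitadel repository; the environment variables `ZITADEL_HOST` and `JWT_ASSERTION` are hypothetical, and the pre-signed assertion would normally be generated per request.

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    jwt_profile_auth: {
      executor: 'constant-arrival-rate', // fixed request rate, independent of response times
      rate: 1000,                        // target iterations per second (goal from issue #8352)
      timeUnit: '1s',
      duration: '30m',
      preAllocatedVUs: 200,
      maxVUs: 2000,
    },
  },
};

export default function () {
  // ZITADEL_HOST and JWT_ASSERTION are hypothetical environment variables;
  // the assertion would be a JWT signed with a service user's private key.
  const res = http.post(`${__ENV.ZITADEL_HOST}/oauth/v2/token`, {
    grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
    assertion: __ENV.JWT_ASSERTION,
    scope: 'openid',
  });
  check(res, { 'token issued': (r) => r.status === 200 });
}
```

With a constant-arrival-rate executor, k6 keeps the request rate fixed and adds virtual users as responses slow down, which makes it a natural fit for a throughput goal expressed in requests per second.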

Procedure

First, we determine the "target" of our load-test. The target is expressed as a make recipe in the load-test Makefile. See also the load-test readme on how to configure and run load-tests.
A target should be tested over a longer period, as it might take time for certain metrics to show up; for example, Cloud SQL samples query insights. A runtime of at least 30 minutes is currently advised.
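
Since the advised runtime may change, it helps if duration and parallelism are injectable at run time. A minimal sketch, assuming hypothetical `VUS` and `DURATION` environment variables passed through by the make recipe:

```javascript
export const options = {
  vus: Number(__ENV.VUS) || 20,      // parallel virtual users
  duration: __ENV.DURATION || '30m', // at least 30 minutes, per the advice above
};
```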

After each iteration of the load-test, we should consult the After test procedure to conclude one of the following outcomes:

  1. Scale
  2. Log potential issue and scale
  3. Terminate testing and resolve issues

Methodology

Benchmark definition

Tests are implemented in the k6 ecosystem. The tests are publicly available in the zitadel repository. Custom extensions to k6 are implemented in the xk6-modules repository.
The tests must at least measure the request duration for each API call. This gives an indication of how ZITADEL behaves over the duration of the load test.
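
k6 records `http_req_duration` for every request by default; a custom trend per API call makes per-endpoint behavior explicit in the output. A sketch, where the metric name and the use of the public JWKS endpoint are illustrative choices:

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// One trend per API call; `true` marks it as a time-based metric.
const keysDuration = new Trend('oauth_keys_duration', true);

export default function () {
  // Public JWKS endpoint, used here because it needs no authentication.
  const res = http.get(`${__ENV.ZITADEL_HOST}/oauth/v2/keys`);
  keysDuration.add(res.timings.duration);
}
```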

Metrics

The following metrics must be collected for each test iteration. The metrics are used to follow the decision path of the After test procedure:

| Metric | Type | Description | Unit |
|---|---|---|---|
| Baseline | Comparison | Defines the baseline the test is compared against. If not specified, the baseline defined in this document is used. | Link to test result |
| Purpose | Description | Description of what should be proven with this test run. | text |
| Test start | Setup | Timestamp when the test started. This is useful for gathering additional data like metrics or logs later. | Date |
| Test duration | Setup | Duration of the test. | Duration |
| Executed test | Setup | Name of the make recipe executed. Further information about specific test cases can be found here. | Name of the make recipe |
| k6 version | Setup | Version of the test client (k6) used. | semantic version |
| VUs | Setup | Virtual users which execute the test scenario in parallel. | Number |
| Client location | Setup | Region or location of the machine which executed the test client. If not further specified, the hoster is Google Cloud. | Location / Region |
| Client machine specification | Setup | Definition of the client machine the test client ran on. The resources of the machine could be maxed out during tests, therefore we collect this metric as well. The description must at least clarify vCPU, memory and egress bandwidth. | vCPU: number of threads (additional info); memory: GB; egress bandwidth: Gbps |
| ZITADEL location | Setup | Region or location of the ZITADEL deployment. If not further specified, the hoster is Google Cloud. | Location / Region |
| ZITADEL container specification | Setup | As ZITADEL is mainly run in cloud environments, it should also run as a container during the load tests. The description must at least clarify vCPU, memory, egress bandwidth and scale. | vCPU: number of threads (additional info); memory: GB; egress bandwidth: Gbps; scale: the number of containers running during the test, which must not vary during the tests |
| ZITADEL version | Setup | The version of ZITADEL deployed. | Semantic version or commit |
| ZITADEL configuration | Setup | Configuration of ZITADEL which deviates from the defaults and is not secret. | yaml |
| ZITADEL feature flags | Setup | Changed feature flags. | yaml |
| Database | Setup | Database type and version. | type: crdb / psql; version: semantic version |
| Database location | Setup | Region or location of the database deployment. If not further specified, the hoster is Google Cloud SQL. | Location / Region |
| Database specification | Setup | The description must at least clarify vCPU, memory, egress bandwidth and, if applicable, scale. | vCPU: number of threads (additional info); memory: GB; egress bandwidth: Gbps; scale: number of crdb nodes if crdb is used |
| ZITADEL metrics during test | Result | This metric helps to understand the bottlenecks of the executed test. At least CPU usage and memory usage must be provided. | CPU usage in percent; memory usage in percent |
| Observed errors | Result | Errors worth mentioning, mostly unexpected errors. | description |
| Top 3 most expensive database queries | Result | The execution plans of the three most expensive database queries during the test execution. | database execution plan |
| Database metrics during test | Result | This metric helps to understand the bottlenecks of the executed test. At least CPU usage and memory usage must be provided. | CPU usage in percent; memory usage in percent |
| k6 iterations per second | Result | How many test iterations were done per second. | Number |
| k6 overview | Result | Basic metrics aggregated over the test run. At least the duration per request (min, max, avg, p50, p95, p99) and VUs must be included. For simplicity, just add the whole test result printed to the terminal. | terminal output |
| k6 output | Result | Trends and metrics generated during the test; contains detailed information for each step executed during each iteration. | csv |
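
The last two rows can be captured directly from the test run: `k6 run --out csv=output.csv` writes the detailed per-iteration metrics, and a `handleSummary` hook can persist the aggregated overview. A sketch using k6's summary callback (the file name is arbitrary):

```javascript
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';

// Called once at the end of the run; each returned key is a file to write,
// `stdout` keeps the usual end-of-test summary on the terminal.
export function handleSummary(data) {
  return {
    'k6-overview.txt': textSummary(data, { indent: ' ', enableColors: false }),
    stdout: textSummary(data, { indent: ' ', enableColors: true }),
  };
}
```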

Test setup

Make recipes

Details about the tests implemented can be found in this readme.

Test conclusion

After each iteration of the load-test, we should consult the Flowchart to conclude one of the following outcomes:

  1. Scale
  2. Log potential issue and scale
  3. Terminate testing and resolve issues

Scale

An outcome of scale means that the service hit some kind of resource limit, such as CPU or RAM, which can be increased. In such cases we increase the suggested parameter and rerun the load-test for the same target. On the next test we should analyse whether the increase in scale resulted in a performance improvement proportional to the scale parameter. For example, if we scale from 1 to 2 containers, it might be reasonable to expect a doubling of iterations per second. If no such increase is observed, there might be another bottleneck or an underlying issue, such as locking.
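
One way to make "proportional" measurable is to compare the achieved iterations per second against the linear expectation. A hypothetical helper, with made-up numbers:

```javascript
// Ratio of achieved throughput to the linear expectation after scaling.
// 1.0 means perfectly linear scaling; a value well below 1.0 hints at
// another bottleneck (e.g. database locking) rather than a resource limit.
function scalingEfficiency(baseRate, scaledRate, scaleFactor) {
  return scaledRate / (baseRate * scaleFactor);
}

// Example: scaling from 1 to 2 containers, 400 -> 650 iterations/sec:
console.log(scalingEfficiency(400, 650, 2)); // 0.8125
```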

Potential issues

A potential issue has an impact on performance, but does not prevent us from scaling. Such issues must be logged as GitHub issues, and load-testing can continue. The issue can be resolved at a later time and the load-tests repeated once it is. This is primarily for issues which require big changes to ZITADEL.

Termination

Testing is terminated when scaling no longer improves iterations per second, or when some kind of critical error or bug is experienced. The root cause of the issue must be resolved before we can continue with increasing the scale.

After test procedure

This flowchart shows the procedure after running a test.

Flowchart

Baseline

The baseline will be established as soon as the goals described above are reached.

Test results

This chapter provides a table linking to the detailed test results.