You have a running ZITADEL instance reachable from your monitoring stack.
You know the base URL and port to reach ZITADEL (e.g., http://zitadel.zitadel.svc:8080 in Kubernetes or http://localhost:8080 locally).
Metrics are enabled in your runtime configuration (they are enabled by default in standard setups). If you explicitly disabled metrics in your configuration, re-enable them before proceeding. The default configuration is located in defaults.yaml.
This approach is common when you don’t use the Prometheus Operator. Vanilla Prometheus can auto-discover scrape targets by reading standard annotations on Pods/Services:
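For ZITADEL, the conventional prometheus.io/* annotations on the Pod template would look roughly like the following sketch (the port value assumes ZITADEL's default 8080):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/debug/metrics"
    prometheus.io/port: "8080"
```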
When your Prometheus server is configured with Kubernetes service discovery and relabeling rules that honor these annotations, it will automatically discover and scrape ZITADEL without any per-target scrape_configs.
Your Prometheus configuration (often installed via Helm) should include jobs like the following. These are canonical examples that keep annotated Pods and map the annotated path/port to the actual metrics endpoint:
```yaml
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Use prometheus.io/path for the metrics path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Replace the address with <pod_ip>:<prometheus.io/port>
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```
Most Prometheus Helm charts already ship with similar discovery jobs and relabeling rules. If you installed Prometheus via Helm, you likely already have these in place.
If you deploy ZITADEL via Helm and the chart emits scrape annotations on the Deployment/Pods, no extra work is needed. Otherwise, add the annotations yourself (via values override or a strategic patch):
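For example, a strategic merge patch for the ZITADEL Deployment might look like this sketch (the Deployment name and file name are placeholders):

```yaml
# pod-annotations-patch.yaml
# Apply with: kubectl patch deployment <zitadel-deployment> --patch-file pod-annotations-patch.yaml
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/debug/metrics"
        prometheus.io/port: "8080"
```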
Prometheus must have permission to list/watch Pods/Endpoints in the target namespaces. Ensure its ServiceAccount has the standard ClusterRole/ClusterRoleBinding for discovery. Missing RBAC typically shows up as discovery errors in the Prometheus logs.
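A minimal RBAC sketch for discovery looks like this (the ServiceAccount name and namespace are placeholders; most Prometheus Helm charts create equivalent objects for you):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-discovery
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-discovery
subjects:
  - kind: ServiceAccount
    name: prometheus        # placeholder: your Prometheus ServiceAccount
    namespace: monitoring   # placeholder: the namespace it runs in
```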
If you run kube-prometheus-stack or the Prometheus Operator, use a ServiceMonitor (or PodMonitor). ZITADEL’s Helm chart provides out-of-the-box ServiceMonitor support that you can enable via values; no manual YAML is required.
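Enabling it is typically a small values override along these lines (a sketch; confirm the exact key names against your chart version's values.yaml):

```yaml
metrics:
  serviceMonitor:
    enabled: true
```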
The chart renders a ServiceMonitor roughly equivalent to:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: <release-name>                        # {{ include "zitadel.fullname" . }}
  # namespace: <metrics.serviceMonitor.namespace>   # only if you set it
  labels:
    # Standard chart labels + any you add:
    # {{- include "zitadel.start.labels" . | nindent 4 }}
    # {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }}
spec:
  jobLabel: <release-name>                    # {{ include "zitadel.fullname" . }}
  namespaceSelector:
    matchNames:
      - "<release-namespace>"                 # defaults to the Helm release namespace
  selector:
    matchLabels:
      # Matches the ZITADEL Service created by the chart
      # {{- include "zitadel.service.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: "<protocol>-server"               # e.g., "http-server" or "https-server"
      path: /debug/metrics
      # Optional tunables below are included only if set:
      interval: <metrics.serviceMonitor.scrapeInterval>
      scrapeTimeout: <metrics.serviceMonitor.scrapeTimeout>
      scheme: <metrics.serviceMonitor.scheme>          # http|https
      tlsConfig:                                       # metrics.serviceMonitor.tlsConfig
        # ...
      proxyUrl: <metrics.serviceMonitor.proxyUrl>
      honorLabels: <metrics.serviceMonitor.honorLabels>
      honorTimestamps: <metrics.serviceMonitor.honorTimestamps>
      relabelings:                                     # metrics.serviceMonitor.relabellings
        # ...
      metricRelabelings:                               # metrics.serviceMonitor.metricRelabellings
        # ...
```
Details that matter:
Port name: The chart uses
```yaml
port: {{ regexReplaceAll "\\W+" .Values.service.protocol "-" }}-server
```
which resolves to http-server when service.protocol=http (default) or https-server when service.protocol=https. You do not need to edit this—just make sure you didn’t rename the ZITADEL Service port.
Namespace selection: By default, the ServiceMonitor targets the ZITADEL release namespace via:
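```yaml
# Mirrors the rendered ServiceMonitor shown above
namespaceSelector:
  matchNames:
    - "<release-namespace>"   # the namespace the ZITADEL release is installed into
```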
Set metrics.serviceMonitor.namespace if you want the ServiceMonitor object itself to live elsewhere (e.g., monitoring). The selector.matchLabels still points to the ZITADEL Service labels.
Labels for discovery: If your Prometheus (Operator) instance selects ServiceMonitors by label (common in kube-prometheus-stack), add those under metrics.serviceMonitor.additionalLabels—for example:
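```yaml
metrics:
  serviceMonitor:
    additionalLabels:
      release: kube-prometheus-stack   # example value: must match your Prometheus Operator's serviceMonitorSelector
```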
If you prefer to manage the ServiceMonitor yourself, keep it aligned with the chart’s conventions (a minimal sketch follows this list):
Target the ZITADEL Service (not pods) using the same selector labels the chart adds.
Use the correct port name (http-server or https-server) and path (/debug/metrics).
Ensure your Prometheus (Operator) selects this ServiceMonitor by label/namespace.
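Under those constraints, a minimal self-managed ServiceMonitor might look like this sketch (the name, namespaces, and labels are placeholders; copy the selector labels from your actual ZITADEL Service):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zitadel                        # placeholder name
  namespace: monitoring                # wherever your Prometheus Operator looks for ServiceMonitors
  labels:
    release: kube-prometheus-stack     # example only: must match your serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - zitadel                        # namespace of the ZITADEL Service
  selector:
    matchLabels:
      app.kubernetes.io/name: zitadel  # placeholder: copy the labels from your ZITADEL Service
  endpoints:
    - port: http-server                # or https-server, matching the Service port name
      path: /debug/metrics
```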
Tip
If you already run the Prometheus Operator, prefer this ServiceMonitor approach. If you run vanilla Prometheus without the Operator, consider the annotation-based discovery method instead (Option A).
If you run Prometheus outside of Kubernetes, add a static job pointing at ZITADEL’s metrics endpoint:
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "zitadel"
    metrics_path: "/debug/metrics"
    scheme: "http"   # use https if TLS is enabled for ZITADEL
    static_configs:
      - targets: ["<ZITADEL_HOST>:8080"]   # e.g., "localhost:8080", "zitadel.internal:8080", or "host.docker.internal:8080"
```
In this snippet, replace <ZITADEL_HOST>:8080 with the appropriate address: localhost:8080 for local deployments, or the DNS name / IP of the server or Kubernetes service where ZITADEL is running. The metrics_path is set to /debug/metrics to match ZITADEL’s endpoint. The snippet uses the http scheme on the assumption of an internal, non-TLS endpoint; if you have enabled TLS on ZITADEL, switch to https, use the appropriate port (e.g., 443), and adjust the hostname (e.g., zitadel.yourdomain.com). If ZITADEL is behind a reverse proxy or ingress, ensure that /debug/metrics is reachable from Prometheus (you might expose it internally only).
When running Prometheus in Docker on your workstation:
macOS/Windows: if ZITADEL runs on your host, use host.docker.internal:8080.
Linux: add --add-host=host.docker.internal:host-gateway to docker run (so host.docker.internal resolves), attach Prometheus to the same Docker network as ZITADEL and use the service name (e.g., zitadel:8080; a compose sketch follows this list), or run Prometheus with --network host.
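For the shared-network approach, a compose sketch for the Prometheus side only might look like this (it assumes an existing external Docker network, here called zitadel, that the ZITADEL container is attached to):

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro   # the static config shown above, targeting zitadel:8080
    ports:
      - "9090:9090"
    networks:
      - zitadel
networks:
  zitadel:
    external: true   # assumption: the network the ZITADEL container already runs on
```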
In Docker: remember localhost inside the Prometheus container is the container itself, not your host. Use host.docker.internal:8080 (plus --add-host=host.docker.internal:host-gateway on Linux), or join Prometheus to the same Docker network as ZITADEL and use the service name (e.g., zitadel:8080), or run with --network host on Linux.
In Kubernetes: verify the Service port name (for example, http-server or https-server) and path (/debug/metrics) match your ServiceMonitor (or annotations). Check that Prometheus has RBAC to list Pods/Endpoints.
Metrics path mismatch
ZITADEL uses /debug/metrics. If you see 404s, confirm your Prometheus job or annotations aren’t still using /metrics.
No targets discovered (Kubernetes)
If using annotations, make sure your Prometheus config has Kubernetes discovery & relabeling rules that honor prometheus.io/* annotations and that the Pods/Service are annotated.
If using ServiceMonitor, ensure your Prometheus Operator instance selects the ServiceMonitor by label/namespace.
Nothing shows up in the Graph dropdown
First confirm up{job="zitadel"} returns 1. If yes, metrics are being scraped—start typing generic prefixes like go_ or process_ to explore. ZITADEL’s exported metric set can evolve; check the raw output at /debug/metrics to see exactly what is exposed by your version.
While Prometheus is the most common choice, other collectors and services can ingest the same endpoint:
Amazon Managed Service for Prometheus (AMP) — managed, Prometheus-compatible backend on AWS.
AWS CloudWatch via ADOT/OTel Collector — scrape with the OpenTelemetry Collector and export to CloudWatch Metrics.
Grafana Cloud / VictoriaMetrics / Thanos — remote-write targets or managed TSDBs for Prometheus data.
Datadog / New Relic / Splunk Observability — agents or OTel pipelines can ingest Prometheus-format metrics.
If you already operate one of these platforms, you can point their agents/collectors at /debug/metrics or use an OTel Collector with a Prometheus receiver and the appropriate exporter.
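As a sketch, an OpenTelemetry Collector pipeline along these lines scrapes /debug/metrics and forwards the data via Prometheus remote write. The target address and remote-write endpoint are placeholders, and it assumes a collector distribution that bundles the prometheus receiver and prometheusremotewrite exporter (e.g., otelcol-contrib); swap the exporter for your platform's as needed:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "zitadel"
          metrics_path: "/debug/metrics"
          static_configs:
            - targets: ["zitadel.internal:8080"]   # placeholder host:port

exporters:
  prometheusremotewrite:
    endpoint: "https://<your-backend>/api/v1/write"   # placeholder remote-write endpoint

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```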