Scaling Cloud-Native Identity: Optimizing Performance with Caching

- The Critical Role of Caching in High-Performance Identity
- Engineered Flexibility: Caching Connectors
- Granular Control: Tuning for Your Use Case
- Strategic Use Case: Solving B2B Scaling
- Reliability at Scale
- Experience the Difference
In the world of cloud-native identity infrastructure, performance is not just a metric—it is a feature. When your application scales to millions of users, every millisecond of latency in the authentication flow impacts user experience and operational costs. For B2B SaaS applications, where complexity grows with every new customer organization, this challenge is even more acute.
At Zitadel, our vision for 2026 and beyond is built on "Reliability at Scale." We are stripping away complexity to provide an uncompromising developer experience. A critical component of this vision is our flexible, high-performance caching architecture. It empowers our Go-based API to do the heavy lifting—efficiently managing high-concurrency throughput to deliver speed without sacrificing consistency.
Today, we are diving into how Zitadel’s caching strategy allows you to balance speed, consistency, and resource efficiency, ensuring your identity infrastructure evolves alongside your specific deployment needs.
Technical Note: The caching features discussed below offer control over system performance. While caching for transient data (like redirect states from IdPs) is enabled by default, advanced object caching (like Instances and Organizations) is an opt-in power feature designed for high-scale tuning.
The Critical Role of Caching in High-Performance Identity
Zitadel is famous for its event-sourced architecture, which stores changes as immutable events to provide "time travel" capabilities and robust security forensics. While this provides unmatched auditability for compliance, serving high-frequency data from an event log requires sophisticated engineering.
Coming Soon: An Evolution in Storage
We are currently working on a major evolution of our core engine—shifting to a hybrid relational model that combines the speed of traditional tables with the auditability of events. This change will drastically reduce the need for complex read models. Keep an eye on our blog for a technical deep dive into how we are re-engineering our storage layer for the next generation of scale!
Until then—and even after—a robust caching strategy remains the bridge between your security requirements and real-time application performance. For B2B SaaS applications, where user attributes and permissions are queried constantly, caching serves as the shock absorber for your database, ensuring that high-traffic events (like a Black Friday surge) don’t degrade system responsiveness.
Engineered Flexibility: Caching Connectors
We do not believe in "black box" infrastructure. Zitadel empowers you with control over how data is cached, supporting three distinct connectors to match your environment.
1. Redis-Compatible Systems (The Cloud-Native Standard)
Best For: Production Kubernetes clusters, distributed deployments, Zitadel Cloud.
For high-scale, distributed deployments, Zitadel integrates seamlessly with Redis and compatible services like Valkey or Google Cloud Memorystore. This is the recommended approach for any multi-node setup.
- Why it matters: Redis allows multiple Zitadel pods to share a consistent cache state. If a user's permission changes on one pod, the cache invalidation propagates instantly, maintaining consistency across your cluster.
- Key Features: Supports failover and advanced connection pooling.
2. PostgreSQL (The Stable Default)
Best For: Simplified deployments, reducing infrastructure complexity.
This connector leverages your existing database infrastructure to handle caching duties. It is the default behavior for critical transient data.
- Performance Reality: Don't underestimate this option. We see customers running north of 30,000 requests per second using just this default setup. If your Postgres instance has sufficient memory to cache the working set, it can soak up massive traffic loads without the operational overhead of managing a separate Redis cluster.
- Default Behavior: Used automatically for IdPFormCallbacks and FederatedLogouts to ensure that login flows and global sign-outs work reliably out of the box.
3. In-Memory Cache (The Dev Cache)
Best For: Single-server deployments, testing, edge scenarios.
- Performance: This is the fastest option, as it avoids network hops entirely by storing data directly in the application's RAM.
⚠️ Architect's Pro-Tip: Memory caching is unsuitable for multi-container deployments (like Kubernetes) without sticky sessions. Since each container holds its own isolated memory state, users may experience data inconsistencies (e.g., being logged out on one request and logged in on the next) if requests are load-balanced across different pods. Use this for speed at the edge or during development, but switch to Redis for clustered production.
Granular Control: Tuning for Your Use Case
Beyond choosing a connector, Zitadel lets you tune how long cached data lives, so the system matches your specific consistency requirements:
- MaxAge Configuration: Define precisely how long an object remains valid. This allows you to set aggressive caching for static data (like Organization metadata) while keeping dynamic data (like User Sessions) fresh.
- LastUseAge Optimization: Automatically retain your "hot" data. This feature intelligently prunes seldom-used objects while keeping frequently accessed data in memory, optimizing your resource footprint.
Strategic Use Case: Solving B2B Scaling
Zitadel operates on a strict Instance → Organization → User hierarchy. In high-density environments—like a B2B SaaS platform serving thousands of tenant organizations—resolving this context for every request creates an "N+1"-style lookup bottleneck: each incoming request triggers extra database reads just to establish its tenant context before any real work begins.
By enabling caching for Instances & Organizations, you flatten this hierarchy. This allows you to scale application pods linearly without punishing your database, effectively decoupling your read performance from your data complexity.
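To make the flattening concrete, here is a self-contained Go sketch of a hypothetical tenant resolver. Without the cache, every request would pay a database round trip to resolve its instance and organization; with it, only the first request per tenant does. Names like `orgContext` and `Resolve` are invented for illustration and do not correspond to Zitadel's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// orgContext is the tenant context a request needs in an
// Instance → Organization → User hierarchy (illustrative fields).
type orgContext struct {
	instanceID string
	orgID      string
}

// resolver caches host → orgContext so only the first request per
// tenant touches the database; later requests are served from memory.
type resolver struct {
	mu     sync.Mutex
	cache  map[string]orgContext
	dbHits int // counts simulated database round trips
}

func newResolver() *resolver {
	return &resolver{cache: make(map[string]orgContext)}
}

// lookupDB stands in for the real query against the read model.
func (r *resolver) lookupDB(host string) orgContext {
	r.dbHits++
	return orgContext{instanceID: "inst-1", orgID: "org-" + host}
}

func (r *resolver) Resolve(host string) orgContext {
	r.mu.Lock()
	defer r.mu.Unlock()
	if ctx, ok := r.cache[host]; ok {
		return ctx // cache hit: no database work
	}
	ctx := r.lookupDB(host)
	r.cache[host] = ctx
	return ctx
}

func main() {
	r := newResolver()
	// 10,000 requests spread across 3 tenants hit the database 3 times.
	hosts := []string{"acme.example.com", "globex.example.com", "initech.example.com"}
	for i := 0; i < 10000; i++ {
		r.Resolve(hosts[i%3])
	}
	fmt.Println(r.dbHits) // 3
}
```

This is the decoupling the section describes: database load scales with the number of distinct tenants (and cache lifetimes), not with raw request volume, so adding application pods no longer multiplies read pressure.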
Reliability at Scale
Performance is an ongoing journey. As we move toward our 2026 goals, our engineering team is rigorously benchmarking these strategies to shave off milliseconds and reduce memory footprints.
We are building Zitadel to be the standard for open-source identity—a platform that gives you the simplicity to spin up in minutes, but the enterprise-grade control to scale indefinitely.
Experience the Difference
Ready to accelerate your identity infrastructure?
- Read the Docs: Dive into the Zitadel Caching Documentation for configuration details.
- Join the Community: Discuss your architecture with our engineers and other users on Discord.
- Deploy Today: Configure the connector that aligns with your stack—whether it's Redis for the cloud or In-Memory for the edge.