Why Node-Level Caches Aren’t Supported in Pega Constellation
Pega Constellation’s architecture is designed in a way that makes relying on node-level data pages infeasible. The reasons, grouped by key concern, are below:
1. Stateless, API-Driven Architecture
- The UI is decoupled (built with React or similar) and interacts with the Pega backend via REST APIs.
- Each UI → Pega request is stateless: there’s no guarantee the same server node handles successive requests.
- Node-level data pages rely on server memory (within a specific JVM/node) for shared cache—this breaks when requests hit different nodes.
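The routing problem above can be sketched in a small simulation. This is an illustrative model, not Pega code: two hypothetical nodes each hold their own in-memory cache, and a round-robin load balancer stands in for stateless request routing.

```typescript
// Hypothetical sketch: two cluster nodes, each with its own in-memory
// (node-level) cache, behind a round-robin load balancer.
class ClusterNode {
  private cache = new Map<string, string>();
  constructor(public name: string) {}
  // Return the cached value, or load it and record a cache miss.
  handle(key: string, load: () => string): { value: string; hit: boolean } {
    if (this.cache.has(key)) {
      return { value: this.cache.get(key)!, hit: true };
    }
    const value = load();
    this.cache.set(key, value);
    return { value, hit: false };
  }
}

const nodes = [new ClusterNode("A"), new ClusterNode("B")];
let next = 0;
// Round-robin: successive stateless requests land on different nodes.
const route = (key: string) =>
  nodes[next++ % nodes.length].handle(key, () => "customer-42");

const first = route("D_Customer");  // served by node A: miss, then cached on A
const second = route("D_Customer"); // served by node B: A's cache is invisible here
console.log(first.hit, second.hit); // false false — the node-level cache did not help
```

The second request repeats the load even though the data was already cached, because the cache lives in a different JVM.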
2. Scalability & Consistency
- In clustered environments with multiple nodes, a cache at node A won’t be accessible at node B.
- This can lead to stale, inconsistent, or missing data when different requests are handled by different nodes.
- To ensure consistency, Pega uses requestor-scope or thread-scope data pages (or another session-safe caching scope), so cached data stays aligned with the user session or thread that created it.
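The staleness risk can be made concrete with a hypothetical sketch (again a model, not Pega internals): node A caches a value, an update is routed through node B and changes the source of record, and node A keeps serving its old copy.

```typescript
// Hypothetical sketch: why cross-node caches go stale.
let database = { status: "Open" };            // shared source of record
const nodeACache = new Map<string, string>(); // node-level cache on node A only

// Request 1 hits node A: it caches the current status.
nodeACache.set("caseStatus", database.status);

// Request 2 hits node B: it updates the source of record directly.
database.status = "Resolved";

// Request 3 hits node A again: the node-level cache is now stale.
const served = nodeACache.get("caseStatus");
console.log(served, database.status); // "Open" "Resolved" — stale vs. actual
```

A requestor- or thread-scoped page avoids this class of bug by never outliving the session or request that populated it.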
3. Data Virtualization & Performance
- Constellation favors on-demand data fetching via data views and APIs rather than long-lived, large caches at node level.
- Data is fetched when needed and kept in short-lived, requestor-safe scopes to reduce memory overhead.
- This approach supports zero-downtime upgrades and rolling updates, because there’s no static node cache that has to be cleared or reinitialized.
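The on-demand pattern can be sketched as a small fetch helper. The endpoint path is illustrative (DX-API-style), not an exact contract, and the fetcher is injected so the sketch runs without a live server; nothing is retained between calls.

```typescript
// Hypothetical sketch of on-demand fetching: the UI asks for a data view
// only when a component needs it, and keeps the result in a short-lived
// per-request scope instead of a node-wide cache.
type Fetcher = (url: string) => Promise<unknown>;

async function getDataView(id: string, fetchJson: Fetcher): Promise<unknown> {
  // Fetched fresh per request; no long-lived cache to clear on upgrade.
  return fetchJson(`/api/application/v2/data_views/${id}`);
}

// Stub fetcher standing in for the real REST call.
const stub: Fetcher = async (url) => ({ url, rows: [{ id: 1 }] });

getDataView("D_CustomerList", stub).then((result) => {
  console.log(JSON.stringify(result));
});
```

Because every request resolves against the current backend, a node can be drained, upgraded, and returned to the pool without any cache warm-up or invalidation step.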
4. Security & Multi-Tenancy
- Node-level data pages mix data across all users connected to a node. In multi-tenant or microservice setups, that increases the risk of unintended data exposure.
- By limiting caching to requestor or thread level, Pega ensures data isolation per session/user, enhancing security.
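The isolation difference can be shown with a hypothetical sketch: a single node-wide map is readable by every user on that node, while keying caches by requestor (session) keeps each user's data to itself. The session IDs and data values here are made up for illustration.

```typescript
// Hypothetical sketch: node-level cache vs. requestor-scoped caches.
const nodeCache = new Map<string, string>();                    // one map for the whole node
const requestorCaches = new Map<string, Map<string, string>>(); // one map per session

function putForRequestor(sessionId: string, key: string, value: string) {
  if (!requestorCaches.has(sessionId)) requestorCaches.set(sessionId, new Map());
  requestorCaches.get(sessionId)!.set(key, value);
}

// Alice's data lands in the shared node cache...
nodeCache.set("D_Account", "alice-account-data");
// ...and Bob, on the same node, can read it back.
const leaked = nodeCache.get("D_Account");

// With requestor-scoped caches, Bob's session sees none of Alice's data.
putForRequestor("alice-session", "D_Account", "alice-account-data");
const isolated = requestorCaches.get("bob-session")?.get("D_Account");

console.log(leaked, isolated); // "alice-account-data" undefined
```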
5. Upgrade / Hotfix / Deployment Resilience
- In cloud or clustered deployments, patches, hotfixes, and upgrades are applied node by node.
- If node-level caching existed, some nodes would hold “old” data while others use “new”—leading to inconsistent behavior.
- Without node-level data pages, the system maintains data consistency during rolling updates.