Breaking Update of PostSharp Caching adapter for Redis in 2025.1

by Gael Fraiteur on 01 Sep 2025

We have released a refactored version of PostSharp.Patterns.Caching.Backends.Redis in PostSharp 2025.1 to address a reliability issue that could surface in multi-node Redis deployments (master/replica or cluster) when cache dependencies are enabled under sustained load. The fix required a redesign of the internal data schema, which is why this is a breaking change (one-time cache purge). Outside of that purge, the upgrade is straightforward and brings measurable gains in performance, consistency, and resilience.

Impact at a glance:

  • If you run multi-node Redis with dependencies, the update removes a potential failure mode.
  • You only need a one-time cache purge, and you’ll still benefit from performance and diagnostics improvements.
  • There is no risk of user data loss; only old cached entries stay in the database until purged.
  • No impact for the other backends.

Who is affected?

  • Affected by the bug:
    • Applications that use PostSharp.Patterns.Caching.Backends.Redis with support for dependencies in a Redis master/replica or cluster setup.
  • Affected by the refactoring:
    • All applications that use PostSharp.Patterns.Caching.Backends.Redis, with or without support for dependencies, are affected by the redesign of the data schema. You must purge the cache after the update, and there is no other impact. All versions prior to 2025.1.7 are affected.

Customers using Metalama.Patterns.Caching.Backends.Redis are affected by this issue too, and the solution has not yet been ported to Metalama. Please contact us if it matters to you.

What was wrong?

The previous design of the RedisCachingBackend relied on a retry loop in GetItem operations to retrieve a consistent snapshot of the cache item, which includes both the cache value and its dependencies, stored in separate Redis keys. Under heavy load in a master/replica setup, this loop could exhaust its iterations and fail.

The consistent snapshot of value and dependencies was required because recursive dependencies (when a cached method calls another already cached method) were flattened in the SetItem operation, so the RemoveItem and InvalidateItem operations did not need to operate recursively. This was a poor design choice.
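
To make the failure mode concrete, here is a deliberately simplified sketch of such a retry-loop read, where the value and its dependency list live under separate keys. The key names, the consistency check, and the use of StackExchange.Redis are illustrative assumptions; this is not the actual previous implementation.

```csharp
// Simplified illustration of the OLD read path (not the actual implementation):
// the value and its dependency list live under separate keys, so the reader
// retries until it observes a snapshot that did not change mid-read.
using System;
using StackExchange.Redis;

public static class RetryLoopReadSketch
{
    public static (RedisValue Value, RedisValue Dependencies)? TryGet(
        IDatabase db, string key, int maxAttempts = 5 )
    {
        for ( var attempt = 0; attempt < maxAttempts; attempt++ )
        {
            RedisValue value = db.StringGet( key + ":value" );
            RedisValue dependencies = db.StringGet( key + ":deps" );

            // Hypothetical consistency check: accept the snapshot only if the value
            // did not change while the dependency list was being read.
            if ( value == db.StringGet( key + ":value" ) )
            {
                return value.IsNull ? null : (value, dependencies);
            }
        }

        // Under sustained write load in a master/replica setup, the loop can exhaust
        // its attempts, which is the failure mode described above.
        throw new TimeoutException( $"No consistent snapshot of '{key}' after {maxAttempts} attempts." );
    }
}
```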

The previous component was only tested in a single-node Redis topology. We believe the root cause was a lack of testing in multi-node topologies under high load.

How was this fixed?

We redesigned the data schema so GetItem never needs to iterate:

  • Dependencies are no longer flattened. The RemoveItem and InvalidateItem operations now work recursively.
  • Cache keys (a single logical cache item requires several Redis keys) are now versioned, so we don’t need loops to ensure read consistency; a conceptual sketch follows this list.
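
To illustrate the idea, here is a conceptual sketch of a versioned layout: the writer publishes a new version number only after all the keys of that version have been written, so a reader that resolves the current version sees an immutable, mutually consistent set of keys in a single pass. The key names and layout below are illustrative assumptions, not the actual PostSharp schema.

```csharp
// Conceptual sketch of versioned cache keys (illustrative only; the real schema,
// key names, and serialization used by PostSharp are different).
using StackExchange.Redis;

public static class VersionedReadSketch
{
    public static (RedisValue Value, RedisValue Dependencies)? TryGet( IDatabase db, string key )
    {
        // The writer writes "<key>:value:<n>" and "<key>:deps:<n>" first, and only then
        // updates "<key>:version" to <n>, so everything under a published version is immutable.
        RedisValue version = db.StringGet( key + ":version" );
        if ( version.IsNull )
        {
            return null; // Cache miss.
        }

        RedisValue value = db.StringGet( $"{key}:value:{version}" );
        RedisValue dependencies = db.StringGet( $"{key}:deps:{version}" );

        // Both keys belong to the same immutable version: no retry loop is needed.
        // If that version has been garbage-collected in the meantime, this is simply a miss.
        return value.IsNull ? null : (value, dependencies);
    }
}
```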

What else has been improved?

Since a major refactoring was required to implement the new data schema, we went the extra mile and made the component more resilient in high-load production environments:

  • Performance improvements thanks to the new Redis data schema, which is always observationally consistent for read operations.
  • Key compression (hashing). Enable this feature by assigning the CachingServices.DefaultKeyBuilder property.
  • Improved resilience thanks to new behaviors controlled by the following properties on the RedisCachingBackendConfiguration object (a configuration sketch follows this list):
    • Exception handling: RedisCachingBackendConfiguration.ExceptionHandlingPolicy.
    • Retry policies: RedisCachingBackendConfiguration.BackgroundRecoveryRetryPolicy, RedisCachingBackendConfiguration.BackgroundTasksRetryPolicy, RedisCachingBackendConfiguration.TransactionRetryPolicy.
    • Concurrency control: BackgroundTasksMaxConcurrency and InvalidationMaxConcurrency.
    • Node selection: ReadCommandFlags and WriteCommandFlags, defaulting to PreferReplica and PreferMaster, respectively.
  • Support for sharded clusters, with keys containing hash tags. Tested on a 3+3 cluster.
  • Support for dependencies with a very large number of dependent cache items.
  • Garbage collection (GC) process:
    • Detection of overload conditions and automatic temporary disabling.
    • Periodic background cleanups to handle situations that were not detected by processing keyspace notifications in real time.
  • Keys contain the PostSharp Caching schema version for safe updates.
  • Fixed bugs with sliding expiration.
  • Improved documentation.
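
A minimal configuration sketch follows. The property names are the ones listed above; the concrete values, the object-initializer style, and the use of the RedisCachingBackend.Create factory are assumptions for illustration, so check the documentation for the exact signatures and the types of the retry and exception-handling policies.

```csharp
// Illustrative configuration only; values and the exact factory signature are assumptions.
using PostSharp.Patterns.Caching;
using PostSharp.Patterns.Caching.Backends.Redis;
using StackExchange.Redis;

var connection = await ConnectionMultiplexer.ConnectAsync( "redis-node1:6379,redis-node2:6379" );

var configuration = new RedisCachingBackendConfiguration
{
    // Concurrency control (illustrative values).
    BackgroundTasksMaxConcurrency = 4,
    InvalidationMaxConcurrency = 4,

    // Node selection: the documented defaults, shown explicitly here.
    ReadCommandFlags = CommandFlags.PreferReplica,
    WriteCommandFlags = CommandFlags.PreferMaster

    // ExceptionHandlingPolicy, BackgroundRecoveryRetryPolicy, BackgroundTasksRetryPolicy,
    // and TransactionRetryPolicy can also be set; see the documentation for their types.
};

CachingServices.DefaultBackend = RedisCachingBackend.Create( connection, configuration );

// Key compression (hashing) is enabled by assigning CachingServices.DefaultKeyBuilder;
// see the documentation for the available key builders.
```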

What changes are breaking?

  • The data schema has changed, so you must purge the old version’s cache; otherwise it will permanently contain unreachable data (garbage).
  • The CacheValue.Dependencies and CacheItem.Dependencies properties now contain first-level dependencies rather than all recursive dependencies. These objects are largely implementation details and do not surface to the aspect-oriented API of PostSharp Caching.

How to update?

  1. Update all PostSharp packages to 2025.1.8 and deploy your application.
  2. Purge the cache using Redis’s FLUSHDB command after your old application has been undeployed.
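
If you prefer to purge from code rather than from redis-cli, the following sketch uses StackExchange.Redis to run FLUSHDB on every writable node. The connection string and database number are placeholders, and the connection must be opened with allowAdmin=true for FLUSHDB to be permitted.

```csharp
// Purge sketch: flushes the cache database on every writable node (placeholder endpoints).
using StackExchange.Redis;

var connection = await ConnectionMultiplexer.ConnectAsync( "your-redis-server:6379,allowAdmin=true" );

foreach ( var endpoint in connection.GetEndPoints() )
{
    var server = connection.GetServer( endpoint );

    // FLUSHDB must run on masters; replicas will follow.
    if ( server.IsConnected && !server.IsReplica )
    {
        await server.FlushDatabaseAsync( 0 ); // Use the database number that holds your cache.
    }
}
```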

If you ignore this post and only update the package, the Redis database will contain the data of the old version forever, since the dependency-management keys are set to never expire.

Read more

For details about the refactored PostSharp Caching for Redis, see our documentation.

Conclusion

PostSharp Caching for Redis now runs on a more robust, versioned schema that removes a reliability edge case and improves performance and consistency across single-node, replicated, and sharded deployments when dependency support is enabled. The component has been overhauled for better observability and resilience. The only required action is a one-time cache purge after upgrading to 2025.1.8 or later.

If you rely on the Metalama Redis backend, let us know so we can prioritize porting the same improvements.

Thanks to the customers who surfaced this scenario early. Enjoy the improved backend!