Cache (Unified Layer)
Unified cache on aiocache: one-time cache.setup, global default cache, stampede-protected get_or_set and @cached.
1. Core concepts
- cache (CacheManager): singleton; call cache.setup(prefix=...) once, then cache.cache (or cache.get_cache(name)) and @cached use it.
- ZodiacCache: thin wrapper over an aiocache BaseCache — get/set/delete/exists and get_or_set (RedLock stampede protection).
- Namespace: cache.setup(prefix="...") → keys under zodiac_cache:{prefix}.
- Lifecycle: await cache.shutdown(name="...") releases one named cache; await cache.shutdown() releases all registered caches.
2. Installation
Requires aiocache>=0.12.0. For Redis etc., see aiocache docs.
3. Configuration
Call cache.setup at startup; optionally await cache.shutdown() on shutdown.
Calling cache.setup(...) again with the same name is allowed only when the effective configuration is identical; different settings for an existing name raise RuntimeError.
Lifecycle control is name-aware:
- await cache.shutdown(name="...") closes only the selected named cache.
- await cache.shutdown() closes all registered caches.
This preserves the singleton manager design for shared cache backends while letting each app or resource release only the cache it owns.
In-memory (default)
Pass only prefix (and an optional default_ttl). We set the default cache class and serializer; the namespace is always zodiac_cache:{prefix}.
Custom backend (Redis, etc.)
Pass the same parameters that aiocache's caches.add() accepts. We only inject/override the namespace and minimal defaults.
cache.setup(
prefix="myapp",
cache="aiocache.RedisCache",
endpoint="127.0.0.1",
port=6379,
default_ttl=300,
)
To reuse an existing alias config: cache.setup(prefix="myapp", **caches.get_alias_config("my_redis")). Namespace is still set to zodiac_cache:myapp.
Named caches (name)
You can register multiple caches under different names (e.g. default in-memory + a Redis cache for sessions). Use the name parameter:
- cache.setup(prefix=..., name=...): registers a cache under that name. The default is "default". Each name has its own backend config and namespace zodiac_cache:{prefix}. Example: cache.setup(prefix="myapp", default_ttl=300) uses name="default"; cache.setup(prefix="sessions", name="sessions", cache="aiocache.RedisCache", endpoint="...") adds a second cache.
- cache.cache: shorthand for cache.get_cache("default") (the default cache).
- cache.get_cache(name): returns the ZodiacCache for that name. Use this when you have multiple caches and want to call get/set/get_or_set on a specific one.
- @cached(..., name=...): the decorator uses that named cache instead of the default. Example: @cached(ttl=60, name="sessions") stores entries in the cache registered with name="sessions".
Typical use: one default cache (e.g. in-memory or Redis) plus an optional second cache (e.g. Redis for sessions) with a different backend or TTL; call cache.get_cache("sessions") or @cached(name="sessions") for the second one.
When cleaning up, pair named setup with named shutdown:
cache.setup(prefix="myapp", default_ttl=300)
cache.setup(prefix="sessions", name="sessions", cache="aiocache.RedisCache", endpoint="127.0.0.1")
await cache.shutdown(name="sessions") # only the sessions cache
await cache.shutdown() # full cleanup
FastAPI lifespan
from contextlib import asynccontextmanager
from fastapi import FastAPI
from zodiac_core.cache import cache
@asynccontextmanager
async def lifespan(app: FastAPI):
cache.setup(prefix="myapp", default_ttl=300)
yield
await cache.shutdown()
app = FastAPI(lifespan=lifespan)
For a single-app service, await cache.shutdown() remains the simplest option.
If your process registers multiple named caches or shares the global manager across multiple app lifecycles, prefer await cache.shutdown(name="...") for scoped cleanup.
4. Usage
get_or_set: c = cache.cache then await c.get_or_set("key", producer, ttl=60).
@cached: Key from module:qualname:hash(args,kwargs). The default key builder only supports stable immutable parameters (None, bool, int, float, str, bytes, and tuples of those values). Supports both async and sync functions.
Important: the decorated function always becomes asynchronous. If you decorate a sync function, you must still await the result. Avoid slow blocking work in sync functions to prevent blocking the event loop.
from zodiac_core.cache import cache, cached
cache.setup(prefix="myapp", default_ttl=300)
# Async function (standard usage)
@cached(ttl=60)
async def get_user(user_id: int):
return await db.fetch_user(user_id)
# Sync function (now supported, but caller MUST await)
@cached(ttl=120)
def get_config(key: str):
return {"key": key, "value": "some_value"}
# Usage:
# user = await get_user(1)
# config = await get_config("theme") # Await is required here!
If your function takes complex parameters such as dict, list, ORM objects, request/session objects, or custom class instances, pass key_builder=... explicitly. The default key builder raises TypeError for unsupported argument types instead of guessing an unstable cache key.
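One way to build such a key_builder for dict parameters is to serialize the arguments deterministically and hash them. This is a hypothetical sketch (dict_key_builder is not part of the library), assuming the values are JSON-serializable:

```python
import hashlib
import json


def dict_key_builder(fn, args, kwargs):
    # Serialize deterministically: sort_keys makes equal dicts
    # (regardless of insertion order) produce identical keys.
    payload = json.dumps({"args": args, "kwargs": kwargs}, sort_keys=True, default=str)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return f"{fn.__module__}:{fn.__qualname__}:{digest}"


# Hypothetical usage:
# @cached(ttl=60, key_builder=dict_key_builder)
# async def search(filters: dict): ...
```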
Receiver-aware default keys
@cached also supports receiver-aware default keys for methods:
- include_cls=True: for class methods, add the bound class identity (cls.__module__ + cls.__qualname__) to the default cache key.
- include_self=True: for instance methods, add the receiver class identity (self.__class__.__module__ + self.__class__.__qualname__) to the default cache key.
This feature relies on the conventional first parameter names cls and self.
If you use non-standard receiver names, provide a custom key_builder.
Place @cached(...) closest to the function definition. For class methods, use @classmethod above @cached(...).
Important
include_cls=True is appropriate only when the cache should vary by the bound class. In inheritance-heavy code, enabling it means parent and child classes use different cache keys even when they call the same method implementation.
Warning
include_self=True is class-scoped, not instance-scoped. It is intended for singleton services or functionally equivalent instances of the same class. If instance-specific configuration affects the result, do not rely on include_self=True; provide a custom key_builder instead.
class UserService:
@cached(ttl=60, include_self=True)
async def get_user(self, user_id: int):
return await self.repo.get(user_id)
class UserSchema:
@classmethod
@cached(ttl=60, include_cls=True)
async def resolve(cls, key: str):
return f"{cls.__name__}:{key}"
5. Exceptions and None
- Exceptions: Propagate; nothing is written to the cache.
- None: @cached does not store None by default; use skip_cache_func=lambda r: False to cache it. get_or_set without skip_cache_func stores all values (including None, via an internal sentinel).
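The skip policy can be sketched as a standalone predicate (should_skip is a hypothetical helper, mirroring the defaults described above):

```python
def should_skip(result, skip_cache_func=None):
    # Default policy: skip caching only when the result is None.
    if skip_cache_func is None:
        return result is None
    # Custom policy: skip whenever the callable returns True.
    return skip_cache_func(result)


# Default: None is not stored, everything else is.
assert should_skip(None) is True
assert should_skip([]) is False

# Cache everything, including None:
assert should_skip(None, skip_cache_func=lambda r: False) is False

# Skip empty lists as well as None:
assert should_skip([], skip_cache_func=lambda r: r is None or r == []) is True
```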
6. RedLock (best-effort)
get_or_set uses aiocache's RedLock: only one producer runs per key while the lock is held; after the lease (default 2 s) expires, waiters may run the producer too. Memory backend: per-process locking; Redis backend: distributed locking.
7. Observability
No built-in plugins. For metrics, pass aiocache Plugins in the same config (e.g. cache.setup(prefix="myapp", cache="...", plugins=[...])).
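For example, a hedged config sketch wiring aiocache's built-in plugins through setup (the plugin classes are aiocache's; the setup call mirrors the Redis example from section 3):

```python
from aiocache.plugins import HitMissRatioPlugin, TimingPlugin

from zodiac_core.cache import cache

# Plugins are passed straight through to aiocache, like any other kwarg.
cache.setup(
    prefix="myapp",
    cache="aiocache.RedisCache",
    endpoint="127.0.0.1",
    port=6379,
    plugins=[HitMissRatioPlugin(), TimingPlugin()],
)
```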
8. API Reference
Cache manager and ZodiacCache
zodiac_core.cache.manager
Unified cache layer: config (setup) + prefix + @cached decorator.
- cache = CacheManager() (module attribute)
- ZODIAC_CACHE_NAMESPACE = 'zodiac_cache' (module attribute)
- DEFAULT_CACHE_NAME = 'default' (module attribute)
ZodiacCache
Thin wrapper over aiocache BaseCache with stampede protection.
Source code in zodiac_core/cache/manager.py
backend (property)
The underlying aiocache backend instance.
close() (async)
delete(key) (async)
exists(key) (async)
get(key) (async)
get_or_set(key, producer, ttl=None, lease=2.0, skip_cache_func=None) (async)
Get from cache, or call producer and set on miss with RedLock protection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| key | str | Cache key. | required |
| producer | Callable[[], Awaitable[T]] | Async callable to compute the value. | required |
| ttl | Optional[int] | TTL in seconds. | None |
| lease | Optional[float] | Lock lease in seconds for stampede protection. | 2.0 |
| skip_cache_func | Optional[Callable[[T], bool]] | If it returns True, the produced value is not stored. | None |
set(key, value, ttl=None) (async)
Store a value in the cache with an optional TTL.
CacheManager
Singleton manager for ZodiacCache instances. Aligns with aiocache.caches and mirrors DatabaseManager.
cache (property)
The default cache instance (ZodiacCache) for get/set/get_or_set.
get_cache(name=DEFAULT_CACHE_NAME)
Return the cache instance (ZodiacCache) for the given name.
setup(prefix, *, name=DEFAULT_CACHE_NAME, default_ttl=None, **kwargs)
Configure a cache using aiocache's unified config. All kwargs are
passed through to aiocache (e.g. cache, endpoint, port,
serializer, ttl). We only set default namespace to
{ZODIAC_CACHE_NAMESPACE}:{prefix} and minimal defaults (cache class,
serializer) when omitted.
shutdown(name=None) (async)
Close cache resources and remove them from aiocache's registry.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str \| None | Optional cache name. When provided, only that cache is closed and deregistered. When omitted, all registered caches are closed. | None |
Cached decorator
zodiac_core.cache.decorators
@cached decorator: cache async or sync function result using the configured default cache.
cached(ttl=None, key_builder=None, name=None, skip_cache_func=None, include_cls=False, include_self=False)
Decorate an async or sync function to cache its return value with the configured cache. The decorated callable is always async (await the result). Sync functions are called inside the cache layer; avoid slow blocking sync work to not block the event loop.
Uses cache.get_cache(name) when name is set, otherwise cache.cache (default).
Key is built from module, qualname, and supported immutable args/kwargs
(or a custom key_builder).
TTL comes from decorator, then from the cache instance default_ttl.
Exception handling: If the wrapped function raises, the exception propagates and nothing is written to the cache.
None and skip_cache_func: By default, a return value of None is
not stored (so the next call will run the function again). This avoids
ambiguity with cache miss. To cache None, pass
skip_cache_func=lambda r: False. To skip caching other values (e.g.
empty list), pass a callable that returns True for values that must not
be cached.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ttl | Optional[int] | TTL in seconds for this function's entries. If None, uses the cache default_ttl. | None |
| key_builder | Optional[Callable[[Callable[..., Awaitable[T]], tuple, dict], str]] | Optional (fn, args, kwargs) -> str. The default supports stable immutable parameters only; provide explicitly for complex types. | None |
| name | Optional[str] | Name of the cache (from cache.setup(..., name=...)). If None, uses the default cache. | None |
| skip_cache_func | Optional[Callable[[T], bool]] | Callable(result) -> bool; if True, the result is not stored. Default skips when the result is None. | None |
| include_cls | bool | When True, class methods include the bound class identity (cls.__module__ + cls.__qualname__) in the default key. | False |
| include_self | bool | When True, instance methods include the receiver class identity (self.__class__.__module__ + self.__class__.__qualname__) in the default key. | False |