Cache (Unified Layer)

Unified cache layer on top of aiocache: one-time cache.setup, a global default cache, and stampede-protected get_or_set and @cached.

1. Core concepts

  • cache (CacheManager): singleton; cache.setup(prefix=...) once, then cache.cache (or cache.get_cache(name)) and @cached use it.
  • ZodiacCache: thin wrapper over aiocache BaseCache exposing get / set / delete / exists and get_or_set (RedLock stampede protection).
  • Namespace: cache.setup(prefix="...") → keys under zodiac_cache:{prefix}.
  • Lifecycle: await cache.shutdown(name="...") releases one named cache; await cache.shutdown() releases all registered caches.

2. Installation

pip install 'zodiac-core[cache]'

Requires aiocache>=0.12.0. For Redis etc., see aiocache docs.


3. Configuration

Call cache.setup at startup; optionally await cache.shutdown() on shutdown. Calling cache.setup(...) again with the same name is allowed only when the effective configuration is identical; different settings for an existing name raise RuntimeError. Lifecycle control is name-aware:

  • await cache.shutdown(name="...") closes only the selected named cache.
  • await cache.shutdown() closes all registered caches.

This preserves the singleton manager design for shared cache backends while letting each app or resource release only the cache it owns.

In-memory (default)

Pass only prefix (and optionally default_ttl). The default cache class and serializer are applied automatically; the namespace is always zodiac_cache:{prefix}.

from zodiac_core.cache import cache

cache.setup(prefix="myapp", default_ttl=300)

Custom backend (Redis, etc.)

Pass the same parameters that aiocache's caches.add() accepts. Only namespace and minimal defaults are injected or overridden.

cache.setup(
    prefix="myapp",
    cache="aiocache.RedisCache",
    endpoint="127.0.0.1",
    port=6379,
    default_ttl=300,
)

To reuse an existing alias config: cache.setup(prefix="myapp", **caches.get_alias_config("my_redis")). Namespace is still set to zodiac_cache:myapp.

Named caches (name)

You can register multiple caches under different names (e.g. default in-memory + a Redis cache for sessions). Use the name parameter:

  • cache.setup(prefix=..., name=...) Registers a cache under that name. Default is "default". Each name has its own backend config and namespace zodiac_cache:{prefix}. Example: cache.setup(prefix="myapp", default_ttl=300) uses name="default"; cache.setup(prefix="sessions", name="sessions", cache="aiocache.RedisCache", endpoint="...") adds a second cache.

  • cache.cache Shorthand for cache.get_cache("default") (the default cache).

  • cache.get_cache(name) Returns the ZodiacCache for that name. Use this when you have multiple caches and want to call get / set / get_or_set on a specific one.

  • @cached(..., name=...) The decorator uses that named cache instead of the default. Example: @cached(ttl=60, name="sessions") stores entries in the cache registered with name="sessions".

Typical use: one default cache (e.g. in-memory or Redis) plus an optional second cache (e.g. Redis for sessions) with a different backend or TTL; call cache.get_cache("sessions") or @cached(name="sessions") for the second one.

When cleaning up, pair named setup with named shutdown:

cache.setup(prefix="myapp", default_ttl=300)
cache.setup(prefix="sessions", name="sessions", cache="aiocache.RedisCache", endpoint="127.0.0.1")

await cache.shutdown(name="sessions")  # only the sessions cache
await cache.shutdown()  # full cleanup

FastAPI lifespan

from contextlib import asynccontextmanager
from fastapi import FastAPI
from zodiac_core.cache import cache

@asynccontextmanager
async def lifespan(app: FastAPI):
    cache.setup(prefix="myapp", default_ttl=300)
    yield
    await cache.shutdown()

app = FastAPI(lifespan=lifespan)

For a single-app service, await cache.shutdown() remains the simplest option. If your process registers multiple named caches or shares the global manager across multiple app lifecycles, prefer await cache.shutdown(name="...") for scoped cleanup.


4. Usage

get_or_set: obtain a cache instance with c = cache.cache, then await c.get_or_set("key", producer, ttl=60).

@cached: Key from module:qualname:hash(args,kwargs). The default key builder only supports stable immutable parameters (None, bool, int, float, str, bytes, and tuples of those values). Supports both async and sync functions.

Important: The decorated function always becomes asynchronous. If you decorate a sync function, you must still await the result. Avoid slow blocking work in sync functions to prevent blocking the event loop.

from zodiac_core.cache import cache, cached

cache.setup(prefix="myapp", default_ttl=300)

# Async function (standard usage)
@cached(ttl=60)
async def get_user(user_id: int):
    return await db.fetch_user(user_id)

# Sync function (now supported, but caller MUST await)
@cached(ttl=120)
def get_config(key: str):
    return {"key": key, "value": "some_value"}

# Usage:
# user = await get_user(1)
# config = await get_config("theme")  # Await is required here!

If your function takes complex parameters such as dict, list, ORM objects, request/session objects, or custom class instances, pass key_builder=... explicitly. The default key builder raises TypeError for unsupported argument types instead of guessing an unstable cache key.

Receiver-aware default keys

@cached also supports receiver-aware default keys for methods:

  • include_cls=True: For class methods, add the bound class identity (cls.__module__ + cls.__qualname__) to the default cache key.
  • include_self=True: For instance methods, add the receiver class identity (self.__class__.__module__ + self.__class__.__qualname__) to the default cache key.

This feature relies on the conventional first parameter names cls and self. If you use non-standard receiver names, provide a custom key_builder.

Place @cached(...) closest to the function definition. For class methods, use @classmethod above @cached(...).

Important

include_cls=True is appropriate only when the cache should vary by the bound class. In inheritance-heavy code, enabling it means parent and child classes will use different cache keys even when they call the same method implementation.

Warning

include_self=True is class-scoped, not instance-scoped. It is intended for singleton services or functionally equivalent instances of the same class. If instance-specific configuration affects the result, do not rely on include_self=True; provide a custom key_builder instead.

class UserService:
    @cached(ttl=60, include_self=True)
    async def get_user(self, user_id: int):
        return await self.repo.get(user_id)


class UserSchema:
    @classmethod
    @cached(ttl=60, include_cls=True)
    async def resolve(cls, key: str):
        return f"{cls.__name__}:{key}"

5. Exceptions and None

  • Exceptions: Propagate; nothing is written to the cache.
  • None: @cached does not store None by default; use skip_cache_func=lambda r: False to cache it. get_or_set without skip_cache_func stores all values (including None via internal sentinel).

6. RedLock (best-effort)

get_or_set uses aiocache RedLock: only one producer runs per key while the lock is held; after the lease (default 2 s) expires, waiting callers may run the producer as well. With the memory backend the lock is per-process; with Redis it is distributed.


7. Observability

No built-in plugins. For metrics, pass aiocache plugins in the same config (e.g. cache.setup(prefix="myapp", cache="...", plugins=[...])).


8. API Reference

Cache manager and ZodiacCache

zodiac_core.cache.manager

Unified cache layer: config (setup) + prefix + @cached decorator.

cache = CacheManager() module-attribute
ZODIAC_CACHE_NAMESPACE = 'zodiac_cache' module-attribute
DEFAULT_CACHE_NAME = 'default' module-attribute
ZodiacCache

Thin wrapper over aiocache BaseCache with stampede protection.

Source code in zodiac_core/cache/manager.py
class ZodiacCache:
    """
    Thin wrapper over aiocache BaseCache with stampede protection.
    """

    def __init__(
        self,
        backend: BaseCache,
        *,
        default_ttl: Optional[int] = None,
    ) -> None:
        self._backend = backend
        self._default_ttl = default_ttl

    @property
    def backend(self) -> BaseCache:
        """The underlying aiocache backend instance."""
        return self._backend

    async def _get_raw(self, key: str) -> Any:
        """Retrieve the raw backend value, including internal sentinels."""
        return await self._backend.get(key)

    async def get(self, key: str) -> Any:
        """Retrieve a value from the cache."""
        value = await self._get_raw(key)
        if isinstance(value, _CachedNoneSentinel):
            return None
        return value

    async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> bool:
        """Store a value in the cache with an optional TTL."""
        ttl = ttl if ttl is not None else self._default_ttl
        return await self._backend.set(key, value, ttl=ttl)

    async def delete(self, key: str) -> bool:
        """Remove a value from the cache."""
        return await self._backend.delete(key)

    async def exists(self, key: str) -> bool:
        """Check if a key exists in the cache."""
        return await self._backend.exists(key)

    async def get_or_set(
        self,
        key: str,
        producer: Callable[[], Awaitable[T]],
        ttl: Optional[int] = None,
        lease: Optional[float] = 2.0,
        skip_cache_func: Optional[Callable[[T], bool]] = None,
    ) -> T:
        """
        Get from cache, or call producer and set on miss with RedLock protection.

        Args:
            key: Cache key.
            producer: Async callable to compute the value.
            ttl: TTL in seconds.
            lease: Lock lease in seconds for stampede protection.
            skip_cache_func: If it returns True, the produced value is not stored.
        """
        value = await self._get_raw(key)
        if isinstance(value, _CachedNoneSentinel):
            return None
        if value is not None:
            return value

        lease_sec = lease if lease is not None and lease > 0 else 2.0
        async with RedLock(self._backend, key, lease=lease_sec):
            value = await self._get_raw(key)
            if isinstance(value, _CachedNoneSentinel):
                return None
            if value is not None:
                return value

            fresh = await producer()
            if skip_cache_func is not None and skip_cache_func(fresh):
                return fresh

            to_store = _CACHED_NONE if fresh is None else fresh
            await self.set(key, to_store, ttl=ttl)
            return fresh

    async def close(self) -> None:
        """Close the underlying backend connections."""
        await self._backend.close()
backend property

The underlying aiocache backend instance.

close() async

Close the underlying backend connections.

Source code in zodiac_core/cache/manager.py
async def close(self) -> None:
    """Close the underlying backend connections."""
    await self._backend.close()
delete(key) async

Remove a value from the cache.

Source code in zodiac_core/cache/manager.py
async def delete(self, key: str) -> bool:
    """Remove a value from the cache."""
    return await self._backend.delete(key)
exists(key) async

Check if a key exists in the cache.

Source code in zodiac_core/cache/manager.py
async def exists(self, key: str) -> bool:
    """Check if a key exists in the cache."""
    return await self._backend.exists(key)
get(key) async

Retrieve a value from the cache.

Source code in zodiac_core/cache/manager.py
async def get(self, key: str) -> Any:
    """Retrieve a value from the cache."""
    value = await self._get_raw(key)
    if isinstance(value, _CachedNoneSentinel):
        return None
    return value
get_or_set(key, producer, ttl=None, lease=2.0, skip_cache_func=None) async

Get from cache, or call producer and set on miss with RedLock protection.

Parameters:

Name Type Description Default
key str

Cache key.

required
producer Callable[[], Awaitable[T]]

Async callable to compute the value.

required
ttl Optional[int]

TTL in seconds.

None
lease Optional[float]

Lock lease in seconds for stampede protection.

2.0
skip_cache_func Optional[Callable[[T], bool]]

If it returns True, the produced value is not stored.

None
Source code in zodiac_core/cache/manager.py
async def get_or_set(
    self,
    key: str,
    producer: Callable[[], Awaitable[T]],
    ttl: Optional[int] = None,
    lease: Optional[float] = 2.0,
    skip_cache_func: Optional[Callable[[T], bool]] = None,
) -> T:
    """
    Get from cache, or call producer and set on miss with RedLock protection.

    Args:
        key: Cache key.
        producer: Async callable to compute the value.
        ttl: TTL in seconds.
        lease: Lock lease in seconds for stampede protection.
        skip_cache_func: If it returns True, the produced value is not stored.
    """
    value = await self._get_raw(key)
    if isinstance(value, _CachedNoneSentinel):
        return None
    if value is not None:
        return value

    lease_sec = lease if lease is not None and lease > 0 else 2.0
    async with RedLock(self._backend, key, lease=lease_sec):
        value = await self._get_raw(key)
        if isinstance(value, _CachedNoneSentinel):
            return None
        if value is not None:
            return value

        fresh = await producer()
        if skip_cache_func is not None and skip_cache_func(fresh):
            return fresh

        to_store = _CACHED_NONE if fresh is None else fresh
        await self.set(key, to_store, ttl=ttl)
        return fresh
set(key, value, ttl=None) async

Store a value in the cache with an optional TTL.

Source code in zodiac_core/cache/manager.py
async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> bool:
    """Store a value in the cache with an optional TTL."""
    ttl = ttl if ttl is not None else self._default_ttl
    return await self._backend.set(key, value, ttl=ttl)
CacheManager

Singleton manager for ZodiacCache instances. Aligns with aiocache.caches and mirrors DatabaseManager.

Source code in zodiac_core/cache/manager.py
class CacheManager:
    """
    Singleton manager for ZodiacCache instances.
    Aligns with aiocache.caches and mirrors DatabaseManager.
    """

    _instance = None

    def __new__(cls) -> "CacheManager":
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._wrappers: Dict[str, ZodiacCache] = {}
            cls._instance._setup_configs: Dict[str, Dict[str, Any]] = {}
        return cls._instance

    def get_cache(self, name: str = DEFAULT_CACHE_NAME) -> ZodiacCache:
        """Return the cache instance (ZodiacCache) for the given name."""
        if name not in self._wrappers:
            try:
                backend = aiocaches.get(name)
            except Exception as e:
                raise RuntimeError(f"Cache '{name}' is not initialized: {e}") from e
            setup_config = self._setup_configs.get(name, {})
            self._wrappers[name] = ZodiacCache(
                backend=backend,
                default_ttl=setup_config.get("default_ttl"),
            )
        return self._wrappers[name]

    @property
    def cache(self) -> ZodiacCache:
        """The default cache instance (ZodiacCache) for get/set/get_or_set."""
        return self.get_cache(DEFAULT_CACHE_NAME)

    def setup(
        self,
        prefix: str,
        *,
        name: str = DEFAULT_CACHE_NAME,
        default_ttl: Optional[int] = None,
        **kwargs: Any,
    ) -> None:
        """
        Configure a cache using aiocache's unified config. All ``kwargs`` are
        passed through to aiocache (e.g. ``cache``, ``endpoint``, ``port``,
        ``serializer``, ``ttl``). We only set default ``namespace`` to
        ``{ZODIAC_CACHE_NAMESPACE}:{prefix}`` and minimal defaults (cache class,
        serializer) when omitted.
        """
        config = dict(kwargs)
        config["namespace"] = f"{ZODIAC_CACHE_NAMESPACE}:{prefix}"  # always apply our namespace
        config.setdefault("cache", "aiocache.SimpleMemoryCache")
        config.setdefault("serializer", {"class": "aiocache.serializers.PickleSerializer"})

        if name in self._wrappers:
            existing = self._setup_configs.get(name)
            current = {"default_ttl": default_ttl, "config": config}
            if existing == current:
                logger.debug(f"Cache '{name}' is already configured with the same settings, skipping.")
                return
            raise RuntimeError(f"Cache '{name}' is already configured with different settings")

        aiocaches.add(name, config)
        instance = aiocaches.get(name)
        self._wrappers[name] = ZodiacCache(backend=instance, default_ttl=default_ttl)
        self._setup_configs[name] = {"default_ttl": default_ttl, "config": deepcopy(config)}
        logger.info(f"Cache '{name}' initialized with prefix={prefix}")

    async def shutdown(self, name: str | None = None) -> None:
        """
        Close cache resources and remove them from aiocache's registry.

        Args:
            name: Optional cache name. When provided, only that cache is closed
                  and deregistered. When omitted, all registered caches are closed.
        """
        if name is not None:
            wrapper = self._wrappers.pop(name, None)
            if wrapper is not None:
                await wrapper.close()
            getattr(aiocaches, "_caches", {}).pop(name, None)
            getattr(aiocaches, "_config", {}).pop(name, None)
            self._setup_configs.pop(name, None)
            return

        for cache_name, wrapper in list(self._wrappers.items()):
            await wrapper.close()
            getattr(aiocaches, "_caches", {}).pop(cache_name, None)
            getattr(aiocaches, "_config", {}).pop(cache_name, None)
            del self._wrappers[cache_name]
            self._setup_configs.pop(cache_name, None)
cache property

The default cache instance (ZodiacCache) for get/set/get_or_set.

get_cache(name=DEFAULT_CACHE_NAME)

Return the cache instance (ZodiacCache) for the given name.

Source code in zodiac_core/cache/manager.py
def get_cache(self, name: str = DEFAULT_CACHE_NAME) -> ZodiacCache:
    """Return the cache instance (ZodiacCache) for the given name."""
    if name not in self._wrappers:
        try:
            backend = aiocaches.get(name)
        except Exception as e:
            raise RuntimeError(f"Cache '{name}' is not initialized: {e}") from e
        setup_config = self._setup_configs.get(name, {})
        self._wrappers[name] = ZodiacCache(
            backend=backend,
            default_ttl=setup_config.get("default_ttl"),
        )
    return self._wrappers[name]
setup(prefix, *, name=DEFAULT_CACHE_NAME, default_ttl=None, **kwargs)

Configure a cache using aiocache's unified config. All kwargs are passed through to aiocache (e.g. cache, endpoint, port, serializer, ttl). We only set default namespace to {ZODIAC_CACHE_NAMESPACE}:{prefix} and minimal defaults (cache class, serializer) when omitted.

Source code in zodiac_core/cache/manager.py
def setup(
    self,
    prefix: str,
    *,
    name: str = DEFAULT_CACHE_NAME,
    default_ttl: Optional[int] = None,
    **kwargs: Any,
) -> None:
    """
    Configure a cache using aiocache's unified config. All ``kwargs`` are
    passed through to aiocache (e.g. ``cache``, ``endpoint``, ``port``,
    ``serializer``, ``ttl``). We only set default ``namespace`` to
    ``{ZODIAC_CACHE_NAMESPACE}:{prefix}`` and minimal defaults (cache class,
    serializer) when omitted.
    """
    config = dict(kwargs)
    config["namespace"] = f"{ZODIAC_CACHE_NAMESPACE}:{prefix}"  # always apply our namespace
    config.setdefault("cache", "aiocache.SimpleMemoryCache")
    config.setdefault("serializer", {"class": "aiocache.serializers.PickleSerializer"})

    if name in self._wrappers:
        existing = self._setup_configs.get(name)
        current = {"default_ttl": default_ttl, "config": config}
        if existing == current:
            logger.debug(f"Cache '{name}' is already configured with the same settings, skipping.")
            return
        raise RuntimeError(f"Cache '{name}' is already configured with different settings")

    aiocaches.add(name, config)
    instance = aiocaches.get(name)
    self._wrappers[name] = ZodiacCache(backend=instance, default_ttl=default_ttl)
    self._setup_configs[name] = {"default_ttl": default_ttl, "config": deepcopy(config)}
    logger.info(f"Cache '{name}' initialized with prefix={prefix}")
shutdown(name=None) async

Close cache resources and remove them from aiocache's registry.

Parameters:

Name Type Description Default
name str | None

Optional cache name. When provided, only that cache is closed and deregistered. When omitted, all registered caches are closed.

None
Source code in zodiac_core/cache/manager.py
async def shutdown(self, name: str | None = None) -> None:
    """
    Close cache resources and remove them from aiocache's registry.

    Args:
        name: Optional cache name. When provided, only that cache is closed
              and deregistered. When omitted, all registered caches are closed.
    """
    if name is not None:
        wrapper = self._wrappers.pop(name, None)
        if wrapper is not None:
            await wrapper.close()
        getattr(aiocaches, "_caches", {}).pop(name, None)
        getattr(aiocaches, "_config", {}).pop(name, None)
        self._setup_configs.pop(name, None)
        return

    for cache_name, wrapper in list(self._wrappers.items()):
        await wrapper.close()
        getattr(aiocaches, "_caches", {}).pop(cache_name, None)
        getattr(aiocaches, "_config", {}).pop(cache_name, None)
        del self._wrappers[cache_name]
        self._setup_configs.pop(cache_name, None)

Cached decorator

zodiac_core.cache.decorators

@cached decorator: cache async or sync function result using the configured default cache.

cached(ttl=None, key_builder=None, name=None, skip_cache_func=None, include_cls=False, include_self=False)

Decorate an async or sync function to cache its return value with the configured cache. The decorated callable is always async (await the result). Sync functions are called inside the cache layer; avoid slow blocking sync work to not block the event loop.

Uses cache.get_cache(name) when name is set, otherwise cache.cache (default). Key is built from module, qualname, and supported immutable args/kwargs (or a custom key_builder). TTL comes from decorator, then from the cache instance default_ttl.

Exception handling: If the wrapped function raises, the exception propagates and nothing is written to the cache.

None and skip_cache_func: By default, a return value of None is not stored (so the next call will run the function again). This avoids ambiguity with cache miss. To cache None, pass skip_cache_func=lambda r: False. To skip caching other values (e.g. empty list), pass a callable that returns True for values that must not be cached.

Parameters:

Name Type Description Default
ttl Optional[int]

TTL in seconds for this function's entries. If None, uses cache default_ttl.

None
key_builder Optional[Callable[[Callable[..., Awaitable[T]], tuple, dict], str]]

Optional (fn, args, kwargs) -> str. Default supports stable immutable parameters only; provide explicitly for complex types.

None
name Optional[str]

Name of the cache (from cache.setup(..., name=...)). If None, uses default.

None
skip_cache_func Optional[Callable[[T], bool]]

Callable(result) -> bool; if True, result is not stored. Default is to skip when result is None.

None
include_cls bool

When True, class methods include the receiver class identity (cls.__module__ + cls.__qualname__) in the default cache key.

False
include_self bool

When True, instance methods include the receiver class identity (self.__class__.__module__ + self.__class__.__qualname__) in the default cache key. This is suitable only when instances of the same class are functionally equivalent for the cached method.

False
Source code in zodiac_core/cache/decorators.py
def cached(
    ttl: Optional[int] = None,
    key_builder: Optional[Callable[[Callable[..., Awaitable[T]], tuple, dict], str]] = None,
    name: Optional[str] = None,
    skip_cache_func: Optional[Callable[[T], bool]] = None,
    include_cls: bool = False,
    include_self: bool = False,
) -> Callable[[Callable[..., Awaitable[T]]], Callable[..., Awaitable[T]]]:
    """
    Decorate an async or sync function to cache its return value with the configured cache.
    The decorated callable is always async (await the result). Sync functions are called
    inside the cache layer; avoid slow blocking sync work to not block the event loop.

    Uses ``cache.get_cache(name)`` when ``name`` is set, otherwise ``cache.cache`` (default).
    Key is built from module, qualname, and supported immutable args/kwargs
    (or a custom key_builder).
    TTL comes from decorator, then from the cache instance default_ttl.

    **Exception handling:** If the wrapped function raises, the exception
    propagates and nothing is written to the cache.

    **None and skip_cache_func:** By default, a return value of ``None`` is
    *not* stored (so the next call will run the function again). This avoids
    ambiguity with cache miss. To cache ``None``, pass
    ``skip_cache_func=lambda r: False``. To skip caching other values (e.g.
    empty list), pass a callable that returns True for values that must not
    be cached.

    Args:
        ttl: TTL in seconds for this function's entries. If None, uses cache default_ttl.
        key_builder: Optional (fn, args, kwargs) -> str. Default supports stable
            immutable parameters only; provide explicitly for complex types.
        name: Name of the cache (from cache.setup(..., name=...)). If None, uses default.
        skip_cache_func: Callable(result) -> bool; if True, result is not stored.
            Default is to skip when result is None.
        include_cls: When True, class methods include the receiver class identity
            (`cls.__module__` + `cls.__qualname__`) in the default cache key.
        include_self: When True, instance methods include the receiver class identity
            (`self.__class__.__module__` + `self.__class__.__qualname__`) in the
            default cache key. This is suitable only when instances of the same
            class are functionally equivalent for the cached method.
    """

    def decorator(fn: Callable[..., Awaitable[T]]) -> Callable[..., Awaitable[T]]:
        if key_builder is None:

            def builder(inner_fn: Callable[..., Awaitable[Any]], args: tuple, kwargs: dict) -> str:
                return _default_key_builder(
                    inner_fn,
                    args,
                    kwargs,
                    include_cls=include_cls,
                    include_self=include_self,
                )
        else:
            builder = key_builder
        skip = skip_cache_func if skip_cache_func is not None else _skip_none

        @wraps(fn)
        async def wrapper(*args: Any, **kwargs: Any) -> T:
            backend = _default_cache_manager.get_cache(name) if name is not None else _default_cache_manager.cache
            key = builder(fn, args, kwargs)

            async def producer() -> T:
                if inspect.iscoroutinefunction(fn):
                    return await fn(*args, **kwargs)
                return fn(*args, **kwargs)

            return await backend.get_or_set(key, producer, ttl=ttl, skip_cache_func=skip)

        return wrapper

    return decorator