Description
`functools.lru_cache` on async functions is a well-known footgun — it caches the coroutine object, not the awaited result. A second call raises `RuntimeError: cannot reuse already awaited coroutine`.
This is a genuine cachekit advantage that should be documented in the comparison guide.
Evidence
```python
from functools import lru_cache

@lru_cache(maxsize=128)
async def fn(x):
    return x * 2

result1 = await fn(1)  # Works — returns 2
result2 = await fn(1)  # RuntimeError: cannot reuse already awaited coroutine
```
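For a self-contained reproduction of the failure above (the snippet uses top-level `await`, so it needs an event loop to run), the following script demonstrates that `lru_cache` hands back the same exhausted coroutine object on the second call:

```python
import asyncio
from functools import lru_cache

@lru_cache(maxsize=128)
async def fn(x):
    return x * 2

async def main():
    print(await fn(1))   # 2 — first call creates and awaits a fresh coroutine
    try:
        # lru_cache returns the cached coroutine object, already awaited
        await fn(1)
    except RuntimeError as exc:
        print(exc)       # cannot reuse already awaited coroutine

asyncio.run(main())
```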
cachekit handles this correctly:
```python
@cache(backend=None, ttl=300)
async def fn(x):
    return x * 2

result1 = await fn(1)  # Works — returns 2
result2 = await fn(1)  # Works — returns 2 (from cache)
```
Suggested Fix
Add an "Async Support" section to docs/comparison.md documenting:
- lru_cache: caches coroutine, not result (broken)
- cachetools: no async support
- aiocache: async-first (works)
- cachekit: works with both sync and async (auto-detected)
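To make the "auto-detected" point concrete for the docs, here is a minimal illustrative sketch (not cachekit's actual implementation) of how a decorator can detect coroutine functions with `inspect.iscoroutinefunction` and cache the awaited result rather than the coroutine object; the `result_cache` name is hypothetical:

```python
import asyncio
import functools
import inspect

def result_cache(func):
    """Sketch only: cache the awaited result for async functions,
    the plain return value for sync ones."""
    store = {}
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args):
            if args not in store:
                store[args] = await func(*args)  # cache the result, not the coroutine
            return store[args]
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args):
        if args not in store:
            store[args] = func(*args)
        return store[args]
    return sync_wrapper

@result_cache
async def double(x):
    return x * 2

async def main():
    print(await double(1))  # 2
    print(await double(1))  # 2 — served from cache, no RuntimeError

asyncio.run(main())
```

Each call through `async_wrapper` returns a fresh coroutine, so repeated awaits are safe; only the computed value is memoized.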
Found by: tests/competitive/test_head_to_head.py::TestAsyncSupport