I would like to implement a decorator that provides per-request caching to any method, not just views. Here is an example use case.
I have a custom tag that determines if a record in a long list of records is a "favorite". In order to check if an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check that cached list against each record.
One solution is to get all the favorites in the view, and then pass that set into the template, and then into each tag call.
Alternatively, the tag could perform the query itself, but only the first time it's called, and cache the results for subsequent calls. The upside is that you can use this tag from any template, on any view, without altering the view.
With the existing caching mechanism, you could just cache the result for 50 ms and assume that would correlate to the current request. I want to make that correlation reliable.
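For example, with the standard cache API the fragile version would look something like this (just a sketch; get_favorites is my existing query helper, and the 50 ms timeout is exactly the part I don't trust):

from django.core.cache import cache

def get_favorites_cached(user):
    key = 'favorites:%s' % user.pk
    favorites = cache.get(key)
    if favorites is None:
        favorites = get_favorites(user)  # the real database query
        cache.set(key, favorites, 0.05)  # 50 ms; float timeouts work with the locmem backend
    return favorites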
Here is an example of the tag I currently have.
@register.filter()
def is_favorite(record, request):

    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        favorites = get_favorites(request.user)
        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post

    return record in favorites
Is there a way to get the current request object in Django without passing it around? From a tag, I could just pass in request, which will always exist. But I would like to use this decorator from other functions.
Is there an existing implementation of a per-request cache?
Using a custom middleware, you can get a Django cache instance that is guaranteed to be cleared for each request.
This is what I used in a project:
from threading import currentThread

from django.core.cache.backends.locmem import LocMemCache

_request_cache = {}
_installed_middleware = False

def get_request_cache():
    assert _installed_middleware, 'RequestCacheMiddleware not loaded'
    return _request_cache[currentThread()]

# LocMemCache is a threadsafe local memory cache
class RequestCache(LocMemCache):
    def __init__(self):
        name = 'locmemcache@%i' % hash(currentThread())
        params = dict()
        super(RequestCache, self).__init__(name, params)

class RequestCacheMiddleware(object):
    def __init__(self):
        global _installed_middleware
        _installed_middleware = True

    def process_request(self, request):
        cache = _request_cache.get(currentThread()) or RequestCache()
        _request_cache[currentThread()] = cache
        cache.clear()
To use the middleware, register it in settings.py, e.g.:
MIDDLEWARE_CLASSES = (
    ...
    'myapp.request_cache.RequestCacheMiddleware',
)
You may then use the cache as follows:
from myapp.request_cache import get_request_cache
cache = get_request_cache()
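For example, the is_favorite filter from the question could be rewritten on top of it, which avoids stashing data on request.POST entirely (a sketch; get_favorites is the asker's own helper):

from django import template

from myapp.request_cache import get_request_cache

register = template.Library()

@register.filter
def is_favorite(record, request):
    cache = get_request_cache()
    favorites = cache.get('favorites')
    if favorites is None:
        # Hits the database at most once per request; the middleware
        # clears this cache when the next request starts.
        favorites = get_favorites(request.user)
        cache.set('favorites', favorites)
    return record in favorites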
Refer to the Django low-level cache API docs for more information.
It should be easy to modify a memoize decorator to use the request cache. Have a look at the Python Decorator Library for a good example of a memoize decorator.
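A minimal sketch of such a decorator built on get_request_cache() (the request_cached name and key format are my own choices; the wrapped function's arguments need a stable repr):

from functools import wraps

from myapp.request_cache import get_request_cache

def request_cached(func):
    """Memoize func for the duration of the current request."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        cache = get_request_cache()
        key = 'memo:%s:%r:%r' % (func.__name__, args, sorted(kwargs.items()))
        result = cache.get(key)
        if result is None:
            # A None return value is treated as a miss and recomputed.
            result = func(*args, **kwargs)
            cache.set(key, result)
        return result
    return wrapper

With that in place, get_favorites(user) can be decorated with @request_cached and called from the tag (or any other function) without passing the results around.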