This might be an easy question, but I am having a hard time finding the answer. How does Redis 2.0 handle running out of maximum allocated memory? How does it decide which data to remove or which data to keep in memory?
If you have the virtual memory functionality turned on (EDIT: now deprecated), then Redis starts moving the "not-so-frequently-used" data to disk when memory runs out.
If virtual memory in Redis is disabled (the default) and the maxmemory parameter is set, Redis will not use any more memory than maxmemory allows. If you leave maxmemory unset, Redis keeps allocating memory, the operating system eventually pushes it into swap, and performance drops tremendously.
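A minimal sketch of setting that cap at runtime, assuming the redis-py client and a Redis server on localhost:6379 (both are assumptions, any client works; the same setting can live in redis.conf):

    import redis

    r = redis.Redis(host="localhost", port=6379)  # assumed local, default-port server

    # Cap the server at 100 MB; equivalent to "maxmemory 100mb" in redis.conf
    r.config_set("maxmemory", "100mb")

    # Read the running value back; Redis reports it in bytes
    print(r.config_get("maxmemory"))  # {'maxmemory': '104857600'}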
Newer versions of Redis have various policies when maxmemory is reached:
volatile-lru - remove a key among the ones with an expire set, trying to remove keys not recently used.
volatile-ttl - remove a key among the ones with an expire set, trying to remove keys with a short remaining time to live.
volatile-random - remove a random key among the ones with an expire set.
allkeys-lru - like volatile-lru, but will remove every kind of key, both normal keys and keys with an expire set.
allkeys-random - like volatile-random, but will remove every kind of key, both normal keys and keys with an expire set.

If you pick a policy that only removes keys with an expire set, then once Redis runs out of memory and has nothing left it is allowed to evict, it refuses the allocation: if you try to store more data, the write command simply fails with an error.
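A minimal sketch of that failure mode, again assuming the redis-py client and a local default-port server (both assumptions). It picks a volatile-* policy and then writes keys that carry no expire, so the policy has nothing it may evict and the write is rejected:

    import redis

    r = redis.Redis(host="localhost", port=6379)  # assumed local, default-port server

    # Small cap plus a volatile-* policy: only keys with an expire are eviction candidates
    r.config_set("maxmemory", "10mb")
    r.config_set("maxmemory-policy", "volatile-lru")

    try:
        # None of these keys has a TTL, so nothing can be evicted once the cap is hit
        for i in range(100_000):
            r.set(f"plain:{i}", "x" * 1024)
    except redis.exceptions.ResponseError as err:
        # Redis answers with an OOM error instead of evicting or crashing
        print("write refused:", err)

Reads keep working in this state; only commands that would need additional memory are rejected.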
Some links for more info: