I've just upgraded my Heroku Postgres database from the Kappa plan (800MB RAM, Postgres 9.1) to the Ronin plan (1.7GB RAM, Postgres 9.2), but performance has degraded.
Following the guide here, I checked the cache hit rate, and it's even lower than it was on our Kappa database (now ~57%, previously ~69%). Our app design should be reasonably sound, since we've seen a cache hit rate of ~99% before.
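For reference, I'm measuring the hit rate with the usual query against pg_statio_user_tables, along the lines of what the guide suggests:

    -- Ratio of buffer-cache hits to total heap block reads for user tables
    SELECT
      sum(heap_blks_read) AS heap_read,
      sum(heap_blks_hit)  AS heap_hit,
      sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS ratio
    FROM pg_statio_user_tables;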
The recommendation is that the data set should fit into memory, which shouldn't be a problem now: our data size is 1.27GB, so at least most of it should fit.
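(For what it's worth, that size figure comes from something like:

    -- Total on-disk size of the current database
    SELECT pg_size_pretty(pg_database_size(current_database()));

)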
Is the low cache hit rate due to the data size, or is there something else I can look into? Or is it simply a case of the database cache not being fully warmed up yet? (It's been almost two days.)
If you have plenty of memory and aren't running much else on the database server, one setting you may want to change is shared_buffers. Shared buffers cache frequently used data so as to maximize throughput when not all of the database fits in memory.
Unfortunately, this cache does not perform as well as the OS cache. If your data fits easily in memory, make sure that effective_cache_size is high enough, and then try reducing shared_buffers.
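To illustrate: on a self-managed Postgres you can check the current values with SHOW and adjust them in postgresql.conf (on Heroku these are managed by the plan tier, so this is for reference only). The numbers below are placeholders for a machine with ~1.7GB RAM, not recommendations:

    -- Inspect the current settings
    SHOW shared_buffers;
    SHOW effective_cache_size;

    # postgresql.conf (changing shared_buffers requires a restart)
    shared_buffers = 400MB          # Postgres's own buffer cache
    effective_cache_size = 1200MB   # planner hint: shared_buffers plus OS cache;
                                    # allocates no memory, only guides plan choices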
Note that this is not a magic bullet. The appropriate size of shared_buffers depends on how much data you have, how much space it takes up, the kinds of queries you run, and how much memory is going to things like sorts. Expect to play around with this from time to time to find the sweet spot for your current setup and database.
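While you experiment, the pg_buffercache contrib extension can show what is actually occupying shared_buffers (assuming you can install extensions, which may not be possible on all Heroku plans). This is essentially the example query from the Postgres documentation:

    -- Requires the pg_buffercache contrib extension
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Which relations occupy the most shared buffers right now
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

If a handful of hot tables and indexes dominate the buffer list and still get evicted, that's a sign shared_buffers is too small for your working set; if most buffers sit unused, it may be too large and starving the OS cache.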