Amazon RDS (PostgreSQL): Optimize memory usage

Adagyo · Jan 6, 2016 · Viewed 8.3k times

There is something I don't really understand about Amazon RDS (the PostgreSQL version). Some queries take a long time to return their results. I have set all the relevant indexes (as confirmed with EXPLAIN), so I don't think it's due to my schema design.

I do not use a big machine (m3.xlarge), as bigger ones are too expensive. My database size is about 300GB.

It seems that Postgres does not use all the available memory (only ~5GB; the "Freeable memory" graph in the console shows there are always ~10GB freeable...). I tried tuning my parameter group as proposed by tune-your-postgres-rds-instance, in particular setting effective_cache_size to 70%, but it does not change anything.

I'm probably wrong somewhere... Any idea ?

Answer

David Kerr · Jan 6, 2016

To make more memory available to your queries you would tune work_mem. There are implications to doing that, since it is allocated per sort or hash operation, per backend - so total usage can be a multiple of the value you set.
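As a sketch of what that tuning looks like in practice (the 64MB figure below is purely illustrative, not a recommendation for this workload):

```sql
-- Check the current value (the PostgreSQL default is only 4MB).
SHOW work_mem;

-- Raise it for the current session only. This is safer than a global
-- change, because every sort/hash node in every backend can use up to
-- this much memory.
SET work_mem = '64MB';

-- Re-run the slow query and compare plans: a "Sort Method: external
-- merge  Disk" line turning into "quicksort  Memory" means the sort
-- now fits in work_mem.
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your slow query here
```

On RDS you can also set work_mem globally via the parameter group, but per-session (or per-role) settings keep the blast radius small.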

effective_cache_size doesn't actually allocate any memory at all. It's a planner parameter: a hint telling the optimizer how much of the database it can assume is cached (shared_buffers plus the OS cache), which biases it toward index scans.
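You can see that it is planner-only - changing it affects plan choice, never memory consumption (the value below is just an example):

```sql
-- Inspect the current planner hint.
SHOW effective_cache_size;

-- Changing it per-session allocates nothing; it only changes the
-- planner's cost estimates, possibly favoring index scans.
SET effective_cache_size = '10GB';
EXPLAIN SELECT ...;  -- compare this plan against the previous one
```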

"Freeable memory" is actually a good sign - it means that memory is currently in use (most likely on Postgres's behalf) as operating system page cache, which could be reclaimed if needed.

You can increase shared_buffers to let Postgres use more of its own memory for caching, but there are limits to its effectiveness, which is why you usually don't want to dedicate more than about 25% of available memory to it.
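On RDS, shared_buffers can only be changed through the parameter group (it requires a restart), but you can inspect the current setting from a session. Note that, at the time of writing, the RDS default formula already lands at roughly that 25% mark:

```sql
-- Inspect the current setting; RDS's default parameter-group formula,
-- {DBInstanceClassMemory/32768}, expresses the value in 8kB pages and
-- works out to about 25% of instance RAM.
SHOW shared_buffers;

-- For an m3.xlarge (15 GiB RAM) that is roughly 3.75 GiB, which lines
-- up with the ~5GB of Postgres memory usage you observed.
```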