Tuning PostgreSQL on Windows desktop to take advantage of 24GB RAM

Russell Taylor · Feb 9, 2014 · Viewed 7.1k times

I am new to tuning PostgreSQL but have read this standard guide: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and have used pgtune to get some configuration recommendations. I am running PostgreSQL 9.3 on Windows 8, doing data analytics, on a desktop with 24GB RAM, an i7 four-core processor, and a 7200rpm HDD with a 32GB SSD cache using Intel Smart Response.

It seems as though PostgreSQL is not taking full advantage of the machine, and I am wondering what more I might need to do in terms of tuning.

pgtune made the following changes to postgresql.conf:

  • default_statistics_target = 100
  • maintenance_work_mem = 480MB
  • constraint_exclusion = on
  • checkpoint_completion_target = 0.9
  • effective_cache_size = 2816MB
  • work_mem = 96MB
  • wal_buffers = 32MB
  • checkpoint_segments = 64
  • shared_buffers = 960MB
  • max_connections = 20
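
For reference, I believe the values actually in effect can be double-checked with the standard SHOW command or the pg_settings view; nothing here is specific to my setup:

SHOW shared_buffers;
SHOW work_mem;
-- or all of the changed settings at once:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
               'effective_cache_size', 'wal_buffers', 'checkpoint_segments',
               'max_connections');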

Now I run this complex self-join, count, and GROUP BY query on the 5GB table "training", which has 100 million rows and four integer columns:

SELECT t1.m_id, t2.m_id, count(*)
FROM training t1, training t2
WHERE t1.u_id = t2.u_id AND t1.m_id < t2.m_id
GROUP BY t1.m_id, t2.m_id

EXPLAIN revealed the following query plan:

GroupAggregate  (cost=4984590388.65..5216672318.82 rows=25381444 width=8)
  ->  Sort  (cost=4984590388.65..5042547417.59 rows=23182811573 width=8)
        Sort Key: t1.m_id, t2.m_id
        ->  Nested Loop  (cost=0.57..676446040.92 rows=23182811573 width=8)
              ->  Seq Scan on training t1  (cost=0.00..1621754.12 rows=99072112 width=8)
              ->  Index Only Scan using training_u_id_m_id_idx on training t2  (cost=0.57..4.90 rows=191 width=8)
                    Index Cond: ((u_id = t1.u_id) AND (m_id > t1.m_id))

It has been running for 8 hours now, but what interested me is what Task Manager revealed. The PostgreSQL Server process is using only:

  • 15% CPU
  • 6.1% Memory (about 512MB)
  • 3.5% Disk

No other processes are taking significant amounts of resources. It surprises me that PostgreSQL would not use more of the available resources given the complexity of the query. Does anyone have an idea of what might be going on? Do my pgtune values seem good?

I've also done a little research, which told me that:

  1. On Windows, shared_buffers should not be greater than 512MB, and the operating system cache should be relied on instead. Question: Do I have to somehow tell Windows to allocate system cache to PostgreSQL, or will this happen automatically if PostgreSQL requests it?
  2. work_mem allows the database server to do sorts in RAM if it is large enough. Question: Is my work_mem large enough for this? How can I tell whether sorts are being done in RAM or on disk? (A sketch of what I think I would run to check this is below.)
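
From what I have read, EXPLAIN (ANALYZE, BUFFERS) prints a "Sort Method" line, and log_temp_files makes PostgreSQL log any temporary files it writes, so I assume something like the following would show whether the sort spills to disk (ANALYZE actually executes the query, so I would try it on a subset of the data first):

-- "Sort Method: quicksort  Memory: ..."     => the sort fit in work_mem
-- "Sort Method: external merge  Disk: ..."  => the sort spilled to temporary files
EXPLAIN (ANALYZE, BUFFERS)
SELECT t1.m_id, t2.m_id, count(*)
FROM training t1, training t2
WHERE t1.u_id = t2.u_id AND t1.m_id < t2.m_id
GROUP BY t1.m_id, t2.m_id;

-- alternatively, log_temp_files = 0 in postgresql.conf logs every temporary file
-- (including those written by sorts) together with its size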

I would appreciate any insight at all to help speed up this query. Thanks!

Answer

jjanes · Feb 10, 2014

I think effective_cache_size sounds way too small; try 20GB. Also, for an analytics workload, work_mem is quite small. I'd set it to at least 1GB if you are sure you won't have a lot of connections (and lowering max_connections even further will protect you from accidentally running a lot of them).
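
A rough sketch of how those values could be applied; the numbers are just the ones suggested above, and work_mem / effective_cache_size can also be overridden per session so they only affect your analytics connection:

-- per-session overrides (affect only the current connection)
SET work_mem = '1GB';
SET effective_cache_size = '20GB';

-- or globally in postgresql.conf, followed by a reload:
--   work_mem = 1GB
--   effective_cache_size = 20GB
--   max_connections = 10    -- lowering this one requires a restart
SELECT pg_reload_conf();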

A single 7200rpm HDD seems quite inadequate for an analytics workload. I'm not familiar with the Intel Smart Response SSD cache; maybe that can help make up for it. Can you tell how much of your 5GB table is getting cached on it?

You might also want to increase effective_io_concurrency; not knowing how the SSD cache performs, I don't know how much good that will do, but it could help and probably won't hurt.

The low memory usage is OK. Windows should be using the memory to cache the file data, which helps PostgreSQL a lot but is not charged to the postgres process.

I don't know where in Task Manager you find a "3.5% Disk" figure; I can't find such a metric.

PostgreSQL 9.3 does not parallelize a single query to multiple CPUs (parallel query was added in version 9.6), so 15% CPU usage is not that far from being totally CPU bound.
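
If you do eventually move to 9.6 or later, where parallel query exists, the relevant per-session knob is max_parallel_workers_per_gather (it does not exist in 9.3); a sketch:

-- PostgreSQL 9.6+ only: allow up to 4 parallel workers per Gather node
SET max_parallel_workers_per_gather = 4;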