pg_dump on Database throwing error 'out of shared memory'

pgollangi · May 6, 2016

I'm having a problem taking a backup of a database that contains around 50 schemas, each with around 100 tables.

pg_dump throws the error below, suggesting that I increase max_locks_per_transaction:

pg_dump: WARNING:  out of shared memory
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: The command was: SELECT tableoid, oid, prsname, prsnamespace, prsstart::oid, prstoken::oid, prsend::oid, prsheadline::oid, prslextype::oid FROM pg_ts_parser

Updating max_locks_per_transaction to 256 in postgresql.conf did not solve the problem.
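For reference, a minimal sketch of what that change looks like and how to confirm it is active; as far as I know, max_locks_per_transaction can only be changed at server start, so the running value should be verified after a restart (the database name mydb is a placeholder):

    # postgresql.conf
    max_locks_per_transaction = 256    # default is 64; takes effect only after a server restart

    # verify the value the running server is actually using
    psql mydb -c "SHOW max_locks_per_transaction;"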

Is there anything else that could be causing this problem?

Edit (07 May, 2016):

PostgreSQL version = 9.1

Operating system = Ubuntu 14.04.2 LTS

shared_buffers in postgresql.conf = 2GB

Edit (09 May, 2016):

My postgresql.conf:

maintenance_work_mem = 640MB
wal_buffers = 64MB
shared_buffers = 2GB
max_connections = 100
max_locks_per_transaction = 10000

Answer

pgollangi · May 7, 2016

I solved this problem by backing up each schema individually. As the size of the database grows (whether in the number of schemas or the number of tables), it becomes hard to take a backup with a single pg_dump run, because pg_dump locks every table it dumps within one transaction.
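Roughly speaking (based on how PostgreSQL sizes its shared lock table, and assuming max_prepared_transactions is at its default of 0), the lock table holds about max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks, while a full pg_dump needs about one lock per table:

    capacity ≈ max_locks_per_transaction * (max_connections + max_prepared_transactions)
             ≈ 64 * (100 + 0) = 6400 lock slots with the default setting of 64
    demand   ≈ 50 schemas * 100 tables = 5000 table locks for one full dump,
               shared with whatever locks other sessions hold at the same time

Dumping one schema at a time keeps the per-run demand at roughly 100 locks instead.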

I made the following modifications to my backup script to take schema-wise backups (a complete sketch of the script follows the steps below):

  1. Before running pg_dump, list all of the database's schemas into a file, so that we can iterate over them and back up each schema separately.

    Here is the command to list all schemas to a file:

    psql <db_name> -o <output_file> < <sql_to_list_schema>

    where sql_to_list_schema contains:

    SELECT n.nspname FROM pg_catalog.pg_namespace n WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';

  2. Now read output_file line by line and take a backup of each schema:

    pg_dump <db_name> -f <backup_file> -i -x -O -R -n <schema_name_read_from_file>
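
Putting the two steps together, a minimal sketch of the whole script might look like this (the database name, schema list file, and backup directory are placeholders; the pg_dump flags are the ones used above, and psql's -A -t flags just keep the schema list free of headers and footers):

    #!/bin/bash
    # Schema-wise backup: dump each schema separately so a single pg_dump run
    # never has to lock every table in the database at once.

    DB_NAME="mydb"                  # placeholder database name
    SCHEMA_FILE="/tmp/schemas.txt"  # the output_file from step 1
    BACKUP_DIR="/var/backups/$DB_NAME"
    mkdir -p "$BACKUP_DIR"

    # Step 1: list all non-system schemas, one per line
    psql "$DB_NAME" -A -t -o "$SCHEMA_FILE" -c \
      "SELECT n.nspname FROM pg_catalog.pg_namespace n
       WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema';"

    # Step 2: dump each schema into its own file
    while read -r schema; do
        pg_dump "$DB_NAME" -f "$BACKUP_DIR/$schema.sql" -i -x -O -R -n "$schema"
    done < "$SCHEMA_FILE"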