I need to delete about 2 million rows from my PG database. I have a list of IDs that I need to delete. However, every way I try to do this takes days.
I tried putting the IDs in a table and deleting in batches of 100. Four days later, this is still running with only 297,268 rows deleted. (I had to select 100 IDs from the ID table, delete WHERE IN that list, then delete those 100 rows from the ID table.)
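Each batch was roughly equivalent to something like this (a sketch, not the exact queries; ids is the table holding the IDs to delete):
CREATE TEMP TABLE batch AS
SELECT id FROM ids LIMIT 100;                          -- pick the next 100 IDs
DELETE FROM tbl WHERE id IN (SELECT id FROM batch);    -- delete them from the big table
DELETE FROM ids WHERE id IN (SELECT id FROM batch);    -- remove the processed IDs
DROP TABLE batch;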
I tried:
DELETE FROM tbl WHERE id IN (SELECT id FROM ids)
That's taking forever, too. It's hard to gauge how long, since I can't see its progress until it's done, but the query was still running after 2 days.
I'm basically looking for the most effective way to delete from a table when I know the specific IDs to delete, and there are millions of them.
It all depends ...
Drop all indexes (except the one on the ID, which you need for the delete).
Recreate them afterwards; that is much faster than updating the indexes incrementally for every deleted row.
Check whether you have triggers that can safely be dropped or disabled temporarily.
Do foreign keys reference your table? Can they be dropped, at least temporarily?
Depending on your autovacuum settings, it may help to run VACUUM ANALYZE before the operation.
(See the sketch below for these preparation steps.)
This assumes no concurrent write access to the involved tables; otherwise you have to lock the tables exclusively, or this route may not be for you at all.
Some of the points listed in the related chapter of the manual Populating a Database may also be of use, depending on your setup.
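A minimal sketch of the preparation and cleanup steps listed above, assuming a secondary index tbl_foo_idx, a referencing table child_tbl with constraint child_tbl_fkey, and user triggers on tbl (all of these names are placeholders):
VACUUM ANALYZE tbl;                                    -- refresh statistics before the operation
DROP INDEX IF EXISTS tbl_foo_idx;                      -- drop secondary indexes, keep the one on id
ALTER TABLE tbl DISABLE TRIGGER USER;                  -- temporarily disable user triggers
ALTER TABLE child_tbl DROP CONSTRAINT child_tbl_fkey;  -- drop a FK that references tbl

-- ... run the big DELETE here ...

ALTER TABLE child_tbl ADD CONSTRAINT child_tbl_fkey
   FOREIGN KEY (tbl_id) REFERENCES tbl (id);           -- restore the FK
ALTER TABLE tbl ENABLE TRIGGER USER;                   -- re-enable triggers
CREATE INDEX tbl_foo_idx ON tbl (foo);                 -- recreate the dropped index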
If you delete large portions of the table and the rest fits into RAM, the fastest and easiest way would be this:
SET temp_buffers = '1000MB'; -- or whatever you can spare temporarily
CREATE TEMP TABLE tmp AS
SELECT t.*
FROM tbl t
LEFT JOIN del_list d USING (id)
WHERE d.id IS NULL; -- copy surviving rows into temporary table
TRUNCATE tbl; -- empty table - truncate is very fast for big tables
INSERT INTO tbl
SELECT * FROM tmp; -- insert back surviving rows.
This way you don't have to recreate views, foreign keys or other dependent objects.
Read about the temp_buffers setting in the manual. This method is fast as long as the table fits into memory, or at least most of it. Be aware that you can lose data if your server crashes in the middle of this operation. You can wrap all of it into a transaction to make it safer.
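For example, the whole swap wrapped into one transaction so it is all-or-nothing (same assumed table names as above):
BEGIN;
SET LOCAL temp_buffers = '1000MB';  -- only takes effect before the first use of temp tables in the session

CREATE TEMP TABLE tmp ON COMMIT DROP AS
SELECT t.*
FROM   tbl t
LEFT   JOIN del_list d USING (id)
WHERE  d.id IS NULL;

TRUNCATE tbl;

INSERT INTO tbl
SELECT * FROM tmp;

COMMIT;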
Run ANALYZE afterwards. Or VACUUM ANALYZE if you did not go the truncate route, or VACUUM FULL ANALYZE if you want to bring the table to minimum size. For big tables, consider the alternatives CLUSTER / pg_repack.
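For example (pick whichever fits; tbl and the index to cluster on are placeholders):
VACUUM ANALYZE tbl;          -- refresh statistics, mark dead rows as reusable
VACUUM FULL ANALYZE tbl;     -- or: rewrite the table at minimum size (exclusive lock)
CLUSTER tbl USING tbl_pkey;  -- or: rewrite the table in index order (exclusive lock), then ANALYZE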
For small tables, a simple DELETE instead of TRUNCATE is often faster:
DELETE FROM tbl t
USING del_list d
WHERE t.id = d.id;
Read the Notes section for TRUNCATE in the manual. In particular (as Pedro also pointed out in his comment):
TRUNCATE cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. [...]
And:
TRUNCATE will not fire any ON DELETE triggers that might exist for the tables.
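So if other tables reference tbl, either truncate them in the same command or let CASCADE do it, and run any ON DELETE cleanup logic yourself (child_tbl is a placeholder):
TRUNCATE tbl, child_tbl;  -- truncate the referencing table in the same command
TRUNCATE tbl CASCADE;     -- or: also truncates every table with a FK pointing to tbl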