DELETE SQL with correlated subquery for table with 42 million rows?

JohnB · Aug 7, 2010 · Viewed 24.9k times

I have a table cats with 42,795,120 rows.

Apparently this is a lot of rows. So when I do:

/* owner_cats is a many-to-many join table */
DELETE FROM cats
WHERE cats.id_cat IN (
    SELECT owner_cats.id_cat
    FROM owner_cats
    WHERE owner_cats.id_owner = 1)

the query times out :(

(edit: I need to increase my CommandTimeout value; the default is only 30 seconds)

I can't use TRUNCATE TABLE cats because I don't want to blow away cats from other owners.

I'm using SQL Server 2005 with "Recovery model" set to "Simple."

So, I thought about doing something like this (executing this SQL from an application btw):

DELETE TOP (25) PERCENT FROM cats
WHERE cats.id_cat IN (
    SELECT owner_cats.id_cat
    FROM owner_cats
    WHERE owner_cats.id_owner = 1)

DELETE TOP (50) PERCENT FROM cats
WHERE cats.id_cat IN (
    SELECT owner_cats.id_cat
    FROM owner_cats
    WHERE owner_cats.id_owner = 1)

DELETE FROM cats
WHERE cats.id_cat IN (
    SELECT owner_cats.id_cat
    FROM owner_cats
    WHERE owner_cats.id_owner = 1)
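Or maybe a fixed-size batch loop instead of shrinking percentages (a sketch; the 10,000-row batch size is just a guess I would have to tune). Each DELETE commits on its own, so under the Simple recovery model the log space can be reused between batches:

DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- delete one small batch of this owner's cats at a time
    DELETE TOP (10000) FROM cats
    WHERE cats.id_cat IN (
        SELECT owner_cats.id_cat
        FROM owner_cats
        WHERE owner_cats.id_owner = 1);

    -- @@ROWCOUNT is 0 once no matching rows remain
    SET @rows = @@ROWCOUNT;
END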

My question is: what is the threshold on the number of rows I can DELETE in SQL Server 2005?

Or, if my approach is not optimal, please suggest a better approach. Thanks.

This post didn't help me enough:

EDIT (8/6/2010):

Okay, I just realized after reading the above link again that I did not have indexes on these tables. Also, some of you have already pointed out that issue in the comments below. Keep in mind this is a fictitious schema, so even id_cat is not a PK, because in my real-life schema it's not a unique field.

I will put indexes on:

  1. cats.id_cat
  2. owner_cats.id_cat
  3. owner_cats.id_owner
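Concretely, something like this (the index names are my own invention):

CREATE INDEX IX_cats_id_cat ON cats (id_cat);
CREATE INDEX IX_owner_cats_id_cat ON owner_cats (id_cat);
CREATE INDEX IX_owner_cats_id_owner ON owner_cats (id_owner);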

I guess I'm still getting the hang of this data warehousing stuff, and obviously I need indexes on all the JOIN fields, right?

However, this batch load process takes me hours. I'm already doing it with SqlBulkCopy (in chunks, not all 42 million at once). I have some indexes and PKs, and I read the following posts, which confirm my theory that the indexes are slowing down even a bulk copy:

So I'm going to DROP my indexes before the copy and then re-CREATE them when it's done.
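In other words, something along these lines (index name again mine):

-- drop before the bulk load
DROP INDEX IX_cats_id_cat ON cats;

-- ... run the SqlBulkCopy load here ...

-- recreate afterwards
CREATE INDEX IX_cats_id_cat ON cats (id_cat);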

Because of the long load times, it's going to take me a while to test these suggestions. I'll report back with the results.

UPDATE (8/7/2010):

Tom suggested:

DELETE c
FROM cats c
WHERE EXISTS (SELECT 1
              FROM owner_cats o
              WHERE o.id_cat = c.id_cat
                AND o.id_owner = 1)

And still with no indexes, for 42 million rows it took 13:21 min:sec versus 22:08 the way described above. However, for 13 million rows, it took 2:13 versus 2:10 my old way. It's a neat idea, but I still need to use indexes!

Update (8/8/2010):

Something is terribly wrong! Now with the indexes on, my first delete query above took 1:09 hrs:min (yes, an hour!) versus 22:08 min:sec for 42 mil rows, and 13:21 min:sec versus 2:10 min:sec for 13 mil rows. I'm going to try Tom's query with the indexes now, but this is heading in the wrong direction. Please help.

Update (8/9/2010):

Tom's delete took 1:06 hrs:min for 42 mil rows and 10:50 min:sec for 13 mil rows with indexes, versus 13:21 min:sec and 2:13 min:sec respectively without. Deletes on my database take an order of magnitude longer when I use indexes! I think I know why: my database .mdf and .ldf grew from 3.5 GB to 40.6 GB during the first (42 mil) delete! What am I doing wrong?

Update (8/10/2010):

For lack of any other options, I have come up with what I feel is a lackluster solution (hopefully temporary):

  1. Increase the command timeout to 1 hour (CommandTimeout = 3600; the default was only 30 sec)
  2. Use Tom's query, DELETE ... WHERE EXISTS (SELECT 1 ...), because it performed a little faster
  3. DROP all indexes and PKs before running delete statement (???)
  4. Run DELETE statement
  5. CREATE all indexes and PKs

Seems crazy, but at least it's faster than using TRUNCATE and starting my load over from the beginning with the first owner_id, because one of my owner_ids takes 2:30 hrs:min to load, versus 17:22 min:sec for the delete process I just described on 42 mil rows. (Note: if my load process throws an exception, I start over for that owner_id, but I don't want to blow away previous owner_ids, so I don't want to TRUNCATE the owner_cats table, which is why I'm trying to use DELETE.)

Any more help would still be appreciated :)

Answer

Dave Markle · Aug 7, 2010

There is no practical threshold. It depends on what your command timeout is set to on your connection.

Keep in mind that the time it takes to delete all of these rows is contingent upon:

  • The time it takes to find the rows of interest
  • The time it takes to log the transaction in the transaction log
  • The time it takes to delete the index entries of interest
  • The time it takes to delete the actual rows of interest
  • The time it takes to wait for other processes to stop using the table so you can acquire what in this case will most likely be an exclusive table lock

The last point may often be the most significant. Run sp_who2 in another query window to make sure there isn't lock contention going on, preventing your command from executing.
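For example (the 'active' argument just filters out idle sessions):

-- run this alongside the DELETE; a non-blank BlkBy column
-- means another session is blocking yours
EXEC sp_who2 'active';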

Improperly configured SQL Servers will do poorly at this type of query. Transaction logs which are too small and/or share the same disks as the data files will often incur severe performance penalties when working with large numbers of rows.

As for a solution, well, like all things, it depends. Is this something you intend to be doing often? Depending on how many rows you have left, the fastest way might be to rebuild the table under another name, then rename it and recreate its constraints, all inside a transaction. If this is just an ad-hoc thing, make sure your ADO CommandTimeout is set high enough, and just bear the cost of this big delete.
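A sketch of that rebuild-and-rename idea, assuming you keep the rows that do not belong to the owner being purged (the new table and index names here are illustrative):

BEGIN TRANSACTION;

-- copy the rows you want to keep into a new table
SELECT c.*
INTO cats_keep
FROM cats c
WHERE NOT EXISTS (SELECT 1
                  FROM owner_cats o
                  WHERE o.id_cat = c.id_cat
                    AND o.id_owner = 1);

-- swap the tables
DROP TABLE cats;
EXEC sp_rename 'cats_keep', 'cats';

-- recreate indexes and constraints, e.g.:
CREATE INDEX IX_cats_id_cat ON cats (id_cat);

COMMIT TRANSACTION;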