I need to count sequential timeout exceptions from SqlBulkCopy. To test this, I use an external app to start a transaction & lock up the target table.
Only on the first call does SqlBulkCopy throw a timeout exception when expected. We've tried using an external connection & transaction, as well as using a connection string and an internal transaction. With the external connection & transaction, the infinite wait was never in opening the connection or in beginning or committing the transaction, but always at .WriteToServer().
Is there some approach whereby SqlBulkCopy.WriteToServer() will reliably throw a timeout exception when it has reached its .BulkCopyTimeout limit?
public void BulkCopy(string connectionString, DataTable table, int bulkTimeout)
{
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
        connectionString,
        SqlBulkCopyOptions.UseInternalTransaction))
    {
        bulkCopy.BulkCopyTimeout = bulkTimeout; // e.g. 120 sec.
        //... fill with data, map columns...
        bulkCopy.WriteToServer(table);
        // ^^^^ waits indefinitely; doesn't throw until *after*
        // the lock is released.
    }
}
I prefer to let exceptions bubble up rather than handle them in the scope of the using block, but I can always rethrow. Thanks much for any insight.
Update 1:
Still no resolution, but I've discovered some interesting behavior: a normal SqlCommand throws a timeout exception as expected under the same lock that makes SqlBulkCopy.WriteToServer hang indefinitely.
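To illustrate, something along these lines is what I mean by a "normal SqlCommand" probe; the table name TargetTable and the 30-second timeout are placeholders, and it assumes the external lock blocks plain reads as well:

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
    "SELECT TOP 1 1 FROM TargetTable", connection))
{
    command.CommandTimeout = 30; // seconds; placeholder value
    connection.Open();
    // While the table is locked, this throws a SqlException whose message
    // starts with "Timeout expired" -- unlike the hanging WriteToServer call.
    command.ExecuteScalar();
}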
We've tried several approaches to get SqlBulkCopy.WriteToServer to consistently throw timeouts when expected, and none of them has worked.
For now, as a workaround, I'm alternating between (a) wrapping the WriteToServer call in an asynchronous wrapper so I can time it myself, and (b) calling WriteToServer only once, then after a timeout waiting until a regular SqlCommand succeeds before trying WriteToServer again. With these approaches, I'm at least able to stay in control of the execution flow.
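A rough sketch of workaround (a), assuming .NET 4+ for the Task API; the method name BulkCopyWithManualTimeout is made up, and the abandoned call keeps running on its worker thread until SQL Server releases the lock:

public void BulkCopyWithManualTimeout(string connectionString, DataTable table, int bulkTimeout)
{
    // Requires System.Data, System.Data.SqlClient and System.Threading.Tasks.
    SqlBulkCopy bulkCopy = new SqlBulkCopy(
        connectionString,
        SqlBulkCopyOptions.UseInternalTransaction);
    bulkCopy.BulkCopyTimeout = bulkTimeout;
    bulkCopy.DestinationTableName = table.TableName; // assumed mapping
    //... map columns as before ...

    Task writeTask = Task.Factory.StartNew(() => bulkCopy.WriteToServer(table));

    // Close the copier and observe any failure whenever the worker finishes,
    // even if we stop waiting below; otherwise a faulted task's exception
    // could go unobserved.
    writeTask.ContinueWith(t => { var ignored = t.Exception; bulkCopy.Close(); });

    // Enforce the timeout ourselves instead of relying on BulkCopyTimeout alone.
    if (!writeTask.Wait(TimeSpan.FromSeconds(bulkTimeout)))
    {
        throw new TimeoutException("WriteToServer exceeded " + bulkTimeout + " seconds.");
    }
}

Note that the abandoned WriteToServer call can still complete (and commit) once the lock is finally released, so any retry logic needs to allow for a possible duplicate load.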
Have you tried passing the SqlBulkCopyOptions.TableLock option to SqlBulkCopy? According to the documentation, that option will:
Obtain a bulk update lock for the duration of the bulk copy operation.
So if another process is locking the table, the bulk update lock could not be obtained and, in theory, the operation would time out reliably.
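For illustration, a minimal sketch of combining that flag with the internal-transaction option from the question (table name and timeout are placeholders):

using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
    connectionString,
    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction))
{
    bulkCopy.DestinationTableName = "TargetTable"; // placeholder
    bulkCopy.BulkCopyTimeout = 120;                // e.g. 120 sec.
    bulkCopy.WriteToServer(table);
}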
Update:
I set up my own test harness and can't reproduce. To lock the table, I started a transaction in SSMS doing a SELECT * FROM TargetTable WITH (HOLDLOCK). I used the same BulkCopy method you included in the question, using internal transactions, with a bulk load timeout of 30 seconds. Each attempt to do the bulk copy times out as expected after 30 seconds, then succeeds when I roll back the SSMS transaction.
I was using SQL Server 2008 Express, .NET 3.5.
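If you'd rather script the blocking session than use SSMS, a rough C# equivalent of that lock-holder would be something like this (connection string and table name are placeholders):

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    using (SqlCommand command = new SqlCommand(
        "SELECT * FROM TargetTable WITH (HOLDLOCK)", connection, transaction))
    {
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume the rows so the locks are taken */ }
        }
        // HOLDLOCK keeps the shared locks until the transaction ends, so the
        // bulk copy stays blocked until we roll back.
        Console.WriteLine("Table locked; press Enter to release.");
        Console.ReadLine();
        transaction.Rollback();
    }
}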
Could it be that after the first attempt the bulk load timeout isn't being passed in correctly, i.e. that it's somehow being set to "indefinite" (a BulkCopyTimeout of 0)?
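One quick way to rule that out would be to trace the effective value right before each call, e.g.:

// Hypothetical trace just before the WriteToServer call:
System.Diagnostics.Debug.WriteLine(
    "BulkCopyTimeout = " + bulkCopy.BulkCopyTimeout + " sec");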
Update 2:
I also switched on Multiple Active Result Sets (MARS) support in the connection string; it still consistently times out for me each time.
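For reference, enabling it just means adding the MultipleActiveResultSets keyword to the connection string, e.g. (server and database names are placeholders):

// Placeholder server/database names; the MARS keyword is the relevant part.
string connectionString =
    @"Data Source=.\SQLEXPRESS;Initial Catalog=TestDb;" +
    "Integrated Security=True;MultipleActiveResultSets=True";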