Problems with table-valued parameter performance

Keith · May 20, 2011

I don't know whether this is an issue with how I'm using them or with Microsoft's implementation, but SQL Server 2008 table-valued parameters are painfully slow.

Generally, if I need to use a TVP it's because I've got lots of records; currently they appear to be unusably slow for anything more than a handful of records.

I'm calling them from .NET like this:

// get the data
DataTable data = GetData();

com.CommandText = "sprocName";
com.CommandType = CommandType.StoredProcedure;

// create the table-valued parameter
var tvp = com.Parameters.AddWithValue("data", data);
tvp.SqlDbType = SqlDbType.Structured;

com.ExecuteNonQuery();

I ran profiler to see why, and the actual SQL statement is something like this:

declare @data table ...

insert into @data ( ... fields ... ) values ( ... values ... )
-- for each row
insert into @data ( ... fields ... ) values ( ... values ... )

exec sprocName @data

That's a really slow way to do it though. It would be much quicker if it did this instead:

insert into @data ( ... fields ... ) 
values ( ... values ... ),
       ( ... values ... ),
       -- for each row
       ( ... values ... )

I'm not sure why it isn't using the newer, much quicker syntax, or even whatever SqlBulkCopy does under the hood.

The multi-row VALUES syntax was added in SQL 2008, but then so were TVPs (I think).

Is there some option to make it do this? Or something that I'm missing?

Answer

Solomon Rutzky · Feb 24, 2015

If TVPs are "noticeably slower" than the other options, then most likely you are not implementing them correctly.

  1. You should not be using a DataTable unless your application has a use for it outside of sending the values to the TVP. Using the IEnumerable<SqlDataRecord> interface instead is faster and uses less memory, as you are not duplicating the collection in memory only to send it to the DB.
  2. You should not use AddWithValue for the SqlParameter, though this is likely not a performance issue. Still, it should be:

    SqlParameter tvp = com.Parameters.Add("data", SqlDbType.Structured);
    tvp.Value = MethodThatReturnsIEnumerable<SqlDataRecord>(MyCollection);
    
  3. TVPs are table variables and as such do not maintain statistics, meaning they report having only 1 row to the Query Optimizer. So, in your proc:
    • Use statement-level recompile on any queries using the TVP for anything other than a simple SELECT: OPTION (RECOMPILE)
    • Create a local temporary table (i.e. single #) and copy the contents of the TVP into the temp table
    • You could try adding a clustered primary key to the User-Defined Table Type
    • If using SQL Server 2014 or newer, you can try making use of In-Memory OLTP / memory-optimized tables. Please see: Faster temp table and table variable by using memory optimization
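Points #1 and #2 above can be sketched together. This is a minimal, illustrative example, assuming a user-defined table type with `Id INT` and `Name NVARCHAR(100)` columns (the type name `dbo.MyTableType`, the helper names, and the source collection shape are all hypothetical, not from the original post):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class TvpExample
{
    // Streams rows one at a time via yield return; the collection is
    // never duplicated into a DataTable.
    private static IEnumerable<SqlDataRecord> GetRecords(
        IEnumerable<KeyValuePair<int, string>> source)
    {
        // Column definitions must match the user-defined table type.
        var metaData = new SqlMetaData[]
        {
            new SqlMetaData("Id", SqlDbType.Int),
            new SqlMetaData("Name", SqlDbType.NVarChar, 100)
        };

        // A single record instance can be reused for each row.
        var record = new SqlDataRecord(metaData);
        foreach (var item in source)
        {
            record.SetInt32(0, item.Key);
            record.SetString(1, item.Value);
            yield return record; // sent to SQL Server as it is enumerated
        }
    }

    public static void Execute(SqlConnection conn,
        IEnumerable<KeyValuePair<int, string>> source)
    {
        using (var com = conn.CreateCommand())
        {
            com.CommandText = "sprocName";
            com.CommandType = CommandType.StoredProcedure;

            // Add the parameter with an explicit type, per point #2.
            SqlParameter tvp = com.Parameters.Add("@data", SqlDbType.Structured);
            tvp.TypeName = "dbo.MyTableType"; // hypothetical UDTT name
            tvp.Value = GetRecords(source);

            com.ExecuteNonQuery();
        }
    }
}
```

Because `GetRecords` is an iterator, rows are fed to SQL Server as the provider enumerates them, rather than being materialized up front.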

Regarding why you are seeing:

insert into @data ( ... fields ... ) values ( ... values ... )
-- for each row
insert into @data ( ... fields ... ) values ( ... values ... )

instead of:

insert into @data ( ... fields ... ) 
values ( ... values ... ),
       ( ... values ... ),

IF that is actually what is happening, then:

  • If the inserts are being done within a transaction, then there is no real performance difference
  • The newer value-list syntax (i.e. VALUES (row1), (row2), (row3)) is limited to 1000 rows and hence not a viable option for TVPs, which have no such limit. HOWEVER, this is not likely the reason that individual inserts are being used, given that there is no limit when doing INSERT INTO @data (fields) SELECT tab.[col] FROM (VALUES (), (), ...) tab([col]), which I documented here: Maximum Number of Rows for the Table Value Constructor. Instead...
  • The reason is most likely that doing individual inserts allows for streaming the values from the app code into SQL Server:
    1. when using an iterator (i.e. the IEnumerable<SqlDataRecord> noted in #1 above), the app code sends each row as it is returned from the method, whereas
    2. constructing the VALUES (), (), ... list, even with the INSERT INTO ... SELECT FROM (VALUES ...) approach (which is not limited to 1000 rows), would still require building the entire VALUES string before sending any of the data to SQL Server. With a lot of data, that super-long string takes longer to construct and consumes far more memory while it is being built.
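For reference, a minimal sketch of the derived-table form mentioned above, which sidesteps the 1000-row cap on a direct VALUES list (the table and column names here are illustrative only):

```sql
DECLARE @data TABLE (ID INT, Name NVARCHAR(100));

-- INSERT ... SELECT from a VALUES-based derived table is not subject
-- to the 1000-row limit that applies to INSERT ... VALUES directly.
INSERT INTO @data (ID, Name)
SELECT tab.[ID], tab.[Name]
FROM (VALUES (1, N'first'),
             (2, N'second'),
             (3, N'third')
     ) tab ([ID], [Name]);
```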

Please also see this whitepaper from the SQL Server Customer Advisory Team: Maximizing Throughput with TVP