We are importing a CSV file with CsvReader and then using SqlBulkCopy to insert that data into SQL Server. This code works for us and is very simple, but we are wondering if there is a faster method (some of our files have 100,000 rows) that would not get too complex.
SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
SqlTransaction transaction = conn.BeginTransaction();
try
{
    using (TextReader reader = File.OpenText(sourceFileLocation))
    {
        // CsvReader implements IDataReader, so SqlBulkCopy can stream rows from it directly.
        CsvReader csv = new CsvReader(reader, true);
        SqlBulkCopy copy = new SqlBulkCopy(conn, SqlBulkCopyOptions.KeepIdentity, transaction);
        copy.DestinationTableName = reportType.ToString();
        copy.WriteToServer(csv);
        transaction.Commit();
    }
}
catch (Exception ex)
{
    transaction.Rollback();
    success = false;
    SendFileImportErrorEmail(Path.GetFileName(sourceFileLocation), ex.Message);
}
finally
{
    conn.Close();
}
Instead of building your own tool for this, have a look at the SQL Server Import and Export Wizard / SSIS. It can target flat files and SQL Server databases directly, and the resulting .dtsx package can also be run from the command line or as a job through SQL Server Agent.
I am suggesting it because the wizard is optimized for parallelism and works very well on large flat files.
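If you later want to trigger that package from your existing C# application rather than from the Agent, one option is to shell out to the dtexec utility (dtexec /F runs a package saved to the file system). A minimal sketch, assuming dtexec.exe is on the PATH and using a hypothetical package path:

using System.Diagnostics;

// Run the saved SSIS package with dtexec; the package path below is a placeholder
// for wherever the wizard's output was saved.
using (Process dtexec = Process.Start("dtexec", @"/F ""C:\Packages\ImportReport.dtsx"""))
{
    dtexec.WaitForExit();
    bool packageSucceeded = dtexec.ExitCode == 0; // dtexec reports 0 when the package ran successfully
}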