Hi,

It is a question for SQL Server 2008 and .NET code.

I am wondering what you think, in terms of performance and design, of the following options for inserting a lot of data (>250,000 rows, 5 “normal” columns) into a table inside a transaction.

  • a. An INSERT INTO statement wrapped in a transaction in the C# code.

Or

  • b. An INSERT INTO statement in a stored procedure, executed row by row from a while loop in the C# code, with the loop wrapped in a transaction.

Or

  • c. Serialize my object in C# and send the XML to a stored procedure. The stored procedure reads the XML in a while loop, and the loop is wrapped in a transaction.
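For concreteness, option (a) might look like this sketch (the connection string, table, columns, and the `rows` collection are all assumptions, not real code from my project):

```csharp
// Sketch of option (a): row-by-row INSERTs inside a single SqlTransaction.
// "MyTable", its columns, and the "rows" source are hypothetical.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO MyTable (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
        {
            cmd.Parameters.Add("@c1", SqlDbType.Int);
            cmd.Parameters.Add("@c2", SqlDbType.NVarChar, 50);
            foreach (var row in rows) // 250k iterations => 250k round trips
            {
                cmd.Parameters["@c1"].Value = row.Id;
                cmd.Parameters["@c2"].Value = row.Name;
                cmd.ExecuteNonQuery();
            }
        }
        tx.Commit();
    }
}
```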

Also, the network between the client and the server is not very good...

Thanks in advance.

+2  A: 

As always you should profile your code but I'd recommend you look into using the SqlBulkCopy class.
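A minimal sketch of how that might look (the destination table name, batch size, and `sourceTable` are assumptions for illustration):

```csharp
// Sketch: bulk-insert a DataTable with SqlBulkCopy inside a transaction.
// "dbo.MyTable" and "sourceTable" are hypothetical.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
    {
        bulk.DestinationTableName = "dbo.MyTable";
        bulk.BatchSize = 5000;           // send rows in batches over the slow network
        bulk.BulkCopyTimeout = 0;        // no timeout for the large copy
        bulk.WriteToServer(sourceTable); // sourceTable: a DataTable with matching columns
        tx.Commit();
    }
}
```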

Rodrick Chapman
+1 this is the fastest way to insert many rows of data; it is the .NET way of executing the bcp command or the BULK INSERT statement in SQL
Chris Diver
Yes, you're right, I didn't think of SqlBulkCopy. I'm not in the habit of using it.
Daniel Luxembourger
A: 

Since you already take care of batch commit, the next issue is going to be the round trips back and forth between client and server. The fewer requests, the better. So the first two options would be pretty bad, since each sends one row per request and has to round-trip 250k times. Option (c) is viable, especially if the XML processing is set oriented (project the XML into a table via the .nodes() method and insert the entire projected table in one statement). See also How to I serialize a large graph of .NET object into a SQL Server BLOB without creating a large buffer for a trick on how to send a large XML to SQL Server without huge memory allocations on the client.
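A sketch of the set-oriented XML approach: instead of looping row by row in the stored procedure, project the XML into rows with .nodes() and insert them all in one statement. Here it is shown as a single batch sent from C#; the table, columns, and XML shape are assumptions:

```csharp
// Sketch: send the whole XML once and let SQL Server shred it set-wise.
// One round trip instead of 250k. "MyTable" and the XML layout are hypothetical.
const string sql = @"
INSERT INTO MyTable (Col1, Col2)
SELECT x.r.value('(Col1)[1]', 'int'),
       x.r.value('(Col2)[1]', 'nvarchar(50)')
FROM @xml.nodes('/rows/row') AS x(r);";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    // In real code this XML would be the serialized object graph.
    cmd.Parameters.Add("@xml", SqlDbType.Xml).Value =
        "<rows><row><Col1>1</Col1><Col2>first</Col2></row></rows>";
    conn.Open();
    cmd.ExecuteNonQuery(); // the entire projected table is inserted in one statement
}
```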

The other option you did not mention, but which was already pointed out: use SqlBulkCopy. This would be the fastest way, for many reasons. The only gotcha is that you must ensure a way to present the rows to be bulk copied using streaming semantics (yield return from an IEnumerable), to avoid large memory allocations in the client.
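A sketch of the streaming idea: produce the rows lazily with yield return so the client never materializes all 250k rows at once. The row shape is hypothetical, and since SqlBulkCopy.WriteToServer takes a DataTable or an IDataReader, in practice you would wrap this enumerable in a thin IDataReader adapter (omitted here for brevity):

```csharp
// Sketch: stream rows lazily with yield return. Only one row is
// materialized at a time; the 250k-row set never lives in memory.
// The column values are hypothetical placeholders.
static IEnumerable<object[]> RowSource()
{
    for (int i = 0; i < 250000; i++)
    {
        // Each row is produced on demand as the bulk copy consumes it.
        yield return new object[] { i, "name" + i, DateTime.UtcNow };
    }
}
```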

Remus Rusanu