views: 728
answers: 6
I have a program in which I need to run multiple insert statements (dynamically generated) against an MS SQL table. Right now, I do (pseudo-code):

Loop
    Generate Insert Statement String
    SQLCommand.Text = Generated Insert Statement String
    SQLCommand.Execute()
End Loop

Close Connection
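
(In actual ADO.NET/C# terms, the loop looks roughly like the following; the connection string, record collection, and GenerateInsertStatement helper are just placeholders for what my program does.)

// Rough C# equivalent of the pseudo-code above (names are placeholders)
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (var record in records)
    {
        string sql = GenerateInsertStatement(record); // builds one INSERT ... VALUES ...
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery(); // one round trip per insert
        }
    }
} // connection closed here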

Is it better, performance-wise, to simply construct one big string of inserts separated by semicolons and run only one SQLCommand.Execute() call? Or does it not make a difference?

Thanks!

+3  A: 

You might find this technique useful; it significantly cuts down on the number of statements processed.

Jonathan Fingland
A: 

It'll be a lot faster to batch them up; otherwise SQL Server has to look at execution plans, etc., for every single statement. Depending on how big the loop and the statements are, I might end up sending N records at a time and then letting it come back, just to give yourself some breathing room. The other thing you might want to consider (I'm coming at this as if it's throw-away code, a one-time thing) is just doing a bulk load. SSIS is also an option...
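
For example, a rough sketch of what I mean by sending N at a time (the batch size, StringBuilder usage, and helper names are made up, not the asker's code):

// Sketch: send the generated INSERTs in batches of 50 statements per round trip
const int BatchSize = 50;
var batch = new System.Text.StringBuilder();
int count = 0;
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (var record in records)
    {
        batch.AppendLine(GenerateInsertStatement(record) + ";");
        if (++count % BatchSize == 0)
        {
            using (var cmd = new SqlCommand(batch.ToString(), conn))
                cmd.ExecuteNonQuery();
            batch.Length = 0; // reset for the next batch
        }
    }
    if (batch.Length > 0) // flush the final partial batch
    {
        using (var cmd = new SqlCommand(batch.ToString(), conn))
            cmd.ExecuteNonQuery();
    }
}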

joseph
A: 

On Postgres you would use COPY, which allows bulk inserts in a CSV-like format. I don't know whether SQL Server has something similar.

Another way would be to use a stored procedure and just pass a long list of data to some back-end insert loop.

SpliFF
It does; it's a program called bcp.
Jacob
+1  A: 

If you have the choice and ability to do so at this stage, you may want to upgrade to SQL Server 2008. It has a lot of added benefits, including an addition to T-SQL for doing multiple inserts in one statement.

Jacob
Does it? UNION and bulk copy have been around forever
gbn
It adds multiple inserts directly into T-SQL. INSERT INTO Blah (foo, bar) VALUES (1, 2), (3, 4). You can do things with UNIONs and bcp, but that was a handy addition that reduces verbosity.
Jacob
Oops! I'd forgotten this... thanks
gbn
A: 

Depends on just how many your multiple inserts are. But you should definitely also look at SqlBulkCopy - very fast, very handy.
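
A minimal sketch of SqlBulkCopy (the destination table name and the DataTable holding the rows are assumptions, not anything from the question):

// Sketch: bulk-load rows with SqlBulkCopy instead of individual INSERTs
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.MyTable"; // assumed table name
        bulk.BatchSize = 1000;                     // rows sent to the server per batch
        bulk.WriteToServer(table);                 // "table" is a DataTable holding the rows
    }
}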

Marc

marc_s
+2  A: 

You need to start a transaction before you begin your inserts. Then, when you have sent all your inserts to the server and finished the loop, you should commit. This will save you a lot of writes on the database!

See more here: http://msdn.microsoft.com/en-us/library/5ha4240h.aspx
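
A minimal sketch of what I mean, reusing the question's GenerateInsertStatement placeholder:

// Sketch: wrap the whole loop in one transaction so the commit happens once at the end
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        foreach (var record in records)
        {
            using (var cmd = new SqlCommand(GenerateInsertStatement(record), conn, tx))
            {
                cmd.ExecuteNonQuery();
            }
        }
        tx.Commit(); // one commit instead of one per insert; disposing without Commit rolls back
    }
}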

Martin Olsen
Hmm. That makes sense.
Saobi
No, it doesn't. You start an explicit transaction, then commit or roll back explicitly. You don't need to change this setting.
gbn
Hmm.. I didn't mean to disable automatic commits globally, just for the current transaction batch. I'll change the answer!
Martin Olsen
What's the advantage of committing everything at the end, instead of appending all the insert statements into one statement separated by semicolons and executing that statement once, directly?
Saobi
For one thing, you can't (easily) handle bound parameters. You ARE binding your parameters, right? The other is client memory usage (the length of the statement times the number of statements to send). Finally, if the server sends an error back, you can handle it right there and possibly only throw a single statement away instead of the whole batch. There are probably lots of other reasons...
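
Something along these lines, for example (conn and tx as in the transaction sketch above; table and column names are invented):

// Sketch: one parameterized command reused inside the loop, inside the transaction
using (var cmd = new SqlCommand(
    "INSERT INTO dbo.MyTable (Foo, Bar) VALUES (@foo, @bar)", conn, tx))
{
    cmd.Parameters.Add("@foo", SqlDbType.Int);
    cmd.Parameters.Add("@bar", SqlDbType.NVarChar, 50);
    foreach (var record in records)
    {
        cmd.Parameters["@foo"].Value = record.Foo;
        cmd.Parameters["@bar"].Value = record.Bar;
        cmd.ExecuteNonQuery();
    }
}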
Martin Olsen