I'm running a process that does a lot of updates (> 100,000) to a table. I have the choice between putting all the updates in a single transaction or committing a transaction every 1,000 updates or so.

Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best transaction size for memory and speed efficiency.
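For concreteness, here is a minimal sketch of the batched-commit option, using Python's sqlite3 purely as an illustration (the table name, column name, and batch size are placeholders rather than my actual schema):

    import sqlite3

    BATCH_SIZE = 1000  # commit after this many updates (placeholder value)

    def apply_updates(db_path, updates):
        # `updates` is an iterable of (new_value, row_id) pairs; the table and
        # column names below stand in for whatever schema is actually in use.
        conn = sqlite3.connect(db_path)
        cur = conn.cursor()
        pending = 0
        for new_value, row_id in updates:
            cur.execute(
                "UPDATE my_table SET my_column = ? WHERE id = ?",
                (new_value, row_id),
            )
            pending += 1
            if pending >= BATCH_SIZE:
                conn.commit()   # end the current transaction; the next update starts a new one
                pending = 0
        conn.commit()           # flush any updates left in the final partial batch
        conn.close()

The single-transaction alternative would simply drop the inner commit and commit once at the end.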

+1  A: 

Ignoring the case of a transaction failing, splitting the work into batches will use less memory.

It may add some overhead to the total time taken to perform the entire update, but it puts less pressure on anything else running concurrently.

Mitch Wheat
In very unscientific tests, splitting queries into batches of roughly 1 or 2 kilobytes seems to give reasonable performance.
Joe