In SQL Profiler you can see that very simple updates to a table by primary key take about 10-30 ms each. On about every 10th update the Writes column shows 1; on all the others it shows 0. This must mean that about every 10th update statement still requires disk I/O. I wonder why that is. Would it not be more efficient to queue up all I/O until the transaction commits?

+4  A: 

And where would you wish to store this kind of in-memory update once you update a table of size, let's say, 10 000 000 records?

Where would you wish to store the transaction log?

As plentiful as RAM seems, we cannot assume it is endless...

astander
unless you have one of those fabled Turing machines with an endless tape at home :-)
marc_s
@marc_s: man... I have one... such a pain when I have to clean under all that tape.
Stefano Borini
+1  A: 

Also, waiting with I/O until the end of the transaction might not be the optimal plan even with endless RAM, because the transaction would then have to wait at commit time until all of the I/O operations finished.

Apart from that, writing to disk during long-running transactions is actually desirable (as long as it is done in such a way that atomicity is not broken).

Unreason
Why not buffer a few megabytes instead of only 10 writes?
usr
10 writes could very well be a few megabytes. I don't know the details of MS SQL Server buffering, but normally limits on buffering are a consequence of http://en.wikipedia.org/wiki/ACID compliance.
Unreason
A: 

That's the transaction log.

Alexander
A: 

I researched a bit myself: the number of I/Os depends on whether the updates arrive in huge batches or are sent as individual commands. It seems that SQL Server can execute huge numbers of updates (10,000/s) in big batches, but only about 300/s when they are sent as individual commands.

usr
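
The batch-versus-individual effect usr describes can be reproduced in miniature with Python's standard-library sqlite3 module (a stand-in assumption here, since the thread is about SQL Server, and the table and counts are made up for illustration): committing after every UPDATE forces a log flush per statement, while wrapping the whole batch in one transaction pays for a single flush at commit.

```python
import os
import sqlite3
import tempfile
import time

# Build a small table in a temporary on-disk database (hypothetical
# demo schema; an on-disk file is needed so commits actually flush).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, 0)",
                 [(i,) for i in range(500)])
conn.commit()

# One commit per UPDATE: every statement pays for its own log flush,
# analogous to sending individual commands.
start = time.perf_counter()
for i in range(500):
    conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
    conn.commit()
individual = time.perf_counter() - start

# All UPDATEs in one transaction: a single flush at the end,
# analogous to one big batch.
start = time.perf_counter()
for i in range(500):
    conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
conn.commit()
batched = time.perf_counter() - start

print(f"individual commits: {individual:.3f}s, one batch: {batched:.3f}s")
```

The exact ratio depends on the disk and the engine's flush strategy, but the single-transaction run is typically far faster because durability only has to be guaranteed once.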