views: 2042
answers: 11

Hello,

Say I have a simple table that has the following fields:

  1. ID: int, autoincremental (identity), primary key
  2. Name: varchar(50), unique, has unique index
  3. Tag: int

I never use the ID field for lookup, because my application is always based on working with the Name field.

I need to change the Tag value from time to time. I'm using the following trivial SQL code:

UPDATE Table SET Tag = XX WHERE Name = YY;

I wondered if anyone knows whether the above is always faster than:

DELETE FROM Table WHERE Name = YY;
INSERT INTO Table (Name, Tag) VALUES (YY, XX);

Again - I know that in the second example the ID is changed, but it does not matter for my application.

Thanks

+3  A: 

One command against the row should always be faster than two against that same row, so the UPDATE alone would be better.

EDIT: set up the table:

create table YourTable
(YourName  varchar(50)  primary key
,Tag int
)

insert into YourTable values ('first value',1)

run this, which takes 1 second on my system (SQL Server 2005):

SET NOCOUNT ON
declare @x int
declare @y int
select @x=0,@y=0
UPDATE YourTable set YourName='new name' --seed the value so the WHERE below matches
while @x<10000
begin
    Set @x=@x+1
    update YourTable set YourName='new name' where YourName='new name'
    SET @y=@y+@@ROWCOUNT --tally the rows updated
end
print @y

run this, which took 2 seconds on my system:

SET NOCOUNT ON
declare @x int
declare @y int
select @x=0,@y=0
while @x<10000
begin
    Set @x=@x+1
    DELETE YourTable WHERE YourName='new name'
    insert into YourTable values ('new name',1)
    SET @y=@y+@@ROWCOUNT --tally the rows inserted
end
print @y
KM
Are you basing this on any concrete data?
Roee Adler
@Rax Olgud, how do you answer that? You haven't even said what database you are using. You asked a conceptual question, yet want concrete data. If you want actual data you will need to write a while loop (on your system) that updates the row 1000 times, write another loop that deletes/inserts it 1000 times, and see which is faster.
KM
@Rax Olgud, there is some overhead in removing and creating an index value and checking any constraints. If you are just updating a data column, you avoid all of that overhead.
KM
+1 Thank you for the test!
Roee Adler
+2  A: 

Obviously, the answer varies based on what database you are using, but an UPDATE can always be implemented faster than a DELETE+INSERT. Since in-memory ops are mostly trivial anyway, given a hard-drive-based database an UPDATE can change a field in place on the disk, while a DELETE must remove a row (leaving an empty space) and an INSERT must write a new row, perhaps at the end of the table (again, it's all in the implementation).

The other, minor, issue is that when you UPDATE a single value in a single row, the other columns in that row remain the same. If you DELETE and then INSERT, you run the risk of forgetting about the other columns and consequently losing their data (in which case you would have to do a SELECT before your DELETE to temporarily store the other columns before writing them back with the INSERT).
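For illustration, a sketch of the difference (the Customer table with its CreatedDate and Notes columns is invented here to make the point):

-- UPDATE touches only the column you name; all other columns are preserved
UPDATE Customer SET Tag = 42 WHERE Name = 'acme';

-- DELETE+INSERT must re-supply every column, so the old values
-- have to be captured first or they are lost
DECLARE @CreatedDate datetime, @Notes varchar(200);
SELECT @CreatedDate = CreatedDate, @Notes = Notes
FROM Customer WHERE Name = 'acme';

DELETE FROM Customer WHERE Name = 'acme';
INSERT INTO Customer (Name, Tag, CreatedDate, Notes)
VALUES ('acme', 42, @CreatedDate, @Notes);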

erjiang
I'm not sure I agree with you regarding the first point, especially when using variable-length string types. Updating those may indeed require HD writes in "new places".
Roee Adler
Also, on DELETE+INSERT, indexes have to be updated twice.
Adam Byrtek
@Rax: keyword "can"
erjiang
+3  A: 

Keep in mind that the actual fragmentation that occurs when DELETE+INSERT is issued, as opposed to a correctly implemented UPDATE, will make a great difference over time.

That's why, for instance, the REPLACE INTO that MySQL implements is discouraged in favor of the INSERT INTO ... ON DUPLICATE KEY UPDATE ... syntax.
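For reference, the two MySQL forms look roughly like this (a sketch, assuming a unique key on Name):

-- REPLACE INTO deletes the colliding row and inserts a fresh one
REPLACE INTO YourTable (Name, Tag) VALUES ('YY', 42);

-- the preferred form updates the existing row in place on a key collision
INSERT INTO YourTable (Name, Tag) VALUES ('YY', 42)
ON DUPLICATE KEY UPDATE Tag = VALUES(Tag);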

jishi
+5  A: 

Anecdotally, the update should be faster since it is a single command, but as a thought exercise I suspect I could construct a convoluted scenario in SQL Server where it turned out to be closer than might be thought. It would need to be instrumented and tested to see.

...

I've had to edit this: I had written that if the heap page split it would ripple updates etc., and it does not.

The row would be edited, inserted on the end page, and a forwarding pointer left in the original page; as a result the non-clustered (NC) indexes would not be updated, since they are not pointing at a volatile RID, thanks to the forwarding pointer remaining on the data page.

I can't think of a convoluted scenario in this case; there can be very expensive single-row updates / inserts that do cause an immense number of page splits (I'm told a case of 17 splits from a single statement was demonstrated earlier this week).

If you have a trigger on the table, you are already deleting / inserting anyway; you just do not see it, but you have versioned the row and are performing two operations without realising it.
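(If you want to see those forwarding pointers yourself on SQL Server 2005 and later, a query along these lines should show them; the table name is a placeholder:)

SELECT index_type_desc, forwarded_record_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('YourHeapTable'), NULL, NULL, 'DETAILED')
WHERE index_type_desc = 'HEAP';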

A.

Andrew
This is a nice story, but not really an "answer" for someone stumbling on this question. It's hard to distill the actual answer from this text - you can explain your changes in the edit summary field, not in your main post.
Konerak
+2  A: 

In your case, I believe the update will be faster.

Remember indexes!

You have defined a primary key, and it will likely automatically become a clustered index (at least SQL Server does this). A clustered index means the records are physically laid out on disk according to the index. The DELETE operation itself won't cause much trouble; even after one record goes away, the index stays correct. But when you INSERT a new record, the DB engine will have to put this record in the correct location, which under some circumstances will cause some "reshuffling" of the old records to "make room" for the new one. That is where it will slow down the operation.

An index (especially a clustered one) works best if the values are ever-increasing, so new records just get appended at the tail. Maybe you can add an extra INT IDENTITY column to become the clustered index; this would simplify insert operations, as sketched below.
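A sketch of that layout in SQL Server terms (the names are illustrative):

create table YourTable
(ID    int identity primary key clustered  --ever-increasing, so new rows append at the tail
,Name  varchar(50) not null
,Tag   int
,constraint UQ_YourTable_Name unique nonclustered (Name)  --still supports fast lookups by Name
)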

Developer Art
The 'reshuffling' there would be the page split.
Andrew
+1  A: 

It depends on the product. A product could be implemented that (under the covers) converts all UPDATEs into transactionally wrapped DELETE and INSERT pairs, provided the results are consistent with UPDATE semantics.

I'm not saying I'm aware of any product that does this, but it's perfectly legal.
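Conceptually, the engine would be free to execute the question's statement as something like this under the covers (a sketch of the equivalent semantics, not any product's documented internals; the IDENTITY column is ignored for brevity):

-- what the user writes:
UPDATE YourTable SET Tag = 42 WHERE Name = 'YY';

-- what the engine could legally do instead:
BEGIN TRANSACTION;
    DELETE FROM YourTable WHERE Name = 'YY';              --drop the old row version
    INSERT INTO YourTable (Name, Tag) VALUES ('YY', 42);  --write the new row version
COMMIT TRANSACTION;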

Damien_The_Unbeliever
... perfectly legal, as long as the Foreign-Key constraint-checking is deferred until after the insert, which may not be legal.
Mr. Shiny and New
I am not sure, but I have heard that SQL Server does a DELETE+INSERT for an UPDATE internally. If that is the case, will it make any difference in the case of SQL Server?
Faiz
@Faiz - as with everything, the only way to be sure is to test with your data, in your environment. The underlying cost of these operations is unlikely to be your bottleneck - ever. With SQL Server, if you've got a trigger, it certainly *resembles* a delete/insert, but whether that's what the system actually does, who needs to know :-)
Damien_The_Unbeliever
+2  A: 

The bigger the table (in number and size of columns), the more expensive it becomes to delete and insert rather than update, because you have to pay the price of UNDO and REDO. DELETEs consume more UNDO space than UPDATEs, and your REDO contains twice as many statements as necessary.

Besides, it is plain wrong from a business point of view. Consider how much harder it would be to understand a notional audit trail on that table.

APC
+1  A: 

Every write to the database has lots of potential side effects.

  • Delete: a row must be removed, indexes updated, foreign keys checked and possibly cascade-deleted, etc.
  • Insert: a row must be allocated - this might be in place of a deleted row, might not be; indexes must be updated, foreign keys checked, etc.
  • Update: one or more values must be updated; perhaps the row's data no longer fits into that block of the database, so more space must be allocated, which may cascade into multiple blocks being re-written or lead to fragmented blocks; if the value has foreign key constraints they must be checked, etc.

For a very small number of columns, or if the whole row is updated, DELETE+INSERT might be faster, but the FK constraint problem is a big one. Sure, maybe you have no FK constraints now, but will that always be true? And if you have a trigger, it's easier to write code that handles updates if the update operation is truly an update.
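For instance, a SQL Server audit trigger sees both row versions in a single UPDATE firing, something it cannot correlate from a separate DELETE and INSERT (a sketch; the TagAudit table is invented for illustration, and the join assumes Name itself is not changed):

create trigger trg_YourTable_TagAudit on YourTable after update
as
    --'deleted' holds the old values and 'inserted' the new ones for the same statement
    insert into TagAudit (Name, OldTag, NewTag)
    select d.Name, d.Tag, i.Tag
    from deleted d
    join inserted i on i.Name = d.Name;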

Another issue to think about is that inserting and deleting sometimes take different locks than updating. The DB might lock the entire table while you are inserting or deleting, as opposed to locking only the single record you are updating.

In the end, I'd suggest just updating a record if you mean to update it. Then check your DB's performance statistics and the statistics for that table to see if there are performance improvements to be made. Anything else is premature.
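If you do want numbers for your own workload, SQL Server can report the cost of each statement directly (a minimal sketch, reusing the question's table and placeholder values):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- candidate 1: the in-place update
UPDATE YourTable SET Tag = 42 WHERE Name = 'YY';

-- candidate 2: the delete/insert pair
DELETE FROM YourTable WHERE Name = 'YY';
INSERT INTO YourTable (Name, Tag) VALUES ('YY', 42);

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;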

An example from the ecommerce system I work on: we were storing credit-card transaction data in the database in a two-step approach. First, write a partial transaction to indicate that we've started the process; then, when the authorization data is returned from the bank, update the record. We COULD have deleted and then re-inserted the record, but instead we just used an update. Our DBA told us that the table was fragmented because the DB was only allocating a small amount of space for each row, and the update caused block-chaining since it added a lot of data. However, rather than switch to DELETE+INSERT, we just tuned the database to always allocate the whole row's space up front; this meant the update could use the pre-allocated empty space with no problems. No code change was required, and the code remains simple and easy to understand.

Mr. Shiny and New
+1  A: 

Just tried updating 43 fields on a table with 44 fields; the remaining field was the clustered primary key.

The update took 8 seconds.

A DELETE + INSERT completed faster than the minimum time interval that the "Client Statistics" reports via SQL Management Studio.

Peter

MS SQL 2008

+1  A: 

I am afraid the body of your question is unrelated to the title question.

If we answer the title:

In SQL, is UPDATE always faster than DELETE+INSERT?

then the answer is NO!

Just google for

  • "Expensive direct update"* "sql server"
  • "deferred update"* "sql server"

Such updates are realized through a more costly (more processing) internal delete+insert rather than a direct in-place update. These are the cases when

  • one updates a field with a unique (or primary) key (see the sketch below), or
  • the new data does not fit into (is bigger than) the pre-update row space allocated (or even the maximum row size), resulting in fragmentation,
  • etc.
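A sketch of the first case, reusing the test table from the first answer:

-- updating the unique key itself generally cannot be done fully in place:
-- the old index entry must be removed and a new one inserted, so the
-- engine may realize this UPDATE as an internal delete+insert
UPDATE YourTable SET YourName = 'renamed' WHERE YourName = 'new name';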

My quick (non-exhaustive) search, which does not pretend to be comprehensive, gave me [1] and [2].

[1] Update Operations. Sybase® SQL Server Performance and Tuning Guide, Chapter 7: The SQL Server Query Optimizer. http://www.lcard.ru/~nail/sybase/perf/11500.htm
[2] UPDATE Statements May Be Replicated as DELETE/INSERT Pairs. http://support.microsoft.com/kb/238254

vgv8
A: 

The question of speed is irrelevant without a specific speed problem.

If you are writing SQL code to make a change to an existing row, you UPDATE it. Anything else is incorrect.

If you're going to break the rules of how code should work, then you'd better have a damn good, quantified reason for it, and not a vague idea of "this way is faster" when you don't have any idea what "faster" is.

Andy Lester