views: 80
answers: 4

If I have a table with a huge amount of data, and I do an incremental delete instead of a one-time delete, what is the benefit?

One-time delete

    DELETE FROM table_1
    WHERE BID = @BID
      AND CN = @CN
      AND PD = @PD;

Incremental Delete

    WHILE (1 = 1)
    BEGIN
        DELETE TOP (100000) FROM table_1
        WHERE BID = @BID
          AND CN = @CN
          AND PD = @PD;

        IF @@ROWCOUNT = 0   -- no rows affected; nothing left to delete
            BREAK;
    END

I got help from http://stackoverflow.com/questions/3883420/deleting-a-sql-server-table-takes-much-time

+3  A: 

It depends on the configuration. I have seen large deletes blow up the transaction log and cause a failure when there is not enough disk space.

You could also avoid lock escalation by using a smaller batch.
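On SQL Server 2008 and later there is also a per-table switch that prevents escalation outright; a minimal sketch, assuming you control the table and can afford the extra lock memory:

    -- SQL Server 2008+: stop row/page locks on this table from being
    -- escalated to a table lock. Trade-off: holding many fine-grained
    -- locks consumes more lock memory.
    ALTER TABLE table_1 SET (LOCK_ESCALATION = DISABLE);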

Dustin Laine
What is "lock escalation"? I am a rookie when it comes to DB tuning... :(
Anish
Just to add that lock escalation is attempted once 5,000 locks are held, so the batch size would need to be reduced quite a lot, even in the incremental version, to get this benefit.
Martin Smith
Reduce batch size -> you mean changing the value from "100000" to "100" in the query below? DELETE TOP (100000) FROM table_1 WHERE BID = @BID AND CN = @CN AND PD = @PD;
Anish
@Anish - 5,000, assuming that the delete takes row locks. If you don't have to worry about concurrent access, this isn't important, though.
Martin Smith
@Anish, lock escalation is when SQL Server crosses a threshold and locks the whole table rather than individual rows. There are several levels of escalation, but that is the general idea.
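A quick way to watch this happen, if you run the DELETE in one session and the following query in another (the session id 53 below is a placeholder for whichever session runs the DELETE):

    -- After escalation, thousands of KEY/RID entries collapse into a
    -- single OBJECT (table-level) lock.
    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    WHERE request_session_id = 53
    GROUP BY resource_type, request_mode;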
Dustin Laine
Thanks for the info.
Anish
+1  A: 

Alternatively, you can export the data you want to keep, truncate the table, and load your data back. This may be faster. Even if you want to keep 50% of your data, it can still be faster: truncate is only minimally logged. Run your own benchmarks.
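A minimal sketch of this approach, reusing the question's table and filter (the staging table name is a placeholder, and an IDENTITY column would need SET IDENTITY_INSERT handling):

    -- Copy the rows to keep into a staging table.
    SELECT *
    INTO   table_1_keep
    FROM   table_1
    WHERE  NOT (BID = @BID AND CN = @CN AND PD = @PD);

    -- Minimally logged; empties the table but keeps its structure,
    -- indexes, and permissions.
    TRUNCATE TABLE table_1;

    -- Reload; TABLOCK allows minimal logging under the simple or
    -- bulk-logged recovery model.
    INSERT INTO table_1 WITH (TABLOCK)
    SELECT * FROM table_1_keep;

    DROP TABLE table_1_keep;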

AlexKuznetsov
The data I need to retain is large compared to the amount I wish to delete, so I think this approach is not good. Isn't that so?
Anish
It can still be faster - truncate is only minimally logged. Run your own benchmarks.
AlexKuznetsov
To be aware: it is impossible to truncate a table if the table is involved in replication or log shipping, or if a foreign key references the table to be truncated.
vgv8
+1  A: 

The difference is the size of the rollback information.

SQL Server is transactional: until the delete is committed, it must be possible to roll back the transaction.

Take the following example:

  • Free space on hard disk: 10 GB
  • Information to be deleted: 20 GB

When you start the delete, the transaction log will grow until the disk is full, and then the delete will crash.

Even with enough disk space there are other problems, such as database locking and performance hits. This can be a serious problem when deleting data from a live system.
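Tying this back to the incremental version: each batch commits on its own, so the rollback information stays small, and under the full recovery model a log backup between batches lets log space be reused instead of growing. A sketch, with a placeholder database name and backup path, and a batch size kept under the 5,000-lock escalation threshold mentioned above:

    WHILE (1 = 1)
    BEGIN
        DELETE TOP (4000) FROM table_1   -- each batch is its own transaction
        WHERE BID = @BID
          AND CN = @CN
          AND PD = @PD;

        IF @@ROWCOUNT = 0
            BREAK;

        -- Full recovery model only: free log space between batches.
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn';
    END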

Shiraz Bhaiji
OK... that means an incremental delete will definitely reduce the time to delete the data?
Anish
@Anish, it is more that there is a much better chance that it will work. We recently had to delete 150,000,000 rows from a live database; we did it 500,000 at a time. Deleting them all in one go would have crashed the system.
Shiraz Bhaiji
A: 

I am a rookie when it comes to DB tuning... :(

Me too...
but I do not need to understand the technical intricacies (like lock escalation, etc.) to feel the benefit of eating a watermelon in small bites rather than putting the whole thing into my mouth at once

vgv8
I was looking for why/how it's beneficial, and I got the answer from some of the guys: it's because of the log that SQL Server keeps. And while eating a watermelon, I don't have the habit of logging it, because I never roll it back.
Anish