views: 553
answers: 6

Suppose that I have a table with 10,000,000 records. What is the difference between these two solutions?

1- Delete the data with a single statement:

DELETE FROM MyTable

2- Delete all of the data from an application, row by row:

DELETE FROM MyTable WHERE ID = @SelectedID

Does the first solution have the best performance? What is the impact on the log and on performance?

+2  A: 

The first clearly has better performance.

When you specify DELETE [MyTable] it simply erases everything without doing any checks on ID. The second one wastes time and disk operations locating each record before deleting it.

It gets worse: every time a record disappears from the middle of the table, the engine may want to condense data on disk, wasting time and work again.

Maybe a better idea would be to delete data based on clustered index columns in descending order. Then the table will basically be truncated from the end at every delete operation.
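
A rough sketch of one way to do that, assuming SQL Server 2005 or later and that the clustered index is on ID (both assumptions here, as is the batch size), deleting in batches from the high end of the key:

WHILE EXISTS (SELECT 1 FROM MyTable)
    BEGIN
        ;WITH LastRows AS
        (
            -- the 10,000 rows with the highest clustered key values
            SELECT TOP (10000) ID
            FROM MyTable
            ORDER BY ID DESC
        )
        DELETE FROM LastRows
    END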

Developer Art
Please tell me why.
masoud ramezani
You say: "Maybe a better idea would be to delete data based on clustered index columns in descending order. Then the table will basically be truncated from the end at every delete operation." Can you describe this in more detail?
masoud ramezani
The database engine allocates data physically on disk in clustered index order. If you were to delete the records with the highest index values, that would basically amount to cutting off the tail of the file, without the data condensing that might occur if you removed something from the middle of the file. This is also why, when adding records, you should try to have them appended to the end of the file. I can imagine removing records from the end would improve performance as well.
Developer Art
+3  A: 

Option 1 will create a very large transaction and have a big impact on the log and on performance, as well as escalating locks so that the table will be unavailable. Option 2 will be slower, although it will generate less impact on the log (assuming bulk-logged / full recovery mode).

If you want to get rid of all the data, TRUNCATE TABLE MyTable would be faster than both; although it has no facility to filter rows, it does a metadata change behind the scenes and basically drops the IAM on the floor for the table in question.
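
For completeness, a minimal sketch of the truncate path (and of why it can't replace a filtered delete):

TRUNCATE TABLE MyTable -- deallocates all pages as a metadata operation; only the deallocations are logged

-- Not valid: TRUNCATE TABLE does not accept a WHERE clause
-- TRUNCATE TABLE MyTable WHERE ID = @SelectedID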

Andrew
I can't use TRUNCATE because MyTable has a foreign key, and I also need a WHERE clause to filter the data.
masoud ramezani
+2  A: 

The best performance for clearing a table would come from TRUNCATE TABLE MyTable. See http://msdn.microsoft.com/en-us/library/ms177570.aspx for a more detailed explanation.

Scoregraphic
+6  A: 

If you have that many records in your table and you want to delete them all, you should consider TRUNCATE TABLE instead of DELETE FROM. It will be much faster, but be aware that it will not fire any triggers.

See here for more details (in this case SQL Server 2000): http://msdn.microsoft.com/en-us/library/aa260621%28SQL.80%29.aspx
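
To illustrate the trigger point, a small sketch; the trigger and audit table below are made up purely for the example:

-- Hypothetical audit trigger, for illustration only
CREATE TRIGGER trg_MyTable_Delete ON MyTable
AFTER DELETE
AS
    INSERT INTO DeleteAudit (RowsDeleted, DeletedAt)
    SELECT COUNT(*), GETDATE() FROM deleted
GO

DELETE FROM MyTable        -- fires trg_MyTable_Delete once for the whole statement
-- TRUNCATE TABLE MyTable  -- would not fire the trigger at all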

Deleting from the table within the application row by row will take a very long time, as your DBMS cannot optimize anything; it doesn't know in advance that you are going to delete everything.

Nils Schmidt
You can roll back a TRUNCATE if you are still within the transaction scope in which you performed it. Contrary to popular belief it is transactionalised: the deallocated IAM is not deleted until the commit, so it can be rolled back by reinstating the IAM.
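
A quick sketch that demonstrates this (table name taken from the question):

BEGIN TRANSACTION

TRUNCATE TABLE MyTable

SELECT COUNT(*) FROM MyTable -- returns 0 inside the transaction

ROLLBACK TRANSACTION -- undoes the truncate by reinstating the allocation

SELECT COUNT(*) FROM MyTable -- back to the original row count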
Andrew
Just read up on it, and you are right: in SQL Server you may roll back a TRUNCATE. I had Oracle in mind, where it is not possible (according to the documentation).
Nils Schmidt
I can't use TRUNCATE because MyTable has a foreign key, and I also need a WHERE clause to filter the data.
masoud ramezani
If you need a WHERE clause, you cannot use TRUNCATE. I guess we misunderstood your question; if you don't want to delete ALL the data in the table, see the first part of Andrew's answer for the distinction between options one and two.
Nils Schmidt
A: 

The first will delete all the data from the table and will have better performance than your second, which deletes only the data matching a specific key.

Now, if you have to delete all the data from the table and you don't rely on a rollback, consider using TRUNCATE TABLE.

Patrick
+2  A: 

If you need to restrict which rows you delete rather than doing a complete delete, or you can't use TRUNCATE TABLE (e.g. the table is referenced by an FK constraint, or is included in an indexed view), then you can do the delete in chunks:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1
SET ROWCOUNT 10000 -- delete 10,000 rows at a time

WHILE (@RowsDeleted > 0)
    BEGIN
        DELETE FROM MyTable [WHERE .....] -- WHERE is optional
        SET @RowsDeleted = @@ROWCOUNT
    END
SET ROWCOUNT 0 -- reset the limit so statements that follow are unaffected

Generally, TRUNCATE is the best way and I'd use that if possible. But it cannot be used in all scenarios. Also, note that TRUNCATE will reset the IDENTITY value for the table if there is one.
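
For example, assuming MyTable has an IDENTITY column seeded at 1 and a column named SomeColumn (both names purely illustrative):

INSERT INTO MyTable (SomeColumn) VALUES ('a') -- suppose this row gets identity value 10000001

TRUNCATE TABLE MyTable -- resets the identity back to its seed

INSERT INTO MyTable (SomeColumn) VALUES ('b') -- gets identity value 1 again

-- DELETE FROM MyTable would not reset the seed; the next insert would continue from 10000002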

Edit:
As per the comment below, assuming you are using SQL 2005 or later, you should use the TOP clause instead of SET ROWCOUNT:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
    BEGIN
        DELETE TOP (10000) FROM MyTable [WHERE .....] -- WHERE is optional
        SET @RowsDeleted = @@ROWCOUNT
    END
AdaTheDev
You may want to get away from using SET ROWCOUNT in favour of SELECT/INSERT/UPDATE/DELETE TOP (N)... The reason? Have a look here: http://msdn.microsoft.com/en-us/library/ms143729.aspx and here: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=282528
Frank Kalis
Good point (assuming SQL 2005 or later), which is probably a safe bet. Updated my answer.
AdaTheDev