views: 340
answers: 6

How to delete records from SQL Server 2005 tables without logging them in the transaction log?

I do not wish to log the deletes because, once deleted, those records will never be needed again.

Currently the various deletes take too much time. Are there any other options to improve the performance of the delete statements? I cannot use TRUNCATE since a WHERE clause is needed.

+1  A: 

First of all, note that you cannot NOT have a transaction log. What if you lose power on the server while doing a huge delete? This information is needed so SQL Server can perform atomic operations.

What might be of interest to you, though, is the "recovery model". Please read this article on TechNet: http://technet.microsoft.com/en-us/library/ms189275.aspx
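
For example, switching the database to the SIMPLE recovery model (the database name below is just a placeholder) keeps the log from growing between log backups; the deletes are still logged, but the committed log space is reused:

-- Placeholder database name; deletes are still logged under SIMPLE,
-- but the log space is reused once the transactions commit.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE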

matdumsa
+3  A: 

If you are deleting an entire table, you could use TRUNCATE TABLE, which only logs the page deallocations rather than logging an entry for each deleted row. I'm not aware of any way to do deletions without any transaction logging at all.

RJ1516
`TRUNCATE TABLE` removes *all* records from the table
OMG Ponies
So? The rows still needed can be stored in a temp table ;) then moved back into the now-empty table.
TomTom
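
A minimal sketch of the pattern TomTom describes, with hypothetical table and column names; note that TRUNCATE TABLE fails if foreign keys reference the table, and an identity column would need SET IDENTITY_INSERT handling:

-- Keep the rows we still need (hypothetical table and predicate).
SELECT * INTO #KeepRows
FROM dbo.BigTable
WHERE CreatedDate >= '20050101'

-- Minimally logged removal of everything, then restore the kept rows.
TRUNCATE TABLE dbo.BigTable

INSERT INTO dbo.BigTable
SELECT * FROM #KeepRows

DROP TABLE #KeepRows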
A: 

You are deleting rows, but you have a WHERE clause, so the usual SQL performance recommendations apply.

Have you used SQL Profiler to see the execution plans of the queries? If you record these operations, you can use the performance wizard to analyze them, and it may suggest a new index.

If you run a lot of individual delete statements (DELETE ... WHERE id = xxx), you may be better off putting all the IDs into a temp table and joining to that temp table so that you issue a single delete statement.
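
A rough sketch of that approach; the table and column names are assumptions:

-- Collect the keys to delete (load them however suits your process).
CREATE TABLE #ToDelete (Id INT PRIMARY KEY)
INSERT INTO #ToDelete (Id) VALUES (101)
INSERT INTO #ToDelete (Id) VALUES (102)

-- One set-based delete instead of many single-row deletes.
DELETE t
FROM dbo.BigTable AS t
JOIN #ToDelete AS d ON d.Id = t.Id

DROP TABLE #ToDelete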

Timores
+2  A: 

I think you are misunderstanding the purpose of the transaction log. The transaction log's primary function isn't restoring old rows -- its job is to maintain the consistency of the database. All modifications go through the transaction log, and there is no way around that. That's a good thing. The transaction log is also used to make point-in-time backups and restores of the database, and it is used when mirroring between two servers.

If you have deletes that are taking too much time, you should look in a few areas first.

1) Do you have any DELETE triggers that are firing on your tables? If so, those could be a source of slowness.

2) Has your DBA properly set up the database, at the bare minimum, keeping the transaction log and the data files on separate physical disks?

3) Do you have a lot of foreign keys which are getting checked? For example, if you have another table which references the table you're deleting from, the database server will check each delete against the referencing tables to make sure that the delete statement does not cause those other tables to become inconsistent.

4) Do you have too many indexes, or an otherwise high indexing burden, on the table you're deleting from? Every deleted row also requires a corresponding entry to be deleted from each index, so be judicious about your use of indexes. Are your indexes being properly maintained?

5) Does it take a long time to seek out the rows you want to delete? If the WHERE clause on your DELETE statement is too costly, this will really slow down your deletes. Try temporarily changing your DELETE statement to a SELECT statement and see if that query runs fast. If it doesn't, optimize the SELECT statement, either by editing it, restructuring your tables, or adding the appropriate indexes. Then change the statement back to a DELETE; the performance should improve significantly if the corresponding SELECT improved during your optimization. (A sketch of this check follows the list.)
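
For illustration, here is a DELETE and its SELECT stand-in; the table and predicate are assumptions:

-- The real statement (hypothetical table and predicate):
-- DELETE FROM dbo.BigTable WHERE CreatedDate < '20050101'

-- The stand-in: time this and inspect its execution plan.
SELECT COUNT(*)
FROM dbo.BigTable
WHERE CreatedDate < '20050101'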

If you have a big batch job executing a great many deletes against your tables, you may want to temporarily disable your triggers, or drop your indexes and foreign keys before the batch and recreate them afterwards. That could also speed things up.
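
A sketch of the trigger/constraint part, with placeholder names (dbo.BigTable is the table being deleted from, dbo.ChildTable has a foreign key referencing it); only do this if nothing can create orphan rows while the checks are off:

-- Placeholder table names.
ALTER TABLE dbo.BigTable DISABLE TRIGGER ALL
ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT ALL   -- delete-time FK checks happen against referencing tables

-- ... run the big batched DELETEs here ...

ALTER TABLE dbo.BigTable ENABLE TRIGGER ALL
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT ALL  -- re-validate; fails if orphans were introduced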

Dave Markle
A: 

Forget the transaction log as the source of the slowness; it is something used internally by the DB to keep consistency. Instead, consider finding a way to do the delete in batches. Rather than a single delete statement, and assuming you have an integer PK on the table, try deleting ranges of values in a loop. Something like:

Declare @RecordsLeft int
Declare @StartRange int
Declare @EndRange int
Declare @BatchSize int

Set @BatchSize = 10000
-- The "..." placeholders stand for your table and WHERE clause.
Set @RecordsLeft = ( Select Count(*) From ... )
Set @StartRange = 0          -- assumes the integer PK starts at or near 0
Set @EndRange = @StartRange + @BatchSize

While @RecordsLeft > 0
Begin
    -- Delete one PK range per iteration to keep each transaction small.
    Delete ...
    Where ...
      And PK Between @StartRange And @EndRange

    Set @RecordsLeft = ( Select Count(*) From ... )
    Set @StartRange = @EndRange + 1
    Set @EndRange = @StartRange + @BatchSize
End
Thomas
+1  A: 

It's easy:

DECLARE @BatchSize INT
SET @BatchSize = 100000

-- Loop until a batch deletes no rows; @@ROWCOUNT must be read
-- immediately after the DELETE.
WHILE @BatchSize <> 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM [dbo].[UnknownTable]
    -- add your WHERE clause here
    SET @BatchSize = @@rowcount
END
Dmitry