views: 1016
answers: 6
We've run across a slightly odd situation. Basically, there are two tables in one of our databases that are fed tons and tons of logging info we don't need or care about. Partially because of this, we're running out of disk space.

I'm trying to clean out the tables, but it's taking forever (there are still 57,000,000+ records after letting this run through the weekend... and that's just the first table!)

Just using DELETE on the table is taking forever and eats up drive space (I believe because of the transaction log). Right now I'm using a while loop to delete records X at a time, while playing around with X to determine what's actually fastest. For instance, X=1000 takes 3 seconds, while X=100,000 takes 26 seconds... which, doing the math, works out slightly faster.

But the question is whether or not there is a better way?

(Once this is done, I'm going to run a SQL Agent job to clean the table out once a day... but I need it cleared out first.)

+12  A: 

TRUNCATE the table or disable indexes before deleting

TRUNCATE TABLE [tablename]

Truncating will remove all records from the table without logging each deletion.
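
If you go the "disable indexes" route instead, here's a minimal sketch, assuming a hypothetical table dbo.AppLog with a nonclustered index named IX_AppLog_LogDate (a disabled nonclustered index isn't maintained during the delete, but it has to be rebuilt afterwards):

-- hypothetical table and index names
ALTER INDEX IX_AppLog_LogDate ON dbo.AppLog DISABLE

-- ... run the large DELETE here ...

ALTER INDEX IX_AppLog_LogDate ON dbo.AppLog REBUILD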

Joe Philllips
Worth linking to http://en.wikipedia.org/wiki/Truncate_(SQL) as a jumping-off point to be sure they understand the limitations, but I wholeheartedly concur with the advice.
ShuggyCoUk
+1  A: 
TRUNCATE TABLE [tablename]

will delete all the records without logging.

DJ
It will do some minimal logging; it only logs the pages and extents affected.
SQLMenace
+3  A: 

1) Truncate table

2) script out the table, drop and recreate the table
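
A rough sketch of option 2, assuming a hypothetical log table dbo.AppLog; in practice the CREATE statement would be scripted out from the existing table (e.g. via SSMS) rather than written from memory:

-- hypothetical table definition; script the real one before dropping
DROP TABLE dbo.AppLog

CREATE TABLE dbo.AppLog
(
    LogID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    LogDate DATETIME NOT NULL,
    Message VARCHAR(4000) NULL
)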

SQLMenace
+1  A: 

Depending on how much you want to keep, you could just copy the records you want to a temp table, truncate the log table, and copy the temp table records back to the log table.

John M Gant
+4  A: 

To add to the other responses, if you want to hold onto the past day's data (or past month or year or whatever), then save that off, do the TRUNCATE TABLE, then insert it back into the original table:

SELECT
     *
INTO
     tmp_My_Table
FROM
     My_Table
WHERE
     <Some_Criteria>

TRUNCATE TABLE My_Table

INSERT INTO My_Table SELECT * FROM tmp_My_Table
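
If My_Table has an IDENTITY column, the INSERT back needs an explicit column list and IDENTITY_INSERT enabled; a minimal sketch, with hypothetical column names:

SET IDENTITY_INSERT My_Table ON

-- hypothetical columns: LogID (identity), LogDate, Message
INSERT INTO My_Table (LogID, LogDate, Message)
SELECT LogID, LogDate, Message FROM tmp_My_Table

SET IDENTITY_INSERT My_Table OFF

DROP TABLE tmp_My_Table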

The next thing to do is ask yourself why you're inserting all of this information into a log if no one cares about it. If you really don't need it at all then turn off the logging at the source.

Tom H.
Third party software...:(
Telos
You have my condolences ;)
Tom H.
A: 

If you can work out the optimum x, this will loop around the delete continuously at the quickest rate. Setting the rowcount limits the number of records that get deleted in each step of the loop. If the log file is getting too big, stick a counter in the loop and truncate it every million rows or so.

DECLARE @x INT  SET @x = 100000   -- batch size; tune as described in the question

SET ROWCOUNT @x
WHILE 1 = 1
BEGIN
    DELETE FROM [tablename]
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0   -- reset so later statements are unaffected
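
On SQL Server 2005 and later, DELETE TOP (n) gives the same batching effect without SET ROWCOUNT; a minimal sketch, keeping the [tablename] placeholder:

WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM [tablename]   -- batch size to tune
    IF @@ROWCOUNT = 0 BREAK
END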

Changing the recovery model of the database to simple or bulk-logged will reduce some of the delete overhead.
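
A minimal sketch of that change, assuming a hypothetical database named LogDB; switch back to FULL afterwards if point-in-time recovery is needed:

ALTER DATABASE LogDB SET RECOVERY SIMPLE

-- ... run the batched delete ...

ALTER DATABASE LogDB SET RECOVERY FULL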

u07ch