We are currently running a SQL job that archives data daily at 10 PM. However, end users complain that from 10 PM to 12 AM, the page shows a timeout error.

Here's the pseudocode of the job:

WHILE @jobArchive = 1 AND @countProcessedItem < @maxItem
BEGIN
    EXEC ArchiveItems @countProcessedItem OUTPUT
    IF @@ERROR <> 0          -- stop the loop if the procedure failed
        SET @jobArchive = 0
    WAITFOR DELAY '00:10'    -- pause 10 minutes between batches
END

The ArchiveItems stored procedure grabs the top 100 items that were created more than 30 days ago, processes and archives them in another database, and deletes the items from the original table, along with related rows in other tables. Finally, it sets @countProcessedItem to the number of items processed. ArchiveItems also creates and drops the temporary tables it uses to hold some records.
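
For context, here is a rough, purely illustrative reconstruction of what such a procedure might look like; the real procedure wasn't posted, and the object names (dbo.Items, CreatedDate, ArchiveDb) are placeholders:

    CREATE PROCEDURE ArchiveItems
        @countProcessedItem int OUTPUT
    AS
    BEGIN
        -- Stage the 100 oldest eligible items in a temp table
        SELECT TOP (100) ItemId, Title, CreatedDate
        INTO #batch
        FROM dbo.Items
        WHERE CreatedDate < DATEADD(day, -30, GETDATE())
        ORDER BY CreatedDate

        SET @countProcessedItem = @@ROWCOUNT

        -- Copy the batch into the archive database
        INSERT INTO ArchiveDb.dbo.Items (ItemId, Title, CreatedDate)
        SELECT ItemId, Title, CreatedDate FROM #batch

        -- Delete the originals (related tables omitted for brevity)
        DELETE i
        FROM dbo.Items AS i
        INNER JOIN #batch AS b ON b.ItemId = i.ItemId

        DROP TABLE #batch
    END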

Note: if the information I've provided is incomplete, reply and I'll gladly add more if possible.

+1  A: 

The only thing that's not clear is whether ArchiveItems also deletes data from the database. Deleting rows in SQL Server is a very expensive operation that causes a lot of locking, possibly escalating to table and database locks, and this typically causes timeouts.
If you're deleting data what you can do is:

  • Set a "logical" deletion flag on the relevant rows and take it into account in the queries that read data
  • Perform deletes in batches. I've found that (in my application) deleting about 250 rows per transaction is fastest, taking far less time than issuing 250 separate DELETE commands (see the sketch after this list)
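
A minimal sketch of the batched approach, assuming a table dbo.Items with a CreatedDate column (both names are placeholders, not from the question):

    -- Delete eligible rows in batches of 250 so each transaction
    -- holds its locks only briefly
    DECLARE @batchSize int
    SET @batchSize = 250
    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batchSize) FROM dbo.Items
        WHERE CreatedDate < DATEADD(day, -30, GETDATE())

        IF @@ROWCOUNT = 0 BREAK   -- nothing left to delete
    END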

Hope this helps, but archiving and deleting data from SQL Server is a very tough job.

massimogentilini
> Only thing not clear is whether ArchiveItems also deletes data from the database

It does. Sorry, I forgot to add that in the description.
OnesimusUnbound
+1  A: 

While the ArchiveItems process is deleting the 100 records, it is locking the table. Make sure you have indexes in place so the delete runs quickly; run a Profiler session during that timeframe and see how long it takes. You may need to add an index on the date field if the query is doing a table scan or index scan to find the records.
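
For example, an index on the date column lets the procedure seek straight to the oldest rows instead of scanning the whole table (the table and column names below are assumptions, since they weren't given):

    -- Hypothetical supporting index for the archive query and delete
    CREATE NONCLUSTERED INDEX IX_Items_CreatedDate
        ON dbo.Items (CreatedDate)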

On the end users' side, you may be able to add a READUNCOMMITTED or NOLOCK hint to the queries; this allows them to run while the deletes are taking place, but with the possibility of returning records that are about to be deleted.
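
As a sketch (the query and object names are illustrative, not from the question):

    -- Read without taking shared locks; may see rows the archive
    -- job is about to delete (dirty reads)
    SELECT TOP (20) ItemId, Title
    FROM dbo.Items WITH (NOLOCK)
    ORDER BY CreatedDate DESC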

Also consider a different timeframe for the job; find the time that has the least user activity, or only do the archiving once a month during a maintenance window.

SqlACID
+1  A: 

As another poster mentioned, slow DELETEs are often caused by a missing suitable index, or by an existing index that needs rebuilding.

During DELETEs it is not uncommon for locks to be escalated ROW -> PAGE -> TABLE. You can reduce locking by:

  • Adding a ROWLOCK hint (but be aware it will likely consume more memory); see the sketch after this list

  • Randomising the rows that are deleted (makes lock escalation less likely)

  • Easiest: Adding a short WAITFOR in ArchiveItems

    WHILE someCondition
    BEGIN
        DELETE some rows

        -- Give other processes a chance...
        WAITFOR DELAY '000:00:00.250'
    END
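
For the ROWLOCK option, a minimal sketch (table and column names are assumptions):

    -- Ask for row-level locks up front instead of letting the
    -- engine escalate to page or table locks; uses more lock memory
    DELETE TOP (100) FROM dbo.Items WITH (ROWLOCK)
    WHERE CreatedDate < DATEADD(day, -30, GETDATE())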

I wouldn't use the NOLOCK hint if the deletes are happening during periods with other activity taking place and you want to maintain the integrity of your data.

Mitch Wheat