I have a long-running job. The records to be processed are in a table with around 100K records.

Throughout the job, every query against this table runs against those 100K records. After each record is processed, its status is updated in the same table.

I want to know whether it would be better to add another table in which to track record status, and to keep deleting processed records from the master table, so that as the job progresses the number of records in the master table decreases and query performance improves.

EDIT: The master table is used only for this load. I receive a flat file, which I upload as-is before processing. After running validations on this table, I pick one record at a time and move the data to the appropriate system tables.
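
Roughly, the current flow looks like this (the table and column names below are made up just to illustrate, not my real schema):

    -- load table populated as-is from the flat file
    CREATE TABLE dbo.MasterLoad (
        Id      INT IDENTITY(1,1) PRIMARY KEY,
        RawData VARCHAR(4000) NOT NULL,
        Status  TINYINT NOT NULL DEFAULT 0   -- 0 = pending, 1 = processed, 2 = failed
    );

    -- per record: validate, move the data into the system tables, then mark it here
    DECLARE @CurrentId INT = 1;              -- id of the record currently being processed
    UPDATE dbo.MasterLoad
    SET    Status = 1
    WHERE  Id = @CurrentId;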

+3  A: 

I had a similar performance problem with a table that generally has a few million rows, but I only need to process what has changed since the start of my last execution. The target table has an IDENTITY column, so when my batch process begins I grab the highest IDENTITY value from the set I select, i.e. the rows whose IDs are greater than the mark left by my previous batch execution. Upon successful completion of the batch job, I add a record to a separate table indicating the highest IDENTITY value that was successfully processed, and that value becomes the starting point for the next batch invocation. (I'll also add that my bookmark table is general purpose, so multiple different jobs use it, each with a unique job name.)
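
A rough sketch of that bookmark approach (the table and column names here are only illustrative, not my actual schema):

    -- one general-purpose bookmark table shared by several jobs, keyed by job name
    CREATE TABLE dbo.BatchBookmark (
        JobName      VARCHAR(100) NOT NULL,
        LastIdentity INT          NOT NULL,
        CompletedAt  DATETIME     NOT NULL DEFAULT GETDATE()
    );

    -- 1. read the previous high-water mark for this job
    DECLARE @LastId INT =
        ISNULL((SELECT MAX(LastIdentity)
                FROM dbo.BatchBookmark
                WHERE JobName = 'MyJob'), 0);

    -- 2. capture the highest IDENTITY value in the set selected for this run
    DECLARE @NewHighId INT =
        (SELECT MAX(Id) FROM dbo.SourceTable WHERE Id > @LastId);

    -- 3. process only the rows between the two marks
    SELECT Id, Payload
    FROM   dbo.SourceTable
    WHERE  Id > @LastId AND Id <= @NewHighId;

    -- 4. on successful completion, store the new mark for the next invocation
    IF @NewHighId IS NOT NULL
        INSERT INTO dbo.BatchBookmark (JobName, LastIdentity)
        VALUES ('MyJob', @NewHighId);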

If you are experiencing locking issues because your per-record processing takes a long time, you can use the approach above but break your sets into chunks of 1,000 rows (or whatever chunk size your system can process in a timely fashion), so you're only locking a small set at any given time.
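
Sketched out, the chunked variant looks something like this (again, illustrative names only):

    -- work through the table in 1,000-row chunks so only a small set is locked at a time
    WHILE 1 = 1
    BEGIN
        ;WITH chunk AS (
            SELECT TOP (1000) Id, Status
            FROM   dbo.MasterLoad
            WHERE  Status = 0
            ORDER BY Id
        )
        UPDATE chunk
        SET    Status = 1;        -- the real per-row work would happen for this chunk

        IF @@ROWCOUNT = 0 BREAK;  -- nothing left to process
    END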

cfeduke
+1  A: 

A few pointers (my two cents):

Consider splitting that table, similar to the "slowly changing dimension" technique, into a few "intermediate" tables, one per "system table" destination; then bulk load your system tables instead of going record by record.

Drop the "input" table before bulk load, and re-create to get rid of indexes, etc.

Do not put unnecessary keys or indexes on that table before the load.

Consider switching the DB "recovery model" to Bulk-Logged, so bulk transactions are minimally logged instead of fully logged (see the sketch after these pointers).

Can you use an SSIS (ETL) task for loading, cleaning, and validating?
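
Roughly what the recovery-model switch and a bulk load look like in T-SQL (database name, file path, and table name below are placeholders):

    -- minimally log bulk operations for the duration of the load, then switch back
    ALTER DATABASE MyLoadDb SET RECOVERY BULK_LOGGED;

    -- load the flat file into the "input" table in one bulk operation
    BULK INSERT dbo.MasterLoad
    FROM 'C:\loads\input_file.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

    ALTER DATABASE MyLoadDb SET RECOVERY FULL;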

UPDATE:
Here is a typical ETL scenario -- well, it depends on who you talk to.

1. Extract to flat_file_1 (you have that)
2. Clean flat_file_1 --> SSIS --> flat_file_2 (you can validate here)
3. Conform flat_file_2 --> SSIS --> flat_file_3 (apply all company standards)
4. Deliver flat_file_3 --> SSIS (bulk) --> db.ETL.StagingTables (several, one per your destination)
4B. insert into destination_table select * from db.ETL.StagingTable (bulk load your final destination)
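
For step 4B, the last hop is just a set-based insert, along these lines (schema, table, and column names are placeholders):

    -- bulk load the final destination from the staging table in one set-based statement
    INSERT INTO dbo.DestinationTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM   ETL.StagingTable
    WHERE  IsValid = 1;   -- only rows that passed validation upstream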

This way, if a process (1-4) times out you can always restart from the intermediate file. You can also inspect each stage and have SSIS create report files for each stage to control your data quality. Operations 1-3 are essentially slow; here they happen outside of the database and can run on a separate server. If you archive flat_file_1 through flat_file_3 you also have an audit trail of what is going on -- good for debugging too. :)

Damir Sudarevic
I cannot do a bulk load as there are a lot of validations to be done on a per-record basis. The recovery model of my current system is "Simple". Bulk-Load? Do you mean Simple? Would the SSIS tool have any performance benefit over running a SQL proc?
noob.spt
In DB Properties/Options there are three: Full, Bulk-Logged, and Simple -- though that may depend on the Standard/Enterprise edition. SSIS would be preferred for validation (well, it depends), but the idea is to do as much as you can in SSIS, then load the "intermediate" tables from there, and use a SQL "insert into..." to move data from the "intermediate" tables to your system tables. It is good practice to keep the intermediate tables in the same database but in a separate schema -- like an ETL schema.
Damir Sudarevic