Edit: using SQL Server 2005.

I have a query that has to check whether rows from a legacy database have already been imported into a new database, and imports them if they are not already there. Since the legacy database was badly designed, there is no unique id for the rows from the legacy table, so I have to use heuristics to decide whether a row has been imported. (I have no control over the legacy database.) The new database has a slightly different structure, and I have to check several values, such as whether the create dates match, the group numbers match, etc., to heuristically decide whether the row exists in the new database or not. Not very pretty, but the bad design of the legacy system it has to interface with leaves me little choice.

Anyhow, the users of the system started throwing 10x to 100x more data at it than I designed for, and now the query is running too slowly. Can you suggest a way to make it faster? Here is the code, with some parts redacted for privacy or to simplify, but I think I left the important part:

INSERT INTO [...NewDatabase...]
SELECT [...Bunch of columns...]
  FROM  [...OldDatabase...] AS t1
 WHERE t1.Printed = 0
  AND NOT EXISTS(SELECT *
                   FROM [...New Database...] AS s3
                  WHERE year(s3.dtDatePrinted) = 1850  --This allows for re-importing rows marked for reprint
                    AND CAST(t1.[Group] AS int) = CAST(s3.vcGroupNum AS int)
                    AND RTRIM(t1.Subgroup) = s3.vcSubGroupNum
                    AND RTRIM(t1.SSN) = s3.vcPrimarySSN
                    AND RTRIM(t1.FirstName) = s3.vcFirstName
                    AND RTRIM(t1.LastName) = s3.vcLastName
                    AND t1.CaptureDate = s3.dtDateCreated)
+2  A: 

Not knowing what the schema looks like, your first step is to look at the execution plan for that query (SQL Server's equivalent of EXPLAIN). That should show you where the database is chewing up its time. If there are no indexes, it's likely doing multiple full table scans. If I had to guess, I'd say t1.Printed and s3.dtDatePrinted are the two most vital to get indexed, as they'll weed out what's already been converted.
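
For example, on SQL Server 2005 you can capture the estimated plan like this and then create the two likely indexes (the table and index names below are placeholders, since the real names are redacted):

SET SHOWPLAN_ALL ON;
GO
-- run the INSERT ... SELECT from the question here; its plan is returned instead of being executed
GO
SET SHOWPLAN_ALL OFF;
GO

-- If the plan shows table scans, these are the likely first candidates:
CREATE INDEX IX_Old_Printed     ON OldTable (Printed);
CREATE INDEX IX_New_DatePrinted ON NewTable (dtDatePrinted);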

Anything that has to be calculated in the comparison can also keep the database from using an index; here that's the RTRIM and CAST calls. The RTRIMs are applied to the legacy columns, so they only add per-row cost, but CAST(s3.vcGroupNum AS int) wraps the new table's column in a function, which means an index on vcGroupNum can't be used for that comparison. Since you can't touch the legacy side, clean it up in the new database: store the group number with the right type permanently so the cast isn't needed.
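
A minimal sketch of that clean-up, assuming the new table can be altered and every existing group number converts cleanly to int (NewTable stands in for the redacted table name):

-- Store the group number as the type it is actually compared as,
-- so the CAST can be dropped from the new table's side of the predicate.
ALTER TABLE NewTable ALTER COLUMN vcGroupNum int NOT NULL;

-- With the bare column in the predicate, an index on it becomes usable.
CREATE INDEX IX_New_GroupNum ON NewTable (vcGroupNum);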

year(s3.dtDatePrinted) = 1850 may also fool the optimizer into not using an index on s3.dtDatePrinted (the plan should let you know). This appears to be just a flag value set by you to mark rows for reprint, so store it as a specific date (i.e. 1850-01-01 00:00:00) and do an exact match (i.e. s3.dtDatePrinted = '1850-01-01 00:00:00'); that turns it into a simple index lookup.
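
In the question's subquery that change would look something like this (assuming the reprint flag really is always written as exactly midnight on that date):

  AND NOT EXISTS(SELECT *
                   FROM [...New Database...] AS s3
                  WHERE s3.dtDatePrinted = '1850-01-01 00:00:00'  -- exact match instead of year()
                    AND ...)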

Making your comparison simpler would also help. Essentially what you have here is a 1-to-1 relationship between t1 and s3 (if those are the real table names, consider something more descriptive). So rather than matching each individual bit of t1 against s3, just give t1 a column that references the primary key of its corresponding s3 row once it has been imported. Then you have only one thing to check. If you can't alter t1, you could use a third table to track the t1-to-s3 mappings.

Once you have that, all you should have to do is a join to find the rows in t1 that are not yet in s3.

SELECT t1.*
FROM t1
LEFT JOIN s3 ON s3.id = t1.s3   -- or whatever s3's primary key is
WHERE t1.s3 IS NULL             -- no mapping yet, so the row still needs importing
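
Since the legacy table can't be altered here, one hypothetical shape for that third table is to key it by the same columns the heuristic already matches on (all names and types below are made up for illustration):

-- Records which legacy rows have already been imported; add the name columns
-- to the key if they are part of what makes a legacy row unique.
CREATE TABLE ImportedRows (
    GroupNum    int         NOT NULL,
    SubGroupNum varchar(20) NOT NULL,
    PrimarySSN  varchar(20) NOT NULL,
    CaptureDate datetime    NOT NULL,
    NewRowId    int         NOT NULL,  -- primary key of the corresponding row in the new table
    PRIMARY KEY (GroupNum, SubGroupNum, PrimarySSN, CaptureDate)
);

-- The import's existence check then becomes a single seek on that primary key:
SELECT [...Bunch of columns...]
  FROM [...OldDatabase...] AS t1
 WHERE t1.Printed = 0
   AND NOT EXISTS(SELECT *
                    FROM ImportedRows AS m
                   WHERE m.GroupNum    = CAST(t1.[Group] AS int)
                     AND m.SubGroupNum = RTRIM(t1.Subgroup)
                     AND m.PrimarySSN  = RTRIM(t1.SSN)
                     AND m.CaptureDate = t1.CaptureDate);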
Schwern
Thanks. Since the legacy system will be replaced at some point, we ended up deciding not to fuss with it anymore, but it's good to have an answer so that we design the newer import better. You are spot on there, Schwern: the query plan does indicate that it spends most of its time table scanning in those hot spots you pointed out. One thing that particularly mystified me was year(s3.dtDatePrinted) = 1850; I was so sure that was the faster way to do it that I didn't believe the query plan. Thanks for explaining that one for me.
Dennis
@Dennis You're welcome! Glad it worked out so easily.
Schwern
A: 

Try replacing this:

year(s3.dtDatePrinted) = 1850

With this:

s3.dtDatePrinted >= '1850-01-01' and s3.dtDatePrinted < '1851-01-01'

In this case, if there's an index on dtDatePrinted, the optimizer may be able to use a range seek on it.

But I agree with the previous poster that you should avoid the RTRIMs. One idea is keeping in s3 the untrimmed (original) values, or creating an intermediate table that maps untrimmed values to trimmed (new) ones, or even creating materialized views (indexed views, in SQL Server terms). But all of this work is useless without proper indexes.
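
One concrete way to do that on SQL Server without changing the existing columns is a persisted computed column with an index on it, so the conversion is paid once per row instead of on every comparison (the column and index names below are made up):

-- Precompute the integer form of the group number and index it.
ALTER TABLE NewTable
  ADD iGroupNum AS CAST(vcGroupNum AS int) PERSISTED;

CREATE INDEX IX_New_iGroupNum ON NewTable (iGroupNum);

-- The existence check can then compare against the precomputed column:
--   AND CAST(t1.[Group] AS int) = s3.iGroupNum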

Lluis Martinez