I have written this T-SQL script to roll up duplicate rows in a database that are created by a reconstruction process. To do this, it performs the following:

  • Gets and stores the minimum target-table primary key (ColTargetPK) for each set of duplicate entries in a table variable (@minColTargetPKTable); duplicates are identified by matching on three columns: ColIntA, ColIntB and ColDateTimeA.
  • Sets the column to be rolled up (TargetColVarchar) on that surviving row to a concatenation of the same column from the corresponding duplicate entries.
  • Flags the duplicate entries as inactive (ColTargetStatus = 0).
  • Reports success (or failure).

Due to the size of the dataset involved, this script takes an unacceptably long time to run.

Can anyone see how this could be converted to a set-based operation? If so, could you please provide an example?

I apologise in advance if my description is a bit confusing...

declare @MinColTargetPKTable table
    (ColIntA int,
     ColIntB int,
     ColDateTimeA nvarchar(25),
     minColTargetPK int
    )

-- One row per duplicate group: the group keys plus the surviving (minimum) ColTargetPK
insert @minColTargetPKtable
    select ColIntA, ColIntB, convert(nvarchar(25),ColDateTimeA,120) as ColDateTimeA, 
    min(ColTargetPK) as MinColTargetPK from TargetColTable
    group by ColIntA, ColIntB, convert(nvarchar(25),ColDateTimeA,120) 


declare @TargetColVarchar varchar(max)

declare @updatedColTargetPKs table
(updatedColTargetPKs int)

declare @minColTargetPK int

declare cur cursor
for
select minColTargetPK
from @minColTargetPKtable

open cur

fetch next from cur into @minColTargetPK

while @@FETCH_STATUS = 0
begin
    begin try

    -- Concatenate TargetColVarchar from every row in this duplicate group (FOR XML PATH trick)
    set @TargetColVarchar =
        convert(nvarchar(max),(
        select replace(convert(nvarchar(max), isnull(TargetColVarchar,'')) +   convert (nvarchar(max),' \par \par \par'), '\par } ', '\par') as 
        TargetColVarchar
        from TargetColTable v1
        where ColIntA = (select ColIntA from TargetColTable where ColTargetPK = @minColTargetPK)
        and ColIntB = (select ColIntB from TargetColTable where ColTargetPK = @minColTargetPK)
        and convert(nvarchar(25),ColDateTimeA,120) = (select convert(nvarchar(25),ColDateTimeA,120) from TargetColTable where ColTargetPK = @minColTargetPK)
        order by ColTargetPK
        for xml path(''), type  
        ))

        -- Strip the XML element tags and encoded carriage returns left behind by FOR XML
        set @TargetColVarchar = REPLACE(REPLACE(REPLACE(@TargetColVarchar,'<TargetColVarchar>',''),'</TargetColVarchar>',''), '&#x0D;','')


        update TargetColTable
        set TargetColVarchar = @TargetColVarchar
        where ColTargetPK = @minColTargetPK

        -- Flag every other row in the duplicate group as inactive
        update TargetColTable
        set ColTargetStatus = 0
        from TargetColTable v1
        where ColIntA = (select ColIntA from TargetColTable where ColTargetPK = @minColTargetPK)
        and ColIntB = (select ColIntB from TargetColTable where ColTargetPK = @minColTargetPK)
        and convert(nvarchar(25),ColDateTimeA,120) = (select convert(nvarchar(25),ColDateTimeA,120) from TargetColTable where ColTargetPK = @minColTargetPK)
        and ColTargetPK != @minColTargetPK

        Print 'Merge complete for ColTargetPK '+ convert(varchar(50),  @minColTargetPK)

    end try
    begin catch

        Print 'Merge failed for ColTargetPK '+ convert (varchar(20),@minColTargetPK)

    end catch

    fetch next from cur into @minColTargetPK
end

close cur
deallocate cur

EDIT: OK, below is the script converted to a set-based operation using Preet's suggestion. To give some additional background, TargetTable is approximately 1.1 million rows. Strangely enough, over 2 trials on the same subset of data (approx. 20,000 rows), the set-based script below is not significantly faster than the cursor-based script above. Any thoughts on why this wouldn't be faster?

declare @minColTargetPKTable table
    (
    minColIntA int,
    ColIntB int,
    minColDateTimeA nvarchar(25),
    minColTargetPK int,
    concTargetCol varchar(max)
    )

insert @minColTargetPKtable (minColIntA,ColIntB,minColDateTimeA,minColTargetPK)
select ColIntA, ColIntB, convert(nvarchar(25),ColDateTimeA,120) as ColDateTimeA, min(ColTargetPK) as minColTargetPK from TargetTable
group by ColIntA, ColIntB, convert(nvarchar(25),ColDateTimeA,120) 


-- Build the rolled-up text once per duplicate group, correlated on minColTargetPK
update @minColTargetPKTable
set concTargetCol  = 
(REPLACE(REPLACE(REPLACE(replace(convert(nvarchar(max),
    (
        select convert(nvarchar(max), isnull(TargetColVarchar,'')) + convert (nvarchar(max),' \par \par \par ') as 
        TargetColVarchar
        from TargetTable v1
        where ColIntA = (select ColIntA from TargetTable where ColTargetPK = minColTargetPK)
        and ColIntB = (select ColIntB from TargetTable where ColTargetPK = minColTargetPK)
        and convert(nvarchar(25),ColDateTimeA,120) = (select convert(nvarchar(25),ColDateTimeA,120) from TargetTable where ColTargetPK = minColTargetPK)
        order by ColTargetPK
        for xml path(''), type  
    ))
, '\par } ', '\par '),'<TargetColVarchar>',''),'</TargetColVarchar>',''), '&#x0D;',''))

update TargetTable 
set TargetColVarchar = mv.concTargetCol
from @minColTargetPKTable mv
where mv.minColTargetPK = TargetTable.ColTargetPK 

-- Flag every duplicate that is not the surviving (minimum-PK) row as inactive
update TargetTable
set ColTargetStatus = 0
from TargetTable v
inner join @minColTargetPKTable mv on
mv.minColIntA = v.ColIntA
and mv.minColDateTimeA = convert(nvarchar(25),v.ColDateTimeA,120)
and mv.ColIntB = v.ColIntB 
where ColTargetPK not in (select minColTargetPK from @minColTargetPKTable)
+1  A: 

OK, I'd suggest the following:

  1. Add an extra column to the temp table to hold the @TargetColVarchar value; populate it in one hit.
  2. Join the temp table and TargetColTable to do the update.

You can then optimise based on the execution plans.

Update:

Looking at your amended results, I'd say the following is in order:

Use a #temp table; these tend to be more performant on large datasets.

a. Add more columns to the temp table to record things like (select ColIntA from TargetColTable where ColTargetPK = @minColTargetPK) and (select ColIntB from TargetColTable where ColTargetPK = @minColTargetPK) in the big hit up front.
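As a rough sketch of (a) together with the #temp table point, assuming the column types from the question (the table and index names here are illustrative): because ColIntA, ColIntB and the converted ColDateTimeA are the grouping keys, the row holding the minimum ColTargetPK has exactly those values, so storing them alongside the minimum PK in one hit removes the repeated correlated subqueries.

    create table #minColTargetPK
        (ColIntA int,
         ColIntB int,
         ColDateTimeA nvarchar(25),
         minColTargetPK int,
         concTargetCol nvarchar(max)
        )

    insert #minColTargetPK (ColIntA, ColIntB, ColDateTimeA, minColTargetPK)
    select ColIntA, ColIntB, convert(nvarchar(25), ColDateTimeA, 120), min(ColTargetPK)
    from TargetColTable
    group by ColIntA, ColIntB, convert(nvarchar(25), ColDateTimeA, 120)

    -- statistics and an index on the grouping keys help the later joins
    create clustered index IX_minColTargetPK
        on #minColTargetPK (ColIntA, ColIntB, ColDateTimeA)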

b. The string replace is slow, I reckon, and will still be slow here; I know XML is not the fastest thing in the world. Can you replace the string manipulation with SQL XML specific code?
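As a rough illustration of (b), assuming the #minColTargetPK temp table sketched above: the TYPE directive plus .value() hands back the concatenated string with entities such as &#x0D; already decoded and with no wrapper element, so the tag-stripping REPLACE calls go away (the '\par } ' tidy-up from the original would still be an ordinary REPLACE around the result).

    update m
    set concTargetCol =
        (select convert(nvarchar(max), isnull(v1.TargetColVarchar, '')) + ' \par \par \par'
         from TargetColTable v1
         where v1.ColIntA = m.ColIntA
           and v1.ColIntB = m.ColIntB
           and convert(nvarchar(25), v1.ColDateTimeA, 120) = m.ColDateTimeA
         order by v1.ColTargetPK
         for xml path(''), type
        ).value('.', 'nvarchar(max)')
    from #minColTargetPK m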

c. In the second update at the bottom, the where ColTargetPK not in (select minColTargetPK from @minColTargetPKTable) is likely to be slower than a precise join, and you should do both updates in one hit.
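For (c), one precise join can drive both updates in a single pass; for example (again assuming the #minColTargetPK table above, with the CASE expressions distinguishing the surviving row from the duplicates):

    update v
    set TargetColVarchar = case when v.ColTargetPK = m.minColTargetPK
                                then m.concTargetCol else v.TargetColVarchar end,
        ColTargetStatus  = case when v.ColTargetPK = m.minColTargetPK
                                then v.ColTargetStatus else 0 end
    from TargetColTable v
    inner join #minColTargetPK m
        on  m.ColIntA = v.ColIntA
        and m.ColIntB = v.ColIntB
        and m.ColDateTimeA = convert(nvarchar(25), v.ColDateTimeA, 120)

Whether folding both updates into one statement actually wins is plan-dependent, since the duplicate rows get TargetColVarchar rewritten to itself, so compare the actual execution plans either way.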

However, use the Actual Query Plan to work this out.

Preet Sangha
Thanks for your reply, Preet - if I follow this method how would you suggest I go back and flag the duplicate entries as inactive (the ColTargetStatus = 0 portion of the above script)?
WT_W
Nevermind, realised the answer was right in front of my face.
WT_W
Please paste your answer after you've done it - it will make good reference material
Preet Sangha
@WT_W: agreed with Preet: please edit your question and paste your refactored code to benefit others in the future! Interested in seeing it.
p.campbell
I have edited the question with the new script and comments from testing
WT_W
Updating the answer:
Preet Sangha
Thanks for your help Preet, I will implement these changes when I can and repost
WT_W
A: 

Try to do this to your first query:

    from tct v1
    join tct v2 on v2.pk = @pk
    where v1.a = v2.a and v1.b = v2.b and v1.dt = v2.dt

and this to your second query:

    from tct v1
    join tct v2 on v2.pk = @pk and v1.pk <> @pk
    where v1.a = v2.a and v1.b = v2.b and v1.dt = v2.dt
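Mapping the shorthand back to the question's names (tct presumably being TargetColTable, pk being ColTargetPK, a/b being ColIntA/ColIntB, and dt the converted ColDateTimeA), the second rewrite would slot into the status update roughly like this:

    update v1
    set ColTargetStatus = 0
    from TargetColTable v1
    join TargetColTable v2
        on v2.ColTargetPK = @minColTargetPK
        and v1.ColTargetPK <> @minColTargetPK
    where v1.ColIntA = v2.ColIntA
      and v1.ColIntB = v2.ColIntB
      and convert(nvarchar(25), v1.ColDateTimeA, 120) = convert(nvarchar(25), v2.ColDateTimeA, 120)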
Denis Valeev
Thanks for your comment, Denis - however, I think you may have misunderstood the purpose of that (admittedly inefficient) subquery. It does not get a minimum ColIntA value; it gets the ColIntA value for the row specified by the primary key minColTargetPK.
WT_W
Okay, I revised my answer.
Denis Valeev