Step 1: Loading data via "bulk insert" from .txt (delimited) file into Table 1 (no indexes etc.)

bulk insert Table_1
from '\\path\to_some_file.txt'
with (tablock, FORMATFILE = 'format_file_path.xml')

Via the format file I map the target column data types to avoid further conversions (from char to int, for example).

Step 2: Output the result (perhaps not all of the columns from Table 1) into another table, Table 2, keeping only the DISTINCT values from Table 1.

NB! Table_1 is about 20 million records (per load).

What we have now (simplified example):

select distinct convert(int, col1), convert(int, col2), col3, ... 
into Table_2
from Table_1

It takes about 3.5 minutes to process. Could you advise some best practices that may help reduce the processing time and put only UNIQUE records into Table_2?

Thanks in advance!

UPD 1: Sorry for the misunderstanding - I meant that the SELECT DISTINCT query takes 3.5 minutes. The bulk insert itself is fairly well optimized: it loads via 8 threads (8 separate .txt files bulk inserted into 1 table WITH (TABLOCK)) and imports 20 million records in about 1 minute.

UPD 2: I tested different approaches (I didn't test SSIS - that approach won't work in our application). The best result is the approach where the data is bulk inserted already in Table_2 format (the columns and data types already match), so we eliminate the data type converts and run just a "plain" DISTINCT:

select distinct * into Table_2 from Table_1

That gives 70 seconds of processing, so I consider it the best result I can get for now. I also tried a couple of other techniques (an additional ORDER BY, CTE window grouping, etc.) - they were all worse than the "plain" DISTINCT.

Thanks all for participation!

+1  A: 

Find the rows that are duplicates, then delete them, then copy into table2.

Find the duplicates like this:

SELECT col1,
       COUNT(col1) AS NumOccurrences
FROM table1
GROUP BY col1
HAVING COUNT(col1) > 1
NimChimpsky
I guess zmische wants to keep one of the duplicates - that seems difficult with your approach.
Aivar
Exactly, I need to keep one of the 2+ duplicate records.
zmische
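
Since one copy of each duplicate group has to be kept (per the comment above), a common set-based variant of the "find and delete" idea - not part of the original answer, and close to the "CTE win grouping" the asker later mentions trying - is to number the rows in each duplicate group and delete everything past the first row. The column names below are placeholders from the question's simplified example.

;WITH numbered AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY col1, col2, col3
                              ORDER BY (SELECT NULL)) AS rn
    FROM Table_1
)
DELETE FROM numbered
WHERE rn > 1;            -- keeps exactly one row per duplicate group

-- then copy the now-unique rows into Table_2, as the answer suggests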
+1 for username
Denis Valeev
A: 

I would suggest creating a cursor that goes through all the rows ordered by all the fields, and using variables to compare the current row with the previous row to detect whether that row has already been seen. You can either delete duplicate rows during this process, or create a procedure that returns a result set (sorry, I don't know the exact SQL Server terminology) containing only the unique rows (if you detect a duplicate, you skip that row, i.e. don't yield it).
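
For illustration only, here is a minimal sketch of that cursor idea (the comments below are right that it is much slower than a set-based DISTINCT). It assumes the three placeholder columns from the question and that they are NOT NULL.

DECLARE @col1 int, @col2 int, @col3 varchar(50);
DECLARE @prev1 int, @prev2 int, @prev3 varchar(50);

DECLARE dedup_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT CONVERT(int, col1), CONVERT(int, col2), col3
    FROM Table_1
    ORDER BY CONVERT(int, col1), CONVERT(int, col2), col3;  -- duplicates become adjacent

OPEN dedup_cur;
FETCH NEXT FROM dedup_cur INTO @col1, @col2, @col3;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- insert the row only if it differs from the previous one
    IF @prev1 IS NULL
       OR @prev1 <> @col1 OR @prev2 <> @col2 OR @prev3 <> @col3
        INSERT INTO Table_2 (col1, col2, col3)
        VALUES (@col1, @col2, @col3);

    SELECT @prev1 = @col1, @prev2 = @col2, @prev3 = @col3;
    FETCH NEXT FROM dedup_cur INTO @col1, @col2, @col3;
END

CLOSE dedup_cur;
DEALLOCATE dedup_cur;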

Aivar
-1 for suggesting a cursor. Set-based operations are one of the stronger points of an RDBMS. Cursors are avoidable, and must be avoided.
Raj More
hmm, what's wrong with cursors?
Aivar
True, the declarative style of data selection is nicer, but the asker was concerned about efficiency.
Aivar
@Aivar: cursor != efficient. ever.
Chris Lively
Alright, I actually don't have experience with cursors in SQL Server specifically; maybe they really are slow. It would be interesting if zmische gave some feedback about the efficiency of the different approaches.
Aivar
Cursors are always a "pain" when processing huge amounts of data. Even CROSS APPLY via a UDF is better. As for efficiency - I'm happy to try, but I need some advice on what I could TRY as an alternative.
zmische
"maybe they really are slow". this is true, no maybe. They are slow compared to a set based op. There are reasons to use them, but they should be avoided with large data sets.
Sam
OK, I accept now that cursors are slow :) But I noticed something sillier in my answer: surely there's no need to "return" the good rows from the procedure - one can insert them straight into the target table.
Aivar
Isn't this how SQL Server does DISTINCT for a large dataset anyway, i.e. by ordering it and skipping duplicates?
Lasse V. Karlsen
+2  A: 

You have to know whether it is your SELECT DISTINCT or your INSERT INTO that is causing the issue.

You will have to run the SELECT DISTINCT once with and once without the INSERT INTO, and measure the duration, to figure out which one you have to tune.
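
For example (a sketch only, using the question's placeholder columns; Table_2_test is a throwaway name, and note that running the SELECT DISTINCT without a destination returns all rows to the client, which adds its own overhead):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- 1) the read + de-duplication on its own
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
FROM Table_1;

-- 2) the same query combined with the table write
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
INTO Table_2_test
FROM Table_1;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;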

If it is your SELECT DISTINCT, you can try to fine tune that query to be more efficient.

If it is your INSERT INTO, then consider the following:

With the SELECT ... INTO you are running, a new table is created, and all of its pages have to be allocated as required.

Are you dropping the old table and creating a new one? If so, you should change that to just DELETE from the old table - DELETE, not TRUNCATE - because a truncate lets go of all the pages acquired by the table, and they have to be re-allocated.
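
A sketch of that suggestion with the question's placeholder columns - keep Table_2 around, empty it with DELETE, and reload it with INSERT ... SELECT instead of re-creating it:

DELETE FROM Table_2;     -- not TRUNCATE, per the point about page allocation above

INSERT INTO Table_2 WITH (TABLOCK) (col1, col2, col3)
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
FROM Table_1;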

You can try one or several of the following things to improve efficiency:

  1. Ask your customer for non-duplicate data.
  2. Index all the duplicate-criteria columns; scanning an index should be much faster than scanning the table (see the sketch after this list).
  3. Partition your staging table to get better performance.
  4. Create a view that selects the distinct values, and use BCP to fast-load the data.
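
A sketch of suggestion 2, assuming the three placeholder columns from the question are the ones that define a duplicate (building the index on 20 million rows has its own cost, so it pays off mainly if the staging table is reused):

CREATE NONCLUSTERED INDEX IX_Table_1_dedup
    ON Table_1 (col1, col2, col3);

-- the question's query can now be satisfied from a scan of the narrower index
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
INTO Table_2
FROM Table_1;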
Raj More
+1 - For asking which of the two is the problem.
James Black
I'm trying to understand - perhaps there is a way to "remove" duplicates at the Step 1 stage (while bulk inserting?). We have a data mart with staging tables - they are loaded every week/day/month with TONS of data that we need to import, process, and strip of duplicates, and we have the data in .txt files (bcp-ed out) from the customer.
zmische
@zmische If your data is in .txt files, then removing the duplicates in the files themselves is going to be an extremely hard option. Duplicate removal from a multi-million-row set is a set-based operation best suited to a good RDBMS.
Raj More
Accepted as the most voted answer.
zmische
A: 

That's what SSIS is all about, man. :)

[Update]

Try this in SSIS to see how fast you can chew through that data:

Flat File Source --> Sort component with duplicate removal --> Flat File Destination (or OLE DB destination with TABLOCK and so on)

Denis Valeev
SSIS is based on native T-SQL queries, so I'm just trying to do it at a "low" level. SSIS is just an IDE for visualization. It's good, but the question is still there! ))) If you prove that SSIS does it quicker, I'll be happy to use your approach.
zmische
@zmische Well, the problem with the SQL approach is that on huge volumes of data it generates an equally huge transaction log, and that slows things down. SSIS, on the other hand, can operate piecemeal; you have control over parallel flows, and it's specifically designed to accept files of different formats.
Denis Valeev
@zmische Another approach to removing duplicates would be to create a staging table with no clustered index and a unique index with the option `IGNORE_DUP_KEY = ON` on the table.
Denis Valeev
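
A sketch of that comment, with placeholder table, column, and file names - a heap staging table whose unique index silently discards duplicates during the load itself:

CREATE TABLE dbo.Staging_1
(
    col1 int NOT NULL,
    col2 int NOT NULL,
    col3 varchar(50) NOT NULL
);    -- heap: no clustered index

CREATE UNIQUE NONCLUSTERED INDEX UX_Staging_1_dedup
    ON dbo.Staging_1 (col1, col2, col3)
    WITH (IGNORE_DUP_KEY = ON);   -- duplicate rows are dropped with a warning, not an error

BULK INSERT dbo.Staging_1
FROM '\\path\to_some_file.txt'
WITH (TABLOCK, FORMATFILE = 'format_file_path.xml');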
I have a simple-logged DB (SIMPLE recovery model) and no indexes - just a small overhead in the case of bulk inserts. There is a transaction log, but I'm not sure SSIS will remove the tran log generation - that's a database-level setting, not an SSIS "decision". Do you mean that SSIS will do the DISTINCT in such a situation in parallel, better than the T-SQL solution? Could you please explain this - it's something new for me. With bulk insert and a format file I'm able to process almost anything. For Excel (up to 2010) there is OPENROWSET via OLE DB, which also supports BULK. So SSIS is OK, but a T-SQL solution is always an alternative.
zmische
@zmische Try to bulk insert that data as per my second suggestion.
Denis Valeev
A: 

Your CONVERT statements may be causing the delay.

Sam
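
A sketch of that point, which matches what UPD 2 above found to be fastest: give the staging table the final data types (via the table definition and the format file), so the load performs the conversion and the DISTINCT copies plain columns. Names below are placeholders.

CREATE TABLE dbo.Table_1
(
    col1 int NOT NULL,
    col2 int NOT NULL,
    col3 varchar(50) NOT NULL
);

BULK INSERT dbo.Table_1
FROM '\\path\to_some_file.txt'
WITH (TABLOCK, FORMATFILE = 'format_file_path.xml');

-- no CONVERTs needed any more
SELECT DISTINCT * INTO Table_2 FROM Table_1;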