I am doing a bulk insert/update of millions of records using the MERGE statement in Oracle. One alternative would be to use FORALL with BULK COLLECT and commit in batches, but keeping that loop going would again slow the operation down. Committing only after the entire MERGE completes would also be a performance hit. How can I set a commit frequency within the MERGE statement, say a batch commit for every 10,000 rows inserted/updated?
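
For reference, here is a minimal sketch of the FORALL/BULK COLLECT batching approach mentioned above. The table and column names (src_table, tgt_table, id, val) are placeholders, and the LIMIT clause controls the batch size; separate scalar collections are used so the collection elements can be referenced inside the FORALL statement.

    DECLARE
      TYPE t_ids  IS TABLE OF tgt_table.id%TYPE;
      TYPE t_vals IS TABLE OF tgt_table.val%TYPE;
      l_ids  t_ids;
      l_vals t_vals;
      CURSOR c_src IS SELECT id, val FROM src_table;
    BEGIN
      OPEN c_src;
      LOOP
        -- fetch the next batch of (up to) 10,000 rows
        FETCH c_src BULK COLLECT INTO l_ids, l_vals LIMIT 10000;
        EXIT WHEN l_ids.COUNT = 0;

        -- merge the whole batch in one bulk-bound statement
        FORALL i IN 1 .. l_ids.COUNT
          MERGE INTO tgt_table t
          USING (SELECT l_ids(i) AS id, l_vals(i) AS val FROM dual) s
            ON (t.id = s.id)
          WHEN MATCHED THEN
            UPDATE SET t.val = s.val
          WHEN NOT MATCHED THEN
            INSERT (id, val) VALUES (s.id, s.val);

        COMMIT;  -- batch commit after every fetch
      END LOOP;
      CLOSE c_src;
    END;
    /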

+1  A: 

A MERGE statement is one single statement, so it can't, by definition, span transactions. (The A in ACID stands for Atomicity.)

If you don't have indexes on the merge criteria (the ON clause) for both the source and target tables, turning a single MERGE statement into N MERGE statements will run significantly slower: where you were doing a single scan of the source and/or target, you'll now be doing N of them. If you thought one full table scan was slow, try doing 100 of them for every million rows!

To be honest, chopping your single MERGE into more statements is going to be slower, because more work is being done: you're running more statements, so there's more parsing overhead, and you're performing more commits, so there's more serialization as your program waits for each commit to complete. The only good reasons to partition a single statement into multiples are resumability concerns (Oh no! The job failed three hours in, and we only had a four-hour window!) or transaction management overhead (UNDO and ORA-01555 errors) becoming an issue.

If you have CPU availability for this process, you may wish to consider using a PARALLEL hint against your MERGE statement to see if it runs faster. Similarly, if your MERGE statement only affects a fraction of the rows in the target table, perhaps an index on the columns being merged against would improve performance.
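
A rough sketch of both suggestions follows; the table, column, and index names are placeholders, and the degree of parallelism (4) is illustrative. Note that for the DML portion of the MERGE to actually run in parallel, parallel DML must be enabled for the session.

    -- Index on the target's ON-clause column, if the MERGE only touches
    -- a fraction of the target rows
    CREATE INDEX tgt_table_id_ix ON tgt_table (id);

    -- Required for the insert/update part of the MERGE to run in parallel
    ALTER SESSION ENABLE PARALLEL DML;

    MERGE /*+ PARALLEL(t 4) */ INTO tgt_table t
    USING src_table s
       ON (t.id = s.id)
     WHEN MATCHED THEN
       UPDATE SET t.val = s.val
     WHEN NOT MATCHED THEN
       INSERT (id, val) VALUES (s.id, s.val);

    COMMIT;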

Adam Musch
Adam, thanks a lot for your valuable answer!