views: 150

answers: 4

Hi

I have an Oracle PL/SQL script. It processes about 51 million records and writes the results to 5 different tables.

The problem is that I left the process running last night, and apparently there was an overflow in the UNDO logs.

In particular, we are not interested in rolling back this script; if it fails, we can just run it again.

Is there any way to optimize the usage of the undo/redo logs? Can I avoid writing them, or at least minimize those writes?

As far as I understand, setting the NOLOGGING attribute on the output tables would help, in addition to using direct-path (APPEND) inserts (as said here).

Any tips will be appreciated. Thanks in advance.

+2  A: 

You should not process the 51 million records in a single batch. Try to split the work into smaller chunks of a few thousand rows each. If you perform a COMMIT after each chunk (which you would do anyway, since you say you're not going to roll back), the undo/redo usage at any point in time will only be for the uncommitted part, and you will avoid an overflow.
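
A rough sketch of that chunked approach (the table names src_data and tgt_results and the 10,000-row chunk size are placeholders, not taken from the original script):

    DECLARE
      CURSOR c_src IS SELECT * FROM src_data;
      TYPE t_rows IS TABLE OF src_data%ROWTYPE;
      l_rows   t_rows;
      c_limit  CONSTANT PLS_INTEGER := 10000;  -- rows per chunk
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT c_limit;
        EXIT WHEN l_rows.COUNT = 0;

        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tgt_results VALUES l_rows(i);

        COMMIT;  -- frees the undo generated by this chunk for reuse
      END LOOP;
      CLOSE c_src;
    END;
    /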

Kosi2801
That will increase redo log usage
David Aldridge
But aren't the redo logs only used while the transaction is uncommitted? When the commit happens, the redo log is marked as free again and can be recycled for future transactions. So yes, it increases log 'usage', but it is less likely to be 'used up'.
Kosi2801
+1  A: 

It is really a matter of reducing the amount of work that you are doing.

Table UNDO on a direct-path insert is always small, as the system just has to record that certain ranges of blocks should be removed from the segment. Indexes will still require substantial undo, though. NOLOGGING on a direct-path insert minimises table REDO.
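
For illustration, a direct-path insert against a NOLOGGING table might look like the sketch below (table and column names are assumptions, not from the question):

    -- NOLOGGING on the target plus a direct-path (APPEND) insert minimises table redo;
    -- any indexes on the target will still generate undo and redo.
    ALTER TABLE tgt_results NOLOGGING;

    INSERT /*+ APPEND */ INTO tgt_results (id, amount)
    SELECT id, amount
    FROM   src_data;

    COMMIT;  -- the session must commit before it can query the table again after a direct-path insert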

David Aldridge
A: 

In addition, disabling constraints and indexes may also speed up the inserts. You can rebuild the indexes with NOLOGGING afterwards.
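
A possible sequence, with made-up constraint and index names:

    ALTER TABLE tgt_results DISABLE CONSTRAINT tgt_results_fk;  -- assumed constraint name
    ALTER INDEX tgt_results_ix UNUSABLE;   -- skip index maintenance during the load
                                           -- (relies on SKIP_UNUSABLE_INDEXES; unique indexes cannot be skipped)

    -- ... run the load here ...

    ALTER INDEX tgt_results_ix REBUILD NOLOGGING;  -- rebuild with minimal redo
    ALTER TABLE tgt_results ENABLE CONSTRAINT tgt_results_fk;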

Dinesh Bhat
A: 

"Particularly, we are not interested in Rollbacking this script, if it fails, we can run it again."

One question is: would you need to, and are you prepared to, go back to a previous backup in order to run the script again? That is, if the process failed after 10 million rows, would simply re-running the script result in 61 million rows being inserted, would the first 10 million be skipped/ignored, or would you end up with 10 million rows updated and 41 million inserted?

Also, if you do a NOLOGGING insert, it will probably require a fresh backup immediately after the job. You would have problems doing a point-in-time recovery to a time during the script run, so you also need to consider what other activity is happening on the database while the script is running.

Depending on how you've written the PL/SQL script, you may find that using large SQL statements, rather than row-by-row processing, can reduce undo (e.g. by minimising revisits to already-processed blocks).
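
As a rough illustration (with made-up table and column names), compare a row-by-row loop with an equivalent single MERGE statement:

    -- Row-by-row: each UPDATE revisits target blocks individually and adds per-call overhead.
    BEGIN
      FOR r IN (SELECT id, amount FROM src_data) LOOP
        UPDATE tgt_results SET total = total + r.amount WHERE id = r.id;
      END LOOP;
    END;
    /

    -- Set-based: one large statement touches each target block as few times as possible.
    MERGE INTO tgt_results t
    USING (SELECT id, SUM(amount) AS amount
           FROM   src_data
           GROUP  BY id) s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.total = t.total + s.amount;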

Unless you really understand the impact of reducing undo, or of committing to allow reuse of the undo, my first recommendation would simply be to increase the size of the undo tablespace. Of course, the drawback is that if you have generated bucketloads of undo, a failure will take a LONG time to roll back.
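
For example (assuming the undo tablespace is called UNDOTBS1 and using placeholder datafile paths), you could resize an existing undo datafile or add another one:

    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 32G;

    ALTER TABLESPACE undotbs1
      ADD DATAFILE '/u01/oradata/ORCL/undotbs02.dbf' SIZE 8G
      AUTOEXTEND ON NEXT 1G MAXSIZE 32G;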

Gary