Right now the process that we're using for inserting sets of records is something like this (note that "set of records" means something like a person's record along with their addresses, phone numbers, or any other joined tables); there's a rough sketch of it after the list:

  1. Start a transaction.
  2. Insert a set of records that are related.
  3. Commit if everything was successful, roll back otherwise.
  4. Go back to step 1 for the next set of records.
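In PL/SQL terms, that looks roughly like the sketch below. The staging tables, columns, and driving cursor are all assumptions for illustration, not our actual code:

    DECLARE
      CURSOR c_people IS
        SELECT person_id, name FROM staging_people;   -- hypothetical driving query
    BEGIN
      FOR rec IN c_people LOOP
        BEGIN
          -- step 2: insert one related set of records
          INSERT INTO people (person_id, name) VALUES (rec.person_id, rec.name);
          INSERT INTO addresses (person_id, street)
            SELECT person_id, street FROM staging_addresses
            WHERE  person_id = rec.person_id;
          COMMIT;                      -- step 3: commit this set...
        EXCEPTION
          WHEN OTHERS THEN
            ROLLBACK;                  -- ...or roll back just this set and carry on
        END;
      END LOOP;                        -- step 4: next set
    END;
    /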

Should we be doing something more like this (again, a sketch follows the list)?

  1. Start a transaction at the beginning of the script
  2. Start a save point for each set of records.
  3. Insert a set of related records.
  4. Roll back to the savepoint if there is an error, go on if everything is successful.
  5. Commit the transaction at the end of the script.
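Under the same assumptions as above, the second approach would look something like this (the transaction starts implicitly with the first insert and is committed once at the end):

    BEGIN
      FOR rec IN (SELECT person_id, name FROM staging_people) LOOP
        SAVEPOINT before_set;                    -- step 2: savepoint for this set of records
        BEGIN
          INSERT INTO people (person_id, name) VALUES (rec.person_id, rec.name);
          INSERT INTO addresses (person_id, street)
            SELECT person_id, street FROM staging_addresses
            WHERE  person_id = rec.person_id;
        EXCEPTION
          WHEN OTHERS THEN
            ROLLBACK TO SAVEPOINT before_set;    -- step 4: undo only this set, keep going
        END;
      END LOOP;
      COMMIT;                                    -- step 5: one commit at the end of the script
    END;
    /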

After having some issues with ORA-01555 and reading a few Ask Tom articles (like this one), I'm thinking about trying out the second process. Of course, as Tom points out, starting a new transaction is something that should be defined by business needs. Is the second process worth trying out, or is it a bad idea?

+1  A: 

Some thoughts...

  1. Seems to me one of the points of the Ask Tom link was to size your rollback/undo appropriately to avoid the 1555s. Is there some reason this is not possible? As he points out, it's far cheaper to buy disk than it is to write and maintain code to work around rollback limitations (although I had to do a double-take after reading the $250 price tag for a 36 GB drive - that thread started in 2002! A good illustration of Moore's Law!). A quick undo-sizing check is sketched after this list.
  2. This link (Burleson) shows one possible issue with savepoints.
  3. Is your transaction actually steps 2, 3, and 5 in your second scenario? If so, that's what I'd do - commit each transaction. It sounds a bit to me like scenario 1 is a collection of transactions rolled into one?
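If it helps, a quick way to sanity-check undo sizing is V$UNDOSTAT, which records undo generated, the longest-running query, and any ORA-01555s per interval. A sketch (run from SQL*Plus; exact columns can vary a little by version):

    -- undo blocks generated, longest query (seconds) and ORA-01555 count per slice
    SELECT begin_time, end_time, undoblks, maxquerylen, ssolderrcnt
    FROM   v$undostat
    ORDER  BY begin_time;

    -- undo_retention (seconds) should comfortably exceed maxquerylen
    SHOW PARAMETER undo_retention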
DCookie
I think I might not have been clear enough in the first scenario. Basically, right now we're committing for each set of records (by "set of records" I mean something like a person, their addresses, and phone numbers). What I was thinking was making this more like one big transaction that uses savepoints.
Jason Baker
I don't see why you're getting 1555's then. What are you trying to gain from the savepoints? Seems to me you're more likely to get the 1555 errors in the second scenario?
DCookie
Committing frequently will result in records getting scattered across several undo segments. That makes it more likely that the undo will get overwritten, which increases the likelihood of an ORA-01555. In fact, more frequent commits take up *more* undo space: http://oradbdev.blogspot.com/2007/04/commit-frequency-and-undo.html
Jason Baker
True enough. That still means your undo space is inadequate for the task at hand. As @APC asks, are you fetching across the commits? Is this a massive insert process run during peak hours that might be better suited for off-hours? Can the first scenario be broken down into smaller chunks?
DCookie
@DCookie: Smaller chunks would probably be completely the wrong way to go. It is the committing that ALLOWS the undo to be overwritten, because you are saying "I don't need it anymore".
Gary
As @APC states, one needs to define "transaction" precisely here. What I was getting at with the smaller chunks was if he's doing a large loop, selecting and updating within the loop, perhaps the select loop could be broken down into several smaller loops.
DCookie
+5  A: 

A transaction should be a meaningful Unit Of Work. But what constitutes a Unit Of Work depends upon context. In an OLTP system a Unit Of Work would be a single Person, along with their address information, etc. But it sounds as if you are implementing some form of batch processing, which is loading lots of Persons.

If you are having problems with ORA-1555 it is almost certainly because you have a long-running query supplying data which is being updated by other transactions. Committing inside your loop contributes to the cyclical use of UNDO segments, and so will tend to increase the likelihood that the segments you are relying on for read consistency will have been reused. So not doing that is probably a good idea.

Whether using SAVEPOINTs is the solution is a different matter. I'm not sure what advantage that would give you in your situation. As you are working with Oracle 10g, perhaps you should consider using DML error logging instead.
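For what it's worth, DML error logging (10gR2 onwards) lets a single set-based insert carry on past bad rows and record them in an error table instead of failing the whole statement. A sketch, with illustrative table names:

    -- one-off setup: creates ERR$_PEOPLE to capture rejected rows
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('PEOPLE')

    INSERT INTO people (person_id, name)
    SELECT person_id, name
    FROM   staging_people
    LOG ERRORS INTO err$_people ('nightly load')
    REJECT LIMIT UNLIMITED;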

Alternatively you might wish to rewrite the driving query so that it works with smaller chunks of data. Without knowing more about the specifics of your process I can't give specific advice. But in general, instead of opening one cursor for 10000 records it might be better to open it twenty times for 500 rows a pop. The other thing to consider is whether the insertion process can be made more efficient, say by using bulk collection and FORALL.
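A rough shape of that, using an assumed staging table and a LIMIT clause to control the chunk size (a sketch rather than your actual process):

    DECLARE
      CURSOR c_people IS
        SELECT person_id, name FROM staging_people;
      TYPE t_people IS TABLE OF c_people%ROWTYPE;
      l_people t_people;
    BEGIN
      OPEN c_people;
      LOOP
        FETCH c_people BULK COLLECT INTO l_people LIMIT 500;  -- work through the data 500 rows at a time
        EXIT WHEN l_people.COUNT = 0;
        FORALL i IN 1 .. l_people.COUNT                       -- single bulk bind for the whole chunk
          INSERT INTO people (person_id, name)
          VALUES (l_people(i).person_id, l_people(i).name);
      END LOOP;
      CLOSE c_people;
      COMMIT;
    END;
    /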

APC
+1 for the point about committing within a loop. Usually this happens when you're updating the records you're selecting in the loop. It still boils down to undo retention and sizing. See another Burleson link (who references others): http://www.dba-oracle.com/t_ora_01555_snapshot_old.htm
DCookie
The advantage of savepoints would be that, by not committing, Oracle is forced to retain the information in UNDO for longer. I suspect your outline is missing a step 0 of "Open a cursor". In the original method, the undo can be lost each time it hits step 3, so the cursor from step 0 can raise a 1555. In the new method, the undo has to be kept all the way to the end of the script.
Gary