I am using an Oracle database. I have a large transaction to make (700 tables, a million rows, data size around 100G). What parameters should be fine-tuned? The Oracle server has 250G of disk space and 32G of RAM. (Is this a question for superuser/serverfault? Please move.)

+1  A: 

You should have space for the rollback segment, probably more than 100GB, BUT your approach is not the best. You should avoid doing everything in ONE transaction. Try to split the process into smaller consistent steps. If step N fails, it must be possible to retry step N without undoing steps 1 to N-1.
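
As a minimal sketch of that batching approach (the table names src_table and tgt_table, the tracking table load_progress, and the batch size are all made up for illustration), each batch commits on its own and records how far it got, so a failed step can simply be rerun from where it stopped:

    -- Hypothetical progress table: remembers the last id copied per source table.
    CREATE TABLE load_progress (
      table_name VARCHAR2(30) PRIMARY KEY,
      last_id    NUMBER DEFAULT 0
    );

    INSERT INTO load_progress (table_name) VALUES ('SRC_TABLE');
    COMMIT;

    DECLARE
      v_last_id load_progress.last_id%TYPE;
      v_max_id  NUMBER;
    BEGIN
      SELECT last_id INTO v_last_id FROM load_progress WHERE table_name = 'SRC_TABLE';
      SELECT MAX(id) INTO v_max_id FROM src_table;

      WHILE v_last_id < v_max_id LOOP
        -- Copy one batch of rows.
        INSERT INTO tgt_table
          SELECT * FROM src_table
           WHERE id >  v_last_id
             AND id <= v_last_id + 10000;

        -- Record progress and commit: each batch is a small, retryable transaction.
        v_last_id := LEAST(v_last_id + 10000, v_max_id);
        UPDATE load_progress SET last_id = v_last_id WHERE table_name = 'SRC_TABLE';
        COMMIT;
      END LOOP;
    END;
    /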

Florin Ghita
+2  A: 

I don't think 250G will be enough. You will have 100G of data in your tablespace (.DBF files), plus 100G of uncommitted transaction data in your ORLs (online redo logs). And the actual size on disk will be bigger (you should tune PCTUSED and PCTFREE so that Oracle tries to fill each block rather than leaving space for rows to expand); even so, at a rough guess you would need 130-150G of DBFs to hold that much data.
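
For example, a table that is only going to be bulk-loaded and not updated afterwards might be created with block space settings like these (the table and columns are hypothetical, and note that PCTUSED is ignored in ASSM tablespaces):

    -- Hypothetical table packed as full as possible for an insert-only load.
    -- PCTFREE 0 leaves no room for rows to grow, so avoid it if rows get updated later.
    CREATE TABLE big_load (
      id   NUMBER,
      data VARCHAR2(4000)
    )
    PCTFREE 0
    PCTUSED 99;  -- ignored in ASSM (automatic segment space management) tablespaces

    -- Existing tables can be altered too; this only affects blocks formatted afterwards.
    ALTER TABLE big_load PCTFREE 0;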

Oracle is perfectly capable of doing transactions this big, and if you really do need to do a single atomic 100G update, then don't be afraid to. You will just need to make sure it is physically possible on your hardware. Quite probably your best bet, if your original data file format is suitable, would be a SQL*Loader Direct Path Load. This writes the blocks directly to the DBFs. It's very fast, but not very flexible.
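
As a rough sketch of what that looks like (the data file, table, columns, and credentials here are hypothetical), a minimal control file plus a direct-path invocation would be something like:

    -- big_load.ctl: minimal SQL*Loader control file for a comma-separated data file.
    LOAD DATA
    INFILE 'big_load.dat'
    APPEND
    INTO TABLE big_load
    FIELDS TERMINATED BY ','
    (id, data)

    -- Run it with direct path, which formats blocks and writes them straight to the DBFs:
    --   sqlldr userid=scott/tiger control=big_load.ctl direct=true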

The RAM doesn't really matter in this scenario, except for performance.

Gaius
You shouldn't need the full 100G in the *online* redo logs. They'll fill up to their size and get archived (or not, if you are in noarchive mode). The undo tablespace needs to cope with the size of the uncommitted transaction, but the redo doesn't. Upvoted anyway, because direct path should be the first choice for an operation of this size.
Gary
Oops, yes, of course you're right :-)
Gaius