Using Oracle 10g, accessed via Perl DBI, I have a table with a few tens of millions of rows that is being updated a few times per second while being read much more frequently by another process.
Soon the update frequency will increase by an order of magnitude (maybe two). Someone suggested that committing every N updates instead of after every update will help performance.
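For context, here is roughly what I understand the suggestion to mean in Perl DBI terms. This is just a sketch, not my real code: the DSN, credentials, table, columns and the `@updates` list are all placeholders.

    use strict;
    use warnings;
    use DBI;

    # Turn off per-statement commits so we control when commits happen.
    my $dbh = DBI->connect(
        'dbi:Oracle:mydb', 'user', 'password',
        { AutoCommit => 0, RaiseError => 1 },
    ) or die $DBI::errstr;

    my $sth = $dbh->prepare('UPDATE my_table SET value = ? WHERE id = ?');

    my $N     = 100;   # commit batch size -- the value I'm asking about
    my $count = 0;
    my @updates;       # placeholder: hashrefs with id/value to apply

    for my $row (@updates) {
        $sth->execute( $row->{value}, $row->{id} );
        $dbh->commit if ++$count % $N == 0;   # commit once every N updates
    }
    $dbh->commit;      # flush any remainder
    $dbh->disconnect;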
I have a few questions:
- Will that be faster, slower, or does it depend? (I'm planning to benchmark both ways as soon as I can get a decent simulation of the new load, roughly along the lines of the sketch after this list.)
- Why would it help or hinder performance?
- If "it depends...", on what?
- If it helps, what's the best value of N?
- Why can't my local DBA give me a helpful, straight answer when I need one? (Actually, I know the answer to that one.) :-)
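This is roughly the kind of comparison I intend to run. Again, only a sketch under made-up assumptions (DSN, table, fake workload size); the real test will use a simulation of the actual load.

    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Time the same batch of updates with different commit intervals.
    sub run_updates {
        my ($dbh, $rows, $commit_every) = @_;
        my $sth = $dbh->prepare('UPDATE my_table SET value = ? WHERE id = ?');
        my $t0  = [gettimeofday];
        my $i   = 0;
        for my $row (@$rows) {
            $sth->execute( $row->{value}, $row->{id} );
            $dbh->commit if ++$i % $commit_every == 0;
        }
        $dbh->commit;
        return tv_interval($t0);
    }

    my $dbh = DBI->connect( 'dbi:Oracle:mydb', 'user', 'password',
        { AutoCommit => 0, RaiseError => 1 } );

    # Fake workload; the real benchmark would replay something like production traffic.
    my @rows = map { { id => $_, value => rand() } } 1 .. 10_000;

    for my $n (1, 10, 100, 1000) {
        my $elapsed = run_updates( $dbh, \@rows, $n );
        printf "commit every %4d updates: %.2f s\n", $n, $elapsed;
    }
    $dbh->disconnect;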
EDIT:
@codeslave: Thanks. By the way, losing uncommitted changes is not a problem; I don't delete the original data used for updating until I'm sure everything is fine. And yes, the cleaning lady did unplug the server, TWICE :-)

Some googling showed it might help because of issues related to rollback segments, but I still don't have a rule of thumb for N: every few tens? Hundreds? Thousands?

@diciu: Great info, I'll definitely look into that.