595 views · 4 answers
What are some useful Oracle optimizations one can use for an Application that mostly writes (updates) to an Oracle database?

The general usage pattern here is not web serving or logging, as in most cases, but persisting the complex state of a system: reads happen only when the system starts up, and after that it is updates and inserts all the way. The write-to-read ratio is currently over 9 to 1. Given that, what database tuning could improve performance?

+4  A: 

Monitoring system health using Statspack (9i) or AWR (10g+) is the best way to identify bottlenecks.
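On 10g+, you can bracket the workload with a pair of manual snapshots and then generate an AWR report covering the interval. A minimal sketch using the standard `DBMS_WORKLOAD_REPOSITORY` package and the report script shipped with the database:

```sql
-- Take a snapshot immediately before the write-heavy workload
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- ... run the workload ...

-- Take a second snapshot afterwards
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Then, from SQL*Plus, generate the report between the two snapshots:
-- @?/rdbms/admin/awrrpt.sql
```

The top wait events in the resulting report (redo-related waits in particular, for a write-heavy system) tell you where to focus.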

In particular:

  • Look out for redo waits. The redo log is critical to sustaining a high write rate.
  • Use bind variables.
  • Use bulk operations wherever possible.
  • Watch for index contention where multiple processes insert into a single table that has an index on a sequence-derived column.
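Bind variables and bulk operations combine naturally in PL/SQL's `FORALL`, which binds an entire collection to one statement instead of issuing a round trip per row. A minimal sketch, assuming a hypothetical `system_state(id, val)` table:

```sql
DECLARE
  -- Collection types matching the (assumed) target table's columns
  TYPE t_ids  IS TABLE OF system_state.id%TYPE;
  TYPE t_vals IS TABLE OF system_state.val%TYPE;
  l_ids  t_ids  := t_ids(1, 2, 3);
  l_vals t_vals := t_vals('a', 'b', 'c');
BEGIN
  -- One bulk-bound UPDATE instead of three single-row statements;
  -- the values are bound, so the statement is parsed only once.
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE system_state
       SET val = l_vals(i)
     WHERE id  = l_ids(i);
  COMMIT;
END;
/
```

The same pattern works for `INSERT`; application-side drivers expose the equivalent as array/batch binding.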
David Aldridge
+1  A: 

I cannot recommend the Oracle Enterprise Manager Console (built into Oracle) enough. It will show you exactly what you are doing wrong and how to fix it!

You may also want to consider getting rid of any extra indexes you have. Dropping them may add a slight overhead at startup, when the reads happen, but inserting into a heavily indexed table can slow writes down considerably.
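Before dropping an index, Oracle's built-in usage monitoring can confirm it is really unused. A sketch with a placeholder index name (query `V$OBJECT_USAGE` as the index owner):

```sql
-- Flag the index for usage monitoring, then run a representative workload
ALTER INDEX ix_state_lookup MONITORING USAGE;

-- After the monitoring period, check whether it was ever used
SELECT index_name, used, monitoring
  FROM v$object_usage
 WHERE index_name = 'IX_STATE_LOOKUP';

-- If USED = 'NO' over a representative period:
-- DROP INDEX ix_state_lookup;
```

Make sure the monitoring window covers periodic jobs (month-end reports, the startup read phase) before deciding an index is dead weight.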

Nick Stinemates
+1  A: 

Along with David's answer:

  • Monitor row migration and row chaining activity, and change table storage parameters if necessary.
  • Check your redo log file system: disable FS caching (i.e. use direct I/O), disable last-access-time updates, and change the block size to 512 B. Or, even better, migrate to ASM.
  • Read about index-organized tables and see if you can apply them anywhere.
  • Verify that asynchronous I/O is used.
  • For large SGA sizes, enable large pages and LOCK_SGA (platform specific).
  • Experiment with different DBWR settings (e.g. *fast_start_mttr_target*, *dbwr_processes*).
  • At the hardware level, make sure you have a decent RAID-10 controller with write caching enabled, and get lots of 15K RPM hard drives.
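The first and third points above can be sketched in SQL. `CHAIN_CNT` in the dictionary is populated by `ANALYZE` (not `DBMS_STATS`), and an index-organized table stores rows directly in the primary-key B-tree. Table and column names here are illustrative:

```sql
-- Populate CHAIN_CNT for a suspect table, then look for chained/migrated rows
ANALYZE TABLE system_state COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
  FROM user_tables
 WHERE chain_cnt > 0;

-- An index-organized table keeps the data in the primary-key index,
-- avoiding a separate heap segment; useful for lookup-style tables:
CREATE TABLE state_kv (
  k VARCHAR2(64) PRIMARY KEY,
  v VARCHAR2(4000)
) ORGANIZATION INDEX;
```

If chaining is significant, increasing `PCTFREE` (or moving the table to a larger block size) leaves room for rows to grow in place.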

Last but not least: define repeatable, realistic performance test cases before you make any modifications. There is a lot of trial and error in this kind of tuning, so change only one thing per test run.

Andrew from NZSG
The last sentence is very good advice all on its own.
Robert Gould
A: 

Depending on the characteristics of your application and your data, consider a bulk data load using an Oracle external table. Have the application write the data to a text file, then INSERT INTO the target table with a SELECT from the external table. It is very fast.
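A minimal sketch of the pattern, with hypothetical directory, file, and table names (`load_dir`, `state_dump.txt`, `staging_ext`, `target_table`):

```sql
-- A DIRECTORY object pointing at where the application drops its files
CREATE DIRECTORY load_dir AS '/data/incoming';

-- External table mapped over the text file; no data is loaded yet
CREATE TABLE staging_ext (
  id   NUMBER,
  data VARCHAR2(200)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('state_dump.txt')
);

-- Direct-path (APPEND hint) load into the real table
INSERT /*+ APPEND */ INTO target_table
SELECT id, data FROM staging_ext;
COMMIT;
```

The direct-path insert writes above the high-water mark and bypasses much of the conventional-path overhead, which is where most of the speed comes from.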

There are some constraints, and it may not fit your circumstances, but when you can use it the performance gain is massive.

I used this to load near-real-time text data files at a rate of 40,000 files per day, up to about 2 MB per file, into an 8 TB (yes, terabytes) Oracle 10g database instance.

Rob Williams