I have to simultaneously load data into a table and run queries on it. Because of the nature of the data, I can trade integrity for performance. How can I minimize the overhead of transactions?

Unfortunately, alternatives like MySQL cannot be used (for non-technical reasons).

A: 

How are you loading the data?

SQLLDR or INSERT statements?

cagcowboy
SQLLDR, but INSERT may be used too (in a different scenario)
Dmitry Khalatov
+1  A: 

You want the READ UNCOMMITTED transaction isolation level. I don't recommend it, but that's what you asked for :)

This will allow you to breach transaction isolation and read uncommitted inserted data.

Please read this Ask Tom article: http://www.oracle.com/technology/oramag/oracle/05-nov/o65asktom.html.

UPDATE: I was actually mistaken; Oracle doesn't really support the READ UNCOMMITTED isolation level, they just mention it :).
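
For reference, a minimal sketch of the isolation levels Oracle will actually accept; READ UNCOMMITTED is simply not among them:

    -- SET TRANSACTION must be the first statement of its transaction.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- Oracle's default level
    -- ... queries / DML ...
    COMMIT;

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- the only other isolation level Oracle implements
    -- ... queries / DML ...
    COMMIT;

    -- There is no READ UNCOMMITTED: a reader never sees uncommitted data in Oracle.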

Bogdan
As is mentioned in this article, Oracle does not technically support READ UNCOMMITTED, only the spirit of it.
Leigh Riffel
I stand corrected.
Bogdan
+1  A: 

Perhaps I'm missing something, but since in Oracle readers don't block writers and writers don't block readers, what exactly is the problem you are trying to solve?

From the perspective of the sessions that are reading the data, sessions doing inserts aren't really adding any overhead (updates might add a bit, since a reader would have to look at data in the UNDO tablespace to reconstruct a read-consistent view of the data). From the perspective of the sessions that are inserting the data, sessions doing reads aren't really adding any overhead either.

Of course, your system as a whole might have a bottleneck that causes the various sessions to contend for resources (e.g. if your inserts are using 100% of the available I/O bandwidth, that will slow down queries that have to do physical I/O), but that isn't directly related to the type of operations the different sessions are doing: you can flood an I/O subsystem with a bunch of reporting users just as easily as with a bunch of insert sessions.
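
To make that concrete, here is a minimal two-session sketch (the table name load_target is hypothetical) showing that a concurrent load never blocks a reader in Oracle:

    -- Session 1: the loader inserts rows but has not committed yet.
    INSERT INTO load_target (id, payload) VALUES (1, 'first row');

    -- Session 2, at the same time: no lock wait occurs. The query simply sees
    -- the table as of its own start time; any committed changes made after that
    -- point are rolled back from UNDO to build a read-consistent view.
    SELECT COUNT(*) FROM load_target;   -- does not include session 1's uncommitted row

    -- Session 1: makes the new rows visible to subsequent queries.
    COMMIT;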

Justin Cave
I agree -- this sounds like a hypothetical problem. The software is designed to do this.
David Aldridge
Your explanation includes the answer: when nobody cares about integrity, why spend resources (memory, I/O, CPU) on UNDO lookups? I'd prefer to use them to improve query performance.
Dmitry Khalatov
When you say "load data into a table", the natural assumption is that you are talking about inserts, which wouldn't cause readers to do any additional work to apply UNDO. That would only come into play if your load process is updating rows that the readers are reading -- is that the case?
Justin Cave
Yes, that is the case.
Dmitry Khalatov
A: 

Other than the general optimization practices that apply to all databases, such as eliminating unnecessary full table scans and removing unused or inefficient indexes, here are a few things you can do.

  1. Run the database in NOARCHIVELOG mode. This sacrifices recoverability for speed.
  2. For inserts, use the /*+ APPEND */ hint (see the sketch after this list). This performs a direct-path insert above the high water mark, which generates no UNDO for the table data itself (index maintenance still does). The disadvantage is that existing free space in the table is not reused.
  3. On the hardware side, RAID 0 over a larger number of smaller disks will give you the best insert performance, but depending on your usage, RAID 10 with its better read performance may be a better fit.
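
A minimal sketch of point 2 (the table and column names are hypothetical):

    -- Direct-path insert: rows are written above the high water mark,
    -- bypassing the buffer cache and generating no UNDO for the table data.
    -- Combined with NOLOGGING (and NOARCHIVELOG mode), redo is minimized as well.
    ALTER TABLE fast_load NOLOGGING;

    INSERT /*+ APPEND */ INTO fast_load (id, payload)
    SELECT id, payload FROM staging_source;

    -- The same session cannot query fast_load until the direct-path insert is committed.
    COMMIT;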

This said, I don't think you will gain much from any of these changes.

Leigh Riffel
A: 

What kind of data volumes and performance are you looking at? Are the inserts batched, or are they numerous small single-row ones?

Before banging your head against the wall trying to think of clever ways to get good performance, did you build any simple prototypes that would give you a better picture of the out-of-the-box performance? It could easily turn out that you don't need to do anything special to meet your goals.
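
For example, a throwaway prototype along these lines (the table name and row count are arbitrary) already gives a feel for the baseline, using SQL*Plus timing:

    SET TIMING ON

    CREATE TABLE proto_load (id NUMBER, payload VARCHAR2(100));

    -- Generate roughly a million rows of synthetic data and time the load.
    INSERT INTO proto_load (id, payload)
    SELECT level, RPAD('x', 100, 'x')
    FROM dual
    CONNECT BY level <= 1000000;
    COMMIT;

    -- Time a typical query against the freshly loaded table.
    SELECT COUNT(*), MIN(id), MAX(id) FROM proto_load;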

Andrew from NZSG