Hi,

We have a test system which matches our production system like for like. 6 months ago we did some testing on new hardware, and found the performance limit of our system.

However, now we are re-doing the testing with a view to adding further hardware, and we have found the system doesn't perform as it used to.

The reason is that on one specific volume we are now doing random I/O where it used to be sequential. Furthermore, it turns out that Oracle's activity on this volume, which is 100% writes, is now in 8k blocks, where before the writes were up to 128k.

So something has caused the Oracle DB writer to stop batching up its writes.

We've extensively checked our config, and cannot see any difference between our test and production systems. We've also opened a call with Oracle, but at this stage information has been slow in coming.

So ultimately this boils down to two related questions:

  1. Can you rely on Oracle multiblock writes? Is that a safe thing to engineer/tune your system around?
  2. Why would Oracle change its behaviour?

We're not necessarily blaming Oracle at this stage - it may well be reacting to something in the environment - but what?

The OS/arch is Solaris/SPARC.
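For what it's worth, since this is Solaris you can confirm the 8k-vs-128k observation at the OS level with DTrace. This is only a diagnostic sketch - it assumes the writer processes show up under the executable name "oracle" and issue their writes via pwrite(2); DBWR may instead be using async I/O (kaio), in which case a different probe is needed:

```d
/* Histogram of write sizes issued by Oracle processes, per file.
   arg2 of pwrite(2) is the byte count; quantize() buckets it into
   power-of-two ranges. Run as root; Ctrl-C prints the results. */
syscall::pwrite:entry
/execname == "oracle"/
{
    @sizes[fds[arg0].fi_pathname] = quantize(arg2);
}
```

Run it with `dtrace -s writesizes.d` (the filename is arbitrary) while the load test is going; the per-file histograms will show whether the writes to the problem volume really are pinned at 8k.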

Oh, I forgot to mention: the insert table has no indexes and only a couple of foreign keys - it's designed as a bucket for the fastest possible inserts. It's also partitioned on the key field.
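To give an idea of the shape (all names here are made up - this is just a sketch matching the description above: no indexes, a couple of FKs, range-partitioned on the key):

```sql
-- Hypothetical shape of the insert-only "bucket" table described above.
CREATE TABLE event_bucket (
    event_key  NUMBER         NOT NULL,
    source_id  NUMBER         NOT NULL,
    payload    VARCHAR2(4000),
    CONSTRAINT fk_source FOREIGN KEY (source_id)
        REFERENCES sources (source_id)
)
PARTITION BY RANGE (event_key) (
    PARTITION p1   VALUES LESS THAN (1000000),
    PARTITION p2   VALUES LESS THAN (2000000),
    PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
-- Deliberately no indexes, so inserts only touch table blocks.
```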

Thanks for any tips!

+1  A: 

More description of the workload would allow some hypotheses.

If you are updating random blocks, then the DBWR process(es) are going to have little choice but to do single-block writes. Indexes especially are likely to have writes all over the place. If you have an index of character values and need to insert a new 'M' record where there isn't room, it will get a new block for the index and split the current block. You'll have some of those 'M' records in the original block, and some in the new block (which will be the last [used] block in the last extent).

I suspect you are most likely to get multi-block writes when bulk inserting into tables, as new blocks will be allocated and written to. Potentially, initially you had (say) 1GB of extents allocated and were writing into that space. Now you might have reached the limit of that and be creating new extents (say 50MB) which the database may be getting from scattered file locations (e.g. from other tables that have been dropped).
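One way to check this hypothesis is to compare the segment's extent allocation between the two test runs. A sketch ('YOUR_TABLE' is a placeholder for the insert table's actual name):

```sql
-- Check whether the segment is still writing into large pre-allocated
-- extents or has started grabbing many small, scattered ones.
SELECT segment_name,
       COUNT(*)   AS extent_count,
       MIN(bytes) AS smallest_extent,
       MAX(bytes) AS largest_extent
FROM   dba_extents
WHERE  segment_name = 'YOUR_TABLE'
GROUP  BY segment_name;
```

A jump in extent_count with a small smallest_extent between runs would support the "new scattered extents" explanation.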

Gary
OK, there are no indexes on the main table on one particular volume; however, other writes that the DB writer does on other volumes are to indexed tables. At least 2/3 of the writes are to completely unindexed tables. When we did the testing before, each writer (there are 2) seemed to take ownership of a different volume; now the writers seem to be sharing the work evenly instead. Anyway, the majority of the writes are unindexed and into pre-sized tables.
Codek
There is some material on Metalink about multiple DBWRs (e.g. Note 157868.1 and 97291.1). There does appear to be some segregation between what each is responsible for and, as you say, it appears to be aimed at balancing the workload [which makes sense] - you don't want one DBWR sitting idle because there are no writes to "his" disks. Perhaps, if the multi-block writes are a big performance issue, it would be worth going down to one DBWR and seeing whether having a single process doing all the writes means they get chunked up better.
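If you want to try that experiment, the relevant parameter is db_writer_processes; it's a static parameter, so the change needs a restart (and, obviously, try this on the test system only):

```sql
-- Drop back to a single database writer to see whether writes get
-- coalesced into larger multi-block I/Os again.
ALTER SYSTEM SET db_writer_processes = 1 SCOPE = SPFILE;
-- Restart the instance for the change to take effect, then re-run
-- the load test and compare I/O sizes on the affected volume.
```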
Gary