A lot of the queries in our application involve joins between large (and frequently updated) tables and small (and constant) tables. By large I mean more than 1 million rows, and by small I mean fewer than 5,000 rows.

I am looking for a feature or optimization technique in Oracle (not at the application layer) that can make these joins more efficient. After reading up on the matter, I found result caching and materialized views to suit my scenario.

I am weighing the drawbacks of each of these two methods:

  1. Result cache: Because the large tables are volatile, I don't think the result cache can make the joins themselves more efficient. But what if I put the small tables in the result cache -- will that help the joins, in the sense that Oracle won't have to read from disk to fetch the small tables' data when performing a join? (A sketch of this option follows after this list.)

  2. Materialized views: I think these can cause stale-data issues, again because the large tables are updated frequently. (A sketch of this option also follows below.)
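For the result-cache option, here is a minimal sketch, assuming Oracle 11g or later and hypothetical table names (small_lookup); the hint caches a single statement's results, while the table annotation (11g Release 2 and later) makes eligible queries on the table use the cache without a hint:

```sql
-- Per-query: ask Oracle to cache this statement's result set.
-- The cached result is invalidated automatically whenever
-- small_lookup (a hypothetical small, near-constant table) changes.
SELECT /*+ RESULT_CACHE */ code, description
FROM   small_lookup;

-- Per-table alternative (11gR2+): all eligible queries against the
-- table go through the result cache without needing the hint.
ALTER TABLE small_lookup RESULT_CACHE (MODE FORCE);
```

Note that the result cache stores final result sets, not table blocks, so it only short-circuits queries that repeat exactly; a table of under 5,000 rows that is read constantly will usually stay in the buffer cache anyway.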

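For the materialized-view option, a sketch under the same hypothetical schema (big_orders, small_lookup): a fast-refresh-on-commit join MV avoids staleness entirely, but shifts the cost onto every transaction that modifies big_orders, which is exactly the trade-off to weigh for a frequently updated table:

```sql
-- Fast refresh on commit requires MV logs WITH ROWID on both base
-- tables, and the ROWID of each table in the MV's select list.
CREATE MATERIALIZED VIEW LOG ON big_orders   WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON small_lookup WITH ROWID;

CREATE MATERIALIZED VIEW mv_orders_lookup
  REFRESH FAST ON COMMIT
AS
SELECT o.ROWID AS o_rid, l.ROWID AS l_rid,
       o.order_id, o.amount, l.description
FROM   big_orders o, small_lookup l
WHERE  o.code = l.code;
```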
I wanted to know which method people have found useful in similar scenarios.

+1  A: 

It depends on the sort of queries.

If a query looks at a large number of rows in the large table (e.g. an aggregation typical of a data warehouse), then you might benefit from 'pre-aggregating' older/static rows in a materialized view (which will probably require partitioning). For the joins, you may want to look at bitmap indexes / bitmap join indexes, depending on the nature (especially the concurrency) of inserts/updates; both are sketched below.
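To make those two suggestions concrete, here are hedged sketches against a hypothetical schema (big_orders as the large table, small_lookup as the small one; a bitmap join index requires small_lookup.code to be a primary or unique key):

```sql
-- Pre-aggregation: with query rewrite enabled, the optimizer can
-- transparently answer matching aggregate queries from this MV
-- instead of scanning big_orders.
CREATE MATERIALIZED VIEW mv_orders_monthly
  ENABLE QUERY REWRITE
AS
SELECT code, TRUNC(order_date, 'MM') AS order_month, SUM(amount) AS total
FROM   big_orders
GROUP  BY code, TRUNC(order_date, 'MM');

-- Bitmap join index: precomputes the join, so filtering big_orders
-- by small_lookup.description needs no join at run time. Bitmap
-- indexes suit low-concurrency DML; heavy concurrent updates on
-- big_orders will contend on the bitmap segments.
CREATE BITMAP INDEX bji_orders_desc
  ON big_orders (small_lookup.description)
  FROM big_orders, small_lookup
  WHERE big_orders.code = small_lookup.code;
```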

If each query looks at the same subset of the large table (e.g. "this week's orders"), then clustering that subset together on the same parts of the disk may help with memory caching; see the sketch below.
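One way to get that physical clustering, again as a sketch with hypothetical names, is to range-partition the large table by date so that a week's rows occupy a small set of adjacent blocks that stay warm in the buffer cache:

```sql
-- Weekly range partitions: a query on "this week's orders" prunes
-- down to a single partition's blocks.
CREATE TABLE big_orders (
  order_id   NUMBER       PRIMARY KEY,
  order_date DATE         NOT NULL,
  code       VARCHAR2(10),
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2010_w01 VALUES LESS THAN (DATE '2010-01-11'),
  PARTITION p_2010_w02 VALUES LESS THAN (DATE '2010-01-18'),
  PARTITION p_max      VALUES LESS THAN (MAXVALUE)
);
```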

If each query looks at different subsets of the large table, then caching probably won't help, but you'd want to make sure your indexes are right.

Gary