views:

293

answers:

4

I have Oracle 10g installed on Windows Server 2003. I have 22,000,000 records in a single transactional table, and it grows by approximately 50,000 records per month.

My question is: whenever I run a query against this table, the query is always too slow. Is there any method by which I can improve the performance of the query, such as partitioning the table?

the query is

select a.prd_code
     , a.br_code||'-'||br_title
     , a.size_code||'-'||size_title
     , size_in_gms
     , a.var_code||'-'||var_title
     , a.form_code||'-'||form_title
     , a.pack_code||'-'||pack_title
     , a.pack_type_code||'-'||pack_type_title
     , start_date
     , end_date
     , a.price
from   prices a
       , brand br
       ,  (select distinct prd_code
                 , br_code
                 ,  size_code
                 , var_code
                 , form_code
                 ,packing_code
                 ,  pack_type_code 
            from cphistory 
            where prd_code = '01' 
            and flag = 'Y'  
            and project_yy = '2009' and '01' and '10') cp
       , (select prd_code
                , br_code
                , size_code
                , size_in_gms
          from sizes 
          where prd_code = '01' 
          and end_date = '31-dec-2050' 
          and flag = 'Y') sz
        ,  (select prd_code
                   , br_code
                   , var_code
                   , var_title 
              from varient) vt
          , (select prd_code
                    , br_code
                    , form_code
                    , form_title 
             from form) fm
        ,  (select prd_code
                   , br_code
                   , pack_code
                   , pack_title 
             from package) pc
        ,  (select prd_code
                   , pack_type_code
                   , pack_type_title
             from pack_type) pt
where a.prd_code = br.prd_code 
and   a.br_code  = br.br_code
and   a.prd_code = sz.prd_code
and   a.br_code  = sz.br_code
and   a.size_code = sz.size_code
and   a.prd_code  = vt.prd_code
and   a.br_code   = vt.br_code
and   a.var_code  = vt.var_code
and   a.prd_code  = fm.prd_code
and   a.br_code   = fm.br_code
and   a.form_code = fm.form_code
and   a.prd_code  = pc.prd_code
and   a.br_code   = pc.br_code
and   a.pack_code = pc.pack_code
and   a.prd_code  = pt.prd_code
and   a.pack_type_code = pt.pack_type_code
and   end_date = '2009'
and   prd_code = '01'
order by a.prd_code
         , a.br_code
         , a.size_code
         , a.var_code
         , a.pack_code
         , a.form_code

tables used in this query are:

prices    : has more than 2.1M rows
cphistory : has more than 2.2M rows
sizes     : has more than 5000 rows
brand     : has more than 1200 rows
varient   : has more than 1800 rows
package   : has more than 200 rows
pack_type : has more than 150 rows
A: 

It will depend on what you are querying. Indexes may help. Partitioning may help.

What sort of information are you querying? Show examples.

EvilTeach
+5  A: 
  1. Check indexes. Make sure you have a primary key. Alternate candidate keys should have unique constraints and indexes.
  2. Run EXPLAIN PLAN on queries and see how the optimizer is running them. If you see TABLE SCAN, add indexes.
  3. Make sure the optimizer has up-to-date statistics to work with.
  4. Move historical data into warehouses if you must.

22M records isn't that enormous.
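As a rough sketch of points 1–3 (the constraint name, key columns, and schema are assumptions based on the posted query, not something to copy verbatim):

```sql
-- Sketch only: names and key columns are guessed from the posted query.

-- 1. A primary key (or unique constraint) gives you a supporting index.
alter table prices add constraint prices_pk
  primary key (prd_code, br_code, size_code, var_code,
               form_code, pack_code, pack_type_code, start_date);

-- 2. See how the optimizer plans to run a statement; watch for
--    TABLE ACCESS FULL on the big tables.
explain plan for
  select price from prices where prd_code = '01';
select * from table(dbms_xplan.display);

-- 3. Keep optimizer statistics current (on 10g, use DBMS_STATS).
begin
  dbms_stats.gather_table_stats(ownname => user,
                                tabname => 'PRICES',
                                cascade => true);
end;
/
```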

duffymo
A: 

Why are the queries slow? Are they doing table scans on the large table? Normally, OLTP queries would be fetching a relatively small number of rows based on a primary key or other indexed column. If your queries are not using indexes and they are the typical sort of OLTP queries that would benefit from using indexes, that would be the place to start.

If you regularly need to query a large number of rows from this table, such that a table scan would be the more efficient access path, you could look into either using materialized views to pre-aggregate the data or into partitioning the table. Partitioning, however, is an extra cost option on top of your enterprise edition license, so you'll generally want to exhaust your other options before going down that path.
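For illustration only, a range-partitioned version of the table might be declared like this (the partition key, bounds, and column types are all assumptions, and this requires the Partitioning option on Enterprise Edition):

```sql
-- Sketch: range partitioning by a date column (assumed structure).
create table prices_part (
  prd_code   varchar2(2),
  br_code    varchar2(10),
  start_date date,
  end_date   date,
  price      number
)
partition by range (start_date) (
  partition p_2008   values less than (date '2009-01-01'),
  partition p_2009   values less than (date '2010-01-01'),
  partition p_future values less than (maxvalue)
);
```

Queries that filter on the partition key can then skip irrelevant partitions entirely (partition pruning).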

Justin Cave
+1  A: 

You should probably start with explain plans, but do this:

  1. Take each of the queries in the "FROM" clause out and do an explain plan on each one. Verify that each one is hitting an appropriate index. If not then add indexes for each of them so that each of the sub-queries is fast.
  2. (only if lots of data is returned) Take the ORDER BY out of the main query and run it. See if it is much faster. If so, your time is being spent sorting the data and you need to look into why the sort is slow.
  3. Pull out the sub-queries. "vt", "fm", "pc" and "pt" select their entire underlying tables. In my tests on 10g, wrapping tables in sub-queries like this causes the optimizer to miss the indexes on them completely. Just put the tables directly into the FROM clause and let the Oracle optimizer use the indexes.
  4. Try folding all the criteria on "cp" and "sz" into the main query, remove those sub-queries, and see if that makes a difference.
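Steps 3 and 4 together amount to a rewrite along these lines (abridged to three of the tables; join columns are taken from the original WHERE clause, and the assumption is that `br_title` lives on the `brand` table):

```sql
-- Abridged sketch of steps 3 and 4: tables joined directly instead of
-- being wrapped in sub-queries, with the sizes predicates folded in.
select a.prd_code
     , a.br_code || '-' || br.br_title
     , a.price
from   prices  a
join   brand   br on br.prd_code  = a.prd_code
                 and br.br_code   = a.br_code
join   sizes   sz on sz.prd_code  = a.prd_code
                 and sz.br_code   = a.br_code
                 and sz.size_code = a.size_code
join   varient vt on vt.prd_code  = a.prd_code
                 and vt.br_code   = a.br_code
                 and vt.var_code  = a.var_code
where  a.prd_code  = '01'
and    sz.flag     = 'Y'
and    sz.end_date = '31-dec-2050'
order by a.prd_code, a.br_code, a.size_code;
```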

Lots and lots of explain plans, and more than a little careful thought. I wish that I could help more.

Philip Schlump