I have a fact table containing 8 million rows, growing by 1 million rows per month. The table already has indexes on it and is used by an IBM Cognos environment to generate reports. I am currently looking for ways to optimize the SELECT statements against it.
As a first attempt, I partitioned the table so that each partition holds an equal share of the rows, and the queries filter on the partition key, so only one partition is accessed per query. Yet for some reason I am getting equal or even worse performance than before, which is weird. Can someone explain how to optimize this? A simplified sketch of the setup follows.
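To make the setup concrete, here is roughly the kind of partitioning I mean (table and column names such as PERIOD_DT are illustrative, not my real schema):

    -- Range-partitioned by month, so each report query should be
    -- pruned to a single partition.
    CREATE TABLE ft_costs (
        cost_id     NUMBER        NOT NULL,
        period_dt   DATE          NOT NULL,   -- partitioning criterion
        customer_id NUMBER        NOT NULL,
        amount      NUMBER(12,2)
    )
    PARTITION BY RANGE (period_dt) (
        PARTITION p_2012_01 VALUES LESS THAN (DATE '2012-02-01'),
        PARTITION p_2012_02 VALUES LESS THAN (DATE '2012-03-01'),
        PARTITION p_2012_03 VALUES LESS THAN (DATE '2012-04-01')
    );

    -- A typical report query that should hit only one partition:
    SELECT customer_id, SUM(amount)
    FROM   ft_costs
    WHERE  period_dt >= DATE '2012-02-01'
    AND    period_dt <  DATE '2012-03-01'
    GROUP  BY customer_id;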
My second idea is to implement the fact table as an index-organized table (IOT), but that would require making all of the columns part of the primary key. Is this reasonable, and would it yield a performance gain?
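A minimal sketch of what that IOT would look like (columns are illustrative), with every column forced into the primary key because an IOT stores the rows inside the PK index itself:

    CREATE TABLE ft_costs_iot (
        period_dt   DATE         NOT NULL,
        customer_id NUMBER       NOT NULL,
        product_id  NUMBER       NOT NULL,
        amount      NUMBER(12,2) NOT NULL,
        CONSTRAINT ft_costs_iot_pk
            PRIMARY KEY (period_dt, customer_id, product_id, amount)
    )
    ORGANIZATION INDEX;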
My third idea is to denormalize the fact table so that it contains all the columns that are currently joined in from the star-schema dimensions. Would that yield a performance gain?
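Roughly, the denormalized variant would be built like this (the dimension tables and columns shown are illustrative), so the report SELECTs could skip the star-schema joins entirely:

    CREATE TABLE ft_costs_denorm AS
    SELECT f.cost_id,
           f.period_dt,
           f.amount,
           c.customer_name,    -- copied in from DIM_CUSTOMER
           p.product_name      -- copied in from DIM_PRODUCT
    FROM   ft_costs     f
    JOIN   dim_customer c ON c.customer_id = f.customer_id
    JOIN   dim_product  p ON p.product_id  = f.product_id;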
EDIT: Here is the execution plan: http://i50.tinypic.com/11qtzr6.jpg
I have managed to reduce the cost of accessing the fact table FT_COSTS roughly threefold (from 42,000 down to 14,900) AFTER I created indexes containing the partitioning criterion; before that I was getting worse results than with the unpartitioned table. I used this question to solve my partitioning problem: http://stackoverflow.com/questions/2535908/range-partition-skip-check
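For reference, the indexes that helped were of this shape (names illustrative; I am not claiming this exact DDL, just the idea of leading with the partitioning column):

    -- An index that includes the partitioning criterion; declaring it
    -- LOCAL gives each partition its own small index segment.
    CREATE INDEX ft_costs_loc_ix
        ON ft_costs (period_dt, customer_id)
        LOCAL;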
From what I see now, the main bottleneck is the GROUP BY, which raises the cost from 34,000 to 85,000, more than doubling it. Does anyone have an idea for a workaround?
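One workaround I have been considering (an untested sketch, names illustrative) is to precompute the aggregation in a materialized view, so the GROUP BY cost is paid at refresh time instead of at report time:

    CREATE MATERIALIZED VIEW mv_costs_by_customer
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE
    AS
    SELECT period_dt, customer_id, SUM(amount) AS total_amount
    FROM   ft_costs
    GROUP  BY period_dt, customer_id;

With ENABLE QUERY REWRITE, the optimizer can redirect matching Cognos queries to the precomputed view automatically, but I do not know whether this is the right approach here.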