I have a report query that joins 19 tables in our DW. I know our DW schema design is not good. The query runs every night and gathers the previous day's activity. Our system went live in 06/2008, so one day's data volume is a small portion of the entire data volume.

The query execution time was generally 5~10 minutes and the execution cost was about 70,000. It was using index access with nested loops. The cost was low and everything looked good.

This month the query has been getting slower and slower, yet the execution cost remains the same. It is still using the indexes and the execution cost is still low, but it now runs for over an hour.

Any ideas?

+2  A: 

Maybe your optimiser stats are out of date. The cost calculated by the optimiser is based on the stats it has, so if those haven't changed, neither will the cost. However, the time taken to actually run the query will change if there is a lot more data than there used to be.
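A minimal sketch of how to check and refresh those stats, assuming an Oracle DW (the schema and table names below are placeholders, not from the question):

-- When were optimizer stats last gathered on the tables in the join?
SELECT table_name, num_rows, last_analyzed
FROM   all_tables
WHERE  owner = 'DW'                                    -- hypothetical schema name
AND    table_name IN ('FACT_ACTIVITY', 'DIM_DATE');    -- hypothetical tables

-- Refresh stats on a stale table (cascade also refreshes its index stats)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'DW',
    tabname => 'FACT_ACTIVITY',
    cascade => TRUE);
END;
/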

Tony Andrews
+1  A: 

I have some ideas about what could cause a sudden slowdown, but the best starting point is to get information from the database as to where it is spending its time.

If you have access to the code and SQL*Plus, a starting point would be:

set autotrace on

then run the query.

(This automates the steps you used to have to perform to enable SQL Trace and then tkprof the output).
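For a report-style query you usually want the plan and run-time statistics without the result set scrolling past, so a typical session looks something like this (a sketch, not necessarily the exact commands you'd use):

set autotrace traceonly explain statistics
set timing on
-- then run the report query; when it finishes, SQL*Plus prints the
-- execution plan plus consistent gets, physical reads, sorts, etc.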

Another option would be to take a Statspack report with a snapshot period around your bad job.

This should at least narrow down where it is spending its time (physical I/O, CPU, etc.).
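A sketch of the Statspack steps, assuming Statspack (the PERFSTAT schema) is already installed; the report script prompts for the begin and end snapshot IDs:

EXEC STATSPACK.SNAP;             -- snapshot just before the nightly job
-- ... let the report job run ...
EXEC STATSPACK.SNAP;             -- snapshot just after it finishes

@?/rdbms/admin/spreport.sql      -- generate the report between the two snapshots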

It could just be that the cumulative increase in data has tipped your query over a resource limit.

JulesLt
Yes, I am running set autotrace traceonly now. It has been going for 15 minutes and is still running. Thanks for your comment.
exiter2000
OK - if that doesn't give you the answer, post the results here. We won't be able to reverse engineer your data warehouse from them, but they may give a good clue as to where your time is going.
JulesLt