tags:

views: 7051

answers: 3

On Oracle 10gR2, I have several SQL queries whose performance I am comparing, but after the first run the execution plan is cached (it shows up in v$sql), so one of the queries goes from 28 seconds on the first run to 0.5 seconds afterward.

I've tried

ALTER SYSTEM FLUSH BUFFER_CACHE; -- after running this, the query consistently runs at 5 seconds, which I do not believe is accurate.

I also thought of deleting the entry itself from the cache (delete from v$sql where sql_text like 'select * from....), but I get an error about not being able to delete from a view.

+4  A: 

It's been a while since I worked with Oracle, but I believe execution plans are cached in the shared pool. Try this:

alter system flush shared_pool;

The buffer cache is where Oracle stores recently used data in order to minimize disk io.

Peter
My 28-second query takes 1.5 seconds after executing that command.
Sorry this didn't work for you. That is how you clear the cached execution plan, though. :)
Peter
A: 

Peter gave you the answer to the question you asked:

alter system flush shared_pool;

That is the statement you would use to "delete" prepared statements from the cache. (Prepared statements aren't the only objects flushed from the shared pool; the statement does more than that.) As I indicated in my earlier comment (on your question), v$sql is not a table. It's a dynamic performance view, a convenient representation of Oracle's internal memory structures. You only have SELECT privilege on the view; you can't delete rows from it.
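If you really do want to remove a single statement from the cache rather than flushing the whole shared pool, there is a supported procedure for purging one cursor. A hedged sketch, assuming you have DBA privileges (note: on 10gR2 this requires a recent patchset such as 10.2.0.4, or setting event 5614566; it works out of the box on 11g):

```sql
-- Find the address and hash value of the cursor you want to purge.
-- (The sql_text filter is a placeholder; adjust it to match your statement.)
SELECT address, hash_value
  FROM v$sqlarea
 WHERE sql_text LIKE 'select * from%';

-- Purge that single cursor from the shared pool.
-- The name argument is 'address,hash_value'; flag 'C' means cursor.
EXEC DBMS_SHARED_POOL.PURGE('&address,&hash_value', 'C');
```

This is far less disruptive to a shared instance than flushing the entire shared pool, which invalidates every cached cursor.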


flush the shared pool and buffer cache?

The following doesn't answer your question directly. Instead, it answers a fundamentally different (and maybe more important) question:

Should we normally flush the shared pool and/or the buffer cache to measure the performance of a query?

In short, the answer is no.

I think Tom Kyte addresses this pretty well:

http://www.oracle.com/technology/oramag/oracle/03-jul/o43asktom.html

<excerpt>

Actually, it is important that a tuning tool not do that. It is important to run the test, ignore the results, and then run it two or three times and average out those results. In the real world, the buffer cache will never be devoid of results. Never. When you tune, your goal is to reduce the logical I/O (LIO), because then the physical I/O (PIO) will take care of itself.

Consider this: Flushing the shared pool and buffer cache is even more artificial than not flushing them. Most people seem skeptical of this, I suspect, because it flies in the face of conventional wisdom. I'll show you how to do this, but not so you can use it for testing. Rather, I'll use it to demonstrate why it is an exercise in futility and totally artificial (and therefore leads to wrong assumptions). I've just started my PC, and I've run this query against a big table. I "flush" the buffer cache and run it again:

</excerpt>
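Following Kyte's advice, a warm-cache measurement looks something like this in SQL*Plus (the table name is just a placeholder):

```sql
SET TIMING ON

-- First run: warms the caches and builds the plan. Ignore this timing.
SELECT COUNT(*) FROM my_big_table;

-- Run it two or three more times and average the elapsed times shown.
SELECT COUNT(*) FROM my_big_table;
SELECT COUNT(*) FROM my_big_table;
SELECT COUNT(*) FROM my_big_table;
```

The averaged warm-cache timings are closer to what your users will actually experience than any flushed-cache number.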


hard parse performance bottleneck

With that out of the way, let's move forward to address your concern with performance.

You tell us that you've observed that the first execution of a query takes significantly longer (~28 seconds) than subsequent executions (~5 seconds), even when flushing all of the index and data blocks from the buffer cache.

To me, that suggests that the hard parse is doing some heavy lifting. It's either a lot of work, or it's encountering a lot of waits. This can be investigated and tuned.

tangentially related anecdotal story

A few years back, I did see one query with elapsed times measured in MINUTES on first execution, but in seconds on subsequent executions. What we found was that most of the first execution's time was spent on the hard parse. It was a query written by a CrystalReports developer who innocently (naively?) joined two humongous reporting views. One of the views was a join of 62 tables, the other a join of 42 tables. The query used the Cost Based Optimizer, and tracing revealed it wasn't wait time, it was CPU time spent evaluating possible join paths. Each of the vendor-supplied "reporting" views wasn't too bad by itself, but when two of them were joined, it was painfully slow. The problem was the sheer number of join permutations the optimizer was considering. There is an instance parameter that limits the number of permutations considered by the optimizer, but our fix was to re-write the query to join only the dozen or so tables actually needed.

(To be honest, an immediate short-term "band aid" was to schedule an early-morning run of the same query the report would issue. That was sufficient: the subsequent user-initiated report run found the prepared statement and avoided the hard parse. Of course that wasn't a "fix" for the problem; it just moved the problem earlier in the morning, when it wasn't noticed.)

Our next step would have (probably) been to go with a stored outline, to get a stable query plan.

Of course, statement reuse (avoiding the hard parse, using bind variables) is the normative pattern in Oracle, improves performance and scalability, yada, yada, yada.

This anecdotal incident may be entirely different than the problem you are observing.


hard parse performance bottleneck (continued)

Back to your performance issue.

I'm wondering if perhaps statistics are non-existent, and the optimizer is spending a lot of time gathering statistics before it prepares a query plan. That's one of the first things I would check: that statistics are collected on all of the referenced tables, indexes, and indexed columns.
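A quick, hedged sketch of checking and gathering statistics with DBMS_STATS (the schema and table names below are placeholders):

```sql
-- Check when statistics were last gathered on the referenced tables.
SELECT table_name, num_rows, last_analyzed
  FROM dba_tables
 WHERE owner = 'APP_OWNER';          -- hypothetical schema

-- Gather statistics on a table and, via cascade, on its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',          -- hypothetical schema
    tabname => 'BIG_TABLE',          -- hypothetical table
    cascade => TRUE                  -- also gather index statistics
  );
END;
/
```

A NULL last_analyzed, or a num_rows wildly out of line with reality, is a strong hint that the optimizer is working with missing or stale statistics.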

If your query joins a large number of tables, the CBO may be considering a huge number of permutations for join order.

A discussion of Oracle tracing is beyond the scope of this answer, but it's the next step.

I'm thinking you are probably going to want to trace events 10053 and 10046.

Here's a link to an "event 10053" discussion by Tom Kyte you may find useful:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:63445044804318
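As a rough sketch, the two trace events are enabled at the session level like this (trace files land in user_dump_dest; the 10046 trace can be formatted with tkprof):

```sql
-- 10046 level 12: SQL trace including bind values and wait events.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- 10053: optimizer decision trace, dumped when a statement is hard parsed.
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Run the slow query here, then switch the events back off:
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SESSION SET EVENTS '10053 trace name context off';
```

Note that the 10053 trace is only produced on a hard parse, so you may need to slightly alter the query text (or flush the shared pool) to force one.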

HTH

spencer7593
A: 

We've been doing a lot of work lately with performance tuning queries, and one culprit for inconsistent query performance is the file system cache that Oracle is sitting on.

It's possible that while you're flushing the Oracle caches, the file system cache still holds the data your query is asking for, meaning the query will still return quickly.
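On Linux, for example, the OS page cache can be dropped like this (requires root, and other platforms differ; this is a sketch, not a recommendation for a shared server):

```shell
# Flush dirty pages to disk first so nothing is lost.
sync

# Drop the page cache, dentries, and inodes (Linux 2.6.16 and later).
echo 3 > /proc/sys/vm/drop_caches
```

Like flushing Oracle's own caches, this makes the test environment more artificial, not less, so use it with the caveats Tom Kyte describes above in mind.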

Unfortunately I don't know how to clear the file system cache - I just use a very helpful script from our very helpful sysadmins.

Jeremy