Oracle 10g has Profiles that allow various resources to be limited; see orafaq.com and the Oracle Documentation for background. I am particularly interested in limiting CPU_PER_CALL, LOGICAL_READS_PER_CALL, and COMPOSITE_LIMIT, with the goal of preventing a poorly formed statement from destroying performance for every other session.

My problem is that I don't know what values to use for these parameters so that typical long-running, resource-intensive operations are allowed while the truly bad ones are stopped. I realize the values will differ based on the hardware, tolerance levels, and queries involved, which is why I am more interested in a method to follow for determining the best values.

+3  A: 

There are a variety of approaches, depending on the situation. The simplest approach with any hope of working is to ask how long the longest-running realistic operation should take (that is obviously system-dependent, and depends on whether this is a system you're building or an existing one) and to back into a CPU_PER_CALL value from that time limit and the degree of parallelism. Assuming single-threaded operation, if you can reasonably say that a query that hasn't returned in 30 minutes should be killed, you can set CPU_PER_CALL to allocate 30 minutes' worth of CPU (most queries won't use 100% of a CPU constantly, so a 30-minute limit leaves some breathing room).
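As a minimal sketch of what that looks like (the profile and user names here are hypothetical): CPU_PER_CALL is expressed in hundredths of a second, so a 30-minute CPU budget works out to 30 * 60 * 100 = 180000, and profile resource limits are only enforced while the RESOURCE_LIMIT initialization parameter is TRUE.

    -- Sketch only; profile and user names are hypothetical.
    -- Kernel resource limits in profiles require RESOURCE_LIMIT = TRUE.
    ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

    -- CPU_PER_CALL is in hundredths of a second:
    -- 30 minutes * 60 * 100 = 180000.
    CREATE PROFILE runaway_guard LIMIT
      CPU_PER_CALL           180000
      LOGICAL_READS_PER_CALL 1000000;  -- block gets per call; tune from observed workload

    ALTER USER app_user PROFILE runaway_guard;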

If this is an existing system, you (or your DBA) can go through AWR/Statspack reports for a reasonable number of days (on some systems you'll want to include reports from month-, quarter-, or year-end, when additional processing may be done) and find the real statements that use the most CPU and I/O. You can then set your profile limits accordingly (i.e. the maximum CPU recorded for a statement in the past month plus 30% of breathing room).
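For example, a rough sketch of mining the AWR history for the hungriest statements over the last 30 days might look like the following (this assumes you are licensed for the Diagnostics Pack, which the DBA_HIST views require; Statspack users would dig through STATS$SQL_SUMMARY instead):

    -- Sketch only; the DBA_HIST views require Diagnostics Pack licensing.
    -- Worst per-snapshot CPU and I/O recorded per statement in 30 days.
    SELECT st.sql_id,
           MAX(st.cpu_time_delta)    AS max_cpu_microsecs,
           MAX(st.buffer_gets_delta) AS max_logical_reads,
           MAX(st.disk_reads_delta)  AS max_physical_reads
      FROM dba_hist_sqlstat  st
      JOIN dba_hist_snapshot sn
        ON sn.snap_id         = st.snap_id
       AND sn.dbid            = st.dbid
       AND sn.instance_number = st.instance_number
     WHERE sn.begin_interval_time > SYSDATE - 30
     GROUP BY st.sql_id
     ORDER BY max_cpu_microsecs DESC;

Take the worst legitimate values, add the 30% of breathing room, and mind the units: CPU_TIME_DELTA is in microseconds, while CPU_PER_CALL is in hundredths of a second.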

Of course, for any limit you pick, someone has to monitor the system to make sure the limits keep pace. If queries get steadily more expensive over time because of increases in data volume, for example, that max + 30% limit might be insufficient in six months. You don't want to find that out when the nightly processing aborts; someone has to keep on top of it.

If you are using Enterprise Edition, you may be better served by looking at Resource Manager rather than at profiles. While profiles allow you to kill runaway sessions, Resource Manager allows you to change session priority based on a variety of factors. Rather than killing a query that has used more than 30 minutes of CPU, it may be better to lower its priority so that it stops interfering with other sessions without being killed, in case it is just a legitimately long-running job.
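As a rough illustration (the plan, group, and threshold here are hypothetical, and the exact directive parameters vary somewhat between releases), a Resource Manager plan can switch a session into a low-priority consumer group once it passes a CPU threshold instead of killing it:

    -- Sketch only; names and thresholds are hypothetical, and
    -- CREATE_PLAN_DIRECTIVE parameters differ slightly across releases.
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'LOW_PRIORITY',
        comment        => 'Demoted long-running sessions');

      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'THROTTLE_PLAN',
        comment => 'Demote rather than kill runaway queries');

      -- Sessions start in OTHER_GROUPS; after the switch_time threshold
      -- (1800 seconds here) they are moved to LOW_PRIORITY.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'THROTTLE_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Default work, with a switch threshold',
        cpu_p1           => 80,
        switch_group     => 'LOW_PRIORITY',
        switch_time      => 1800);

      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'THROTTLE_PLAN',
        group_or_subplan => 'LOW_PRIORITY',
        comment          => 'Gets leftover CPU only',
        cpu_p2           => 100);

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

The plan then takes effect once the RESOURCE_MANAGER_PLAN initialization parameter is set to it (ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'THROTTLE_PLAN';).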

Justin Cave
The documentation for Resource Manager says "The main goal of the Database Resource Manager is to give the Oracle Database server more control over resource management decisions, thus circumventing problems resulting from inefficient operating system management." This made me think it wouldn't apply to my goal. I'll look at it again. Thanks for the tip.
Leigh Riffel
From Oracle's standpoint, though, the problem you may well be trying to solve is that the operating system's scheduler is inefficient: it gives equal weight to the session doing the critical OLTP operation that keeps the business online and to the session running the hour-long report that may be interesting for tomorrow's meeting. Resource Manager lets you tell Oracle that certain sessions are more important than others and that it should slow down the report session in order to maintain the response time of the OLTP session.
Justin Cave