I am collecting runtime profiling data from PL/SQL stored procedures. The data is collected as certain stored procedures execute, but it needs to accumulate across multiple executions of those procedures.

To minimize overhead, I'd like to store that profiling data in some PL/SQL-accessible, memory-resident Oracle storage for the duration of the data-collection interval, and then dump out the accumulated values. The data-collection interval might be seconds or hours; it's OK not to preserve this data across system boots. Something like session state in web servers would do.

What are my choices for storing such data?

The only method I know about is application contexts set via dbms_session:

procedure set_ctx (value in varchar2) as
begin
    -- store the value in the 'Test_Ctx' application context for this client id
    dbms_session.set_context ( 'Test_Ctx', 'AccumulatedValue', value, NULL, 'ProfilerSessionId' );
end set_ctx;

This works, but takes some 50 microseconds(!) per update to the accumulated value.

What I'm hoping for is a way to access and store an array of values in some Oracle memory using vanilla PL/SQL statements, with access times comparable to ordinary package-local array accesses.
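Conceptually, I'm after something as cheap as updating a package-level associative array, e.g. (names made up, sketch of the access pattern only):

create or replace package profiler_state as
  type t_counts is table of number index by pls_integer;
  g_counts t_counts;   -- held in session memory for the life of the session
end profiler_state;
/

begin
  -- plain in-memory array access, no context call or SQL round trip
  if profiler_state.g_counts.exists(42) then
    profiler_state.g_counts(42) := profiler_state.g_counts(42) + 1;
  else
    profiler_state.g_counts(42) := 1;
  end if;
end;
/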

EDIT (after learning about the session lifetime of PL/SQL package variables):

Are there PL/SQL-accessible variables with lifetimes longer than a session? I'd guess such variables would need synchronization to allow safe updates from multiple sessions. Oddly, the kind of performance data I'm collecting wouldn't be hurt very badly by synchronization faults, because the performance values simply grow monotonically. A synchronization error would merely mean we failed to capture a bit of that growth, and I don't think that would damage what I'm doing enough to matter.

+2  A: 

The only way I can think of doing this is to have a "listener" process that is constantly running and maintaining the in-memory data. Your other processes would then record information by communicating with the listener via e.g. DBMS_PIPE or DBMS_AQ. There is an example of a Pro*C listener process in the DBMS_PIPE docs.
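For the sending side, a rough, untested sketch (the pipe name and message layout are just placeholders):

declare
  l_status integer;
begin
  dbms_pipe.pack_message('MY_PROC');   -- name of the procedure being profiled
  dbms_pipe.pack_message(1234);        -- e.g. elapsed microseconds for this call
  l_status := dbms_pipe.send_message('profiler_pipe');
  if l_status != 0 then
    raise_application_error(-20001, 'dbms_pipe.send_message returned ' || l_status);
  end if;
end;
/

The listener would then loop on dbms_pipe.receive_message('profiler_pipe') followed by dbms_pipe.unpack_message, accumulating the totals in its own memory.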

However, I have doubts as to whether this would be more efficient for the calling programs than the simpler solution of writing the information to a table via an autonomous transaction.
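That simpler alternative might look like this (untested sketch; the table profiling_log and its columns are only placeholders):

create or replace procedure log_profile_row (p_proc_name in varchar2, p_elapsed_us in number) as
  pragma autonomous_transaction;
begin
  insert into profiling_log (proc_name, elapsed_us, logged_at)
  values (p_proc_name, p_elapsed_us, systimestamp);
  commit;  -- commits only this autonomous transaction, not the caller's work
end log_profile_row;
/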

Tony Andrews
A: 

How about using a per-session package with a fairly fixed memory profile, which you periodically (every N updates) flush to the database (for your 'all sessions' data) and then reset?
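Something along these lines (untested sketch; the table profiler_flush, the column names, and the flush threshold are only placeholders):

create or replace package profiler_buffer as
  procedure record_value (p_proc_name in varchar2, p_elapsed_us in number);
  procedure flush;
end profiler_buffer;
/

create or replace package body profiler_buffer as
  -- package state lives in session memory for the lifetime of the session
  type t_totals is table of number index by varchar2(128);
  g_totals       t_totals;
  g_update_count pls_integer := 0;
  c_flush_every  constant pls_integer := 1000;

  procedure flush is
    pragma autonomous_transaction;  -- so flushing never disturbs the caller's transaction
    l_key varchar2(128);
  begin
    l_key := g_totals.first;
    while l_key is not null loop
      insert into profiler_flush (proc_name, elapsed_us, flushed_at)
      values (l_key, g_totals(l_key), systimestamp);
      l_key := g_totals.next(l_key);
    end loop;
    g_totals.delete;       -- reset the in-memory buffer
    g_update_count := 0;
    commit;
  end flush;

  procedure record_value (p_proc_name in varchar2, p_elapsed_us in number) is
  begin
    if g_totals.exists(p_proc_name) then
      g_totals(p_proc_name) := g_totals(p_proc_name) + p_elapsed_us;
    else
      g_totals(p_proc_name) := p_elapsed_us;
    end if;
    g_update_count := g_update_count + 1;
    if g_update_count >= c_flush_every then
      flush;
    end if;
  end record_value;
end profiler_buffer;
/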

If you can do an insert rather than an update, the risk of locking should be pretty low, and you could then use a view or materialized view over the rows to get the cumulative totals.
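For example, building on the hypothetical profiler_flush table above, the totals could simply be (sketch only):

create or replace view profiler_totals as
select proc_name,
       sum(elapsed_us) as total_elapsed_us,
       count(*)        as flush_count
from   profiler_flush
group  by proc_name;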

Alternatively, if you combine that with the DBMS_AQ approach Tony suggests, you can guard against locks entirely: putting a message into a queue is fast, and you can attach the queue to a callback on a PL/SQL package that executes in an Oracle background process.
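The enqueue side might look roughly like this (untested sketch; it assumes a RAW-payload queue named profiler_q created beforehand with DBMS_AQADM, and the PL/SQL callback attached separately with DBMS_AQ.REGISTER using a plsql:// locator):

declare
  l_enq_opts  dbms_aq.enqueue_options_t;
  l_msg_props dbms_aq.message_properties_t;
  l_msgid     raw(16);
begin
  dbms_aq.enqueue(queue_name         => 'profiler_q',
                  enqueue_options    => l_enq_opts,
                  message_properties => l_msg_props,
                  payload            => utl_raw.cast_to_raw('MY_PROC:1234'),
                  msgid              => l_msgid);
  commit;
end;
/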

JulesLt
A: 

Hi There

I am not entirely sure what you are doing, but I have written a detailed logging package for PL/SQL that allows tracing execution through PL/SQL, timed to the millisecond. Relevant to your case, there is an option to log to memory, i.e. no I/O, using a PL/SQL collection. In other words, you can log to a buffer and periodically flush the buffer to a table. It is available from https://sourceforge.net/p/plj-logger/home/ or from http://www.pljumpstart.com/download

It has the added advantage of being very simple to implement: a single package plus supporting tables.

pj