OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to [email protected] in production and [email protected] in testing.)

These parameters need to be accessed from both PL/SQL code running in the database and client-side code. I started out doing this by overloading the PLSQL_CCFLAGS init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
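
(Roughly the kind of thing I mean - a sketch only, with invented flag names:)

    -- PLSQL_CCFLAGS is dynamic, so it can be changed without a restart,
    -- but flag values may only be BOOLEAN or PLS_INTEGER literals, which
    -- is part of why encoding anything richer gets ugly:
    ALTER SYSTEM SET plsql_ccflags = 'in_prod:TRUE, env_id:1' SCOPE = BOTH;

    -- Client code then has to fish the raw string back out and parse it:
    SELECT value FROM v$parameter WHERE name = 'plsql_ccflags';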

[Edit] Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.

A: 

You could use environment variables, set either for the oracle OS user (the account that starts up the Oracle database) or server-wide. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
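
Something like this (a sketch; DBMS_SYSTEM is undocumented, so you may need an explicit grant from SYS, and the variable name is just an example):

    DECLARE
      l_value VARCHAR2(4000);
    BEGIN
      -- Reads an OS environment variable from the environment the
      -- instance was started with:
      DBMS_SYSTEM.GET_ENV('EMAIL_ERROR_ADDRESS', l_value);
      DBMS_OUTPUT.PUT_LINE('email_error_address = ' || l_value);
    END;
    /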

Diederik Hoogenboom
Thanks Diederik - I was hoping for a solution that would not require restarting the instance for changes to take effect - I've added an edit to reflect that.
dpbradley
+1  A: 

You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.

The solution is to use an external table (provided you are on 9i or higher). Because external tables hold their data in an OS file, they are independent of the database. To apply changed values, all you need to do is overwrite the OS file.
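
A minimal sketch (the directory, file name, and column layout are all illustrative):

    CREATE OR REPLACE DIRECTORY app_param_dir AS '/u01/app/config';

    CREATE TABLE app_parameters (
      param_name  VARCHAR2(64),
      param_value VARCHAR2(256)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY app_param_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '='
      )
      LOCATION ('app_parameters.txt')
    )
    REJECT LIMIT UNLIMITED;

    -- Each host's copy of app_parameters.txt carries that environment's
    -- values, one "name=value" pair per line:
    SELECT param_value
      FROM app_parameters
     WHERE param_name = 'email_error_address';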

You just need to ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
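
That DDL is a one-liner (file and directory names illustrative):

    -- Repoint the external table at this environment's file after a
    -- refresh from production:
    ALTER TABLE app_parameters LOCATION ('app_parameters_test.txt');

    -- ...or repoint the directory object itself:
    CREATE OR REPLACE DIRECTORY app_param_dir AS '/u01/app/config/test';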

The drawback to using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem you could use caching, with a user-defined namespace or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values with wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
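
A sketch of the cached flavour (all names illustrative):

    CREATE OR REPLACE PACKAGE app_param_pkg AS
      PROCEDURE load_params;
    END app_param_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY app_param_pkg AS
      PROCEDURE load_params IS
      BEGIN
        -- Copy every name/value pair from the external table into the
        -- context namespace:
        FOR r IN (SELECT param_name, param_value FROM app_parameters) LOOP
          DBMS_SESSION.SET_CONTEXT('app_params', r.param_name, r.param_value);
        END LOOP;
      END load_params;
    END app_param_pkg;
    /

    -- A context can only be written by the package it is bound to:
    CREATE CONTEXT app_params USING app_param_pkg;

    -- Retrieval is then an in-memory lookup:
    SELECT SYS_CONTEXT('app_params', 'email_error_address') FROM dual;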

While I've been writing this up I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, and which you can also call whenever you refresh the OS file.
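
For instance (a sketch; the ACCESSED GLOBALLY clause makes one load visible to every session):

    -- One call to load_params now serves all sessions; re-run it after
    -- editing the OS file to refresh the cached values:
    CREATE OR REPLACE CONTEXT app_params USING app_param_pkg ACCESSED GLOBALLY;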

APC
Thanks - I had thought about externally-defined tables but not in conjunction with CONTEXT.
dpbradley
A: 

I tend to use a system_parameters table. If you're concerned about it being overwritten, put it in its own schema and create a public synonym for it.
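
Something like (schema and names illustrative):

    -- Keep the table in a dedicated schema so application-schema
    -- imports don't touch it:
    CREATE TABLE param_owner.system_parameters (
      param_name  VARCHAR2(64) PRIMARY KEY,
      param_value VARCHAR2(256)
    );

    CREATE PUBLIC SYNONYM system_parameters
      FOR param_owner.system_parameters;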

David
Thanks David - the copies from production are usually done by RMAN "duplicate database", so ALL schemas are overwritten.
dpbradley