Hi there, not sure if the subject entirely conveys what I'm trying to achieve, but let me explain:

We are building an application that uses Oracle as its storage backend. Each year, the previous year's dataset will be "archived", and a new instance created and populated from scratch. What are the options for doing this within the same schema?

  1. Keep version information at the record level (we presume this will be too slow for our use case).
  2. Keep version information at the table level, so for each new version we re-create all the tables, but with a new version prefix. (We like this solution, since we can do it all in code.)
  3. ?

Is there not something like partitions/personalities/namespaces available that will allow us to achieve this in Oracle?

My Oracle experience is rather limited; any assistance will be greatly appreciated!

+1  A: 

The RDBMS conceptual model is not very good at maintaining temporal versions of data. So it is not just Oracle which is lacking in this regard.

I am unclear why you think keeping version information at the record level will be too slow. Too slow in creating a new version? Or too slow when it comes to data retrieval during regular operations?

Here is how you could do it. Given a table CUSTOMERS with a business key of CUSTOMER_REF, I might normally build it like this (I am using abbreviated syntax rather than best practice for reasons of space):

create table customers 
( id number not null primary key
  , customer_ref number not null unique
  , name varchar2(30) not null )
/

The versioned equivalent would look like this:

create table customers 
( id number not null primary key
  , customer_ref number not null 
  , version_number number
  , name varchar2(30) not null
  , constraint whatever unique (customer_ref, version_number) )
/

This works by keeping VERSION_NUMBER null for the current version, and only populating it at archival time. Any lookup is going to have to include "and version_number is null". This will be a bit of a pain, and you may need to include the column in any additional indexes you build.
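
A minimal sketch of what that looks like in practice (the bind variable name is invented):

select name
from   customers
where  customer_ref = :customer_ref
and    version_number is null
/

-- At archival time the current rows are simply stamped with the year
-- being closed off, e.g. 2010.
update customers
set    version_number = 2010
where  version_number is null
/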

Obviously maintaining all versions of the records in the same table will increase the size of your tables, which might have an effect on performance. Oracle's Partitioning option can definitely help here, and it would also give you a neat way of creating next year's set of data. However, it is a chargeable extra on top of the Enterprise Edition licence, so it is an expensive option.
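
By way of illustration only, a list-partitioned version of the table might look something like this (partition names are invented; null is a legal value in a list partition, so the current version gets its own partition):

create table customers 
( id number not null primary key
  , customer_ref number not null 
  , version_number number
  , name varchar2(30) not null
  , constraint whatever unique (customer_ref, version_number) )
partition by list (version_number)
( partition customers_current values (null)
  , partition customers_2009 values (2009)
  , partition customers_2010 values (2010) )
/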

The most time-consuming aspect of this will be managing foreign key relationships in the new versions of the tables. Presuming you choose to use synthetic primary keys, the archival process will have to generate new IDs and then painstakingly cascade them to the foreign key columns of the dependent records in the new versions of the referencing tables.
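
A hypothetical sketch of that cascade for a dependent ORDERS table (the table, sequence and bind names are invented; it assumes the new current CUSTOMERS rows already exist and that the business key CUSTOMER_REF links the old and new versions):

insert into orders (id, customer_id, order_ref, version_number)
select orders_seq.nextval        -- new synthetic key for the new version
     , cust_new.id               -- remapped foreign key
     , ord.order_ref
     , null                      -- new rows become the current version
from   orders    ord
join   customers cust_old on cust_old.id = ord.customer_id
join   customers cust_new on cust_new.customer_ref = cust_old.customer_ref
                         and cust_new.version_number is null
where  ord.version_number = :archived_version
/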

Thinking about this makes discrete tables for each version seem very attractive. For ease of use I would keep the current version un-prefixed, so that archiving becomes simply a process of

create table customers_n as select * from customers; 

You might want to avoid downtime while creating the versioned tables. In that case you could use materialized views to capture the tables' state during the run-up to the archival switchover. When the clock strikes twelve you can switch off the refresh. (caveat: this is thinking on the fly, I have never done anything like this so try before you buy.)
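
A rough sketch of that idea (untested, names invented), assuming the table has a primary key so a fast-refreshable snapshot is possible:

-- Capture changes to the live table so the snapshot can be kept
-- up to date cheaply during the run-up.
create materialized view log on customers with primary key
/

-- The snapshot that will become the 2010 archive copy.
create materialized view customers_2010
refresh fast on demand
as select * from customers
/

-- Refresh as often as you like before the cutover...
--   exec dbms_mview.refresh('CUSTOMERS_2010')
-- ...and simply stop refreshing at the switchover.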

One pertinent advantage of multiple tables (and Partitioning) is that you can move the archived records to a READ ONLY tablespace. This not only preserves them from unwanted change, it also means you can exclude them from subsequent backups.
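
For example (the tablespace name is invented), assuming the archived table has been given its own tablespace:

alter table customers_2010 move tablespace archive_2010
/
-- remember that MOVE invalidates the table's indexes, which will need rebuilding
alter tablespace archive_2010 read only
/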

edit

I notice you have commented that the archived data can occasionally be amended. In that case moving it to READ ONLY tablespaces is not a goer.

APC
>> I am unclear why you think keeping version information at the record level will be too slow. Too slow in creating a new version? Or too slow when it comes to data retrieval during regular operations?

Too slow during regular retrieval operations, but I must say that my perception of it being too slow is based on no facts at all and is entirely a gut feel. We are probing the client to establish whether they have the Partitioning option; failing that, we are going for the table versioning strategy. Thank you for your input, it was greatly appreciated!
A: 

The only thing I will add to what APC said is regarding your asking for "namespaces".

A namespace in Oracle is a schema, whereby you can have the same object name(s) in each schema.

Of course this all depends on how your app must access multiple versions, but I would lean towards a different schema for each year before I would use some sort of naming convention to maintain versions of tables in the same schema. The reason is that eventually you will have a nightmare. At least with different schemas, all DDL can be the same, all references to objects will be the same, and tools like ER modellers and query tools will work within the context of that schema. Data models change, so at some point you may need to run some compare tools, and if all your tables have funky names with some sort of version suffix, that won't work well.

Also, a schema can be copied or moved quickly with export or Data Pump using the fromuser/touser or remap_schema options, so you won't need much code, except to clean any of last year's data out of the new version.
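
For instance, copying this year's schema into next year's with Data Pump might look something like this (the schema, directory and dump file names are invented):

expdp system DIRECTORY=dpump_dir DUMPFILE=app_2010.dmp SCHEMAS=app_2010

impdp system DIRECTORY=dpump_dir DUMPFILE=app_2010.dmp REMAP_SCHEMA=app_2010:app_2011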

I find schemas are very useful as "containers" and most apps I host only have schema level privileges, so I'm guaranteed the app can be easily and quickly moved from instance to instance, or multiple copies of the app can be hosted side-by-side on the same instance.

mrjoltcola
A: 

Might the schema change between years? For example, in 2010 you have fifteen columns but in 2011 you add a sixteenth. If so, will the same application work on both the 2010 and 2011 data?

If the schema is static, I'd go for a single table with a 'YEAR' column and use VPD/RLS/FGAC to apply a YEAR = '2010' predicate.
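
As a sketch only (untested; the schema, context and object names are invented, and the APP_CTX application context would have to be created and set separately), a VPD policy along these lines appends the year predicate automatically:

create or replace function year_policy (p_schema varchar2, p_object varchar2)
  return varchar2
as
begin
  -- Predicate appended to every query against the protected table.
  return 'year = sys_context(''app_ctx'', ''current_year'')';
end;
/

begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'CUSTOMERS',
    policy_name     => 'customers_year_policy',
    function_schema => 'APP',
    policy_function => 'year_policy',
    statement_types => 'select');
end;
/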

I'd only worry about partitioning if performance was a problem.

Gary