Right now I am working on a solution to archive older data from a big working database into a separate archive database with the same schema. I move the data using SQL scripts and SQL Server Management Objects (SMO) from a .NET executable written in C#.

The archived data should still be accessible and even (occasionally) changeable; we just want it out of the way to keep the working database lean and fast.

Hurling large portions of data around and managing the relations between tables has proven to be quite a challenge.
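
To illustrate, here is a stripped-down sketch of the kind of move involved, using hypothetical parent/child tables dbo.Orders and dbo.OrderLines in databases WorkDb and ArchiveDb on the same server:

    DECLARE @cutoff datetime;
    SET @cutoff = '2008-01-01';

    BEGIN TRANSACTION;

    -- Copy parents before children so the archive's foreign keys hold
    INSERT INTO ArchiveDb.dbo.Orders
        SELECT * FROM WorkDb.dbo.Orders WHERE OrderDate < @cutoff;

    INSERT INTO ArchiveDb.dbo.OrderLines
        SELECT l.*
        FROM WorkDb.dbo.OrderLines AS l
        JOIN WorkDb.dbo.Orders     AS o ON o.OrderId = l.OrderId
        WHERE o.OrderDate < @cutoff;

    -- Delete children before parents so the working database's keys hold
    DELETE l
    FROM WorkDb.dbo.OrderLines AS l
    JOIN WorkDb.dbo.Orders     AS o ON o.OrderId = l.OrderId
    WHERE o.OrderDate < @cutoff;

    DELETE FROM WorkDb.dbo.Orders WHERE OrderDate < @cutoff;

    COMMIT TRANSACTION;

Multiply this by dozens of tables with foreign keys between them and the ordering gets fragile fast.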

I wonder if there is a better way to archive data with SQL Server.

Any ideas?

+1  A: 

I think if you still want/need the data to be accessible, then partitioning some of your biggest or most-used tables could be an option.
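
For example, a minimal sketch of date-based partitioning (all table names, dates, and filegroups here are illustrative, and the filegroups would need to be added to the database first):

    -- Partition a hypothetical dbo.Orders table by year
    CREATE PARTITION FUNCTION pfOrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01');

    -- Two boundary values give three partitions: old years go to a
    -- separate filegroup, the current year stays on fast storage
    CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate TO ([fgArchive], [fgArchive], [fgCurrent]);

    CREATE TABLE dbo.Orders (
        OrderId   int      NOT NULL,
        OrderDate datetime NOT NULL,
        -- ...remaining columns...
        CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
    ) ON psOrderDate (OrderDate);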

Galwegian
+1  A: 

Yep, use table and index partitioning with filegroups.

You don't even have to change your SELECT statements; you only need to if you want to squeeze the last bit of speed out of the results.
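
With a scheme like the one sketched above, sliding an old partition out is a metadata-only operation (again with made-up names; ALTER TABLE ... SWITCH requires the target table to be empty, identically structured, and on the same filegroup as the source partition):

    -- Empty table with the same structure as dbo.Orders,
    -- on the filegroup that holds the oldest partition
    CREATE TABLE dbo.Orders_Pre2007 (
        OrderId   int      NOT NULL,
        OrderDate datetime NOT NULL,
        -- ...remaining columns...
        CONSTRAINT PK_Orders_Pre2007 PRIMARY KEY (OrderId, OrderDate)
    ) ON [fgArchive];

    -- Moves the pre-2007 rows out without physically copying any data
    ALTER TABLE dbo.Orders SWITCH PARTITION 1 TO dbo.Orders_Pre2007;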

Another option is workload balancing across two servers, with two-way replication between them.

Biri
A: 

We are in a similar situation. For regulatory reasons we cannot delete data for a set period of time, but many of our tables grow very large and unwieldy, and realistically much of the data that is older than a month can be removed with few day-to-day problems.

We currently prune the tables programmatically, using a custom .NET/shell combination app that runs BCP to export the data to backup files, which can be zipped and left on an out-of-the-way network share. This isn't particularly accessible, but it is more space-efficient. (It is complicated by our needing to keep certain historic dates, rather than being able to truncate at a certain size or with key fields in certain ranges.)
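
For reference, a typical BCP export of that shape looks something like this (server, database, and file names are invented; -T uses a trusted connection and -n native format):

    bcp "SELECT * FROM WorkDb.dbo.Orders WHERE OrderDate < '2008-01-01'" queryout Orders_pre2008.dat -S MYSERVER -T -n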

We are looking into alternatives but, I think surprisingly, there isn't much in the way of best practice on this topic!

Unsliced