My project modules are organized in such a way that I have to maintain the following database structure.

I have 5 schemas, each containing 20 to 50 tables. Each schema is associated with one module of the project.

All the tables that are used by more than one module are kept in the "COMMON" schema. Each module takes 2 to 5 months to finish.

So when the first module is finished we open it for use and continue with the second one, and so on. Meanwhile, real data is being inserted into the finished module's schema. The table definitions in the finished module's schema might also change slightly, depending on bug fixes and suggestions.

I want to be able to back up/restore and synchronize the database schema by schema (I think this is called vertical partitioning?).

In DB2 there was a way to put tables into different tablespaces and back up/restore the tablespaces separately.

I'm looking for similar or better solutions to this problem in SQL Server. So the problems are:

  1. How to apply table definition changes to a table that already contains real data.
  2. How to update/change/back up/restore/synchronize the database schema by schema.

Any kind of advice is greatly welcome.

+1  A: 

Another way to approach this would be to create a distinct filegroup for each schema and manage the backup of each set of files outside of SQL Server, if SQL Server doesn't allow filegroup-specific backups. Splitting your database across files can also improve performance, e.g. if you place indexes on a separate filegroup and physical disk from your table data.
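
For example, putting each module's schema on its own filegroup looks roughly like this (an untested sketch; the database name, file path, and the ModuleA schema and table are placeholders, and the schema itself is assumed to exist already):

    -- Create a filegroup for the module and add a data file to it
    ALTER DATABASE MyProjectDb ADD FILEGROUP ModuleA_FG;

    ALTER DATABASE MyProjectDb
    ADD FILE (
        NAME       = 'ModuleA_Data',
        FILENAME   = 'D:\Data\MyProjectDb_ModuleA.ndf',
        SIZE       = 100MB,
        FILEGROWTH = 50MB
    ) TO FILEGROUP ModuleA_FG;

    -- Filegroup placement is per table (not per schema), so every table
    -- belonging to the module's schema is created ON that filegroup
    CREATE TABLE ModuleA.Customer (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(200) NOT NULL
    ) ON ModuleA_FG;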

Codewerks
That's what I was thinking. In that case I could just copy-replace filegroups, right? But it seems it's not possible to put different schemas into different filegroups (or at least I couldn't find out how). What I found is that filegroups are for partitioning table rows into different groups (correct me if I'm mistaken). Could you please point me to a tutorial or page where I can find further information on this?
Azho KG
I think if you don't specify a partitioning function, the table goes to the specified filegroup.
Azho KG
By default, tables go to the filegroup marked as default in your database properties. You can override this for each table in that table's properties. http://msdn.microsoft.com/en-us/library/aa174545(SQL.80).aspx
Codewerks
I have to protest - this is a bad idea on a number of levels. 1. You'd have to stop SQL Server to unlock the files and allow a backup of the raw LDF/MDF/NDF files; that's a sketchy idea operationally. 2. You'd best test replacing one of those MDF/NDF files with one from a different version of your DB before settling on this, because I don't think that works. The various files need to come from the same database/log sequence to be transactionally consistent.
onupdatecascade
A: 

Your overall design seems sound, and better than separate databases, in spite of this challenge. Reasons: transactional consistency between schemas and the ability to do things like mirroring and log shipping cleanly.

It is true that the only practical backup solution will back up the whole DB. Filegroup backups are possible in SQL Server, but they don't match your desired behaviour; they are meant for backing up VLDBs where some of the data is locked or read-only. However, backup/restore and schema versioning are separate issues, I think, so this doesn't seem like a reason to change the design.
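
For completeness, the filegroup backup syntax itself is simple (a sketch; the database, filegroup, and path names are placeholders), but restoring that filegroup later also requires the subsequent log backups to bring it consistent with the rest of the database, so it does not give you an independent per-module restore:

    -- Back up only the module's filegroup (sketch; full-database and log
    -- backups are still needed to make a later filegroup restore consistent)
    BACKUP DATABASE MyProjectDb
        FILEGROUP = 'ModuleA_FG'
        TO DISK = 'D:\Backups\MyProjectDb_ModuleA_FG.bak';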

I don't think, then, that your problem is different from any other schema version control problem, except maybe more complicated: you have to make and ship SQL scripts containing the schema updates, for each schema.

As for revising existing tables, you do that with ALTER TABLE whenever possible; when you are up against a change that cannot happen that way, you have to make a new table, insert all the data from the existing table (in a transaction), and then drop the old table and rename the new one. Standard practice, if a little ugly.
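
A sketch of both approaches (the schema, table, and column names are invented for illustration):

    -- 1) In-place change, when ALTER TABLE can express it
    ALTER TABLE ModuleA.Customer ADD Email NVARCHAR(320) NULL;

    -- 2) Rebuild, when it can't: new table, copy, drop old, rename new.
    --    Any foreign keys, indexes, and permissions on the old table
    --    have to be recreated as well.
    BEGIN TRANSACTION;

    CREATE TABLE ModuleA.Customer_New (
        CustomerId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name       NVARCHAR(200) NOT NULL,
        Email      NVARCHAR(320) NULL
    );

    SET IDENTITY_INSERT ModuleA.Customer_New ON;
    INSERT INTO ModuleA.Customer_New (CustomerId, Name, Email)
    SELECT CustomerId, Name, NULL
    FROM ModuleA.Customer;
    SET IDENTITY_INSERT ModuleA.Customer_New OFF;

    DROP TABLE ModuleA.Customer;
    EXEC sp_rename 'ModuleA.Customer_New', 'Customer';

    COMMIT TRANSACTION;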

onupdatecascade
For revising existing tables, your suggestion makes sense and that's what I was doing. It seems there is no better solution anyway, thanks. But the second problem is bigger: I don't want to touch existing schemas when updating/adding other schemas. What comes to my mind is to generate SQL from the development environment and apply it to the production one every time I need to - but then I have to create around 100-200 tables every time. Any other suggestions?
Azho KG