As a database architect, developer, and consultant, I can answer many questions. One that I was asked recently and still can't answer well is...

"What is one of, or some of, the best methods or techniques to keep database changes documented, organized, and yet able to roll out effectively either in a single-developer or multi-developer environment."

This may involve stored procedures and other object scripts, but especially schemas - from documentation, to the new physical update scripts, to rollout, and then full circle. There are applications that make this happen, but they require schema hooks and overhead. I would rather learn about techniques that work without a lot of extra third-party involvement.

+3  A: 

I rather liked this series: http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx

WildJoe
I liked this link a lot. It was well thought out and informative - much more so than my accepted answer. However, if I recall correctly, Stack Overflow answers should summarize the answer and give the link, instead of being a link only. Sorry. But more importantly, thanks - it was great! Read it!
SnapJag
+1  A: 

In my case, I generate a script every time I change the database. I name the scripts sequentially (00001.sql, 00002.sql, ..., n.sql), and I keep a table that stores the number of the last script executed. You can also see Database Documentation
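A minimal sketch of that bookkeeping, assuming SQL Server (the table and column names are only illustrative):

    -- One-row table recording the number of the last script executed
    CREATE TABLE SchemaVersion (LastScriptNumber int NOT NULL);
    INSERT INTO SchemaVersion (LastScriptNumber) VALUES (0);

    -- Inside each numbered script (e.g. 00002.sql): apply only if due
    IF (SELECT LastScriptNumber FROM SchemaVersion) = 1
    BEGIN
        ALTER TABLE Customers ADD Email varchar(255) NULL; -- the actual change
        UPDATE SchemaVersion SET LastScriptNumber = 2;
    END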

Jedi Master Spooky
+3  A: 

The easiest way I have seen this done without the aid of an external tool is to create a "schema patch", if you will. The schema patch is just a simple T-SQL script. Each patch is given a version number within the script, and that number is stored in a table in the database receiving the changes.

Any new change to the database means creating a new schema patch. The patches can then be run in sequence: a run detects which version the database is currently on and applies all schema patches in between. Afterwards, the schema version table is updated with the date/time each patch was executed, ready for the next run.
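For example, a patch script might look something like this (a sketch only; the version table layout and the change itself are illustrative):

    -- Schema patch 5: assumes a SchemaVersion(Version, AppliedOn) table exists
    IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version >= 5)
    BEGIN
        ALTER TABLE Orders ADD ShippedDate datetime NULL; -- the actual change

        INSERT INTO SchemaVersion (Version, AppliedOn)
        VALUES (5, GETDATE());
    END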

A good book that goes into details like this is Refactoring Databases.

If you do wish to use an external tool, you can look at Ruby on Rails' Migrations or a similar tool in C# called Migrator.NET. These tools work by creating C# or Ruby classes with a "forward" and a "backward" migration. They are more feature-rich because they know how to move backwards through the schema patches as well as forwards. As you stated, however, you are not interested in an external tool, but I thought I would add this for other readers anyway.

Sean Chambers
A: 

As long as you only add columns and tables to your database, it is an easy task: script these changes in advance in SQL files and just execute them, perhaps in some defined order.

A good solution is to keep one file per table, so that all changes belonging to that table are visible to whoever is working on it (much like working on a class). The same applies to stored procedures and views.
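For example, a per-table file might accumulate that table's whole change history (an illustrative sketch; the names are made up):

    -- tables/Customers.sql: full change history of the Customers table
    CREATE TABLE Customers (CustomerId int NOT NULL PRIMARY KEY);
    ALTER TABLE Customers ADD Name varchar(100) NOT NULL;  -- added later
    ALTER TABLE Customers ADD Email varchar(255) NULL;     -- added later still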

The more difficult task (and the one where tools might help) is stepping back. As long as you have only added tables or columns, this is not a big issue. But if an update dropped columns and you now have to undo that update, the data is no longer there; you will need to pull it from a backup. Keep in mind that with more than a few tables this can be a big job, and in the normal case you need to undo an update very fast!

If you can simply restore the backup, you are fine. But if you roll out on Monday, your clients work until Wednesday, and only then notice that some data is missing (data you dropped from a table), you can no longer just restore the old database.

I have a model-based approach in mind (sorry, not implemented at the moment) in which schema changes are modeled (e.g., in XML), and during an update a processor (e.g., a C# program) generates all the necessary SQL and, for example, moves data that would be lost into a "drop database". The data can stay there, and if for some reason I need to restore some of the dropped data, the processor can do it. I think that over time (years) this approach pays off, because otherwise developers stop touching "old" tables since they no longer know whether a table or column is really necessary. With this approach you don't risk too much when you drop something!
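Even without the processor, the core of the drop-archive idea can be sketched directly in T-SQL (the archive database and all names here are made up for illustration):

    -- Before dropping a column, copy its data into the archive database
    SELECT CustomerId, FaxNumber, GETDATE() AS DroppedOn
    INTO DropArchive.dbo.Customers_FaxNumber  -- assumes a DropArchive database exists
    FROM dbo.Customers;

    ALTER TABLE dbo.Customers DROP COLUMN FaxNumber;

    -- If the column is ever needed again, its data can be joined back
    -- from DropArchive on CustomerId.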

karlis
A: 

What I do is:

  • All the DDL commands required to recreate the schema (and the stored procedures, the indexes, etc.) are kept in a script.
  • To be sure the script is OK, it is tested from time to time: create a database, run the script, restore a backup, and check that the database works well.
  • For change control, the script is kept in a version control system (I typically use Subversion).

The trick is that, if the database cannot be brought down and recreated with, say, an added column, I have two changes to make: an ALTER TABLE plus the matching modification in the script. A bit more work but, in the long term, it wins.
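For instance, adding a column means one incremental statement run against live databases, plus the matching edit in the versioned creation script (the names here are illustrative):

    -- Incremental change, run against existing databases:
    ALTER TABLE Invoices ADD Currency char(3) NOT NULL DEFAULT 'EUR';

    -- ...and the same column is added to the CREATE TABLE statement in the
    -- versioned schema script, so both stay in sync:
    -- CREATE TABLE Invoices (..., Currency char(3) NOT NULL DEFAULT 'EUR');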

bortzmeyer
+1  A: 

A proper disclosure follows below; however, I know the best database management software out there, and it really is state of the art. It is called TeamWork, from a company called dbMaestro, and it covers the question above: it documents the changes, helps developers organize their work, and much more. It is really the ultimate software right now. Here is what they say on their website:

"dbMaestro brought Software Configuration Management benefits into the Database development world by creating dbMaestro TeamWork".

To summarize, this is the best database change control known to man right now... :-)

OK - I work for this company... but nevertheless, I absolutely believe in the product!! So do IBM and Microsoft!!!