I am working on a new database project (within VS2008), and since I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion).

I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information.

I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into a repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date and that it can be built locally on every developer's machine.
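If I understand it correctly, one of those versioned change scripts might look something like the sketch below (the SchemaVersion bookkeeping table and the object names are my own guesses for illustration, not something from the articles):

    -- 001_create_customer.sql: one file per schema change, checked into Subversion.
    -- Assumes a SchemaVersion table that records which scripts have already run.
    IF NOT EXISTS (SELECT * FROM dbo.SchemaVersion WHERE ScriptId = 1)
    BEGIN
        CREATE TABLE dbo.Customer
        (
            CustomerId INT IDENTITY(1,1) PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL
        );

        INSERT INTO dbo.SchemaVersion (ScriptId, AppliedOn)
        VALUES (1, GETDATE());
    END

A build step would then run each script in order (for example, via sqlcmd), so any developer's local database can be brought up to date from a clean checkout.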

This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, I have another question - does this also mean that I should NOT check in the .mdf or .ldf files that Visual Studio creates?
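If that is the case, I am guessing I would then tell Subversion to ignore those generated files, something like this (assuming the files sit in the project directory; the exact quoting of the multi-line value varies by shell):

    svn propset svn:ignore "*.mdf
    *.ldf" .
    svn commit -m "Ignore generated SQL Server data and log files"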

Thanks for any help and additional insight. Always appreciated.

+1  A: 

I use DataConstructor but am biased because I wrote it.

+4  A: 

That is correct; you should check in scripts, not the database file itself.

I'm not a fan of building from test data unless the data itself mimics the size of the data that production has (or, in the case of new databases, is intended to have). Why? Because writing code against a table with 100 records doesn't tell you whether it will run in a timely fashion when you have 10,000,000 records. I've seen way too many bad design choices made by people who think a small data set is OK for development.
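If you do generate test data from scripts, at least generate it at production-like volume. A quick sketch of one way to do that (the table and column names are made up):

    -- Insert ~1,000,000 rows of dummy data using a cross join of a system view.
    INSERT INTO dbo.Customer (Name)
    SELECT TOP (1000000)
        'Customer ' + CAST(ROW_NUMBER() OVER (ORDER BY a.object_id) AS VARCHAR(10))
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b;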

Here, we do not allow devs to have a separate database on their box (which typically limits the size the database can be, by virtue of not being a server attached to a SAN); instead, they must work against the dev database, which is periodically refreshed from prod (and then all the new dev scripts are run) to keep the data the right size. I think it is important that your dev database environment match prod as closely as possible, including equipment configuration, size of the database, etc. Nothing is more frustrating than spending a long time developing something that either won't work at all on prod or has to be taken down immediately because it slows the system too much.
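For concreteness, that refresh is basically a restore of the latest production backup over dev, followed by re-running any pending dev scripts. A rough sketch, with placeholder names and paths:

    -- Restore the latest production backup over the dev database.
    RESTORE DATABASE DevDB
    FROM DISK = N'\\backupserver\sql\ProdDB_latest.bak'
    WITH MOVE 'ProdDB_Data' TO N'D:\Data\DevDB.mdf',
         MOVE 'ProdDB_Log'  TO N'E:\Logs\DevDB.ldf',
         REPLACE;
    -- ...then apply the new dev change scripts, in order, on top of the restored copy.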

Jumping down off my soapbox now.

HLGEM
@HLGEM - Good point about the test data. That was actually pointed out in one of the articles.
JasCav
+1  A: 

It is a great idea to check in scripts, since source code control is best suited to working with text files rather than binary files. Differences in the script files can easily be reviewed as part of the rest of your code changes related to the database change. In addition to checking in the database scripts, we also check in a database schema snapshot (such as the one generated by SchemaCrawler for SQL Server). This snapshot allows us to verify that the schema in production matches the expected schema for a given version of the product. Besides that, the schema snapshot is a handy way to search for columns and tables using a plain text editor.
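Even without a dedicated tool, you can pull a rough plain-text snapshot from the INFORMATION_SCHEMA views and check it in alongside the scripts; a minimal sketch:

    -- One row per column, ordered deterministically so snapshots diff cleanly.
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;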

Sualeh Fatehi
A: 

Try SQL Examiner: How to keep your database under version control

SQLDev