We have been using Visual Studio Team System Database Edition recently, and I have to say it has worked very well. All stored procedures are stored as files and checked in and out of source control, and it includes tools to generate deployment scripts and the like.
Also, in the past we've used scripts stored as text files and checked in and out of source control. The rule was that you had to check out the file, edit it in Management Studio (for instance), save it, and check it back in. Every stored procedure script file began by dropping the existing stored proc and then recreating it with a CREATE statement, which gets around the CREATE/ALTER problem. We then had a tool that would run all the scripts in the right order to build a blank database from scratch, and we used RedGate's SQL Compare product to generate a script to bring our existing databases up to date. I admit it was tedious.
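A minimal sketch of that drop-and-recreate header pattern (the procedure and table names here are made up for illustration):

```sql
-- Every script file started like this, so the same file works
-- whether or not the procedure already exists.
IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;
GO

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerId int
AS
BEGIN
    SELECT CustomerId, Name
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
END
GO
```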
Around the same time I worked on a system with about 10 other developers, and they implemented a rigorous database change management procedure. Many, many applications all depended on a set of 2 or 3 databases. Any time a database schema had to change (we're only talking tables, columns, and views here), a document was created explaining the change, along with a matrix mapping each change to the applications we thought it would impact. The document was then circulated and had to be reviewed by someone responsible for each application, who would search their application for anywhere it might be affected. It was a long, arduous procedure, but it worked. Stored procs, however, were just stored as text files in source control.
In the more distant past, on smaller projects that were more like desktop apps with a database as the data store, every time the app started, the startup code would:
- Check to see if the database existed, and if not, create it
- Check that all the tables existed, and if not, create them
- Check that all the columns existed, and if not, add them
Whenever I needed to change the schema, I would just add more code to the end of that startup sequence to modify the schema as necessary, taking care to migrate any existing data (see the sketch after this paragraph). The benefit was that you could just uninstall and reinstall a new version of the software, and it would automatically upgrade the existing database to the latest version. Installation, upgrades, and maintenance were a dream. That wouldn't work for more "enterprisey" systems, though.
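A rough T-SQL sketch of those idempotent startup checks, assuming SQL Server; the database, table, and column names here are invented for illustration:

```sql
-- Runs on every launch: each check is a no-op if the schema is
-- already current, so any older database gets upgraded in place.
IF DB_ID('MyAppDb') IS NULL
    CREATE DATABASE MyAppDb;
GO
USE MyAppDb;
GO

IF OBJECT_ID('dbo.Customer', 'U') IS NULL
    CREATE TABLE dbo.Customer (
        CustomerId int IDENTITY(1,1) PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );

-- A later schema change, simply appended to the end of the script:
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    -- migrate any existing data here if the new column needs backfilling
END
```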
You can reduce some of these problems by adopting ADO.NET Entities or another, similar entity framework, like Entity Spaces. These are object-relational mapping layers. They auto-generate a class for each entity (table) in your database, with a property for each column, and then let you extend those classes with custom logic. If you can get away from having your business logic in stored procedures and put it in the entity classes instead, the benefit is that everything is strongly typed: if you rename or delete a column and regenerate your entity classes, your IDE or compiler will automatically flag every place the code is broken. Naturally, all the entity code lives in source control with the rest of your source code too.