views: 85

answers: 1

I've been building out an SCM environment (that has to be PCI compliant, but that's tangential to this issue).

I've got to the point where I want to automate database updates, but I'm not 100% sure that's the best way forward.

Say I want to add a field to a DB table. It's easy enough to add it in the dev environment, but what about rolling the change out to the live environment? I took a look at MySQL::Diff, but spending the time to completely automate this seems like overkill to me.

I want to have a rollback option, and I want to avoid duplicating the entire database. All the tutorials I've found on SCM either don't cover this or say it can get very messy. Is there a best practice for this? Or should I just use MySQL::Diff to identify the changes and back up the individual tables before manually applying the tweaks at rollout?

+1  A: 

We store an SQL script of the required changes in git, on the same branch as the code changes that need them.
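As an illustration of what such a script can look like (the table and column names here are hypothetical, not from the question), it helps to keep the forward change and its rollback together:

```sql
-- migration: add a column to an existing table
-- (example names only -- substitute your own)
ALTER TABLE customers
    ADD COLUMN billing_address VARCHAR(255) NULL;

-- rollback: undo the change above if the release is reverted
-- ALTER TABLE customers DROP COLUMN billing_address;
```

Keeping the rollback statement in the same file (even commented out) means the branch that introduces a schema change also documents how to back it out.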

This SQL script can be applied repeatedly to "fresh" copies of production data to verify that it will work as expected.

It is our strong opinion that a dedicated DBA or Release Engineer should apply the changes, AFTER taking appropriate backups and putting restoration measures in place.
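One lightweight way to cover the rollback concern without duplicating the whole database is to back up only the tables a migration touches, just before it runs. A sketch (again, the table names are illustrative):

```sql
-- snapshot only the table being altered;
-- CREATE TABLE ... LIKE copies the structure, including indexes
CREATE TABLE customers_backup LIKE customers;
INSERT INTO customers_backup SELECT * FROM customers;

-- worst case, restore by swapping the snapshot back in:
-- DROP TABLE customers;
-- RENAME TABLE customers_backup TO customers;
```

For a purely additive change (a new column), dropping the column is usually enough; the table snapshot is the safety net for anything destructive.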

--

Navicat for MySQL is also an excellent tool to help with this. Its schema diff feature is great for verifying changes, and even for applying them.

gahooa