views: 537
answers: 5
I'm a web developer working on my own using Django, and I'm trying to get my head round how best to deploy sites using Mercurial. What I'd like is to keep one repository that I can use for both production and development work. There will always be some differences between production and development (e.g. they might use different databases, and development will always have debug turned on), but by and large they will be in sync. I'd also like to be able to make changes directly on the production server (tidying up HTML or CSS, simple bugfixes, etc.).

The workflow that I intend to use for doing this is as follows:

  • Create 2 branches, prod and dev (all settings initially set to production settings)
  • Change settings.py and a few other things in the dev branch. So now I've got 2 heads, and from now on the repository will always have 2 heads.
  • (On dev machine) Make changes to dev, then use 'hg transplant' to copy relevant changesets to production.
  • Push to the master repository
  • (On production server) Pull from master repo, update to prod head

Note: you can also make changes straight to prod so long as you transplant the changes into dev.

This workflow has the drawback that whenever you make a change, not only do you have to commit it to whichever branch you made the change on, but you also have to transplant it to the other branch. Is there a more sensible way of doing what I want here, perhaps using patches? Failing that, is there a way of automating the commit process to automatically transplant the changeset to the other branch, and would that be a good idea?

+1  A: 

Perhaps try something like this (I was just thinking about this issue; in my case it's an SQLite database):

  • Add settings.py to .hgignore, to keep it out of the repository.
  • Take your settings.py files from the two separate branches and move them into two separate files, settings-prod.py and settings-dev.py
  • Create a deploy script which copies the appropriate settings-X file to settings.py, so you can deploy either way.

If you have a couple of additional files, do the same thing for them. If you have a lot of files but they're all in the same directory by themselves, you could just create a pair of directories: production and development, and then either copy or symlink the appropriate one into a deploy directory.
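The deploy step described above can be sketched in a few lines of Python. The settings-prod.py / settings-dev.py names come from the answer; the script name, and its command-line handling are assumptions for illustration:

```python
#!/usr/bin/env python
# deploy.py -- minimal sketch of the copy-based deploy described above.
# settings-prod.py / settings-dev.py are the answer's filenames; the
# script name and CLI shape are assumed, not part of the answer.
import shutil
import sys

def deploy(target):
    """Copy the chosen per-environment settings file over settings.py."""
    if target not in ("prod", "dev"):
        raise SystemExit("usage: deploy.py prod|dev")
    shutil.copyfile("settings-%s.py" % target, "settings.py")

if __name__ == "__main__" and len(sys.argv) > 1:
    deploy(sys.argv[1])
```

Run it as `python deploy.py prod` on the production box and `python deploy.py dev` everywhere else; since settings.py is in .hgignore, the copied file never shows up as a local change.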

If you did something like this, you could dispense with the need for branching your repository.

Jason S
I like this approach because of its lack of branches; in fact I use a similar one already. Instead of making 2 settings files, I make a settings directory, put an `__init__.py` file in it, and add my settings files in there, prod.py and dev.py (idea lifted from this blog: http://blog.haydon.id.au/2009/07/django-development-workflow.html). Then I just symlink the correct wsgi script in. But unfortunately I need to change some templates as well (production uses minified versions of javascript, dev uses unminified), and I don't think I can do this without some nasty symlinking.
markmuetz
+4  A: 

I'd probably use Mercurial Queues for something like this. Keep the main repository as the development version, and have a for-production patch that makes any necessary changes for production.

Steve Losh
+2  A: 

Here are two possible solutions, one without Mercurial and one with it:

  1. Use the hostname to switch between prod and devel. We have a single check at the top of our settings file that looks at the SERVER_NAME environment variable. If it's www.production.com, it's the prod DB; otherwise it picks a specified or default dev/test/stage DB.
  2. Using Mercurial, just have a clone that's dev and a clone that's prod, make all changes in dev, and at deploy time pull from dev to prod. After pulling you'll have 2 heads in prod diverging from a single common ancestor (the last deploy). One head will have a single changeset containing only the differences between dev and prod deployments, and the other will have all the new work. Merge them in the prod clone, selecting the prod changes on conflict of course, and you've got a deployable setup, and are ready to do more work on 'dev'. No need to branch, transplant, or use queues. So long as you never pull that changeset with the prod settings into 'dev' it will always need a merge after pulling from dev, and if it's just a few lines there's not much to do.
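Option 1's hostname check might look like this at the top of settings.py. The www.production.com hostname matches the answer's example, but the database names and the DEV_DB fallback are invented placeholders, since the answer doesn't show its actual check:

```python
# Top of settings.py -- sketch of the SERVER_NAME switch from option 1.
# www.production.com is the answer's example hostname; the DB names and
# the DEV_DB fallback are placeholders, not from the answer.
import os

if os.environ.get("SERVER_NAME") == "www.production.com":
    DEBUG = False
    DATABASE_NAME = "prod_db"
else:
    DEBUG = True
    DATABASE_NAME = os.environ.get("DEV_DB", "dev_db")
```

This assumes the web server exposes SERVER_NAME in the process environment; depending on how you deploy, you may need to set it yourself or read the hostname another way (e.g. socket.gethostname()).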
Ry4an
+1  A: 

I actually do this using named branches and straight merging instead of transplanting (merging is more reliable, IMO). This usually works, although sometimes (when you've edited the differing files on the other branch) you'll need to take care not to remove the differences again when you're merging.

So it works great if you're not changing the different files much.

djc
+1  A: 

I've solved this with local settings.

  1. Append to settings.py:

    try:
        from local_settings import *
    except ImportError:
        pass
    

  2. touch local_settings.py

  3. Add ^local_settings\.py$ to your .hgignore

Each deploy I do has its own local settings (typically different DB stuff and different origin email addresses).
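The override pattern in step 1 can be exercised end to end. The setting names (DEBUG, EMAIL_FROM) and their values below are illustrative, and the temporary directory just stands in for a real deploy's checkout:

```python
# Sketch of how the local_settings override behaves. Setting names and
# values are illustrative; the tempdir stands in for a deploy directory.
import os
import sys
import tempfile

# Defaults, as they would appear near the top of settings.py:
DEBUG = False
EMAIL_FROM = "noreply@example.com"

# Simulate one deploy's local_settings.py sitting next to settings.py:
_tmpdir = tempfile.mkdtemp()
with open(os.path.join(_tmpdir, "local_settings.py"), "w") as f:
    f.write("DEBUG = True\nEMAIL_FROM = 'dev@example.com'\n")
sys.path.insert(0, _tmpdir)

# The snippet from step 1: per-deploy values win; a missing
# local_settings.py is harmless and the defaults stand.
try:
    from local_settings import *
except ImportError:
    pass
```

After the import, DEBUG and EMAIL_FROM hold the per-deploy values; on a machine with no local_settings.py, the ImportError is swallowed and the defaults remain.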

PS: I only read the "minified versions of javascript" portion later. For that, I would suggest a post-update hook and a config setting (like JS_EXTENSION).

Example (off the top of my head! not tested, adapt as necessary):

  1. Put JS_EXTENSION = '.raw.js' in your settings.py file;
  2. Put JS_EXTENSION = '.mini.js' in your local_settings.py file on the production server;
  3. Change JS inclusion from:
    <script type="text/javascript" src="blabla.js"></script>
    To:
    <script type="text/javascript" src="blabla{{JS_EXTENSION}}"></script>
  4. Make a post-update hook that looks for *.raw.js and generates .mini.js (minified versions of raw);
  5. Add \.mini\.js$ to your .hgignore
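A hook along the lines of step 4 could run a small script like this. The *.raw.js / *.mini.js naming follows the answer, but the "minification" below is just comment and whitespace stripping so the sketch stays self-contained; a real setup would call an actual JS minifier:

```python
# minify_js.py -- sketch of what a post-update hook could run (step 4).
# The *.raw.js / *.mini.js naming follows the answer; the stripping
# logic is only a stand-in for a real JS minifier.
import glob
import os

def build_minified(root="."):
    """Regenerate a .mini.js next to every .raw.js under root."""
    for raw in glob.glob(os.path.join(root, "**", "*.raw.js"), recursive=True):
        with open(raw) as f:
            lines = [ln.strip() for ln in f.read().splitlines()]
        # crude stand-in: drop // comment lines and blank lines
        kept = [ln for ln in lines if ln and not ln.startswith("//")]
        mini = raw[:-len(".raw.js")] + ".mini.js"
        with open(mini, "w") as f:
            f.write("\n".join(kept))

if __name__ == "__main__":
    build_minified()
```

It could be wired up in the production repo's .hg/hgrc with something like an `update = python minify_js.py` line under `[hooks]` (hook name and command are assumptions). Note also that for step 3's template to see JS_EXTENSION, you'd need to pass it into the template context, e.g. via a context processor.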
pkoch
This method is pretty tidy. I like the JS_EXTENSION part, does exactly what I wanted and makes sense immediately.
markmuetz