Right now I have a development server running a basic LAMP configuration. The production server is a Slicehost slice. But I'm wondering what the best way is to push the code/db through the stages: dev > stage > production. Does it have to do with how you create the stages?

How do you do it without bringing the site down? Is it even possible if you don't do load balancing?

I know this is somewhat general, I'm just looking to be pointed in the right direction.

+1  A: 

I'd look into some sort of automated "build"-style environment for LAMP, where you have scripts that package and prepare your releases for each environment.

I recognize there is no actual build step for PHP, but you could set up automation to apply any configuration or setup changes and save everything off to a folder, ready to be deployed.

I don't believe you can completely eliminate downtime without a load-balancing / web-farm style environment. However, the easiest way to reduce it, in my book, is to establish a consistent code-prep process and test that process multiple times. Automation would help there.

As for the act of actually copying the files, I don't know of much more than using something like FTP or whatever is convenient, and maybe putting up a loading message. Again, this could all be scripted out.

Finally, keep in mind that since PHP isn't compiled, it might work well for you to track the differences between what's deployed and what you've changed, and move only those files. Sometimes that can add unneeded complexity, though.
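Along those lines, here's a minimal sketch of that kind of release-prep script, assuming a local checkout in src/ and per-environment config files in config/ (all paths and names are hypothetical; the "demo setup" block stands in for your real working copy):

```shell
#!/bin/sh
# Sketch of automated release prep for a LAMP site (all paths hypothetical).
set -e

# --- demo setup: stand-in for your real checkout and configs ---
mkdir -p src config
echo "<?php echo 'hello';" > src/index.php
echo "<?php \$db = 'dev-db';" > config/dev.php
# ---------------------------------------------------------------

ENV="dev"
BUILD="build/$ENV"
rm -rf "$BUILD"
mkdir -p "$BUILD" releases

cp -R src/. "$BUILD/"                      # clean copy of the code
cp "config/$ENV.php" "$BUILD/config.php"   # swap in the per-environment config
tar -czf "releases/site-$ENV.tar.gz" -C "$BUILD" .   # saved off, ready to deploy
echo "release ready: releases/site-$ENV.tar.gz"
```

The same script run with ENV=stage or ENV=production gives you one consistent, repeatable prep step per environment.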

JoshReedSchramm
+2  A: 

I use .htaccess to create a quick "maintenance mode" where only my IP can see the main site while updating. Everybody else sees a short message so they know everything should be back online in a few ticks.
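A minimal .htaccess along those lines might look like this (the IP address and file name are placeholders, assuming mod_rewrite is enabled):

```apache
# Maintenance mode: let my IP through, show everyone else a notice page
RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.42$
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^.*$ /maintenance.html [R=302,L]
```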

Then I:

  1. Make any DB edits
  2. SVN export/upload the files
  3. Run the automated tests and give as much as possible a quick look-over to make sure there's nothing hideously wrong
  4. Revert the .htaccess

It's a given, but you should test things as much as you can locally before pushing to a live server. Some people use two live servers (e.g. a cryptically named subdomain on the production server) as a test area for live-time updates. This can reduce the main site's actual downtime.

I should stress that it's important not to push updates to a live server without shutting it down first (especially if you're using binaries, à la ASP.NET), because users trying to use the site while you update will get hideous error messages, and you might run into locked files.

Oli
And I'll add that the .htaccess swap-out, the DB edits, the svn checkout/file upload and the automated unit testing can all be scripted, so an update happens much, much faster.
Oli
+1  A: 

I don't know if this is a good idea, but what about automated checkouts from a source control system? Perhaps have a few branches: testing for bleeding-edge work, development for maintenance and small improvements, and production for the production code. Whenever development is stable, merge it into the production branch and have the production machine check it out automatically on a regular basis.
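The "checked out on a regular basis" part could be as simple as a cron entry on the production machine (path and schedule are hypothetical, assuming the docroot is a working copy of the production branch):

```
# Every 15 minutes, pull whatever is currently merged into the production branch
*/15 * * * *  svn update -q /var/www/site
```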

Thomas Owens
A: 

As far as pushing files out to all the web servers goes, I find that good old robocopy does the trick. My dev/stage/prod environments are all identical, of course. Just put up a temporary page that tells users the site will be right back.

Alex
A: 

Right now we use a series of shell scripts with configuration files that tar up our changed files, scp them to each server in the cluster, and then untar them once they're there. This method has its drawbacks, of course, and we're contemplating a method where each cluster member has an svn client installed; once we tag a new release, we switch the working copy on the production servers to that new tag. Of course, we do releases during our maintenance hour these days, so we don't have to do anything special for our users (they see a maintenance page anyway).
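The tar-and-scp approach above can be sketched roughly like this (hosts and paths are hypothetical; the demo setup fakes a working copy, and SERVERS is left empty here so the push loop is a no-op):

```shell
#!/bin/sh
# Sketch of tarring changed files and pushing them to a cluster.
set -e

SERVERS=""          # e.g. "web1.example.com web2.example.com"
SRC="site"
STAMP=".last_release"

# --- demo setup: a fake working copy with one changed file ---
mkdir -p "$SRC"
touch -t 202001010000 "$STAMP"        # pretend the last release was old
echo "<?php // changed" > "$SRC/index.php"
# -------------------------------------------------------------

# Tar up only the files changed since the last release
find "$SRC" -type f -newer "$STAMP" > changed.txt
tar -czf release.tar.gz -T changed.txt

# Copy to each server in the cluster and untar in place
for host in $SERVERS; do
    scp release.tar.gz "deploy@$host:/tmp/"
    ssh "deploy@$host" 'tar -xzf /tmp/release.tar.gz -C /var/www'
done

touch "$STAMP"      # mark this release as the new baseline
```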

firebird84
A: 

I have an option in my core configuration to either allow access to the site or redirect all users to a list of preset URLs (stopping any API traffic with an appropriate HTTP status code).

These URLs point to static HTML files explaining to the user what's going on.

When this is enabled, no request touches the database or any application file; every request is sent to the HTML file before that can happen, which gives me a 'clear' window to deploy any updates.

Ross
A: 

Apache Ant, with tasks for SVN and FTP, foots the bill for me. There are even people who do the DB work with Ant, but I tend to want to watch those steps personally. Once you have a clean-and-push build that FTPs to all of your locations, you'll be amazed at how easy it is.
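A build.xml fragment in that spirit might look like the following. This is a sketch, not a drop-in file: the SVN task here assumes the third-party SvnAnt library, the `<ftp>` task is one of Ant's optional tasks (it needs Apache Commons Net on the classpath), and all URLs, hosts and credentials are placeholders:

```xml
<!-- Hypothetical Ant targets: export a tag, then FTP it to production -->
<target name="export">
  <svn>
    <export srcUrl="http://svn.example.com/site/tags/1.2.0"
            destPath="build"/>
  </svn>
</target>

<target name="push" depends="export">
  <ftp server="www.example.com"
       userid="deploy"
       password="${ftp.password}"
       remotedir="/public_html">
    <fileset dir="build"/>
  </ftp>
</target>
```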

anopres