Our company is nearing its "go live" date (and the date it gets a QA department), and I'm trying to define the right operational processes to support this. A big consideration of mine is how to avoid the deployment/configuration hell that inevitably occurs otherwise. Have any of you found a good solution for handing off builds to non-programmers so that they can successfully install and configure them in QA, staging, and production environments?

A full environment for us is composed of a heterogeneous mixture of scheduled tasks, Windows services, and web sites, all of which can be scaled out through parallel deployment. Thankfully, the means of configuration is consistent. Unfortunately, it's all managed through .NET web/app.config files. In my experience, QA and ops folks always mess up when trying to modify them (XML is surprisingly hard for most people to handle!).

Here are the options I'm considering:

Using machine.config files

This is something I haven't done in practice, but it looks promising. If we create a machine.config template containing every setting for every application that can vary by environment, this would allow an admin to make all changes to one file and deploy it to each machine in the environment.
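
For what it's worth, the reason this is transparent to the applications is that .NET merges machine.config into every application's configuration. A minimal sketch of reading such a setting (the "SmtpHost" key is made up):

    using System;
    using System.Configuration;

    class ReadMachineSetting
    {
        static void Main()
        {
            // "SmtpHost" is a hypothetical key. If it is defined in the
            // machine-wide <appSettings> of machine.config, every .NET app
            // on the box reads it through the same API it would use for a
            // value in its own web.config/app.config.
            string smtpHost = ConfigurationManager.AppSettings["SmtpHost"];
            Console.WriteLine(smtpHost ?? "not configured on this machine");
        }
    }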

Pros:
  • This potentially reduces the number of steps necessary to deploy a system
Cons:
  • Having to somehow document configuration schema changes
Unknowns:
  • We make use of custom config sections and other configuration extensions that reference assemblies. Would this require us to install our .NET assemblies in each machine's GAC? (See the sketch below.)
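
To make that unknown concrete, here is the shape of such a section (names are illustrative). Whatever type is named in <configSections> must be resolvable by every application that loads the file, so declaring it in machine.config effectively pushes the assembly toward the GAC:

    using System.Configuration;

    // Hypothetical custom section. machine.config would declare it in
    // <configSections> with this type's assembly-qualified name; the
    // runtime then has to resolve that assembly for every app on the
    // machine, which in practice means a strong-named assembly in the GAC
    // (or a copy alongside every application).
    public class QueueSettingsSection : ConfigurationSection
    {
        [ConfigurationProperty("pollIntervalSeconds", DefaultValue = 30)]
        public int PollIntervalSeconds
        {
            get { return (int)this["pollIntervalSeconds"]; }
        }
    }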

Perform config file manipulations in the build process

If we set up the QA, staging, and production environments so they appear identical to our software (virtual servers and LANs, etc.), QA should be able to transition software that is ready, with no configuration changes, directly to the staging environment, and staging to production. With this setup, we could theoretically hand QA pre-configured foo.config files that nobody needs to touch.
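
A sketch of what the build-time manipulation could look like, assuming a standard <connectionStrings> section (the "Main" entry and server names are made up):

    using System.Xml;

    // Hypothetical build step: load the checked-in web.config, point it at
    // the target environment, and write out the pre-configured copy that
    // gets handed to QA.
    class StampConfig
    {
        static void Main(string[] args)
        {
            string env = args[0]; // "qa", "staging", or "production"

            var doc = new XmlDocument();
            doc.Load("web.config");

            // assumes a <connectionStrings><add name="Main" .../> entry exists
            var add = (XmlElement)doc.SelectSingleNode(
                "//connectionStrings/add[@name='Main']");
            add.SetAttribute("connectionString",
                "Data Source=" + env + "-sql01;Initial Catalog=App;Integrated Security=SSPI");

            doc.Save("web." + env + ".config");
        }
    }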

Pros:
  • Engineering would be more adept at ensuring that configuration files are valid
Cons:
  • It may be considered poor security practice for engineering to be aware of production configurations (a poor argument, IMHO)

Have a network-centralized settings repository

This one doesn't look attractive to me, because I've tried it in three ways, and all were ultimately failures:

  1. At a prior company, we had the configuration settings in the database, but of course you can't put them all in there, since you need to configure the connection string to that database. Also, it was just as difficult to ensure that the database got properly updated before deployment.
  2. Another approach we'd tried was to have a networked service that worked as sort of a centralized registry. This almost worked, but there were always issues with local caching, ensuring that the URL to the config server was properly configured, and of course configuring the config server.
  3. Active Directory? Ew! Need I say more?

Thoughts?

How successful have you been with using the options I'm considering? Are there any alternatives to these that have worked well for you?

+1  A: 

I've seen companies use a deployment script and virtual machines that mirror production, so that QA and staging builds can be deployed to them at the press of a button. You can use PowerShell to do this.
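
The heart of such a script is just a mirrored copy plus an IIS recycle on the target. A sketch of the idea as a tiny console app (paths and machine names are made up; the same two steps translate directly to PowerShell):

    using System.Diagnostics;

    // Hypothetical one-button deploy: mirror the build drop onto the QA VM
    // and recycle IIS there so the new bits are picked up.
    class PushBuild
    {
        static void Main()
        {
            // /MIR makes the target an exact mirror of the build output
            Process.Start("robocopy",
                @"\\build\drops\MyApp\latest \\qa-vm01\wwwroot\MyApp /MIR")
                .WaitForExit();

            // iisreset accepts a remote machine name
            Process.Start("iisreset", "qa-vm01").WaitForExit();
        }
    }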

George Stocker
+3  A: 

I'd say do a couple of things:

First, use a staging server. Whether for engineering or non-engineering folks, have a location where you perform "mock deployments" of your code and test it from there. This gives you a distinct "production-like" environment to test from, and it allows for deployment training without causing everybody to go all freaky and shaky from fear of nuking everything on a deployment. It costs a bit more in terms of hardware, but it's probably worth it for the errors it prevents.

Second, if your configuration files are truly complex and hard for non-engineers to construct, create a quick tool that will generate your configuration files for deployment. A simple website or even a client-side app that just takes the basic deployment parameters, does some validation on them, and then saves everything in the right format can do wonders in terms of helping out the less-technical folks. The confidence that comes from having a tool that validates their input can be really useful for those folks, and knowing that you're always going to have well-formed XML with validated results can save some engineer worry time as well.
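
A minimal console sketch of such a tool (the prompt and key names are made up); because the output goes through an XML writer, it can't come out malformed:

    using System;
    using System.Xml.Linq;

    // Hypothetical config builder: prompt for the values that vary by
    // environment, validate them, and emit the XML.
    class ConfigBuilder
    {
        static void Main()
        {
            Console.Write("SQL server name: ");
            string sqlServer = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(sqlServer))
                throw new ArgumentException("SQL server name is required.");

            var doc = new XDocument(
                new XElement("appSettings",
                    new XElement("add",
                        new XAttribute("key", "SqlServer"),
                        new XAttribute("value", sqlServer))));

            doc.Save("generated.config"); // always well-formed XML
        }
    }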

McWafflestix
A: 

I've always had success with a mix of local and database-driven settings storage. Machine-specific settings were stored in an XML file on the machine (this included connection information for our root database), as were any application-specific (but not user-specific) settings. Any user-specific or enterprise-wide settings were stored in the database, including connection information for OTHER databases. In other words, we had a single database with this information in it, which the client could then use to connect to other databases. This allowed us to centrally maintain everything except the connection to our root database.
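
In code, the bootstrap chain looked roughly like this (the table and key names here are illustrative):

    using System.Configuration;
    using System.Data.SqlClient;

    class SettingsBootstrap
    {
        static string GetEnterpriseSetting(string name)
        {
            // Step 1: the machine-local config file holds only the
            // connection string to the root database.
            string root = ConfigurationManager
                .ConnectionStrings["RootDatabase"].ConnectionString;

            // Step 2: everything enterprise-wide -- including connection
            // strings for OTHER databases -- is a row in that database.
            using (var conn = new SqlConnection(root))
            using (var cmd = new SqlCommand(
                "SELECT Value FROM Settings WHERE Name = @name", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }
    }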

Adam Robinson
+2  A: 

We have a custom exe that runs after builds.

Our projects have 4 config files:

web.config -- development (local box)
web.integration.config -- alpha testing (runs on our alpha server)
web.staging.config -- beta testing (runs on our beta server)
web.production.config -- production (runs on our production server)

the exe simply deletes all of the config files except the one needed and then renames it to web.config...
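
The whole thing is only a few lines; a sketch (environment names as in the list above):

    using System.IO;

    // Post-build step: keep only the config for the target environment
    // and make it the web.config.
    class SelectConfig
    {
        static void Main(string[] args)
        {
            string env = args[0]; // "integration", "staging", or "production"
            string keep = "web." + env + ".config";

            foreach (string e in new[] { "integration", "staging", "production" })
            {
                string file = "web." + e + ".config";
                if (file != keep && File.Exists(file))
                    File.Delete(file); // drop the other environments
            }

            File.Delete("web.config");    // replace the development config
            File.Move(keep, "web.config");
        }
    }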

We don't allow non-developers (QA, DBAs, etc.) to manipulate the config files, as they could change production values (mail server, SQL server) and cause some serious issues...

It works very well for us.

J.13.L
A: 

At a previous company we implemented this:

  • Developers create master config files for each app
  • Identify machine/environment-specific tokens and move those off to a separate file (see the sketch after this list)
  • CI machine does a checkout, tags the files with a new version number, runs tests, and then copies the application to a central share that the change management folks control.
  • Change management pulls up our spiffy deployment dashboard. They then pick the app they want, the version that was requested, and the requested environment. They then hit the go button.
  • Deployment app robocopies out all of the files from the staged file share to the server.
  • Deployment app then kicks off any further tasks in the build script.
  • The NAnt script was copied to each target machine and then kicked off using P/Invoke
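
The token pass (second bullet) can be as simple as this sketch; the @@TOKEN@@ syntax and the file layout are made up:

    using System.IO;

    // Hypothetical token pass: the master config carries markers like
    // @@SQL_SERVER@@, and the per-environment token file supplies the
    // values, one NAME=value pair per line.
    class ApplyTokens
    {
        static void Main(string[] args)
        {
            string text = File.ReadAllText(args[0]);   // master config

            foreach (string line in File.ReadAllLines(args[1])) // token file
            {
                string[] parts = line.Split(new[] { '=' }, 2);
                if (parts.Length == 2)
                    text = text.Replace("@@" + parts[0] + "@@", parts[1]);
            }

            File.WriteAllText(args[2], text);          // environment config
        }
    }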

Everything was implemented using Anthill, NAnt, Robocopy, and a lightweight custom app that orchestrated the deployments.

The biggest upside to this is that we almost never had a manual deployment step. Everything was repeatable and testable from dev deployments onward.

To this end, we tried to isolate as much as possible. We largely avoided the GAC and machine.config. We found over time that this helped with velocity quite a bit, since not every app wanted to move to new versions of shared components at the same time anyway.

Ben