As a guy who frequently switches between QA, build and operations, I keep running into the issue of what to do about operating system updates on the build server. The dichotomy is the same on Windows, Linux, MacOS or any other o/s that can update itself via the internet:

  • The QA team wants to keep the build server exactly as it is from the beginning of the product release cycle to the end, since installing updates could destabilize the server and would mean that successive builds aren't made against the same baseline.
  • The ops team wants the software to be deployed on a system with all the latest security patches; this can mean that the software isn't deployed on exactly the same version of the o/s that it was built on.

I usually mitigate this by taking release candidate builds and installing them on a test server with a completely up-to-date o/s, repeating the automated tests that run on the build server, and doing some additional system-level testing to make sure everything looks good before deployment. However, this seems inefficient to me; does anyone have a better way?
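For concreteness, here is a minimal sketch of that validation step in Python, assuming the fully patched test server is reachable over SSH; the host name, package path, and test script names are placeholders, not our real setup:

    # Sketch: push a release candidate to a fully patched test server
    # and repeat the automated tests there. All names are placeholders.
    import subprocess

    TEST_HOST = "qa@patched-test.example.com"   # box with all o/s updates
    RC_BUILD = "./dist/myapp-1.2.0-rc1.tar.gz"  # release candidate build

    def ssh(command):
        subprocess.run(["ssh", TEST_HOST, command], check=True)

    # Install the candidate on the up-to-date box.
    subprocess.run(["scp", RC_BUILD, f"{TEST_HOST}:/tmp/"], check=True)
    ssh("sudo tar -xzf /tmp/myapp-1.2.0-rc1.tar.gz -C /opt/myapp")

    # Re-run the build server's automated suite, then the extra
    # system-level checks, against the patched baseline.
    ssh("cd /opt/myapp && ./run_unit_tests.sh && ./run_system_tests.sh")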

+2  A: 

Personally I don't think you have much of an issue here - just apply the latest updates to the build server. The main reasons I say this are:

  • it is highly unlikely that your code or any of the dependencies on the build server are so tightly coupled to the OS version that installing regular updates will affect anything, let alone break it. There can be minor differences in things like window messages between Windows versions, but those are few and far between, and are usually quite well documented out on the web. If you are using managed technology stacks like WPF/Silverlight or ASP.NET, and even mostly WinForms, then you will be isolated from these changes - they should only affect you if you are doing hardcore stuff with the WinAPI directly to create your windows or draw your buttons.

  • it is good practice to always engineer your product against the latest version of the OS, because you need to encourage your customers to install those updates too - IOW you should never be in the position of telling a client not to install update xyz because your application will not run against it, especially if that update is a critical security fix

  • testing for differences between OS versions should be done by the QA team and should be independent of what is on the build server

  • you do not want your build server to get into such a state that it has been so isolated from the company update process that when you finally do apply all the updates, it barfs and spits molten silicon everywhere. IOW, the longer you wait to update, the higher the risk of something going wrong, and going wrong catastrophically. Small, frequent, incremental updates are lower risk than a mass update once per decade :)

The build server updates you do have to be cautious about are third-party control or library updates - they frequently contain breaking changes or considerably altered behavior. Those really should be scheduled, and followed by a round of testing to look for any changes.

slugster
Thanks for the detailed answer, especially the points about not having to tell customers to stay behind the latest o/s version.
gareth_bowles
+1  A: 

Virtualize!

Using something like VMware Server you can script the launch and suspend of virtual machines. So you can script: resume the VM, SSH in to launch the build, copy the artifacts out, suspend the VM, repeat. (I say this, but I abandoned my work on this. Still, I was making progress at the time.)
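A rough sketch of that loop in Python, assuming VMware's vmrun command-line tool is on the PATH and the guest runs an SSH server; the .vmx path, SSH target, and build command are placeholders:

    # Sketch: resume VM, build over SSH, copy artifacts, suspend VM.
    # Assumes VMware's vmrun CLI; all paths and names are placeholders.
    import subprocess

    VMX = "/vms/build-baseline/build-baseline.vmx"  # VM config file
    GUEST = "builder@build-baseline.local"          # SSH target in the VM

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("vmrun", "start", VMX)                            # resume the VM
    run("ssh", GUEST, "cd /src && make clean all test")   # launch the build
    run("scp", f"{GUEST}:/src/dist/app.tar.gz", "./out/") # copy artifacts
    run("vmrun", "suspend", VMX)                          # freeze the baseline

Suspending rather than shutting down keeps the VM's state frozen between builds, which matches the QA team's desire for a fixed baseline.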

Also, you can trust your OS vendors. Can't you?

They have an interest in compatibility. If you build on Windows XP it is almost certain to work on XP SP3 and Vista and Windows 7.

If you build on Red Hat Enterprise Linux 5, it had better work on 5.1, 5.2, 5.3, 5.4, etc.

In my experience this has worked out OK so far, and I recommend building on your lowest-patch OS version. With Linux in particular, I have found that binaries built on newer releases link against more recent libraries that are not available on older versions.

Of course it doesn't hurt to test your code on a copy of the deployment server. It all depends on how certain you want to be.

Zan Lynx
I'd add that in disk-heavy scenarios such as build servers, you should consider a real (physical) data disk rather than a virtualised disk for those VMs. I speak from experience.
Preet Sangha
@Preet: I have not seen a problem with disk access in my experience. If it is a problem, then I suppose building in a tmpfs (on Linux) or a RAM disk on Windows would be the answer.
Zan Lynx
Good points on scripting the launch of virtual servers; we already do this for other tests using Amazon EC2.
gareth_bowles
@Zan: That may well be. Unfortunately we noticed serious degradation in performance using virtual disks for build servers. Granted, the builds were not disk-IO optimised, but they can be. Can you recommend a decent Windows RAM disk? I've not come across a compelling product.
Preet Sangha
@Preet: I have no recommendations because I didn't have an IO problem but you should check on ServerFault or SuperUser.
Zan Lynx
+1  A: 

Take the build server off the network; that way you do not need to worry about installing security updates. Load the source only from CD, thumb drive, or whatever other means.

Plug it back in at the end of your release cycle and then let all the updates take place.

Romain Hippeau
I think you might have misunderstood the problem - it's not that they want to update the build server because they're worried about security updates, it's because the software will eventually be deployed on a production server which has all updates installed, and it would be preferable to test on a server which matches production.
caf
@caf - I think you misunderstood the answer!!! If you take the server off-line then you do not have to deal with the sysadmins at all. You can just concentrate on having a build server with any configuration you want on it.
Romain Hippeau
@Romain: But they cannot take the *production* server off-line, so the sysadmins will still update that - and then the software has been tested against the (old) build server configuration, not the production server configuration.
caf
It sounds like after that you just need to beat your QA dept with a baseball bat.
Romain Hippeau
@caf has this right.
gareth_bowles
+1  A: 

Well, for the most stable process, I would have two build servers - one with the initial config, one with the updated config - and two autotest servers split the same way. Use virtualization to do this effectively and scriptably.
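A hypothetical sketch of that matrix in Python, assuming each config lives in its own VMware image driven by vmrun; the .vmx paths, SSH targets, and build command are placeholders:

    # Sketch: run the same build and tests against both configs.
    # Assumes VMware's vmrun CLI; all paths and names are placeholders.
    import subprocess

    CONFIGS = {
        "initial-config": "/vms/build-initial/build.vmx",
        "updated-config": "/vms/build-updated/build.vmx",
    }

    for name, vmx in CONFIGS.items():
        subprocess.run(["vmrun", "start", vmx], check=True)
        result = subprocess.run(
            ["ssh", f"builder@{name}.local", "make clean all test"]
        )
        subprocess.run(["vmrun", "suspend", vmx], check=True)
        print(f"{name}: {'PASS' if result.returncode == 0 else 'FAIL'}")

Any divergence between the two configs tells you exactly which o/s update introduced the difference, which settles the QA-versus-ops argument with data rather than policy.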

Paul Nathan