The Problem

I have a large infrastructure consisting of several kinds of servers running Linux: for instance, database servers, load balancers, and application-specific servers. There are many instances of each kind of server, and all of them need to be reproducible.

Every kind of server is basically a custom distribution. Customisations include changes to the upstream packages (a different upstream version, build options, patches, whatever) and, possibly, some extra custom packages.

For example, I need a server running the latest OpenLDAP slapd compiled with specific options and some patches. And this is where things get complicated.

Updating to the latest slapd will also require updating the libraries it depends on, which means rebuilding all packages that depend on those libraries, too. That is, I basically need to rebuild a significant part of the distribution. I'm looking for a solution that helps automate this process.
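The rebuild cascade described above is essentially a reverse-dependency closure over the package graph. As a rough sketch (the package names and dependency map here are made up for illustration, not real distro metadata):

```python
# Sketch: compute the set of packages that must be rebuilt when one
# package (e.g. a library underneath slapd) changes. The dependency
# map below is a made-up example.

def rebuild_closure(changed, depends_on):
    """Return every package that directly or transitively depends on
    any package in `changed`."""
    # Invert the "package -> its dependencies" map.
    reverse = {}
    for pkg, deps in depends_on.items():
        for dep in deps:
            reverse.setdefault(dep, set()).add(pkg)

    to_rebuild = set()
    stack = list(changed)
    while stack:
        pkg = stack.pop()
        for dependant in reverse.get(pkg, ()):
            if dependant not in to_rebuild:
                to_rebuild.add(dependant)
                stack.append(dependant)
    return to_rebuild

deps = {
    "openssl": [],
    "cyrus-sasl": ["openssl"],
    "openldap": ["openssl", "cyrus-sasl"],
    "postfix": ["openldap", "openssl"],
    "nginx": ["openssl"],
}
print(sorted(rebuild_closure({"openssl"}, deps)))
# → ['cyrus-sasl', 'nginx', 'openldap', 'postfix']
```

A change to openssl pulls in everything that links against it, directly or transitively, which is exactly the "rebuild a significant part of the distribution" problem.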

Solution requirements

Admittedly vague: I want to prepare everything necessary for building my custom distro, give it a name (e.g. ldap-server), and give that name to the automated build system any time I need to reproduce the build.

I think this is something the Gentoo or LFS community should have. I've also seen projects like ALT Linux Hasher, Fedora Mock, and Debian pbuilder/sbuild, but I've never used any of them.

Any ideas?

Thanks in advance!

+3  A: 

I won't ask why you chose to maintain a custom distro for your production servers ... but ... I have had some experience of this kind of hackathon, and the massive headaches that go with it.

  1. To automate the build of the distro, I used an XML definition of the build order and dependencies, and scripted GNU Make to build independent branches in parallel and construct the binary packages. The resulting output of the XML + shell script + bit of Python + Make/Autotools pipeline was a complete build of a special set of 'core' tools, plus extras.

  2. The second step was installing these binaries/raw build directories into a system. I used installwatch (I think) with inotify to keep an eye on where things were installed. I then output XML describing this, along with the dependencies of any binaries.

  3. After this I had a build manifest (XML) and, for each package, an XML file with the details of the installed files. I then made a tool to convert the XML and the in-place binaries into various formats (RPM, etc.).

  4. Now (use your imagination) I have an install script to automate the build, tons of metadata on built packages and their dependencies, and a method of turning that metadata into deployable packages.

  5. Next, I made build scripts for various servers, from glib up :) ... and ran those builds. The system knew which packages/./configure options were common and shared those packages. This left me with:
    o A repo called /common
    o A repo for each build type and architecture

  6. A few deployment scripts (rsync over SSH) and patch-management scripts, and you are away.
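Step 1's XML-plus-Make arrangement boils down to a topological sort of the dependency graph, where each "wave" of packages can be built in parallel. A minimal sketch, assuming an invented XML schema (this is not the answerer's actual format):

```python
# Sketch: derive a parallel build order from an XML build definition.
# The <package name=... depends=...> schema here is invented for
# illustration.
import xml.etree.ElementTree as ET

BUILD_XML = """
<build>
  <package name="glibc"/>
  <package name="zlib" depends="glibc"/>
  <package name="openssl" depends="glibc,zlib"/>
  <package name="openldap" depends="openssl"/>
  <package name="nginx" depends="openssl,zlib"/>
</build>
"""

def build_waves(xml_text):
    """Group packages into waves: every package in a wave has all its
    dependencies satisfied by earlier waves, so one wave can be built
    in parallel (make -j style)."""
    root = ET.fromstring(xml_text)
    deps = {
        p.get("name"): set(filter(None, p.get("depends", "").split(",")))
        for p in root.iter("package")
    }
    waves, done = [], set()
    while deps:
        ready = sorted(p for p, d in deps.items() if d <= done)
        if not ready:
            raise ValueError("dependency cycle among: %s" % sorted(deps))
        waves.append(ready)
        done.update(ready)
        for p in ready:
            del deps[p]
    return waves

print(build_waves(BUILD_XML))
# → [['glibc'], ['zlib'], ['openssl'], ['nginx', 'openldap']]
```

In the real setup each wave would be handed to Make as a set of independent targets; the cycle check matters because a hand-maintained XML definition will eventually acquire one.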

Obviously this is a very rough overview of my approach to building multiple distros for a common environment. Some packages were meta-packages that affected the source tree (but were treated like normal packages at build time; one example was a meta-package that ran first and applied patches to the kernel).
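The install tracking in step 2 can also be approximated without installwatch/inotify by snapshotting a staging root before and after the install step and diffing; a rough sketch (the staging-root layout and helper names are assumptions):

```python
# Sketch: record what an install step created by snapshotting a
# staging root before and after it runs. A crude stand-in for
# installwatch-style interception.
import os

def snapshot(root):
    """Set of file paths under `root`, relative to it."""
    files = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            files.add(os.path.relpath(full, root))
    return files

def record_install(root, run_install):
    """Run `run_install()` (e.g. 'make install DESTDIR=<root>') and
    return the set of files it added under `root`."""
    before = snapshot(root)
    run_install()
    return snapshot(root) - before
```

The returned set is roughly what step 3's per-package XML manifest would contain before being converted into RPMs or other formats. (Unlike installwatch, this misses files that the install step modifies or deletes rather than creates.)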

Then there is the matter of tool-chain automation.

It all started off with LFS ... but as you can see, things got a little adventurous.

Bottom line: it was very fun, but I eventually ditched it all for a BSD and Fedora.

Things like the SUSE Build Service might be of interest. Farming out the finding of stable source combinations and the compilation will make things simpler! You don't even need to build anything to do with SUSE.

Aiden Bell
Thanks for your answer, Aiden! Your approach is interesting, especially as it looks very much like what my colleagues already have on FreeBSD: a piece of software that takes some XML and produces a set of ready-to-build ports. The guys then use Tinderbox to automatically build package sets from this. Now we need something like this on Linux. BTW, is your stuff open source by any chance? And why did you abandon it? It seems like you put a lot of effort into the implementation. Also, why did you choose Fedora? Thanks
Timur
@Timur - The scripts were not well developed enough to be autonomous. They worked well but still required some watching. I don't know why I abandoned it ... it was heading towards a Gentoo-like system and I didn't like that much. Fedora is great for my personal systems, and FreeBSD's ports are great for production. Maybe look at a derived ports database specific to your needs, and just import the kernel + basics and the FreeBSD install system?
Aiden Bell