We've got a Linux-based build system in which a build consists of many different embedded targets (with correspondingly different drivers and feature sets enabled), each built from the same single main source tree.

Rather than try to convert our make-based system to something more multiprocess-friendly, we want to just find the best way to fire off builds for all of these targets simultaneously. What I'm not sure about is how to get the best performance.

I've considered the following possible solutions:

  • Lots of individual build machines. Downsides: lots of copies of the shared code, or working from a (slow) shared drive. More systems to maintain.
  • A smaller number of multiprocessor machines (dual quad-cores, perhaps) with fast striped RAID local storage. Downsides: I'm unsure of how it will scale. It seems that the storage volume would be the bottleneck, but I don't know how well Linux handles SMP these days.
  • A similar SMP machine, but with a hypervisor or Solaris 10 running VMware. Is this silly, or would it provide some scheduling benefits? Downsides: Doesn't address the storage bottleneck issue.

I intend to just sit down and experiment with these possibilities, but I wanted to check to see if I've missed anything. Thanks!

A: 

If you're interested in fast incremental builds, then the cost of figuring out which files need to be rebuilt will dominate the actual compile time, and this puts higher demands on efficient I/O between the machines.

However, if you're mostly interested in fast full rebuilds (nightly builds, for example), then you may be better off rsyncing the source tree out to each build slave, or even having each build slave check out its own copy from source control. A CI system such as Hudson would help manage each of the slave build servers.
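As a rough sketch of the rsync approach (the hostnames and paths here are placeholders, not from the question):

    #!/bin/sh
    # Push the master source tree to each build slave, so each full
    # rebuild runs against fast local disk instead of a shared drive.
    SLAVES="build1 build2 build3"        # hypothetical slave hostnames
    SRC=/srv/source-tree                 # hypothetical path to the main tree

    for host in $SLAVES; do
        # --delete keeps each slave an exact mirror of the master copy
        rsync -a --delete "$SRC/" "$host:$SRC/" &
    done
    wait    # let all transfers finish before kicking off the builds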

JesperE
Allan Anderson: That's definitely a question for serverfault.com.
+1  A: 

As far as software solutions go, I can recommend Icecream. It is maintained by SUSE and builds on distcc.

We used it very successfully at my previous company, which had similar build requirements to what you describe.
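For reference, a minimal sketch of bringing up an Icecream network (the daemon names and wrapper path below follow the usual SUSE/Debian packaging; check your distribution's package for the exact locations):

    # On one machine on the build network, start the scheduler:
    icecc-scheduler -d

    # On every machine that should contribute compile cycles, start the daemon:
    iceccd -d

    # On the machine driving the build, put the icecc compiler wrappers
    # first in PATH so gcc/g++ invocations get farmed out across the
    # network, then overcommit -j well past the local core count:
    export PATH=/usr/lib/icecc/bin:$PATH
    make -j32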

Amal Sirisena
A: 

If your makefiles are sufficiently complete and well-structured, the -j flag may also help overcome I/O bottlenecks, provided your build machine(s) have enough memory. This lets make run multiple independent jobs in parallel, so that ideally your CPUs never sit blocked waiting on I/O. Generally, I've found good results by allowing several more jobs than there are CPUs in the machine.
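For instance (the job counts here are illustrative guesses; tune them per machine):

    # Eight cores, ten jobs: a couple extra so some jobs can compile
    # while others are stalled on disk.
    make -j10

    # Or derive the count from the machine, if coreutils' nproc is available:
    make -j$(( $(nproc) + 2 ))

    # GNU make can also throttle on load average instead of a fixed count:
    # spawn unlimited jobs, but only while the load stays below 8.
    make -j -l8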

It's not clear from your question whether your current makefiles aren't amenable to this, or if you just don't want to jump to something entirely different from make.

Novelocrat