views:

122

answers:

2

This is a rather general question ..

What hardware setup is best for large C/C++ compile jobs, like the Linux kernel or large applications?

I remember reading a post by Joel Spolsky on experiments with solid state disks and stuff like that.

Should I prioritise CPU power, RAM, or a fast disk I/O solution like solid state? Would it, for example, make sense to use a 'normal' hard disk for the standard system and a solid state drive for compilation? Or can I just buy lots of RAM? And how important is the CPU, or does it mostly sit idle during a compile?

This is probably a stupid question, but I don't have a lot of experience in this field. Thanks for any answers.

Here's some info on the SSD issue

+2  A: 

I think you need enough of everything. CPU is very important, and compiles can easily be parallelised (with make -j), so you want as many CPU cores as possible. RAM is probably just as important, since it provides more 'working space' for the compiler and allows your I/O to be buffered. Drive speed is probably the least important of the three - kernel code is big, but not that big.
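To illustrate the make -j point above, here's a minimal sketch of a parallel build invocation. It assumes GNU coreutils' nproc is available to count CPU cores; the kernel source path and the "one job per core" rule of thumb (some people use cores+1) are up to you.

```shell
# Pick one make job per available CPU core.
JOBS=$(nproc)
echo "building with $JOBS parallel jobs"

# Inside the kernel source tree you would then run:
# make -j"$JOBS"
```

With a serial build, all but one core really would just sit around; -j keeps them busy compiling independent translation units.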

alex tingle
Talking about memory: that would most likely imply that I need a fast connection between CPU and RAM. I've mostly used a laptop in recent years, so I'm not really up to date. Can I get away with a standard consumer mainboard?
Homer J. Simpson
+2  A: 

Definitely not a stupid question - getting the build-test environment tuned correctly will make a lot of headaches go away.

Hard disk performance would probably top the list. I'd stay well away from solid state drives, as they're only rated for a large-but-limited number of write operations, and make-clean-build cycles will hammer them.

More importantly, can you take advantage of a parallel or shared build environment? From memory, ClearCase and Perforce had mechanisms to handle shared builds. Unless you have a parallelising build system, having multiple CPUs will be fairly pointless.

Last but not least, I doubt that build time will be the limiting factor - more likely you should focus on the needs of your test system. Before you look at the actual metal, though, try to design a build-test system that's appropriate to how you'll actually be working: how often are your builds, how many people are involved, how big is your test system, and so on.

Chris McCauley
SSD lifespan is now a non-issue. I used to worry about that, too, but modern SSDs have a per-cell lifetime of about a million writes. You could take a current SSD with a typical size, write speed, and wear-levelling controller, and do continuous writes 24x7 for years without a problem. Build time is a limiting factor for a lot of people. I work on a code base of about 20 million lines of C, and often have to rebuild it from scratch, sometimes several times a day. Both jam (from Perforce) and GNU make support parallel builds. I'm analyzing all this stuff on my blog.
Bob Murphy
Thanks, that's good to know.
Chris McCauley