views:

765

answers:

11

Several people around here recommended switching to the new WD VelociRaptor 10,000 rpm hard disk. Magazine articles also praise its performance. I bought one and mirrored my old system to it. The resulting increase in compilation speed is somewhat disappointing:

  • On my old Samsung drive (SATA, 7200 rpm), the compilation time was 16:02.
  • On the VelociRaptor the build takes 15:23.

I have an E6600 with 1.5 GB RAM. It's a C++ project with 1200 files, and the build is done in Visual Studio 2005. Acoustic management is switched off (it made no big difference anyway).

Did something go wrong, or is this modest acceleration really all I can expect?

Edit: Some recommended increasing the RAM. I have now doubled my RAM to 3 GB and got only a minimal gain (3-5%).

+1  A: 

I imagine that hard disk reading was not your bottleneck during compilation. Realistically, few things need to be read from or written to the hard disk. You would likely see a bigger performance increase from more RAM or a faster processor.

Howler
+1  A: 

I'd suggest from the results that either HDD latency wasn't the bottleneck you were looking for, or that your project is already building close to as fast as possible. Other items to consider would be:

  1. HDD access time (although you may not be able to do much about this, due to bus speed limitations)
  2. RAM speed and size
  3. Processor speed
  4. Reducing background processes
workmad3
+1  A: 

A ~6% increase in speed just from upgrading your hard drive. Just like Howler said: grab some faster RAM and a faster CPU.

David McGraw
+1  A: 

As many have already pointed out, you probably didn't attack the real bottleneck. Randomly changing parts (or code, for that matter) is, as one could say, "bass ackwards". You first identify the performance bottleneck, and then you change something.

Perfmon can help you get a good overview of whether you're CPU-bound or I/O-bound. You want to look at CPU utilization, disk queue length, and I/O bytes to get a first glimpse of what's going on.
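If you prefer the command line, the typeperf tool that ships with Windows can log the same counters Perfmon shows; something like this (the counter paths assume an English Windows install, and the sample count is arbitrary):

```bat
:: Sample CPU utilization, disk queue length and disk throughput once per
:: second for 60 seconds, logging to a CSV you can inspect after the build.
typeperf "\Processor(_Total)\% Processor Time" ^
         "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
         "\PhysicalDisk(_Total)\Disk Bytes/sec" ^
         -si 1 -sc 60 -o build_perf.csv
```

High CPU with a short disk queue suggests you're CPU-bound; the reverse suggests you're I/O-bound.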

Torbjörn Gyllebring
+2  A: 

Visual Studio 2005 can build multiple projects in parallel, and will do so by default on a multi-core machine, but depending on how your projects depend on each other, it may be unable to build them in parallel.

If your 1200 cpp files are in a single project, you're probably not using all of your CPU. If I'm not mistaken a C6600 is a quad-core CPU.

Dave

Dave Van den Eynde
It's an E6600, sorry.
Christof Schardt
No need to apologize! The E6600 is a dual-core CPU anyway, so you're still in for a dramatic improvement if you're not building projects in parallel.
Dave Van den Eynde
A: 

1200 source files is a lot, but none of them is likely to be more than a couple hundred KB, so while they all need to be read into memory, it's not going to take long.

Bumping your system memory to 4 GB (yes, yes, I know about the 3.somethingorother GB limit that 32-bit OSes have), and maybe looking at your CPU, will provide a lot more performance improvement than merely using a faster disk drive could.

warren
+1  A: 

That is actually a pretty big bump in speed for just replacing a hard disk. You are probably memory or CPU bound at this point. 1.5GB is light these days, and RAM is very cheap. You might see some pretty big improvements with more memory.

Just as a recommendation, if you have more than one drive installed, you could try setting your build directory to be somewhere on a different disk than your source files.
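In Visual Studio 2005 that's the Intermediate Directory setting (Project Properties → Configuration Properties → General), which is stored in the .vcproj file; a sketch, where the D:\build location is just an example:

```xml
<!-- .vcproj fragment: send intermediate and output files to a second disk.
     The D:\build path is a placeholder; the macros are standard VS ones. -->
<Configuration
    Name="Release|Win32"
    OutputDirectory="D:\build\$(ProjectName)\$(ConfigurationName)"
    IntermediateDirectory="D:\build\$(ProjectName)\$(ConfigurationName)\obj">
</Configuration>
```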

As for this comment:

If your 1200 cpp files are in a single project, you're probably not using all of your CPU. If I'm not mistaken a C6600 is a quad-core CPU.

Actually, a C6600 isn't anything. There is an E6600 and a Q6600. The E6600 is a dual core and the Q6600 is a quad core. On my dev machine I use a quad-core CPU, and although our project has more than 1200 files, it is still EASILY processor-limited during compilation (although a faster hard drive would still help speed things up!).

TM
You're right. It's an E6600.
Christof Schardt
+6  A: 

Are you using the /MP option (undocumented; you have to enter it manually in your compiler options) to enable source-level parallel builds? That'll speed up your compile much more than a faster hard disk will; gains from the disk are marginal.
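For the record, /MP goes into Project Properties → C/C++ → Command Line → Additional options, or straight onto a cl.exe command line; a sketch (the file names are placeholders, the flags are real):

```bat
rem /MP asks cl.exe to compile the listed sources in parallel;
rem an optional number caps the worker count, e.g. /MP2 for a dual core.
cl /MP2 /c /EHsc a.cpp b.cpp c.cpp
```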

Roel
This is officially supported in VC++ 2008.
Aardvark
Great!!! After adding /MP to the compilation options, the build took 9:48 (rather than 15:23).
Christof Schardt
Great! I didn't know it was already included in the VC++ 2005 compiler; this definitely helped me squeeze some more speed out of my build.
Dave Van den Eynde
A: 

VC 2005 does not compile more than one file at a time per project, so either move to VC 2008 to use both of your CPU cores, or break your solution into multiple library sub-projects to get multiple compilations going.

Raz
A: 

I halved my compilation time by putting all my source onto a RAM drive.

I tried these guys http://www.superspeed.com/desktop/ramdisk.php, installed a 1 GB RAM drive, then copied all my source onto it. Building directly from RAM vastly reduces the I/O overhead.
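A small batch file keeps the RAM drive in sync; a sketch, where R: and the paths are placeholders (xcopy ships with Windows):

```bat
rem Copy the working tree onto the RAM drive before building, and copy the
rem build output back afterwards so it survives a reboot or power loss.
xcopy C:\work\src R:\src /E /I /Y
rem (run the build against R:\src here)
xcopy R:\src\bin C:\work\bin /E /I /Y
```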

To give you an idea of what I'm compiling, and on what;

  • WinXP 64-bit
  • 4GB ram
  • 2.? GHz dual-core processors
  • 62 C# projects
  • approx 250kloc.

My build went from about 135s to 65s.

Downsides are that your source files live in RAM, so you need to be more vigilant about source control. If your machine lost power, you'd lose all unversioned changes. This is mitigated slightly by the fact that some RAM drives save themselves to disk when you shut the machine down, but you'll still lose everything since your last checkout or your last shutdown.

Also, you have to pay for the software. But since you're shelling out for hard drives, maybe this isn't that big a deal.

Upsides are the faster compilation, and the fact that the exes already live in memory, so startup and debugging times are a bit better. The real benefit is the compilation time, though.

Steve Cooper
You mean you halved the compilation time. =)
Seiti
yeah ;) corrected
Steve Cooper