views: 402
answers: 5

On a native C++ project, linking currently takes a minute or two, and during that time CPU usage drops from 100% (during compilation) to virtually zero. Does this mean linking is primarily a disk-bound activity? If so, is this the main area where an SSD would make a big difference? And why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to avoid a lot of disk access and make the process CPU-bound again, no?

Update: the obvious follow-up is, could the VC++ compiler and linker cooperate more closely to streamline things and keep OBJ files in memory, similar to what Delphi does?

+6  A: 

In debug builds in Visual Studio you can use incremental linking, which usually lets you avoid a lot of the time spent on linking.
Basically, instead of linking the whole EXE (or DLL) from scratch, the linker builds on the one it produced last time, replacing only the things that changed.

This is not recommended for release builds, however, since it adds some runtime overhead and can result in an EXE that is several times larger than usual.
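
For illustration, here is a minimal command-line sketch of the idea, assuming a single hypothetical main.cpp (in practice the IDE drives this through the linker's "Enable Incremental Linking" project setting):

    rem compile to an object file with debug info
    cl /c /Zi main.cpp

    rem the first link produces main.exe plus an incremental-link database (main.ilk)
    link /DEBUG /INCREMENTAL main.obj

    rem after editing and recompiling main.cpp, the same command patches main.exe
    rem via main.ilk instead of relinking from scratch; /INCREMENTAL:NO disables this
    link /DEBUG /INCREMENTAL main.obj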

shoosh
Sorry, doesn't address the question.
Byron Whitlock
It does address the issue of making a "big change" in linking performance.
shoosh
Huh? He asked if an SSD would speed up linking if it is I/O-bound.
Byron Whitlock
Offering alternate approaches is just as valid as answering the literal question (unless the question says the alternate approach has already been considered and discarded).
Ben Voigt
I view incremental linking (/Gm?) as a valid answer. Sadly I have turned it off anyway in favour of /MP (multi-process compilation) to use my cores better.
John
+10  A: 

Linking is indeed primarily a disk-based activity. Borland Pascal (back in the day) would keep the entire program in memory, which is why it would link so fast.

Your OBJ files aren't kept in RAM because the compiler and linker are separate programs. If your development environment had an integrated compiler and linker (instead of running them as separate processes), it could indeed keep everything in RAM.

But you would lose the ability to separate the development environment from the compilers and/or linkers - you would have to use the same compiler/linker, and you wouldn't be able to run the compiler outside the environment.

Eric Brown
Delphi still does.
dthorpe
I thought it might, but with the various incarnations of Delphi, I wasn't sure if it still did.
Eric Brown
If you're running on any reasonable OS, the information will already be cached in memory, lessening the need to have the entire object set in memory for linking.
Billy ONeal
I've run it on XP, Vista and W7 and it doesn't seem to make much difference. I don't suppose W7 provides any way to see what files are being RAM-cached?
John
The OBJ and especially PDB files are often huge and simply will not fit into memory (at least not into the part of memory the system is willing to use as a cache).
Suma
I don't believe I end up with 4 GB of temp files when I build... if I did, then a couple of years ago I'd have kept running out of disk space. Besides, Windows can use what it has, and has near-infinite memory once virtual memory is included.
John
+4  A: 

You can try installing one of those RAM disk utilities and keeping your obj directory, or even the whole project directory, on the RAM disk. That should speed things up considerably.

Don't forget to make it permanent afterwards :-D
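
One way to try this without moving the whole project (a rough sketch; R: is an assumed RAM disk drive letter and MyProject.vcxproj is a made-up project name) is to redirect only the intermediate directory by overriding MSBuild's IntDir property:

    rem send intermediate files (OBJ, etc.) to the RAM disk; the trailing backslash matters
    msbuild MyProject.vcxproj /p:Configuration=Debug /p:IntDir=R:\obj\MyProject\

Bear in mind the RAM disk's contents are lost on reboot, so a full rebuild is needed afterwards unless the utility persists the disk image.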

Kugel
Worth a go... I think you can set intermediate files to go somewhere separate, so it wouldn't need to be permanent, except that if the OBJ files go you'd have to do full builds all the time. Not sure if it's worth the hassle; it depends how much help a RAM disk utility gives in automating all this.
John
+3  A: 

It's hard to say what exactly is taking the linker so long without knowing how it is interacting with the OS. Thankfully, Microsoft provides Process Monitor so you can do just that.

It's helped me diagnose bugs with the Visual Studio IDE and debugger without access to source.
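
For example (a rough sketch using Process Monitor's command-line switches; the trace file name is made up), you can capture a trace to a backing file while the project links, then stop the capture and filter on link.exe in the GUI:

    rem start capturing to a backing file before kicking off the build
    Procmon.exe /AcceptEula /Quiet /Minimized /BackingFile link_trace.pml

    rem ...build and link, then stop the capture and open link_trace.pml
    Procmon.exe /Terminate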

MSN
+4  A: 

The Visual Studio linker is largely I/O bound, but how much so depends on a few variables.

  1. Incremental linking (common in Debug builds) generally requires a lot less I/O.

  2. Writing a PDB file (for symbols) can consume a lot of the time. It's a specific bottleneck that Microsoft targeted in VS 2010. The PDB writing is now done asynchronously. I haven't tried it, but I've heard it can help link times quite a bit.

  3. If you are using link-time code generation (LTCG) (common in Release builds), you get all the usual I/O up front. Then the linker re-invokes the compiler to re-generate code for sections that can be further optimized; this portion is generally much more CPU-intensive. Offhand, I don't know whether the linker actually spins up the compiler in a separate process and waits (in which case you'll still see low CPU usage for the linker process) or whether the code generation happens inside the linker process (in which case you'll see the linker go through phases of heavy I/O and then heavy CPU).

Using an SSD can help with the I/O-bound portions. Simply having a second drive can help, too: for example, if your source and objects are all on one drive and you write your PDB to a separate drive, the linker should spend less time waiting for the PDB writer. A rough sketch of the relevant switches is below.
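
As an illustration only (file and path names are made up), the compiler and linker switches behind points 2 and 3 look roughly like this:

    rem /GL defers code generation to link time (enabling LTCG)
    cl /c /O2 /GL main.cpp

    rem /LTCG performs the deferred code generation; /DEBUG and /PDB: write the symbols,
    rem here to a hypothetical second drive to keep PDB writes off the drive holding the OBJs
    link /LTCG /DEBUG /PDB:D:\symbols\app.pdb /OUT:app.exe main.obj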

Adrian McCarthy