Is there any particular reason that the linking phase, when building a project with distcc, is done locally rather than sent off to other computers the way compilation is? Reading the distcc whitepaper didn't give a clear answer, but I'm guessing that the time spent linking object files is not very significant compared to compilation. Any thoughts?

+2  A: 

Linking, almost by definition, requires that you have all of the object files in one place. Since distcc drops the object files on the computer that invoked distcc in the first place, the local machine is the best place to perform the link as the objects are already present.

In addition, remote linking would become particularly hairy once you start throwing libraries into the mix. If the remote machine linked your program against its local version of a library, you would open yourself up to problems when the linked binary is returned to the local machine.
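As a rough illustration (the file names are made up and the exact compiler invocation varies per project), a distcc build typically looks like this: the compile steps may run on remote hosts, but every object file is written back to the invoking machine, which then performs the link:

    # compile steps may be farmed out to other hosts by distcc,
    # but each resulting .o ends up on this machine
    distcc gcc -c foo.c -o foo.o
    distcc gcc -c bar.c -o bar.o

    # the link runs locally, where all the object files already live
    gcc foo.o bar.o -o app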

Jack Kelly
+2  A: 

The reason that compilation can be sent to other machines is that each source file is compiled independently of the others. In simple terms, for each .c input file there is a .o output file from the compilation step. This can be done on as many different machines as you like.

On the other hand, linking collects all the .o files and creates one output binary. There isn't really much to distribute there.
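For example, a minimal Makefile sketch (file names and job count hypothetical) shows where the parallelism lives: the pattern rule can fan out across distcc hosts, while the link rule is a single local step:

    OBJS = main.o util.o parser.o

    app: $(OBJS)
            $(CC) $(OBJS) -o app        # one link, one machine

    %.o: %.c
            $(CC) -c $< -o $@           # independent compiles, distributable

    # run as: make -j8 CC="distcc gcc"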

Greg Hewgill
That is true, but aren't there cases where the project you want to build ends up with multiple assemblies (EXEs, DLLs)? In that case, wouldn't the overall build time benefit from distributing the linking process?
Matthew
Depends on what you need to do to actually perform a distributed link. As of now, distcc does not need the remote servers to have *anything* in common with the clients other than the compiler version. The client preprocesses each source file (removing the dependency on build variables and headers) and sends a single preprocessed file over the network. The file is self-contained, so the remote system can compile it without requiring any other file. To avoid raising those requirements, a distributed link would have to send all the object files and libraries over the network and then get the linked binary back.
David Rodríguez - dribeas
+2  A: 

The way that distcc works is by locally preprocessing each input file into a single self-contained translation unit. That file is then sent over the network and compiled. At that stage the remote distcc server needs only a compiler; it does not even need the project's header files. The output of the compilation is then moved back to the client and stored locally as an object file. Note that this means that not only linking, but also preprocessing, is performed locally. That division of work is common to other build tools, like ccache (preprocessing is always performed locally; ccache then tries to match the preprocessed input against previously cached results and, if it succeeds, returns the cached object without recompiling).
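Roughly speaking, the per-file work distcc performs is equivalent to the following (file and host names are made up, and distcc uses its own daemon and protocol rather than scp/ssh; the transfer lines are only stand-ins for that):

    gcc -E foo.c -o foo.i                  # preprocess locally: headers and macros resolved here
    scp foo.i buildhost:/tmp/              # ship the self-contained translation unit
    ssh buildhost 'gcc -c /tmp/foo.i -o /tmp/foo.o'   # remote host needs only a compiler
    scp buildhost:/tmp/foo.o .             # the object file comes back to the client
    # ...repeated per source file; the final link then happens locally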

If you were to implement a distributed linker, you would have to either ensure that all hosts in the network have exactly the same configuration, or send all the required inputs for the operation in one batch. That would mean that distributed compilation would produce a set of object files, and all of those object files would have to be pushed over the network for a remote system to link and return the linked binary. Note that the link might require system libraries that are referenced and present in the linker search path but not named on the linker command line, so a 'pre-link' step would have to determine which libraries actually need to be sent. Even if that were possible, the local system would have to calculate all the real dependencies and send them, with a considerable impact on network traffic, and it might actually slow the process down, since the cost of sending the inputs could exceed the cost of linking (assuming that computing the dependency set is not itself almost as expensive as the link).
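As an illustration of how large that set can get, GNU ld's trace option lists every file the linker actually opens, including default libraries and startup objects that never appear on the command line (the object and library names below are hypothetical):

    gcc -Wl,--trace foo.o bar.o -lz -o app
    # prints each object file, archive member and shared library the link pulls in,
    # e.g. crt1.o, libc and libgcc, none of which are named on the command line above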

The project I am currently working on has a single statically linked executable of over 100 MB. The static libraries vary in size, but if a distributed system decided to link that final executable remotely, it would probably require three to five times as much network traffic as the size of the executable itself (templates, inline functions and the like are emitted in every translation unit that includes them, so multiple copies would be flying around the network).
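A back-of-the-envelope way to check that on a given project (paths hypothetical) is to compare the total size of the link inputs with the size of the final binary:

    du -ch *.o lib*.a | tail -n 1      # total size of objects and static libraries a remote link would need
    ls -lh app                         # size of the linked executable that would come back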

David Rodríguez - dribeas
Thanks, that clears some things up. Given the added complexity and network overhead, a distributed linker probably isn't worth implementing, and it might in fact make the whole process slower.
Matthew