views: 491
answers: 4

I am about to start a new C++ project that will rely on a series of libraries, including parts of Boost and either log4cxx or the Google logging library, and, as the project evolves, on other libraries as well (which I cannot yet anticipate).

It will have to run on both 32-bit and 64-bit systems, most probably in a fairly diverse set of Linux environments where I expect neither to have all the required libraries available nor to have su privileges.

My question is: should I link my application against all these libraries dynamically or statically?

Notes:

(1) I am aware that static linking might be a pain during development (longer compile times, cross-compiling for both 32-bit and 64-bit, going down dependency chains to include all the libraries, etc.), but it's a lot easier during testing: just move the file and run. (Example build commands for the 32-/64-bit case are sketched right after these notes.)

(2) On the other hand, dynamic linking seems easier during the development phase: short compile times and no hassle with dependency chains (though I don't really know how to handle dynamic linking against 64-bit libraries from my 32-bit development environment). Deployment of new versions, on the other hand, can be ugly, especially when new libraries are required (see the condition above: no su rights on the target machines, and no guarantee that these libraries are available there).

(3) I've read the related questions regarding this topic but couldn't really figure out which approach would best fit my scenario.
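
To make the 32-/64-bit concern in notes (1) and (2) concrete, here is a minimal sketch of building both word sizes from a single machine. It assumes a multilib-capable g++ plus 32-bit and 64-bit builds of every linked library; the file names and the Boost libraries used are only placeholders:

    # 32-bit build: needs 32-bit versions of every linked library on the build machine
    g++ -m32 -o myapp32 main.cpp -lboost_filesystem -lboost_system

    # 64-bit build: same sources and libraries, other word size
    g++ -m64 -o myapp64 main.cpp -lboost_filesystem -lboost_system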

Conclusions:

  1. Thank you all for your input!
  2. I will probably go with static linking because:
    • Easier deployment
    • Predictable performance and more consistent results during perf. testing (look at this paper: http://www.inf.usi.ch/faculty/hauswirth/publications/CU-CS-1042-08.pdf)
    • As pointed out, static vs. dynamic does not seem to make a huge difference in binary size or compile time
    • Easier and faster test cycles
    • I can keep all the dev. cycle on my dev. machine
+3  A: 

I would probably use dynamic linking during (most of) development, and then change over to static linking for the final phases of development and (all of) deployment. Fortunately, there's little need for extra testing when switching from dynamic to static linkage of the libraries.
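
As a minimal sketch of that workflow (the file and library names below are placeholders, not taken from the question), the switch can be as small as one linker flag:

    # development: dynamic linking (the default); fast links, needs the .so files at run time
    g++ -o myapp main.o -lboost_filesystem -lboost_system

    # final phase / deployment: static linking; one self-contained binary to copy to the target
    # (-static pulls in static versions of everything, including the C library)
    g++ -static -o myapp main.o -lboost_filesystem -lboost_system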

Jerry Coffin
+1  A: 

Best is to leave that up to the packager and provide both options in the configure/make scripts. Usually dynamic linking is preferred, since it makes it easy to upgrade the libraries when necessary, e.g. when security vulnerabilities are discovered.
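
As a sketch of how that choice can be left open, assuming the project ships an autotools-style configure script, the packager can simply pass the linker flags at configure time:

    ./configure                    # default build, dynamically linked
    ./configure LDFLAGS=-static    # fully static build of the same program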

Note that if you do not have root privileges to install the libraries in the system directories, you can build the program so that it first looks elsewhere for any needed dynamic libraries; this is done by setting the runpath entry in the ELF binary. You can specify such a directory with the -rpath option of the linker ld.
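
For example (the paths and names below are placeholders for wherever you keep private copies of the libraries), the flag can be passed through g++ with -Wl:

    # embed a runpath pointing at a private library directory; no root access needed
    g++ -o myapp main.o -L"$HOME/mylibs" -lboost_filesystem -Wl,-rpath,"$HOME/mylibs"

    # or make the runpath relative to the binary's own install location
    g++ -o myapp main.o -L./libs -lboost_filesystem -Wl,-rpath,'$ORIGIN/libs'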

wich
Or let the person running the program indicate that it should look in extra directories by using the LD_LIBRARY_PATH environment variable.
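For example (the directory and binary name are placeholders only):

    LD_LIBRARY_PATH="$HOME/mylibs:$LD_LIBRARY_PATH" ./myapp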
jamessan
+7  A: 

Static linking has a bad rap. We have huge hard drives these days, and extraordinarily fat pipes. Many of the old arguments in favor of dynamic linking are way less important now.

Plus, there is one really good reason to prefer static linking on Linux: the plethora of platform configurations out there makes it almost impossible to guarantee your executable will work across even a small fraction of them without static linking.

I suspect this will not be a popular opinion. Fine. But I have 11 years' experience deploying applications on Linux, and until something like the LSB really takes off and extends its reach, Linux will continue to be a much more difficult platform to deploy applications on. Until then, statically link your application if you have to run across a wide range of platforms.

dicroce
It also makes the program more robust after installation. If the user installs something that changes the dynamic libraries, your program will not be affected.
Jay
One still very valid problem with static linking (and also with bundled libraries) is that necessary security updates are often neglected.
wich
I haven't seen noticeable improvements in speed with dynamic linking on Linux. Go with static -- it's easier, and the memory footprint of your app will be smaller, unless another program happens to be running at the same time with the exact same dependency (dynamic libraries must be loaded in their entirety, even if you only use one function).
Matthew Herrmann
As I will be running a lot of tests (mostly performance tests) on the system I will be developing, I think static linking makes a lot more sense. Static linking also gives me the benefit of predictable performance, while with dynamic linking performance can fluctuate and can even be influenced by the order of the linking. For a more comprehensive study of this, here is a very good paper from the University of Lugano: http://www.inf.usi.ch/faculty/hauswirth/publications/CU-CS-1042-08.pdf
Salo
+1  A: 

This is another vote for static linking. I haven't noticed significantly longer link times for our application. The app in question is a ~50K-line console app, with multiple libraries, that is compiled for a bunch of out-of-the-ordinary machines, mostly supercomputers with 100-10,000 cores. With static linking, you know exactly which libraries you are going to be using and can easily test out new versions of them.

In general, this is the way that most Mac apps are built. It is what allows installation to be simply copying a directory onto the system.

KeithB