views: 173

answers: 7

Is there a real reason to use dynamic linking and binary distributions these days?

For binary distributions there's an alternative: distribute everything as source code and let the platform choose whether or not to compile binaries. But whether that is usable depends on how well today's computers can compile from source code.

Dynamic linking belongs in the question, since it allows distributing libraries as binaries as well.

So, how good can a compiler's performance be? With optimizations or without? What can be done to get better performance out of a compiler?

+3  A: 

Your question is a bit unclear, but it appears to relate two items which are not related. Breaking components up into dynamically and statically linked libraries does make compilation faster in many cases. However, dynamic and static libraries were not invented for this purpose. They were invented to provide reusable components between features and programs, not to make compiling faster.

JaredPar
You appear to fail to see the connection if it isn't shoved down your throat. Yes, dynamic linking and binary distribution do not make compiling faster; instead they make sure you don't need to compile programs often enough for compiler performance to be an issue. They were invented back in the days when computers weren't fast enough to compile software every time it was used.
Cheery
No, they were invented to reuse code and to have a single location for routines that could then be used by the whole system, or be upgraded in an easy way. Remember when somebody found a hole in zlib? There were many apps with zlib statically linked for various reasons, and upgrading them all was a pain.
Vinko Vrsalovic
@Cheery, please do a little research before handing out insults. http://en.wikipedia.org/wiki/Dynamic_libraries
JaredPar
I clarified the question a bit. I hope you are now able to see the connection I made between things. ..and insulting does you good.
Cheery
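
For illustration of the zlib point made in the comments above, here is a minimal sketch, assuming zlib and its development headers are installed and a hypothetical file name zver.c. ZLIB_VERSION is baked in at compile time, while zlibVersion() reports whichever libz the loader finds at run time: with dynamic linking, replacing libz.so updates that run-time version for every program at once, whereas a statically linked program keeps its old copy until it is rebuilt.

    /* zver.c -- hypothetical example; build with something like: cc zver.c -lz -o zver */
    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        printf("compiled against zlib %s\n", ZLIB_VERSION);   /* fixed at compile time */
        printf("running against  zlib %s\n", zlibVersion());  /* whatever libz the loader found */
        return 0;
    }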
+1  A: 

Three things come to mind reading your question:

  1. Dynamic linking is unrelated to binary distribution.

  2. You just can't use good optimizations at compile time if you want compilation to be as fast as possible. (I.e., to make a fast compiler, remove every optimization; see the sketch after this list.)

  3. JIT compilers seem to be able to achieve a good compromise between execution speed and compilation speed, but the code they run is still deployed and distributed as binary, because there are some optimizations (the most expensive ones) that can be run during that first compile, and because you really do not want to need a complete toolchain on every computer just to be able to distribute source code.
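
As a rough illustration of point 2 (a sketch, under the assumption of a GCC- or Clang-style compiler): with optimization disabled (e.g. -O0) the compiler typically emits the loop below as written, while at -O2 it will usually fold the whole computation into the constant 499500 at compile time. The compile takes a little longer; the result runs faster.

    #include <stdio.h>

    /* Sums 0..999; a typical target for loop optimization and constant folding. */
    static int sum_to_1000(void) {
        int sum = 0;
        for (int i = 0; i < 1000; i++)
            sum += i;
        return sum;
    }

    int main(void) {
        printf("%d\n", sum_to_1000());  /* 499500 */
        return 0;
    }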

Vinko Vrsalovic
+1  A: 

Dynamic linking also allows for runtime discovery of installed components, which is useful in at least two situations:

  1. An application supports licensed features that may or may not be present in a given installation of the product.
  2. An application supports a plugin architecture where a third party can create components for it.
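
A minimal sketch of such runtime discovery on a POSIX system, assuming a hypothetical plugin shared object ./plugin.so that exports a plugin_init() function; if the plugin isn't installed, the host simply carries on without the feature.

    /* host.c -- build with something like: cc host.c -ldl -o host */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        void *handle = dlopen("./plugin.so", RTLD_LAZY);  /* look for the component at run time */
        if (!handle) {
            fprintf(stderr, "plugin not installed: %s\n", dlerror());
            return 0;  /* the licensed feature or plugin is simply unavailable */
        }

        /* Look up the agreed-upon entry point by name. */
        int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
        if (plugin_init)
            printf("plugin_init returned %d\n", plugin_init());

        dlclose(handle);
        return 0;
    }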
17 of 26
A: 

When recompiling a project with a single binary target, normally only the files changed since the last compilation are recompiled before linking, so having separate binary targets should improve the overall compilation time, but only slightly.
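
A minimal sketch of that incremental behaviour, using hypothetical file names: with the project split into separate translation units, editing util.c only requires rebuilding util.o before relinking; main.o is reused as-is.

    /* util.h */
    int util_add(int a, int b);

    /* util.c */
    #include "util.h"
    int util_add(int a, int b) { return a + b; }

    /* main.c */
    #include <stdio.h>
    #include "util.h"
    int main(void) {
        printf("%d\n", util_add(2, 3));
        return 0;
    }

    /* Build steps (e.g. with GCC):
     *   cc -c util.c -o util.o
     *   cc -c main.c -o main.o
     *   cc util.o main.o -o app
     * After changing only util.c, just the first and last steps need to rerun. */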

Muxecoid
+1  A: 

There are real reasons to provide binary distributions of software. Setting aside the business concern of obfuscating your software through compilation so that proprietary logic isn't handed out in a very readable form, it makes things simpler for the user, and the user is who the software is for.

I would really dislike getting large software packages like GCC, GNOME, PHP, and a million others in source format unless I were developing that software. Even on my quad-core machine, compilation takes time. I'd much rather just move some binary blobs about.

Also remember that (on Linux systems at least) creating binary distributions allows for consistent and stable systems that have been tested. Creating binary distributions is the most direct way of delivering tested software configurations to the user.

Given that many JIT/interpreted languages run at around half the speed of C (roughly; I'm sure some do better), I would rather have a machine-code distribution of the software than see everything written in Java/C#, especially when I have no need to see the code. Never mind downloading source distributions and compiling on demand. As a user (and developer), RPMs/.debs are much simpler.

So this kind of answers "Is there a real reason for people to use dynamic linking and binary distributions these days?". The issue of dynamic libraries isn't really an issue: runtime symbol resolution doesn't degrade performance much. How would projects like Apache and countless others handle a module architecture otherwise? (Hey, they could always build in a compiler/interpreter, linker and loader and do it manually! shudders )

Software is compiled once, and used a hell of a lot.

As for making compilers faster, this depends on the semantics of the language being compiled and on the grunt work of analysis. You could write a very fast C compiler, but the code it produces might not be optimal and will therefore run slower and have a larger memory footprint. Considering software is compiled once and run often, I would rather software took an hour more to compile but saved me that time in speed later. But that doesn't matter much, because we have binary distributions.

Aiden Bell
A: 

The Gentoo Linux distribution does just that. We're close to the point where it becomes cheaper to distribute source instead of binaries. Currently, there are a couple of issues which need to be solved:

  1. You need a compiler, first. That will always have to be supplied as a binary, so a 100% source system will never work. But that's a side issue, really.
  2. Compiling is slow, even today. Compilers are getting better and CPUs are getting faster, and tools like make allow code to be compiled in parallel. But it's still slower than copying a file from the installation media to disk. A lot slower.
  3. Modern languages (a.k.a. scripting languages) usually compile on the fly. That solves the issue at the cost of runtime speed. But they are getting better; in a couple of years, they'll catch up. In the end, it's only CPU power that limits how many optimizations you can run in a scripting language.
  4. Companies don't want to hand out source code.
  5. To compile something from source, you need everything it depends on, so you need to compile those dependencies too, even if you don't really need them. Imagine an image manipulation program: it can read lots of different file formats. Do you really want to compile the libraries for every exotic image format out there before you can start installing the program itself?

OSS solves issue #4; Gentoo solves #5. Right now, we're just stuck at #2, really. Today's CPUs are just too slow to build something like games or MS Office from source code.

Aaron Digulla
A: 

It depends on the type of project you are building.

A word editor that spends 100 times more time waiting for input than doing calculations doesn't need a binary, but a real-time game will surely need every last bit of speed, especially one with AI players.

And there are more cases: operating systems, graphics editors (you don't want to wait 5 minutes for an effect to be processed), simulators, research tools. So most of the interesting things that can be done with a computer need speed at runtime, and not so much compile-time speed.

csiz