views:

514

answers:

9

My colleague claims that we should dissect our C++ application (Linux) into shared libraries to improve code modularity, testability, and reuse.

From my point of view it's a burden, since the code we write does not need to be shared between applications on the same machine, nor dynamically loaded or unloaded, and we can simply link a monolithic executable.

Furthermore, wrapping C++ classes with C-function interfaces IMHO makes the code uglier.

I also think a single-file application will be much easier to upgrade remotely at a customer's site.

So my question is: should dynamic libraries be used when there is no need to share binary code between applications and no dynamic code loading?

Thanks.

+5  A: 

I'd say that splitting code into shared libraries "to improve" things without having any immediate goal in mind is a sign of a buzzword-infested development environment. It is better to write code that can easily be split at some point.

But why would you need to wrap C++ classes in C-function interfaces at all, except, maybe, for object creation?

Also, splitting into shared libraries here sounds like an interpreted-language mindset. In compiled languages you try not to postpone until runtime what you can do at compile time. Unnecessary dynamic linking is exactly such a case.

Michael Krelin - hacker
Incompatibilities between compilers make it safer not to use any C++ features in shared-library interfaces. For example, if the layout of a class is implemented differently on the client's side (different alignment, for example), code compiled by their compiler would cause errors when applied to your data structures. There were also vtable issues with some old compilers.
Basilevs
Basilevs, that's generally true, but from what I understand it in no way applies to this case.
Michael Krelin - hacker
@Basilevs: The solution is to offer two DLLs: one C++ DLL which will be used by client code built with the same compiler, and another offering a C interface to the first. There is no need to penalize clients who use the same compiler by imposing a C interface on them.
paercebal
@Basilevs: And hacker is right: in the current case, the compiler is already known, and is the same for all the static libraries. I see no reason for this to change should they try the "shared library" solution. So, basically, there is no need for a C interface between the shared libraries. IMHO, this "C interface" argument shows the question's author is not familiar with shared libraries.
paercebal
+2  A: 

Shared libraries come with their own headaches, but I think they are the right way to go here. I would say that in most cases you should be able to make parts of your app modular and reusable elsewhere in your business. Also, depending on the size of this monolithic executable, it may be easier to just upload a set of updated libraries instead of one big file.

IMO, libraries in general lead to better, more testable code, and allow future projects to be created more efficiently because you're not reinventing the wheel.

In short, I agree with your colleague.

RC
But where is the point in modularizing code that will probably never be reused? I would agree to the modularization, but only up to the point where it is worth the effort.
Shirkrin
IMO, modularization is worth the effort in most cases, especially when talking about a "monolithic" app. I don't find it that time-consuming to compile code into libraries and use them. While libraries do nothing magical, I feel they help make dependencies more apparent and lead to better testability, faster compile times, and a better environment. If the code does end up being used again later, who decides to make it a library, and when? Will the programmer take the time to do it then, or just copy the files into his/her project and compile? Doing it early avoids a mess.
RC
+5  A: 

Enforcing shared libraries ensures that the libraries don't have circular dependencies. Using shared libraries often leads to faster linking, and link errors are discovered at an earlier stage than if no linking happens before the final application is linked. If you want to avoid shipping multiple files to customers, you can link the application dynamically in your development environment and statically when creating release builds.

EDIT: I don't really see why you would need to wrap your C++ classes in C interfaces; this is done behind the scenes. On Linux you can use shared libraries without any special handling. On Windows, however, you would need __declspec(dllexport) and __declspec(dllimport).
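A minimal sketch of that Windows export/import dance, wrapped in the usual portability macro (the MYLIB_* names are illustrative, not from the original post):

```cpp
// Hypothetical portability macro: on Windows, the DLL build exports the
// class and client builds import it; on Linux/GCC we just mark it visible.
#if defined(_WIN32)
  #ifdef MYLIB_BUILD                        // defined when building the DLL
    #define MYLIB_API __declspec(dllexport)
  #else
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  #define MYLIB_API __attribute__((visibility("default")))
#endif

// A plain C++ class exported as-is: no C wrapper needed.
class MYLIB_API Counter {
public:
    void add(int n) { total_ += n; }
    int total() const { return total_; }
private:
    int total_ = 0;
};
```

Client code then uses `Counter` exactly as it would from a static library; only the macro differs per platform.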

larsm
You can, at least on some systems, have shared libraries with unresolved symbols, so it's possible to create a group of shared libraries such that they must all be linked into an application together.
KeithB
+2  A: 

Short answer: no.

Longer answer: dynamic libraries add nothing to testing, modularity, or reuse that cannot be achieved just as easily in a monolithic app. About the only benefit I can think of is that it may force the creation of an API in a team that does not have the discipline to create one on its own.

There is nothing magical about a library (dynamic or otherwise). If you have all of the code to build an application and the assorted libraries, you can just as easily compile it all together into a single executable.

In general, we've found that the costs of dealing with dynamic libraries are not worth it unless there is a compelling need (libraries shared by multiple applications, needing to update a number of applications without recompiling, enabling the user to add functionality to the application).

KeithB
+4  A: 

Improve reuse even though there will not be any? Doesn't sound like a strong argument.

Modularity and testability of code need not depend upon the unit of ultimate deployment. I would expect linking to be a late decision.

If you truly have one deliverable and never anticipate any change to that, then it sounds like overkill and needless complexity to deliver in pieces.

djna
+1  A: 

Do a simple cost/benefit analysis - do you really need modularity, testability and reuse? Do you have the time to spend refactoring your code to get those features? Most importantly, if you do refactor, will the benefits you gain justify the time it took to perform the refactoring?

Unless you have issues with testing now, I'd recommend leaving your app as-is. Modularization is great but Linux has its own version of "DLL hell" (see ldconfig), and you've already indicated that reuse is not a necessity.

Ian Kemp
who doesn't need testability?
just somebody
+1  A: 

On Linux (and Windows) you can create a shared library using C++ and not have to load it using C function exports.

I.e. you build classA.cpp into classA.so, and you build classB.cpp into classB(.exe), which links to classA.so. All you're really doing is splitting your application into multiple binary files. This does have the advantage that they are faster to compile and easier to manage, and you can write apps that load just the library code they need for testing.

Everything is still C++, everything links, but your .so is separate from your statically linked application.
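As a sketch of that split (the file names and compiler flags below are illustrative, assuming g++ on Linux), the library class could be as plain as this, with the build commands shown in comments:

```cpp
// Hypothetical classA.h/classA.cpp, built into a shared library with:
//   g++ -fPIC -shared classA.cpp -o libclassA.so
// The application (classB.cpp) then links against it as ordinary C++:
//   g++ classB.cpp -L. -lclassA -Wl,-rpath,'$ORIGIN' -o classB
// No C wrapper is involved; the linker resolves the mangled C++ symbols.
class A {
public:
    int twice(int n) const { return 2 * n; }
};
```

The `-rpath,'$ORIGIN'` bit just lets the executable find the .so sitting next to it, instead of requiring it in a system directory.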

Now, if you want to load a different object at runtime (i.e. you don't know which one to load until runtime), then you need to create a shared object with C exports, and you will also have to load those functions manually; you cannot rely on the linker to do it for you.
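As an illustration of that manual loading, here is a sketch using dlopen/dlsym; it resolves cos() from the system math library, standing in for a C-exported factory function in your own plugin .so (the glibc soname libm.so.6 is assumed, and older systems may need linking with -ldl):

```cpp
#include <dlfcn.h>  // dlopen/dlsym; may require linking with -ldl

using unary_fn = double (*)(double);

// Open a library by path at runtime and look up a C-exported symbol by
// name. With your own plugin you would dlopen("libplugin.so") instead.
unary_fn load_cosine() {
    void* lib = dlopen("libm.so.6", RTLD_NOW);
    if (!lib) return nullptr;
    // dlsym returns void*; cast it back to the function's real signature.
    return reinterpret_cast<unary_fn>(dlsym(lib, "cos"));
}
```

Note that only `extern "C"` symbols can be looked up by their plain names like this; C++ symbols would have to be requested by their mangled names, which is why runtime-loaded entry points are usually C exports.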

gbjbaanb
+1  A: 

If you're asking the question and the answer isn't obvious, then stay where you are. If you haven't gotten to the point where building a monolithic application takes too long, or where it's too much of a pain for your group to work on together, then there's no compelling reason to move to libraries. You can build a test framework that works on the application's files as they stand, or you can simply create another project that uses the same files but adds a testing API and builds a library with that.

For shipping purposes, if you want to build libraries and ship one big executable, you can always link to them statically.

If modularity would help with development, i.e. you're always butting heads with other developers over file modifications, then libraries may help, but that's no guarantee either. Using good object-oriented code design will help regardless.

And there's no need to wrap any functions with C-callable interfaces to create a library, unless you want it to be callable from C.

+1  A: 

After re-reading your question, I re-edited my answer.

Dissecting your colleague's arguments

If he believes that splitting your code into shared libraries will improve code modularity, testability, and reuse, then I guess this means he believes you have some problems with your code, and that enforcing a "shared library" architecture will correct them.

Modularity?

Your code must have undesired interdependencies that would not have happened with a cleaner separation between "library code" and "code using library code".

Now, this can be achieved through static libraries, too.

Testing?

Your code could be tested better, perhaps building unit tests for each separate shared library, automated at each compilation.

Now, this can be achieved through static libraries, too.

Reuse of code?

Your colleague would like to reuse some code that is not exposed because it is hidden in the sources of your monolithic application.

Conclusion

The first two points (modularity and testing) can still be achieved with static libraries. Only the third (reuse) would make shared libraries mandatory.

Now, if you have more than one depth of library linking (I'm thinking about linking together two static libraries which were themselves compiled by linking other libraries), this can get complex. On Windows, this leads to link errors because some functions (usually the C/C++ runtime functions, when linked statically) are referenced more than once, and the linker can't choose which one to use. I don't know how this works on Linux, but I guess it could happen there, too.

Dissecting your own arguments

Your own arguments are somewhat biased:

Burden of compilation/linking of shared libraries?

The burden of compiling and linking shared libraries, compared to compiling and linking static libraries, is non-existent. So this argument has no value.

Dynamically loading/unloading?

Dynamically loading/unloading a shared library could be a problem in a very limited set of use cases. In normal cases, the OS loads/unloads the library when needed without your intervention, and anyway, your performance problems lie elsewhere.

Exposing C++ code with C interfaces?

As for using a C-function interface for your C++ code, I fail to understand: you already link together static libraries with C++ interfaces. Linking shared libraries is no different.

You would have a problem if each library of your application were produced by a different compiler, but this is not the case, as you already link your libraries statically.

A single file binary is easier?

You're right. On Windows, the difference is negligible, but there is still the problem of DLL hell, which disappears if you add the version to your library names or work with WinXP. On Linux, in addition to the Windows problem above, the shared libraries by default need to be in certain system directories to be usable, so you'll have to copy them there at install time (which can be a pain...) or change some default environment settings (which can be a pain, too...).

Conclusion: Who is right?

Now, your problem is not "is my colleague right?". He is. And so are you.

Your problem is:

  1. What do you really want to achieve?
  2. Is the work necessary for this task worth it?

The first question is very important, as it seems to me that your arguments and your colleague's arguments are biased to lead to the conclusion that seems more natural for each of you.

Put another way: each of you already knows what the ideal solution should be (according to your respective viewpoints), and each of you stacks up arguments to reach that solution.

There is no way to answer that hidden question...

^_^

paercebal
tl;dra (too long; did read anyway): the OP doesn't suggest they're linking static libraries, and without such a hint it should be assumed that they produce the executable by linking together a large number of object files. Also, it appears to me that the OP might be biased to the point of (subconsciously) conflating static and shared libraries; I find it hard to believe that his colleague wouldn't know that (most of) the benefits he was after could be achieved by using static libraries.
just somebody