tags:

views: 227

answers: 5

I am developing a portable C++ application and looking for some best practices for doing that. This application will have frequent updates, and I need to build it in such a way that parts of the program can be updated easily.

  1. For a frequently updated program, is splitting the program into libraries the best practice? If the parts are in separate libraries, users can just replace a library when something changes.
  2. If the answer to point 1 is "yes", what type of library should I use? On Linux, I know I can create a "shared library", but I am not sure how well that carries over to Windows. I am also aware of the DLL hell issues on Windows.

Any help would be great!

A: 

As soon as you're trying to deal with both Windows and a UNIX system like Linux, life gets more complicated.

What are the service requirements you have to satisfy? Can you control when client systems get upgraded? How many systems will you need to support? How much of a backward-compatibility requirement do you have?

Charlie Martin
Thanks for the reply. This is an open-source product. I don't require too much backward compatibility.
Appu
+3  A: 
  1. Yes, using libraries is good, but the idea of "simply" replacing a library with a new one may be unrealistic, as library APIs tend to change and apps often need to be updated to take advantage of, or even stay compatible with, different versions of a library. With a good amount of integration testing, though, you'll be able to support a range of different versions of the library. Or, if you control the library code yourself, you can make sure that changes to it never break the application.

  2. On Windows, DLLs are the direct equivalent of shared libraries (.so) on Linux, and if you compile both in a common environment (either cross-compiling or using MinGW on Windows) then the linker will handle them the same way, presuming, of course, that all the rest of your code is cross-platform and configures itself correctly for the target platform (see the export-macro sketch after this list).

    IMO, DLL hell was really more of a problem in the old days, when applications all installed their DLLs into a common directory like C:\WINDOWS\SYSTEM, which people don't really do any more precisely because it creates DLL hell. You can place your shared libraries somewhere more appropriate where they won't interfere with other, unaware apps, or, simplest of all, keep them in the same directory as the executable that needs them.
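To make the "compile the same library for both platforms" point concrete, here is a minimal sketch of the usual export-macro pattern; MYLIB_API, MYLIB_BUILD and mylib_version are made-up names for the example:

    // mylib.h - one header used for both the Windows DLL and the Linux .so
    #if defined(_WIN32)
      #if defined(MYLIB_BUILD)                  // defined only while building the library itself
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    // A function exported from the replaceable library part.
    extern "C" MYLIB_API const char* mylib_version();

    // mylib.cpp - compiled into mylib.dll on Windows or libmylib.so on Linux
    #define MYLIB_BUILD
    #include "mylib.h"

    extern "C" const char* mylib_version() { return "1.0.2"; }

The application links against the import library (Windows) or the .so (Linux) in the same way, and an updated mylib.dll/libmylib.so can be dropped in as long as the exported interface has not changed.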

thomasrutter
Okay. That makes sense. Thanks for the answer.
Appu
+3  A: 

I'm not entirely convinced that separating out the executable portions of your program simplifies upgrades in any way. It might, in some rare cases, make the update installer smaller, but the effort will be substantial, and it certainly won't be worth it the one time you get it wrong. In most cases, replace all executable code as a single unit.

On the other hand, you want to be very careful about messing with anything your users might have changed. Draw a bright line between the part of the application that is just code and the part that is user data. Handle the user data with care.
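For example, a rough sketch of keeping user data in a per-user location that an upgrade never touches; the directory names and the "MyApp" folder here are just assumptions for the illustration:

    #include <cstdlib>
    #include <string>

    // Resolve a per-user, writable directory that the updater never overwrites,
    // instead of storing settings next to the executable it replaces.
    std::string userDataDir() {
    #if defined(_WIN32)
        const char* base = std::getenv("APPDATA");   // e.g. C:\Users\name\AppData\Roaming
        return std::string(base ? base : ".") + "\\MyApp";
    #else
        const char* base = std::getenv("HOME");
        return std::string(base ? base : ".") + "/.config/myapp";
    #endif
    }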

TokenMacGuy
Small point upgrades that only fix a couple of bugs or security holes can often benefit from the application being broken up, because you only need to redistribute the files that have changed. Whether that is a good justification for breaking up your code into different libraries is another matter. A more complex approach that doesn't require splitting files is to distribute a binary diff against the previous version, which is how point upgrades to Mozilla Firefox work, for example.
thomasrutter
Whether incremental updates are worth it depends a whole lot on your volumes. At 10 million downloads, the advantages of a smaller installer are substantial. The costs of doing it right are fixed. But all steps in picking components for this installer should be automated. A manual process will fail from time to time.
MSalters
+2  A: 

If it is an application, my first choice would be to ship a single statically-linked executable. I had the opportunity to work on a product that shipped to 5 platforms (Win2K, WinXP, Linux, Solaris, Tru64 Unix), and believe me, maintaining shared libraries or DLLs with a large codebase is a hell of a task. Suppose this is a non-trivial application that uses a third-party GUI, threads, etc. With C++ there is no single way of doing that on all platforms, which means you will have to maintain different codebases for different platforms anyway.

Then there are the weird behaviours (bugs) of third-party libraries on different platforms. All of this becomes a burden if the application ships with different library versions, i.e. different versions attached to different platforms. I have seen people ship libraries to all platforms when the fix was only for one particular platform, just to avoid the versioning confusion. But it is not that simple: customers often have their own ideas about how they want to upgrade or patch, which also has to be considered.

Of course, if the binary you are building is huge, then one can consider DLLs/shared libraries. Even in that case, what I would suggest is to build your application in the form of layers, like: Application --> GUI --> Platform --> Base --> Fundamental

That way some libraries can contain code common to all platforms, and only specific libraries such as 'Platform' need to be updated for platform-specific behaviour. This will make your life a lot easier.
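As a rough sketch of what that layering might look like in code (all names here are illustrative, not from the answer itself):

    // platform.h - the hypothetical "Platform" layer's public interface
    #include <string>

    class Platform {
    public:
        virtual ~Platform() = default;
        virtual std::string configDir() const = 0;   // platform-specific behaviour lives behind this
        virtual void sleepMs(unsigned ms) const = 0;
    };

    // Exactly one implementation (platform_win.cpp or platform_posix.cpp) is built per
    // target; the GUI and Application layers above only ever see this interface, so a
    // Windows-only fix means rebuilding and shipping only the Platform library.
    Platform& platform();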

IMHO a DLL/shared-library option is viable when you are building a product that acts as a complete solution rather than just an application. In such a case different subsystems within your product framework use common logic simultaneously, and that logic can be shared in memory using DLLs/shared libraries.
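If the shared code really is picked up at run time, so that replacing the file on disk is enough to update it, the usual mechanism is dlopen on Linux and LoadLibrary on Windows. A rough sketch, assuming the library exports an extern "C" function named plugin_version (both names are hypothetical):

    #if defined(_WIN32)
      #include <windows.h>
    #else
      #include <dlfcn.h>      // link with -ldl on Linux
    #endif

    typedef const char* (*VersionFn)();

    // Look up plugin_version() in the shared library at run time.
    VersionFn loadVersionFn() {
    #if defined(_WIN32)
        HMODULE lib = LoadLibraryA("plugin.dll");
        return lib ? reinterpret_cast<VersionFn>(GetProcAddress(lib, "plugin_version")) : nullptr;
    #else
        void* lib = dlopen("./libplugin.so", RTLD_NOW);
        return lib ? reinterpret_cast<VersionFn>(dlsym(lib, "plugin_version")) : nullptr;
    #endif
    }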

HTH,

Abhay
MSalters
To be specific, what I intended to say is that one can wrap the third-party libraries in a layer-wise hierarchy for better maintainability. I do not see where I am suggesting reinventing the wheel!
Abhay
A: 

To answer your question with a question, why are you making the application native if being portable is one of the key goals?

You could consider moving to a virtual platform like Java or .NET/Mono. You can still write C++ libraries (shared libraries on Linux, DLLs on Windows) for anything that would be better as native code, but the bulk of your application will be genuinely portable.
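As an illustration of that split, the native piece usually boils down to a C-callable entry point that the managed side binds to; the function name and export macro below are hypothetical:

    // fastmath.cpp - a small helper kept native for speed, callable from Java (via JNA)
    // or .NET/Mono (via P/Invoke) because it is exported with plain C linkage.
    #if defined(_WIN32)
      #define EXPORT extern "C" __declspec(dllexport)
    #else
      #define EXPORT extern "C" __attribute__((visibility("default")))
    #endif

    EXPORT double sum_squares(const double* values, int count) {
        double total = 0.0;
        for (int i = 0; i < count; ++i)
            total += values[i] * values[i];
        return total;
    }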

justinhj
GUI toolkits like wxWidgets and GTK+ (and now Qt 4) can make cross-platform development a lot easier without resorting to application compatibility layers or virtual machines.
thomasrutter
I've written a lot of cross-platform code, and there's a lot of baggage to add to a system beyond the toolkits: things like threading, synchronization and networking. Why bother making your own compatibility layers when you get all of that for free with VMs?
justinhj