views:

266

answers:

7

Is it possible to use shared object files in a portable way, like DLLs in Windows?

I'm wondering if there is a way I could provide a compiled library, ready to use, for Linux. In the same way you can compile a DLL on Windows and it can be used on any other Windows (OK, not ANY other, but on most of them it can).

Is that possible in Linux?

EDIT:
I've just woken up and read the answers. There are some very good ones.
I'm not trying to hide the source code. I just want to provide an already-compiled, ready-to-use library, so users with no compilation experience don't need to build it themselves.
Hence the idea is to provide a .so file that works on as many different Linux distributions as possible.
The library is written in C++, using STL and Boost libraries.

A: 

Just putting a .so file into /usr/lib may work, but you are likely to mess up your distro's scheme for managing libraries.
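To illustrate, the usual versioned layout looks roughly like the sketch below (libfoo and the version numbers are made up), staged under a local prefix such as /usr/local rather than the distro-managed /usr/lib:

```shell
# Sketch of the conventional versioned install layout for a
# hypothetical libfoo, staged under a local prefix.
PREFIX=./stage
mkdir -p "$PREFIX/lib"
touch "$PREFIX/lib/libfoo.so.1.0.0"                 # stand-in for the real binary
ln -sf libfoo.so.1.0.0 "$PREFIX/lib/libfoo.so.1"    # soname link, used at run time
ln -sf libfoo.so.1     "$PREFIX/lib/libfoo.so"      # dev link, used at link time
```

On a real install you would also run ldconfig afterwards so the dynamic loader's cache picks up the new library.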

Take a look at the Linux Standard Base; that is the closest thing you will find to a common platform amongst Linux distros.

http://www.linuxfoundation.org/collaborate/workgroups/lsb

What are you trying to accomplish?

Jay Dubya
Well, there is the convenience of not having to publish the source, and also avoiding the hassle of compiling.
Unknown
+2  A: 

So, the question is, how do you develop shared libraries for Linux? You could take a look at this tutorial or the Program Library HOWTO.

skoob
No, he is not. He is asking how to remove/avoid the version dependency of the .so.
Francis
A: 

I know what you are asking. On Windows, Microsoft has carefully kept the DLL interfaces compatible, so your DLLs usually work on almost every version of Windows; that's why you can call them "portable".

Unfortunately, on Linux there are too many variations (and every distribution tries to be "different" to make money), so you cannot get the same benefit as on Windows. That's why we have the same packages compiled separately for different distributions, distro versions, CPU types, and so on.

Some say the problem is caused by (CPU) architecture, but it is not. Even on the same architecture, there are still differences between distributions. Once you've actually tried to release a binary package, you will know how hard it is: even the C runtime library dependency is hard to maintain, and almost every service runs into dependency issues.
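You can see these dependencies directly; as a sketch, `ldd` lists the shared libraries a binary pulls in, and `objdump -T` shows the versioned glibc symbols it demands (the exact output varies from system to system):

```shell
# Which shared libraries does a binary depend on?
ldd /bin/ls
# Which versioned glibc symbols does it require? A target system's
# libc must provide at least these versions.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

If the minimum symbol versions listed are newer than what a user's glibc provides, the binary will refuse to start on that system.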

Usually you can only build a binary that is compatible with one distribution (or several, if you are lucky). That's why releasing Linux programs in binary form almost always goes wrong, unless you target a specific distro like Ubuntu, Debian, or RH.

Francis
This is a bit of a defeatist attitude. Linux supports a wider variety of architectures than Windows, you're not going to get around that. But within a single architecture, distributions are almost entirely binary compatible, provided you don't have dependencies on other libraries or executables etc on the system.
MarkR
Apparently you haven't seen how this plays out in the real world, or why and how they ended up doing it that way. If you've ever worked at a Linux software company, you'll know.
Francis
+3  A: 

If you'd like to help your users by giving them compiled code, the best way I know is to give them a statically linked binary plus documentation on how to run it. (This is possibly in addition to giving them the source code.) Most statically linked binaries work on most Linux distributions of the same architecture (and 32-bit (x86) statically linked binaries work on 64-bit (amd64) systems). It is no wonder Skype provides a statically linked Linux download.

Back to your library question. Even if you are an expert in writing shared libraries on Linux, and you take the time to minimize the dependencies so your shared library works on different Linux distributions, including old and new versions, there is no way to ensure that it will keep working in the future (say, in 2 years). You'll most probably end up maintaining the .so file, i.e. making small modifications over and over again so it stays compatible with newer versions of Linux distributions. This is no fun to do for a long time, and it decreases your productivity substantially: the time you spend maintaining library compatibility would be much better spent on e.g. improving the functionality, efficiency, or security of the software.

Please also note that it is very easy to upset your users by providing a library in .so form that doesn't work on their system. (And you don't have the superpower to make it work on all Linux systems, so this situation is inevitable.) Do you provide 32-bit and 64-bit builds, including x86, PowerPC, ARM, etc.? If the .so file works only on Debian, Ubuntu and Red Hat (because you don't have the time to port it to more distributions), you'll most probably upset your SUSE and Gentoo users (and more).

pts
I'm sorry, I don't completely understand this static and dynamic linking stuff yet. Is it possible to link/load a static library (a .a file) at runtime (i.e. dynamically)? Or can only .so files be linked/loaded dynamically?
GetFree
You can add the contents of an .a file to your executable at compile time (this is called static linking). You can tell your executable to fetch some code (symbols) from an .so file when it starts running (this is called dynamic linking). It is not possible to load an .a file dynamically, and it is not possible to add the contents of an .so file to an executable at compile/link time. It is possible to link some libraries statically (if you have the .a files) and others dynamically (if you have the .so files) into the same executable. An executable with no dynamic libraries is a static executable.
pts
+8  A: 

I highly, highly recommend using the LSB app / library checker. It's going to tell you quickly if you:

  • Are using extensions that aren't available on some distros
  • Introduce bash-isms in your install scripts
  • Use syscalls that aren't available in all recent kernels
  • Depend on non-standard libraries (it will tell you what distros lack them)
  • And lots, upon lots of other very good checks

You can get more information here, as well as download the tool. It's easy to run: just untar it, run a Perl script, and point your browser at localhost; the rest is browser-driven.

Using the tool, you can easily get your library / app LSB certified (for both versions) and make the distro packager's job much easier.

Beyond that, just use something like libtool (or similar) to make sure your library is installed correctly. Provide a static archive for people who don't want to link against the DSO (it will take time for your library to appear in most distributions, so as someone writing a portable program, I can't count on it being present), and comment your public interface well.

For libraries, I find that Doxygen works the best. Documentation is very important, it surely influences my choice of library to use for any given task.

Really, again, check out the app checker; it's going to give you portability problem reports that would otherwise take a year of having the library out in the wild to obtain.

Finally, try to make your library easy to drop 'in tree', so I don't have to statically link against it. As I said, it could take a couple of years before it becomes common in most distributions. It's much easier for me to just grab your code, drop it in src/lib and use it, until and unless your library becomes common. And please, please give me unit tests; TAP (Test Anything Protocol) is a good and portable way to do that. If I hack your library, I need to know (quickly) if I broke it, especially when modifying it in tree or in situ (if the DSO exists).

Tim Post
+3  A: 

Ideally, you'll want to use GNU autoconf, automake, and libtool to create configure and make scripts, then distribute the library as source with the generated configure and Makefile.in files.

Here's an online book about them.

./configure; make; make install is fairly standard in Linux.
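For orientation, a minimal autotools setup for a shared library might look like this sketch (project and file names are hypothetical; `autoreconf -i` then `./configure && make && make install` would build and install it via libtool):

```
# configure.ac
AC_INIT([libfoo], [1.0])
AM_INIT_AUTOMAKE([foreign])
LT_INIT
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c
libfoo_la_LDFLAGS = -version-info 1:0:0
```

Libtool's `-version-info current:revision:age` scheme then maps onto the platform's native library-versioning conventions for you.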

The root of the problem is that Linux runs on many different processors. You can't just rely on the processor supporting x86 instructions like Windows does (for most versions: Itanium (XP and newer) and Alpha (NT 4.0) are the exceptions).

R. Bemrose
A: 

Tinkertim's answer is spot on. I'll add that it's important to understand and plan for changes to gcc's ABI. Things have been fairly stable recently, and I guess all the major distros are on gcc 4.3.2 or so. However, every few years some change to the ABI (especially the C++-related bits) seems to cause mayhem, at least for those wanting to release cross-distro binaries, and for users who have gotten used to picking up packages from distros other than the one they actually run and finding that they work. While one of these transitions is going on (all the distros upgrade at their own pace), you ideally want to release libs with ABIs supporting the full range of gcc versions in use by your users.

timday