views: 261
answers: 8

I need two different programs to work on a single set of data. I can set up a network (UDP) connection between them, but I want to avoid transferring the whole data set if at all possible.

It sounds a little absurd, but is it possible to share some kind of pointer between these two programs, so that when one updates the data the other can simply get the pointer and start using it?

I am using Ubuntu 9.10

A: 

No, sorry. Long ago I heard of an experimental OS with a very large address space, parts of which lived on one machine and other parts on other machines. It would have allowed exactly what you ask...

Note: I am assuming that the two programs run on different machines. If they are simply different processes on the same machine, you can use named shared memory sections to share data.

Timores
+12  A: 

You're talking about IPC - Interprocess Communication. There are many options.

One is a memory-mapped file. It comes close to doing what you described, though it may or may not be the optimal approach for your requirements. Read up on IPC to get some depth.
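As a minimal sketch of the memory-mapped file approach (the path `/tmp/shared_demo.dat` and the helper name are just examples; error handling is kept to a minimum):

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHARED_SIZE 4096

/* Map (creating if necessary) a file of SHARED_SIZE bytes into memory.
   Every process that maps the same file with MAP_SHARED sees the same
   bytes. Returns NULL on failure. */
void *map_shared_file(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, SHARED_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);  /* the mapping stays valid after the fd is closed */
    return p == MAP_FAILED ? NULL : p;
}
```

Each process calls `map_shared_file()` with the same path; a write by one is visible to the other through the shared mapping.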

Cheeso
Here's a link that might help. It uses the Boost libraries: http://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/quick_guide.html#interprocess.quick_guide.qg_interprocess_map
Dan
That's right, but Linux's IPC mechanisms are not user-friendly at all...
Niklaos
IPC is generally not easy. Things like files (including memory-mapped files) and sockets get cleaned up by the OS on process termination. Shared memory generally does not. So you generally don't know whether a given shared memory block is in use, and your program has to somehow determine whether the shared memory block it needs must be created first.
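The create-or-attach decision described here can be sketched with POSIX `shm_open()` and `O_CREAT | O_EXCL`, so that exactly one process ends up as the creator (the function name below is a hypothetical example):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Returns an fd for the named shared memory object. Sets *created to 1
   if this call created the object (and must initialize its contents),
   or 0 if it already existed. Returns -1 on error. */
int shm_open_or_create(const char *name, off_t size, int *created)
{
    /* O_EXCL makes creation atomic: only one caller can win. */
    int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd >= 0) {
        *created = 1;
        if (ftruncate(fd, size) < 0) {
            close(fd);
            shm_unlink(name);
            return -1;
        }
        return fd;
    }
    if (errno != EEXIST)
        return -1;
    *created = 0;   /* someone else created it; just attach */
    return shm_open(name, O_RDWR, 0600);
}
```

On older glibc you may need to link with `-lrt` for the `shm_*()` functions.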
Mike DeSimone
Niklaos: Good thing we're Programmers and not Users, then ;)
caf
+1  A: 

POSIX shared memory functions work on Unix flavors. IBM mainframes (370/XA/ESA/z/OS) can use cross-memory services at a low level. You also have to consider whether your app will ever need to scale beyond a single processor.

+9  A: 

What you're looking for is usually called a "shared memory segment", and how you access it is platform-specific.

On POSIX (most Unix/Linux) systems, you use the shm_*() APIs (shm_open() and friends, declared in sys/mman.h), or the older System V shmget()/shmat() calls from sys/shm.h.

On Win32, it's done with memory-mapped files, so you'll use CreateFileMapping()/MapViewOfFile() etc.

Not sure about Macs, but you can probably use shm_*() there as well.
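A minimal POSIX sketch of the approach, assuming Linux (the name "/myregion" below is a hypothetical example; POSIX shared memory names must start with a slash):

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

/* Open (creating if necessary) a named shared memory object and map it
   into this process. Every process that calls this with the same name
   sees the same bytes. Returns NULL on failure. */
void *attach_region(const char *name)
{
    int fd = shm_open(name, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, REGION_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);  /* mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}
```

One process writes into `attach_region("/myregion")`, the other reads from it; call `shm_unlink("/myregion")` when the region is no longer needed. Link with `-lrt` on older glibc.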

Drew Hall
shm_* will work as well on Macs, though Mach ports are the preferred way.
zneak
A: 

Putting aside the fact that it can be done: interprocess communication should never be done by sharing resources, least of all memory spaces. That is a recipe for disaster.

Proper IPC is done by proper means of communication, such as sockets. Sharing memory is never the way to go.

Yuval A
Why never? It depends on the application; even the Apache web server uses a shared-memory "scoreboard" for communication between its processes.
frunsi
In general, it's better to try and avoid two programs/threads writing to the same memory (though sometimes it really is the best way). However, if one of the processes/threads is only reading the memory, this is largely OK, with all the caveats that apply to using shared memory.
Chinmay Kanchi
+2  A: 

Shared memory can give about the highest bandwidth of any form of IPC available, but it's also kind of a pain to manage -- you need to synchronize access to the shared memory, just like you would with threads. If you really need that raw bandwidth, it's about the best there is -- but a design that needs that kind of bandwidth is often one with a poorly chosen dividing line between the processes, in which case it may be unnecessarily difficult to get it to work well.
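The synchronization this answer mentions can be sketched with a process-shared pthread mutex placed inside the shared region itself (a minimal sketch, assuming POSIX threads; the struct layout is just an example):

```c
#define _DEFAULT_SOURCE
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>

struct shared_region {
    pthread_mutex_t lock;   /* guards the data below */
    int counter;            /* example shared data */
};

/* Allocate an anonymous shared mapping (inherited across fork()) and
   initialize a process-shared mutex inside it. Returns NULL on failure. */
struct shared_region *make_shared_region(void)
{
    struct shared_region *r = mmap(NULL, sizeof *r,
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (r == MAP_FAILED)
        return NULL;

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* PTHREAD_PROCESS_SHARED lets the mutex synchronize processes,
       not just threads of one process. */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&r->lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return r;
}
```

After `fork()`, parent and child both take `r->lock` before touching `r->counter`. Compile with `-lpthread`.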

Also note that pipes (for one example) are a lot easier to use, and still have pretty serious bandwidth -- they still (normally) use a kernel-allocated buffer in memory, but they automate synchronizing access to it. The loss of bandwidth comes from the fact that automating synchronization requires a fairly pessimistic locking algorithm. Even so, that doesn't impose a huge amount of overhead...
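For comparison, here is a minimal pipe sketch between a parent and a forked child; the kernel handles the buffering and synchronization (function name and message are just examples):

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that writes a message into the pipe; the parent reads
   it back. Returns 0 if the message arrived intact, -1 otherwise. */
int demo_pipe(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                 /* child: writer */
        close(fds[0]);              /* close unused read end */
        const char *msg = "hello";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                  /* parent: reader */
    char buf[16] = {0};
    read(fds[0], buf, sizeof buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return strcmp(buf, "hello") == 0 ? 0 : -1;
}
```

No explicit locking is needed: `read()` simply blocks until the child has written something.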

Jerry Coffin
IPC mechanism performance varies depending on the OS; there is no guarantee about the performance of shared anything. On some systems shared memory is even slower than other methods, like the pipes you proposed.
kriss
+1  A: 

Perhaps using "memcached" as a broker between your two processes might be better; each process can then exchange keys with the other.

You're constrained to (I believe) 1024 KB per key/value pair or less, but the immediate benefits are interoperability, stability, and the future ability to connect multiple processes on multiple machines together.

David
+1  A: 

If you really, really need to do that, it's a hint that your two programs should perhaps be merged into one program with two threads (if you have the program sources, that's a piece of cake to do).

kriss