views: 652
answers: 6

I'm looking for a way for two programs to efficiently transmit a large amount of data to each other, which needs to work on Linux and Windows, in C++. The context here is a P2P network program that acts as a node on the network and runs continuously; other applications (which could be games, hence the need for a fast solution) will use it to communicate with other nodes in the network. If there's a better solution for this, I'd be interested.

+2  A: 

This is a hard problem.

The bottlenecks are the internet itself and the fact that your clients might be behind NAT.

If you are not talking over the internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.

Because it boils down to: use TCP. Suck it up.

Will
The two processes are on the same machine, but one of them is communicating with the internet. The other process must communicate with that one to send data on the P2P network.
Stephen Cross
aha, and I was thinking the question was about how to make a P2P network. I didn't even get started on hole-punching and relay servers and such.
Will
+1  A: 

I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.

Omnifarious
While UDP can be good in certain circumstances, TCP is the way to go if you want the data there in one piece and latency isn't the most critical component.
Xorlev
+8  A: 

boost::asio is a cross-platform library for asynchronous I/O over sockets. You can combine it with, for instance, Google Protocol Buffers for your actual messages.

Boost also provides you with boost::interprocess for interprocess communication on the same machine, but asio lets you do your communication asynchronously and you can easily have the same handlers for both local and remote connections.

villintehaspam
yep, winning combo
Hassan Syed
OK, looks like Boost is the way to go with this... I'm already using it for boost::signal, and I was probably going to use it for boost::asio anyway
Stephen Cross
+4  A: 

I have been using ICE by ZeroC (www.zeroc.com), and it has been fantastic. It's super easy to use, and not only cross-platform, but with support for many languages as well (Python, Java, etc.) and even an embedded version of the library.

Boatzart
ICE either requires that you publish your program under GPL or buy a commercial license, which may or may not pose a problem for the OP.
villintehaspam
The program itself will be licensed under the GPL, so this isn't a problem
Stephen Cross
+1  A: 

So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.

First, ICE (Interactive Connectivity Establishment) is a NAT traversal technique that works with STUN and/or TURN servers out in the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.

Second, use both UPnP and NAT-PMP; libraries exist for both.

Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. It takes very little code to implement, and it's increasingly important. I find about half of BitTorrent data arrives over IPv6, for example.

Andrew McGregor
+1  A: 

Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would be automatically sharing 100% of their memory with each other.)

Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other. The way I would do that would be to create two double-ended queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each double-ended queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue.

(Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data will be automatically freed when the receiving process is done using it.)

Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, to cause the other process to wake up and check its queue, but there are other ways to do it as well.

Jeremy Friesner