I'm sending a very large string from one application to another on localhost using sockets in Python. Small strings arrive instantly, but large strings take noticeably longer (I say large, but I'm talking a megabyte or two at the very most). It's enough that after I do something in one app, I have to sit and wait a few seconds before it shows up in the other.

What bottlenecks are involved here? As I understand it, with sockets on 127.0.0.1 all I'm really doing is moving data from one point in memory to another, so transferring even hundreds of MB at a time should feel practically instant on my workstation.
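For concreteness, the pattern I mean is roughly the following (a simplified sketch, not my actual code; the host and port values are illustrative):

    import socket

    HOST, PORT = "127.0.0.1", 50007  # illustrative values

    def receive_all() -> bytes:
        # Accept one connection and read until the sender closes it.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                chunks = []
                while True:
                    chunk = conn.recv(65536)
                    if not chunk:        # empty read: the peer closed the connection
                        break
                    chunks.append(chunk)
                return b"".join(chunks)

    def send_string(payload: bytes) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(payload)           # loops internally until every byte is queued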

+3  A: 

You are still moving the data through the entire network stack, just not going out through the network interface card itself.

There may be shortcuts taken around parts of the network stack for localhost traffic, but that depends on how the stack is implemented on the system you are using. Regardless, shared memory or pipes will likely be much faster.
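For pipes specifically, a Unix named pipe (FIFO) lets two unrelated processes exchange bytes through the kernel without the TCP machinery. A minimal sketch, assuming a Unix system (the path is illustrative):

    import os

    FIFO_PATH = "/tmp/demo_fifo"  # illustrative path; os.mkfifo is Unix-only

    def write_payload(payload: bytes) -> None:
        if not os.path.exists(FIFO_PATH):
            os.mkfifo(FIFO_PATH)
        with open(FIFO_PATH, "wb") as fifo:  # open() blocks until a reader attaches
            fifo.write(payload)

    def read_payload() -> bytes:
        with open(FIFO_PATH, "rb") as fifo:  # open() blocks until a writer attaches
            return fifo.read()               # reads until the writer closes its end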

Here is a high-level overview: http://docs.python.org/howto/sockets.html

PS: Not sure if this will work for your case, but the multiprocessing module has some ways of sharing data between several processes efficiently.
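As an illustration of the multiprocessing route: in newer Pythons (3.8+), multiprocessing.shared_memory lets a second process read the same block of memory without pushing it through a socket at all. A minimal sketch (the payload size is illustrative):

    from multiprocessing import Process, shared_memory

    def consumer(name: str, size: int) -> None:
        shm = shared_memory.SharedMemory(name=name)  # attach to the existing block
        data = bytes(shm.buf[:size])                 # copy out; shm.buf itself is zero-copy
        print("received", len(data), "bytes")
        shm.close()

    if __name__ == "__main__":
        payload = b"x" * (2 * 1024 * 1024)           # ~2 MB, like the question
        shm = shared_memory.SharedMemory(create=True, size=len(payload))
        shm.buf[:len(payload)] = payload             # write directly into shared memory
        p = Process(target=consumer, args=(shm.name, len(payload)))
        p.start()
        p.join()
        shm.close()
        shm.unlink()                                 # free the block when done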

PPS: You could try using a UDP socket instead of a TCP socket. This could potentially give you better throughput without drastically changing your method of IPC.
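A sketch of the UDP variant, with two caveats: a single datagram can't exceed roughly 64 KB, so a multi-MB string has to be split, and UDP gives no delivery or ordering guarantees even on loopback. The port and the empty-datagram end marker are illustrative conventions, not anything standard:

    import socket

    HOST, PORT = "127.0.0.1", 50008  # illustrative values
    CHUNK = 60000                    # stay below the ~64 KB UDP datagram limit

    def send_udp(payload: bytes) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            for i in range(0, len(payload), CHUNK):
                s.sendto(payload[i:i + CHUNK], (HOST, PORT))
            s.sendto(b"", (HOST, PORT))  # empty datagram as an end-of-message marker

    def recv_udp() -> bytes:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind((HOST, PORT))
            chunks = []
            while True:
                chunk, _ = s.recvfrom(65536)
                if not chunk:            # hit the end-of-message marker
                    break
                chunks.append(chunk)
            return b"".join(chunks)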

MattK
Would it then make sense to compress the string before sending it through the socket and decompress it on the other side?
directedition
Sockets, on the other hand, are very flexible when it's time to deploy the apps. It's a major coding effort to spread two processes talking via shared memory onto two boxes; with sockets, it's a parameter change. It's always a trade-off.
Nikolai N Fetissov
@directedition: No. The time spent compressing is likely to exceed any transfer time you save.
nosklo
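One way to settle the compression question empirically is to time zlib on a representative payload and compare that cost against the observed transfer delay. A minimal sketch (the payload is illustrative):

    import time
    import zlib

    payload = b"some representative data " * 80000  # ~2 MB of illustrative data

    start = time.perf_counter()
    compressed = zlib.compress(payload)
    compress_time = time.perf_counter() - start

    print(f"{len(payload)} -> {len(compressed)} bytes "
          f"({compress_time * 1000:.1f} ms to compress)")
    # Compression only pays off if the transfer-time saving exceeds this cost
    # plus the decompression time on the receiving side.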