I'm developing an FTP-like program to download a large number of small files onto an Xbox 360 devkit (which uses Winsock), and porting it to a PlayStation 3 devkit (which uses Linux, AFAIK). The program uses BSD-style sockets (TCP). Both programs communicate with the same server and download the same data. The program iterates through all the files in a loop like this:

for each file
    send(retrieve command)
    send(filename)
    receive(response)
    test response
    receive(size)
    receive(data)

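In C, each iteration looks roughly like the sketch below (the command byte, status byte, and length framing are assumptions, since the real wire format isn't shown; recv_all() just loops until the requested number of bytes has arrived):

#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Loop until exactly len bytes have been received (or an error/EOF occurs). */
static int recv_all(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* One iteration of the download loop: request a file, then read the
 * response byte, a 32-bit size, and the file data itself.
 * The small sends are assumed to go out in one call each. */
static int fetch_file(int sock, const char *filename, char *buf, size_t bufsize)
{
    uint8_t  cmd = 0x01;    /* hypothetical "retrieve" command byte */
    uint8_t  response;
    uint32_t size;

    if (send(sock, &cmd, sizeof cmd, 0) < 0)                 return -1;
    if (send(sock, filename, strlen(filename) + 1, 0) < 0)   return -1;
    if (recv_all(sock, &response, sizeof response) < 0)      return -1;
    if (response != 0)      /* hypothetical "OK" status */   return -1;
    if (recv_all(sock, &size, sizeof size) < 0)              return -1;
    size = ntohl(size);
    if (size > bufsize)                                       return -1;
    return recv_all(sock, buf, size);
}
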
On the Xbox 360 implementation, the whole download takes 1:27, and the time between the last send and first receive takes about 14 seconds. This seems quite reasonable to me.

The PlayStation 3 implementation takes 4:01 for the same data. The bottleneck seems to be between the last send and first receive, which accounts for 3:43 of that time. The network and disk times are both significantly less than on the Xbox 360.

Both these devkits are on the same switch as my PC, which does the file serving, and there is no other traffic on said switch.

I've tried setting the TCP_NODELAY flag, which didn't change things significantly. I've also tried setting SO_SNDBUF/SO_RCVBUF to 625KB, which also didn't significantly affect the time.
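
For reference, this is how those options are set on a standard BSD sockets layer (a sketch; the devkit SDKs may expect slightly different headers or types):

#include <netinet/tcp.h>
#include <sys/socket.h>

static void tune_socket(int sock)
{
    int one = 1;
    int bufsize = 625 * 1024;   /* the 625KB value mentioned above */

    /* Disable Nagle's algorithm so small sends go out immediately. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

    /* Enlarge the kernel send/receive buffers. */
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof bufsize);
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof bufsize);
}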

I'm assuming that the difference lies in the TCP/IP stack implementations of Winsock and Linux; is there some socket option I could set to make the Linux implementation behave more like Winsock? Is there something else I'm not accounting for?

The only solution looks to be to rewrite it so that it sends all the file requests together, then receives them all.
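
A rough sketch of that pipelined approach (request_file() and read_reply() are hypothetical helpers wrapping the sends and receives from the loop above; the server has to answer in request order for this to work):

/* Phase 1: queue every request without waiting for a reply. */
for (int i = 0; i < num_files; i++)
    request_file(sock, filenames[i]);   /* send(command) + send(filename) */

/* Phase 2: read the replies back in the same order. */
for (int i = 0; i < num_files; i++)
    read_reply(sock, filenames[i]);     /* recv(response) + recv(size) + recv(data) */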

Unfortunately, Sony's implementation does not have the TCP_CORK option, so I cannot say if that is the difference.

+2  A: 

You want TCP_CORK. It'll prevent partial frames from being sent, increasing throughput (at the expense of latency) - just like Winsock.

/* Cork the socket: queue up partial frames instead of sending them. */
int v = 1;
setsockopt(fd, IPPROTO_TCP, TCP_CORK, &v, sizeof(v));

Set v=0 to flush the frames before receive:

/* Uncork: transmit whatever is queued right away. */
int v = 0;
setsockopt(fd, IPPROTO_TCP, TCP_CORK, &v, sizeof(v));

On most Unixes you can improve your throughput further by using writev() or sendfile()...
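
For example, writev() lets the command and filename go out as a single segment rather than two (a sketch; the command byte and NUL-terminated filename are assumptions about the protocol):

#include <string.h>
#include <sys/uio.h>

/* Gather the (hypothetical) command byte and the filename into one
 * writev() call, so the stack can emit a single segment instead of
 * two partial frames. */
static void send_request(int fd, const char *filename)
{
    char cmd = 0x01;                 /* hypothetical "retrieve" command */
    struct iovec iov[2];

    iov[0].iov_base = &cmd;
    iov[0].iov_len  = sizeof cmd;
    iov[1].iov_base = (void *)filename;
    iov[1].iov_len  = strlen(filename) + 1;

    writev(fd, iov, 2);
}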

geocar
+1  A: 

Wireshark is your friend: sniff the wire, look at the packets, see how each exchange is sequenced, and see if you can spot the difference/problem.

On high-latency links you really want to make sure you buffer as much as possible, keeping each TCP packet maxed out.

Send coalescing (Nagle's algorithm) is usually a good idea. It only triggers when there is more than one unacknowledged frame queued on the send side. Typically you should ONLY disable this feature if you know what you're doing and your system provides comprehensive buffering; otherwise disabling it is certain to negatively affect system performance on high-latency networks.

For highest throughput, buffer boundaries should be exact multiples of the path MTU.
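
One way to find that unit is to ask the stack for the connection's MSS and size your buffers as a multiple of it - a sketch, assuming TCP_MAXSEG is exposed (it is on Linux and most BSD-derived stacks, but a console SDK may not offer it):

#include <stddef.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Ask the stack for this connection's maximum segment size, then
 * size the application buffer as an exact multiple of it. */
static size_t pick_buffer_size(int fd)
{
    int mss = 0;
    socklen_t len = sizeof mss;

    if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) != 0 || mss <= 0)
        return 64 * 1024;            /* fall back to a fixed size */

    return (size_t)mss * 64;         /* e.g. 64 full segments per read */
}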