I am contemplating a realtime app in which an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI and sends it onward within the host). When I ping my iPod Touch over Wi-Fi, I get huge latency (and enormous variance, too):

64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms
64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms
64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms
64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms
64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms
64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms
64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms
64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms
64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms

Even if these round trips are double what the one-way iPhone->Host (or Host->iPhone) time would be, 15 ms+ is still too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options?

Traceroute reports more workable times:

traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
 1  192.168.1.3 (192.168.1.3)  4.662 ms  3.182 ms  3.034 ms

Can anyone decipher this difference between ping and traceroute for me, and what it might mean for an application that needs to talk to (and from) a host?

+3  A: 

Remember that a "round trip" for ping includes the times for host1->AP->host2->AP->host1, while a "round trip" for traceroute includes only host1->AP->host1. Those ping RTTs are actually pretty good. At my house, they average close to 250 ms and frequently exceed 300 ms for my 3GS.

Ping response times are affected by the kernel's availability. If the CPU is busy when an ICMP request arrives, the request is buffered until the CPU can process it. There are plenty of opportunities for this blocking on a resource-constrained device like the iPhone (or, say, an overburdened router). In addition, iPhone OS will, to some extent, queue packets so it can transmit them in bursts. This keeps the radio from transmitting continuously, which saves power but adds latency, and it would challenge any application that needs low and/or steady latency (e.g., VoIP).

There is currently no public standard for TCP/IP over USB per se (as opposed to IEEE 1394, for which there is). Since USB is a serial link layer, data can in principle be passed over the dock connector using your own protocol or a predefined one (e.g., PPP), via the External Accessory framework. Once an EASession is established, communication happens over ordinary NSInputStream/NSOutputStream objects.
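As an illustration of that last step, here is a minimal sketch of the External Accessory stream setup. The protocol string com.example.midi-bridge is made up for this example (a real accessory declares its own), and the surrounding class is assumed to adopt NSStreamDelegate:

#import <ExternalAccessory/ExternalAccessory.h>

// Hypothetical protocol string; a real accessory declares its own.
static NSString * const kProtocol = @"com.example.midi-bridge";

- (void)openSessionIfAvailable {
    for (EAAccessory *accessory in
         [EAAccessoryManager sharedAccessoryManager].connectedAccessories) {
        if (![accessory.protocolStrings containsObject:kProtocol]) continue;

        EASession *session = [[EASession alloc] initWithAccessory:accessory
                                                      forProtocol:kProtocol];
        // From here on, communication is over ordinary NSStream objects.
        [session.inputStream setDelegate:self];
        [session.inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                       forMode:NSDefaultRunLoopMode];
        [session.inputStream open];

        [session.outputStream setDelegate:self];
        [session.outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                        forMode:NSDefaultRunLoopMode];
        [session.outputStream open];
        break;
    }
}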

Zack
Thanks for that. So which would more closely mirror an app-to-app data send: ping, traceroute, or neither?
Yar
That would depend a lot on how you plan to do "an app-to-app data send": TCP will have different characteristics than UDP, XMPP will have different characteristics than HTTP, etc. I'd worry less about ping and traceroute and just build a spike solution of your intended transport, and see what happens. BTW, ping times to an Android device are on par with what you're seeing for the iPhone.
CommonsWare
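One way to run such a spike: since Objective-C compiles plain C, a minimal UDP round-trip probe can be built on BSD sockets. This is only a sketch under stated assumptions (a UDP echo service already listening on the host; the address 192.168.1.2 and port 9999 are made up for the example, and error handling is omitted):

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Hypothetical host address/port; assumes a UDP echo service there. */
#define HOST "192.168.1.2"
#define PORT 9999

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    inet_pton(AF_INET, HOST, &addr.sin_addr);

    char buf[64];
    for (int i = 0; i < 20; i++) {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        sendto(sock, "ping", 4, 0, (struct sockaddr *)&addr, sizeof(addr));
        recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);  /* blocks for echo */
        gettimeofday(&t1, NULL);
        double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                    (t1.tv_usec - t0.tv_usec) / 1000.0;
        printf("udp rtt %d: %.3f ms\n", i, ms);
        sleep(1);  /* one probe per second, like ping */
    }
    close(sock);
    return 0;
}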
Thanks @CommonsWare, I'll be testing some existing apps next to get an idea. Building my own just to prototype this potential bottleneck is costly with this Objective-C stuff (for those of us who have to learn it).
Yar
There are plenty of live performance apps available for the iPhone, so don't let the latency get you down. Rather than transmitting MIDI, you might consider transmitting compressed PCM data over UDP. Depending on the codec, dropped packets can be masked; this path has been well trodden in the VoIP world. Packet jitter can be managed with an appropriate buffer structure: many VoIP systems, for example, can handle up to 200 ms of packet jitter and over 1000 ms of packet delay. Most humans will probably tolerate delays of less than 300 ms for common applications.
Zack
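To make the jitter-buffer idea concrete, here is a minimal sketch of the usual sequence-numbered ring buffer. The frame size (160 bytes, i.e. 20 ms of 8 kHz mono audio) and the depth of 16 slots are illustrative assumptions, not a particular VoIP implementation:

#include <stdint.h>
#include <string.h>

#define JB_SLOTS 16      /* depth: 16 x 20 ms frames ~= 320 ms of jitter */
#define FRAME_BYTES 160  /* 20 ms of 8 kHz mono, for illustration */

typedef struct {
    uint16_t seq;
    int      filled;
    uint8_t  data[FRAME_BYTES];
} jb_slot;

static jb_slot jb[JB_SLOTS];

/* Store an arriving packet by sequence number; late or duplicate
   packets simply overwrite or miss their slot. */
void jb_put(uint16_t seq, const uint8_t *data, size_t len) {
    jb_slot *s = &jb[seq % JB_SLOTS];
    s->seq = seq;
    s->filled = 1;
    memcpy(s->data, data, len < FRAME_BYTES ? len : FRAME_BYTES);
}

/* Pull the frame the player expects next; returns 0 on a gap, and the
   caller masks the loss (e.g., repeats or interpolates the last frame). */
int jb_get(uint16_t seq, uint8_t *out) {
    jb_slot *s = &jb[seq % JB_SLOTS];
    if (!s->filled || s->seq != seq) return 0;  /* lost or not yet arrived */
    memcpy(out, s->data, FRAME_BYTES);
    s->filled = 0;
    return 1;
}

The playback side calls jb_get at a steady frame rate, which is what absorbs the jitter: packets may arrive in bursts, but consumption stays regular.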
+1  A: 

I do a lot of cellular work with Verizon and AT&T. Ping times to a mobile device have to be taken with the understanding that any initial connection attempt will be slower than normal.

The baseline we see for ping RTT is around 300 ms on average for AT&T; it is even higher for Verizon, at 400 ms to 600 ms.

But the first packet on each carrier has to first find the mobile device, so the first response you get can be really (really) high: 3000 ms to as high as 4500 ms is what I've seen on a network I manage, where a monitoring system regularly connects to 2700 mobile endpoints.

Additionally, any environment with a lot of RF noise will create latency and dropped packets. Even your home can generate plenty of noise to interfere with devices that operate over radio.

This probably isn't helpful, but if you can use an API with better buffering capabilities you might be better off. Or look more closely at the buffering capabilities of the API you're already thinking about using.

I hope you get it working =)

jfgrissom
Thanks, that does help to a certain extent, +1
Yar
+1  A: 

I think this may be the Wi-Fi power-save mode killing you: the phone buffers up packets and sends them out only occasionally. I saw similar behavior over Wi-Fi on an N900 I was playing with.

Notice the strong pattern in the pings you posted: the RTT climbs by roughly 24 ms with each ping and then drops back down. That is likely a beat pattern between the 1-second ping interval and the antenna periodically switching on and off.

Justin L.
I'm marking this as the best answer for now; let's see if anyone else has anything to say. Thanks!
Yar