I am doing research on network traffic characterization. While processing collected data (captured by tcpdump and saved to a database), I stumbled over a weird phenomenon with packet (or flow) inter-arrival times:

Inter-arrival times of 35-170µsec are never observed

Of course, without a DAG card (which would do hardware timestamping of the packets), I can't rely on precision below the millisecond range. Nevertheless, I'm searching for a reason why this gap exists in the following cumulative distribution function: [figure: CDF of flow inter-arrival times]

I've also plotted the number of flows seen with a specific IAT: [figure: number of flows per inter-arrival time]

My data set contains more than 13 million flows, so it's very unlikely that this gap exists by accident; I'm only searching for the reason.
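
For reference, here is a minimal sketch in Python of how such a distribution can be computed, assuming the flow start timestamps have already been exported from the database into a text file; the file name and format are hypothetical:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical input: one flow start timestamp (in seconds) per line.
    timestamps = np.sort(np.loadtxt("flow_start_times.txt"))

    # Inter-arrival times in microseconds.
    iats = np.diff(timestamps) * 1e6

    # Empirical CDF: sorted values plotted against their rank.
    x = np.sort(iats)
    y = np.arange(1, len(x) + 1) / len(x)
    plt.step(x, y, where="post")
    plt.xscale("log")
    plt.xlabel("flow inter-arrival time [µs]")
    plt.ylabel("P(IAT <= x)")
    plt.show()

    # A gap like 35-170 µs shows up as a flat segment of the CDF.
    print("IATs in 35-170 µs:", np.count_nonzero((iats > 35) & (iats < 170)))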

Does it have something to do with scheduling? I know the Linux kernel scheduler (it was a Debian machine) uses a tick frequency of 250 Hz, so each tick is 4 ms, which is larger than my gap of 35-170 µsec by a factor of more than 20. Is there any kind of scheduling done by the network card? There are many IATs of 0 µsec, so I assume those packets are processed directly after each other. I can imagine that the kind of scheduler tick I'm searching for is about 40 µsec, resulting in IATs of 0 < x < 40 µsec; afterwards, something other than my capturing runs for 120 µsec (= 3 ticks), and I only get IATs > 120 µsec.
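
To make that speculation concrete, here is a small simulation sketch in Python; the 40 µs capture window, the 120 µs of other work, and the traffic rate are all assumptions taken from the paragraph above, not measured values. Packets arriving while the capture is busy are assumed to be buffered and timestamped when the next capture window starts:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed parameters from the speculation above (not measured):
    ACTIVE_US = 40.0       # capture runs for 40 µs ...
    PERIOD_US = 160.0      # ... then 120 µs (3 ticks) of other work
    DURATION_US = 10_000_000

    # Poisson packet arrivals, mean rate one packet per 20 µs (assumed).
    n = rng.poisson(DURATION_US / 20.0)
    arrivals = np.sort(rng.uniform(0, DURATION_US, n))

    # Packets arriving in the quiet part of a cycle are buffered and
    # timestamped at the start of the next capture window.
    phase = arrivals % PERIOD_US
    next_window = (arrivals // PERIOD_US + 1) * PERIOD_US
    observed = np.where(phase < ACTIVE_US, arrivals, next_window)

    iats = np.diff(np.sort(observed))
    in_gap = np.count_nonzero((iats > ACTIVE_US) & (iats < PERIOD_US - ACTIVE_US))
    print("IATs in the 40-120 µs band:", in_gap)  # expected: 0

Under this model, observed IATs are either below 40 µs (packets within one window, or a buffered batch stamped at a window start, which would also explain the many 0 µs values) or above 120 µs (packets in different windows); the band in between stays empty.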

Do you have a clue how I could explain this gap? Thanks a lot! Steffen

A: 

Not sure really, but I could imagine the card doing some kind of book-keeping itself at a certain tick rate. Also, how does the range 35-170 µs relate to packet length?

Amigable Clark Kant
A colleague of mine just had a theory that it could be something like 120 µsec filling the buffer and 40 µsec reading the buffer... I can't see a relation between packet size and IAT: IATs of 1-3 µsec also happen for 1500-byte packets, and >200 µsec is also observed for tiny packets.
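
For completeness, a sketch in Python of how such a size/IAT relation could be checked, assuming per-packet timestamps and sizes can be exported from the capture database (the file names are hypothetical):

    import numpy as np

    # Hypothetical inputs: per-packet timestamps (s) and sizes (bytes).
    ts = np.loadtxt("packet_times.txt")
    sizes = np.loadtxt("packet_sizes.txt")

    order = np.argsort(ts)
    iats = np.diff(ts[order]) * 1e6   # µs
    sz = sizes[order][1:]             # size of the later packet of each pair

    # Correlation between a packet's size and the IAT preceding it.
    print("corr(size, IAT):", np.corrcoef(sz, iats)[0, 1])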
StephenKing
Interesting, does he have any data on that or is he just making things up like I do? :-)
Amigable Clark Kant
No, I also only showed him the graphics. Of course, it would be good to have a proven reason for this behavior, but it would also be sufficient to have some reasonable assumptions about what *could* cause it.
StephenKing
A: 

This is just a hypothesis (aka a WAG), but perhaps 170 µs is the minimum time between consecutive interrupts from the NIC (due to the NIC hardware, the DMA controller, the interrupt controller, the CPU, or some combination of these).

The packets with inter-arrival times of <35 µs would correspond to multiple packets received in one interrupt (with different processing times, depending on size and protocol). The 35 µs itself would correspond to the maximum number of packets that can be received in one interrupt (due to the size of the NIC buffers), with worst-case processing times.
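
To see how this hypothesis would produce exactly such a forbidden band, here is a small simulation sketch in Python; the interrupt spacing, the traffic rate, and the per-packet processing jitter are invented parameters chosen only to reproduce the shape of the gap, not values from any real driver:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented parameters (illustrative only):
    IRQ_GAP_US = 200.0    # at most one interrupt per 200 µs
    JITTER_US = 30.0      # spread of per-packet software processing time
    arrivals = np.cumsum(rng.exponential(60.0, 500_000))  # Poisson traffic

    observed = []
    last_irq = -1e12
    pending = 0
    for t in arrivals:
        pending += 1
        if t - last_irq >= IRQ_GAP_US:
            # Interrupt fires: every buffered packet is timestamped "now",
            # smeared only by its individual processing time.
            last_irq = t
            observed.extend(t + rng.uniform(0.0, JITTER_US, pending))
            pending = 0

    iats = np.diff(np.sort(np.asarray(observed)))
    print("IATs below 35 µs:  ", np.count_nonzero(iats < 35))
    print("IATs in 35-170 µs: ", np.count_nonzero((iats > 35) & (iats < 170)))
    print("IATs above 170 µs: ", np.count_nonzero(iats > 170))

Every IAT ends up either below the processing jitter (packets sharing one interrupt) or above the interrupt spacing minus that jitter (packets from consecutive interrupts), leaving the band in between empty.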

caf
Thanks, I've searched for the PCI interrupt frequency; one thing I found was about an Intel PRO/1000 XT: "The large intercept of ~168 µs suggests that the driver enables interrupt coalescence, which was confirmed by the throughput tests that showed one interrupt for approximately every 33 (~ 400 µs) packets sent and one interrupt for every 10 packets received (~ 120 µs)." http://datatag.web.cern.ch/datatag/pfldnet2003/papers/hughes-jones.pdf
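
As a quick plausibility check of the quoted numbers (plain arithmetic, nothing measured here): both interrupt spacings work out to roughly 12 µs per packet, which is the wire time of a full-size Ethernet frame at 1 Gbit/s, and the ~168 µs intercept lines up with the upper edge of the 35-170 µs gap:

    # Plausibility check of the numbers quoted from the paper above.
    print("per packet, sent:", 400 / 33, "µs")   # ~12.1 µs
    print("per packet, recv:", 120 / 10, "µs")   # 12.0 µs

    # Wire time of a full-size Ethernet frame at gigabit speed
    # (1500 B payload + 38 B header/FCS/preamble/inter-frame gap = 1538 B):
    print("1538-byte frame @ 1 Gbit/s:", 1538 * 8 / 1e9 * 1e6, "µs")  # ~12.3 µs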
StephenKing
... and I also used an Intel PRO/1000 card (whatever the exact model), so some kind of interrupt handling sounds like a good explanation for this behavior.
StephenKing