I am doing research on network traffic characterization. While processing the collected data (captured by tcpdump and saved to a database), I stumbled over a weird phenomenon with packet (or flow) inter-arrival times:
Inter-arrival times of 35-170µsec are never observed
Of course, without a DAG card (which would do hardware timestamping of the packets), I can't rely on sub-millisecond precision. Nevertheless, I'm searching for a reason why this gap exists in the following cumulative distribution function:
I've also plotted the number of flows seen for each IAT value:
My dataset contains >13 million flows, so it's very unlikely that this gap exists by accident - I'm only searching for the reason.
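For reference, this is roughly how the IATs and the two plots above are derived (a simplified sketch: the real pipeline pulls the timestamps from the database, and the function and variable names here are just placeholders):

```python
# Simplified sketch of how the IATs and the two plots are derived.
# The real pipeline reads the timestamps from the database; here I just
# assume a 1-D array of packet/flow arrival timestamps in seconds.
import numpy as np
import matplotlib.pyplot as plt

def plot_iat_distributions(timestamps_s):
    ts = np.sort(np.asarray(timestamps_s, dtype=np.float64))
    iat_us = np.diff(ts) * 1e6                 # inter-arrival times in µsec

    # empirical CDF of the IATs
    x = np.sort(iat_us)
    cdf = np.arange(1, len(x) + 1) / len(x)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(x, cdf)
    ax1.set_xlabel("IAT [µsec]")
    ax1.set_ylabel("CDF")

    # number of flows seen per IAT value (1 µsec bins up to 1 msec)
    ax2.hist(iat_us, bins=np.arange(0, 1001, 1))
    ax2.set_xlabel("IAT [µsec]")
    ax2.set_ylabel("flows")

    plt.tight_layout()
    plt.show()
```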
Does it have something to do with scheduling? I know the Linux kernel scheduler (it was a Debian machine) runs at 250Hz, so each tick is 4ms, which is more than 20 times larger than even the upper end of my gap (170µsec). Is there any kind of scheduling done by the network card? There are many IATs of 0µsec, so I assume those packets are processed directly one after another. I can imagine that the kind of scheduler tick I'm searching for is about 40µsec, resulting in IATs of 0<x<40µsec; afterwards something other than my capturing runs (for 120µsec = 3 ticks), and I only get IATs >120µsec.
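As a sanity check on that hypothesis, a toy simulation like the one below should reproduce such a gap if the on/off pattern were real (the 40µsec/120µsec split and the arrival rate are made-up assumptions, not measurements):

```python
# Toy model of the hypothesis above: timestamping only happens during a
# 40 µsec "active" slice out of every 160 µsec; packets arriving in the
# remaining 120 µsec are queued and all stamped when the next slice starts.
# The slice lengths and the arrival rate are assumptions, not measurements.
import numpy as np

rng = np.random.default_rng(0)

ACTIVE_US = 40.0        # hypothesized slice in which my capturing runs
OFF_US    = 120.0       # hypothesized time spent on other work (3 ticks)
PERIOD_US = ACTIVE_US + OFF_US

# Poisson arrivals with a mean true IAT of 20 µsec
arrivals = np.cumsum(rng.exponential(20.0, size=50_000))

phase = arrivals % PERIOD_US
stamped = np.where(
    phase < ACTIVE_US,
    arrivals,                        # stamped at arrival time
    arrivals - phase + PERIOD_US     # queued, stamped at next slice start
)
stamped.sort()

iat = np.diff(stamped)
print("IATs of (nearly) 0 µsec (batched packets):", np.sum(iat < 1))
print("IATs in the 40-120 µsec range:            ",
      np.sum((iat > ACTIVE_US) & (iat < OFF_US)))
print("IATs >= 120 µsec:                         ", np.sum(iat >= OFF_US))
```

In this model the batched packets produce the many 0µsec IATs, packets arriving during the active slice produce IATs below 40µsec, and the 40-120µsec range stays empty - which is the kind of gap I'm seeing.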
Do you have any idea how I could explain this gap? Thanks a lot! Steffen