views: 98
answers: 3

We make heavy use of multicast messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packets. We are more concerned with latency than throughput. The machines are all modern, multi-core boxes (at least four cores, generally eight, 16 if you count hyperthreading), always with a load of 2.0 or less, usually with a load below 1.0. The networking hardware is also under 50% capacity.

The delays we see look like queueing delays: the packets will quickly start increasing in latency until it looks like they jam up, then return to normal.

The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it onto a queue. In a separate thread, the queue is processed and the difference between the sending and receiving timestamps is analyzed. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.)
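Roughly, the pattern looks like this (a minimal sketch only; socket setup, group membership, error handling and our internal queues are omitted, and the message layout is just a placeholder):

#include <sys/socket.h>
#include <sys/time.h>

struct msg {
    struct timeval sent;      /* stamped immediately before send() */
    char payload[64];         /* placeholder payload */
};

/* Sending thread: pull a message off the internal queue (not shown),
   stamp it, then hand it to the kernel. */
static void send_one(int sock, struct msg *m)
{
    gettimeofday(&m->sent, NULL);
    send(sock, m, sizeof(*m), 0);    /* sock already connected to the group */
}

/* Receiving side: stamp on arrival and push onto the internal queue
   (not shown); a separate thread computes recv_time - m->sent. */
static void recv_one(int sock, struct msg *m, struct timeval *recv_time)
{
    recv(sock, m, sizeof(*m), 0);
    gettimeofday(recv_time, NULL);
}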

We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it.

For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).

+3  A: 

This is a great question. On CentOS, like most flavors of *nix, there is a UDP receive/send buffer for every multicast socket. The size of this buffer is controlled by sysctl.conf; you can view the size of your buffers by calling /sbin/sysctl -a.

The items below show my default and max UDP receive/send sizes in bytes. The larger these numbers, the more buffering, and therefore latency, the network/kernel can introduce if your application is too slow in consuming the data. If you have built in good tolerance for data loss, you can make these buffers very tiny and you will not see the latency build-up and recovery you described above. The trade-off is data loss as the buffer overflows - something you may be seeing already.

[~]$ /sbin/sysctl -a | grep mem
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

In most cases you need to set the default equal to your max, unless you are controlling this when you create your socket.
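If you do control it per socket, the call looks roughly like this (a minimal sketch; the 64 KiB figure is purely illustrative, and the kernel will cap whatever you request at net.core.rmem_max / wmem_max):

#include <sys/socket.h>

/* Request smaller per-socket buffers so the kernel cannot queue (and
   therefore delay) as much data. Smaller buffers trade latency build-up
   for drops under bursts. */
static int shrink_socket_buffers(int sock)
{
    int bytes = 64 * 1024;    /* illustrative value, tune for your traffic */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
        return -1;
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) != 0)
        return -1;
    return 0;
}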

The last thing you can do (depending on your kernel version) is view the UDP stats for the PID of your process, or at the very least for the box overall.

cat /proc/net/snmp | grep -i Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams
Udp: 81658157063 145 616548928 3896986

cat /proc/PID/net/snmp | grep -i Udp
Udp: InDatagrams NoPorts InErrors OutDatagrams
Udp: 81658157063 145 616548928 3896986

If it wasn't clear from my post, the latency is due to your application not consuming the data fast enough and forcing the kernel to buffer traffic in the structure above. The network, kernel, and even your network card's ring buffers can play a role in latency, but all those items typically only add a few milliseconds.

Let me know your thoughts and I can give you more information on where to look in your app to squeeze some more performance.

avirtuos
So is it fair to say that on the sending side, there is no in-kernel buffering? And on the receiving side, there is no kernel buffering, UNLESS the data isn't consumed quickly enough? Unfortunately, I don't have per-PID UDP stats (I'm not sure if my 2.6.9-78.ELlargesmp kernel isn't new enough, or simply not configured for this). Anyway, I'm definitely interested in any information regarding squeezing more performance out of our applications. Also note that we are actually concerned with "only a few milliseconds" - this is effectively a real-time application. Thanks!
Matt
Your summation is nearly 100% in line with my understanding of the multicast stack in Linux. The only difference is that queueing on the 'send' side can happen regardless of how fast you consume the data on the other end. Send-side queuing is the result of insufficient network speed, a poor-quality network card, or general performance issues (load, memory, etc.). Send-side queuing is _not_ very common. And yes, this is all on a scale of milliseconds or more.
avirtuos
One more thing: if your network card is using Intel e1000 drivers on the send side, you can see bursts of 0.1 ms latency because of the way the driver handles hardware interrupts.
avirtuos
+1  A: 

Packets can queue up in the send and receive side kernel, the NIC and the networking infrastructure. You will find a plethora of items you can test and tweak.

For the NIC you can usually find interrupt coalescing parameters - how long the NIC will wait before notifying the kernel or sending to the wire whilst waiting to batch packets.
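(On Linux, interrupt coalescing settings can usually be inspected with ethtool -c <interface> and, where the driver supports it, adjusted with ethtool -C <interface> rx-usecs ...; lowering the coalescing delay reduces latency at the cost of more interrupts and CPU load.)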

For Linux you have the send and receive "buffers"; the larger they are, the more likely you are to experience higher latency as packets get handled in batched operations.

For the architecture and Linux version you have to be aware of how expensive context switches are and whether locks or pre-emptive scheduling are enabled. Consider minimizing the number of applications running, and using process affinity to lock processes to particular cores.
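Pinning can be done externally with taskset or from inside the process; a minimal sketch (the core number here is arbitrary, pick an otherwise-idle core):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);    /* core 2 is just an example */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {    /* 0 = calling process */
        perror("sched_setaffinity");
        return 1;
    }
    /* ... run the latency-sensitive send/receive loops here ... */
    return 0;
}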

Don't forget timing: the Linux kernel version you are using has pretty terrible accuracy for the gettimeofday() clock (2-4 ms) and it is quite an expensive call. Consider alternatives such as reading the core TSC or an external HPET device.
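For illustration, reading the TSC on x86 looks roughly like this; note that the raw counter is per-core, ticks at the CPU frequency, and needs calibration (and a stable, synchronized TSC across cores) before it can stand in for gettimeofday():

#include <stdint.h>

/* Read the x86 time-stamp counter. Returns raw CPU ticks, not wall-clock time. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}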

Diagram from Intel (image not preserved).

Steve-o
+1  A: 

If you decide you need to capture packets in the production environment, it may be worth looking at using monitor ports on your switches and capturing the packets with non-production machines. That will also allow you to capture the packets at multiple points across the transmission path and compare what you're seeing.

Vatine
Plus consider one of the many consulting houses or monitoring specialists with dedicated hardware appliances, like TS-Associates' TipOff: http://www.ts-associates.com/view/tipoff
Steve-o