We recently completed an analysis of multicast sending performance. Happily, Java and C performed almost identically as we tested different traffic sending rates on Windows and Solaris.
However, we noticed that the time to send a multicast message increases as the time between sends increases. The more frequently we call send, the less time it takes to complete the send call.
The application lets us control how long we wait between send calls; below you can see the send time increasing as the delay between packets goes up (a sketch of the timing loop follows the table). When sending 1000 packets/second (1 ms wait time), it only takes about 13 microseconds to complete the send call. At 1 packet/second (1000 ms wait time), that time climbs past 20 microseconds.
Wait time (ms)   Time to send (µs)
0                8.67
1                12.97
10               13.06
100              18.03
1000             20.82
10000            57.20
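In outline, the timing loop is just a sketch like the one below: time each send() call with System.nanoTime(), then sleep for the configured wait. The multicast group, port, payload size, and iteration count shown here are placeholders rather than the actual test parameters; the real code is linked at the bottom of the post.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class MulticastSendTimer {
        public static void main(String[] args) throws Exception {
            long waitMs = Long.parseLong(args[0]);   // delay between sends, in ms
            int iterations = 10000;                  // placeholder iteration count
            byte[] payload = new byte[100];          // placeholder payload size

            // Placeholder multicast group and port, not the ones from our tests.
            InetAddress group = InetAddress.getByName("239.1.2.3");
            DatagramSocket socket = new DatagramSocket();
            DatagramPacket packet =
                    new DatagramPacket(payload, payload.length, group, 5000);

            long totalNanos = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                socket.send(packet);                 // time only the send() call
                totalNanos += System.nanoTime() - start;
                if (waitMs > 0) {
                    Thread.sleep(waitMs);            // the configurable wait between sends
                }
            }
            System.out.printf("average send time: %.2f us%n",
                    totalNanos / (iterations * 1000.0));
            socket.close();
        }
    }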
We see this phenomenon in both Java and C, and on both Windows and Solaris. We’re testing on a Dell 1950 server with an Intel Pro 1000 dual-port network card. Micro-benchmarking is hard, especially in Java, but we don’t think this is related to JIT compilation or GC.
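As a sanity check on the JIT/GC question, the timed loop can be run with compilation and GC logging turned on (the class name here is the placeholder from the sketch above):

    java -verbose:gc -XX:+PrintCompilation MulticastSendTimer 1000

Any compiler or collector activity during the timed portion would show up interleaved with the program's output.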
The Java code and the command line we’re using for the tests are at: http://www.moneyandsoftware.com/2009/09/18/multicast-send-performance/