My Java application receives data over UDP and uses it for an online data mining task. Losing the occasional packet is acceptable, which is what makes UDP a reasonable choice in the first place. The data is also transferred over a LAN, so the physical network should be reasonably reliable. In any case, I have no control over the choice of protocol or the contents of the packets.
Still, I am concerned about packet loss caused by overload or by long processing times in the application itself. I would like to know how often such losses occur and how much data is lost.
Ideally I am looking for a way to monitor packet loss continuously in the production system. But a partial solution would also be welcome.
I realize it is impossible to detect every UDP packet loss (without control over the packet contents). I was thinking of something along the lines of counting packets received by the OS but never reaching the application; or maybe some clever solution inside the application, like a fast reading thread that drops data when its consumer is busy.
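To make the reader-thread idea concrete, here is a minimal sketch of what I have in mind, assuming a port number, queue capacity, and packet size that would need adjusting for the real deployment. A dedicated thread drains the socket as fast as possible and hands packets to a bounded queue; when the mining code falls behind, the non-blocking `offer()` fails and a counter records the drop. This only measures losses inside the application, not drops in the OS buffer or on the wire:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class UdpLossMonitor {

    private static final int PORT = 9876;            // hypothetical port, adjust to the real feed
    private static final int QUEUE_CAPACITY = 10_000; // assumed; tune to tolerable backlog
    private static final int MAX_PACKET_SIZE = 1500;

    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(QUEUE_CAPACITY);
    private final AtomicLong received = new AtomicLong();
    private final AtomicLong dropped = new AtomicLong();

    public void start() throws Exception {
        DatagramSocket socket = new DatagramSocket(PORT);
        socket.setReceiveBufferSize(4 * 1024 * 1024); // request a large OS buffer; the OS may grant less

        Thread reader = new Thread(() -> {
            byte[] buf = new byte[MAX_PACKET_SIZE];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    socket.receive(packet);          // blocks; keep this loop as lean as possible
                    received.incrementAndGet();
                    byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
                    if (!queue.offer(copy)) {        // non-blocking: drop rather than stall the reader
                        dropped.incrementAndGet();
                    }
                } catch (Exception e) {
                    break;                           // socket closed or fatal I/O error
                }
            }
        }, "udp-reader");
        reader.setDaemon(true);
        reader.start();

        // Report the counters every 10 seconds; in production they could be exposed via JMX instead.
        Thread monitor = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(10_000);
                } catch (InterruptedException e) {
                    break;
                }
                long r = received.get(), d = dropped.get();
                System.out.printf("received=%d dropped=%d (%.2f%% lost in application)%n",
                        r, d, r == 0 ? 0.0 : 100.0 * d / r);
            }
        }, "udp-loss-monitor");
        monitor.setDaemon(true);
        monitor.start();
    }

    // The mining code pulls packets from the queue at its own pace.
    public byte[] nextPacket() throws InterruptedException {
        return queue.take();
    }
}
```

The point of the bounded queue is that the drop counter becomes a direct, continuously available measure of application-level loss, even though it says nothing about packets lost before they reach the process.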
We are deploying under Windows Server 2003 or 2008.