I wrote a C++ application (running on Linux) that serves an RTP stream of about 400 kbps. To most destinations this works fine, but some destinations experience packet loss. The problematic destinations seem to have a slower connection in common, but it should still be plenty fast enough for the stream I'm sending.
Since these destinations are able to receive similar RTP streams from other applications without packet loss, my own application might be at fault.
I already verified a few things:

- in a tcpdump, I see all RTP packets going out on the sending machine
- there is a UDP send buffer in place (I tried sizes between 64 KB and 300 KB; see the sketch below)
- the RTP packets mostly stay below 1400 bytes to avoid fragmentation
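For reference, this is roughly how the send buffer is sized in my code; it's a minimal sketch with the socket setup simplified for this question, not the exact production code:

```cpp
#include <cstdio>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    // Request a 300 KB send buffer (I tried values between 64 KB and 300 KB).
    int requested = 300 * 1024;
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
    }

    // Read the value back: the kernel doubles it to account for bookkeeping
    // overhead, and the request is capped by net.core.wmem_max.
    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &actual, &len) == 0) {
        std::printf("SO_SNDBUF: requested %d, kernel reports %d\n", requested, actual);
    }

    close(sock);
    return 0;
}
```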
What can a sending application do to minimize the possibility of packet loss, and what would be the best way to debug such a situation?