I was under the impression that UDP's unreliability is a property of the physical layer, but it seems that it isn't:
I am trying to send a message over UDP, divided into a sequence of packets. Message identification and re-ordering are handled implicitly.
I tested this method with two apps running on the same computer, and expected it to work smoothly. However, even though the data transfer was entirely between two programs on the same machine, there were packet losses, and quite frequent ones too. The losses also seem to be quite random: sometimes the whole message gets through, sometimes not.
Now, the fact that the losses occur even on the same machine makes me wonder whether I am doing it right.
Originally, I sent all the pieces of the message asynchronously in a single shot, without waiting for the completion of one piece before sending the next.
Then I tried sending the next piece of the message from within the completion routine of the previous one. That improved the packet-loss ratio, but didn't eliminate the losses altogether.
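Roughly what the chained version looks like, as a simplified and untested sketch (assume WSAStartup and the socket are already set up; the globals such as g_pieces are placeholders for my real state):

    // Simplified sketch of chaining the sends via the completion routine.
    // Assumes WSAStartup has been called and g_sock / g_dest / g_pieces are set up.
    #include <winsock2.h>

    static SOCKET        g_sock;
    static sockaddr_in   g_dest;
    static WSABUF        g_pieces[64];   // one WSABUF per piece of the message
    static DWORD         g_pieceCount;
    static DWORD         g_current;
    static WSAOVERLAPPED g_ov;

    static void CALLBACK OnSendDone(DWORD err, DWORD /*bytes*/,
                                    LPWSAOVERLAPPED ov, DWORD /*flags*/)
    {
        if (err != 0) return;                    // real code would handle the error
        if (++g_current >= g_pieceCount) return; // whole message sent

        ZeroMemory(ov, sizeof(*ov));
        WSASendTo(g_sock, &g_pieces[g_current], 1, NULL, 0,
                  (sockaddr*)&g_dest, sizeof(g_dest), ov, OnSendDone);
    }

    static void SendMessagePieces()
    {
        g_current = 0;
        ZeroMemory(&g_ov, sizeof(g_ov));
        WSASendTo(g_sock, &g_pieces[0], 1, NULL, 0,
                  (sockaddr*)&g_dest, sizeof(g_dest), &g_ov, OnSendDone);

        // The thread must sit in an alertable wait for the completion routines to run.
        while (g_current < g_pieceCount)
            SleepEx(INFINITE, TRUE);
    }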
If I add a pause (Sleep(...)) between the pieces, it works 100% of the time.
EDIT: As the answers suggested, the packets are simply sent too fast, and the OS does minimal buffering. That's logical.
So, if I would like to avoid adding acknowledgement and re-transmission to the system (I could just use TCP then), what should I do? What's the best way to improve the packet-loss ratio without dropping the data rate lower than it needs to be?
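One mitigation I'm considering, assuming the bottleneck really is the size of the OS socket buffer, is enlarging it with setsockopt. A minimal sketch (the 1 MiB value is an arbitrary guess, and sock stands for the already-created UDP socket):

    // Minimal sketch: enlarge the socket buffers so short bursts of datagrams fit.
    // 'sock' is assumed to be an already-created UDP socket.
    int bufSize = 1 << 20;  // 1 MiB, an arbitrary guess rather than a measured value
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char*)&bufSize, sizeof(bufSize)); // receiving side
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (const char*)&bufSize, sizeof(bufSize)); // sending side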
EDIT 2: It occurred to me that the problem might not be buffer overflow so much as buffer unavailability. I am using asynchronous WSARecvFrom to receive, which takes a buffer that, as I understand it, overrides the default OS buffer. When a datagram is received, it is fed into that buffer and the completion routine is called whether the buffer is full or not.
At that point, there is no buffer at all to handle incoming data until WSARecvFrom is called again from within the completion routine.
The question is whether there is a way to create some sort of buffer pool, so that data can be buffered while a different buffer is being processed.
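What I have in mind is roughly the following: keep several WSARecvFrom calls pending at the same time, each with its own buffer and OVERLAPPED, and re-post each one from its completion routine, so a datagram always has somewhere to land while another buffer is being processed. An untested sketch (the counts, sizes and names are placeholders, and WSAStartup/socket setup is assumed):

    // Sketch: several overlapped WSARecvFrom calls pending at once, each with its
    // own buffer, so the OS always has somewhere to deliver the next datagram.
    #include <winsock2.h>

    #define PENDING_RECVS 8
    #define DATAGRAM_MAX  1500

    struct RecvSlot {
        WSAOVERLAPPED ov;        // must stay first, see the cast in OnRecvDone
        WSABUF        wsaBuf;
        char          data[DATAGRAM_MAX];
        sockaddr_in   from;
        int           fromLen;
        DWORD         flags;
    };

    static SOCKET   g_sock;
    static RecvSlot g_slots[PENDING_RECVS];

    static void PostRecv(RecvSlot* slot);

    static void CALLBACK OnRecvDone(DWORD err, DWORD bytes,
                                    LPWSAOVERLAPPED ov, DWORD /*flags*/)
    {
        RecvSlot* slot = (RecvSlot*)ov;  // OVERLAPPED is the first member of RecvSlot
        if (err == 0) {
            // process 'bytes' bytes from slot->data here (reassembly, etc.)
        }
        PostRecv(slot);                  // hand the buffer straight back to the OS
    }

    static void PostRecv(RecvSlot* slot)
    {
        ZeroMemory(&slot->ov, sizeof(slot->ov));
        slot->wsaBuf.buf = slot->data;
        slot->wsaBuf.len = DATAGRAM_MAX;
        slot->fromLen    = sizeof(slot->from);
        slot->flags      = 0;
        WSARecvFrom(g_sock, &slot->wsaBuf, 1, NULL, &slot->flags,
                    (sockaddr*)&slot->from, &slot->fromLen, &slot->ov, OnRecvDone);
    }

    static void StartReceiving()
    {
        for (int i = 0; i < PENDING_RECVS; ++i)
            PostRecv(&g_slots[i]);
        // The thread must stay in an alertable wait for the completion routines to run.
        for (;;)
            SleepEx(INFINITE, TRUE);
    }

Is something along these lines a sensible way to do it, or is there a more standard approach to pooling receive buffers?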