One way is to send the data using UDP instead of TCP.
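For example, here's a minimal Winsock sketch of opening a UDP socket and sending one datagram (the address and port are placeholders, not values from your setup):

```cpp
// Minimal UDP sender sketch: SOCK_DGRAM/IPPROTO_UDP instead of
// SOCK_STREAM/IPPROTO_TCP. Address and port below are placeholders.
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5004);                       // placeholder port
    inet_pton(AF_INET, "192.0.2.10", &dest.sin_addr);  // placeholder address

    const char payload[] = "frame data";  // stand-in for encoded video bytes
    sendto(s, payload, sizeof(payload), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

    closesocket(s);
    WSACleanup();
    return 0;
}
```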
If you do, some of the UDP packets may be lost (dropped by the network), so your code will need a way (e.g. a sequence number in the packet headers) to detect lost packets.
If a TCP packet is lost, TCP will retransmit it, which introduces a delay. For your application, it may be better to simply do without a lost packet instead of retransmitting it: skip the affected video frame (or display only the partial frame) and go on to display the next frame.
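As a sketch of that sequence-number idea (the header layout and the `discardPartialFrame`/`appendToFrame` helpers are hypothetical, just to show the shape of the receiver logic):

```cpp
// Sequence-numbered datagrams: the sender stamps each packet with an
// incrementing number; the receiver detects gaps and skips ahead rather
// than waiting for a retransmit.
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct PacketHeader {
    uint32_t sequence;  // incremented by the sender for every datagram
    uint32_t frameId;   // which video frame this packet belongs to
};
#pragma pack(pop)

// Hypothetical hand-offs to the rest of the player:
void discardPartialFrame() { /* drop whatever was buffered for this frame */ }
void appendToFrame(uint32_t /*frameId*/, const char* /*data*/, int /*len*/) {}

// Receiver side: called for each datagram that actually arrives.
void onDatagram(const char* buf, int len, uint32_t& expectedSeq) {
    if (len < (int)sizeof(PacketHeader)) return;  // runt packet, ignore

    PacketHeader hdr;
    std::memcpy(&hdr, buf, sizeof(hdr));

    if (hdr.sequence != expectedSeq) {
        // A gap means one or more datagrams were lost: don't wait for a
        // retransmit, just abandon the partly received frame and resync.
        discardPartialFrame();
    }
    expectedSeq = hdr.sequence + 1;

    appendToFrame(hdr.frameId, buf + sizeof(hdr), len - (int)sizeof(hdr));
}
```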
It depends on the application:
- Are you streaming canned/prerecorded/non-real-time video, where you want to receive/display every frame even if some of them cause a delay?
- Are you streaming live video, where you want to display the current frame in near real time (and, even if some previous frames were lost, you don't want to delay while they're retransmitted)?
In terms of Winsock architecture, the TransmitFile or TransmitPackets APIs are quite efficient: they execute in the kernel, instead of causing round trips between your user-mode code and O/S kernel-mode code as each buffer is transmitted.
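For instance, a sketch of sending a whole file with TransmitFile (assuming `sock` is an already-connected socket):

```cpp
// TransmitFile sketch: the kernel pushes the file's contents onto the
// socket without a user/kernel round trip per buffer.
#include <winsock2.h>
#include <mswsock.h>
#include <windows.h>
#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "mswsock.lib")

bool sendWholeFile(SOCKET sock, const wchar_t* path) {
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
    if (file == INVALID_HANDLE_VALUE) return false;

    // 0 bytes-to-write means "send the entire file"; 0 bytes-per-send lets
    // the stack pick its own block size.
    BOOL ok = TransmitFile(sock, file, 0, 0, nullptr, nullptr, 0);

    CloseHandle(file);
    return ok != FALSE;
}
```

Note that TransmitFile needs a connection-oriented (i.e. TCP) socket, so it fits the canned-video case rather than the UDP approach above.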
"minumum delay" aim is achieved
You may actually want some delay, to avoid jitter: a small fixed delay (e.g. 150 msec) is better than a delay that varies from 2 to 120 msec. See http://www.google.ca/search?hl=en&q=jitter+network
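As an illustration, here's a sketch of such a fixed-delay playout (jitter) buffer; the 150 msec constant and the `playPacket` hand-off are assumptions, and a real implementation would schedule against the sender's timestamps rather than local arrival time:

```cpp
// Fixed-delay playout buffer: each packet is held for a constant 150 msec
// after arrival, so variable network delay is smoothed into a fixed one.
#include <chrono>
#include <cstdint>
#include <map>

using Clock = std::chrono::steady_clock;
constexpr auto kPlayoutDelay = std::chrono::milliseconds(150);

struct BufferedPacket {
    Clock::time_point releaseAt;  // when this packet may be played
    // ... payload would live here ...
};

std::map<uint32_t, BufferedPacket> buffer;  // keyed by sequence number

// On arrival: schedule the packet a fixed 150 msec into the future.
void onPacket(uint32_t seq) {
    buffer[seq] = BufferedPacket{Clock::now() + kPlayoutDelay};
}

// Called periodically: hand everything whose deadline passed to the decoder.
void drainReady() {
    auto now = Clock::now();
    for (auto it = buffer.begin(); it != buffer.end();) {
        if (it->second.releaseAt <= now) {
            // playPacket(it->first);  // hypothetical decoder hand-off
            it = buffer.erase(it);
        } else {
            ++it;
        }
    }
}
```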