While testing out a UDP multicast server that I've written on Windows 7 Ultimate x64, I came across a most curious thing. Playing music with foobar2000 in the background significantly improved the server's transmission rate yet also incurred minor packet loss. Turning the music off immediately dropped the transmission rate to below acceptable levels but also produced 0 packet loss. (I have a client application which talks to the server and reports back unacknowledged packets)

I am aware of the throttling behavior in Vista and later that is meant to make media and network applications play well together, but I certainly did not expect that playing music would improve network performance, nor that turning it off would degrade network performance so significantly.

What can I do about this from a code standpoint in my server application so that it performs consistently on Vista and up, whether music is playing or not? I would certainly like to avoid having to inform all my clients about how to tweak their registry to get acceptable transmission rates, and I would also like to avoid having them simply "play music" in order to get acceptable transmission rates. The application should "just work", in my opinion.

I'm thinking the solution involves something along the lines of process priorities, MMCSS, or possibly some other obscure Windows API call to get it to do The Right Thing(TM) here.

Also, sorry but creating a reproducible test case is a non-trivial amount of work. The throttling behavior occurs only when the driver for the physical NIC is actively doing work and cannot be reproduced using the loopback interface. One would need a client implementation, a server implementation, and physical network hardware to test with.
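For what it's worth, the MMCSS direction I have in mind would look roughly like the sketch below; the "Pro Audio" task class and the placement are just guesses on my part, and I don't know yet whether this is actually the right knob to turn.

    // Rough sketch only: register the UDP send thread with MMCSS (avrt.h / Avrt.lib).
    // The "Pro Audio" task class is a placeholder; I don't know which class, if any,
    // is appropriate for a network send thread.
    #include <windows.h>
    #include <avrt.h>
    #pragma comment(lib, "Avrt.lib")

    void SendThreadProc()
    {
        DWORD taskIndex = 0;
        HANDLE mmcss = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);

        // ... UDP send loop goes here ...

        if (mmcss != NULL)
            AvRevertMmThreadCharacteristics(mmcss);
    }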

A: 

Foobar2000 has many plugins written by different people, and these may be the cause of your issue. I suggest getting closer to the real reason: try switching the plugins off one by one, repeating your test each time a plugin is disabled.

Hope the idea helps.

Vasiliy Borovyak
I highly doubt the problem is foobar2000 specific. I will try other media players to diagnose the issue further. It *might* have something to do with the USB 2.0 audio interface I use. I will run some tests with the onboard sound card as well.
James Dunne
A: 

This sounds like TCP/IP managing throughput based on its primitive algorithm. The white paper here should give more background: http://www.asperasoft.com/?gclid=CICSzMqD8Z0CFShGagod%5FltSMQ Their product is a UDP-based protocol that works very well.

Mike Trader
I should mention that I use both TCP and UDP within the same application. TCP is only for control information and coordination of all clients; it should not be the bottleneck here. I use UDP purely for data transfer and keep the packets under 1500 bytes. I was originally using UDP for everything, but I ran into serious synchronization issues and was about to re-invent the TCP wheel, so I figured: why not just use TCP?
James Dunne
TCP/IP is the first place I would look for your problem.
Mike Trader
@Mike: Do you think the UDP packet loss on the server side is somehow related to the TCP communication going on? TCP is only used to notify all clients that the next batch of data is about to be sent over UDP. There is also a 'sector complete' message that is sent after a short delay from the server. While a UDP batch transfer is in progress, the application sends no TCP messages back and forth, aside from the normal traffic the OS generates to keep the connection established.
James Dunne
+2  A: 

It has been many years since I wrote network protocol related code, but I'll offer a guess.

I suspect this is an issue of throughput and latency. Playing music introduces I/O contention and adds latency in transmitting the packets. However, the added latency is likely causing the packets to queue and thus be sent in batches, increasing throughput.

To address this in your code, you might try sending the packets in batches yourself. I am assuming that you are sending each packet to the system for transmission as the data becomes ready. Group multiple packets and send them to the system at the same time. Even just a group of two or three packets could make a dramatic difference, especially if you are introducing your own small delay between each system call.

I couldn't find any directly relevant links from a quick search on Google. However, you can see the concept in this discussion of network tuning for Linux or in this FAQ which describes techniques such as batching to improve throughput.
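A rough sketch of the idea (assuming a Winsock UDP socket and a destination address are already set up; the group size and the 1 ms pause are illustrative numbers, not recommendations):

    // Sketch: hand packets to the stack in small groups instead of one at a
    // time, pausing between groups rather than between individual packets.
    // Assumes Winsock is initialized and `packets` holds ready-to-send payloads.
    #include <winsock2.h>
    #include <windows.h>
    #include <string>
    #include <vector>
    #pragma comment(lib, "ws2_32.lib")

    void SendBatched(SOCKET sock, const sockaddr_in& dest,
                     const std::vector<std::string>& packets)
    {
        const size_t groupSize = 4;  // illustrative; tune experimentally

        for (size_t i = 0; i < packets.size(); i += groupSize)
        {
            size_t end = (i + groupSize < packets.size()) ? i + groupSize
                                                          : packets.size();
            // Send a whole group back-to-back...
            for (size_t j = i; j < end; ++j)
            {
                sendto(sock, packets[j].data(),
                       static_cast<int>(packets[j].size()), 0,
                       reinterpret_cast<const sockaddr*>(&dest),
                       static_cast<int>(sizeof(dest)));
            }
            // ...then yield briefly between groups instead of between packets.
            Sleep(1);
        }
    }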

g .
I am already batching packets together in groups of 1,024. The packets are about 1400 bytes each. I get about 50-60 packets negatively acknowledged by the client out of the 1,024. When I turn the media player off on the server machine, the NAK count drops immediately to 0 and better throughput is achieved. This seems to indicate that the server is responsible for the packet loss. I'm wondering what I can do from the server side, if anything, to automatically compensate for this I/O contention.
James Dunne
Your comment seems to indicate the opposite of your question. Are you really trying to send 1024 packets every 50-100 µs? That's insane throughput. And is the problem really the throughput or the data loss?
g .
Oh no no. The 50-100 microsecond delay is the delay between sending individual packets within the 1024 packet sector. Also I don't believe I mentioned my 50-100 microsecond delay in this question. Are you cross-referencing my SO questions? :)
James Dunne
+1  A: 

What you observe is the side effect of your media player setting the clock resolution of your machine to 1 ms.

This happens only during playback.

The side effect is that your app gets smaller timeslices, which improves its behavior: previously a lot of CPU was probably being stolen from your app, and with longer timeslices it was stolen for longer stretches at a time.

To test it, you can simply set the timer resolution to 1 ms within your app and compare performance without media playing.

It should be the same as with no clock-resolution setting but with media playing.
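A minimal sketch of what I mean, using timeBeginPeriod from winmm.lib; this just requests the same 1 ms resolution the media player requests while it is playing:

    // Sketch: raise the system timer resolution to 1 ms for the lifetime of
    // the server, mirroring what the media player does during playback.
    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    int main()
    {
        timeBeginPeriod(1);   // request 1 ms timer resolution

        // ... run the server ...

        timeEndPeriod(1);     // restore the previous resolution on exit
        return 0;
    }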

Bobb