views:

170

answers:

5

I am developing a Windows proxy program in which two TCP sockets, connected through different adapters, are bridged by my program. That is, my program reads from one socket and writes to the other, and vice versa. Each socket is handled by its own thread. When one socket reads data, it is queued for the other socket to write. The problem I have is the case where one link runs at 100Mb and the other runs at 10Mb. I read data from the 100Mb link faster than I can write it to the 10Mb link. How can I "slow down" the faster connection so that it is essentially running at the slower link speed? Changing the faster link to a slower speed is not an option. --Thanks

+8  A: 

Create a fixed-length queue between the reading and writing threads. Block on enqueue when the queue is full and on dequeue when it's empty. A regular semaphore or a mutex/condition variable should work. Play with the queue size so the slower thread is always busy.
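A minimal sketch of such a bounded queue in C++ (class and member names are hypothetical), using one mutex and two condition variables. The reader thread blocks in push() when the queue is full; the writer thread blocks in pop() when it is empty:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>
#include <utility>

// Fixed-capacity blocking queue between the reader and writer threads.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    // Called by the reading thread; blocks while the queue is full.
    void push(std::string buf) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push_back(std::move(buf));
        not_empty_.notify_one();
    }

    // Called by the writing thread; blocks while the queue is empty.
    std::string pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        std::string buf = std::move(items_.front());
        items_.pop_front();
        not_full_.notify_one();
        return buf;
    }

private:
    std::size_t capacity_;
    std::deque<std::string> items_;
    std::mutex mutex_;
    std::condition_variable not_full_, not_empty_;
};
```

Because the reader blocks in push() once the queue fills, it stops draining the fast socket, and TCP's own flow control then throttles the sender on the 100Mb link.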

Nikolai N Fetissov
+6  A: 

If this is a problem, then you're writing your program incorrectly.

You can't put more than 10mbps on a 10mbps link, so your thread that is writing on the slower link should start to block as you write. So as long as your thread uses the same size read buffer as write buffer, the thread should only consume data as quickly as it can throw it back out the 10mbps pipe. Any flow control needed to keep the remote sender from putting more than 10mbps into the 100mbps pipe to you will be taken care of automatically by the TCP protocol.

So it just shouldn't be an issue as long as your read and write buffers are the same size in that thread (or any thread).
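The same-buffer idea can be sketched as a relay loop. Here recv()/send() are abstracted behind callables (hypothetical seams, so the loop can be shown without real sockets); with actual blocking sockets the same structure applies, and a slow send stalls the next read:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <functional>
#include <string>

// One relay direction. readFn/sendFn stand in for blocking recv()/send()
// on the fast and slow sockets. Because the same buffer is reused, the
// loop can never read more than it has already written out: a blocking
// send stalls the next read, the TCP receive window fills, and the
// sender on the fast link is throttled automatically.
void relay(const std::function<int(char*, int)>& readFn,
           const std::function<int(const char*, int)>& sendFn) {
    char buf[4096];
    for (;;) {
        int n = readFn(buf, sizeof buf);
        if (n <= 0) break;                 // EOF or error on the fast side
        int sent = 0;
        while (sent < n) {                 // send may accept partial writes
            int m = sendFn(buf + sent, n - sent);
            if (m <= 0) return;            // error on the slow side
            sent += m;
        }
    }
}
```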

Southern Hospitality
Hmm, I think this only works if reads and writes are done on the same thread.
Nikolai N Fetissov
@Nikolai, why? As long as you don't overflow your buffers, you'll be fine.
Carl Norum
I guess this stems from the fact that I allocate a buffer, read into it and queue it for the write thread. And then repeat this process. The write thread dequeues, writes and frees the buffer.
meg18019
As described, the OP uses a queue between two threads with two different socket descriptors (in/out). The 'in'-thread gets data faster than the 'out'-thread can write it out. So without internal flow control, the queue, unless bounded as I suggested, would overflow available memory (you don't know how many times I've seen this in 'production-grade' applications :) Mine is just one possible solution. Yours works too, but it needs a control dependency between reads and writes.
Nikolai N Fetissov
Probably pointless to use a second thread anyway. It's going to be blocked on the 10mbps side so much that a read of the 100mbps TCP buffer is going to be irrelevant in the scheme of things.
Duck
@Duck, yes! Totally pointless. Single thread handling both sides is easy and would let TCP handle the flow control.
Nikolai N Fetissov
Unfortunately, I cannot run this in the same thread. One link is running through the Windows TCP stack and the other through a trial stack. So all socket calls are handled independently by each stack.
meg18019
It's easy to put flow control in the queue. Just keep a limited number of buffers on free and used lists (queues). When you need a buffer, take it from the free list, fill it, and put it on the used queue. When you empty a buffer, move it back to the free list. After a while, all your buffers will be on the used queue and you'll block trying to take one off the free queue. And since you are blocked, you can't read from your input socket, so it'll back up and quench the source. Bam, that's your flow control. Ideally, keep separate free and used queues for each direction.
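A sketch of that free/used buffer scheme in C++ (class and member names are hypothetical); one mutex and two condition variables cover both lists. When every buffer sits on the used queue, the reader blocks in acquire(), stops draining its socket, and TCP quenches the fast sender:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// A fixed number of buffers circulating between a free queue and a
// used queue, giving flow control for free.
class BufferPool {
public:
    explicit BufferPool(int count) {
        for (int i = 0; i < count; ++i) free_.push(std::string());
    }

    // Reader: take an empty buffer (blocks when none are free).
    std::string acquire() { return take(free_, free_cv_); }
    // Reader: hand a filled buffer to the writer.
    void submit(std::string buf) { put(used_, used_cv_, std::move(buf)); }
    // Writer: take the next filled buffer (blocks when none are queued).
    std::string next() { return take(used_, used_cv_); }
    // Writer: return a drained buffer to the free list.
    void release(std::string buf) {
        buf.clear();
        put(free_, free_cv_, std::move(buf));
    }

private:
    std::string take(std::queue<std::string>& q, std::condition_variable& cv) {
        std::unique_lock<std::mutex> lock(mutex_);
        cv.wait(lock, [&] { return !q.empty(); });
        std::string buf = std::move(q.front());
        q.pop();
        return buf;
    }
    void put(std::queue<std::string>& q, std::condition_variable& cv,
             std::string buf) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            q.push(std::move(buf));
        }
        cv.notify_one();
    }
    std::mutex mutex_;
    std::condition_variable free_cv_, used_cv_;
    std::queue<std::string> free_, used_;
};
```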
Southern Hospitality
+4  A: 

Stop reading the data when you are not able to write it.

There is a queue of bytes coming into your program from the 100Mb/s link, and a queue out of your program to the 10Mb/s link. When the outgoing queue is full, stop reading from the incoming queue and TCP will throttle back the client on the 100Mb/s link.

You can use an internal queue between the reader and the writer to implement this cleanly.

janm
A: 

If you are doing a non-blocking, select()-style event loop: only call FD_SET(readSocket, &readSet) if your outgoing-data queue is smaller than some hard-coded maximum size.

That way, when the outgoing socket falls behind, your proxy will stop reading data from the faster client until it catches back up. The TCP protocol will take care of the rest (in particular, it will tell your faster client to slow down for a while).
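The conditional FD_SET logic might look like this (a sketch in the POSIX spelling of select(); Winsock's is analogous, and kMaxQueued is a hypothetical tuning cap, not from the original post):

```cpp
#include <sys/select.h>
#include <cstddef>

// Hard-coded cap on the outgoing-data queue; tune to taste.
const std::size_t kMaxQueued = 64 * 1024;

// Watch the fast socket for reading only while the outgoing queue
// still has room.
bool wantRead(std::size_t queuedBytes) {
    return queuedBytes < kMaxQueued;
}

// Build the read set for one select() pass. While the slow side is
// behind, the fast client's socket is left out of the set, so the
// proxy stops reading and TCP slows the client down.
void buildReadSet(int readSocket, std::size_t queuedBytes, fd_set* readSet) {
    FD_ZERO(readSet);
    if (wantRead(queuedBytes))
        FD_SET(readSocket, readSet);
}
```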

Jeremy Friesner
+3  A: 

A lot of complicated - and correct - solutions have been expounded. But really, to get to the crux of the matter - why do you have two threads? If you did the socket-100 read, socket-10 write in a single thread, it would naturally block on the write and you wouldn't have to design anything complicated.

Chris Becke