tags:
views: 45
answers: 1

I'm creating a WCF web service that will accept many continuous requests. The web service needs to hold onto each request until an internal application (which will poll the web service every couple of seconds) asks whether any requests exist and retrieves them if so. The internal application will then send a response back to the web service, which passes that response back to the initial caller.

I.e.:

Client ---1) Request---> Web Service <---2) Request--- Internal Application

Client <--4) Response--- Web Service <---3) Response---Internal Application

I'm trying to design the implementation of the web service, and in particular the mechanism that accepts a request, holds onto it until the internal application asks for data, and then waits for the internal application's response.

My major concerns are:

1) What if thousands of requests come in before the internal application makes a request? How should they be held?

2) What if the internal application dies and massive amounts of requests build up? (I'm going to need to time out those requests.)

3) How do I connect the initial request from the client with the response from the internal application?

4) The client will be blocked waiting for a response.

Would WCF Message Queuing help in this situation? Would the web service be able to manage the message queue internally? That is, when a message comes in, the web service would add it to the queue; when the internal application makes a request, the web service would take the message from the front of the queue, pass it to the internal application, wait for its response, and pass that back to the client.

Is that even possible?

What if 2000 client requests come in at the same time? Because each client will be synchronously waiting for a response, will the above scenario work? Will I be able to match up the original request with the response from the internal application's thread so I can return it to the right client?

Does the message queue method seem like overkill? Could I just hold the requests in a static dictionary?
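To make concerns 2 and 3 concrete, here's a rough sketch of the "hold requests in a dictionary" idea (written in Python for brevity; in WCF/C# the equivalent would be something like a `ConcurrentDictionary` plus a `ManualResetEvent` per request). All names here are illustrative, not a real WCF API:

```python
# Sketch: correlate each blocked client request with the internal app's
# response via a generated request ID, and time out stale requests.
import threading
import uuid

class PendingRequests:
    """Holds client requests until the internal app responds, keyed by ID."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}  # request_id -> slot dict

    def submit(self, payload, timeout=30.0):
        """Runs on the client's thread; blocks until a response or timeout."""
        request_id = str(uuid.uuid4())
        done = threading.Event()
        slot = {"payload": payload, "done": done, "response": None}
        with self._lock:
            self._pending[request_id] = slot
        # Concern 2: if the internal app dies, the client times out
        # instead of waiting forever.
        if not done.wait(timeout):
            with self._lock:
                self._pending.pop(request_id, None)
            raise TimeoutError("internal application did not respond")
        return slot["response"]

    def take_all(self):
        """Called when the internal app polls: list outstanding requests.
        (A real implementation would also mark these as in-flight so a
        second poll doesn't hand them out twice.)"""
        with self._lock:
            return [(rid, slot["payload"])
                    for rid, slot in self._pending.items()
                    if not slot["done"].is_set()]

    def respond(self, request_id, response):
        """Concern 3: the request ID matches the response to its waiting client."""
        with self._lock:
            slot = self._pending.pop(request_id, None)
        if slot is not None:
            slot["response"] = response
            slot["done"].set()
```

The key point is that each blocked client thread waits on its own event, so responses can come back in any order and still reach the right caller.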

Do you have any other suggestions?

A: 

First off, frankly, a polling-type architecture sounds very wrong for real-time request handling. If it is at all possible to forward the requests directly to the application that will process them, that would be infinitely preferable. You're opening yourself up to a whole host of additional problems, and you're limiting even your best-case response time to "a couple seconds", which is abysmal.

But. Assuming that you can't do anything about the architecture, queuing the messages for retrieval by the internal app is not the problem. An in-memory queue structure can handle that fine (just limit the size of the queue to a couple thousand and return an error if it fills up). Your real bottleneck is going to be the number of open requests on the web service. I just don't see how the specified architecture is going to handle thousands of requests per second if each request has a minimum open time of "a couple seconds". Servers are not designed to keep that many requests open. Servers capable of processing thousands of requests per second do so by responding to each request in a matter of milliseconds and then clearing it out. If each request stays open for multiple seconds, it's going to swamp your server: requests will queue up and 90% of them will end up timing out, assuming the server itself doesn't crash under the weight.
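The bounded in-memory queue suggested above is a few lines in any language. A minimal sketch (Python for brevity; the function names and the 2000 cap are illustrative, not a WCF API):

```python
# Sketch: a size-capped queue that rejects new requests when full,
# rather than letting them pile up while the internal app is down.
import queue

MAX_PENDING = 2000  # "a couple thousand", per the suggestion above

incoming = queue.Queue(maxsize=MAX_PENDING)

def enqueue_request(request):
    """Called as each client request arrives; fail fast when the queue is full."""
    try:
        incoming.put_nowait(request)
        return True
    except queue.Full:
        return False  # return an error to the client instead of blocking

def drain_for_internal_app(max_items=100):
    """Called when the internal app polls; hand over whatever is queued."""
    batch = []
    while len(batch) < max_items:
        try:
            batch.append(incoming.get_nowait())
        except queue.Empty:
            break
    return batch
```

Rejecting at the door like this is what keeps a dead internal app from turning into unbounded memory growth on the web service.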

levand
thanks for your advice... This isn't my design, I just need to work with it... or see if I can push to change it... Thanks, have a good one.
stevenrosscampbell