I'm writing a web site that uses multiple web services with throttle restrictions, e.g. Amazon allows 1 request per second, another service 5000/day, another x/minute.

When a user does something, I need to trigger one or more requests to the above services and return the results (to the browser) when available.

The solution needs to be flexible so I can easily add or remove services.

I thought of a FIFO queuing system, but some later requests may actually be eligible for processing before earlier ones.

I'm asking for a design pattern, but any suitable technology suggestions are very welcome, particularly .NET.

Thanks!

A: 

I'm not sure that I've completely understood where you see the problem. From

some later requests may actually be eligible for processing before earlier ones.

I infer that you're concerned about buffering requests that cannot be satisfied now but may be worked on shortly.

You received a request such as

 { Amazon, X }

and due to (say) X's throttling you can't satisfy that request right now.

My first question would be: are the requests independent? That is, can I process the Amazon request immediately and queue the X request? If so, then a simple FIFO queue for each service will surely do the job. You will probably need a maximum queue size (given that HTTP requests time out, you can't wait for hours).
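A per-service throttled FIFO queue could be sketched like this (Python; the `ThrottledQueue` class and its method names are my own illustration, not an existing library):

```python
import threading
import time
from collections import deque

class ThrottledQueue:
    """FIFO queue that releases at most `rate` requests per `per` seconds."""

    def __init__(self, rate, per, max_size=100):
        self.rate = rate          # allowed requests per window
        self.per = per            # window length in seconds
        self.max_size = max_size  # reject beyond this (HTTP clients time out)
        self._queue = deque()
        self._sent = deque()      # timestamps of recent dispatches
        self._lock = threading.Lock()

    def enqueue(self, request):
        """Add a request; return False if the queue is full (caller rejects)."""
        with self._lock:
            if len(self._queue) >= self.max_size:
                return False
            self._queue.append(request)
            return True

    def dequeue_ready(self):
        """Return the next request if a throttle slot is free, else None."""
        with self._lock:
            now = time.monotonic()
            # drop dispatch timestamps that have aged out of the window
            while self._sent and now - self._sent[0] >= self.per:
                self._sent.popleft()
            if self._queue and len(self._sent) < self.rate:
                self._sent.append(now)
                return self._queue.popleft()
            return None
```

One instance per service, each configured with that service's limit; a worker loop polls `dequeue_ready` on each queue in turn.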

If you have in mind deferring the Amazon request until the X request can also be issued, then things get more complicated. You have, in effect, a meeting-scheduling problem: you need to find a slot when both Amazon and X are free. So you could have some kind of list of queues, where each queue holds the requests to be satisfied in a given time unit for a service.

Amazon(3 per sec)
      09:05:31  -  request A, B, C
      09:05:32  -  request D, E, F
      09:05:33  -  request G  -  -  <=== slots available
      ---                           <=== times and slots available

X (2 per min)
      09:05     -  request M, N
      09:06     -  request O        <=== slot available

Here our { Amazon, X } has a slot available at 09:06

Amazon(3 per sec)
      09:05:31  -  request A, B, C
      09:05:32  -  request D, E, F
      09:05:33  -  request G  -  -  <=== slots available
      ---                           <=== times and slots available
      09:06:01  -  request P

X (2 per min)
      09:05     -  request M, N
      09:06     -  request O, P
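The slot search above can be sketched as a small function (a minimal illustration in Python; the data layout and the name `earliest_common_slot` are my own assumptions, not part of any library):

```python
def earliest_common_slot(schedules, start, horizon):
    """Find the earliest second t in [start, start + horizon) at which every
    service has a free slot. `schedules` maps a service name to a tuple
    (capacity_per_window, window_seconds, booked), where `booked` counts
    requests already scheduled, keyed by window index (t // window)."""
    for t in range(start, start + horizon):
        if all(
            booked.get(t // window, 0) < capacity
            for capacity, window, booked in schedules.values()
        ):
            # book the slot in every service's schedule
            for capacity, window, booked in schedules.values():
                booked[t // window] = booked.get(t // window, 0) + 1
            return t
    return None  # no common slot within the horizon; reject or retry later
```

Modelling the worked example (Amazon 3/sec with seconds 0 and 1 full, X 2/min with minute 0 full and one booking in minute 1), the earliest common slot for { Amazon, X } falls in the second minute, matching the 09:06 booking for request P above.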

Personally, I'd start with something much simpler: if the request cannot be satisfied right now because any one service's limit is reached, just reject the request.

djna
A: 

Thanks for your comment. Basically I don't want to reject requests, I want to queue them and display back to user when they have been processed. Kind of like an ordering system.

0:00:01 Amazon request comes in -> next slot available is in 2 seconds (0:00:03)

0:00:02 X request comes in -> next slot available for this service is in 5 seconds (0:00:07)

0:00:03 Amazon request comes in -> next slot available is in 2 seconds (0:00:05)

I need a queue system that will pull the two Amazon requests out first. I guess my question is whether to create separate queues for each service, and whether any common technology (e.g. Service Broker) is well suited to the throttling. If not, I'll end up creating my own throttling/queuing system, which is why I was looking for common design patterns (e.g. producer/consumer), since the above example shows it's not simple FIFO.

So far, a FIFO queue for each service, each with its own throttling, looks like the way to move forward.
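That per-service producer/consumer arrangement could be sketched as below (Python for illustration; in .NET, a `BlockingCollection<T>` per service drained by its own task would play the same role). The worker function, queue names, and the placeholder "processing" are my own assumptions:

```python
import queue
import threading
import time

def service_worker(name, requests, min_interval, results):
    """Consumer: drain one service's queue, pacing calls to its rate limit."""
    while True:
        req = requests.get()
        if req is None:
            break                      # sentinel: shut down the worker
        # stand-in for the real web-service call
        results.put((name, req, f"processed {req}"))
        time.sleep(min_interval)       # enforce this service's throttle

# One FIFO queue per service, each drained by its own throttled consumer;
# completed results funnel into a shared queue for delivery to the browser.
results = queue.Queue()
amazon_q = queue.Queue()
worker = threading.Thread(
    target=service_worker, args=("amazon", amazon_q, 0.01, results)
)
worker.start()
amazon_q.put("req-1")
amazon_q.put("req-2")
amazon_q.put(None)                     # sentinel
worker.join()
```

Because each service has its own queue and its own pacing, a slow service never blocks a fast one, which is exactly the non-FIFO behaviour described in the timeline above.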

DaveO