One of the characteristics I love most about Google App Engine's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then POSTs to that URL when the task queue is ready to execute the task.

This structure means that tasks always execute the most current version of the code. By contrast, my Gearman workers all run code inside my Django project, so when I push a new version live I have to kill off the old workers and start new ones so that they use the current version of the code.

My goal is to have the task queue be independent of the code base, so that I can push a new version live without restarting any workers. So I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue?

The process would work like this:

  1. A user request comes in and triggers a few tasks that shouldn't block the response.
  2. Each task has a unique URL, so I enqueue a Gearman job to POST to that URL.
  3. The Gearman server finds a worker and passes the URL and POST data to it.
  4. The worker simply POSTs to the URL with the data, thus executing the task. (A rough sketch of this flow follows the list.)
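Here's a minimal sketch of what I have in mind, assuming the python-gearman library, a Gearman server on localhost:4730, and the requests library for the HTTP call; the task name `post_to_url`, the enqueue helper, and the payload layout are just illustrations, not a settled design:

```python
# Sketch only: assumes python-gearman and requests are installed and a
# Gearman server is running on localhost:4730. The task name 'post_to_url'
# and the enqueue_url_task helper are made up for illustration.
import json
import gearman
import requests

# Producer side (e.g. called from a Django view) -- step 2 above.
def enqueue_url_task(url, post_data):
    client = gearman.GearmanClient(['localhost:4730'])
    payload = json.dumps({'url': url, 'data': post_data})
    # background=True makes this fire-and-forget; we don't wait for a result.
    client.submit_job('post_to_url', payload, background=True)

# Worker side -- steps 3 and 4 above. This worker never touches application
# code, so it shouldn't need a restart when the Django project is redeployed.
def post_to_url(worker, job):
    payload = json.loads(job.data)
    requests.post(payload['url'], data=payload['data'], timeout=10)
    return 'ok'

if __name__ == '__main__':
    gm_worker = gearman.GearmanWorker(['localhost:4730'])
    gm_worker.register_task('post_to_url', post_to_url)
    gm_worker.work()
```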

Assume the following:

  1. Each request from a Gearman worker is signed somehow, so that we know it's coming from the task queue and not from a malicious client. (A sketch of one possible signing scheme follows this list.)
  2. Tasks are limited to run in less than 10 seconds (there would be no long-running tasks that could time out).
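For the "signed somehow" part, one option would be an HMAC over the request body with a secret shared between the worker and the web app; the header name and helper names below are hypothetical:

```python
# Hypothetical signing sketch: the shared secret, header name, and helpers
# are all made up. Older Pythons may lack hmac.compare_digest, in which case
# a constant-time comparison has to be written by hand.
import hashlib
import hmac

SHARED_SECRET = 'replace-with-a-real-secret'

def sign(body):
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

# Worker side: send the signature alongside the POST, e.g.
#   requests.post(url, data=body, headers={'X-Task-Signature': sign(body)})

# Django side: reject anything whose signature doesn't verify.
def is_signed_task_request(request):
    expected = sign(request.body)  # request.raw_post_data on older Django
    provided = request.META.get('HTTP_X_TASK_SIGNATURE', '')
    return hmac.compare_digest(expected, provided)
```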

What are the potential pitfalls of such an approach? Here's one that worries me:

  • The server can potentially get hammered with many requests all at once, all triggered by a single earlier request. So one user request might entail 10 concurrent HTTP requests back to the app. I suppose I could run a single worker with a sleep before every request as a crude rate limit.

Any thoughts?

+3  A: 

As a user of both Django and Google App Engine, I can certainly appreciate what you're getting at. At work I'm currently working on the exact same scenario using some pretty cool open-source tools.

  1. Take a look at Celery. It's a distributed task queue built with Python that exposes three concepts - a queue, a set of workers, and a result store. It's pluggable with different tools for each part.

  2. The queue should be battle-hardened, and fast. Check out RabbitMQ for a great queue implementation in Erlang, using the AMQP protocol.

  3. The workers ultimately can be plain Python functions. You can trigger workers using either queue messages or, perhaps more pertinent to what you're describing, webhooks.

Check out the Celery webhook documentation. Using all these tools you can build a production-ready distributed task queue that implements your requirements above.
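The webhook style maps naturally onto your "task == URL" idea: each task endpoint is just a Django view, so deploying new code automatically updates the task logic. A rough sketch, with hypothetical view and helper names:

```python
# Hypothetical webhook-style task endpoint. send_welcome_email and the
# signature check are illustrative, not part of Celery's or Django's API.
from django.http import HttpResponse, HttpResponseForbidden

def send_welcome_email_task(request):
    if not is_signed_task_request(request):  # e.g. the signing check sketched in the question
        return HttpResponseForbidden()
    user_id = request.POST.get('user_id')
    send_welcome_email(user_id)              # the actual application work
    return HttpResponse('ok')                # a 2xx tells the queue the task succeeded
```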

I should also mention that, in regard to your first pitfall, Celery implements rate limiting of tasks using a token bucket algorithm.
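Rate limits are declared per task; a minimal sketch (the import path varies between Celery versions, and the task body here is made up):

```python
# Sketch of Celery's per-task rate limiting. In older Celery versions the
# decorator lives in celery.decorators; in newer ones it's app.task.
from celery.decorators import task

@task(rate_limit='10/s')  # at most ten of these per second, per worker
def post_to_url(url, data):
    import requests
    requests.post(url, data=data, timeout=10)
```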

rlotun