Hi folks,

I'm having quite a problem deciding how to serve a few Python scripts.

The problem is that the basic functionality can be generalized like this:

import time

do_something()
time.sleep(3)   # the script sits idle for a few seconds between steps
do_something()

I have tried various WSGI servers, but they all impose concurrency limits: I have to specify how many threads to use, and so on.

I just want the server's resources to be used freely and efficiently, without an arbitrary cap on concurrency.



Any ideas?

A: 

What about CherryPy WSGI server?

What does that sleep mean? Are you really writing a web application?

Messa
@Messa: thanks for the reply! The sleep means that the script does nothing for 3 seconds. I tried the CherryPy WSGI server, but it requires me to choose how many threads to use, and that is currently the main limitation.
RadiantHex
Sorry, I thought that the CherryPy WSGI server manages the thread count dynamically, but I have now looked at its source code and I don't think so anymore :)
Messa
+1  A: 

Have you looked at Tornado, with its non-blocking asynchronous requests?

http://www.tornadoweb.org/

I have never used it myself, but here is an example from the docs:

import tornado.escape
import tornado.httpclient
import tornado.web

# get() returns without blocking; Tornado calls on_response when the fetch completes
class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        http = tornado.httpclient.AsyncHTTPClient()
        http.fetch("http://friendfeed-api.com/v2/feed/bret",
                   callback=self.async_callback(self.on_response))

    def on_response(self, response):
        if response.error: raise tornado.web.HTTPError(500)
        json = tornado.escape.json_decode(response.body)
        self.write("Fetched " + str(len(json["entries"])) + " entries "
                   "from the FriendFeed API")
        self.finish()
hadrien
@hadrien: for some reason it only serves one request at a time :|
RadiantHex
A: 

You might find Spawning a good fit. It has several options for deployment, one of which is somewhat transparent async (as implemented by Eventlet). So if you literally do time.sleep(3) it'll be okay. Not everything you might do is handled transparently, so you have to pay some attention to Eventlet and how it works. Sockets are handled, for instance: if you read from a socket (and that socket blocks), it won't halt the server or consume a thread. But CPU-heavy work will block all requests. So... it's a bit tricky. Spawning has some other deployment options that might work for you too.
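
For illustration, here is a minimal, untested sketch of the Eventlet style (the port number and handler name are arbitrary). monkey_patch() swaps time.sleep and the socket functions for cooperative versions, so a request that is sleeping yields to other green threads instead of holding an OS thread:

import eventlet
eventlet.monkey_patch()  # patch time, socket, etc. before anything else imports them

import time
from eventlet import wsgi

def app(environ, start_response):
    time.sleep(3)  # cooperative after monkey_patch(); other requests keep being served
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'done\n']

wsgi.server(eventlet.listen(('', 8080)), app)

There is no thread count to pick here; each request gets a green thread. The flip side is that anything that blocks without yielding (CPU-bound work, an unpatched C library) stalls every request.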

You might be able to use WaitForIt, though it has some gotchas. It will spawn threads for long-running requests, and provides some browser feedback, so if you are creating a very simplistic frontend to long-running backend processes it might be useful. It acts as WSGI middleware.

Ian Bicking
A: 

So it's OK for the client to be tied up waiting for an answer for 3 seconds, but not OK for the server? That seems...odd.

If you'd rather not have the client tied up for 3 seconds, a common mechanism is to have the initial request return "202 Accepted" ASAP with a URL to a status monitor. Then the server can spawn a new thread or subprocess for the task, and the client can do other things, and then poll the status URL to find out when the task is done.
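
A bare-bones sketch of that pattern as a plain WSGI app (the URLs, the in-memory jobs dict, and do_something() are just stand-ins; real code would keep job state somewhere durable and limit the number of worker threads):

import threading
import time
import uuid

jobs = {}  # job id -> 'pending' or 'done'

def do_something():
    time.sleep(3)  # stand-in for the real work

def run_job(job_id):
    do_something()
    jobs[job_id] = 'done'

def app(environ, start_response):
    path = environ.get('PATH_INFO', '')
    if path == '/start':
        job_id = uuid.uuid4().hex
        jobs[job_id] = 'pending'
        threading.Thread(target=run_job, args=(job_id,)).start()
        start_response('202 Accepted',
                       [('Content-Type', 'text/plain'),
                        ('Location', '/status/' + job_id)])
        return [('poll /status/' + job_id + '\n').encode()]
    if path.startswith('/status/'):
        status = jobs.get(path[len('/status/'):], 'unknown')
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [(status + '\n').encode()]
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found\n']

The client hits /start, gets the 202 and a status URL back right away, and polls /status/<id> until it says 'done'.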

fumanchu