I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages.
Is there a simple way to set up an async/callback-based URL download mechanism in web.py? Low resource usage is particularly important, as each user-initiated request could result in the download of multiple pages.
The flow would look like:
User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results
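Roughly, this is the shape I have in mind. The sketch below uses a thread pool and blocking urllib calls just as a stand-in for whatever async mechanism would actually fit; the handler name, route, and URLs are all made up:

```python
# Sketch only: a thread pool stands in for the async download
# mechanism I'm asking about. URLs and the analysis step are placeholders.
import concurrent.futures
import urllib.request

import web

urls = (
    '/analyze', 'Analyze',
)

def fetch(url):
    # Download one page; runs in a worker thread.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

class Analyze:
    def GET(self):
        pages = [
            'http://example.com/page1',
            'http://example.com/page2',
            # ... up to ~10 pages per user request
        ]
        # Download all pages in parallel; the handler blocks until
        # every download has finished.
        with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
            contents = list(pool.map(fetch, pages))
        # Analyze the downloaded contents and return the results.
        return 'analyzed %d pages' % len(contents)

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()
```

This works, but ten threads per user request seems heavy, which is why I'm asking about a lighter async/callback approach.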
I recognize that Twisted would be a nice way to do this, but I'm already using web.py, so I'm particularly interested in something that can fit within web.py.