I've got a Python web crawler and I want to distribute the download requests among many different proxy servers, probably running Squid (though I'm open to alternatives). For example, it could work in a round-robin fashion, where request1 goes to proxy1, request2 to proxy2, and so on, eventually looping back around. Any idea how to set this up?

To make it harder, I'd also like to be able to dynamically change the list of available proxies, bring some down, and add others.

If it matters, IP addresses are assigned dynamically.

Thanks :)

+1  A: 

Make your crawler keep a list of proxies and, with each HTTP request, use the next proxy from the list in round-robin fashion. However, this will prevent you from using HTTP/1.1 persistent connections, since consecutive requests go through different proxies. Modifying the proxy list takes effect on its own: newly added proxies enter the rotation, and removed ones simply stop being used.
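
A minimal sketch of that first approach (assuming the requests library and hypothetical proxy URLs; the lock is what lets you add and remove proxies while the crawler is running):

    import threading

    import requests  # assumed HTTP client; any client that accepts a proxy URL works


    class ProxyRotator:
        """Round-robin rotation over a proxy list that can change at runtime."""

        def __init__(self, proxies):
            self._proxies = list(proxies)
            self._index = 0
            self._lock = threading.Lock()

        def next_proxy(self):
            with self._lock:
                if not self._proxies:
                    return None  # no proxies registered: fall back to direct
                proxy = self._proxies[self._index % len(self._proxies)]
                self._index += 1
                return proxy

        def add(self, proxy):
            with self._lock:
                self._proxies.append(proxy)

        def remove(self, proxy):
            with self._lock:
                self._proxies.remove(proxy)


    rotator = ProxyRotator([
        "http://proxy1.example.com:3128",  # hypothetical Squid instances
        "http://proxy2.example.com:3128",
    ])

    def fetch(url):
        proxy = rotator.next_proxy()
        proxies = {"http": proxy, "https": proxy} if proxy else None
        return requests.get(url, proxies=proxies, timeout=10)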

Or have several connections open in parallel, one to each proxy, and distribute your crawling requests across the open connections. Dynamic membership can be implemented by having each connector register itself with (and deregister from) the request dispatcher, as in the sketch below.
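
One way that could look in Python, again assuming requests (the handle function is a placeholder): each proxy gets its own worker thread holding a persistent session, all workers pull from a shared dispatch queue, and deregistering a proxy is just stopping its worker:

    import queue
    import threading

    import requests

    url_queue = queue.Queue()  # the request dispatcher: crawl URLs go in here

    def handle(response):
        print(response.status_code, response.url)  # placeholder page handler

    def proxy_worker(proxy_url, stop_event):
        """One connector per proxy, with its own persistent session."""
        session = requests.Session()
        session.proxies = {"http": proxy_url, "https": proxy_url}
        while not stop_event.is_set():
            try:
                url = url_queue.get(timeout=1)
            except queue.Empty:
                continue  # keep checking the stop flag while idle
            try:
                handle(session.get(url, timeout=10))
            except requests.RequestException as exc:
                print("fetch via", proxy_url, "failed:", exc)
            finally:
                url_queue.task_done()

    def register_proxy(proxy_url):
        """Register a connector with the dispatcher; set() the event to remove it."""
        stop_event = threading.Event()
        threading.Thread(target=proxy_worker, args=(proxy_url, stop_event),
                         daemon=True).start()
        return stop_event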

Bernd
Thanks, this gave me some good ideas, mainly using a Redis set to store the proxy list and the SRANDMEMBER command to pick a proxy.
Jacob
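
For later readers, that Redis idea could look roughly like this, assuming the redis-py client and a set named "proxies" (both names are illustrative). SRANDMEMBER returns a random member rather than a strict round-robin pick, but the set can be modified from anywhere at any time:

    import redis
    import requests

    r = redis.Redis(decode_responses=True)  # assumes a local Redis server

    # Add or remove proxies from any process, e.g. via redis-cli:
    #   SADD proxies http://proxy1.example.com:3128
    #   SREM proxies http://proxy1.example.com:3128

    def fetch(url):
        proxy = r.srandmember("proxies")  # random member of the set, or None
        if proxy is None:
            return requests.get(url, timeout=10)  # no proxies registered
        return requests.get(url, proxies={"http": proxy, "https": proxy},
                            timeout=10)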