Hi guys,
I've written a web spider that accesses both a US and an EU server. The two servers share the same data structure but hold different data, and I want to collate it all. To be nice to the servers there's a wait time between requests, so to speed things up, and since the crawling logic is identical for both, I've threaded the program so it can hit the EU and US servers simultaneously.
This crawl will take on the order of weeks, not days. There will be exceptions, and while I've tried to handle everything inside the program, it's likely something weird will crop up. To be truly defensive about it, I'd like to catch a failed thread, log the error, and restart it. Worst case I lose a handful of pages out of thousands, which beats having a thread die and losing 50% of my throughput. From what I've read, though, Python threads die silently. Does anyone have any ideas? My current skeleton is below, followed by a sketch of the kind of supervision I'm imagining.
import threading

import QueueManager

class AccessServer(threading.Thread):
    def __init__(self, site):
        threading.Thread.__init__(self)
        self.site = site
        self.qm = QueueManager.QueueManager(site)

    def run(self):
        # Do stuff here
        pass

def main():
    us_thread = AccessServer(u"us")
    us_thread.start()
    eu_thread = AccessServer(u"eu")
    eu_thread.start()

if __name__ == "__main__":
    main()
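For concreteness, here's the pattern I've been toying with: wrap the body of run() in a try/except that logs the traceback, and have main() poll is_alive() and build a replacement thread when one dies (a Thread object can't be started twice, so a restart means constructing a new one). It's only a sketch: crawl() is a stand-in for the real spidering work, and the 60-second poll interval is arbitrary.

import logging
import threading
import time

logging.basicConfig(level=logging.INFO)

class AccessServer(threading.Thread):
    def __init__(self, site):
        threading.Thread.__init__(self)
        self.site = site
        self.failed = False

    def crawl(self):
        # Stand-in for the real spidering loop.
        pass

    def run(self):
        try:
            self.crawl()
        except Exception:
            # Log the full traceback rather than letting the thread die silently.
            logging.exception("thread for %s died", self.site)
            self.failed = True

def main():
    threads = dict((site, AccessServer(site)) for site in (u"us", u"eu"))
    for t in threads.values():
        t.start()
    while threads:
        time.sleep(60)  # poll interval; arbitrary
        for site, t in list(threads.items()):
            if t.is_alive():
                continue
            if t.failed:
                # Thread objects can't be restarted, so build a fresh one.
                threads[site] = AccessServer(site)
                threads[site].start()
            else:
                del threads[site]  # finished cleanly

if __name__ == "__main__":
    main()

What I'm not sure about is whether polling like this is the idiomatic way to supervise threads, or whether there's a cleaner hook for catching a thread's death.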