Use a `Queue.Queue` instance, which is intrinsically thread-safe. Each thread can `.put` its results to that global instance when it's done, and the main thread (when it knows all working threads are done, e.g. by `.join`ing them as in @unholysampler's answer) can loop, `.get`ting each result from it and using each result to `.extend` the "overall result" list, until the queue is empty.
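A minimal sketch of that approach (the `search` function is a hypothetical stand-in for whatever per-keyword work your threads do):

```python
import threading
import queue  # Queue in Python 2

def search(keyword):
    # Hypothetical stand-in for the real per-keyword work.
    return [keyword.upper()]

results_q = queue.Queue()  # intrinsically thread-safe

def worker(keyword):
    results_q.put(search(keyword))  # each thread puts its own result

keywords = ["spam", "eggs", "ham"]
threads = [threading.Thread(target=worker, args=(kw,)) for kw in keywords]
for t in threads:
    t.start()
for t in threads:
    t.join()  # main thread waits until all workers are done

overall = []
while not results_q.empty():  # safe here: every producer has finished
    overall.extend(results_q.get())
```

Note that draining with `.empty()` is only safe because all producer threads have been `.join`ed first; while producers are still running, `.empty()` can race with a pending `.put`.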
Edit: there are other big problems with your code: if the maximum number of threads is less than the number of keywords, it will never terminate. You're trying to start one thread per keyword, never fewer, but once you've started the maximum number you loop forever to no further purpose.
Consider instead using a thread pool, kind of like the one in this recipe, except that in lieu of queueing callables you'll queue the keywords, since the callable you want to run is the same in each thread, just with a varying argument. Of course that callable will be changed to `.get` an item from the incoming-tasks queue and `.put` the list of results to the outgoing-results queue when done.
To terminate the N threads you can, after all keywords, `.put` N "sentinels" (e.g. `None`, assuming no keyword can be `None`): a thread's callable will exit when the "keyword" it just pulled is `None`.
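Putting the last two paragraphs together, a sketch of such a pool (again with a hypothetical `search` function, and a thread count picked arbitrarily for illustration):

```python
import threading
import queue  # Queue in Python 2

def search(keyword):
    # Hypothetical stand-in for the real per-keyword work.
    return [keyword.upper()]

NUM_THREADS = 3  # fixed pool size, independent of the number of keywords
tasks = queue.Queue()
results = queue.Queue()

def pool_worker():
    while True:
        keyword = tasks.get()
        if keyword is None:   # sentinel: no more work, exit the thread
            break
        results.put(search(keyword))

threads = [threading.Thread(target=pool_worker) for _ in range(NUM_THREADS)]
for t in threads:
    t.start()

keywords = ["spam", "eggs", "ham", "bacon"]
for kw in keywords:
    tasks.put(kw)
for _ in range(NUM_THREADS):
    tasks.put(None)  # one sentinel per thread so each one terminates

for t in threads:
    t.join()

overall = []
while not results.empty():  # safe: all workers have been joined
    overall.extend(results.get())
```

Because the pool size is fixed, this terminates correctly even when there are more keywords than threads: the extra keywords simply wait in the tasks queue until a worker is free.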
More often than not, `Queue.Queue` offers the best way to organize threading (and multiprocessing!) architectures in Python, be they generic like the one in the recipe I pointed you to, or more specialized like the one I'm suggesting for your use case in the last two paragraphs.