views:

535

answers:

4

I have a multithreaded program where I create a generator function and then pass it to new threads. I want it to be shared/global in nature so each thread can get the next value from the generator.

Is it safe to use a generator like this, or will I run into problems/race conditions accessing the shared generator from multiple threads?

If not, is there a better way to approach the problem? I need something that will cycle through a list and produce the next value for whichever thread calls it.

+16  A: 

It's not thread-safe; simultaneous calls may interleave and corrupt the generator's internal state (its local variables).

The common approach is the master-slave pattern (now often called the farmer-worker pattern). Add a third thread which generates the data, and put a Queue between the master and the slaves: the slaves read from the queue, and the master writes to it. The standard Queue module provides the necessary thread safety and also arranges to block the master until the slaves are ready to read more data.
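A minimal sketch of that arrangement (Python 3 syntax; the module is named Queue on Python 2, and all names here are illustrative):

```python
import threading
from queue import Queue  # named Queue on Python 2

q = Queue(maxsize=10)  # a bounded queue blocks the master when workers lag
SENTINEL = object()    # tells each worker to stop
num_workers = 4
results = []           # list.append is atomic in CPython

def master(items):
    """The 'farmer': feeds work into the queue, then stops the workers."""
    for item in items:
        q.put(item)
    for _ in range(num_workers):
        q.put(SENTINEL)

def worker():
    """A worker: pulls items until it sees the sentinel."""
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        results.append(item * 2)  # stand-in for real per-item work

threads = [threading.Thread(target=worker) for _ in range(num_workers)]
for t in threads:
    t.start()
master(range(100))
for t in threads:
    t.join()
```

The sentinel objects let the workers exit cleanly instead of blocking forever on an empty queue.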

Martin v. Löwis
Definitely +1 for Queue.Queue, great way to organize threading system when applicable (which is most of the time, and definitely for this task).
Alex Martelli
+2  A: 

No, they are not thread-safe. You can find interesting info about generators and multi-threading in:

http://www.dabeaz.com/generators/Generators.pdf

Mihail
A: 

It depends on which Python implementation you're using. In CPython, the GIL makes all operations on Python objects thread-safe, as only one thread can be executing bytecode at any given time.

http://en.wikipedia.org/wiki/Global_Interpreter_Lock

Algorias
"the GIL makes all operations on python objects threadsafe" - huh? all operations are not atomic
Corey Goldberg
This is dangerously misleading. The GIL only means that Python code won't corrupt the Python state in a multithreaded environment: you can't change threads in the middle of a bytecode op. (For example, you can modify a shared dict without corrupting it.) You can still change threads between any two bytecode ops.
Glenn Maynard
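A quick sketch of the point made in the comments above (names hypothetical): `counter += 1` compiles to several bytecode ops (load, add, store), so unlocked increments from multiple threads can lose updates, while a lock restores correctness:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # load, add, store: a thread switch in between loses an update

def run(use_lock, n=50000, nthreads=4):
    """Run nthreads threads doing n increments each; return the final count."""
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n, use_lock))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(use_lock=True))   # always n * nthreads
print(run(use_lock=False))  # may fall short if a race occurs
```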
+4  A: 

Edited to add benchmark below.

You can wrap a generator with a lock. For example,

import threading

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self): return self

    def next(self):
        # Only one thread at a time can advance the underlying iterator.
        self.lock.acquire()
        try:
            return self.it.next()
        finally:
            self.lock.release()

gen = (x*2 for x in [1, 2, 3, 4])  # a generator expression, not a list
g2 = LockedIterator(gen)
print list(g2)
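On Python 3, where the iterator protocol method is `__next__` and a Lock can be used as a context manager, the equivalent wrapper (a sketch) would be:

```python
import threading

class LockedIterator:
    """Python 3 version: __next__ plus a lock used as a context manager."""
    def __init__(self, it):
        self._lock = threading.Lock()
        self._it = iter(it)

    def __iter__(self):
        return self

    def __next__(self):
        with self._lock:  # one thread at a time advances the iterator
            return next(self._it)

gen = (x * 2 for x in [1, 2, 3, 4])
print(list(LockedIterator(gen)))  # prints [2, 4, 6, 8]
```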


Locking takes 50ms on my system, Queue takes 350ms. Queue is useful when you really do have a queue; for example, if you have incoming HTTP requests and you want to queue them for processing by worker threads. (That doesn't fit the Python iterator model--once an iterator runs out of items, it's done.) If you really do have an iterator, then LockedIterator is a faster and simpler way to make it thread-safe.

from datetime import datetime
import threading

num_worker_threads = 4

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self): return self

    def next(self):
        self.lock.acquire()
        try:
            return self.it.next()
        finally:
            self.lock.release()

def test_locked(it):
    it = LockedIterator(it)
    def worker():
        try:
            for i in it:
                pass
        except Exception, e:
            print e
            raise

    threads = []
    for i in range(num_worker_threads):
        t = threading.Thread(target=worker)
        threads.append(t)
        t.start()

    for t in threads:
        t.join()

def test_queue(it):
    from Queue import Queue
    def worker():
        try:
            while True:
                item = q.get()
                q.task_done()
        except Exception, e:
            print e
            raise

    q = Queue()
    for i in range(num_worker_threads):
        t = threading.Thread(target=worker)
        t.setDaemon(True)  # workers never exit; don't block interpreter shutdown
        t.start()

    for item in it:
        q.put(item)

    q.join()

start_time = datetime.now()
it = [x*2 for x in range(1, 10000)]

test_locked(it)
#test_queue(it)
end_time = datetime.now()
took = end_time - start_time
print "took %.01f ms" % ((took.seconds + took.microseconds/1000000.0)*1000)
Glenn Maynard
Less efficient than using a Queue.Queue, but beautifully done.
gooli