views: 263
answers: 5

I feel like I should know the answer to this, but I'm going to ask anyway just in case I'm making a potentially catastrophic mistake.

The following code executes as expected with no errors/exceptions:

static void Main(string[] args)
{
    ManualResetEvent flag = new ManualResetEvent(false);
    ThreadPool.QueueUserWorkItem(s =>
    {
        flag.WaitOne();
        Console.WriteLine("Work Item 1 Executed");
    });
    ThreadPool.QueueUserWorkItem(s =>
    {
        flag.WaitOne();
        Console.WriteLine("Work Item 2 Executed");
    });
    Thread.Sleep(1000);
    flag.Set();
    flag.Close();
    Console.WriteLine("Finished");
}

Of course, as is usually the case with multi-threaded code, a successful test does not prove that this is actually thread safe. The test also succeeds if I put Close before Set, even though the documentation clearly states that attempting to do anything after a Close will result in undefined behaviour.

My question is, when I invoke the ManualResetEvent.Set method, is it guaranteed to signal all waiting threads before returning control to the caller? In other words, assuming that I'm able to guarantee that there will be no further calls to WaitOne, is it safe to close the handle here, or is it possible that under some circumstances this code would prevent some waiters from getting signaled or result in an ObjectDisposedException?

The documentation only says that Set puts it in a "signaled state" - it doesn't seem to make any claims about when waiters will actually get that signal, so I'd like to be sure.

A: 

I'm pretty sure it's OK.

SLaks
+1  A: 

It is not okay. You are getting lucky here because you only started two threads; on a dual-core machine they'll both start running immediately when you call Set. Try this instead and watch it bomb:

    static void Main(string[] args) {
        ManualResetEvent flag = new ManualResetEvent(false);
        for (int ix = 0; ix < 10; ++ix) {
            ThreadPool.QueueUserWorkItem(s => {
                flag.WaitOne();
                Console.WriteLine("Work Item Executed");
            });
        }
        Thread.Sleep(1000);
        flag.Set();
        flag.Close();
        Console.WriteLine("Finished");
        Console.ReadLine();
    }

Your original code will fail similarly on an old machine or your current machine when it is very busy with other tasks.

Hans Passant
This is completely correct, but it's bombing because of the test code, not because of the behaviour of `Set`. In this case, the event is being closed before some of the threads have even *started* running. Increasing the sleep timeout to 5 seconds makes this run fine on a quad core. I can prevent the above scenario by serializing access to the event and nulling it immediately after closing; I'm primarily concerned with threads that are already waiting on the wait handle.
Aaronaught
There is no guarantee whatsoever that a thread will actually start running after an arbitrarily long sleep. Sleeping longer merely reduces the risk of an ObjectDisposedException; it cannot eliminate it. These kinds of timing dependencies are the ultimate nail in the coffin for anybody making assumptions about threading.
Hans Passant
Of course I would not use `Thread.Sleep` in production code - the actual implementation used various sync primitives. But there did turn out to be a subtle race condition, which has now been fixed. Thanks for your input; +1 for being the first to call this out.
Aaronaught
A: 

My take is that there is a race condition. Having written event objects based on condition variables, you get code like this:

mutex.lock();
while (!signalled)
    condition_variable.wait(mutex);
mutex.unlock();

So while the event may be signalled, the code waiting on the event may still need access to parts of the event.
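That waiting code can be sketched in C# with Monitor (a hypothetical illustration of the condition-variable pattern above, not how ManualResetEvent is actually implemented):

```csharp
using System.Threading;

// Hypothetical event built on a lock + condition variable, mirroring the
// pseudocode above. Illustrative only; ManualResetEvent wraps an OS handle.
public class MonitorEvent
{
    private readonly object m_lock = new object();
    private bool m_signalled;

    public void Set()
    {
        lock (m_lock)                  // mutex.lock()
        {
            m_signalled = true;
            Monitor.PulseAll(m_lock);  // wake all waiters
        }                              // mutex.unlock()
    }

    public void WaitOne()
    {
        lock (m_lock)
        {
            while (!m_signalled)       // while (!signalled)
                Monitor.Wait(m_lock);  //     condition_variable.wait(mutex)
        }
    }
}
```

Note that a waiter woken by PulseAll still has to re-acquire m_lock inside Wait before it returns, so it touches the event's internals *after* Set has already completed; that is exactly the window where a premature Close could bite.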

According to the documentation on Close, this only releases unmanaged resources. So if the event only uses managed resources, you may get lucky. But that could change in the future, so I would err on the side of caution and not close the event until you know it is no longer being used.

R Samuel Klatchko
+3  A: 

When you signal with ManualResetEvent.Set, you are guaranteed that all the threads that are waiting on that event (i.e. blocked on flag.WaitOne) will be signaled before control returns to the caller.

Of course there are circumstances when you might set the flag and a thread doesn't see it because it's doing some work before it checks the flag (or, as nobugs suggested, if you're creating multiple threads):

ThreadPool.QueueUserWorkItem(s =>
{
    QueryDB();
    flag.WaitOne();
    Console.WriteLine("Work Item 1 Executed");
});

There is contention on the flag, and now you can create undefined behavior when you close it. Your flag is a shared resource between your threads; you should create a countdown latch on which every thread signals upon completion. This will eliminate the contention on your flag.

public class CountdownLatch
{
    private int m_remain;
    private EventWaitHandle m_event;

    public CountdownLatch(int count)
    {
        Reset(count);
    }

    public void Reset(int count)
    {
        // Note: only call Reset when no threads are waiting on the latch,
        // since it replaces the underlying event.
        if (count < 0)
            throw new ArgumentOutOfRangeException("count");
        m_remain = count;
        m_event = new ManualResetEvent(false);
        if (m_remain == 0)
        {
            m_event.Set();
        }
    }

    public void Signal()
    {
        // The last thread to signal also sets the event.
        if (Interlocked.Decrement(ref m_remain) == 0)
            m_event.Set();
    }

    public void Wait()
    {
        m_event.WaitOne();
    }
}
  1. Each thread signals on the countdown latch.
  2. Your main thread waits on the countdown latch.
  3. The main thread cleans up after the countdown latch signals.
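Applied to the original test code, the three steps above look roughly like this (a sketch, using the CountdownLatch class from this answer):

```csharp
using System;
using System.Threading;

static void Main(string[] args)
{
    ManualResetEvent flag = new ManualResetEvent(false);
    CountdownLatch latch = new CountdownLatch(2); // one count per worker

    ThreadPool.QueueUserWorkItem(s =>
    {
        flag.WaitOne();
        Console.WriteLine("Work Item 1 Executed");
        latch.Signal(); // step 1: signal the latch after we're done with the flag
    });
    ThreadPool.QueueUserWorkItem(s =>
    {
        flag.WaitOne();
        Console.WriteLine("Work Item 2 Executed");
        latch.Signal();
    });

    flag.Set();
    latch.Wait();   // step 2: main thread waits until both workers have signaled
    flag.Close();   // step 3: safe to clean up; no thread can still be using the flag
    Console.WriteLine("Finished");
}
```

Because each worker signals the latch only after it has returned from flag.WaitOne, no thread can still be inside a wait on the flag when Close runs, and the Thread.Sleep is no longer needed.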

Ultimately, the sleep at the end is NOT a safe way to take care of your problems; instead, you should design your program so it is 100% safe in a multithreaded environment.

UPDATE: Single Producer/Multiple Consumers
The presumption here is that your producer knows how many consumers will be created. After you create all your consumers, you reset the CountdownLatch with the given number of consumers:

// In the Producer
ManualResetEvent flag = new ManualResetEvent(false);
CountdownLatch countdown = new CountdownLatch(0);
int numConsumers = 0;
while (hasMoreWork)
{
    Consumer consumer = new Consumer(countdown, flag);
    // Create a new thread with each consumer
    numConsumers++;
}
countdown.Reset(numConsumers);
flag.Set();
countdown.Wait();  // your producer waits for all of the consumers to finish
flag.Close();      // cleanup
Lirik
This looks like it might be a better option in concept; I'm having trouble understanding how it applies to the situation of a single producer with multiple consumers (this seems geared toward multiple producers, single consumer). Can you explain further how this latch would be used in that scenario?
Aaronaught
I've updated my answer to reflect the single producer/multiple consumer situation.
Lirik
OK, got it. Have to use the event *and* the latch. Unfortunately the "producer" in this case will have no idea how many consumers there will be, the reality is a lot more complicated than the test code... but I might be able to adapt this somehow. I'll accept this one if it turns out to be the best answer after a little while.
Aaronaught
@Aaronaught if you don't know how many consumers will be created and you expect the system to be more complicated, then you can look at the ThreadPool example (you can wait for multiple ManualResetEvent items in WaitAll(array<WaitHandle>[])): http://msdn.microsoft.com/en-us/library/3dasc8as%28VS.80%29.aspx
Lirik
@Aaronaught I also updated the example to reflect a situation where you don't know how many consumers you will have... it should be completely safe now.
Lirik
The producer and consumer are actually quite distant and producer really has no way to track consumers - however, they do acquire a shared object which I have control over, and I think what I can do is have all threads, *including* the producer, latch at acquisition time and un-latch at the very end, and whoever uses it last will drain the counter causing the shared object to close its inner event. Going to test it out now. Thanks again!
Aaronaught
@Aaronaught it seems like you have your hands full :), but let me know if you're still having issues: I'll do my best to help you out. Good luck! :)
Lirik
Nah, I'm good, just ran it through a battery of unit tests to verify that nothing was leaking and/or failing, it seems fine. Once I got the concept of, er, let's call it inversion-of-concurrency-control, the pieces came together pretty quick. Cheers again, the point is yours.
Aaronaught
A: 

Looks like a risky pattern to me; even if it happens to be OK with the [current] implementation, you are trying to dispose of a resource which may still be in use.

It is like newing up an object and then blindly deleting it even before the consumers of that object are done.

Even otherwise, there is a problem here: the program may exit before the other threads ever get a chance to run. Thread pool threads are background threads.

Given that you have to wait for other threads anyway, you might as well clean up afterwards.

Fakrudeen