views: 1484

answers: 3

When should I use volatile/Thread.MemoryBarrier() for thread safety?

+6  A: 

What's wrong with

private static readonly object syncObj = new object();
private static int counter;

public static int NextValue()
{
    lock (syncObj)
    {
        return counter++;
    }
}

?

This does all necessary locking, memory barriers, etc. for you. It's well understood and more readable than any custom synchronization code based on volatile and Thread.MemoryBarrier().


EDIT

I can't think of a scenario in which I'd use volatile or Thread.MemoryBarrier(). For example

private static volatile int counter;

public static int NextValue()
{
    return counter++;
}

is not equivalent to the code above and is not thread-safe (volatile doesn't make ++ magically become thread-safe).
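
If you actually want a lock-free counter, the tool is Interlocked.Increment (in System.Threading), not volatile; roughly something like this (a sketch using the same counter field as above):

private static int counter;

public static int NextValue()
{
    // Interlocked.Increment is an atomic read-modify-write with a full
    // memory barrier. It returns the *new* value, so subtract 1 to keep
    // the post-increment semantics of counter++ in the lock-based version.
    return Interlocked.Increment(ref counter) - 1;
}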

In a case like this:

private static volatile bool done;

void Thread1()
{
    while (!done)
    {
        // do work
    }
}

void Thread2()
{
    // do work
    done = true;
}

(which should work), I'd rather use a ManualResetEvent to signal when Thread2 is done.
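
Roughly like this (a sketch keeping the same loop shape; ManualResetEvent is in System.Threading):

private static readonly ManualResetEvent done = new ManualResetEvent(false);

void Thread1()
{
    // WaitOne(0) just polls the event without blocking,
    // so the loop keeps working until Thread2 signals.
    while (!done.WaitOne(0))
    {
        // do work
    }
}

void Thread2()
{
    // do work
    done.Set();
}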

dtb
Alex
.NET also has quite a strict memory model, such that a read or write can never be reordered to occur after a later write (writes have release semantics).
KeeperOfTheSoul
The ECMA spec has a fairly weak memory model though, so you might want to watch out for that; see http://www.bluebytesoftware.com/blog/2007/11/10/CLR20MemoryModel.aspx and http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx
KeeperOfTheSoul
It would be thread-safe if you only did reads or assignments, though (speaking of the second example's code), since those are atomic. But yeah, very limited use. The only place I can think of is bool fields.
Skurmedel
+5  A: 

Basically, if you're already using some other kind of synchronization to make your code thread-safe, then you don't need volatile or Thread.MemoryBarrier().

Most locking mechanisms (including lock) automatically imply a memory barrier, so that multiple processors see the correct information.

Volatile and MemoryBarrier are mostly used in lock-free scenarios where you're trying to avoid the performance penalty of locking.
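
For example, a typical lock-free pattern is publishing a result through a flag, where the barriers keep the data and the flag from being reordered around each other (an illustrative sketch, the names are made up):

int answer;
bool complete;

void Producer()
{
    answer = 123;
    Thread.MemoryBarrier();   // the write to answer can't move past this fence
    complete = true;
}

void Consumer()
{
    if (complete)
    {
        Thread.MemoryBarrier();   // the read of answer can't move before this fence
        Console.WriteLine(answer);
    }
}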

Edit: You should read this article by Joe Duffy about the CLR 2.0 memory model; it clarifies a lot of things. (If you're really interested, you should read ALL of Joe Duffy's articles; he is by far the most knowledgeable person on parallelism in .NET.)

Jorge Córdoba
+1, but just so it's absolutely clear: if you do what dtb does, volatile/Thread.MemoryBarrier is not needed. There are also situations where volatile alone on a member isn't enough.
Skurmedel
+5  A: 

You use volatile/Thread.MemoryBarrier() when you want to access a variable across threads without locking.

Variables that are atomic, like an int for example, are always read and written whole, in a single operation. That means you will never read half of the value from before another thread changed it and the other half from after the change. Because of that, you can safely read and write the value from different threads without synchronising.

However, the compiler may optimize away some reads and writes, which you can prevent with the volatile keyword. If you, for example, have a loop like this:

sum = 0;
foreach (int value in list) {
   sum += value;
}

The compiler may actually do the calculations in a processor register and only write the value to the sum variable after the loop. If you make the sum variable volatile, the compiler will instead generate code that reads and writes the variable for every change, so that its value is up to date throughout the loop.
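
That is, sum needs to be a field (volatile can only be applied to fields, not to local variables), declared something like:

private volatile int sum;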

Guffa
+1, but in which scenarios does it matter that the compiler will generate code that reads and writes the variable for every change?
dtb
That's mostly desired when threading, dtb; if not, one thread may get a different value than the other, and all kinds of crazy things could happen.
Skurmedel
Well, that or use a memory barrier.
Skurmedel
Sure, but I'm looking for a real-world scenario :) I can't come up with any except the last example in my answer.
dtb