Environment: .NET 3.5 SP1.

I've got two threads: UI thread and a background worker thread. The background worker thread periodically updates some fields in a shared object and the UI thread checks them. Nothing spectacular - just the progress, return values and thrown exceptions. Also the worker thread raises some events on the UI thread (via Control.BeginInvoke) when it changes these fields.

The worker thread ONLY WRITES these fields, and the UI thread ONLY READS them. They are not used for any other communication. For the sake of performance I'd like to avoid locking on the shared object or the individual properties. There will never be an invalid state in the shared object.

However I'm worried about things like processor caches and compiler optimizations. How can I avoid the situation when an updated value is not visible in the event handler on the UI thread? Will adding volatile to all fields be enough?
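For reference, a minimal sketch of what "adding volatile to all fields" would look like (the class and field names are hypothetical). Note that C# only permits `volatile` on certain field types — `int`, `bool`, references and a few others — but not on `long`, `double`, or most struct types:

    // Hypothetical shared-state object: written only by the worker
    // thread, read only by the UI thread.
    class WorkerStatus
    {
        public volatile int ProgressPercent;
        public volatile bool Completed;
        // Reference assignments are atomic; volatile adds the
        // visibility/ordering guarantees across threads.
        public volatile Exception Error;
    }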

A: 

It is far, far better to use established multithreading guidelines. Producer/consumer collections are available in .NET 4.0; these are also available for .NET 3.5 if you include references to the Rx library.

Lock-free code is almost impossible to get right. If you don't want to use the producer/consumer queue, then use a lock.
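For comparison, the lock-based version this answer recommends is very short (names hypothetical) — one private sync object and a `lock` in each accessor:

    class WorkerStatus
    {
        private readonly object _sync = new object();
        private int _progressPercent;

        public int ProgressPercent
        {
            // The lock provides both mutual exclusion and the implicit
            // memory barriers that make the write visible to the reader.
            get { lock (_sync) return _progressPercent; }
            set { lock (_sync) _progressPercent = value; }
        }
    }

On uncontended locks the cost is a few tens of nanoseconds, which is rarely worth optimizing away for a progress field read by a UI thread.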

If you insist on going down the path of pain, then yes, volatile will enable any thread reading that variable to get the last written value.

Stephen Cleary
Well, as long as just one thread is just writing and the other just reading - what can go wrong? :P
Vilx-
Are you aware that [the double-checked locking idiom is broken](http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html)? If that is obscure, then don't write lock-free code.
Stephen Cleary
Not to be picky or anything, but isn't that article for Java?
Vilx-
Yes. But similar arguments apply for C#. Double-checked locking *is* broken for the .NET CLR.
Stephen Cleary
Double-checked locking should be okay in .NET as long as the `volatile` keyword is used. The reason being that in .NET a volatile read produces an acquire-fence and a volatile write produces a release-fence. That means when you assign the object instance to the variable all writes necessary to initialize the instance must be committed first. On the opposite end, the read of the variable has to be completed before the instance members. So basically there is no way a thread can perceive the instance in a half-baked state.
Brian Gideon
Double-checked locking is OK when `volatile` is used. The problem is that the double-checked locking idiom does not use `volatile`, and is broken. (Actually, it's only broken for the *standard* CLR; Microsoft's implementation allows it to work, even on x64). But why all this discussion? Just use a `lock` and sleep well at night *knowing* your code is correct.
Stephen Cleary
Oh, I totally agree. I mean, really, how often do we use the singleton pattern anyway (as opposed to static classes) in addition to having a requirement that it be so fast that taking one short lived lock isn't good enough? To answer the question from personal experience...never :)
Brian Gideon
Indeed. A few years ago I went through all my personal libraries and removed all lock-free code. I'll use the .NET lock-free and lock-minimizing collections / `Lazy<T>` / synchronization primitives, but I just don't write lock-free code myself anymore. Ever. :)
Stephen Cleary
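A sketch of the double-checked locking variant discussed in the comments above, with the `volatile` keyword that makes it correct under the .NET memory model:

    class Singleton
    {
        private static volatile Singleton _instance;
        private static readonly object _sync = new object();

        public static Singleton Instance
        {
            get
            {
                if (_instance == null)          // first check, no lock
                {
                    lock (_sync)
                    {
                        if (_instance == null)  // second check, under lock
                            _instance = new Singleton();
                    }
                }
                return _instance;
            }
        }
    }

As the comments note, a plain `lock` (or `Lazy<T>` in .NET 4) is usually the better choice than hand-written double-checked locking.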
A: 

Generally speaking, I advise against advanced synchronization mechanisms because they are notoriously difficult to get right, but in this case a call to Thread.MemoryBarrier could be acceptable. Of course, that assumes there are no atomicity requirements and the shared object can never be in a half-baked state. This might actually be easier than sprinkling everything with volatile.

object shared;

void WorkerThread()
{
  MakeChangesToSharedObject(shared);
  Thread.MemoryBarrier(); // commit the changes to main memory
}

void UIThread()
{
  Thread.MemoryBarrier(); // ensure updates are read from main memory
  UseSharedObject(shared);
}

Perhaps designing the code so that the shared object is immutable would be better though. Then all you have to do is swap out the shared object reference in one simple and quick atomic operation.

volatile SharedData shared;

void WorkerThread()
{
  // Safe: the volatile write publishes a fully initialized object.
  shared = GetNewSharedObject();
}

void UIThread()
{
  // Safe: the volatile read is guaranteed to see the published object.
  // Copy the reference to a local so all reads use the same snapshot.
  SharedData snapshot = shared;
  var value = snapshot.SomeProperty;
}
Brian Gideon
There aren't that many fields, so sprinkling with `volatile` is quite acceptable. And how complicated can it be if one thread is only writing and the other only reading?
Vilx-
You will probably be okay with marking everything as `volatile` then. And you are right, one reader and one writer simplifies things **a lot**.
Brian Gideon
A: 

You're okay, no need to worry. A memory barrier is required to flush any pending writes to main memory, and there's an implicit one in every lock statement. Control.Invoke/BeginInvoke() has to take a lock to protect the list of pending delegates, so that's sufficient.

The volatile requirement is a harder one, mostly because its exact semantics are so poorly documented. On x86/x64 hardware it merely prevents the JIT compiler from caching the value of a variable in a CPU register. That is not an issue in your case: the delegate target points to a method, variables are not cached across method calls unless the methods are inlined, and a delegate target cannot be inlined.

Hans Passant