I'm reading Joe Duffy's post about Volatile reads and writes, and timeliness, and I'm trying to understand something about the last code sample in the post:

while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
m_state = 0;
while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
m_state = 0;
…

When the second CMPXCHG operation is executed, does it use a memory barrier to ensure that the value of *m_state* is indeed the latest value written to it? Or will it just use some value that is already stored in the processor's cache? (assuming *m_state* isn't declared as volatile).
If I understand correctly, if CMPXCHG doesn't use a memory barrier, then the whole lock acquisition procedure won't be fair, since it's highly likely that the thread that was the first to acquire the lock will be the one to acquire all of the following locks. Did I understand correctly, or am I missing something here?

Edit: The main question is actually whether a call to CompareExchange will cause a memory barrier before attempting to read m_state's value - in other words, whether assigning 0 will be visible to all of the threads when they try to call CompareExchange again.
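For reference, the snippet above can be wrapped in a minimal spin-lock class; the class and method names here are mine, not from Duffy's post, and the release is a plain write exactly as in the sample:

```csharp
using System.Threading;

// Hypothetical wrapper around the pattern in the question; names are mine.
class SpinLockSketch
{
    private int m_state; // 0 = free, 1 = held

    public void Enter()
    {
        // Atomically flip m_state from 0 to 1; spin until we win the race.
        while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
    }

    public void Exit()
    {
        // Plain write, exactly as in the sample from the post.
        m_state = 0;
    }
}
```

Under contention, Enter spins until some other thread's Exit sets m_state back to 0; whether that plain write becomes visible promptly is exactly the question being asked.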

+3  A: 

ref doesn't respect the usual volatile rules, especially in things like:

volatile bool myField;
...
RunMethod(ref myField);
...
void RunMethod(ref bool isDone) {
    while(!isDone) {} // silly example
}

Here, RunMethod is not guaranteed to spot external changes to isDone even though the underlying field (myField) is volatile; RunMethod doesn't know about it, so doesn't have the right code.
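One way to make a loop like that correct, assuming .NET 4.5 or later where System.Threading.Volatile is available, is to re-read the flag through Volatile.Read on every iteration; that prevents the JIT from hoisting the read out of the loop even though the volatile modifier is lost on the ref parameter:

```csharp
using System.Threading;

class Worker
{
    // Re-reading through Volatile.Read each iteration guarantees the loop
    // observes writes made by other threads; a plain read of a ref parameter
    // carries no such guarantee.
    public static void RunMethod(ref bool isDone)
    {
        while (!Volatile.Read(ref isDone)) { }
    }
}
```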

However! This should be a non-issue:

  • if you are using Interlocked, then use Interlocked for all access to the field
  • if you are using lock, then use lock for all access to the field

Follow those rules and it should work OK.
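The first rule can be sketched like this: if writes to a field go through Interlocked, reads should too. A read can be expressed as a no-op CompareExchange; this Counter class is my illustration, not code from the answer:

```csharp
using System.Threading;

class Counter
{
    private int m_value;

    public void Increment()
    {
        Interlocked.Increment(ref m_value); // atomic read-modify-write
    }

    public int Read()
    {
        // A "no-op" CompareExchange: it stores 0 only if the value was
        // already 0, so nothing ever changes, but the read gets the same
        // full-fence semantics as the writes.
        return Interlocked.CompareExchange(ref m_value, 0, 0);
    }
}
```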


Re the edit: yes, that behaviour is a critical part of Interlocked. To be honest, I don't know how it is implemented (memory barrier, etc. - note they are "InternalCall" methods, so I can't check ;-p) - but yes: updates from one thread will be immediately visible to all others, as long as they use the Interlocked methods (hence my point above).

Marc Gravell
I'm not asking about volatiles, only whether an Interlocked.Exchange is necessary when releasing the lock (or whether Thread.VolatileWrite would be more appropriate), and whether the only problem that could arise from this code is a tendency toward "unfairness" (as Joe mentions at the beginning of his post).
@Marc: the source of InternalCall methods can be viewed (for the most part) through the Shared Source CLI (SSCLI), aka Rotor. Interlocked.CompareExchange is explained in this interesting read: http://www.moserware.com/2008/09/how-do-locks-lock.html
Abel
+1  A: 

The interlocked functions are guaranteed to stall the bus and the CPU while they resolve the operands. The immediate consequence is that no thread switch, on your CPU or another one, will interrupt the interlocked function in the middle of its execution.

Since you're passing a reference to the C# function, the underlying assembler code will work with the address of the actual integer, so the variable access won't be optimized away. It will work exactly as expected.

edit: Here's a link that explains the behaviour of the asm instruction better: http://faydoc.tripod.com/cpu/cmpxchg.htm
As you can see, the bus is stalled by forcing a write cycle, so any other "threads" (read: other CPU cores) that try to use the bus at the same time are put in a waiting queue.

Blindy
Actually, the opposite is (partially) true. Interlocked does an atomic operation using the `cmpxchg` assembly instruction. It does not require putting the other threads in a wait state, hence it is very performant. See section "Inside InternalCall" on this page: http://www.moserware.com/2008/09/how-do-locks-lock.html
Abel
+1  A: 

MSDN says about the Win32 API functions: "Most of the interlocked functions provide full memory barriers on all Windows platforms"

(the exceptions are Interlocked functions with explicit Acquire / Release semantics)

From that I would conclude that the C# runtime's Interlocked makes the same guarantees, as they are documented with otherwise identical behavior (and they resolve to intrinsic CPU instructions on the platforms I know). Unfortunately, with MSDN's tendency to put up samples instead of documentation, it isn't spelled out explicitly.
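On that reading, the questioner's release step could get an explicit full barrier by doing the store through Interlocked.Exchange as well; a sketch under that assumption (the class name is mine):

```csharp
using System.Threading;

class FencedSpinLock
{
    private int m_state; // 0 = free, 1 = held

    public void Enter()
    {
        // Spin until we atomically flip m_state from 0 to 1.
        while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
    }

    public void Exit()
    {
        // Interlocked.Exchange performs a full-fence store, so the release
        // becomes visible to other processors rather than lingering in a
        // store buffer.
        Interlocked.Exchange(ref m_state, 0);
    }
}
```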

peterchen
+2  A: 

There seems to be some comparison with the Win32 API functions by the same name, but this thread is all about the C# Interlocked class. From its very description, it is guaranteed that its operations are atomic. I'm not sure how that translates to "full memory barriers" as mentioned in other answers here, but judge for yourself.

On uniprocessor systems, nothing special happens, there's just a single instruction:

FASTCALL_FUNC CompareExchangeUP,12
        _ASSERT_ALIGNED_4_X86 ecx
        mov     eax, [esp+4]    ; Comparand
        cmpxchg [ecx], edx
        retn    4               ; result in EAX
FASTCALL_ENDFUNC CompareExchangeUP

But on multiprocessor systems, a hardware lock is used to prevent other cores from accessing the data at the same time:

FASTCALL_FUNC CompareExchangeMP,12
        _ASSERT_ALIGNED_4_X86 ecx
        mov     eax, [esp+4]    ; Comparand
  lock  cmpxchg [ecx], edx
        retn    4               ; result in EAX
FASTCALL_ENDFUNC CompareExchangeMP

An interesting read on the subject, with a few wrong conclusions here and there but all-in-all excellent, is this blog post on CompareExchange.

Abel
+3  A: 

Any x86 instruction that has a lock prefix acts as a full memory barrier. As shown in Abel's answer, the Interlocked* APIs and CompareExchange use a lock-prefixed instruction such as lock cmpxchg, so it implies a memory fence.

Yes, Interlocked.CompareExchange uses a memory barrier.

Why? Because x86 processors make it so. From Intel's Volume 3A: System Programming Guide Part 1, Section 7.1.2.2:

For the P6 family processors, locked operations serialize all outstanding load and store operations (that is, wait for them to complete). This rule is also true for the Pentium 4 and Intel Xeon processors, with one exception. Load operations that reference weakly ordered memory types (such as the WC memory type) may not be serialized.

volatile has nothing to do with this discussion. This is about atomic operations; to support atomic operations, the x86 CPU guarantees that all previous loads and stores are completed.

minjang