The other week, I wrote a little thread class and a one-way message pipe to allow communication between threads (two pipes per thread, obviously, for bidirectional communication). Everything worked fine on my Athlon 64 X2, but I was wondering if I'd run into any problems if both threads were looking at the same variable and the local cached value for this variable on each core was out of sync.

I know the volatile keyword will force a variable to refresh from memory, but is there a way on multicore x86 processors to force the caches of all cores to synchronize? Is this something I need to worry about, or will volatile and proper use of lightweight locking mechanisms (I was using _InterlockedExchange to set my volatile pipe variables) handle all cases where I want to write "lock free" code for multicore x86 CPUs?

I'm already aware of and have used Critical Sections, Mutexes, Events, and so on. I'm mostly wondering if there are x86 intrinsics that I'm not aware of which force or can be used to enforce cache coherency.

Thanks.

+2  A: 

Volatile won't do it. In C++, volatile only restricts compiler optimizations, such as keeping a variable in a register instead of memory or removing reads of it entirely; it does nothing to synchronize the caches of different cores.

dsimcha
+4  A: 

You don't need to worry about cache coherency. The hardware will take care of that. What you may need to worry about is performance issues due to that cache coherency.

If core#1 writes to a variable and core#2 reads that same variable, the processor will make sure that the cache for core#2 is updated. Since an entire cache line (64 bytes) has to be read from memory, it will have some performance cost. In this case, it's unavoidable. This is the desired behavior.

The problem is that when you have multiple variables in the same cache line, the processor might spend extra time keeping the caches in sync even if the cores are reading/writing different variables within the same cache line. That cost can be avoided by making sure those variables are not in the same cache line.
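
This is the classic false-sharing problem. For illustration, a minimal sketch of the padding approach (the 64-byte line size and all of the names are assumptions for this example, not something from the answer above):

```cpp
// Hypothetical layout: keep two counters that are written by different
// cores out of the same 64-byte cache line by padding between them.
struct SharedCounters
{
    volatile long producerCount;    // written only by the thread on core #1
    char pad[64 - sizeof(long)];    // pushes the next member onto a new line
    volatile long consumerCount;    // written only by the thread on core #2
};
```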

Ferruccio
The "has to be read from memory" bit is misleading, as the data might be snooped from another cache.
ArtemGr
I hadn't thought of that. I assume there would still be a performance cost, but not of the same magnitude as a read from RAM.
Ferruccio
+2  A: 

You didn't specify which compiler you are using, but if you're on Windows, take a look at this article here. Also take a look at the available synchronization functions here. You might want to note that, in general, volatile is not enough to do what you want it to do, but under VC 2005 and 2008 there are non-standard semantics added to it that add implied memory barriers around reads and writes.
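
For a rough idea of what those synchronization functions look like in practice, here is a minimal, hypothetical sketch of handing a value from one thread to another with the Win32 Interlocked functions (the variable names and the busy-wait loop are mine, not from the linked article):

```cpp
#include <windows.h>

// Hypothetical example: publish a value from one thread to another using
// Interlocked operations, which issue a full memory barrier on x86,
// instead of relying on volatile alone.
volatile LONG g_dataReady = 0;   // 0 = empty, 1 = data available
int g_payload = 0;               // value being handed between threads

void Producer()
{
    g_payload = 42;                        // write the data first
    InterlockedExchange(&g_dataReady, 1);  // publish the flag with a barrier
}

void Consumer()
{
    // Spin until the flag is set, atomically clearing it when we see it.
    while (InterlockedCompareExchange(&g_dataReady, 0, 1) != 1)
        ;
    int value = g_payload;                 // visible because of the barrier
    (void)value;
}
```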

If you want things to be portable, you're going to have a much harder road ahead of you.

Eclipse
+7  A: 

volatile only forces your code to re-read the value; it cannot control where the value is read from. If the value was recently read by your code then it will probably be in cache, in which case volatile will force it to be re-read from cache, NOT from memory.

There are not a lot of cache coherency instructions in X86. There are prefetch instructions like prefetchnta. This tells the processor not to store the value in L1 cache, but it will still be in L2.

I suspect that x86 cores automatically invalidate the cache of other cores on the same chip whenever a value is written back to memory. You should read the documentation to see if that is the case.

If that is the case, then an mfence instruction will force execution to pause until the value has been written. (For example, you can do an mfence before releasing a mutex to ensure another process doesn't begin execution before the value hits the memory bus.)

Edit: There is a clflush instruction in SSE2 and up which, according to the NASM instruction reference, "invalidates the cache line that contains the linear address specified by the source operand from all levels of the processor cache hierarchy." This combined with an mfence should get the intended behavior.
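
For reference, compiler intrinsics for both instructions are exposed through <emmintrin.h> (SSE2) by MSVC, GCC and ICC. A minimal sketch of using them (the flag variable is hypothetical, and whether the flush is actually necessary on x86 is exactly what the comments below debate):

```cpp
#include <emmintrin.h>

volatile long g_flag = 0;

void PublishFlag()
{
    g_flag = 1;                    // the store we want other cores to see
    _mm_clflush((void*)&g_flag);   // evict the line from all cache levels
    _mm_mfence();                  // wait for prior stores and the flush
}
```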

SoapBox
What's the right order here then?
_InterlockedExchange(); // atomic write
_clflush() // sync caches
_mfence() // cause a wait until caches synced
Or do I need another _mfence() above the _clflush()? Thanks.
Furious Coder
AtomicWrite, Memory fence to wait for the AtomicWrite to hit the cache, CacheFlush, Memory Fence to make sure the next thing you write isn't visible until after the flush. This last fence may not be needed, I'm not sure.
SoapBox
Okay, cool, I'll try that. Of course I have to wrap the whole thing in a conditional to determine whether _clflush exists, and since the whole thing should be packed tightly, I'm guessing I should just have an inline function that decides what to do based on a runtime system info class. Thanks!
Furious Coder
-1 the whole point of 'volatile' is to force the CPU to ignore cached values. Maybe your version of 'volatile' is broken.
Casey
The answer is right. @SoapBox probably means the CPU cache - but what you talk about is caching a result into a register. In essence, volatile is for declaring "device register" variables - which tells the compiler "this doesn't read from memory, but from an external source" - and so the compiler will re-read it every time, since it can't be sure the read value will equal the value last written. If "read" for your implementation is defined to issue a "loadw", then surely it will sometimes read from the CPU cache - but that's fine from C's point of view.
Johannes Schaub - litb
A: 

Herb Sutter seemed to simply suggest that any two variables should reside on separate cache lines. He does this in his concurrent queue with padding between his locks and node pointers.

Edit: If you're using the Intel compiler or GCC, you can use the atomic builtins, which seem to do their best to preempt the cache when possible.
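
A minimal sketch of those builtins, assuming GCC 4.1 or later or the Intel compiler (the flag and payload names are made up for illustration):

```cpp
volatile long g_ready = 0;
int g_value = 0;

void Publish(int v)
{
    g_value = v;
    __sync_synchronize();                   // full memory barrier
    __sync_lock_test_and_set(&g_ready, 1);  // atomically set the flag
}

int Consume()
{
    // Spin until the flag is set, clearing it atomically (full barrier).
    while (!__sync_bool_compare_and_swap(&g_ready, 1, 0))
        ;
    return g_value;
}
```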

greyfade
Of course, fixed-length padding will likely fail on some later chip.
David Thornley
Of course, you can always choose a larger pad later on if the existing one is too small. It might make a cache miss more likely, but isn't that the point?
greyfade
+10  A: 

Cache coherence is guaranteed between cores due to the MESI protocol employed by x86 processors. You only need to worry about memory coherence when dealing with external hardware which may access memory while data is still sitting in the cores' caches. Doesn't look like that's your case here, though, since the text suggests you're programming in userland.

What about multi-processor systems?
SoapBox
MESI protocol is not used in x86, but MESIF and MOESI are.
osgx
x86 does handle coherence. But read up on memory *consistency*: it's not guaranteed that all writes (such as writing the data and releasing the lock, to name two) will be visible to all CPUs in the same order! That's what the memory fences are for.
Wim
+1  A: 

There's a series of articles here explaining modern memory architectures, covering the Intel Core 2 caches and many more modern architecture topics.

The articles are very readable and well illustrated. Enjoy!

davidnr
+1  A: 

There are several sub-questions in your question so I'll answer them to the best of my knowledge.

  1. There currently is no portable way of implementing lock-free interactions in C++. The C++0x proposal solves this by introducing the atomics library (a minimal sketch follows after this list).
  2. Volatile is not guaranteed to provide atomicity on a multicore and its implementation is vendor-specific.
  3. On the x86, you don't need to do anything special, except declare shared variables as volatile to prevent some compiler optimizations that may break multithreaded code. Volatile tells the compiler not to cache values.
  4. There are some algorithms (Dekker, for instance) that won't work even on an x86 with volatile variables.
  5. Unless you know for sure that passing access to data between threads is a major performance bottleneck in your program, stay away from lock-free solutions. Use passing data by value or locks.
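
Here is a rough sketch of the C++0x atomics mentioned in point 1, using the names that ended up in the standard <atomic> header; this is an illustration, not code from the answer above:

```cpp
#include <atomic>

std::atomic<bool> g_ready(false);
int g_payload = 0;

void Producer()
{
    g_payload = 42;
    g_ready.store(true, std::memory_order_release);  // publish
}

void Consumer()
{
    while (!g_ready.load(std::memory_order_acquire))
        ;                                            // spin until published
    int value = g_payload;                           // guaranteed to see 42
    (void)value;
}
```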
Bartosz Milewski
A: 

The following is a good article on using volatile with threaded programs.

Volatile Almost Useless for Multi-Threaded Programming.

Casey