views: 277

answers: 5

Hi,

I have a multi-reader/writer (R/W) lock class that keeps read, write, pending-read, and pending-write counters. A mutex guards them against concurrent access from multiple threads.

My question is: do the counters still need to be declared volatile so that the compiler won't break them while optimizing?

Or does the compiler take into account that the counters are guarded by the mutex?

I understand that the mutex is a run-time mechanism for synchronization, while the "volatile" keyword is a compile-time indication to the compiler to do the right thing while optimizing.
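
For reference, the class looks roughly like this (a simplified sketch; pthread is used here only as an example, and the member names are just for illustration, not the real code):

#include <pthread.h>

class RWLock
{
public:
    RWLock() : readers_(0), writers_(0), pendingReaders_(0), pendingWriters_(0)
    {
        pthread_mutex_init(&guard_, 0);
    }
    ~RWLock() { pthread_mutex_destroy(&guard_); }

    void addPendingReader()
    {
        pthread_mutex_lock(&guard_);    // every counter access goes through this mutex
        ++pendingReaders_;
        pthread_mutex_unlock(&guard_);
    }
    // ... similar accessors for the other counters ...

private:
    pthread_mutex_t guard_;
    int readers_;            // do these need to be volatile?
    int writers_;
    int pendingReaders_;
    int pendingWriters_;
};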

Regards, -Jay.

+4  A: 

You still need the "volatile" keyword.

The mutexes prevent concurrent access to the counters.

"volatile" tells the compiler to actually read the counter from memory instead of caching it in a CPU register (which would not be updated by a concurrent thread).

Black
+6  A: 

volatile is used to inform the optimizer to always load the current value of the location, rather than load it into a register and assume that it won't change. This is most valuable when working with dual-ported memory locations or locations that can be updated in real time from sources external to the thread.
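
For example (a minimal sketch of that "externally updated" case - a flag written from a signal handler; this is separate from the question's mutex-protected counters):

#include <csignal>

volatile std::sig_atomic_t stop_requested = 0;   // written from the signal handler

void on_sigint(int)
{
    stop_requested = 1;
}

int main()
{
    std::signal(SIGINT, on_sigint);

    // Without volatile, the compiler could read stop_requested once,
    // keep it in a register, and spin here forever.
    while (!stop_requested)
    {
        // do work
    }
    return 0;
}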

The mutex is a run-time OS mechanism that the compiler really doesn't know anything about - so the optimizer wouldn't take that into account. It will prevent more than one thread from accessing the counters at one time, but the values of those counters are still subject to change even while the mutex is in effect.

So, you're marking the vars volatile because they can be externally modified, and not because they're inside a mutex guard.

Keep them volatile.

codefool
+5  A: 

While this may depend on the threading library you are using, my understanding is that any decent library will not require use of volatile.

In Pthreads, for example, use of a mutex will ensure that your data gets committed to memory correctly.
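
For example, a plain (non-volatile) counter protected like this is fine under Pthreads (a minimal sketch):

#include <pthread.h>

static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;    // plain int - no volatile

void increment()
{
    pthread_mutex_lock(&guard);     // acquire: we see other threads' earlier writes
    ++counter;
    pthread_mutex_unlock(&guard);   // release: our write is published before the unlock returns
}

int read_counter()
{
    pthread_mutex_lock(&guard);
    int value = counter;
    pthread_mutex_unlock(&guard);
    return value;
}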

EDIT: I hereby endorse tony's answer as being better than my own.

Steve S
Thanks, Steve, for the specific example. But there are other thread libraries (OpenThread, Boost, etc.) that I'm not sure handle this.
Jay D
Most libraries today will *have* to take care of it, because `volatile` does not guarantee correctness on multi-processor systems.
Steve S
btw, I'm almost certain Boost takes care of this. (but double-check the docs)
Steve S
+9  A: 

From Herb Sutter's article "Use Critical Sections (Preferably Locks) to Eliminate Races" (http://www.ddj.com/cpp/201804238):

So, for a reordering transformation to be valid, it must respect the program's critical sections by obeying the one key rule of critical sections: Code can't move out of a critical section. (It's always okay for code to move in.) We enforce this golden rule by requiring symmetric one-way fence semantics for the beginning and end of any critical section, illustrated by the arrows in Figure 1:

  • Entering a critical section is an acquire operation, or an implicit acquire fence: Code can never cross the fence upward, that is, move from an original location after the fence to execute before the fence. Code that appears before the fence in source code order, however, can happily cross the fence downward to execute later.
  • Exiting a critical section is a release operation, or an implicit release fence: This is just the inverse requirement that code can't cross the fence downward, only upward. It guarantees that any other thread that sees the final release write will also see all of the writes before it.

So for a compiler to produce correct code for a target platform, the correct acquire and release semantics must be followed when a critical section is entered and exited (the term critical section is used here in its generic sense, not necessarily in the Win32 sense of something protected by a CRITICAL_SECTION structure - the critical section can be protected by other synchronization objects). So you should not have to mark the shared variables as volatile as long as they are accessed only within protected critical sections.
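
Applied to the question's counters, the rule looks like this (a pthread-style sketch, purely illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared_count = 0;              // protected by the mutex; not volatile

void increment_shared(int* local_scratch)
{
    *local_scratch += 1;           // this write may legally sink *into* the critical section

    pthread_mutex_lock(&m);        // acquire: code below cannot be moved above this point
    ++shared_count;                // must stay inside the critical section
    pthread_mutex_unlock(&m);      // release: code above cannot be moved below this point

    // ++shared_count can never be moved outside the lock/unlock pair,
    // which is why it does not also need to be volatile.
}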

Michael Burr
+1 great link!!
Dan
+5  A: 

There are two basically unrelated items here that are always confused:

  • volatile
  • threads, locks, memory barriers, etc.

volatile is used to tell the compiler to produce code that reads the variable from memory, not from a register, and not to reorder the code around it. In general: don't optimize, don't take 'short-cuts'.

memory barriers (supplied by mutexes, locks, etc), as quoted from Herb Sutter in another answer, are for preventing the CPU from reordering read/write memory requests, regardless of how the compiler said to do it. ie don't optimize, don't take short cuts - at the CPU level.

Similar, but in fact very different things.

In your case, and in most cases of locking, the reason that volatile is NOT necessary is that function calls are being made for the sake of locking. i.e.:

Normal function calls affecting optimizations:

extern void library_func(); // from some external library

int x; // a global

int f()
{
   x = 2;
   library_func();
   return x; // x is reloaded because it may have changed
}

Unless the compiler can examine library_func() and determine that it doesn't touch x, it will re-read x on the return. And this is even WITHOUT volatile.

Threading:

int f(SomeObject & obj)
{
   int temp1;
   int temp2;
   int temp3;

   temp1 = obj.x;

   lock(obj.mutex); // really should use RAII
      temp2 = obj.x;
      temp3 = obj.x;
   unlock(obj.mutex);

   return temp1 + temp2 + temp3;
}

After reading obj.x for temp1, the compiler is going to re-read obj.x for temp2 - NOT because of the magic of locks - but because it is unsure whether lock() modified obj. You could probably set compiler flags to aggressively optimize (no-alias, etc) and thus not re-read x, but then a bunch of your code would probably start failing.

For temp3, the compiler (hopefully) won't re-read obj.x. If for some reason obj.x could change between temp2 and temp3, then you would use volatile (and your locking would be broken/useless).

Lastly, if your lock()/unlock() functions were somehow inlined, maybe the compiler could evaluate the code and see that obj.x doesn't get changed. But I guarantee one of two things here:

  • the inline code eventually calls some OS-level lock function (thus preventing evaluation), or
  • you call some asm memory-barrier instructions (i.e., ones wrapped in inline functions like __InterlockedCompareExchange) that your compiler will recognize and thus avoid reordering.
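
A minimal sketch of that second case, assuming GCC-style __sync builtins (the exact intrinsic names vary by compiler):

// A toy spinlock whose lock()/unlock() inline down to compiler-recognized
// atomic intrinsics. The intrinsics act as acquire/release barriers, so the
// compiler will not cache or reorder protected variables across them -
// no volatile needed on the data they protect.
class SpinLock
{
public:
    SpinLock() : flag_(0) {}

    void lock()
    {
        // atomically flip 0 -> 1; acts as an acquire barrier on success
        while (!__sync_bool_compare_and_swap(&flag_, 0, 1))
        {
            // spin
        }
    }

    void unlock()
    {
        __sync_lock_release(&flag_);   // release barrier; stores 0
    }

private:
    int flag_;   // 0 = unlocked, 1 = locked
};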

EDIT: P.S. I forgot to mention - for pthreads stuff, some compilers are marked as "POSIX compliant", which means, among other things, that they will recognize the pthread_ functions and not do bad optimizations around them. That is, even though the C++ standard doesn't mention threads yet, those compilers do (at least minimally).

So, short answer:

You don't need volatile.

tony