views:

2723

answers:

10

In C, with Pthreads as the threading library: do variables/structures that are shared between threads need to be declared as volatile? Assume that they may or may not be protected by a lock (or perhaps by barriers).

Does the POSIX pthread standard have any say about this? Is it compiler-dependent, or neither?

Edit to add: Thanks for the great answers. But what if you're not using locks; what if you're using barriers for example? Or code that uses primitives such as compare-and-swap to directly and atomically modify a shared variable...

+2  A: 

Volatile means that we have to go to memory to get or set this value. If you don't set volatile, the compiled code might store the data in a register for a long time.

What this means is that you should mark variables you share between threads as volatile, so that you don't get situations where one thread starts modifying a value but doesn't write its result back to memory before a second thread comes along and reads the old value.

Volatile is a compiler hint that disables certain optimizations. The compiler's output assembly might have been safe without it, but you should always use it for shared values.

This is especially important if you are NOT using the expensive thread-sync objects provided by your system - you might, for example, have a data structure you can keep valid with a series of atomic changes. Many stacks that do not allocate memory are examples of such data structures: you add a value to the stack and then move the end pointer, or you move the end pointer and then remove a value from the stack. When implementing such a structure, volatile becomes crucial to ensure that your atomic instructions are actually atomic.

Tom Leys
`volatile` doesn't guarantee atomicity, though. It's for indicating something outside the program is modifying the contents of the variable.
Allen
Even with volatile, something as simple as "a = a + 1;" is not atomic. It just means that the compiler will re-load 'a' for this operation, and store it back immediately. There is still a window in which another thread can race for it.
bdonlan
+3  A: 

In my experience, no; you just have to properly mutex yourself when you write to those values, or structure your program such that the threads will stop before they need to access data that depends on another thread's actions. My project, x264, uses this method; threads share an enormous amount of data but the vast majority of it doesn't need mutexes because it's either read-only or a thread will wait for the data to become available and finalized before it needs to access it.

Now, if you have many threads that are all heavily interleaved in their operations (they depend on each other's output on a very fine-grained level), this may be a lot harder--in fact, in such a case I'd consider revisiting the threading model to see if it can possibly be done more cleanly with more separation between threads.

Dark Shikari
+12  A: 

As long as you are using locks to control access to the variable, you do not need volatile on it. In fact, if you're putting volatile on any variable you're probably already wrong.

http://softwareblogs.intel.com/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/

Don Neufeld
Thanks for the answer; but what about scenarios where you're not using locks (refer to the edited question for an example).
fuad
I think this is actually wrong, see my reply below. The problem is that the compiler can do anything it likes to keep a value in local registers in a thread unless it is marked volatile. So volatile is needed to make sure data is written back to memory.
jakobengblom2
If you are not using locks, you almost certainly need to use explicit memory barriers. Note that volatile is NOT a memory barrier as it does not affect any other loads and stores other than those to the volatile variable itself. It is also often a pessimization.
bdonlan
+1  A: 

Volatile would only be useful if you need absolutely no delay between when one thread writes something and another thread reads it. Without some sort of lock, though, you have no idea of when the other thread wrote the data, only that it's the most recent possible value.

For simple values (int and float in their various sizes) a mutex might be overkill if you don't need an explicit synch point. If you don't use a mutex or lock of some sort, you should declare the variable volatile. If you use a mutex you're all set.

For complicated types, you must use a mutex. Operations on them are non-atomic, so you could read a half-changed version without a mutex.

Branan
+3  A: 

I think one very important property of volatile is that it makes the variable be written to memory when modified, and reread from memory each time it is accessed. The other answers here mix volatile and synchronization, and it is clear from some of the other answers that volatile is NOT a sync primitive (credit where credit is due).

But unless you use volatile, the compiler is free to cache the shared data in a register for any length of time... if you want your data to be predictably written to actual memory and not just cached in a register at the compiler's discretion, you will need to mark it as volatile. Alternatively, if you only access the shared data after you have left the function modifying it, you might be fine. But I would suggest not relying on blind luck to make sure that values are written back from registers to memory.

Especially on register-rich machines (i.e., not x86), variables can live for quite long periods in registers, and a good compiler can cache even parts of structures or entire structures in registers. So you should use volatile, but for performance, also copy values to local variables for computation and then do an explicit write-back. Essentially, using volatile efficiently means doing a bit of load-store thinking in your C code.

In any case, you positively have to use some kind of OS-level provided sync mechanism to create a correct program.

For an example of the weakness of volatile, see my Dekker's algorithm example at http://jakob.engbloms.se/archives/65, which demonstrates pretty well that volatile does not work to synchronize.

jakobengblom2
Keeping variables in registers for a long time is exactly the point of the compiler's optimizer. Using volatile completely negates that. Note that in GCC (and probably most compilers) function calls clobber memory, meaning that if you write to a non-local variable and then make a function call, the compiler is not allowed to move the write past the call - which is seemingly what you intend to use volatile for anyway. That isn't what volatile is for...
Greg Rogers
Volatile is for marking a variable that could change spontaneously (embedded systems mapping hardware entities to memory locations is one example of this). If you are using exclusive locking an ordinary variable can't change spontaneously. Herb Sutter's article on this is pretty good: http://www.ddj.com/hpc-high-performance-computing/212701484
Greg Rogers
+2  A: 

NO.

Volatile is only required when reading a memory location that can change independently of the CPU's read/write commands. In a threaded program, the CPU is in full control of reads and writes to memory for each thread; therefore, the CPU's caches can transparently hold data structures in all cases of the program's instructions.

The primary usage for volatile is for accessing memory-mapped I/O. In this case, the underlying device can change the value of a memory location independently from CPU. If you do not use volatile under this condition, the CPU may use a previously cached memory value, instead of reading the newly updated value.

Casey
+5  A: 

The answer is absolutely, unequivocally, NO. You do not need to use 'volatile' in addition to proper synchronization primitives. Everything that needs to be done is done by these primitives.

The use of 'volatile' is neither necessary nor sufficient. It's not necessary because the proper synchronization primitives are sufficient. It's not sufficient because it only disables some optimizations, not all of the ones that might bite you. For example, it does not guarantee either atomicity or visibility on another CPU.

"But unless you use volatile, the compiler is free to cache the shared data in a register for any length of time... if you want your data to be written to be predictably written to actual memory and not just cached in a register by the compiler at its discretion, you will need to mark it as volatile. Alternatively, if you only access the shared data after you have left a function modifying it, you might be fine. But I would suggest not relying on blind luck to make sure that values are written back from registers to memory."

Right, but even if you do use volatile, the CPU is free to cache the shared data in a write posting buffer for any length of time. The set of optimizations that can bite you is not precisely the same as the set of optimizations that 'volatile' disables. So if you use 'volatile', you are relying on blind luck.

On the other hand, if you use synchronization primitives with defined multi-threaded semantics, you are guaranteed that things will work. As a bonus, you don't take the huge performance hit of 'volatile'. So why not do things that way?

A: 

I don't get it. How do sync primitives force the compiler to reload the value of a variable? Why wouldn't it just use the latest copy it already has?

Volatile means that the variable is updated outside the scope of the code, and thus the compiler cannot assume it knows the current value. Even memory barriers are useless, as the compiler, which is oblivious to memory barriers (right?), might still use a cached value.

jjj
A: 

Some people obviously are assuming that the compiler treats the synchronization calls as memory barriers. "Casey" is assuming there is exactly one CPU.

If the sync primitives are external functions and the symbols in question are visible outside the compilation unit (global names, exported pointer, exported function that may modify them) then the compiler will treat them -- or any other external function call -- as a memory fence with respect to all externally visible objects.

Otherwise, you are on your own. And volatile may be the best tool available for making the compiler produce correct, fast code. It generally won't be portable, though: when you need volatile and what it actually does for you depend a lot on the system and compiler.

Stephen Nuchia
There are several interacting factors, none of which are addressed by the standard. 1) You need to influence the scheduler and/or delay the progress of other threads explicitly. System mutex primitives, spin waits, and bus-locking instructions all do this at different levels.
Stephen Nuchia
2) You need to influence the hardware's memory hierarchy, from registers to RAM and maybe all the way to backing store, to ensure the coherence properties you need are met. 3) You need to ensure atomicity is present when you are depending on it; this is closely related to 2) but also encompasses alignment and potential aliasing issues.
Stephen Nuchia
4) You need to influence the generation of load/store code, including implicit loads/stores on CISC machines, so that the compiler does not defeat the intent of your textually-correct, concurrency-aware code. A proper mutex library, properly integrated into the compiler and properly used, will do all of these except atomicity, and if you use it in a way that does not depend on atomicity you're golden. If you are using any other combination of tools and techniques, you are on your own.
Stephen Nuchia
To jjj: whether the compiler is oblivious to the memory barrier depends on the barrier you use and the compiler. I see a lot of old C/C++ code with inline asm cpuid instructions used as ad-hoc memory barriers. It works because the Microsoft compiler is conservative about inline assembly.
Stephen Nuchia