How does InterlockedIncrement work?

Is the concern only on multi-processor systems?

What does it do, disable interrupts across all processors?

+3  A: 

InterlockedIncrement works by using machine-level instructions to increment a value and store it in an atomic manner, meaning no other operation can touch the value or its storage location while the increment is in progress.

It is of concern any time multiple threads or processes are accessing the same value: a shared variable in a multi-threaded application, or shared memory used by multiple processes.

I don't believe the instruction disables interrupts, at least not on x86-type hardware.

jcopenha
On the contrary, it doesn't matter what system you are running on: as long as your application is multi-threaded, you need to worry. The inverse isn't true, though; in a single-threaded application running on a multi-processor system, there's still no need to worry.
Matthew Scharley
Changed the answer a bit to mention multithreading, and shared memory in a multi-process context.
jcopenha
+1  A: 

jcopenha is correct, but I just wanted to answer "Is the concern only on multi-processor systems?"

I don't know which Interlocked you are using. If you mean the C++ one, then on a single core you "should be" safe doing "++x", provided x is not wider than your "bitness". I write "should be" because the compiler can optimise it in strange ways, for example merging two "++x" operations into a single "add ...,2" in a completely different place, and some of your multithreading logic may fail because of that. On a multicore system, even ++x on a 32-bit x can have weird effects: the instruction may compile to "inc mem" or "lock inc mem", and when you increment one memory address from two CPUs without the lock, you get lost updates.

If the "bitness" of your x is higher than your CPU's, then you need Interlocked in any multithreaded code, single- or multicore, because the increment has to be compiled into at least two instructions anyway and a context switch might happen between them. (This can be worked around with RCU, though.)

In .NET it's basically the same story, except that you have an overloaded Increment instead of separate Interlocked... and Interlocked...64 variants.

So yeah - whenever you write multithreaded code (even on a single core), just use the interlocked increments on shared memory. It's not worth trying to be "smarter" than the machine here.

viraptor
I get the "should be" part for sure, as you never know if it will be run on a multicore system. Also, x++ will resolve to different machine code depending on where x is located and the capabilities of the CPU. I was thinking of the case of incrementing a register in place, but that is only one possible way the compiler could do it. It could end up as read mem, increment, write mem, which is surely not atomic. I'm trying to translate this back to the embedded 8-bit AVR and ARM world I'm used to.
JeffV