From the Wikipedia article on Read-Copy-Update:

The reason that it is safe to run the removal phase concurrently with readers is the semantics of modern CPUs guarantee that readers will see either the old or the new version of the data structure rather than a partially updated reference.

Is this true for all modern CPUs (ARM, x86, PPC, etc.)? Is it likely to change in the future? It seems awfully nice to never have to pay the cost of a locked load, as long as you don't mind possibly reading the old value again (which probably isn't an issue for many applications -- essentially any application that could use read-copy-update).

A: 

It's still not safe to assume your hardware will support unchecked updates.

If you're coding in something low-level (C/C++), use macros to wrap the basic operations. Then, if you're SURE a particular hardware configuration will work natively, you can always #define those operations to be trivial, just as if you hadn't protected yourself.
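For illustration, here's a minimal sketch of that macro approach, assuming GCC/Clang-style __atomic builtins for the portable fallback (the names KNOWN_SAFE_HARDWARE, LOAD_SHARED, and STORE_SHARED are made up for this example):

#ifdef KNOWN_SAFE_HARDWARE
/* On hardware where aligned pointer loads/stores are known to be atomic,
   the wrappers compile down to plain accesses. */
#define LOAD_SHARED(p)      (*(p))
#define STORE_SHARED(p, v)  (*(p) = (v))
#else
/* Portable default: explicit atomics, correct everywhere. */
#define LOAD_SHARED(p)      __atomic_load_n((p), __ATOMIC_CONSUME)
#define STORE_SHARED(p, v)  __atomic_store_n((p), (v), __ATOMIC_RELEASE)
#endif

This is essentially the pattern the Linux kernel uses for RCU with rcu_dereference() and rcu_assign_pointer().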

But generally it's better to be right than fast.

Jason Cohen
+1  A: 

Well, if you use primitive types whose size is <= the data bus width and the data is properly aligned, then it is true. So it depends more on your code than on a modern CPU.
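For example, here's a sketch of the alignment caveat, assuming GCC on x86-64 (the struct names are made up):

struct good {
    void *next;   /* naturally aligned: a store to this field is a single
                     word-sized write, so readers see old or new, never torn */
};

struct bad {
    char tag;
    void *next;   /* at offset 1 when packed: a store may straddle a cache
                     line, and a concurrent reader can observe a torn,
                     partially written pointer */
} __attribute__((packed));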

You can assume that this will continue to hold, because it would be impossible to write a garbage collector if pointers could be seen partially updated. And putting lock prefixes around every single pointer access would kill performance completely.

So yes, the article is correct (again assuming size and alignment).

Lothar
I realize that in general it's unrealistic for arbitrarily sized objects to be updated without at some point being partially written, but why is the determining factor the data bus size? It's been a while since I took architecture ;)
Joseph Garvin