Hello

What is the cost of an atomic operation (compare-and-swap, or an atomic add/decrement)? How many cycles does it consume? Will it pause other processors on SMP or NUMA, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU?

What effects will be on the cache?

I'm interested in modern, popular CPUs: x86, x86_64, PowerPC, SPARC, Itanium.

Thanks.

A: 

On bus-based SMP, the atomic prefix LOCK asserts (turns on) a bus wire signal, LOCK#. It prohibits other CPUs/devices on the bus from using it.

Pentium Pro & Pentium II book, pages 244-246: http://books.google.com/books?id=3gDmyIYvFH4C&pg=PA245&dq=lock+instruction+pentium&lr=&ei=_E61S5ehLI78zQSzrqwI&cd=1#v=onepage&q=lock%20instruction%20pentium&f=false

Locked instructions are serializing, synchronizing operations. ... /regarding out-of-order execution/ a locked RMW (read-modify-write, i.e. atomic in itself) instruction ensures that the processor will execute all instructions before the locked instruction prior to executing it. /regarding writes not yet flushed/ it forces all posted writes within the processor to be flushed to external memory before executing the next instruction.

/regarding SMP/ the semaphore is in cache in the S state... issuing a read-and-invalidate transaction for 0 bytes of data (this is a kill /of shared copies of the cacheline in adjacent CPUs/)
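To make the above concrete, here is a minimal sketch (not from the original answer) of C11 atomic operations that compilers such as GCC and Clang typically lower to LOCK-prefixed instructions on x86/x86_64; the helper names are my own:

```c
/* Sketch: C11 atomics that usually become LOCK-prefixed instructions
   on x86/x86_64 (e.g. `lock xadd`, `lock cmpxchg`). */
#include <stdatomic.h>

/* Atomic add; typically compiled to `lock xadd` on x86.
   Returns the value held before the increment. */
static inline int fetch_add_one(atomic_int *p) {
    return atomic_fetch_add(p, 1);
}

/* Compare-and-swap; typically compiled to `lock cmpxchg` on x86.
   Returns nonzero on success; on failure, *expected is updated
   to the value actually found. */
static inline int cas(atomic_int *p, int *expected, int desired) {
    return atomic_compare_exchange_strong(p, expected, desired);
}
```

You can verify the generated instructions by compiling with `-S` and looking for the `lock` prefix in the assembly output.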

osgx
+1  A: 

I have looked for actual data for the past few days and found nothing. However, I did some research, which compares the cost of atomic ops with the cost of cache misses.

The cost of the x86 LOCK prefix, or CAS, before the Pentium Pro (as described in the doc), is a memory access (like a cache miss), plus stopping memory operations by other processors, plus any contention with other processors trying to LOCK the bus. Since the Pentium Pro, however, for Writeback (i.e. cacheable) memory (all the memory an application deals with, unless it talks directly to hardware), instead of blocking all memory operations, only the relevant cacheline is blocked (based on the link posted above).

Actually, the CAS case can be more complicated, as explained on this page, which gives no timings but an insightful description by a trustworthy engineer.

Before going into too much detail, I'll say that a LOCKed operation costs one cache miss plus possible contention with other processors on the same cacheline, while CAS plus the preceding load (which is almost always required, except on mutexes, where you always CAS 0 and 1) can cost two cache misses.
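The "load + CAS" pattern mentioned above can be sketched as a retry loop; this is an illustrative snippet of mine, not code from the answer, and the comments about cache misses summarize the MESI reasoning that follows:

```c
#include <stdatomic.h>

/* Sketch of the load + CAS pattern: the initial load may miss
   (cacheline fetched in Shared state), and the CAS may miss again
   (cacheline must be upgraded to Exclusive), hence up to two cache
   misses per update. Returns the value held before the increment. */
static inline int atomic_increment_via_cas(atomic_int *p) {
    int old = atomic_load(p);  /* possible miss #1: line in Shared state */
    /* A weak CAS may fail spuriously or due to a racing writer; on
       failure it refreshes `old` with the current value, so we retry. */
    while (!atomic_compare_exchange_weak(p, &old, old + 1)) {
        /* retry with the refreshed value of `old` */
    }
    return old;                /* possible miss #2: line upgraded to Exclusive */
}
```

By contrast, a single LOCKed increment such as `atomic_fetch_add` requests the cacheline directly in Exclusive state, which is why it can cost only one miss.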

He explains that a load + CAS on a single location can actually cost two cache misses, just like Load-Linked/Store-Conditional (see there for the latter). His explanation relies on the MESI cache coherence protocol, which uses four states for a cacheline: M(odified), S(hared), E(xclusive), I(nvalid) (hence the name MESI), explained below where needed. The scenario is the following:

  • the LOAD causes a cache miss: the relevant cacheline is loaded from memory in Shared state (i.e. other processors are still allowed to keep that cacheline in their caches; no changes are allowed in this state). Possible cost: 1 cache miss. This is skipped if the cacheline is already in cache, i.e. in the Shared, Exclusive or Modified state.
  • the program calculates the new values to store,
  • and it calls CAS.
    • It has to avoid concurrent modification, so it must remove copies of the cacheline from the cache of other CPUs, to move the cacheline to the Exclusive state. Possible cost: 1 cache miss. This is not needed if it is already exclusively owned, i.e. in the Exclusive or Modified state. In both states, no other CPUs hold the cacheline, but in the Exclusive state it has not been modified (yet).
    • After this communication, the variable is modified in cache, and will be written to main memory according to usual algorithms, I guess.
    • Other processors trying to read or modify that variable will first have to obtain that cacheline in Shared or Exclusive mode, and to do so will contact this processor and receive the updated version of the cacheline. A LOCKed operation, instead, can only cost a cache miss (because the cacheline will be requested directly in Exclusive state).

In all cases, a cacheline request can be stalled by other processors already modifying the data.
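The contention described above can be provoked deliberately: the sketch below (my own illustration, with arbitrary thread and iteration counts) has two threads hammering one atomic counter, so each LOCKed increment may have to pull the cacheline away from the other core:

```c
/* Sketch: two threads incrementing the same atomic counter, so the
   cacheline ping-pongs between cores - the contention cost described
   above. ITERS and the thread count are arbitrary illustrative choices. */
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

#define ITERS 1000000L

static atomic_long shared_counter = 0;

static void *hammer(void *arg) {
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        atomic_fetch_add(&shared_counter, 1);  /* LOCKed RMW; may bounce the line */
    return NULL;
}

/* Runs both threads to completion and returns the final counter value. */
long run_contended(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, hammer, NULL);
    pthread_create(&t2, NULL, hammer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return atomic_load(&shared_counter);
}
```

Timing this loop against a single-threaded version of the same count is one rough way to observe the per-operation cost of cacheline contention.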

Blaisorblade
Why does changing the state on other CPUs cost as much as 1 cache miss?
osgx
Because it's communication outside the CPU, and thus slower than accessing the cache, while a cache miss has to travel outside the CPU anyway. Actually, talking with another CPU might be faster than talking with memory, if a direct interconnect is used, like AMD's HyperTransport (around for a long time now) or Intel's QuickPath Interconnect, on the very latest Xeon processors based on Nehalem. Otherwise the communication with other CPUs takes place on the same FSB as memory traffic. Search for HyperTransport and Front Side Bus on Wikipedia for more info.
Blaisorblade
Wow, I never thought this was so expensive - a cache miss can be a few thousand cycles.
Lothar