Hello,

I have a few questions about the cache memories used in multicore CPUs and multiprocessor systems. (Although not directly related to programming, this has many repercussions when one writes software for multicore processors/multiprocessor systems, hence asking here!)

1.) In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.), does each CPU core/processor have its own cache memory (data and program cache)?

2.) Can one processor/core access another's cache memory? If they were allowed to access each other's caches, I believe there might be fewer cache misses: when one processor's cache does not hold some data but a second processor's cache does, a read from main memory into the first processor's cache could be avoided. Is this assumption valid and true?

3.) Would there be any problems in allowing any processor to access any other processor's cache memory?

Thank you for bearing multiple questions in one post.

-AD

A: 

Unless you're thinking about writing your own compiler for language X, Wikipedia should provide a sufficient answer.

Ragnar
A: 

Hi goldenmean,

I know this is not a programming question, but it's still quite interesting.

To answer your first question: the Core 2 Duo has a two-tier caching system, in which each processor has its own first-level cache and the two share a second-level cache. This helps with both data synchronization and utilization of memory.

To answer your second question, I believe your assumption is correct. If the processors were able to access each other's caches, there would obviously be fewer cache misses, as there would be more data for the processors to choose from. Consider, however, shared cache. In the case of the Core 2 Duo, the shared cache allows programmers to place commonly used variables in this environment so that the processors will not have to fall back on their individual first-level caches.

To answer your third question, there could potentially be a problem with accessing other processors' cache memory, which comes down to the "Single Write, Multiple Read" principle: we can't allow more than one process to write to the same location in memory at the same time.

For more info on the Core 2 Duo, read this neat article:

http://software.intel.com/en-us/articles/software-techniques-for-shared-cache-multi-core-systems/

Aaron
+1  A: 

Quick answers: 1) Yes. 2) No, but it may all depend on which memory instance/resource you are referring to; data may exist in several locations at the same time. 3) Yes.

For a full-length explanation of the issue you should read the 9-part article "What Every Programmer Should Know About Memory" by Ulrich Drepper ( http://lwn.net/Articles/250967/ ). You will get the full picture of the issues you seem to be inquiring about, in good and accessible detail.

Panic
+1 for pointing out the excellent paper.
sybreon
+1  A: 
  1. Yes. It varies by the exact chip model, but the most common design is for each CPU core to have its own private L1 data and instruction caches. The L2 unified cache is typically shared between all cores.

  2. No. Each CPU core's L1 caches are on the same die as the core and cannot be accessed by other cores. The cores are each connected to the L2 cache via the shared data bus.

  3. Yes -- there simply aren't wires connecting the various CPU caches to the other cores. If a core wants to access data in another core's cache, the only data path through which it can do so is the system bus.

A very important related issue is the cache coherency problem. Consider the following: suppose one CPU core has a particular memory location in its cache, and it writes to that memory location. Then, another core reads that memory location. How do you ensure that the second core sees the updated value? That is the cache coherency problem. There are a variety of solutions; see Wikipedia et al.

Adam Rosenfield