views: 136
answers: 4
Suppose I have two memory segments (equal size, approximately 1 KB each); one is read-only (after initialization), and the other is read/write.

What is the best memory layout for these segments in terms of performance: one allocation with the segments contiguous, or two separate allocations (in general not contiguous)? My primary target is Linux on Intel 64-bit.

My feeling is that the former (cache-friendlier) layout is better. Are there circumstances where the second layout is preferred?

+1  A: 

It'll depend on what you're doing with the memory. I'm fairly certain that contiguous (and page-aligned!) segments would never be slower than two randomly placed ones, but they won't necessarily be any faster.
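For concreteness, here's a minimal sketch of the single-allocation, page-aligned layout: one block holding both 1 KB segments back to back. The names (`SEG_SIZE`, `ro_seg`, `rw_seg`) are illustrative, not from the question.

```c
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SEG_SIZE  1024
#define PAGE_SIZE 4096

int main(void)
{
    void *block;
    /* One allocation, aligned to a page boundary. */
    if (posix_memalign(&block, PAGE_SIZE, 2 * SEG_SIZE) != 0)
        return 1;

    char *ro_seg = block;             /* read-only after initialization */
    char *rw_seg = ro_seg + SEG_SIZE; /* read/write, contiguous with ro_seg */

    memset(ro_seg, 0xAB, SEG_SIZE);   /* initialize once, then only read */
    memset(rw_seg, 0x00, SEG_SIZE);

    printf("ro at %p, rw at %p\n", (void *)ro_seg, (void *)rw_seg);
    free(block);
    return 0;
}
```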

Kitsune
My primary concern is cache thrashing. Are 1 KB segments small enough that it isn't a concern? To be honest, efficient cache use is still a dark art to me.
aaa
2 KB should fit into the L1 cache (on modern x86 chips at least) with no problem; the associated code would likely fit as well, assuming it's fairly data-centric. It would also easily fit inside the L2 cache with a huge amount of room to spare on almost any recent processor (L2 is generally measured in the **M**Bs). If you're going to be accessing both segments very frequently, it can't hurt to have them together and page-aligned.
Kitsune
+4  A: 

I would put the 2 KB of data in the middle of a 4 KB page, to avoid interference from reads and writes close to the page boundary. Similarly, keeping the writable data separate is also a good idea, for the same reason.

Having contiguous read-only and read/write blocks may be less efficient than keeping them separate. For example, a cache line holding data for code interested only in the read-only portion may be invalidated by a write from another CPU. The line will be invalidated and refreshed even though the code wasn't reading the writable data. By keeping the blocks separate you avoid this case: writes to the writable block only invalidate cache lines for that block and do not interfere with cache lines for the read-only block.

Note that this is only a concern at the boundary between the read-only and writable blocks. If your blocks were much larger than the cache line size, this would be a peripheral problem; but since your blocks are small, spanning just a few cache lines, the cost of invalidated lines could be significant.
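A sketch of this suggestion, centering the 2 KB of data in a 4 KB page. The names are illustrative, and 64 bytes is assumed as the cache line size (true of current Intel x86-64 parts):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>

#define PAGE_SIZE  4096
#define SEG_SIZE   1024
#define CACHE_LINE 64   /* assumed line size on x86-64 */

int main(void)
{
    void *page;
    if (posix_memalign(&page, PAGE_SIZE, PAGE_SIZE) != 0)
        return 1;

    /* 1 KB of slack on each side of the 2 KB payload keeps both
     * segments away from the page boundaries. Because SEG_SIZE is a
     * multiple of CACHE_LINE, each segment also starts on its own
     * cache line, so the boundary line is never shared between
     * read-only and writable data. */
    char *ro_seg = (char *)page + (PAGE_SIZE - 2 * SEG_SIZE) / 2;
    char *rw_seg = ro_seg + SEG_SIZE;

    (void)ro_seg;
    (void)rw_seg;
    free(page);
    return 0;
}
```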

mdma
+1  A: 

Given that it's an Intel processor, you probably only need to ensure that the addresses are not an exact multiple of 64 KB apart. If they are, loads from the two sections that map to the same address modulo 64 KB will collide in L1 and cause an L1 miss. There's also a 4 MB aliasing issue, but I'd be surprised if you ran into that.
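A quick sketch of checking for this aliasing condition: if two hot addresses are an exact multiple of 64 KB apart, their loads compete for the same L1 set. The helper name is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

#define ALIAS_STRIDE (64 * 1024)  /* 64 KB aliasing stride */

/* Returns true if a and b are a nonzero exact multiple of 64 KB
 * apart, i.e. they map to the same L1 set and may thrash. */
static bool may_alias_in_l1(const void *a, const void *b)
{
    uintptr_t ua = (uintptr_t)a, ub = (uintptr_t)b;
    uintptr_t d = ua > ub ? ua - ub : ub - ua;
    return d != 0 && d % ALIAS_STRIDE == 0;
}
```

If the check trips, offsetting one of the allocations by a cache line or two is enough to break the collision.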

MSN
+1  A: 

With data that small, it really shouldn't matter much. Both of those arrays will fit comfortably into the cache at any level.

Mark Borgerding
+1 for pointing out the absurdity of the question
BlueRaja - Danny Pflughoeft