I have a bunch of buffers (25 to 30 of them) in my application that are fairly large (0.5 MB each) and accessed simultaneously. To make it even worse, the data in them is generally only read once, and it is updated frequently (about 30 times per second). Sort of the perfect storm of non-optimal cache use.

Anyhow, it occurred to me that it would be cool if I could mark a block of memory as non-cacheable... Theoretically, this would leave more room in the cache for everything else.

So, is there a way to get a block of memory marked as non-cacheable in Linux?

A: 

On certain processor architectures, there are special instructions that can be used to mark certain cache lines as disabled. However, these are usually architecture specific and rely on particular assembly instructions, so I would advise you to refer to your processor architecture's documentation and figure out how to do it in assembly. You can then use inline assembly with GCC to activate it. It would make performance suck, though.
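
To give a flavour of the inline-assembly side on one architecture: x86 has the unprivileged clflush instruction, which evicts the line containing an address from every cache level. It flushes rather than disables caching, so it is only a rough stand-in for what you'd do in kernel space, but it shows the GCC pattern. A minimal sketch (flush_cache_line is just an illustrative name):

/* Illustrative only: evict the cache line holding *p from all cache levels.
 * This does not make the memory uncacheable; it just demonstrates issuing a
 * cache-control instruction via GCC inline assembly on x86. */
static inline void flush_cache_line(volatile void *p)
{
    __asm__ volatile ("clflush %0" : "+m" (*(volatile char *)p));
}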

PS: If you can, you may want to think of a different way to handle the data.

sybreon
You're not going to be able to use instructions like that from userspace...
bdonlan
Yep, on processors where it is a privileged instruction. Then, with Linux, you'll need to find a place to drop it in kernel space and write some sort of user space function to access it.
sybreon
A: 

You might also want to look into processor affinity to reduce cache thrashing.
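
For example, on Linux you can pin a thread to one CPU with sched_setaffinity so its working set tends to stay in that CPU's cache. A minimal sketch (CPU number 2 is just an arbitrary example):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                                    /* run only on CPU 2 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling thread */
        perror("sched_setaffinity");
        return 1;
    }
    /* ... cache-sensitive work here ... */
    return 0;
}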

Nikolai N Fetissov
+4  A: 

How to avoid polluting the caches with data like this is covered in [What Every Programmer Should Know About Memory](http://people.redhat.com/drepper/cpumemory.pdf) (PDF). It is written from the perspective of Red Hat development, so it is a good fit for you, and most of it is cross-platform anyway.

What you want is called "Non-Temporal Access": it tells the processor that the value you are reading now will not be needed again for a while. The processor then avoids caching that value.

See page 49 of the PDF I linked above. It uses the Intel intrinsics to stream data around the cache.

On the read side, processors, until recently, lacked support aside from weak hints using non-temporal access (NTA) prefetch instructions. There is no equivalent to write-combining for reads, which is especially bad for uncacheable memory such as memory-mapped I/O. Intel, with the SSE4.1 extensions, introduced NTA loads. They are implemented using a small number of streaming load buffers; each buffer contains a cache line. The first movntdqa instruction for a given cache line will load a cache line into a buffer, possibly replacing another cache line. Subsequent 16-byte aligned accesses to the same cache line will be serviced from the load buffer at little cost. Unless there are other reasons to do so, the cache line will not be loaded into a cache, thus enabling the loading of large amounts of memory without polluting the caches. The compiler provides an intrinsic for this instruction:

#include <smmintrin.h>
__m128i _mm_stream_load_si128 (__m128i *p);

This intrinsic should be used multiple times, with addresses of 16-byte blocks passed as the parameter, until each cache line is read. Only then should the next cache line be started. Since there are a few streaming read buffers, it might be possible to read from two memory locations at once.

This would be perfect for you if, when reading, you walk the buffers in linear order through memory, using streaming reads to do so. When you want to modify them, walk them in linear order as well, and you can use streaming writes to do that if you don't expect to read them again any time soon from the same thread. A rough sketch is below.
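
Something along these lines, as a rough sketch rather than drop-in code (it assumes SSE4.1, so compile with -msse4.1, plus 16-byte-aligned buffers whose size is a multiple of 64 bytes; copy_streaming is just an illustrative name):

#include <emmintrin.h>   /* _mm_stream_si128; pulls in _mm_sfence */
#include <smmintrin.h>   /* _mm_stream_load_si128 (SSE4.1) */
#include <stddef.h>

/* Read src with non-temporal loads (four 16-byte loads per 64-byte cache
 * line) and write dst with non-temporal stores, so neither side pollutes
 * the caches. */
void copy_streaming(const __m128i *src, __m128i *dst, size_t bytes)
{
    size_t i, n = bytes / 16;                  /* number of 16-byte blocks */
    for (i = 0; i < n; i++) {
        __m128i v = _mm_stream_load_si128((__m128i *)&src[i]); /* movntdqa */
        _mm_stream_si128(&dst[i], v);                          /* movntdq  */
    }
    _mm_sfence();   /* make the streamed stores globally visible */
}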

Tom Leys
+1  A: 

Frequently updated data actually is the perfect application of cache. As jdt mentioned, modern CPU caches are quite large, and 0.5 MB might well fit in cache. More importantly, though, read-modify-write to uncached memory is VERY slow - the initial read has to block on memory, then the write operation ALSO has to block on memory in order to commit. And just to add insult to injury, the CPU might implement no-cache memory by loading the data into cache, then immediately invalidating the cache line - thus leaving you in a position which is guaranteed to be worse than before.

Before you try outsmarting the CPU like this, you really should benchmark the entire program and see where the real slowdown is. Modern profilers such as valgrind's cachegrind can measure cache misses, so you can find out whether that is a significant source of slowdown.

On another, more practical note, if you're doing 30 RMWs per second, this is at worst something on the order of 1920 bytes of cache footprint (30 accesses each touching one 64-byte cache line). That's only 1/16 of the L1 data cache of a modern Core 2 processor, and likely to be lost in the general noise of the system. So don't worry about it too much :)

That said, if by 'accessed simultaneously' you mean 'accessed by multiple threads simultaneously', be careful about cache lines bouncing between CPUs. This wouldn't be helped by uncached RAM - if anything it'd be worse, as the data would have to travel all the way back to physical RAM each time instead of possibly passing through the faster inter-CPU bus - and the only way to avoid the problem is to minimize the frequency of access to shared data. For more about this, see http://www.ddj.com/hpc-high-performance-computing/217500206
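
One common way to keep per-thread data from bouncing between CPUs is to give each thread its own cache-line-aligned slot, so no two threads ever write to the same line. A hedged sketch, assuming a 64-byte cache line (typical on x86, not universal):

#include <stdint.h>

#define CACHE_LINE 64   /* assumed line size */

/* Pad each per-thread slot out to a full cache line so writes by one
 * thread never invalidate the line a neighbouring thread is using. */
struct per_thread_counter {
    uint64_t value;
    char pad[CACHE_LINE - sizeof(uint64_t)];
};

struct per_thread_counter counters[8] __attribute__((aligned(CACHE_LINE)));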

bdonlan
Read-modify-write is very slow, but this is only a problem if you don't write the entire cache line within a small number of instructions. The CPU can detect when an entire cache line has been modified and avoid the read.
Tom Leys
Locking memory that another CPU has in its cache is also supposed to be very slow, so I would be careful with the assertion that locking memory for modification while it sits in another CPU's cache is faster than reading that same memory from the system.
Tom Leys