I have an application that streams through 250 MB of data, applying a simple and fast neural-net threshold function to the data chunks (which are just 2 32-bit words each). Based on the result of the (very simple) compute, the chunk is unpredictably pushed into one of 64 bins. So it's one big stream in and 64 shorter (variable length) streams out.

This is repeated many times with different detection functions.

The compute is memory bandwidth limited. I can tell this because there's no speed change even if I use a discriminant function that's much more computationally intensive.

What is the best way to structure the writes of the new streams to optimize my memory bandwidth? I am especially thinking that understanding cache use and cache line size may play a big role in this. Imagine the worst case where I have my 64 output streams and by bad luck, many map to the same cache line. Then when I write the next 64 bits of data to a stream, the CPU has to flush out a stale cache line to main memory, and load in the proper cache line. Each of those uses 64 BYTES of bandwidth... so my bandwidth limited application may be wasting 95% of the memory bandwidth (in this hypothetical worst case, though).

It's hard to even try to measure the effect, so designing ways around it is even more vague. Or am I even chasing a ghost bottleneck that somehow the hardware optimizes better than I could?

I'm using Core 2 x86 processors if that makes any difference.

Edit: Here's some example code. It streams through an array and copies its elements to various output arrays picked pseudo-randomly. Running the same program with different numbers of destination bins gives different runtimes, even though the same amount of computation and memory reads and writes were done:

2 output streams: 13 secs
8 output streams: 13 secs
32 output streams: 19 secs
128 output streams: 29 secs
512 output streams: 47 secs

Going from 2 to 512 output streams costs almost 4X, probably caused by cache line eviction overhead.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
  const int size=1<<19;
  int streambits=3;
  int streamcount=1UL<<streambits; // # of output bins
  int *instore=(int *)malloc(size*sizeof(int));
  int **outstore=(int **)malloc(streamcount*sizeof(int *));
  int **out=(int **)malloc(streamcount*sizeof(int *));
  unsigned int seed=0;

  for (int j=0; j<size; j++) instore[j]=j;

  for (int i=0; i< streamcount; ++i)
    outstore[i]=(int *)malloc(size*sizeof(int));

  time_t startTime=time(NULL);
  for (int k=0; k<10000; k++) {
    for (int i=0; i<streamcount; i++) out[i]=outstore[i]; // rewind all bins
    int *in=instore;

    for (int j=0; j<size/2; j++) {
      seed=seed*0x1234567+0x7162521;
      int bin=seed>>(32-streambits); // pseudorandom destination bin
      *(out[bin]++)=*(in++);
      *(out[bin]++)=*(in++);
    }

  }
  time_t endTime=time(NULL);
  printf("Eval time=%ld\n", (long)(endTime-startTime));
  return 0;
}
+4  A: 

As you're writing to the 64 output bins, you'll be using many different memory locations. If the bins are filled essentially at random, it means that you'll sometimes have several bins whose current write positions map to the same cache set. Not a big problem; the Core 2 L1 cache is 8-way associative. That means you'd get a problem only with the 9th line mapping to the same set. With just 65 live memory references at any time (1 read / 64 writes), 8-way associative is OK.

The L2 cache is apparently 12-way associative (3/6MB total, so 12 isn't that weird a number). So even if you'd have collisions in L1, chances are pretty good you're still not hitting main memory.

However, if you don't like this, re-arrange the bins in memory. Instead of storing each bin sequentially, interleave them. For bin 0, store chunks 0-15 at offsets 0-63, but store chunks 16-31 at offsets 8192-8255. For bin 1, store chunks 0-15 at offsets 64-127, etcetera. This takes just a few bit shifts and masks, but the result is that a pair of bins share 8 cache lines.

Another possible way to speed up your code in this case is SSE4, especially in x64 mode. You'd get 16 registers x 128 bits, and you can optimize the read (MOVNTDQA) to limit cache pollution. I'm not sure that will help a lot with the read speed, though - I'd expect the Core 2 prefetcher to catch this. Reading sequential integers is the simplest kind of access possible; any prefetcher should optimize that.

MSalters
+1  A: 

You might want to explore memory-mapping the files. That way the kernel takes care of the memory management for you, and the kernel usually knows best how to handle page caches. This is especially true if your application needs to run on more than one platform, since different OSes handle memory management in different ways.

There are frameworks like ACE (http://www.cs.wustl.edu/~schmidt/ACE.html) or Boost (http://www.boost.org) that allow you to write code that does memory mapping in a platform-independent way.
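A minimal POSIX sketch of mapping the input file read-only (`map_input` is a made-up helper; on Windows you'd use CreateFileMapping/MapViewOfFile instead). The kernel pages the data in on demand, and its read-ahead handles the sequential access pattern:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an entire file of 32-bit ints read-only; returns NULL on failure.
 * On success, *n_ints receives the element count. */
int *map_input(const char *path, size_t *n_ints)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return NULL; }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                         /* the mapping survives the close */
    if (p == MAP_FAILED) return NULL;

    *n_ints = (size_t)st.st_size / sizeof(int);
    return (int *)p;
}
```

Unmap with `munmap(p, n_ints * sizeof(int))` when done.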

lothar
+2  A: 

Here are some ideas if you really get desperate...

You might consider upgrading hardware. For streaming applications somewhat similar to yours, I got a big speed boost by moving to an i7 processor. Also, AMD processors are supposedly better than Core 2 for memory-bound work (though I haven't used them recently myself).

Another solution you might consider is doing the processing on a graphics card using a language like CUDA. Graphics cards are tuned to have very high memory bandwidth and to do fast floating point math. Expect to spend 5x to 20x the development time for CUDA code relative to a straightforward non-optimized C implementation.

Mr Fooz
+3  A: 

Do you have the option of writing your output streams as a single stream with inline metadata to identify each 'chunk'? Read a 'chunk', run your threshold function on it, and then, instead of writing it to a particular output stream, write which stream it belongs to (1 byte) followed by the original data. You'd seriously reduce your thrashing.

I would not suggest this except for the fact that you have said that you have to process these data many times. On each successive run, you read your input stream to get the bin number (1 byte) then do whatever you need to do for that bin on the next 8 bytes.

As far as the caching behavior of this mechanism, since you are only sliding through two streams of data and, in all but the first case, writing as much data as you are reading, the hardware will give you all the help you could possibly hope for as far as prefetching, cache line optimization, etc.

If you had to add that extra byte every time you processed your data, your worst case cache behavior is the average case. If you can afford the storage hit, it seems like a win to me.