Here's the code that I use to create a differently ordered array:

#include <stdint.h>
#include <stdlib.h>

#define BPP sizeof(uint32_t)  /* bytes per pixel: 4 for a uint32_t pixel */

const unsigned int height = 1536;
const unsigned int width = 2048;

uint32_t* buffer1 = malloc(width * height * BPP);
uint32_t* buffer2 = malloc(width * height * BPP);

int i = 0;
for (unsigned int x = 0; x < width; x++)
    for (unsigned int y = 0; y < height; y++)
        buffer1[x + y * width] = buffer2[i++];

Can anyone explain why using the following assignment:

buffer1[i++] = buffer2[x + y * width];

instead of the one in my code takes twice as much time?

+3  A: 

It's likely down to CPU cache behaviour: at 12 MB, your images far exceed the 256 KB L2 cache of the ARM Cortex-A8 inside an iPhone 3GS.

The first example accesses the reading array in sequential order, which is fast, but has to access the writing array out of order, which is slow.

The second example is the opposite: the writing array is written in fast, sequential order, while the reading array is accessed in slower, strided fashion. Write misses are evidently less costly under this workload than read misses.
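Side by side, the two assignments differ only in which array gets the sequential index and which gets the strided one. A minimal sketch (the function names and the reduced dimensions are illustrative, not from the question):

```c
#include <stdint.h>

/* Reduced dimensions for illustration; the question uses 2048x1536. */
enum { W = 64, H = 48 };

/* The questioner's version: src is read sequentially (i++),
   dst is written with a stride of W. */
static void reorder_read_seq(uint32_t *dst, const uint32_t *src)
{
    int i = 0;
    for (int x = 0; x < W; x++)
        for (int y = 0; y < H; y++)
            dst[x + y * W] = src[i++];
}

/* The slower alternative: dst is written sequentially,
   src is read with a stride of W. */
static void reorder_write_seq(uint32_t *dst, const uint32_t *src)
{
    int i = 0;
    for (int x = 0; x < W; x++)
        for (int y = 0; y < H; y++)
            dst[i++] = src[x + y * W];
}
```

Note that the two orderings are inverse permutations of each other: running `reorder_read_seq` and then `reorder_write_seq` on the result recovers the original array. Both compute the same kind of transpose-style reordering; only the memory access pattern differs.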

Ulrich Drepper's article What Every Programmer Should Know About Memory is recommended reading if you want to know more about this kind of thing.

Note that if you have this operation wrapped up into a function, then you will help the optimiser to generate better code if you use the restrict qualifier on your pointer arguments, like this:

void reorder(uint32_t * restrict buffer1, const uint32_t * restrict buffer2)
{
    int i = 0;
    for (unsigned int x = 0; x < width; x++)
        for (unsigned int y = 0; y < height; y++)
            buffer1[x + y * width] = buffer2[i++];
}

(The restrict qualifier promises the compiler that the data pointed to by the two pointers doesn't overlap - which in this case is necessary for the function to make sense anyway).

caf
+2  A: 

Each pixel access in the first version has linear locality of reference; the second misses the cache on every read and has to go to main memory for each pixel.

The processor can handle writes with bad locality much more efficiently than reads: if a write has to go to main memory, it can be buffered and completed in parallel with other read or arithmetic operations. If a read misses the cache, it can completely stall the processor while the data filters through the cache hierarchy.
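One common mitigation, not mentioned in the answers, is loop blocking (tiling): processing the image in small square tiles keeps both the read and write working sets cache-resident at once. A sketch (the function name and block size are illustrative; it assumes width and height are multiples of the block size):

```c
#include <stdint.h>

enum { BLK = 16 };  /* tile edge in pixels; tune to the cache size */

/* Same reordering as the question's loop (dst[x + y*width] = src[x*height + y]),
   but walked tile by tile so neither array is accessed with a long stride
   for more than BLK consecutive elements. */
static void reorder_blocked(uint32_t *dst, const uint32_t *src,
                            unsigned width, unsigned height)
{
    for (unsigned x0 = 0; x0 < width; x0 += BLK)
        for (unsigned y0 = 0; y0 < height; y0 += BLK)
            for (unsigned x = x0; x < x0 + BLK; x++)
                for (unsigned y = y0; y < y0 + BLK; y++)
                    dst[x + y * width] = src[x * height + y];
}
```

Each BLK×BLK tile touches only BLK distinct cache lines in each array, so reads and writes both stay cache-friendly regardless of which array gets the strided index.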

joshperry