I am developing some scientific software for my university. It is written in C++ on Windows (VS2008). The algorithm must calculate some values for a large number of matrix pairs; that is, at its core sits a loop iterating over the matrices and collecting some data, e.g.:
double sumA = 0, sumAsq = 0, sumB = 0, sumBsq = 0,
       diffsum = 0, diffsumsq = 0, result = 0;
for (int y=0; y < height; ++y)
{
    for (int x=0; x < width; ++x)
    {
        double valA = matrixA(x,y);
        double valB = matrixB(x,y);
        sumA      += valA;
        sumAsq    += valA * valA;
        sumB      += valB;
        sumBsq    += valB * valB;
        diffsum   += valA - valB;
        diffsumsq += (valA - valB) * (valA - valB);
    }
}
result = sumA + sumB / sumAsq + sumBsq * diffsum * diffsumsq;
This routine is executed millions of times for different matrixA, matrixB pairs. My problem is that the program is extremely slow, even compiled in Release mode with all optimizations activated. Using the "pause-when-busy-and-inspect" debugger technique, I established that the program sits inside this loop virtually every time I break in, even though, as you might expect, the routine is surrounded by a whole bunch of conditions and control branches. What puzzles me most is that, running on a dual-processor Xeon-based system, the program utilizes one of the 4 cores (no surprise, it is single-threaded for now), but only up to about 25% of that core's limit, and with relatively large oscillations, where I would expect a steady 100% load until the program terminates.
The current version is actually a rewrite, created with performance in mind. I was devastated to find out it is actually slower than the original. The previous version used Boost matrices, which I replaced with OpenCV matrices after establishing that they were over 10 times faster when comparing the execution time of multiplying two 1000x100 matrices. I access the matrices by manually dereferencing a raw pointer to their data, which I hoped would gain me some performance. I made the calculation routine a multi-line #define macro to enforce its inlining and to avoid function calls and returns. I improved the math behind the calculations so that the final value is calculated in a single pass through the matrices (the old version required two passes). I expected huge gains, and yet the opposite is true. I am nowhere near my old program's efficiency, not to mention commercial software for this particular application.
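To give an idea, the macro is shaped roughly like this (a simplified sketch with a made-up name, not my actual macro; it assumes plain variables as arguments, since each argument is evaluated several times):

// Sketch of the kind of multi-line #define I use to force inlining
// (hypothetical name; arguments must be simple variables, as each is
// evaluated more than once).
#define ACCUMULATE_STATS(valA, valB)                          \
    do {                                                      \
        sumA      += (valA);                                  \
        sumAsq    += (valA) * (valA);                         \
        sumB      += (valB);                                  \
        sumBsq    += (valB) * (valB);                         \
        diffsum   += (valA) - (valB);                         \
        diffsumsq += ((valA) - (valB)) * ((valA) - (valB));   \
    } while (0)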
I was wondering if it perhaps has something to do with the matrix data being 8-bit chars. I once saw that access to floats was actually slower than access to doubles in my old program; perhaps chars are even slower, since the processor retrieves data in 32-bit chunks (this Xeon probably grabs even 64 bits). I have also considered turning the matrices into vectors to avoid the loop-inside-loop construct, as well as some form of vectorization, for example calculating the data for 4 (fewer? more?) consecutive matrix cells in a single loop iteration; I sketch the flattening idea below and the unrolling idea after the EDIT code. Any other ideas, please?
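What I have in mind for the flattened, single-loop version is something like this (an untested sketch: the function and pointer names are made up, and it assumes each matrix's data is one contiguous block with no padding between rows):

// Untested sketch of the "matrices as flat vectors" idea. dataA/dataB are
// hypothetical pointers to the raw 8-bit matrix data, assumed contiguous
// (no padding bytes between rows).
void accumulateFlat(const char *dataA, const char *dataB, int width, int height,
                    double &sumA, double &sumAsq, double &sumB, double &sumBsq,
                    double &diffsum, double &diffsumsq)
{
    const int n = width * height;
    for (int i = 0; i < n; ++i)
    {
        const char Aval = dataA[i];
        const char Bval = dataB[i];
        sumA      += Aval;
        sumAsq    += Aval * Aval;
        sumB      += Bval;
        sumBsq    += Bval * Bval;
        diffsum   += Aval - Bval;
        diffsumsq += (Aval - Bval) * (Aval - Bval);
    }
}

(With OpenCV, the contiguity assumption amounts to widthStep == width, which I would have to verify first.)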
EDIT: actual code in the new, OpenCV-based version:
const char *Aptr, *Bptr;
double sumA = 0, sumB = 0, sumAsq = 0, sumBsq = 0, diffsum = 0, diffsumsq = 0;
char Aval, Bval;
for (int y=0; y < height; ++y)
{
    // row start pointers; widthStep accounts for possible row padding
    Aptr = (char*)(AMatrix.imageData + AMatrix.widthStep * y);
    Bptr = (char*)(BMatrix.imageData + BMatrix.widthStep * y);
    for (int x=0; x < width; ++x)
    {
        Aval = Aptr[x];
        Bval = Bptr[x];
        sumA      += Aval;
        sumB      += Bval;
        sumAsq    += Aval * Aval;
        sumBsq    += Bval * Bval;
        diffsum   += Aval - Bval;
        diffsumsq += (Aval - Bval) * (Aval - Bval);
    }
}
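And this is roughly what I mean by calculating the data for 4 consecutive matrix cells in a single loop iteration, applied to the inner loop above (an untested sketch; it assumes width is a multiple of 4, otherwise the leftover columns would need a cleanup loop):

// Untested sketch: 4-way unrolled inner loop. Partial sums are formed in
// int registers, so each double accumulator is touched once per 4 cells.
for (int x = 0; x < width; x += 4)
{
    const int a0 = Aptr[x],     b0 = Bptr[x];
    const int a1 = Aptr[x + 1], b1 = Bptr[x + 1];
    const int a2 = Aptr[x + 2], b2 = Bptr[x + 2];
    const int a3 = Aptr[x + 3], b3 = Bptr[x + 3];
    const int d0 = a0 - b0, d1 = a1 - b1, d2 = a2 - b2, d3 = a3 - b3;

    sumA      += a0 + a1 + a2 + a3;
    sumB      += b0 + b1 + b2 + b3;
    sumAsq    += a0*a0 + a1*a1 + a2*a2 + a3*a3;
    sumBsq    += b0*b0 + b1*b1 + b2*b2 + b3*b3;
    diffsum   += d0 + d1 + d2 + d3;
    diffsumsq += d0*d0 + d1*d1 + d2*d2 + d3*d3;
}

The hope is that the char-to-double conversion then happens once per accumulator per group of 4 cells instead of once per cell.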