views: 553 · answers: 7
I need to draw peak meters for audio in realtime. Minimum 44100 samples per second times a minimum 40 streams. Each buffer is between 64 and 1024 samples. I need to grab the abs max from each buffer. (These are then fed through a kind of lowpass filter and drawn at about 20ms intervals.)

for (int i = 0; i < numSamples; i++)
{
    absMaxOfBuffer = MAX(fabs(buffer[i]), absMaxOfBuffer);
}

That's how I do it now. I'd like to do it much faster. The buffers have floats in the -1 to 1 range, hence the fabs.

Question: is there some tricky comp-sci, quicksort-esque way of doing this faster?

Failing that, branchless ABS and MAX functions for floats, do they exist?

edit: Primary platform is Linux/gcc but a windows port is planned (probably with mingw).

edit, the second:
I gave the accept to onebyone because of the bit regarding the actual algo structure which was central to the question.
I'll try unrolling the loop four at a time, zeroing the sign bits and then getting the max with SSE (the maxps instruction), and see if that doesn't peel the banana. Thanks for the suggestions; I've up-voted a few of you as runners-up. :)
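A sketch of that plan, for reference (assuming x86 with SSE available and a compiler that accepts intrinsics; `absMaxSSE` is a made-up name, and the scalar tail loop handles buffer lengths that aren't a multiple of 4):

```cpp
#include <immintrin.h>
#include <algorithm>
#include <cmath>

// Process four floats per iteration: clear the sign bits with an AND mask
// (branchless fabs), then fold with _mm_max_ps; reduce the 4 lanes at the end.
float absMaxSSE(const float* buffer, int numSamples)
{
    const __m128 signMask = _mm_castsi128_ps(_mm_set1_epi32(0x7FFFFFFF));
    __m128 vmax = _mm_setzero_ps();
    for (int i = 0; i + 4 <= numSamples; i += 4)
    {
        __m128 v = _mm_loadu_ps(buffer + i);  // load 4 samples (unaligned OK)
        v = _mm_and_ps(v, signMask);          // |x| by clearing the sign bit
        vmax = _mm_max_ps(vmax, v);           // running per-lane maxima
    }
    // Horizontal reduction of the 4 lanes down to one scalar.
    __m128 shuf = _mm_movehl_ps(vmax, vmax);
    vmax = _mm_max_ps(vmax, shuf);
    shuf = _mm_shuffle_ps(vmax, vmax, _MM_SHUFFLE(1, 1, 1, 1));
    vmax = _mm_max_ps(vmax, shuf);
    float result;
    _mm_store_ss(&result, vmax);
    // Scalar tail for the leftover 0-3 samples.
    for (int i = numSamples & ~3; i < numSamples; ++i)
        result = std::max(result, std::fabs(buffer[i]));
    return result;
}
```

Whether the unaligned loads cost anything depends on the CPU; if the buffers are 16-byte aligned, `_mm_load_ps` can be used instead.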

+1  A: 

Answering a question with another question isn't exactly answering, but hey... I'm no C++ developer either.

Since you're developing this in C++ and you're doing DSP, can't you connect to MATLAB or Octave (which is open source), have it do the math for you, and just get the result?

There are already functions (like conv, fft, ifft, fir, and plotting utility functions like freqz, stem, plot, mesh, etc.) implemented in those pieces of software. I had a look in Photoshop and there's a big folder called MATLAB... with some .m files that get called from the application, are sent to a dynamic MATLAB thingy, and then return the result to the application.

Hope it helps.

George Profenza
+5  A: 

fabs and comparison are both really fast for IEEE floats (like, single-integer-op fast in principle).

If the compiler isn't inlining both operations, then either poke it until it does, or find the implementation for your architecture and inline it yourself.

You can maybe get something out of the fact that positive IEEE floats go in the same order as the integers with the same bit patterns. That is,

f > g   iff   *(int*)&f > *(int*)&g

So once you've fabs'ed, I think that a branch-free max for int will also work for float (assuming they're the same size of course). There's an explanation of why this works here: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm. But your compiler already knows all this, as does your CPU, so it may not make any difference.
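A sketch of that branch-free max, under the assumptions above (both inputs non-negative, no NaNs; `memcpy` is used rather than `*(int*)&f` to stay out of strict-aliasing trouble, and the arithmetic right shift of a negative value is technically implementation-defined, though gcc on x86 does what you'd expect):

```cpp
#include <cstdint>
#include <cstring>

// For non-negative IEEE-754 singles, comparing the bit patterns as signed
// integers gives the same ordering as comparing the floats themselves.
float branchlessMaxNonNegative(float a, float b)
{
    int32_t ia, ib;
    std::memcpy(&ia, &a, sizeof ia);
    std::memcpy(&ib, &b, sizeof ib);
    int32_t d = ia - ib;              // can't overflow for finite non-negatives
    // (d >> 31) is all-ones when a < b, all-zeros otherwise,
    // so this picks ib in the first case and ia in the second.
    int32_t imax = ia - (d & (d >> 31));
    float result;
    std::memcpy(&result, &imax, sizeof result);
    return result;
}
```

As noted, the compiler may already emit a conditional-move for a plain ternary, so measure before committing to this.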

There is no complexity-faster way of doing it. Your algorithm is already O(n), and you can't beat that and still look at every sample.

I guess there's probably something in your processor's SIMD (that is, SSE2 on Intel) that would help, by processing more data per clock cycle than your code. But I don't know what. If there is, then it quite possibly will be several times faster.

You could probably parallelize on a multi-core CPU, especially since you're dealing with 40 independent streams anyway. That will be at best a few factors faster. "Just" launch the appropriate number of extra threads, split the work between them, and use the lightest-weight primitive you can to indicate when they're all complete (maybe a thread barrier). I'm not quite clear whether you're plotting the max of all 40 streams, or the max of each separately, so maybe you don't actually need to synchronise the worker threads, other than to ensure results are delivered to the next stage without data corruption.
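For the per-stream case, a minimal sketch of that split (using C++11 std::thread for brevity; on the asker's gcc/Linux target pthreads would look much the same; `Stream` is a hypothetical {pointer, length} pair):

```cpp
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

struct Stream { const float* data; int numSamples; };

// Each worker scans its own subset of the streams, writing into disjoint
// slots of maxima[], so the only synchronisation needed is the final join().
void absMaxAllStreams(const std::vector<Stream>& streams,
                      std::vector<float>& maxima, unsigned numThreads)
{
    maxima.assign(streams.size(), 0.0f);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t)
    {
        workers.emplace_back([&, t] {
            // Stride over the streams so work is spread evenly.
            for (std::size_t s = t; s < streams.size(); s += numThreads)
            {
                float m = 0.0f;
                for (int i = 0; i < streams[s].numSamples; ++i)
                    m = std::max(m, std::fabs(streams[s].data[i]));
                maxima[s] = m;
            }
        });
    }
    for (auto& w : workers)
        w.join();  // acts as the barrier: all results ready after this
}
```

In a real-time audio context you'd keep a persistent thread pool rather than spawning threads per 20 ms tick, but the work-splitting idea is the same.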

It's probably worth taking a look at the disassembly to see how much the compiler has unrolled the loop. Try unrolling it a bit more, see if that makes any difference.

Another thing to think about is how many cache misses you are getting, and whether it's possible to reduce the number by giving the cache a few clues so it can load the right pages ahead of time. But I have no experience with this, and I wouldn't hold out much hope. __builtin_prefetch is the magic incantation on gcc, and I guess the first experiment would be something like "prefetch the beginning of the next block before entering the loop for this block".
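That first experiment might look something like this (gcc-specific; `nextBuffer` is a hypothetical pointer to the following block, and whether the hint helps at all has to be measured):

```cpp
#include <algorithm>
#include <cmath>

// Hint the start of the next block into cache before scanning this one.
// __builtin_prefetch is a gcc/clang builtin; it's purely advisory.
float absMaxWithPrefetch(const float* buffer, int numSamples,
                         const float* nextBuffer)
{
    __builtin_prefetch(nextBuffer);
    float absMax = 0.0f;
    for (int i = 0; i < numSamples; ++i)
        absMax = std::max(absMax, std::fabs(buffer[i]));
    return absMax;
}
```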

What percentage of the required speed are you currently at? Or is it a case of "as fast as possible"?

Steve Jessop
"Too many already" — only as long as this function is already called from multiple threads. If one thread is doing all the work then you aren't fully utilising the CPU, regardless of the raw number of threads in existence. You can have 1000 threads, but if they're all sat around waiting for this one thread then you don't have enough threads ;-)
Steve Jessop
Correct, I don't know of a better algorithm, on the assumption that you have to look at every single sample. You could perhaps experiment with only taking the max of the first half of each buffer, see how often you're wrong and by how much, and decide whether you care. And of course if the wave is clipped you know it can't get any bigger, and can ignore the rest of that buffer. But to detect that you need an extra comparison in the loop, which will slow things down in the non-clipped case. And I assume with float audio data it's not necessarily obvious what magnitude constitutes "clipped".
Steve Jessop
+2  A: 

Things to try:

  • fabs() might not be an inline function.
  • Special assembly instructions might help. On Intel, SSE has an instruction to compute the maximum of four floats at once.
  • Failing that, the IEEE 754 specification is such that if a and b are non-negative floats, then a < b is equivalent to *(int *)&a < *(int *)&b. Moreover, for any a, you can compute -a from a by flipping the MSB. Together, these properties might enable some bit-twiddling hacks.
  • Do you really need the maximum of every sample? Perhaps the maximum might occur more than once, opening up the possibility of not examining every input.
  • Can you compute the maximum in a streaming fashion?
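The sign-magnitude property in the third bullet makes the branchless fabs a one-line bit operation; a sketch (again using `memcpy` rather than pointer-casting to keep strict aliasing happy):

```cpp
#include <cstdint>
#include <cstring>

// IEEE-754 singles are sign-magnitude: clearing the most significant bit
// yields the absolute value, with no branch and no floating-point op.
float bitwiseFabs(float x)
{
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= 0x7FFFFFFFu;              // clear the sign bit
    float result;
    std::memcpy(&result, &bits, sizeof result);
    return result;
}
```

A decent compiler's inlined fabs compiles to exactly this (or the SSE andps equivalent), so this is mainly useful if the library call isn't being inlined.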
Dave
"fabs() might not be an inline function". That's depressing...
Steve Jessop
I was trying not to assume anything.
Dave
It's a good call - something to check with the disassembler.
Steve Jessop
Or just build without assembling in the first place (with gcc I think this is -S?).
John Zwinck
+1  A: 

There is a branchless fabs documented on http://www.scribd.com/doc/2348628/The-Aggregate-Magic-Algorithms

Please also note that recent versions of GCC will inline a branchless fabs for you, using MMX instructions. There are also fmin and fmax, but GCC won't inline those (you'll get a `call fmin`).

pts
A: 

Easy optimizations I see:

  • translate the loop to a pointer-arithmetic version (assuming your compiler doesn't already do this)
  • the IEEE 754 standard defines its representation as sign-magnitude. So, setting the most-significant bit to 0 will be the same as calling fabs(). Maybe this function already uses this trick.
GogaRieger
+1  A: 

You may want to look at Eigen.

It is a C++ template library that uses SSE (2 and later) and AltiVec instruction sets with graceful fallback to non-vectorized code.

  • Fast (see the benchmark).
  • Expression templates allow intelligently removing temporaries and enable lazy evaluation, when that is appropriate -- Eigen takes care of this automatically and handles aliasing too in most cases.
  • Explicit vectorization is performed for the SSE (2 and later) and AltiVec instruction sets, with graceful fallback to non-vectorized code. Expression templates allow these optimizations to be performed globally for whole expressions.
  • With fixed-size objects, dynamic memory allocation is avoided, and the loops are unrolled when that makes sense.
  • For large matrices, special attention is paid to cache-friendliness.

lothar
+1 Cache line friendliness. And yet another template library: http://nt2.sourceforge.net/
Anonymous
A: 

For your purpose you could square the samples instead of taking the absolute value, since mathematically |a| < |b| if and only if a^2 < b^2. Multiplication might be faster than fabs() on some machines(?), I dunno.
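A sketch of the idea (one multiply per sample instead of a fabs, and a single sqrt per buffer at the end; `absMaxViaSquares` is a made-up name):

```cpp
#include <cmath>

// Track the maximum of x*x over the buffer; |x| is monotone in x*x,
// so one sqrt at the end recovers the absolute maximum.
float absMaxViaSquares(const float* buffer, int numSamples)
{
    float maxSq = 0.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        float sq = buffer[i] * buffer[i];
        if (sq > maxSq) maxSq = sq;
    }
    return std::sqrt(maxSq);   // one sqrt per buffer, not per sample
}
```

Note the squaring introduces a rounding step, so the result can differ from the exact sample value in the last bit; for a peak meter that's unlikely to matter.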

newacct