fabs and comparison are both really fast for IEEE floats (like, single-integer-op fast in principle).
If the compiler isn't inlining both operations, then either poke it until it does, or find the implementation for your architecture and inline it yourself.
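If you do end up inlining it by hand: for 32-bit IEEE floats, fabs amounts to clearing the sign bit. A minimal sketch (this assumes float and uint32_t are the same size, and uses memcpy rather than a pointer cast to stay inside the aliasing rules):

    #include <stdint.h>
    #include <string.h>

    /* fabs as a single bit operation (assumes 32-bit IEEE float). */
    static inline float my_fabsf(float x)
    {
        uint32_t u;
        memcpy(&u, &x, sizeof u);   /* reinterpret the bits */
        u &= 0x7FFFFFFFu;           /* clear the sign bit */
        memcpy(&x, &u, sizeof u);
        return x;
    }

A decent compiler should turn the memcpys into plain register moves, so this really is one AND in the loop.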
You can maybe get something out of the fact that non-negative IEEE floats sort in the same order as the integers with the same bit patterns (leaving NaNs aside). That is,

    f > g iff *(int*)&f > *(int*)&g
So once you've fabs'ed, I think that a branch-free max for int will also work for float (assuming they're the same size of course). There's an explanation of why this works here: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm. But your compiler already knows all this, as does your CPU, so it may not make any difference.
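For example, a rough sketch of that trick (assuming 32-bit IEEE floats, both inputs already fabs'ed, and no NaNs in the data; again memcpy stands in for the pointer cast to keep the compiler happy about aliasing):

    #include <stdint.h>
    #include <string.h>

    /* Branch-free max of two non-negative floats via their bit patterns. */
    static inline float max_abs_branchfree(float a, float b)
    {
        uint32_t ua, ub, um;
        memcpy(&ua, &a, sizeof ua);
        memcpy(&ub, &b, sizeof ub);
        /* classic branch-free integer max: mask is all ones when ua < ub */
        um = ua ^ ((ua ^ ub) & -(uint32_t)(ua < ub));
        float m;
        memcpy(&m, &um, sizeof m);
        return m;
    }

Measure before committing to it, though: a plain float compare-and-select often compiles to a conditional move or a maxss anyway.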
There's no asymptotically faster way of doing it. Your algorithm is already O(n), and you can't beat that while still looking at every sample.
I guess there's probably something in your processor's SIMD instruction set (SSE2 and its successors on Intel) that would help, by processing more data per cycle than scalar code does, though I don't know the details. If there is, it could well be several times faster.
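If you want to experiment, one plausible shape for it is something like this (a sketch only: it assumes SSE2, 16-byte-aligned data, and a sample count that's a multiple of 4; a real version needs to handle the tail and alignment):

    #include <stddef.h>
    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Max of |x| over a buffer, 4 floats per iteration. */
    static float max_abs_sse2(const float *data, size_t n)
    {
        const __m128 sign_mask = _mm_set1_ps(-0.0f);  /* just the sign bits */
        __m128 vmax = _mm_setzero_ps();
        for (size_t i = 0; i < n; i += 4) {
            __m128 v = _mm_load_ps(data + i);
            v = _mm_andnot_ps(sign_mask, v);          /* clear sign bits: |x| */
            vmax = _mm_max_ps(vmax, v);               /* 4 running maxima */
        }
        /* reduce the 4 lanes to one value */
        float lanes[4];
        _mm_storeu_ps(lanes, vmax);
        float m = lanes[0];
        for (int i = 1; i < 4; ++i)
            if (lanes[i] > m) m = lanes[i];
        return m;
    }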
You could probably parallelize on a multi-core CPU, especially since you're dealing with 40 independent streams anyway. That will be at best a factor of a few faster (bounded by the number of cores). "Just" launch the appropriate number of extra threads, split the work between them, and use the lightest-weight primitive you can to indicate when they're all complete (maybe a thread barrier). I'm not quite clear whether you're plotting the max of all 40 streams or the max of each separately, so maybe you don't actually need to synchronise the worker threads beyond ensuring results are delivered to the next stage without data corruption.
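A sketch of how that might look with plain pthreads (the stream layout, sizes and thread count here are made up for illustration; join is the simplest possible "barrier"):

    #include <pthread.h>
    #include <math.h>

    #define NUM_STREAMS        40      /* hypothetical layout */
    #define SAMPLES_PER_STREAM 4096
    #define NUM_THREADS        4

    static float streams[NUM_STREAMS][SAMPLES_PER_STREAM];

    struct job { int first, last; float result; };

    /* Each worker scans a contiguous group of streams. */
    static void *worker(void *arg)
    {
        struct job *j = arg;
        float m = 0.0f;
        for (int s = j->first; s < j->last; ++s)
            for (int i = 0; i < SAMPLES_PER_STREAM; ++i) {
                float a = fabsf(streams[s][i]);
                if (a > m) m = a;
            }
        j->result = m;
        return NULL;
    }

    float max_abs_all_streams(void)
    {
        pthread_t tid[NUM_THREADS];
        struct job jobs[NUM_THREADS];
        for (int t = 0; t < NUM_THREADS; ++t) {
            jobs[t].first = t * NUM_STREAMS / NUM_THREADS;
            jobs[t].last  = (t + 1) * NUM_STREAMS / NUM_THREADS;
            pthread_create(&tid[t], NULL, worker, &jobs[t]);
        }
        float m = 0.0f;
        for (int t = 0; t < NUM_THREADS; ++t) {
            pthread_join(tid[t], NULL);
            if (jobs[t].result > m) m = jobs[t].result;
        }
        return m;
    }

Keep the per-job results around instead of folding them into one value if you're plotting each stream separately.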
It's probably worth taking a look at the disassembly to see how much the compiler has unrolled the loop. Try unrolling it a bit more and see whether that makes any difference.
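Something like this is the usual shape of that experiment (a sketch that assumes the sample count is a multiple of 4; a real version needs a tail loop). Keeping four independent running maxima also shortens the dependency chain, which can matter more than the unrolling itself:

    #include <stddef.h>
    #include <math.h>

    static float max_abs_unrolled(const float *data, size_t n)
    {
        float m0 = 0.0f, m1 = 0.0f, m2 = 0.0f, m3 = 0.0f;
        for (size_t i = 0; i < n; i += 4) {
            float a0 = fabsf(data[i + 0]);
            float a1 = fabsf(data[i + 1]);
            float a2 = fabsf(data[i + 2]);
            float a3 = fabsf(data[i + 3]);
            if (a0 > m0) m0 = a0;   /* four independent maxima */
            if (a1 > m1) m1 = a1;
            if (a2 > m2) m2 = a2;
            if (a3 > m3) m3 = a3;
        }
        /* combine the four partial results */
        if (m1 > m0) m0 = m1;
        if (m3 > m2) m2 = m3;
        return m0 > m2 ? m0 : m2;
    }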
Another thing to think about is how many cache misses you're getting, and whether it's possible to reduce that by giving the cache a few clues so it can load the right lines ahead of time. I have no experience with this, though, and I wouldn't hold out much hope. __builtin_prefetch is the magic incantation on gcc, and I guess the first experiment would be something like "prefetch the beginning of the next block before entering the loop for this block".
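In code, that first experiment might look roughly like this (block_size and the overall structure are made up; a prefetch only pulls in one cache line, so whether it helps at all depends on the hardware and access pattern):

    #include <stddef.h>
    #include <math.h>

    /* Scan in blocks, hinting at the next block before scanning the current one. */
    static float max_abs_prefetch(const float *data, size_t n, size_t block_size)
    {
        float m = 0.0f;
        for (size_t start = 0; start < n; start += block_size) {
            size_t end = start + block_size < n ? start + block_size : n;
            if (end < n)
                __builtin_prefetch(data + end);   /* hint: start loading the next block's first line */
            for (size_t i = start; i < end; ++i) {
                float a = fabsf(data[i]);
                if (a > m) m = a;
            }
        }
        return m;
    }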
What percentage of the required speed are you currently at? Or is it a case of "as fast as possible"?