The main reasons why these two examples are so different in speed are:
- the faster one doesn't use any generics, so it doesn't incur boxing/unboxing.
- the faster one doesn't create temporary collections and, thus, avoids extra memory copies.
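The two versions being compared aren't reproduced here, so as a minimal sketch of what they presumably look like (the names `first`, `second` and the exact loop structure are my assumptions):

```scala
object DotProduct {
  // The slow, functional version: zip allocates an Array[(Float, Float)],
  // map allocates another Array[Float], and every element gets boxed.
  def slow(first: Array[Float], second: Array[Float]): Float =
    first.zip(second).map { case (a, b) => a * b }.reduceLeft(_ + _)

  // The fast, imperative version: a single pass over primitive floats,
  // with no tuples, no boxing, and no temporary collections.
  def fast(first: Array[Float], second: Array[Float]): Float = {
    var sum = 0f
    var i = 0
    while (i < first.length) {
      sum += first(i) * second(i)
      i += 1
    }
    sum
  }
}
```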
Let's consider the slower one in parts. First:

```scala
first.zip(second)
```

That creates a new array, an array of `Tuple2`. It copies all elements from both arrays into `Tuple2` objects, and then copies a reference to each of these objects into a third array. Now, notice that `Tuple2` is parameterized, so it can't store `Float` directly. Instead, a new instance of `java.lang.Float` is created for each number, the number is stored in it, and a reference to it is stored in the `Tuple2`.
```scala
map { case (a, b) => a * b }
```

Now a fourth array is created. To compute the values of its elements, the code reads the reference to the tuple from the third array, reads the references to the `java.lang.Float` objects stored in it, reads the numbers, multiplies them, creates a new `java.lang.Float` to store the result, and passes that reference back, where it is dereferenced again to be stored in the array (arrays are not type-erased).
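The intermediate allocations described above become visible if the pipeline is pulled apart step by step (a sketch; the variable names are mine):

```scala
object Steps {
  val first  = Array(1f, 2f, 3f)
  val second = Array(4f, 5f, 6f)

  // Step 1: zip materializes a new array of Tuple2 objects; each Float
  // is boxed into a java.lang.Float before it can be stored in a tuple.
  val pairs: Array[(Float, Float)] = first.zip(second)

  // Step 2: map materializes yet another array; each multiplication
  // unboxes two values and stores a primitive result.
  val products: Array[Float] = pairs.map { case (a, b) => a * b }
}
```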
We are not finished, though. Here's the next part:

```scala
reduceLeft(_ + _)
```

That one is relatively harmless, except that it still does boxing/unboxing and `java.lang.Float` creation at each iteration, since `reduceLeft` receives a `Function2`, which is parameterized.
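To make that concrete, here is what the `_ + _` literal expands to (a sketch; the names are mine):

```scala
object ReduceStep {
  // _ + _ expands to a Function2[Float, Float, Float]. Function2 is not
  // specialized for Float arguments, so each call boxes its arguments
  // into java.lang.Float objects.
  val add: (Float, Float) => Float = (a, b) => a + b

  def sum(xs: Array[Float]): Float = xs.reduceLeft(add)
}
```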
Scala 2.8 introduces a feature called specialization, which will get rid of a lot of this boxing/unboxing. But let's consider alternative, faster versions. We could, for instance, do `map` and `reduceLeft` in a single step:

```scala
sum = first.zip(second).foldLeft(0f) { case (a, (b, c)) => a + b * c }
```
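As a self-contained sketch of that single-step version (the surrounding declarations are my assumptions):

```scala
object FoldVersion {
  // One traversal instead of map followed by reduceLeft, though zip
  // still allocates the intermediate array of boxed tuples.
  def dot(first: Array[Float], second: Array[Float]): Float =
    first.zip(second).foldLeft(0f) { case (acc, (a, b)) => acc + a * b }
}
```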
We could use `view` (Scala 2.8) or `projection` (Scala 2.7) to avoid creating intermediary collections altogether:

```scala
sum = first.view.zip(second).map { case (a, b) => a * b }.reduceLeft(_ + _)
```
This last one doesn't save much, actually, so I think the non-strictness is being "lost" pretty fast (i.e., one of these methods is strict even in a view). There's also an alternative way of zipping that is non-strict (i.e., avoids some intermediary results) by default:

```scala
sum = (first, second).zipped.map { case (a, b) => a * b }.reduceLeft(_ + _)
```
This gives a much better result than the former. It is even better than the `foldLeft` one, though not by much. Unfortunately, we can't combine `zipped` with `foldLeft`, because the former doesn't support the latter.
The last one is the fastest I could get. Faster than that, only with specialization. Now, `Function2` happens to be specialized, but only for `Int`, `Long` and `Double`. The other primitives were left out, as specialization increases code size rather dramatically for each primitive. In my tests, though, `Double` actually takes longer. That might be a result of it being twice the size, or it might be something I'm doing wrong.
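For reference, this is what specialization looks like at a definition site. The trait and its method below are hypothetical stand-ins for `Function2`; `@specialized` is the real annotation, and listing `Float` asks the compiler to also generate an unboxed primitive variant:

```scala
// A Function2-like trait specialized for Float: the compiler emits an
// extra apply variant that works on primitive floats, with no boxing.
trait FloatOp2[@specialized(Float) A] {
  def apply(x: A, y: A): A
}

object SpecializedExample {
  val multiply: FloatOp2[Float] = new FloatOp2[Float] {
    def apply(x: Float, y: Float): Float = x * y
  }
}
```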
So, in the end, the problem is a combination of factors, including producing intermediary copies of elements and the way the JVM handles primitives and generics. Similar code in Haskell, with supercompilation, would be equal to anything short of assembler. On the JVM, you have to be aware of the trade-offs and be prepared to optimize critical code.