My question is about the performance of Java versus compiled code, for example C++/Fortran/assembly, in high-performance numerical applications. I know this is a contentious topic, but I am looking for specific answers/examples. Also community wiki. I have asked similar questions before, but I think I put them too broadly and did not get the answers I was looking for.
Double-precision matrix-matrix multiplication, commonly known as dgemm in the BLAS library, is able to achieve nearly 100 percent of peak CPU performance (in terms of floating-point operations per second).
Several factors make it possible to achieve that performance (a rough sketch of the first two follows this list):
cache blocking, to achieve maximum memory locality
loop unrolling to minimize control overhead
vector instructions, such as SSE
memory prefetching
guaranteed absence of memory aliasing
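For concreteness, here is a minimal Java sketch of what the first two techniques (cache blocking and loop unrolling) look like at the source level. The block size of 64 and the requirement that N be a multiple of it are assumptions for brevity, not tuned values:

```java
// Minimal sketch: blocked (tiled) matrix multiply with a 4-way unrolled inner loop.
// BLOCK = 64 is an assumed tile size; a real implementation would tune it to the cache.
public final class BlockedMultiply {
    private static final int BLOCK = 64;

    // C += A * B for square N x N matrices stored row-major in 1-D arrays,
    // assuming N is a multiple of BLOCK.
    public static void multiply(double[] a, double[] b, double[] c, int n) {
        for (int ii = 0; ii < n; ii += BLOCK) {
            for (int kk = 0; kk < n; kk += BLOCK) {
                for (int jj = 0; jj < n; jj += BLOCK) {
                    for (int i = ii; i < ii + BLOCK; i++) {
                        for (int k = kk; k < kk + BLOCK; k++) {
                            double aik = a[i * n + k];
                            // 4-way unrolled inner loop over a BLOCK-wide stripe of C and B.
                            for (int j = jj; j < jj + BLOCK; j += 4) {
                                c[i * n + j]     += aik * b[k * n + j];
                                c[i * n + j + 1] += aik * b[k * n + j + 1];
                                c[i * n + j + 2] += aik * b[k * n + j + 2];
                                c[i * n + j + 3] += aik * b[k * n + j + 3];
                            }
                        }
                    }
                }
            }
        }
    }
}
```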
I have seen lots of benchmarks using assembly, C++, Fortran, ATLAS, and vendor BLAS (typical cases are matrices of dimension 512 and above). On the other hand, I have heard that the principal byte-compiled languages/implementations, such as Java, can be as fast or nearly as fast as machine-compiled languages. However, I have not seen definitive benchmarks showing that this is so. On the contrary, it seems (from my own research) that byte-compiled languages are much slower.
Do you have good matrix-matrix multiplication benchmarks for Java/C#? Is a just-in-time compiler (an actual implementation, not a hypothetical one) able to produce instructions which satisfy the points I have listed?
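For lack of published numbers, one rough way to get a data point is a simple timing harness like the sketch below, which warms up the JIT before measuring and reports achieved GFLOPS. It reuses the BlockedMultiply class sketched above (my own assumption, not an existing library) and is not a rigorous benchmark; results vary with warm-up, heap size, and GC:

```java
import java.util.Random;

// Rough timing sketch: warm up so the JIT compiles the kernel, then measure GFLOPS.
public final class MultiplyTiming {
    public static void main(String[] args) {
        int n = 1024;                        // assumed problem size, a multiple of the block size
        double[] a = randomMatrix(n), b = randomMatrix(n), c = new double[n * n];

        for (int i = 0; i < 5; i++) {        // warm-up: let the JIT compile the hot loops
            BlockedMultiply.multiply(a, b, c, n);
        }

        long start = System.nanoTime();
        BlockedMultiply.multiply(a, b, c, n);
        double seconds = (System.nanoTime() - start) / 1e9;

        double flops = 2.0 * n * n * (double) n;   // dgemm does 2*N^3 floating-point operations
        System.out.printf("N=%d  time=%.3fs  %.2f GFLOPS%n", n, seconds, flops / seconds / 1e9);
    }

    private static double[] randomMatrix(int n) {
        Random rnd = new Random(42);
        double[] m = new double[n * n];
        for (int i = 0; i < m.length; i++) m[i] = rnd.nextDouble();
        return m;
    }
}
```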
With regard to performance:
Every CPU has a peak performance, which depends on the number of instructions the processor can execute per second. For example, a modern 2 GHz Intel CPU can achieve 8 billion double-precision adds/multiplies per second, resulting in 8 GFLOPS of peak performance. Matrix-matrix multiplication is one of the algorithms able to achieve nearly full performance in terms of operations per second, the main reason being its high ratio of compute to memory operations (N^3 versus N^2). The sizes I am interested in are on the order of N > 500.
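To make that arithmetic explicit (assuming the CPU can issue one 2-wide SSE add and one 2-wide SSE multiply per cycle, which is where the factor of 4 comes from):

$$\text{peak} = 2\,\text{GHz} \times 4\,\tfrac{\text{DP ops}}{\text{cycle}} = 8\ \text{GFLOPS}, \qquad \frac{\text{flops}}{\text{data}} = \frac{2N^3}{3N^2} = \frac{2N}{3} \;(\approx 340 \text{ for } N = 512)$$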
With regard to implementation: higher-level details such as blocking are done at the source-code level. Lower-level optimization is handled by the compiler, perhaps with compiler hints regarding alignment/aliasing. A byte-compiled implementation can be written using the blocked approach as well, so in principle the source-code details of a decent implementation will be very similar.
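Since Java has no restrict or alignment hints, the nearest source-level equivalent I know of is to copy array references into locals and keep the multiplier in a scalar, so the JIT does not have to assume the output array aliases the inputs. Whether a given JIT then emits SSE-quality code is exactly the open question above; a hedged sketch of the idiom:

```java
// Sketch of source-level "aliasing hints" in Java: hold rows in locals and keep the
// scalar factor in a local, so the JIT need not assume cRow overlaps aRow or b
// and can keep values in registers across the inner loop.
public final class InnerKernel {
    // One row of C: c[i][*] += a[i][*] * B, with matrices stored as double[rows][cols].
    static void rowTimesMatrix(double[] aRow, double[][] b, double[] cRow, int n) {
        for (int k = 0; k < n; k++) {
            double aik = aRow[k];      // scalar factor, ideally kept in a register
            double[] bRow = b[k];      // hoist the row reference out of the inner loop
            for (int j = 0; j < n; j++) {
                cRow[j] += aik * bRow[j];
            }
        }
    }
}
```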