I have two algorithms written in C++. As far as I know, it is conventional to compile with
-O0 -DNDEBUG (g++) when comparing the performance of two algorithms (asymptotically they are the same).
But I think that optimization level is unfair to one of them, because it uses the STL everywhere. Compiled with -O0, the program that uses plain arrays runs about 5 times faster than the STL-heavy one, but the difference nearly disappears when I compile with -O2 -DNDEBUG.
Is there any way to get the best out of the STL (I am taking a heavy performance hit in vector's [] operator) at optimization level -O0?
What optimization level (and defines such as -DNDEBUG) do you use when comparing two algorithms?
It would also be a great help if someone could give some idea of the conventions in academic research for comparing the performance of algorithms written in C++.
EDIT:
OK, to isolate the optimization-level issue, I am now using one algorithm with two different implementations.
I changed one of the functions from raw pointers (int and bool) to std::vector<int> and std::vector<bool>. With -O0 -DNDEBUG the times are 5.46 s (raw pointers) and 11.1 s (std::vector). With -O2 -DNDEBUG, the times are 2.02 s (raw pointers) and 2.21 s (std::vector). It is the same algorithm: one implementation uses four or five dynamic arrays of int and bool, the other uses std::vector<int> and std::vector<bool> instead. They are identical in every other respect.
You can see that at -O0, the raw pointers outperform std::vector by roughly a factor of two, while at -O2 they are almost the same.
But I am really confused, because in academia, when running times of algorithms are published, the programs are compiled with -O0.
Are there some compiler options I am missing?