views:

447

answers:

8

I am doing a performance comparison test. I want to record the run time of my C++ test application and compare it under different circumstances. The two cases to be compared are: 1) when a file system driver is installed and active, and 2) when that same file system driver is not installed and active.

A series of tests will be conducted on several operating systems, and the two runs described above will be done for each operating system and its setup. Results will only be compared between the two cases for a given operating system and setup.

I understand that when running a C/C++ application on an operating system that is not a real-time system, there is no way to get the exact time it took for the application to run. I don't think this is a big concern as long as the test application runs for a fairly long period of time, making the effects of CPU scheduling, priorities, context switching, etc. negligible.

Edit: For the Windows platform only. How can I generate accurate application run time results within my test application?

+4  A: 

If you're on a POSIX system, you can use the time command, which will give you the total "wall clock" time as well as the actual CPU times (user and system).

Edit: Apparently there's an equivalent for Windows systems in the Windows Server 2003 Resource Kit called timeit.exe (not verified).

ezod
Yes, timeit.exe is the best way to do this on a Windows system, though Cygwin probably gives you the Unix 'time' command.
Dan Olson
Do you know if timeit.exe works on Windows Vista and Win7?
Brian T Hannan
I see instances of both on Google, but I have no way to verify.
ezod
+1  A: 

Just to expand on ezod's answer.
You run the program with the time command to get the total time; there are no changes to your program.

Martin Beckett
A: 

If not on POSIX, you could have the program time itself using the standard ctime library.
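
For example, a minimal sketch of that idea using only <ctime> might look like the following; std::clock() measures CPU time consumed by the process (a useful complement to wall-clock time when other processes compete for the CPU), and do_work() is a hypothetical stand-in for the real test workload:

#include <ctime>
#include <iostream>

// Hypothetical stand-in for the real test workload.
static void do_work()
{
    volatile unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 100000000ULL; ++i)
        sum = sum + i;
}

int main()
{
    std::clock_t start = std::clock();
    do_work();
    std::clock_t end = std::clock();

    // CLOCKS_PER_SEC converts clock ticks to seconds of CPU time.
    std::cout << "CPU time: "
              << double(end - start) / CLOCKS_PER_SEC << " s" << std::endl;
    return 0;
}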

Cory Petosky
+1  A: 

You can put this

#if _DEBUG
time_t start = time(NULL);
#endif

and finish with this

#if _DEBUG
time_t end = time(NULL);
#endif

in your int main() function (you'll need to #include <ctime>). Naturally, you'll have to compute the difference and either log it or cout it, as sketched below.
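
For example, the difference can be computed with difftime(), which returns the elapsed wall-clock time in seconds as a double (a minimal sketch, shown without the _DEBUG guards):

#include <ctime>
#include <iostream>

int main()
{
    time_t start = time(NULL);

    // ... run the test workload here ...

    time_t end = time(NULL);
    std::cout << "Elapsed: " << difftime(end, start) << " s" << std::endl;
    return 0;
}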

wheaties
Although some of the other solutions may be more robust, this was the simplest one; it is the one I ended up using, and it seems to be working as far as I can tell.
Brian T Hannan
@wheaties: what is the unit of time used above?
Lazer
@eSKay: in seconds. It cannot time anything more accurately than that, I'm afraid. However, for most applications it gets the job done.
wheaties
@wheaties: thanks!
Lazer
+1  A: 

If you're on a Windows system you can use the high-performance counters by calling QueryPerformanceCounter():

#include <windows.h>
#include <string>
#include <iostream>

std::string format_elapsed(double d);   // defined below

int main()
{
    LARGE_INTEGER li = {0}, li2 = {0};
    QueryPerformanceFrequency(&li);
    __int64 freq = li.QuadPart;

    QueryPerformanceCounter(&li);
        // run your app here...
    QueryPerformanceCounter(&li2);

    __int64 ticks = li2.QuadPart - li.QuadPart;
    std::cout << "Reference Implementation Ran In " << ticks << " ticks"
              << " (" << format_elapsed((double)ticks/(double)freq) << ")" << std::endl;
    return 0;
}

...and just as a bonus, here's a function that converts the elapsed time (in seconds, floating point) to a descriptive string:

#include <cstdio>   // sprintf
#include <cmath>    // floor, fmod
#include <string>

std::string format_elapsed(double d)
{
    char buf[256] = {0};

    if( d < 0.00000001 )
    {
        // show in ps with 4 digits
        sprintf(buf, "%0.4f ps", d * 1000000000000.0);
    }
    else if( d < 0.00001 )
    {
        // show in ns
        sprintf(buf, "%0.0f ns", d * 1000000000.0);
    }
    else if( d < 0.001 )
    {
        // show in us
        sprintf(buf, "%0.0f us", d * 1000000.0);
    }
    else if( d < 0.1 )
    {
        // show in ms
        sprintf(buf, "%0.0f ms", d * 1000.0);
    }
    else if( d <= 60.0 )
    {
        // show in seconds
        sprintf(buf, "%0.2f s", d);
    }
    else if( d < 3600.0 )
    {
        // show in min:sec
        sprintf(buf, "%01.0f:%02.2f", floor(d/60.0), fmod(d,60.0));
    }
    // show in h:min:sec
    else 
        sprintf(buf, "%01.0f:%02.0f:%02.2f", floor(d/3600.0), floor(fmod(d,3600.0)/60.0), fmod(d,60.0));

    return buf;
}
John Dibling
I don't think I need high precision, and there appear to be different types of output on various systems. Although this is a good suggestion, I think it's a bit much for what I'm doing.
Brian T Hannan
@Brian: Perhaps not; just be aware that the granularity of GetTickCount() is not what it may seem. The docs say it returns a value in milliseconds, but what they don't say is that the granularity of that value is perhaps several hundred milliseconds. For a long-running program (i.e., > 1 second or so) this becomes a non-issue, but for timing code blocks GetTickCount() is worthless.
John Dibling
Correct. I will be timing the task of finding, opening, and reading x characters from all the files within a specified directory path. The run times will be on the order of seconds and probably even minutes.
Brian T Hannan
+2  A: 

I think what you are asking is "How do I measure the time it takes for the process to run, irrespective of the 'external' factors, such as other programs running on the system?" In that case, the easiest thing would be to run the program multiple times, and get an average time. This way you can have a more meaningful comparison, hoping that various random things that the OS spends the CPU time on will average out. If you want to get real fancy, you can use a statistical test, such as the two-sample t-test, to see if the difference in your average timings is actually significant.
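
A sketch of that approach, assuming the workload can be invoked from within the test program itself (run_workload() below is a hypothetical placeholder for it), might look like this on Windows:

#include <windows.h>
#include <iostream>
#include <vector>
#include <numeric>

// Hypothetical placeholder for one complete run of the test workload.
static void run_workload()
{
    volatile unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 50000000ULL; ++i)
        sum = sum + i;
}

int main()
{
    const int kRuns = 10;   // number of repetitions to average
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);

    std::vector<double> samples;
    for (int i = 0; i < kRuns; ++i)
    {
        LARGE_INTEGER t0, t1;
        QueryPerformanceCounter(&t0);
        run_workload();
        QueryPerformanceCounter(&t1);
        samples.push_back(double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart));
    }

    double mean = std::accumulate(samples.begin(), samples.end(), 0.0) / samples.size();
    std::cout << "Average over " << kRuns << " runs: " << mean << " s" << std::endl;
    return 0;
}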

Dima
Nice answer, I took statistics and it's a good idea but I think it might be overkill.
Brian T Hannan
A: 

Download Cygwin and run your program by passing it as an argument to the time command. When you're done, spend some time learning the rest of the Unix tools that come with Cygwin. This will be one of the best career investments you'll ever make; the Unix toolchest is a timeless classic.

Diomidis Spinellis
That's a good idea, but would I have to install Cygwin on each test system I set up? I don't want to have to install anything on the test machines.
Brian T Hannan
You could copy a minimal installation of Cygwin onto a USB stick.
Diomidis Spinellis
About how long does it take to install? I will be repeating this test over and over on machines that will be re-ghosted from ghost images. I will be running tests on various Windows OSes and don't want to waste time installing this thing again and again.
Brian T Hannan
I was suggesting putting a ready-to-run installation on the USB stick.
Diomidis Spinellis
A: 

QueryPerformanceCounter can have problems on multicore systems, so I prefer to use timeGetTime(), which gives the result in milliseconds.

You need a timeBeginPeriod(1) call before and a timeEndPeriod(1) call afterwards to reduce the granularity as far as you can. I find it works nicely for my purposes (regulating timesteps in games), so it should be okay for benchmarking.
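
A minimal sketch of that pattern (link against winmm.lib; the comment marks where the test workload would go):

#include <windows.h>
#include <mmsystem.h>   // timeGetTime, timeBeginPeriod, timeEndPeriod
#include <iostream>

#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1);             // request 1 ms timer granularity

    DWORD start = timeGetTime();
    // ... run the test workload here ...
    DWORD end = timeGetTime();

    timeEndPeriod(1);               // restore the previous granularity

    std::cout << "Elapsed: " << (end - start) << " ms" << std::endl;
    return 0;
}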

sack