views: 77
answers: 5

I have the following C99 program, which measures the performance of simple division operations relative to addition. However, difftime() keeps returning 0 even though the program clearly takes several seconds to run runAddition and runDivision with iterations set to 1 billion.

#include <stdio.h>
#include <time.h>

void runAddition(long long iterations)
{
    long long temp;
    for (long long i = 1; i <= iterations; i++)
    {
        temp = temp + i;
    }
}

void runDivision(long long iterations)
{
    long long temp;

    // Start at 1 to avoid division by 0!
    for (long long i = 1; i <= iterations; i++)
    {
        temp = temp / i;
    }
}

int main()
{
    long long iterations = 1000000000;
    time_t startTime;

    printf("How many iterations would you like to run of each operation? ");
    scanf("%d", &iterations);

    printf("Running %d additions...\n", iterations);
    startTime = time(NULL);
    runAddition(iterations);
    printf("%d additions took %f seconds\n", iterations, difftime(time(NULL), startTime));

    printf("Running %d divisions...\n", iterations);
    startTime = time(NULL);
    runDivision(iterations);
    printf("%d divisions took %f seconds\n", iterations, difftime(time(NULL), startTime));
}
A: 

difftime() calculates the difference in seconds between time1 and time2. So maybe your time difference is less than 1 second?

Output your start and end time to verify.

Sheen
No, it is definitely more than a second. More like 10 seconds. I've verified this by printing `startTime` and `time(NULL)`.
Jake Petroules
@Jake Petroules, can you output the start and end times to debug?
Sheen
Yes, as I stated, I did that.
Jake Petroules
A: 

time() returns a time_t, which has a resolution of one second.

The time required for runDivision() is less than one second; a billion simple operations on a multi-GHz core complete in under a second.

Blank Xavier
Takes 40 seconds on my machine. As I stated, "the program is *clearly* taking several seconds to process".
Jake Petroules
+1  A: 

Make temp volatile so the computation does not get optimized away. The compiler probably sees the function as having no side effects and removes the loop entirely.

leppie
This is the most likely thing, in my opinion.
DeadMG
I don't think so. "temp = temp / i" requires performing the calculation to determine the final value of temp; you can't optimize it away.
Blank Xavier
Given the large number of iterations, and if the function is inlined, temp would become 0 within about 30 iterations. Edit: actually, 30 assumes dividing by 2 every time; with increasing `i`, even fewer iterations are needed.
leppie
+5  A: 

Your format string expects an int (%d) and a double (%f), but your arguments are a long long and a double. The first format specifier should be %lld.

When the arguments are pushed on the stack for the call to printf, the long long occupies 8 bytes and the double occupies 8 bytes as well. When printf reads the format string, however, it expects an int in 4 bytes followed by a double in 8 bytes.

printf reads the int correctly because, on a little-endian machine, the first four bytes of the long long are enough to represent the value. For the double, though, it reads the last four bytes of the long long followed by the first four bytes of the actual double. Since the last four bytes of the long long are zeroes, the bit pattern printf interprets as a double begins with four zero bytes, which corresponds to a very, very tiny value under the binary representation of doubles.

Didier Trosset
Ahh... I had no idea that using an incorrect format specifier could affect the others. This fixed the problem, +1 and thanks. Also, could you explain this further at all or point me to some documentation?
Jake Petroules
Be careful, because the format specifier for `long long` is CRT-dependent. It's better to avoid printing long longs directly.
ruslik
@ruslik: that's what [`stdint.h`](http://www.opengroup.org/onlinepubs/9699919799/basedefs/stdint.h.html) and [`inttypes.h`](http://www.opengroup.org/onlinepubs/9699919799/basedefs/inttypes.h.html) are for.
Hasturkun
@Hasturkun @codaddict, neither file is found in Microsoft Visual Studio 2008. Looking at the C++ standard, I find that long long is not supported by C++ 2003 (ISO/IEC 14882) but is supported by the C++0x draft (n3126); see the "Integer literals" section of both documents.
Sheen
@Sheen: true, but as question did refer to C99 and not C++, it's still relevant
Hasturkun
@Hasturkun, I didn't say the question is irrelevant; I was just stating a fact. The more important fact is that the Microsoft compiler doesn't ship these standard C headers. Doesn't that mean this is not universally portable? Is there a more portable way?
Sheen
+1  A: 

Try using %lld in place of %d in the printf:

printf("%lld additions took %f seconds\n", iterations, difftime(time(NULL), startTime));
        ^^^^

Works fine after this change.

codaddict
+1 This is it; printf is expecting 32 bits but gets 64.
PigBen
@PigBen, I'm curious about this. It seems my long-standing assumption that a larger integer is simply truncated by printf() is not true?
Sheen
@Sheen -- In the old days, before long long existed, this was rarely a problem: the smaller integer types (char, short) were promoted to int, and int and long were the same size on most platforms. But with the addition of long long, things are different. I'm not sure exactly what the standard says about it; I don't use printf anymore, I prefer C++ streams.
PigBen