#include <stdio.h>
int main()
{
    float x = 2;
    float y = 4;
    printf("\n%d\n%f", x / y, x / y);
    printf("\n%f\n%d", x / y, x / y);
}
Output:
0
0.000000
0.500000
0
Compiled with gcc 4.4.3. The program exited with error code 12.
This result is not surprising: for the first %d you passed a double where an int was expected.
Yes. Arguments are read from the vararg list to printf in the same order that format specifiers are read.
Both printf statements are invalid because you're using a format specifier that expects an int, but you're giving it a float (promoted to double).
When you write %d in the printf format string, you must pass an int value as the corresponding argument. Otherwise the behavior is undefined, meaning that your computer may crash or aliens might knock at your door. The same goes for %f and double.
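For illustration, here is a minimal sketch of a call where every specifier matches its argument (the int variable n is hypothetical, added only to give %d something to print):
#include <stdio.h>

int main(void)
{
    float x = 2;
    float y = 4;
    int n = 7;                    /* hypothetical int, only so %d has a matching argument */

    /* %f matches the float expression (promoted to double), %d matches the int */
    printf("%f %d\n", x / y, n);  /* prints: 0.500000 7 */
    return 0;
}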
What you are doing is undefined behaviour. What you are seeing is coincidental; printf could write anything.
You must match the exact type when passing arguments to printf. You can, for example, cast:
printf("\n%d\n%f", (int)(x/y), x/y);
printf("\n%f\n%d", x/y, (int)(x/y));
As noted in other answers, this is because of the mismatch between the format string and the type of the argument.
I'll guess that you're using x86 here (based on the observed results).
The arguments are passed on the stack, and x/y, although of type float, will be passed as a double to a varargs function (due to type "promotion" rules). An int is a 32-bit value, and a double is a 64-bit value.
In both cases you are passing x/y (= 0.5) twice. The representation of this value, as a 64-bit double, is 0x3fe0000000000000. As a pair of 32-bit words, it's stored as 0x00000000 (least significant 32 bits) followed by 0x3fe00000 (most significant 32 bits). So the arguments on the stack, as seen by printf(), look like this:
0x3fe00000
0x00000000
0x3fe00000
0x00000000 <-- stack pointer
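You can check that representation yourself; here is a minimal sketch, assuming an IEEE-754 double and a little-endian machine such as 32-bit x86:
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    double d = 0.5;
    uint64_t bits;
    uint32_t words[2];

    memcpy(&bits, &d, sizeof bits);    /* reinterpret the double's bytes as a 64-bit integer */
    memcpy(words, &d, sizeof words);   /* and as two 32-bit words (low word first on little-endian) */

    printf("0.5 as 64 bits : 0x%016" PRIx64 "\n", bits);      /* 0x3fe0000000000000 */
    printf("low/high words : 0x%08" PRIx32 " 0x%08" PRIx32 "\n", words[0], words[1]);
    return 0;
}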
In the first of your two cases, the %d causes the first 32-bit value, 0x00000000, to be popped and printed. The %f pops the next two 32-bit values, 0x3fe00000 (least significant 32 bits of the 64-bit double), followed by 0x00000000 (most significant). The resulting 64-bit value of 0x000000003fe00000, interpreted as a double, is a very small number. (If you change the %f in the format string to %g you'll see that it's almost 0, but not quite.)
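To see just how small that value is, you can reinterpret the bit pattern as a double (again a sketch assuming IEEE-754 doubles):
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint64_t bits = 0x000000003fe00000ULL;  /* the two words in the order %f consumed them */
    double d;

    memcpy(&d, &bits, sizeof d);            /* reinterpret the bits as a double */
    printf("%g\n", d);                      /* prints a tiny subnormal value, around 5.3e-315 */
    return 0;
}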
In the second case, the %f correctly pops the first double, and the %d pops the 0x00000000 half of the second double, so it appears to work.
http://en.wikipedia.org/wiki/Format_string_attack
Something related to my question; it supports Matthew's answer.