#include <stdio.h>

int main()
{
double i=4;
printf("%d",i);
return 0;
}
Can anybody tell me why this program gives 0 as output?
%d is for integers:

#include <stdio.h>

int main()
{
    int i = 4;
    double f = 4;
    printf("%d", i);    // prints 4
    printf("%.0f", f);  // prints 4
    return 0;
}
Because "%d" specifies that you want to print an int, but i is a double. Try printf("%f\n", i); instead (the \n specifies a newline character).
Because i is a double and you tell printf to use it as if it were an int (%d).
Because the language allows you to screw up and you happily do it.
More specifically, '%d' is the format for an int, and therefore printf("%d") consumes as many bytes from the arguments as an int takes. But a double is larger, so printf ends up reading only part of it (here, a bunch of zero bytes). Use '%f' (or '%lf', which printf treats identically).
The simple answer to your question is, as others have said, that you're telling printf to print an integer number (for example a variable of type int) while passing it a double-precision number (your variable is of type double), which is wrong.
Here's a snippet from the printf(3) Linux programmer's manual explaining the %d and %f conversion specifiers:
d, i The int argument is converted to signed decimal notation. The
precision, if any, gives the minimum number of digits that must
appear; if the converted value requires fewer digits, it is
padded on the left with zeros. The default precision is 1.
When 0 is printed with an explicit precision 0, the output is
empty.
f, F The double argument is rounded and converted to decimal notation
in the style [-]ddd.ddd, where the number of digits after the
decimal-point character is equal to the precision specification.
If the precision is missing, it is taken as 6; if the precision
is explicitly zero, no decimal-point character appears. If a
decimal point appears, at least one digit appears before it.
To make your current code work, you can do two things. The first alternative has already been suggested: substitute %d with %f.

The other thing you can do is to cast your double to an int, like this:
printf("%d", (int) i);
The more complex answer (addressing why printf acts like it does) was answered briefly by Jon Purdy. For a more in-depth explanation, have a look at the Wikipedia articles on floating-point arithmetic and double precision.
@jagan, for the sub-question "What is the left-most third byte. Why is it 00000001?": 10000000001 is 1025 in binary.
Jon Purdy gave you a wonderful explanation of why you were seeing this particular result. However, bear in mind that the behavior is explicitly undefined by the language standard:
7.19.6.1.9: If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
(emphasis mine) where "undefined behavior" means
3.4.3.1: behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
IOW, the compiler is under no obligation to produce a meaningful or correct result. Most importantly, you cannot rely on the result being repeatable. There's no guarantee that this program would output 0 on other platforms, or even on the same platform with different compiler settings (it probably will, but you don't want to rely on it).