Hi,
I was trying to understand the floating-point representation in C using this code (both float and int are 4 bytes on my machine):
int x = 3;
float y = *(float*) &x;
printf("%d %e \n", x, y);
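For reference, here is the self-contained version of the program I compiled (nothing assumed beyond the snippet above; it just adds <stdio.h> and a main function):

    #include <stdio.h>

    int main(void)
    {
        int x = 3;
        /* Reinterpret the bytes of x as a float. */
        float y = *(float *) &x;
        printf("%d %e\n", x, y);
        return 0;
    }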
We know that the binary representation of x will be the following:
00000000000000000000000000000011
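To make sure y really ends up with exactly this bit pattern, I also dumped its bits (a small sketch; it assumes unsigned int is also 32 bits, and uses memcpy instead of the pointer cast to sidestep any aliasing questions):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int x = 3;
        float y;
        unsigned int bits;

        memcpy(&y, &x, sizeof y);       /* same reinterpretation as the cast */
        memcpy(&bits, &y, sizeof bits); /* copy the float's bits into an integer */

        /* Print the 32 bits of y, most significant bit first. */
        for (int i = 31; i >= 0; i--)
            putchar((bits >> i) & 1u ? '1' : '0');
        putchar('\n');
        return 0;
    }

It prints the same 32-bit pattern shown above.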
Therefore I would have expected y to be interpreted as follows:
Sign bit (first bit from left) = 0
Exponent (bits 2-9 from left) = 0
Mantissa (bits 10-32 from left) = 2^(-22) + 2^(-23), which with the implicit leading 1 gives a significand of 1 + 2^(-22) + 2^(-23)
Leading to y = (-1)^0 * 2^(0-127) * (1 + 2^(-22) + 2^(-23)) ≈ 5.87747e-39
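To double-check that arithmetic, this sketch evaluates the same formula with ldexp from <math.h> (ldexp(v, n) computes v * 2^n; may need -lm when linking):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* (-1)^0 * 2^(0-127) * (1 + 2^(-22) + 2^(-23)) */
        double expected = ldexp(1.0 + ldexp(1.0, -22) + ldexp(1.0, -23), -127);
        printf("%e\n", expected); /* prints 5.877474e-39 */
        return 0;
    }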
My program, however, prints out:
3 4.203895e-45
That is, y has the value 4.203895e-45 instead of the 5.87747e-39 I expected. Why does this happen? What am I doing wrong?
P.S. I have also printed the values directly from gdb, so it is not a problem with the printf call.