In hexadecimal, 1065353216 is 0x3F800000. If you interpret that as a 32-bit floating-point number you get 1.0. If you write it out in binary you get this:
3 F 8 0 0 0 0 0
0011 1111 1000 0000 0000 0000 0000 0000
Or grouped differently:
0 01111111 00000000000000000000000
s eeeeeeee vvvvvvvvvvvvvvvvvvvvvvv
The first bit (s) is the sign bit, the next 8 bits (e) are the exponent, and the last 23 bits (v) are the significand. "The single precision binary floating-point exponent is encoded using an offset binary representation, with the zero offset being 127; also known as exponent bias in the IEEE 754 standard." Interpreting this, you see that the sign is 0 (positive), the exponent is 0 (01111111b = 127, the "zero offset"), and the significand is 0. For a normal number the value is (-1)^s × 1.v × 2^(e − 127), so this gives you +1.0 × 2^0, which is 1.0.
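If you want to check this yourself, here is a minimal sketch (the variable names bits, sign, exponent, and fraction are just illustrative) that pulls the three fields out of 1065353216 with shifts and masks:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t bits = 1065353216u;              // 0x3F800000

    std::uint32_t sign     = bits >> 31;           // s: top bit
    std::uint32_t exponent = (bits >> 23) & 0xFFu; // e: next 8 bits, stored with a bias of 127
    std::uint32_t fraction = bits & 0x7FFFFFu;     // v: low 23 bits

    std::cout << std::hex << bits << std::dec << '\n';  // 3f800000
    std::cout << "sign     = " << sign << '\n';         // 0
    std::cout << "exponent = " << exponent              // 127 (unbiased: 0)
              << " (unbiased " << static_cast<int>(exponent) - 127 << ")\n";
    std::cout << "fraction = " << fraction << '\n';     // 0

    // For a normal number the value is (-1)^s * 1.v * 2^(e - 127),
    // so here it is +1.0 * 2^0 = 1.0.
}
```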
Anyhow, what's happening is that you are taking a reference to a float (b) and reinterpreting it as an int reference via (int&). So when you read the value of j you get the bits from b. Interpreted as a float those bits mean 1.0, but interpreted as an int those bits mean 1065353216.
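Assuming the code in question looks roughly like the snippet below (the names b and j are taken from the answer above, not from anything else in the source), that is exactly what happens. Note that the cast is a reinterpret_cast in disguise, and reading through it is not well-defined C++ (it breaks the aliasing rules), even though it typically behaves as shown on common compilers:

```cpp
#include <iostream>

int main() {
    float b = 1.0f;
    int j = (int&)b;   // reinterpret the bits of b as an int (formally undefined behaviour)

    std::cout << j << '\n';   // prints 1065353216 on typical IEEE 754 platforms
}
```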
For what it's worth, I have never used a cast using & like (int&). I would not expect to see this or use this in any normal C++ code.