This is in C, but I tagged it C++ in case the answer is the same for both. I'm building with the Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.220 for 80x86, if that makes any difference.
Why does this work?
(inVal is an integer whose value is 0x80)
float flt = (float) inVal;
outVal = *((unsigned long*)&flt);
(results in outVal being 0x43000000 -- correct)
But this doesn't?
outVal = *((unsigned long*)&((float)inVal));
(results in outVal being 0x00000080 -- NOT CORRECT :( )
Before asking this question I googled around a bit and found a function in Java that does basically what I want. If you're confused about what I'm trying to do, this small program might help explain it:
class hello
{
    public static void main(String[] args)
    {
        int inside = Float.floatToIntBits(128.0f);
        System.out.printf("0x%08X", inside);
    }
}