I am having some trouble understanding how to retrieve the result of a calculation done on the x87 floating-point coprocessor (Intel's x86 FPU).
Please consider the following data segment.
.data
res real4 ?
x real4 5.0
k real4 3.4
and the following code segments, version 1:
.code
main:
fld x ; 5.0
fadd k ; 5.0 + 3.4
fistp res ; store as integer (rounds to nearest by default, so 8.4 becomes 8)
mov eax, res ; eax = 00000008
end main
and version 2:
.code
main:
fld x ; 5.0
fadd k ; 5.0 + 3.4
fstp res ; store as real
mov eax, res ; eax = 41066666
end main
I understand version 1, with no problem.
It's version 2 I don't understand. I can see in the debugger that it does the exact same calculation as version 1, but when it comes time to store, it stores "41066666"!?
What is the reason for this?
What "encoding" was used to make 8.4 into "41066666"?
Is there a way to convert it back to 8.4, so that I can print it to the console (for example, using the masm32 library's StdOut function)?
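For the printing part, this is roughly what I had in mind, though I'm only guessing that the masm32 library's FloatToStr function is the right one and that it takes a REAL8 input plus a destination buffer:

.data
res8 real8 ?
buff db 32 dup(?)
.code
fld res ; reload the REAL4 result (8.4) onto the FPU stack
fstp res8 ; store it back out widened to REAL8
invoke FloatToStr, res8, ADDR buff ; convert to a decimal string (assumed signature)
invoke StdOut, ADDR buff ; print the string to the console

If that's not the intended approach, pointers to the right masm32 function would be appreciated.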
Thanks.