I was able to reproduce the problem you experienced (VBA), and it does indeed appear to be a bug in the treatment of the `Single` type by VB IDEs specifically. Namely, the VB IDEs improperly cast the `Single` default value to `int` before printing it out again (as part of the method signature) as a (truncated) single-precision floating-point value. This problem does not exist in the Microsoft Script Editor, nor in OleView.exe, etc.
To test, try the following `Single` default value: `18446744073709551615.0`. In my case, this value is properly encoded in the TLB and properly displayed by OleView.exe and by the Microsoft Script Editor as `1.844674E+19`. However, it gets displayed as `-2.147484E+09` in the VB IDEs. Indeed, casting `(float)18446744073709551615.0` to `int` produces `-2147483648` which, displayed as a `float`, produces the observed (incorrect) VB IDE output `-2.147484E+09`.
Similarly, `50.6` gets converted (with rounding) to `int` to produce `51`, which is then printed out as `51`.
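For illustration, here is a small C sketch of the conversion the VB IDEs appear to perform; the actual code inside the IDEs is of course my assumption, this merely reproduces the observed output:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The Single value 18446744073709551615.0 (2^64 - 1) is stored as
       roughly 1.8446744E+19, far outside the range of a 32-bit int. */
    float big = 18446744073709551615.0f;

    /* An out-of-range float-to-int conversion is undefined behavior in C;
       on x86 the conversion instruction yields INT_MIN (-2147483648). */
    int bad = (int)big;
    printf("%d displayed as float: %E\n", bad, (float)bad);
    /* prints: -2147483648 displayed as float: -2.147484E+09 */

    /* 50.6 is converted with rounding (as VB does) rather than truncated,
       which is why 51 is observed instead of 50. */
    printf("50.6 rounded to int: %ld\n", lrintf(50.6f)); /* prints 51 */
    return 0;
}
```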
To work around this issue, use `Double` instead of `Single`, as `Double` is converted and displayed properly by all IDEs I was able to test.
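In IDL terms the fix is just the parameter type; here is a minimal sketch (the interface name, UUID, and method are made up for illustration):

```
[object, uuid(00000000-0000-0000-0000-000000000001), dual]
interface IExample : IDispatch
{
    // With [defaultvalue(50.6)] on a float parameter, the VB IDEs would
    // show the default as 51; declaring the parameter double avoids the bug.
    HRESULT SetInterval([in, defaultvalue(50.6)] double interval);
};
```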
On a tangent, you are probably already aware that certain floating-point values (such as `0.1`) do not have an exact IEEE 754 representation and cannot be distinguished from nearby values (e.g. `0.1000000015`). Thus, a `Single` default value of `0.1`, once widened to double precision for display, will show up in most IDEs as `0.100000001490116`. One way to alleviate this precision issue is to choose a different scale for your parameters (e.g. switch from seconds to milliseconds, so that `0.1` seconds becomes `100` milliseconds, which is unambiguously representable as a single-precision, double-precision, or integral value.)
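To see the representation issue concretely, here is a small C sketch; the printed digits match what the IDEs show once the `float` value is widened to `double`:

```c
#include <stdio.h>

int main(void)
{
    /* 0.1 has no exact binary representation; the nearest float is
       0.100000001490116..., which is what appears once the value is
       widened to double precision for display. */
    printf("%.15g\n", (double)0.1f);   /* prints 0.100000001490116 */

    /* Rescaling to milliseconds sidesteps the issue entirely: 100 is
       exactly representable as float, double, and int alike. */
    printf("%.15g\n", (double)100.0f); /* prints 100 */
    return 0;
}
```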
Cheers,
V.