I have a program written in C#, and some parts are written in native C/C++. I use doubles to calculate some values, and sometimes the result is wrong because the floating-point precision is too low. After some investigation I figured out that someone is setting the floating-point precision to 24 bits. My code works fine when I reset the precision to at least 53 bits (using _fpreset or _controlfp), but I still need to figure out who is responsible for setting the precision to 24 bits in the first place.
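For context, here is a minimal sketch of the reset I'm applying in the native code, assuming MSVC's _controlfp from <float.h> (the function name RestoreDoublePrecision is just for illustration; note the precision bits can only be changed on x86, not x64):

```cpp
#include <float.h>
#include <stdio.h>

// Read the x87 control word, report if precision has been lowered
// to 24 bits, then restore 53-bit (double) precision.
void RestoreDoublePrecision(void)
{
    unsigned int control = _controlfp(0, 0);   // mask 0: read current state only
    if ((control & _MCW_PC) == _PC_24)
        printf("FP precision was 24-bit; restoring 53-bit.\n");

    _controlfp(_PC_53, _MCW_PC);               // set precision bits to 53-bit
}
```

This fixes the symptom, but I'd rather find the culprit than sprinkle calls like this everywhere.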
Any ideas how I could achieve this?