Imagine I write a simple calculator application that just evaluates simple expressions like
1.5 + 30 + 9755 - 30 - 20000 + 999900.54
I vaguely remember that there are precision problems when using floating-point numbers. At which point would my calculator app start to produce wrong results? Most of the time I would just calculate integers like 1 + 2 - 963422, but sometimes I might enter a floating-point number. I have no clear idea where the precision problems would start to take effect. Would only the very last digits of the double be off, like -963419.0000000000003655? Or what would that look like? And is there any way to catch those errors?
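
For reference, here is a quick sketch of the kind of thing I mean, assuming the calculator stores its values as Java doubles (that part is just my assumption, nothing is fixed yet):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // Classic case: 0.1 and 0.2 have no exact binary representation,
        // so their sum is not exactly 0.3.
        System.out.println(0.1 + 0.2);   // prints 0.30000000000000004

        // The expression from above, evaluated with plain doubles;
        // any error would presumably show up in the last decimal digits.
        double result = 1.5 + 30 + 9755 - 30 - 20000 + 999900.54;
        System.out.println(result);
    }
}
```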