Folks,
Even experienced programmers write C# code like this sometimes:
    double x = 0.1;
    double y = 0.2;
    if (x + y == 0.3) {
        // this will never be executed: x + y actually evaluates to
        // 0.30000000000000004, not 0.3
    }
Basically, it's common knowledge that you can't rely on == for doubles (or floats): many decimal values (like 0.1) have no exact binary representation, so rounding errors creep into computed results and the comparison fails in surprising ways.
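You can make the rounding visible by printing the computed value with the round-trip format specifier (a quick sketch; "R" is the standard round-trip format for doubles, and the printed value is what IEEE 754 double arithmetic produces on typical hardware):

    using System;

    class Program
    {
        static void Main()
        {
            double sum = 0.1 + 0.2;
            // "R" shows enough digits to distinguish the stored value from 0.3
            Console.WriteLine(sum.ToString("R"));   // prints 0.30000000000000004
            Console.WriteLine(sum == 0.3);          // prints False
        }
    }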
The problem is that everyone sort of knows this, yet code like this is still all over the place. It's just so easy to overlook.
Questions for you:
- How have you dealt with this in your development organization?
- Is this so common that we should all be screaming really loud for VS2010 to include a compile-time warning whenever someone compares two doubles/floats with ==?
UPDATE: Folks, thanks for the comments. I want to clarify that I most certainly understand that the code above is incorrect. Yes, you never want to compare doubles or floats with ==; you should use an epsilon-based comparison instead (see the sketch below). That's obvious. The real question here is "how do you pinpoint the problem", not "how do you solve the technical issue".
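For completeness, here's the kind of epsilon-based comparison I mean. This is a minimal sketch: the NearlyEqual name and the tolerance value are mine, not from any library, and an absolute epsilon like this breaks down for very large or very small magnitudes (a relative tolerance is often better there).

    using System;

    static class DoubleCompare
    {
        // Absolute-tolerance comparison; the caller picks an epsilon
        // appropriate for the domain.
        public static bool NearlyEqual(double a, double b, double epsilon)
        {
            return Math.Abs(a - b) < epsilon;
        }
    }

    // Usage:
    //   if (DoubleCompare.NearlyEqual(x + y, 0.3, 1e-9)) { ... }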