There are many questions about detecting integer overflow BEFORE the actual addition/subtraction, because of the possible undefined behavior. So, my question is: why does it produce this undefined behavior in the first place?
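For reference, the kind of pre-check those questions discuss looks roughly like this (a minimal sketch; the function name is mine):

```c
#include <limits.h>
#include <stdbool.h>

/* Returns true if a + b would overflow a signed int.
   The check itself only computes values that are representable,
   so it stays well-defined. */
bool add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return true;  /* would exceed INT_MAX  */
    if (b < 0 && a < INT_MIN - b) return true;  /* would go below INT_MIN */
    return false;
}
```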
I can think of 2 causes:
1) A processor that generates an exception in this case. Sure, it can be toggled off, and most probably a well-written CRT will do that.
2) A processor that uses a different representation of numbers (one's complement? base 10?). In that case the undefined behavior would manifest itself as a different result (but would not crash!). Well, we could live with that.
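To make cause 2) concrete, here is a tiny illustration; what it actually prints (or whether the program survives at all) is up to the implementation, since the standard places no requirements on it:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    int y = x + 1;  /* signed overflow: undefined behavior */

    /* On common two's-complement hardware this often appears to wrap
       to INT_MIN, but on a one's-complement or trapping machine the
       result (or the program's survival) could differ. */
    printf("%d\n", y);
    return 0;
}
```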
So, why should someone avoid causing it? Am I missing something?