views:

1600

answers:

12

I know that modern languages handle integer divide by zero as an error just like the hardware does, but what if we could design a whole new language?

Ignoring existing hardware, what should a programming language do when an integer divide by zero occurs? Should it return a NaN of type integer? Should it mirror IEEE 754 floats and return +/- Infinity? Or is the existing design choice correct, and an error should be thrown?

Is there a language that handles integer divide by zero nicely?

EDIT: When I said ignore existing hardware, I meant don't assume an integer is represented as 32 bits; it could be represented in any way you can imagine.

+9  A: 

Yes, it is an error. The result is a number that has no meaning (it is not infinity, despite what some may say).

The best solution would be to check for this before the division occurs.

The second-best option is to throw an exception and handle it.
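For illustration, a rough Java sketch of both approaches (the class and method names here are made up, not from any particular library):

    class SafeDivision {
        // Check before dividing: handle the zero case up front.
        static int divideChecked(int a, int b) {
            if (b == 0) {
                throw new IllegalArgumentException("divisor must not be zero");
            }
            return a / b;
        }

        // Or let the division fail and handle the exception afterwards.
        // Java's integer division throws ArithmeticException on divide by zero.
        static int divideOrFallback(int a, int b, int fallback) {
            try {
                return a / b;
            } catch (ArithmeticException e) {
                return fallback;
            }
        }
    }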

Here is a link to a blog entry on zero: Good Math, Bad Math.

Jim C
Agree with this 100%. A division op is math, and in math dividing by 0 is undefined. It's not legal, it's not allowed, and it's not infinite unless taking limits. Unless your language is overloading division to be more than a mathematical op, divide by 0 should be an error.
nezroy
+6  A: 

Usually, the integer types of programming languages do not support NaN values, since they use all of the, say, 32 bits for storing the number, with no special values reserved. IEEE 754 defines special bit patterns for +Inf, -Inf, NaN, and so on, which have no counterpart for an int.

Personally, I believe the way it's handled, by throwing a DivideByZeroException, is pretty good. If you reach a point where you haven't checked for divide by zero, you've probably missed something you shouldn't have ignored, so it's critical to issue a fatal error.

EDIT: Ignoring the way hardware handles integers, I still believe the error should take the form of an exception or something similar, at least by default. The primary reason doubles allow divide by zero is that a computation may leave you with a very small value near zero to divide by. In fact, +Infinity does not mean real infinity; it means the result is larger than a double can represent. For an int, the range is much more limited and the closest positive value to 0 is 1, so divide by zero on an int is most likely a programming error rather than a loss of precision.
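A small Java sketch of that difference (Java's counterpart of DivideByZeroException is ArithmeticException):

    public class DivDemo {
        public static void main(String[] args) {
            // Doubles follow IEEE 754: dividing by zero gives an infinity or NaN.
            double x = 1.0 / 0.0;    // Double.POSITIVE_INFINITY: larger than a double can represent
            double z = 0.0 / 0.0;    // Double.NaN
            System.out.println(x + " " + z);

            // Integers have no such special values, so the division is an error.
            int a = 1, b = 0;
            int q = a / b;           // throws ArithmeticException at run time
            System.out.println(q);   // never reached
        }
    }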

Mehrdad Afshari
I said ignoring existing hardware, so why are integers 32 bits?
Pyrolistical
I thought you meant ignoring the way hardware deals with divide by zero!
Mehrdad Afshari
+3  A: 

The existing design choice is correct. If you want NaNs and infinities you should be computing in the floating-point space to begin with. Today's floating-point units do integer arithmetic very efficiently.

A language that handles divide by zero nicely is Standard ML: dividing by zero is guaranteed to raise an exception, but it's an exception (Div) like any other and can be caught by user code.

Norman Ramsey
A: 

Sadly, the only general answer to this one is...

... it depends.

'Nicely' can be a right PITA in fact.

Best is to know the implications of such a condition, and handle them.

ChrisA
A: 

Well, I don't know if this adds much, but I think throwing an error is reasonable. I'm guessing this is highly subjective, but to me, having NaN defined in a language is a workaround for something that shouldn't be done in the first place. Dividing by zero is meaningless, and thus an error should be thrown.

But that's just my subjective opinion.

EdgarVerona
+1  A: 

I think division by zero is pretty much the standard example of how exceptions work. Higher-level languages throw a DivisionByZero exception, the x86 hardware (in protected mode) raises a division-by-zero exception as well, etc. Exceptions can be handled.

A division by zero is an error. Failing to report a severe error like that could have disastrous consequences.

Which would you prefer: that the software of the X-ray machine scanning you throw an exception and shut down if it divided by zero for some reason, or that it just return positive infinity and make that the X-ray dose you receive?

DrJokepu
Which would you prefer: a life support machine that halts when it divides by zero, or one that just prints NaN, since the value was only for display purposes? We can both make up examples that assume poor testing or design.
Pyrolistical
Pyrolistical: Obviously, on life-critical systems, robustness is extremely important. That's why a life support machine should throw an exception and *reset* if it divides by zero. Otherwise, it might kill the patient by using invalid data. It should never ignore division by zero.
DrJokepu
You missed my point. I am saying that for life-critical software, no matter how divide by zero is handled, it would be a great failure of testing and design if either of those two examples were to occur.
Pyrolistical
True, it should never happen. But what if it still happens? No amount of good development practices, code reviews, etc. can filter out all of the bugs in all possible circumstances. There must be an error handling mechanism, and dividing by zero is definitely an error. Not handling it could be fatal.
DrJokepu
+1  A: 

I don't think there's a nice way to handle it. Throwing an exception is probably the best thing to do, because it forces the developer to think about it and code correctly. Assuming 1/0 returned NaN, what happens when you try to add NaN to something else? Does it throw an exception, or does it return NaN again (the same could be said for infinity)? Since there is really no defined action to take, the developer should program what they want to happen when divide by zero occurs.

Kibbee
The way IEEE 754 defines it, operations using NaN produce NaN, and NaN does not equal NaN (see the sketch below).
Pyrolistical
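A quick Java illustration of that IEEE 754 behaviour (doubles only, since Java ints have no NaN):

    public class NaNDemo {
        public static void main(String[] args) {
            double nan = 0.0 / 0.0;                  // NaN
            System.out.println(nan + 1.0);           // NaN: any arithmetic involving NaN yields NaN
            System.out.println(nan == nan);          // false: NaN compares unequal to everything, itself included
            System.out.println(Double.isNaN(nan));   // true: the reliable way to test for NaN
        }
    }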
A: 

You can do that, but most of the time it won't make any sense.

In my experience, it is almost always sensible to check whether you are about to divide by zero, because most of the time it indicates a special situation.

Gamecat
A: 

Matlab, a numerical computing environment and programming language, has Inf.

1/0 is Inf. 0/0 is NaN.

splattne
Matlab doesn't have integers; it just has doubles pretending to be integers.
Pyrolistical
1 divided by a very, very small number tends toward infinity, but 1/0 is meaningless.
Jim C
Actually, the limit of a/x as x -> 0 from above is positive infinity for positive a, so you can say the limit of 1/0 is infinity.
Pyrolistical
A: 

An integer, by definition, is a whole number.

NaN, infinity, etc are NOT integers.

If you expect to be working with objects that are not integers, you should use objects that support the needed representations.

If you are expecting your data to always be representable by an integer, then by all means use an integer type, and if there is an error you'll need to handle it, but the bad data is the culprit - not the program (as long as the program is designed correctly). The program should then decide how to recover depending on the design.

So, in short, the current programming languages offer everything you need - an integer type for integers, and various other types for mathematical representations that are not whole numbers.

Please don't turn the integer into something it's not - there's no reason to make the integer handle non-integer values.

Adam Davis
You could have two different types of NaN: one for integers, the other the IEEE 754 one... I don't see your point.
Pyrolistical
The point is that neither of these conforms to the algebraic laws of numbers.
reinierpost
A: 

Division by zero doesn't make sense algebraically: a / b is defined as the number c such that c * b = a, but if b = 0, the only value c * b can ever produce is 0, while a can be any number. So you can't pick a c, and even if you invent one, you're going to have to special-case that c in your code anyway.
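Restating that argument in symbols (nothing new, just the two cases spelled out):

    \[
        \frac{a}{b} := \text{the unique } c \text{ such that } c \cdot b = a
    \]
    \[
        b = 0 \;\Rightarrow\; c \cdot b = 0 \text{ for every } c, \quad\text{so}\quad
        \begin{cases}
            a \neq 0: & \text{no such } c \text{ exists} \\
            a = 0: & \text{every } c \text{ satisfies it}
        \end{cases}
    \]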

Your program probably makes some implicit assumption somewhere that / means what it is defined to mean algebraically, and that assumption goes out the window as soon as you assign a number to be the value of division by zero. So it's best to blow up right away, where the problem arises.

reinierpost