views: 404

answers: 10

What is an integer overflow error? Why do I care about such an error? What are some methods of avoiding or preventing it?

A: 

From Wikipedia:

In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is larger than can be represented within the available storage space. For instance, adding 1 to the largest value that can be represented constitutes an integer overflow. The most common result in these cases is for the least significant representable bits of the result to be stored (the result is said to wrap).

You should care about it especially when choosing the appropriate data types for your program, or you might get very subtle bugs.
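For example, a minimal C sketch of the wrapping the quote describes (using unsigned arithmetic, where wraparound is well-defined; signed overflow is undefined behavior in C):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;  /* the largest representable value */
        u = u + 1;                  /* wraps: only the least significant bits survive */
        printf("%u\n", u);          /* prints 0 */
        return 0;
    }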

Darin Dimitrov
this answers the what but not the why and the how
David
A: 

This happens when you attempt to use an integer for a value higher than the integer's internal representation can support, given the number of bytes used. For example, if the maximum integer value is 2,147,483,647 and you attempt to store 3,000,000,000, you will get an integer overflow error.
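A hedged C sketch of that example (strictly, as the comment below notes, this is an out-of-range conversion on assignment rather than arithmetic overflow, and its result is implementation-defined):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        /* 3,000,000,000 does not fit in a 32-bit signed int (max 2,147,483,647).
           The conversion below is implementation-defined; on typical
           two's-complement machines it wraps to 3000000000 - 2^32. */
        int32_t n = (int32_t)3000000000LL;
        printf("%" PRId32 "\n", n);   /* typically -1294967296 */
        return 0;
    }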

CS
Overflow is an artifact of math operations, not of assignment / storage.
dthorpe
A: 

From http://www.first.org/conference/2006/papers/seacord-robert-slides.pdf :

An integer overflow occurs when an integer is increased beyond its maximum value or decreased beyond its minimum value. Overflows can be signed or unsigned.

P.S.: The PDF has a detailed explanation of overflows and other integer error conditions, and also how to tackle/avoid them.
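A short C illustration of the two directions (unsigned wraparound is well-defined in C; signed overflow is undefined behavior, so the signed case is shown as a pre-check):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = 0;
        u = u - 1;                    /* decreased below its minimum: wraps to UINT_MAX */
        printf("0u - 1 -> %u\n", u);

        int s = INT_MAX;
        if (s > INT_MAX - 1)          /* check before adding: s + 1 would overflow */
            printf("INT_MAX + 1 would overflow\n");
        return 0;
    }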

N 1.1
+1  A: 

An integer overflow error occurs when an operation makes an integer value greater than its maximum.

For example, if the maximum value you can have is 100000, and your current value is 99999, then adding 2 will make it 'overflow'.

You should care about integer overflows because data can be changed or lost inadvertently, and you can avoid them either with a larger integer type (see long int in most languages) or with an arbitrary-precision ("bignum") scheme that converts long strings of digits to very large integers.
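A C sketch of the first option, widening to a larger type before the operation (the arbitrary-precision route would need a bignum library such as GMP, not shown here):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        int a = INT_MAX;
        int b = 2;

        /* Widen an operand BEFORE the addition so the sum is computed
           in the larger type instead of overflowing int. */
        long long sum = (long long)a + b;
        printf("%lld\n", sum);   /* 2147483649 where int is 32 bits */
        return 0;
    }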

Riddari
+5  A: 

The easiest way to explain it is with a trivial example. Imagine we have a 4-bit unsigned integer: 0 would be 0000 and 1111 would be 15. So if you increment 15, instead of getting 16 you'll circle back around to 0000, as 16 is actually 10000 and we can not represent that with less than 5 bytes. Ergo overflow.

In practice the numbers are much bigger and it circles to a large negative number on overflow if the int is signed but the above is basically what happens.

Another way of looking at it is to consider it as largely the same thing that happens when the odometer in your car rolls over to zero again after hitting 999999 km/mi.
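The 4-bit example can be simulated in C by masking to the low four bits after each operation (a sketch; C has no 4-bit integer type):

    #include <stdio.h>

    int main(void)
    {
        unsigned v = 15;       /* 1111: the largest 4-bit unsigned value */
        v = (v + 1) & 0xF;     /* keep only the low 4 bits, as 4-bit hardware would */
        printf("%u\n", v);     /* prints 0: the value circled back around */
        return 0;
    }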

Kris
"less than 5 bits", not "less than 5 bytes".
indiv
+8  A: 

Integer overflow occurs when you try to express a number that is larger than the largest number the integer type can handle.

If you try to express the number 300 in one byte, you have an integer overflow (maximum is 255). 100,000 in two bytes is also an integer overflow (65,535 is the maximum).

You need to care about it because mathematical operations won't behave as you expect. A + B doesn't actually equal the sum of A and B if you have an integer overflow.

You avoid it by not creating the condition in the first place (usually either by choosing your integer type to be large enough that you won't overflow, or by limiting user input so that an overflow doesn't occur).
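A small C sketch of both the wraps described above and the "limit user input" advice (uint8_t and uint16_t stand in for one- and two-byte integers):

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint8_t  one_byte  = (uint8_t)300;      /* 300 mod 256 = 44 */
        uint16_t two_bytes = (uint16_t)100000;  /* 100000 mod 65536 = 34464 */
        printf("%" PRIu8 " %" PRIu16 "\n", one_byte, two_bytes);

        /* Avoidance: validate before narrowing instead of wrapping silently. */
        long input = 300;
        if (input < 0 || input > 255)
            printf("input %ld is out of range for one byte\n", input);
        return 0;
    }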

John at CashCommons
+1  A: 

When you store an integer in memory, the computer stores it as a series of bytes. These can be represented as a series of ones and zeros.

For example, zero will be represented as 00000000 (as an 8-bit integer), and 127 will be represented as 01111111. If you add one to 127, this would "flip" the bits to 10000000, but in a standard two's complement representation, this is actually used to represent -128. This "overflows" the value.

With unsigned numbers, the same thing happens: 255 (11111111) plus 1 would become 100000000, but since there are only 8 "bits", this ends up as 00000000, which is 0.

You can avoid this by doing proper range checking for your correct integer size, or using a language that does proper exception handling for you.
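A C sketch of those bit patterns (the signed case is implementation-defined on conversion back to 8 bits, but behaves as described on typical two's complement machines):

    #include <stdio.h>
    #include <inttypes.h>

    static void print_bits(uint8_t byte)
    {
        for (int i = 7; i >= 0; i--)
            putchar(((byte >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        int8_t s = 127;           /* 01111111 */
        s = (int8_t)(s + 1);      /* implementation-defined; typically -128 */
        print_bits((uint8_t)s);   /* 10000000 */

        uint8_t u = 255;          /* 11111111 */
        u = (uint8_t)(u + 1);     /* well-defined for unsigned: wraps to 0 */
        print_bits(u);            /* 00000000 */
        return 0;
    }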

Reed Copsey
`10000000` is INT_MIN (e.g., -128 for 1 signed byte) in 2's complement. -1 is `FFFFFFFF`.
indiv
+2  A: 

I'd like to be a bit contrarian to all the other answers so far, which somehow accept crappy broken math as a given. The question is tagged language-agnostic and in a vast number of languages, integers simply never overflow, so here's my kind-of sarcastic answer:

What is an integer overflow error?

An obsolete artifact from the dark ages of computing.

Why do I care about it?

You don't.

How can it be avoided?

Use a modern programming language in which integers don't overflow. (Lisp, Scheme, Smalltalk, Self, Ruby, Newspeak, Ioke, Haskell, take your pick ...)

Jörg W Mittag
+1  A: 

Overflow is when the result of an arithmetic operation doesn't fit in the data type of the operation. You can have overflow with a byte-sized unsigned integer if you add 255 + 1, because the result (256) does not fit in the 8 bits of a byte.

You can have overflow with a floating point number if the result of a floating point operation is too large to represent in the floating point data type's exponent or mantissa.

You can also have underflow with floating point types when the result of a floating point operation is too small to represent in the given floating point data type. For example, if the floating point data type can handle exponents in the range of -100 to +100, and you square a value with an exponent of -80, the result will have an exponent around -160, which won't fit in the given floating point data type.
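Both floating point cases are easy to reproduce in C with double (a sketch; the exact limits live in <float.h>):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        double big = DBL_MAX;
        printf("overflow:  %g\n", big * 2.0);    /* inf: exceeds the exponent range */

        double tiny = 1e-200;
        printf("underflow: %g\n", tiny * tiny);  /* 1e-400 is too small: rounds to 0 */
        return 0;
    }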

You need to be concerned about overflows and underflows in your code because they can be silent killers: your code produces incorrect results but might not signal an error.

Whether you can safely ignore overflows depends a great deal on the nature of your program - rendering screen pixels from 3D data has a much greater tolerance for numerical errors than say, financial calculations.

Overflow checking is often turned off in default compiler settings. Why? Because the additional code to check for overflow after every operation takes time and space, which can degrade the runtime performance of your code.

Do yourself a favor and at least develop and test your code with overflow checking turned on.
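One way to act on that advice in C is explicit checked arithmetic (a sketch using the GCC/Clang __builtin_add_overflow extension; other compilers have their own overflow-check switches):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        int a = INT_MAX, b = 1, sum;

        /* Returns true (leaving the wrapped bits in sum) when the
           mathematically correct result does not fit in an int. */
        if (__builtin_add_overflow(a, b, &sum))
            fprintf(stderr, "overflow: %d + %d does not fit in an int\n", a, b);
        else
            printf("%d\n", sum);
        return 0;
    }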

dthorpe
+1 for "do yourself a favour".
Jeroen Pluimers