tags:

views: 274
answers: 9

Why does integer have a range between 0 and 32768?

+1  A: 

If this is an unsigned integer, it is because in your implementation unsigned ints are 16 bits long, and 2 to the power of 16 = 65536 (so the representable range is 0 to 65535).

shuttle87
+3  A: 

Because that is what your implementation supports.

Integer ranges in C (and, well, most other languages as well) are usually based on the range of numbers that can be represented in two's-complement binary. The number of bits used is usually based on features of the platform you use, such as the number of bits that fit in a CPU register.

However, the C standard only specifies the minimum (in magnitude) bounds that each integer type must be able to represent, so an implementation is actually free to use other ways of representing integers, which may lead to other ranges as well; hence the initial comment.

See section 2.2.4.2 of the C89 standard, or 5.2.4.2 in the C99 standard.

Christoffer
+9  A: 

int is guaranteed to be at least 16 bits long (2^16 = 65536), so the minimal range for the int type is -32767 to 32767 (a "negative zero" value is allowed to exist) or 0 to 65535 for unsigned int.

Crozin
That is not entirely correct, the minimum is not `-32767` but `-32768` (for 16-bit architectures).
bitmask
@bitmask: It's allowed to use signed-magnitude format, which assigns two bit-patterns to 0 and excludes -32768.
Potatoswatter
@Crozin: 65536 for `unsigned` is an error though, should be 65535.
Potatoswatter
Ohh... fixed. ;)
Crozin
@bitmask: Look it up (e.g., C99 5.2.4.2.1). The range on a typical 16-bit two's complement implementation does indeed start at -32768. However the standard does not require that implementations use two's complement, hence the minimal range required is specified as -32767 to +32767. This facilitates one's complement implementations that probably won't ever exist again...
John Marshall
I have to say I'm not sure which exact range is correct (I'm not a C programmer), however the point is that `int` is at least 16-bit, not 32-bit, so its **minimal** range is roughly ±32767, not ±2147483647. :)
Crozin
A: 

Because if it's a 16-bit integer, the maximum binary value it can store is sixteen 1's: 1111111111111111, which is 65535 in unsigned form. It is not 65536.

Griever
+3  A: 

It's good practice to use the constants INT_MAX, UINT_MAX, etc. from limits.h. This way you don't have to worry about the underlying size of int on different platforms.

Abhinay K Reddyreddy
OK for C - in C++ use `std::numeric_limits` instead
Steve Townsend
A: 

ints are guaranteed to be at least 16 bits long. However, on most 32-bit architectures they are 32 bits long, so their range is [-2147483648, 2147483647].

short ints, however, are usually 16 bits long.

Edgar Bonet
A: 

First, that range represents an "unsigned short integer"; other integer types can hold much larger values and can be negative. Second, an unsigned short integer is typically stored using 16 bits, so there are 2^16 = 65536 possible binary combinations, hence that many distinct storable values.

helperMan
Range 0..65535, actually.
David Thornley
+1  A: 

The technical answer is that an unsigned short int typically has a range of 0 - 65,535. (Not 65,536! The zero must also be counted.) This is an artifact (or "feature" if you prefer) of computer memory which uses 8-bit bytes.

Usual sizes: the char data type uses only a single byte, the short data type uses two bytes, the int data type uses four bytes, etc.

The difference between signed and unsigned data types is that the most-significant bit of a signed number is used to indicate the sign: 0 flags positive numbers, and 1 flags negative numbers. That effectively cuts the positive range in half, allocating the other half to the negative range.

Exact data type sizes are only standardized through the stdint.h C standard library header. Of the types in this header, uint64_t has the greatest positive range, providing for numbers between (and including) 0 - 18,446,744,073,709,551,615.

Parasyte
+1 for the "zero must also be counted". I was reading through the answers and was wondering why it was 65535 as opposed to 65536.
Marlon
Forgetting that zero is also a number leads to bugs called off-by-one errors: http://en.wikipedia.org/wiki/Off-by-one_error
Parasyte
+1  A: 
  1. stdint.h: has typedefs for integers of different widths (8, 16, 32, 64), both signed and unsigned, e.g. int8_t, uint32_t, etc.

  2. limits.h: has symbolic constants for the minimum and maximum values of each of the above types, e.g. INT_MIN, UINT_MAX, etc.

    2.1 However, as Steve Townsend commented above, the more C++-ish way, which also exposes more information (e.g. digits()), is to use std::numeric_limits.

ArunSaha