Why does an integer have a range between 0 and 32768?
If this is an unsigned integer, it is because 2 to the power of 16 = 65536 and, in your implementation, unsigned ints are 16 bits long.
Because that is what your implementation supports.
Integer ranges in C (and most other languages as well) are usually based on the range of numbers that can be represented in two's-complement binary. The number of bits used is usually based on features of the platform you use, such as the number of bits that fit in a CPU register.
However, the C standard only specifies the minimum (in magnitude) representable bounds for the different integer types, so an implementation is actually free to represent integers in other ways, which may lead to other ranges as well; hence the initial comment.
See section 2.2.4.2 of the C89 standard, or 5.2.4.2 in the C99 standard.
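Those guaranteed minimums can be checked at compile time. A minimal sketch, assuming a C11 compiler (for _Static_assert); the constants come from limits.h:

```c
#include <limits.h>

/* The standard guarantees these minimum magnitudes for int;
   an implementation may exceed them but never fall short. */
_Static_assert(INT_MAX >= 32767, "int must cover at least -32767..32767");
_Static_assert(INT_MIN <= -32767, "int must cover at least -32767..32767");
_Static_assert(UINT_MAX >= 65535u, "unsigned int must cover at least 0..65535");

int main(void) { return 0; }
```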
int is guaranteed to be at least 16 bits long (2^16 = 65536), so the minimal range for the int type is -32767 to 32767 (the standard allows a representation with a "negative zero" value) or 0 to 65535 for unsigned int.
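The unsigned range is modular, which is easy to demonstrate. A small sketch, assuming unsigned short is 16 bits wide (typical, but not required by the standard):

```c
#include <stdio.h>

int main(void) {
    unsigned short x = 65535; /* the maximum value if unsigned short is 16 bits */
    x = x + 1;                /* conversion back to unsigned short wraps modulo 2^16 */
    printf("%u\n", (unsigned)x); /* prints 0 on such a platform */
    return 0;
}
```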
Because if it's a 16-bit integer, the maximum binary value it can store is sixteen 1s: 1111111111111111, which is 65535 in unsigned form, not 65536.
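A quick way to verify that arithmetic (a minimal sketch; unsigned long is guaranteed to be at least 32 bits, so the shift is well-defined):

```c
#include <stdio.h>

int main(void) {
    /* sixteen 1-bits equal 2^16 - 1 = 65535, not 2^16 */
    unsigned long v = (1ul << 16) - 1;
    printf("%lu\n", v);  /* 65535 */
    printf("%#lx\n", v); /* 0xffff */
    return 0;
}
```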
It's good practice to use the constants INT_MAX, UINT_MAX, etc. That way you don't have to worry about the underlying size of int on different platforms.
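For example, a minimal sketch printing those constants (all from the standard limits.h header):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("INT_MIN  = %d\n", INT_MIN);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("UINT_MAX = %u\n", UINT_MAX);
    printf("SHRT_MAX = %d\n", SHRT_MAX);
    return 0;
}
```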
ints are guaranteed to be at least 16 bits long. However, on most 32-bit architectures they are instead 32 bits long, so their range is [-2147483648, 2147483647].
short ints, however, are usually 16 bits long.
First, the range 0 to 65535 is the range of an "unsigned short integer"; other integer types can hold values that range much higher and can be negative. Second, an unsigned short integer is stored using 16 bits, and 2^16 = 65536 is the number of binary combinations, hence the number of possible values that can be stored (0 through 65535).
The technical answer is that an unsigned short int typically has a range of 0 - 65,535. (Not 65,536! The zero must also be counted.) This is an artifact (or "feature", if you prefer) of computer memory, which uses 8-bit bytes.
Usual numbers: the char data type uses only a single byte, the short data type uses two bytes, the int data type uses four bytes, etc.
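Those sizes are typical rather than guaranteed, so a quick sizeof check is the reliable way to see them on your platform (a minimal sketch; the values in the comments are the usual ones, not promises):

```c
#include <stdio.h>

int main(void) {
    printf("sizeof(char)  = %zu\n", sizeof(char));  /* always 1 by definition */
    printf("sizeof(short) = %zu\n", sizeof(short)); /* usually 2 */
    printf("sizeof(int)   = %zu\n", sizeof(int));   /* usually 4 */
    printf("sizeof(long)  = %zu\n", sizeof(long));  /* 4 or 8, platform-dependent */
    return 0;
}
```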
The difference between signed and unsigned data types is that the most significant bit is used in a signed number to indicate signedness: a 0 flags positive numbers, and a 1 flags negative numbers. That effectively cuts the upper range for positive numbers in half, as it allocates the other half to the negative range.
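To see the most significant bit at work, one can reinterpret the same 16-bit pattern both ways. A sketch assuming the fixed-width types int16_t/uint16_t exist and a two's-complement representation (universal in practice):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint16_t u = 0x8000;      /* only the most significant bit set */
    int16_t  s;
    memcpy(&s, &u, sizeof s); /* reinterpret the same 16 bits as signed */
    printf("unsigned: %u\n", (unsigned)u); /* 32768 */
    printf("signed:   %d\n", (int)s);      /* -32768 on two's-complement machines */
    return 0;
}
```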
Data type sizes are only standardized through the stdint.h C standard library header. In this header, uint64_t has the greatest positive range, providing for numbers between (and including) 0 and 18,446,744,073,709,551,615.
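For instance, a minimal sketch printing that maximum (PRIu64, from inttypes.h, is the portable format specifier for uint64_t):

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint64_t max = UINT64_MAX; /* 18,446,744,073,709,551,615 */
    printf("%" PRIu64 "\n", max);
    return 0;
}
```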
stdint.h: has typedefs for integers of different widths (8, 16, 32, 64 bits), both signed and unsigned, e.g. int8_t, uint32_t, etc.
limits.h: has symbolic constants for the minimum and maximum values of each of the above types, e.g. INT_MIN, UINT_MAX, etc.
However, as Steve Townsend commented above, the more C++-ish way, and a means to explore more information (e.g. digits), is to use std::numeric_limits.