views: 460

answers: 9

Hi, please clarify: what exactly does range define?

A: 

Range is the minimum to maximum value supported for that datatype.

Integers in C are 16-bit.

Signed int: -32768 to 32767, i.e. (-2^15) to (2^15 - 1)

Unsigned int: 0 to 65535, i.e. 0 to (2^16 - 1)
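
If you want an integer that is exactly 16 bits wide regardless of platform, here is a minimal sketch using the C99 fixed-width types from <stdint.h>, whose ranges match the figures above by definition:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int16_t/uint16_t are exactly 16 bits wide wherever they exist. */
        printf("int16_t : %d to %d\n", (int)INT16_MIN, (int)INT16_MAX);
        printf("uint16_t: 0 to %u\n", (unsigned)UINT16_MAX);
        return 0;
    }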

Rashmi Pandit
This isn't the correct range for an int in C
Garry Shutler
Sorry, these are for C#
Rashmi Pandit
Edited response accordingly
Rashmi Pandit
A: 

Range is the set of values a datatype can hold, defined by its minimum and maximum values.

Silfverstrom
int is that small? You mean short? int, short and long are not that explicitly defined in the C language; they are somewhat machine-dependent, with int somewhere between short and long (inclusive). Best look at <limits.h> on your machine.
Peter Miehle
-1 for assuming int is 16-bit and not even admitting that 16-bit is probably not the default/mainstream platform at the moment.
unwind
+5  A: 

Range means the maximum and minimum values that can be stored inside a variable of a given type. For example, if you have an unsigned char and we assume the datatype is 8 bits, you can store values ranging from 0 to 2^8 - 1, i.e. 0 to 255, inside it.
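
A quick sketch of that, assuming CHAR_BIT is 8 on your platform (UCHAR_MAX comes from <limits.h>):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char c = UCHAR_MAX;          /* 255 when CHAR_BIT == 8 */
        printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);
        c = c + 1;                            /* unsigned arithmetic wraps: back to 0 */
        printf("UCHAR_MAX + 1 wraps to %u\n", (unsigned)c);
        return 0;
    }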

Naveen
+1  A: 

More about datatypes in C and their ranges: here

Fortega
+3  A: 

You should take a look at limits.h in your standard include path; it contains the exact ranges for your machine type.
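
For example, a short sketch that prints a few of the limits.h values on your machine:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("char  : %d to %d\n", CHAR_MIN, CHAR_MAX);
        printf("short : %d to %d\n", SHRT_MIN, SHRT_MAX);
        printf("int   : %d to %d\n", INT_MIN, INT_MAX);
        printf("long  : %ld to %ld\n", LONG_MIN, LONG_MAX);
        printf("unsigned int: 0 to %u\n", UINT_MAX);
        return 0;
    }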

Peter Miehle
A: 

On x86, char is 1 byte, short is 2, int is 4, float is 4, and double is 8.

Depending on how you use them (signed or unsigned), you can calculate the range from that.
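
You can check those sizes on your own machine; a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof reports the size in bytes on the current platform. */
        printf("char  : %zu byte(s)\n", sizeof(char));
        printf("short : %zu\n", sizeof(short));
        printf("int   : %zu\n", sizeof(int));
        printf("float : %zu\n", sizeof(float));
        printf("double: %zu\n", sizeof(double));
        return 0;
    }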

samoz
+1  A: 

Range of a variable

The range of a variable is given as the difference between the highest and lowest value that that variable can hold. For example, the range of a signed 16-bit integer variable is -32,768 to +32,767. In the case of an integer, the variable definition is restricted to whole numbers only, and the range will cover every number within its range (including the maximum and minimum). However, for other numeric types, such as floating point numbers, the range only expresses the largest and smallest number that may be stored - within the range there will be many numbers that cannot be represented.

From Wikipedia
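
To illustrate that last point: assuming IEEE 754 single precision (24 significand bits), whole numbers above 2^24 start to fall between representable float values:

    #include <stdio.h>

    int main(void)
    {
        /* 16777216 == 2^24 is exactly representable in a float,
           but 16777217 is not: it rounds to a neighbouring value. */
        float a = 16777216.0f;
        float b = 16777217.0f;
        printf("a = %.1f\nb = %.1f\n", a, b);   /* both print 16777216.0 */
        return 0;
    }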

Colin Pickard
+2  A: 

Most types in C are machine-dependent, so you should look at the limits.h provided by your compiler for your architecture.

There's also a manual way to check it for ordinal types (sketched in code below):

  • if unsigned: min = 0, max = 2**(sizeof(type)*8)-1
  • if signed: min = -2**(sizeof(type)*8-1), max = 2**(sizeof(type)*8-1)-1

For floating point values, you can take a look at the IEEE 754 standard, as it's the format used in nearly all architectures.

EDIT:

The definition of range is the difference between the max and min values the type can hold. For ordinal types that difference is 2**(sizeof(type)*8) - 1, since the type can represent 2**(sizeof(type)*8) distinct values.
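
A sketch of the manual method, assuming no padding bits and two's complement (true on virtually all current hardware):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Number of bits in an int, assuming no padding bits. */
        unsigned bits = sizeof(int) * CHAR_BIT;

        /* Unsigned max: 2**bits - 1, computed without overflow (all bits set). */
        unsigned int umax = (unsigned int)-1;

        /* Signed range, assuming two's complement:
           max = 2**(bits-1) - 1, min = -2**(bits-1) */
        int smax = (int)(umax >> 1);
        int smin = -smax - 1;

        printf("int is %u bits\n", bits);
        printf("unsigned int: 0 to %u\n", umax);
        printf("signed int  : %d to %d\n", smin, smax);
        return 0;
    }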

fortran
A: 

A data type is an abstraction applied to a chunk of memory that lets us see that piece of memory as an area that can represent a value.

For example, a single byte consists of 8 bits of memory. In the following diagram, each bit is represented by an underscore (_):

byte: _ _ _ _ _ _ _ _  <- 8 bits

Since we have 8 positions where we can put either a 0 or a 1 (each memory bit can only be set to an on or off state, hence binary), we can have 2^8, or 256, combinations of distinct values represented in the 8 bits.

This is where the concept of range comes into play: how do we allocate those 256 combinations of values to a usable range?

One way is to take the first of the 256 combinations as a 0, and the final combination as 255:

byte: 0 0 0 0 0 0 0 0  <- Represents a "0"
byte: 0 0 0 0 0 0 0 1  <- Represents a "1"

        .. so on ..

byte: 1 1 1 1 1 1 1 0  <- Represents a "254"
byte: 1 1 1 1 1 1 1 1  <- Represents a "255"

For this data type, the range is from 0 to 255. This type is generally called an unsigned byte, because the values it represents carry no sign; they are all treated as non-negative numbers.

On the other hand, since we have 256 combinations, what if we assigned half of them to positive numbers and the other half to negative numbers? So we assign a positive or negative value to each byte representation:

byte: 0 1 1 1 1 1 1 1  <- Represents a "127"
byte: 0 1 1 1 1 1 1 0  <- Represents a "126"

        .. so on ..

byte: 0 0 0 0 0 0 0 1  <- Represents a "1"
byte: 0 0 0 0 0 0 0 0  <- Represents a "0"
byte: 1 1 1 1 1 1 1 1  <- Represents a "-1"

        .. so on ..

byte: 1 0 0 0 0 0 0 1  <- Represents a "-127"
byte: 1 0 0 0 0 0 0 0  <- Represents a "-128"

The above representation is called a "two's complement" system, and the table above has been adapted from the Wikipedia article on two's complement.

With this type of representation, the same 8 bits can represent a range of numbers from -128 to 127. This representation is generally called a signed byte, because it is a byte type that can represent both positive and negative numbers.

In comparing unsigned byte and signed byte, their ranges are different:

unsigned byte :       0  -  255
signed byte   :    -128  -  127

However, both have 256 possible combinations of values; they differ only in the range of values they can represent. That is the range of a data type.
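
A small sketch showing the same 8-bit pattern read through both types (converting a signed value to unsigned is well-defined in C; the bit patterns shown in the comments assume two's complement, as above):

    #include <stdio.h>

    int main(void)
    {
        signed char s = -1;                   /* bit pattern 11111111 */
        unsigned char u = (unsigned char)s;   /* same bits read as unsigned: 255 */
        printf("signed   -1 -> unsigned %u\n", (unsigned)u);

        s = -128;                             /* bit pattern 10000000 */
        u = (unsigned char)s;                 /* read as unsigned: 128 */
        printf("signed -128 -> unsigned %u\n", (unsigned)u);
        return 0;
    }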

Similarly, this can be extended to the int, long, float and double types as well. The number of bits assigned to each data type differs. For example:

int:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _   <- 16 bits
long: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _  <- 32 bits

Note: The actual number of bits for each type, such as int and long can be implementation- and architecture-dependent, so the above chart is not necessarily true.

In the above chart, the int type is represented by 16-bits, which is 2^16 or 65536 combinations of values that it can represent. Again, like the byte, the range of values can be all positives or split into positive and negatives:

unsigned int  :       0  -  65535
signed int    :  -32768  -  32767

(Again, int does not necessarily have to be 16-bits.)

Floating point types such as float and double are also represented by bits in memory, but their data representation differs from integer data types such as byte and int: they store a value in memory as a binary fraction and an exponent. Floating point types have the concept of ranges as well.

For the nitty-gritty details of how floating point values are defined and calculated in modern systems, refer to the Wikipedia article on IEEE 754.
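
As limits.h does for the integer types, float.h reports the floating point ranges; a short sketch:

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* FLT_MAX/DBL_MAX are the largest finite values; FLT_MIN/DBL_MIN are
           the smallest positive normalised values (the most negative finite
           value is -FLT_MAX / -DBL_MAX). */
        printf("float : up to %e (smallest positive %e)\n", (double)FLT_MAX, (double)FLT_MIN);
        printf("double: up to %e (smallest positive %e)\n", DBL_MAX, DBL_MIN);
        return 0;
    }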

The range of a data type arises from the number of distinct value combinations that can be represented in the memory assigned to a single unit of that type, and from how those combinations are mapped to the actual values they represent.

coobird