tags:

views: 263

answers: 8

Why do we use integers in C at all?

#include <stdio.h>
int main()
{
    char c=10;
    printf("%d",c);
    return 0;
}

Is the same as:

#include <stdio.h>
int main()
{
    int c=10;
    printf("%d",c);
    return 0;
}
+11  A: 

Because char holds numbers only from -127 to 127

qrdl
Its guaranteed range is either -127 to 127 *or* 0 to 255, so 0 to 127 is the portable range.
caf
This isn't the 'real' reason for a type system.
jjnguy
@caf By default all integer types are signed, aren't they?
qrdl
@Justin I'm not discussing the reason for type system, but merely answering OP's question.
qrdl
@qrdl: Plain `char` is special - it is allowed to be either signed or unsigned (and is actually a distinct type from both `signed char` and `unsigned char`).
caf
@qrdl, I understand. Your answer is definitely not wrong.
jjnguy
@caf I keep learning after 20+ years of programming. Didn't know that, thank you.
qrdl
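
To see what range plain char actually has on a given implementation, you can print the limits from <limits.h>. This is only a minimal sketch; the values it prints are implementation-defined, which is exactly the point of the comments above.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN/CHAR_MAX describe plain char, which may be signed or unsigned. */
    printf("plain char:    %d to %d\n", CHAR_MIN, CHAR_MAX);
    printf("signed char:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("unsigned char: 0 to %d\n", UCHAR_MAX);
    return 0;
}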
+2  A: 

A character can hold only 8 bits, while an integer can have 16, 32, or even 64 bits (long long int).

rhino
Or bigger than 64 bits; IBM Power7 has 128-bit integer registers.
Jonathan Leffler
@Jonathan Leffler: the width of the integer types in C is dependent on the compiler, not the machine architecture.
JeremyP
@Jeremy: true, and the compilers for Power7 support 128-bit integers (and also support floating point decimals, too).
Jonathan Leffler
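
Since the widths depend on the compiler, a quick sizeof check shows what you get locally. This is just a sketch (it assumes a C99 compiler for the %zu format); the numbers will vary between platforms.

#include <stdio.h>

int main(void)
{
    /* sizeof reports sizes in chars (bytes); sizeof(char) is always 1. */
    printf("char:      %zu\n", sizeof(char));
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    return 0;
}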
+2  A: 

Try this:

#include <stdio.h>
int main()
{
    int c = 300;
    printf("%d", c);
    return 0;
}

#include <stdio.h>
int main()
{
    char c = 300;
    printf("%d", c);
    return 0;
}

The data types char, short, int, long and long long hold integers of (possibly) different sizes that can take values up to a certain limit. char holds an 8-bit number (which is technically neither signed nor unsigned, but will actually be one or the other). Therefore the range is only 256 values (-128 to 127 or 0 to 255).

Good practice is to avoid char, short, int, long and long long and use int8_t, int16_t, int32_t, uint8_t etc., or even better: int_fast8_t, int_least8_t etc.

Al
I disagree entirely on that "good practice". `char`, `short`, `int`, `long` and `long long` all have guaranteed minimum ranges, which in most cases is what you need (the main exception is when you are dealing with a specified binary interface, such as a binary file or network protocol). The exact-width types are not guaranteed to be provided.
caf
@caf: Hence the "even better". With C99, `int_fast8_t`, `int_least8_t` etc. are guaranteed to be provided and are guaranteed to be the fastest and smallest (respectively) integer types that have at least 8 bits. Likewise for the other sizes.
Al
And the latter, `int_least8_t`, is guaranteed to be `signed char`. So why not just use `signed char`?
R..
@R: That's fair enough for `signed char`, but not as helpful for 16 and 32 bit integers. `int_least8_t` then has the advantage of consistency with the rest of the code that uses (e.g.) `uint_fast16_t`.
Al
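
For reference, here is a small sketch of the <stdint.h> approach discussed above. It assumes a C99 compiler; the least/fast variants are required by C99, while the exact-width types are optional.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int_least8_t small = 100;   /* at least 8 bits, as small as possible */
    int_fast16_t quick = 30000; /* at least 16 bits, as fast as possible */

    /* PRIdLEAST8 / PRIdFAST16 are the matching printf format macros. */
    printf("%" PRIdLEAST8 " %" PRIdFAST16 "\n", small, quick);
    return 0;
}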
+1  A: 

Because of

#include <stdio.h>

int main(int argc, char **argv)
{
        char c = 42424242;
        printf("%d", c); /* Oops. */
        return(0);
}
Michael Foukarakis
That Oops will be for int too :P that's out of its range :D
fahad
@fahad: I don't think so. Why do you say so?
Michael Foukarakis
@Michael: 42424242 would be out of range of int. You may need a long for that.
fahad
Out of the range the standard guarantees `int` to have, but safely within the size of `int` on any sane platform.
R..
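
Whether 42424242 actually fits in an int can be checked against <limits.h>. A minimal sketch; remember the standard only guarantees INT_MAX >= 32767.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("INT_MAX here is %d\n", INT_MAX);
#if INT_MAX >= 42424242
    puts("42424242 fits in an int on this platform");
#else
    puts("42424242 does not fit in an int on this platform");
#endif
    return 0;
}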
+16  A: 

Technically all datatypes are represented with 0's and 1's. So, if they are all the same in the back end, why do we need different types?

Well, a type is a combination of data, and the operations you can perform on the data.

We have ints for representing numbers. They have operations like + for computing the sum of two numbers, or - to compute the difference.

When you think of a character, in the usual sense, it represents one letter or symbol in a human-readable format. Being able to sum 'A' + 'h' doesn't make sense. (Even though C lets you do it.)

So, we have different types in different languages to make programming easier. They essentially encapsulate data and the functions/operations that are legal to perform on them.

Wikipedia has a good article on Type Systems.

jjnguy
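
As a small illustration of the 'A' + 'h' point above (a sketch assuming an ASCII character set): the addition compiles because character constants are just small integers in C, even though the result has no meaning as a letter.

#include <stdio.h>

int main(void)
{
    /* 'A' is 65 and 'h' is 104 in ASCII; both constants have type int. */
    int sum = 'A' + 'h';

    printf("'A' + 'h' = %d\n", sum); /* prints 169: legal, but meaningless */
    return 0;
}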
+2  A: 

Broadly speaking, a char is meant to be the smallest unit of sensible data storage on a machine, but an int is meant to be the "best" size for normal computation (e.g. the size of a register). The size of any data type can be expressed as a number of chars, but not necessarily as a number of ints. For example, on Microchip's PIC16, a char is eight bits, an int is 16 bits, and a short long is 24 bits. (short long would have to be the dumbest type qualifier I have ever encountered.)

Note that a char is not necessarily 8 bits, but usually is. Corollary: any time someone claims that it's 8 bits, someone will chime in and name a machine where it isn't.

detly
Like that Cray with 32-bit int and 64-bit char?
Vatine
`int` is not the "best" size (any more) for computations but just the smallest reasonable one. This is why all operands of `+` etc. are first promoted to `int` when they have a smaller width, I think. On modern 64-bit architectures the "best" size is mostly the 64-bit type, whereas `int` still remains at 32 bits. This is captured by the `int_fastN_t` types that Al mentions in his answer. And no, I will not comment on the 8-bit issue ;-)
Jens Gustedt
@Vatine: since C requires 'sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)', I believe the Cray actually had 32-bit `char`; they wanted to have UTF-32 support, perhaps?
Jonathan Leffler
@Jens Gustedt — I didn't know that 64 bit arches still had 32 bit `int`s ... live and learn :)
detly
@detly: if in the future we want to have standard integer types of 8, 16, 32, 64, and 128 bits, then the only possibility is to have `int` be 32 bits. In particular x86_64 follows that line and has `int` at 32 bits and pointers at 64.
Jens Gustedt
@Jonathan Leffler I think they simply ignored that part of the C standard. I don't think UTF-32 was a big issue in the early-mid 80s.
Vatine
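
If you want to check the char width on your own machine, CHAR_BIT from <limits.h> gives it directly. A minimal sketch; the standard only guarantees CHAR_BIT >= 8.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char on this implementation. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    return 0;
}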
A: 

char can't hold everything that integers can.
At least not in the way you assign a value to a char. Do some experimenting with sizeof to see if there is a difference between char and int.

If you really wish to use char instead of int, you probably should consider char[] instead, and store the ASCII, base 10, character representation of the number. :-)

MattBianco
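
Here is a rough sketch of the char[] idea mentioned above. It assumes a C99 compiler for snprintf and an int of at most 32 bits, so 12 chars are enough for the sign, the digits, and the terminating '\0'.

#include <stdio.h>

int main(void)
{
    char text[12];   /* room for "-2147483648" plus '\0' */
    int  value = 300;

    /* Store the base-10 character representation instead of the raw value. */
    snprintf(text, sizeof text, "%d", value);
    printf("as a string: %s\n", text);
    return 0;
}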
+1  A: 

From a machine architecture point of view, a char is a value that can be represented in 8 bits (whatever happened to non-8-bit architectures?).

The number of bits in an int is not fixed; I believe it is defined as the "natural" value for a specific machine, i.e. the number of bits it is easiest/fastest for that machine to manipulate.

As has been mentioned, all values in computers are stored as sequences of binary bits. How those bits are interpreted varies. They can be interpreted as binary numbers or as a code representing something else, such as a set of alphabetic characters, or as many other possibilities.

When C was first designed the assumption was that 256 codes were sufficient to represent all the characters in an alphabet. (Actually, this was probably not the assumption, but it was good enough at the time and the designers were trying to keep the language simple and match the then-current machine architectures.) Hence an 8-bit value (256 possibilities) was considered sufficient to hold an alphabetic character code, and the char data type was defined as a convenience.

Disclaimer: all that is written above is my opinion or guess. The designers of C are the only ones who can truly answer this question.

A simpler, but misleading, answer is that you can't store the integer value 257 in a char but you can in an int.

yde