A question was asked, and I am not sure whether I gave an accurate answer or not. The question was: why use int and not char, and why are they separate types at all? It's all just bits reserved in memory, so why do data types have categories?

Can anyone shed some light upon it?

+9  A: 

char is the smallest addressable chunk of memory – well suited for manipulating data buffers, but it can't hold more than 256 distinct values (if char is 8 bits, which is usual) and is therefore not very good for numeric calculations. int is usually bigger than char – more suitable for calculations, but not as suitable for byte-level manipulation.
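
A quick C sketch of the difference (assuming the usual 8-bit char; the buffer contents are just example bytes):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* char is handy for walking a raw byte buffer... */
        unsigned char buffer[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
        for (size_t i = 0; i < sizeof buffer; i++)
            printf("%02X ", (unsigned)buffer[i]);
        printf("\n");

        /* ...but with only 256 distinct values it wraps around quickly,
           while int has far more headroom. */
        unsigned char c = UCHAR_MAX;
        c++;                 /* wraps to 0 */
        int n = UCHAR_MAX;
        n++;                 /* 256, no problem for an int */
        printf("%d %d\n", c, n);
        return 0;
    }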

sharptooth
It is not true in general for C that char is 8 bits. It is just very common, but not dictated or guaranteed by the language in any way. See <limits.h> and CHAR_BIT.
unwind
Well, the ANSI C language definition guarantees that a char is at least 8 bits, but it could be larger, theoretically.
Lucas Lindström
I once worked on a CPU design where sizeof(char), sizeof(short), sizeof(int), sizeof(long), and sizeof(float) were all 1. Had <limits.h> existed then, CHAR_BIT would have been 32. Luckily for most programmers' sanity, the customer abandoned the project and it never went anywhere.
RBerteig
sizeof returns the size in bytes though, doesn't it? So sizeof(char) == 1 would be true for any system that implements char as one byte (whether that is 8 bits, 7 bits or something even crazier).
Lucas Lindström
A: 

Hi

In general, algorithms and designs are abstractions, and data types help in implementing those abstractions. For example, there is a good chance that a weight is represented as a number with a fractional part, which is best stored as a float or double, i.e. a type that carries precision.
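
A minimal illustration of picking a type to fit the abstraction (the variable names here are just made up for the example):

    #include <stdio.h>

    int main(void)
    {
        /* A weight naturally has a fractional part, so a floating-point
           type models the abstraction better than an integer would. */
        double weight_kg = 72.5;

        /* A count, on the other hand, is naturally a whole number. */
        int item_count = 3;

        printf("%d items, %.1f kg each\n", item_count, weight_kg);
        return 0;
    }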

I hope this helps.

Andriyev
A: 

int is the "natural" integer type, you should use it for most computations.

char is essentially a byte; it's the smallest memory unit addressable. char is not 8-bit wide on all platforms, although it's the case most of the time.
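
You can check the actual sizes on your platform with something like this (the output is implementation-defined):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; the others vary by platform. */
        printf("char:  %zu\n", sizeof(char));
        printf("short: %zu\n", sizeof(short));
        printf("int:   %zu\n", sizeof(int));
        printf("long:  %zu\n", sizeof(long));
        return 0;
    }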

Bastien Léonard
+1  A: 

In the past, computers had little memory. That was the prime reason for having different data types. If you needed a variable to hold only small numbers, you could use an 8-bit char instead of a 32-bit long. However, memory is cheap today, so this reason is less applicable, but the distinction has stuck anyway.

However, bear in mind that every processor has a default data type, in the sense that it operates at a certain width (usually 32 bits). So, if you use an 8-bit char, the value may need to be extended to 32 bits and back again for computation. This may actually slow down your algorithm slightly.
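
This widening is visible in the language itself: char operands are promoted to int before arithmetic. A small sketch (assuming an 8-bit unsigned char):

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 200, b = 100;

        /* Both operands are promoted to int before the addition,
           so the intermediate result 300 survives intact here... */
        int wide = a + b;

        /* ...but it is truncated when stored back into a char:
           300 % 256 == 44 on an 8-bit unsigned char. */
        unsigned char narrow = a + b;

        printf("%d %d\n", wide, narrow);
        return 0;
    }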

sybreon
+2  A: 

Remember that C is sometimes used as a higher-level assembly language for interacting with low-level hardware. You need data types that match machine-level features, such as byte-wide I/O registers.

From Wikipedia, C (programming language):

C's primary use is for "system programming", including implementing operating systems and embedded system applications, due to a combination of desirable characteristics such as code portability and efficiency, ability to access specific hardware addresses, ability to "pun" types to match externally imposed data access requirements, and low runtime demand on system resources.
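
For example, a byte-wide hardware register is typically accessed through a char-sized type; the address below is purely hypothetical:

    #include <stdint.h>

    /* Hypothetical address of a byte-wide, memory-mapped status register. */
    #define STATUS_REG (*(volatile uint8_t *)0x40021000u)

    void wait_until_ready(void)
    {
        /* Poll a single hardware byte; a char-sized type matches the
           register's width exactly. */
        while ((STATUS_REG & 0x01u) == 0) {
            /* spin */
        }
    }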

gimel
+1  A: 

The standard imposes very few requirements on char and int:

  • A char must be able to hold an ASCII value, that is, 7 bits minimum (EDIT: CHAR_BIT is at least 8 according to the C standard). It is also the smallest addressable block of memory.

  • An int is at least 16 bits wide and is the "recommended" default integer type. Its exact width is left to the implementation (your C compiler).
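
The actual limits on a given implementation can be read from <limits.h>, for example:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* The standard only guarantees minimums (CHAR_BIT >= 8,
           INT_MAX >= 32767); the real values are implementation-defined. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
        printf("INT_MIN  = %d, INT_MAX  = %d\n", INT_MIN, INT_MAX);
        return 0;
    }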

Dug up from the C standard 6.2.5.3: An object declared as type char is large enough to store any member of the basic execution character set. If a member of the basic execution character set is stored in a char object, its value is guaranteed to be nonnegative. If any other character is stored in a char object, the resulting value is implementation-defined but shall be within the range of values that can be represented in that type.
5.2.4.2.1 says: number of bits for smallest object that is not a bit-field (byte) - CHAR_BIT 8. This means that there must be at least 8 bits in a 'char' value (but it could be signed or unsigned).
Jonathan Leffler