I noticed that on Windows and Linux x86, float is 4 bytes and double is 8, but long double is 12 bytes on x86 and 16 on x86_64. C99 was supposed to break down such barriers with its exact-width integer types.

The underlying technological limitation appears to be that the x86 processor can't handle more than 80-bit floating-point operations (the remaining bytes are alignment padding), but why the inconsistency in the standard compared to the integer types? Why not standardize at least an 80-bit type?
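A quick way to check these sizes yourself (the values printed are whatever your implementation chooses; the 4/8/16 I see on x86_64 Linux/GCC is not a guarantee):

```c
#include <stdio.h>

int main(void)
{
    /* All of these are implementation-defined. */
    printf("float:       %zu bytes\n", sizeof(float));
    printf("double:      %zu bytes\n", sizeof(double));
    printf("long double: %zu bytes\n", sizeof(long double));
    return 0;
}
```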

+3  A: 

They were trying to (mostly) accommodate pre-existing C implementations, some of which don't even use IEEE floating point formats.

Darron
+1 The standard specifies the precision of the types, not how many bits it takes to store that level of precision (floating-point numbers can be implemented in many different ways).
bta
The standard does not specify the precision, and even an implementation where all floats get rounded to 0 would probably be conformant. It does, however, recommend IEEE precision **and** format.
R..
+1  A: 

ints can be used to represent abstract things like IDs, colors, error codes, requests, etc. In those cases an int is not really used as a number but as a set of bits (i.e. a container). Most of the time a programmer knows exactly how many bits are needed, so he wants to be able to use just that many.
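A sketch of that kind of usage (the flag names here are made up for illustration), where the exact bit width of the container is the whole point:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical flag bits packed into an exact-width integer */
enum {
    FLAG_VISIBLE  = 1u << 0,
    FLAG_DIRTY    = 1u << 1,
    FLAG_READONLY = 1u << 2
};

int main(void)
{
    uint8_t flags = FLAG_VISIBLE | FLAG_DIRTY;  /* a bit container, not a number */
    printf("dirty? %s\n", (flags & FLAG_DIRTY) ? "yes" : "no");
    return 0;
}
```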

floats, on the other hand, are designed for one very specific usage: floating-point arithmetic. You are very unlikely to know precisely how many bits you need for a float; most of the time, the more bits you have, the better.

Ben
This is true as long as you **know** the number of bits of precision. Often I find I need to know this in order to choose a large power of 2 to add and subtract, rounding to a specific number of binary places (see the sketch below).
R..
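For the curious, the trick R.. is alluding to looks roughly like this; it assumes the default round-to-nearest mode, a 53-bit double mantissa, and no fast-math style optimizations:

```c
#include <stdio.h>

/* Round a non-negative double to the nearest integer by pushing its
 * fraction bits out of the 53-bit mantissa and back. Valid for
 * 0 <= x < 2^52; volatile stops the compiler folding the arithmetic away. */
static double round_via_shift(double x)
{
    volatile double big = 4503599627370496.0;  /* 2^52 */
    return (x + big) - big;
}

int main(void)
{
    printf("%g\n", round_via_shift(2.7));  /* prints 3 */
    return 0;
}
```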
+5  A: 

The C language doesn't specify the implementation of various types, so that it can be efficiently implemented on as wide a variety of hardware as possible.

This extends to the integer types too: the C standard integer types have guaranteed minimum ranges (e.g. signed char is at least -127 to 127, short and int are both at least -32,767 to 32,767, long is at least -2,147,483,647 to 2,147,483,647, and long long is at least -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807). For almost all purposes, this is all the programmer needs to know.
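The actual ranges on your implementation are exposed through <limits.h>; they may be wider than the guaranteed minimums, but never narrower:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Implementation-defined, but guaranteed >= the standard's minimums */
    printf("INT_MAX  = %d   (at least 32767)\n", INT_MAX);
    printf("LONG_MAX = %ld  (at least 2147483647)\n", LONG_MAX);
    return 0;
}
```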

C99 does provide "fixed-width" integer types, like int32_t, but these are optional: if the implementation can't provide such a type efficiently, it doesn't have to provide it at all.
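For example, the exact-width type may be absent on exotic hardware, while the least/fast variants from <stdint.h> are always required:

```c
#include <stdint.h>

int32_t       a;  /* exactly 32 bits -- optional, absent if the hardware can't do it */
int_least32_t b;  /* smallest type with at least 32 bits -- always provided */
int_fast32_t  c;  /* fastest type with at least 32 bits -- always provided */
```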

For the floating-point types, there are equivalent limits (e.g. double must provide at least 10 decimal digits' worth of precision).
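Those limits are exposed through <float.h>; the values below are what common IEEE implementations report, but only the minimums (FLT_DIG >= 6, DBL_DIG >= 10) are guaranteed:

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Typical IEEE values: 6, 15, and 18 (the last for 80-bit x87 long double) */
    printf("FLT_DIG  = %d\n", FLT_DIG);
    printf("DBL_DIG  = %d\n", DBL_DIG);
    printf("LDBL_DIG = %d\n", LDBL_DIG);
    return 0;
}
```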

caf