views:

498

answers:

4

Is there a reliable way to declare typedefs for integer types of fixed 8-, 16-, 32-, and 64-bit length in ISO Standard C?

When I say ISO Standard C, I mean that strictly:

  • ISO C89/C90, not C99.
  • No headers not defined in the ISO standard.
  • No preprocessor symbols not defined in the ISO standard.
  • No type-size assumptions not specified in the ISO standard.
  • No proprietary vendor symbols.

I see other questions similar to this on Stack Overflow, but no answers yet that don't violate one of the above constraints. I'm not sure it's possible without resorting to platform-specific symbols.

+6  A: 

Strictly speaking, ISO 9899:1999 superseded ISO 9899:1990, so it is the only current ISO standard C language specification.

As exact-width typedef names for integer types were only introduced into the standard in the 1999 version, what you want is not possible using only the 1990 version of the standard.

Charles Bailey
Yes, technically C99 is the current standard, but compiler support remains spotty and that would be far too easy to answer anyway. ;-)
kbluck
Realistically, though, the only one who doesn't support inttypes.h in this day and age is Microsoft. It would probably be sufficient to include inttypes.h on most platforms and typedef from things like DWORD, QWORD, etc. on Windows.
asveikau
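A minimal sketch of that pragmatic fallback (the `_WIN32` test, the typedef names, and the exact Windows types are assumptions, not from the comment; as the replies below note, the strictly conforming C99 header is `stdint.h`):

/* Hypothetical fallback sketch - not ISO C90, just the pragmatic route
   described in the comment above. */
#if defined(_WIN32)
#include <windows.h>
typedef BYTE  port_uint8;              /* BYTE, WORD, DWORD come via <windef.h> */
typedef WORD  port_uint16;
typedef DWORD port_uint32;
typedef unsigned __int64 port_uint64;  /* MSVC-specific keyword */
#else
#include <stdint.h>                    /* C99 / POSIX platforms */
typedef uint8_t  port_uint8;
typedef uint16_t port_uint16;
typedef uint32_t port_uint32;
typedef uint64_t port_uint64;
#endif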
`inttypes.h` isn't even in C90, it's a Unixish (well, POSIX) header. `stdint.h` is part of C99, and, more importantly, C++TR1 and C++0x, so at least it will be supported in VC++2010.
Pavel Minaev
I think he meant to say `stdint.h`. It's worth noting that at least two projects provide a `stdint.h` for the MS environment.
DigitalRoss
@Pavel: `inttypes.h` _is_ part of the C99 standard lib (section 7.8)
Christoph
+2  A: 

There is none. There is, however, a reliable way to declare individual integer variables up to 32 bits in size, if you're willing to live with some restrictions: just use long bit-fields. long is guaranteed to be at least 32 bits wide, and you're allowed to use up to as many bits in a bit-field as would fit in the variable if the bit-field declarator were omitted. So:

struct {
   unsigned long foo : 32; 
} bar;

Obviously, you get all the limitations that come with that, such as the inability to take pointers to such variables. The only thing this really buys you is guaranteed wraparound at the specified boundary on overflow/underflow, and even then only for unsigned types, since signed overflow is undefined.
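For instance, the wraparound guarantee can be observed with a small test program (a sketch only; the struct and variable names are made up):

#include <stdio.h>

struct u32 { unsigned long v : 32; };     /* 32-bit unsigned bit-field, as above */

int main(void)
{
    struct u32 x;
    x.v = 0xFFFFFFFF;                     /* largest value that fits in 32 bits */
    x.v = x.v + 1;                        /* stored result is reduced modulo 2^32 */
    printf("%lu\n", (unsigned long)x.v);  /* prints 0 */
    return 0;
}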

Aside from that, there's no portable way to do this in pure C90. Among other things, a conforming C90 implementation need not even have an 8-bit integer: it would be entirely legal to have a platform in which sizeof(char) == sizeof(short) == sizeof(int) == 1 and CHAR_BIT == 16 (i.e. it has a 16-bit machine word and cannot address anything smaller). I've heard that such platforms do in fact exist in practice in the form of some DSPs.

Pavel Minaev
A: 

No, you can't do that.

Now, if you want to count a multi-stage configuration process like GNU configure as a solution, you can do that and stick to C89. And there are certainly various types in C89 that will do the right thing on almost every implementation that's around today, so you get the sizes you want and stay with purely conforming C89. But the bit widths, while being what you want, will not in general be specified by the standard.
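As an illustration of that configure-style approach, the build system could compile and run a small C89 probe and capture its output as a generated header. The program and the typedef names below are hypothetical, not something from the answer:

#include <stdio.h>
#include <limits.h>

/* Hypothetical configure-time probe: prints typedef lines that a build
   script could redirect into a generated header.  Uses only C89. */
int main(void)
{
    if (UCHAR_MAX == 255UL)
        printf("typedef unsigned char  my_uint8;\n");
    if (USHRT_MAX == 65535UL)
        printf("typedef unsigned short my_uint16;\n");
    if (UINT_MAX == 4294967295UL)
        printf("typedef unsigned int   my_uint32;\n");
    else if (ULONG_MAX == 4294967295UL)
        printf("typedef unsigned long  my_uint32;\n");
    return 0;
}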

DigitalRoss
+5  A: 

Yes you can.

The header file limits.h is part of C90. I would test the values of SHRT_MAX, INT_MAX, LONG_MAX, and LLONG_MAX through preprocessor directives and set typedefs accordingly.

Example:

#include <limits.h>

#if SHRT_MAX == 2147483647
typedef unsigned short int uint32_t;
#elif INT_MAX == 2147483647
typedef unsigned int uint32_t;
#elif LONG_MAX == 2147483647
typedef unsigned long uint32_t;
#elif LLONG_MAX == 2147483647
typedef unsigned long long uint32_t;
#else
#error "Cannot find 32bit integer."
#endif
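A variant along the same lines for a 16-bit type, this time testing the unsigned limits directly so the macro matches the unsigned typedef (a sketch, not part of the original answer):

#include <limits.h>

#if UCHAR_MAX == 65535
typedef unsigned char uint16_t;
#elif USHRT_MAX == 65535
typedef unsigned short uint16_t;
#elif UINT_MAX == 65535
typedef unsigned int uint16_t;
#else
#error "Cannot find 16-bit unsigned integer."
#endif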
Viliam
This only gives you the amount of storage taken up by each of the integer types. Even in combination with CHAR_BIT you can't guarantee the number of value bits in any type because of the possibility of padding or trap bits.
Charles Bailey
So what do you do if you need a 16-bit integer, and `CHAR_BIT` is 10 (with associated values of `*_MAX`)?
Pavel Minaev
@Charles: for unsigned integral types C guarantees no trap bits. For `char` it also guarantees no padding. But, obviously, that's still not good enough.
Pavel Minaev
As a side note `LLONG_MAX` isn't a part of C90, because `long long` itself is not.
Pavel Minaev
@Charles: I believe those are rather theoretical concerns. Strictly speaking, the standard does not even guarantee that a 16-bit integer type exists; it only says that int <= long <= long long in size. Padding is a completely different story.
Viliam
@Pavel Minaev: By trap bits, I mean (in standards speak) padding bits that might contribute to a trap representation, sorry about the loose language. Only `unsigned char` is guaranteed to be padding free.
Charles Bailey
@Pavel: If there is no standard type for 8, 16, 32, 64-bit integer required by the question, then report error. Simple.
Viliam
They aren't as theoretical as one might think. I doubt you'll find a machine with a 29-bit word these days, but you may find a DSP for which there's simply no 8-bit integer type at all (comp.lang.c++.moderated had a few such horror stories). In any case, the question seems to be quite deliberately asked in such a way that precludes any reliance on implementation details (which the presence/absence of a particular type is).
Pavel Minaev
@Charles, yes, I stand corrected.
Pavel Minaev
It would seem that the number of value bits can be detected by observing the behavior of unsigned overflow at compile time (which is well-defined), so one could first look at CHAR_BIT and UINT_MAX, for example, and if they indicate 32 bits, check whether `0xFFFFFFFFu + 1u == 0u`.
Pavel Minaev
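One way to spell that check out in C90 is a compile-time assertion through a negative array size; the sketch below assumes the preprocessor has already confirmed `UINT_MAX`, and the typedef name is made up:

#include <limits.h>

/* Sketch only: if the wraparound check fails, the array size becomes -1
   and compilation stops. */
#if UINT_MAX == 0xFFFFFFFF
typedef char uint_wraps_at_32_bits[(0xFFFFFFFFu + 1u == 0u) ? 1 : -1];
#endif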
By theoretical I don't mean those don't exist (I have practical experience with these!). But if your implementation relies on 8-, 16-, etc. bit sizes (and at-least-x-bits is not sufficient), then I think a compilation error is just fine.
Viliam
@Charles, why do you say that testing `SHRT_MAX` and friends would give you the amount of storage? It surely gives you the range of values, and that is, I think, what the questioner wants. @Viliam `typedef int uint32_t;` looks a bit odd though - sure you don't mean `typedef unsigned int uint32_t;`?
Johannes Schaub - litb
Agree with litb. The `*_MAX` macros are defined in terms of values representable, and thus imply number of value bits, irrespective of padding.
caf
@litb: Yes, my mistake. I realised it a while ago, but you can't edit comments and it had already started a thread! `CHAR_BIT` and `sizeof` give you an indication of the storage space, `*_MAX` give you an indication of the number of value bits. (In combination with `*_MIN` you may also be able to get an idea of whether two's complement is being used.) I wanted to point out that the posted answer only checked one, and due to a 'thinko' it pointed out the wrong one.
Charles Bailey
@litb: Right, I have to fix that. Thanks.
Viliam
Although I acknowledge that there is no bulletproof solution, this answer comes closest to a practical workaround.
kbluck