views: 957
answers: 8

This is related to the following question:

http://stackoverflow.com/questions/1229131/how-to-declare-a-32-bit-integer-in-c

Several people mentioned that int is always 32-bit on most platforms. I am curious whether this is true.

Do you know of any modern platforms where int has a different size? Ignore dinosaur platforms with 8-bit or 16-bit architectures.

NOTE: I already know how to declare a 32-bit integer from the other question. This one is more of a survey to find out which platforms (CPU/OS/compiler) support integers of other sizes.

+2  A: 

It vastly depends on your compiler. Some compile them as 64-bit on 64-bit machines, some compile them as 32-bit. Embedded systems are their own little special ball of wax.

Best thing you can do to check:

printf("%d\n", sizeof(int));

Note that sizeof will print out bytes. Do sizeof(int)*CHAR_BIT to get bits.

Code to print the number of bits for various types:

#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("short is %d bits\n",     CHAR_BIT * sizeof( short )   );
    printf("int is %d bits\n",       CHAR_BIT * sizeof( int  )    );
    printf("long is %d bits\n",      CHAR_BIT * sizeof( long )    );
    printf("long long is %d bits\n", CHAR_BIT * sizeof(long long) );
    return 0;
}
Eric
This is wrong on many dimensions. First, `sizeof` can operate on types so there is no need for `randomint`. Second, `CHAR_BITS` is not guaranteed to be eight. There are a few more things but these are the errors related to the question.
Sinan Ünür
True, there are not always 8 bits in a byte
Ed Swangren
@Eric it's `CHAR_BIT`. I misspelled it in my comment.
Sinan Ünür
It's also not guaranteed that every bit in the underlying representation of the type is a value bit - you might have things like overflow bits (or even padding bits).
caf
+9  A: 

"is always 32-bit on most platforms" - what's wrong with that snippet? :-)

The C standard does not mandate the sizes of its variables. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, one's complement and sign/magnitude).

If you want a variable of a specific size, you need to pick one suitable for the platform you're running on, for example with #ifdefs, something like:

#ifdef LONG_IS_32BITS
    typedef long int32;
#else
    #ifdef INT_IS_32BITS
        typedef int int32;
    #else
        #error No 32-bit data type available
    #endif
#endif

Alternatively, C99 allows for exact width integer types intN_t and uintN_t:


  1. The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
  2. The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
  3. These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names.
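A minimal sketch of what using these typedefs looks like (assuming the target's <stdint.h> provides the exact-width types):

#include <stdint.h>

int32_t  counter  = -42;          /* exactly 32 bits, two's complement        */
uint32_t mask     = 0xFFFFFFFFu;  /* exactly 32 bits, unsigned                */
int_least32_t n32 = 0;            /* at least 32 bits, always available (C99) */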
paxdiablo
Beat me to it :) Relying upon the size of a built-in variable in C or C++ is inherently a bug.
kyoryu
The C standard does mandate minimum ranges (which implies minimum sizes). The minimum range of int is -32767 to +32767, and the minimum range of long is -2147483647 to +2147483647.
caf
(which means that if you just want a variable that can store the range of a 32 bit integer, use long or unsigned long - no preprocessor bodginess required).
caf
True, that's okay for ensuring that a data type will hold at least a given value but you may want an exactly-32-bit value (e.g., for binary writes to a file) rather than an at-least-32-bit one. That's where you need the preprocessor.
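To illustrate that point (a hedged sketch, not from the original thread; the helper name write_u32 is made up here): writing a field that a file format defines as exactly 32 bits wide, where an "at least 32 bits" type such as long might be 64 bits and write the wrong number of bytes.

#include <stdint.h>
#include <stdio.h>

/* Assemble the bytes explicitly so exactly 4 bytes go to the file,
   regardless of how wide the platform's long happens to be. */
static int write_u32(FILE *f, uint32_t value) {
    unsigned char bytes[4];
    bytes[0] = (unsigned char)( value        & 0xFFu);
    bytes[1] = (unsigned char)((value >>  8) & 0xFFu);
    bytes[2] = (unsigned char)((value >> 16) & 0xFFu);
    bytes[3] = (unsigned char)((value >> 24) & 0xFFu);
    return fwrite(bytes, 1, sizeof bytes, f) == sizeof bytes ? 0 : -1;
}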
paxdiablo
+2  A: 

No. Small embedded systems use 16-bit integers.

starblue
+2  A: 

At this moment in time, most desktop and server platforms use 32-bit integers, and even many embedded platforms (think handheld ARM or x86) use 32-bit ints. To get to a 16-bit int you have to get very small indeed: think "Berkeley mote" or some of the smaller Atmel Atmega chips. But they are out there.

Norman Ramsey
+17  A: 

As several people have stated, there is no guarantee that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the exact-width integer types defined by the C99 specification.

int8_t
uint8_t
int32_t
uint32_t

etc...

They are generally of the form [u]intN_t, where the 'u' indicates an unsigned quantity and N is the width in bits.

The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using them allows you to write nice, portable code :-)
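For example (a small self-contained sketch; the PRI* format macros come from <inttypes.h>, which goes hand in hand with <stdint.h>):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t fixed = -123456;   /* exactly 32 bits wherever the typedef exists */
    uint8_t byte  = 0xAB;      /* exactly 8 bits */

    /* The PRI* macros expand to the right conversion specifiers even when,
       say, int32_t is not plain int on the target platform. */
    printf("fixed = %" PRId32 ", byte = %" PRIu8 "\n", fixed, byte);
    return 0;
}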

David Claridge
+1 simple, clean, just works ;)
Daniel Persson
+1, go with this.
nj
Nice answer, just not to the question asked :)
Christoffer
Of course, I just checked and I think this is broken on Windows: http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx But that does not sound right, so I'm going to double-check on my brother's Windows machine.
Robert Massaioli
+1  A: 

Well, most ARM-based processors can run Thumb code, which is a 16-bit mode. That includes the yet-only-rumored Android notebooks and the bleeding-edge smartphones.

Also, some graphing calculators use 8-bit processors, and I'd call those fairly modern as well.

Christoffer
You can't have a conforming C implementation with an 8-bit int, so even if those calculators are 8-bit, if they have a C compiler then it must make int at least 16 bits.
Steve Jessop
Ah, that's correct.
Christoffer
Thumb code still uses 32-bit int; the '16-bit' aspect is just the size of the encoded instructions.
Matthew Wightman
+1  A: 

TI are still selling OMAP boards with the C55x DSPs on them, primarily used for video decoding. I believe the supplied compiler for this has a 16-bit int. It is hardly a dinosaur (the Nokia 770 was released in 2005), although you can get 32-bit DSPs.

For most code you write, you can safely assume it won't ever be run on a DSP. But perhaps not all.

Steve Jessop
A: 

If you are also interested in the actual maximum/minimum values rather than the number of bits, limits.h contains pretty much everything you want to know.
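For instance (a quick sketch using the macros from <limits.h>):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* limits.h exposes the actual ranges of the built-in integer types. */
    printf("int:   %d .. %d\n",   INT_MIN,  INT_MAX);
    printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
    printf("unsigned int max: %u\n", UINT_MAX);
    return 0;
}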

Michael Stum