views:

362

answers:

5

Well, I'm doing some Java-C integration, and throughout the C library weird type mappings are used (there are more of them ;)):

#define CHAR        char                    /*  8 bit signed int            */
#define SHORT       short                   /* 16 bit signed int            */
#define INT         int                     /* "natural" length signed int  */
#define LONG        long                    /* 32 bit signed int            */
typedef unsigned    char    BYTE;           /*  8 bit unsigned int          */
typedef unsigned    char    UCHAR;          /*  8 bit unsigned int          */
typedef unsigned    short   USHORT;         /* 16 bit unsigned int          */
typedef unsigned    int     UINT;           /* "natural" length unsigned int*/

Is there any legitimate reason not to use them? It's not like char is going to be redefined anytime soon.

I can think of:

  1. Writing platform/compiler-portable code (the size of each type is underspecified in C/C++)
  2. Saving space and time on embedded systems - if you loop over an array shorter than 255 elements on an 8-bit microprocessor, writing:

     for(uint8_t ii = 0; ii < len; ii++)
    

    will give a measurable speedup.

+4  A: 

That is exactly the reason. C is used across a number of systems, and it's actually rather disturbing how often type sizes change between platforms, hardware, and compiler versions.

Serapth
But stdint.h (http://www.opengroup.org/onlinepubs/000095399/basedefs/stdint.h.html) provides a much cleaner and simpler way to do this.
Matthew Flaschen
I am not saying it is ideal, I am explaining why it was done. Keep in mind, this is a 20+ year old decision. I imagine size was a much greater factor back then... or, it could have simply been made in error, who knows now?
Serapth
stdint.h is only in C99. Not all compilers provide it.
Alex
+1  A: 

Well, as you said, there are more; some of these are probably INT32 and INT64.

int has no fixed size in the standard: it is only required to be at least 16 bits wide, and in practice its width varies between compilers and platforms.

So having declarations like the ones above helps you write portable code, where you can safely assume that INT32 will always give you a 32-bit int.

Yogi
If you want a portable 32-bit integral type, use int32_t (http://linux.die.net/man/3/int32_t). In my opinion, defines like INT are usually pointless.
Matthew Flaschen
A: 

Not only for portability across operating systems, but also across architectures, for example between 32- and 64-bit machines.

Imagine you wrote some network code on a 32-bit machine that used two unsigned ints to store 64 consecutive bit flags. Compile that same code for a 64-bit target whose ABI allocates 8 bytes per int, and it is virtually guaranteed not to work: you would end up with 128 bits allocated and a 32-bit gap between your two sets of 32 flag bits. That's obviously really bad for portability.

On *nix machines you'll often see typedefs that refer specifically to the amount of memory allocated for a variable, for example uint16_t and uint32_t. These are then typedef'd to whatever type gives you that much unsigned storage on a particular architecture, so your code can stay consistent across operating systems and architectures.

Bob Somers
+1  A: 

Q: Using typedefs or #defines?

A: Well, #defines are handled by the preprocessor as plain text substitution, while typedefs are processed by the compiler itself and create a real type name. Personally, I prefer using typedefs for type definitions, and defines for constants, function wrappers, etc.

mtasic
+2  A: 

The C standard doesn't specify the size of a number of the integer types; it depends on the compiler, and the processor on which the code will run.

Therefore, for maximum portability, it's best to have a header which uses standard names which indicate how big each type is for that particular target.

MISRA-C and others use uint16_t, sint32_t, etc. A shorter form, e.g. u16, s32 is also in use.

Regarding #define vs. typedef: use typedef, so the compiler can enforce type checking.

Steve Melnikoff
As to uint16_t and such - I used them only when doing microprocessor programming at university, and even then only to save space/memory: when looping over an array shorter than 255 bytes (almost all were), we used uint8_t (the microprocessor was an 8-bit one). Not for portability.
jb
Indeed; they serve that purpose as well.
Steve Melnikoff