It's common practice where I work to avoid directly using built-in types and instead include a standardtypes.h that has items like:

// \Common\standardtypes.h
typedef double             Float64_T;
typedef int                SInt32_T;

Almost all components and source files become dependent on this header, but some people argue that it's needed to abstract the size of the types (in practice this hasn't been needed).

Is this a good practice (especially in large, componentized systems)? Are there better alternatives? Or should the built-in types be used directly?

+1  A: 

I think it's not a good practice. Good practice is to use something like uint32_t where you really need a 32-bit unsigned integer, and if you don't need a particular range, just use unsigned.

Pmod
ITYM 32 *bits* ?
Paul R
+8  A: 

You can use the standardized versions available in modern C and C++ implementations in the header file stdint.h.

It has types such as uint8_t, int32_t, etc.

In general this is a good way to protect code against platform dependency. Even if you haven't experienced a need for it to date, it certainly makes the code easier to interpret, since one doesn't need to guess at a storage size as one would for 'int' or 'long', which vary in size with platform.
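For example, a minimal sketch assuming a C99-conforming toolchain:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  flags   = 0x80U;       /* exactly 8 bits on every platform */
    int32_t  counter = -42;         /* exactly 32 bits, signed */
    uint32_t mask    = 0xFFFF0000U; /* exactly 32 bits, unsigned */

    /* sizeof confirms the widths regardless of what int or long happen to be */
    printf("%zu %zu %zu\n", sizeof flags, sizeof counter, sizeof mask);
    return 0;
}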

Amardeep
+3  A: 

It would probably be better to use the standard POSIX types defined in stdint.h et al., e.g. uint8_t, int32_t, etc. I'm not sure if they are part of C++ yet, but they are in C99.

Paul R
They'll be in the next standard, which will be out in 200B or so. In the meantime, implementors are likely to include them.
David Thornley
+1  A: 

It might matter if you are making cross-platform code, where the size of native types can vary from system to system. For example, the wchar_t type can vary from 8 bits to 32 bits, depending on the system.

Personally, however, I don't think the approach you describe is as practical as its proponents may suggest. I would not use that approach, even for a cross-platform system. For example, I'd rather build my system to use wchar_t directly, and simply write the code with an awareness that the size of wchar_t will vary depending on platform. I believe that is FAR more valuable.
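For instance, a trivial sketch of that awareness, checking the width rather than assuming it:

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* sizeof(wchar_t) is implementation-defined: commonly 2 bytes on
       Windows and 4 on Linux/glibc, so persisted or transmitted wide
       strings must not assume a fixed width. */
    printf("wchar_t is %zu bytes on this platform\n", sizeof(wchar_t));
    return 0;
}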

Brent Arias
For lots of development, I agree with you. However, this is for embedded, which typically means a need for more control over what is generated, and often means interfacing with fields of defined size. When dealing with a memory-mapped 16-bit hardware register, calling it something like `volatile uint16_t` works better than hoping that `short` or `wchar_t` will work.
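A minimal sketch of that kind of access (the register address and bit position are made up for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped 16-bit status register; volatile forces the
   compiler to perform a real 16-bit read on every access. */
#define UART_STATUS (*(volatile uint16_t *)0x40001000U)

static int uart_tx_ready(void)
{
    return (UART_STATUS & (1U << 5)) != 0; /* bit 5 as TX-ready is assumed */
}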
David Thornley
@David Thornley: That's especially true for something like the 68000, where the size of an 'int' may vary from one compiler to another, even on the same hardware platform.
supercat
@supercat: Tell me about it. One C compiler I had for the Macintosh at one time came with a compiler option for 16- or 32-bit `int`s. Same computer, same compiler, different option set.
David Thornley
+1  A: 

The biggest problem with this approach is that so many developers do it that, if you use a third-party library, you are likely to end up with symbol-name conflicts or multiple names for the same types. Where fixed-size types are necessary, it would be wise to stick to the standard implementation provided by C99's stdint.h.
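A hypothetical illustration of the collision: two vendors each roll their own 32-bit name, and any translation unit that needs both headers fails to compile.

/* vendor_a.h (hypothetical) */
typedef unsigned int  UInt32;

/* vendor_b.h (hypothetical) */
typedef unsigned long UInt32;  /* error: conflicting typedef when both
                                  headers are included together, even if
                                  both types are 32 bits wide on this target */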

If your compiler does not provide this header (VC++, for example, does not), then create one that conforms to that standard; one for VC++ can be found at http://msinttypes.googlecode.com/svn/trunk/stdint.h

In your example I can see little point in defining size-specific floating-point types, since these are usually tightly coupled to the FP hardware of the target and the representation used. Also, the range and precision of a floating-point value are determined by the combination of exponent width and significand width, so the overall width alone does not tell you much, nor does it guarantee compatibility across platforms. With respect to single and double precision there is far less variability across platforms, most of which use IEEE-754 representations. On some 8-bit compilers float and double are both 32-bit, while long double on x86 GCC is 80 bits but only 64 bits in VC++, even though the x86 FPU supports 80 bits in hardware.
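If code genuinely depends on a particular floating-point representation, a compile-time check documents that more directly than a size-named typedef would; a sketch assuming a C11 compiler:

#include <assert.h>
#include <float.h>

/* Fail the build, rather than misbehave at runtime, if the target's
   float and double are not IEEE-754 single and double precision. */
static_assert(sizeof(float) == 4 && FLT_MANT_DIG == 24,
              "float is not IEEE-754 single precision");
static_assert(sizeof(double) == 8 && DBL_MANT_DIG == 53,
              "double is not IEEE-754 double precision");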

Clifford
+1  A: 

Since it hasn't been said yet, and even though you've already accepted an answer:

Only use concretely-sized types when you need concretely-sized types. Mostly, this means when you're persisting data, directly interacting with hardware, or using some other code (e.g. a network stack) that expects concretely-sized types. Most of the time, you should just use the abstractly-sized types so that your compiler can optimize more intelligently and so that future readers of your code aren't burdened with useless details (like the size and signedness of a loop counter).
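A sketch of that distinction (the record layout is hypothetical, and a real on-disk format would also have to pin down endianness and padding):

#include <stddef.h>
#include <stdint.h>

/* Abstract sizes where width doesn't matter: let the compiler pick. */
double sum(const double *values, size_t count)
{
    double total = 0.0;
    for (size_t i = 0; i < count; ++i)  /* a plain counter needs no fixed width */
        total += values[i];
    return total;
}

/* Concrete sizes where layout matters: a record written to disk or the wire. */
struct record_header {
    uint32_t magic;
    uint16_t version;
    uint16_t flags;
};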

(As several other responses have said, use stdint.h, not something homebrew, when writing new code and not interfacing with the old.)

pkh
A: 

As others have said, use the standard types as defined in stdint.h. I disagree with those who say to use them only in some places. That works okay when you work with a single processor, but when you have a project which uses multiple processor types (e.g. ARM, PIC, 8051, DSP), which is not uncommon in embedded projects, keeping track of what an int means, or being able to copy code from one processor to another, almost requires you to use fixed-size type definitions.

At least it is required for me, since in the last six months I've worked on 8051, PIC18, PIC32, ARM, and x86 code for various projects, and I can't keep track of all the differences without screwing up somewhere.
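A small sketch of the trap (exact widths depend on the toolchain, but a 16-bit int is typical of 8-bit targets):

#include <stdint.h>

/* On an 8051 or PIC18 toolchain int is typically 16 bits, so this value
   would overflow there even though it is fine on ARM or x86:
       int sample_rate = 48000;        // portable only by accident */
int32_t sample_rate = 48000;           /* means the same thing on every target */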

sbass