I've always used typedef in embedded programming to avoid common mistakes:
- int8_t - 8-bit signed integer
- int16_t - 16-bit signed integer
- int32_t - 32-bit signed integer
- uint8_t - 8-bit unsigned integer
- uint16_t - 16-bit unsigned integer
- uint32_t - 32-bit unsigned integer
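For reference, these are the standard fixed-width types from C99's <stdint.h>. A minimal sketch of the kind of use I mean (the register address and names below are made up purely for illustration):

```c
#include <stdint.h>

/* Hypothetical memory-mapped 8-bit status register; the address is
   made up for illustration. */
#define STATUS_REG (*(volatile uint8_t *)0x40001000u)

/* Counter that must be exactly 16 bits wide on every target, so the
   wrap-around behaviour is identical everywhere. */
static uint16_t tick_count;

void on_tick(void)
{
    tick_count++;               /* wraps from 65535 back to 0 on all platforms */

    if (STATUS_REG & 0x01u) {   /* test bit 0 of the 8-bit register */
        /* handle the flag ... */
    }
}
```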
The recent Embedded Muse (issue 177, not on the website yet) introduced me to the idea that it's useful to have some performance-specific typedefs. The standard it discusses suggests having typedefs that indicate you want the fastest type that is at least a given size.

For instance, one might declare a variable as int_fast16_t, but on a 32-bit processor it would actually be implemented as int32_t, or as int64_t on a 64-bit processor, since those would be the fastest types of at least 16 bits on those platforms. On an 8-bit processor it would be int16_t to meet the minimum size requirement.
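As far as I can tell these fast types already exist in C99's <stdint.h> (int_fast8_t, int_fast16_t, and so on), so a quick way to see what a given toolchain actually picks is just to print their sizes; the output is of course platform-dependent (for example, on a typical 64-bit Linux/glibc system int_fast16_t comes out as 8 bytes):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact-width types: the size is guaranteed. */
    printf("int16_t      : %zu bytes\n", sizeof(int16_t));
    printf("int32_t      : %zu bytes\n", sizeof(int32_t));

    /* Fast types: at least the stated width, but whichever width the
       implementation considers fastest on this target. */
    printf("int_fast16_t : %zu bytes\n", sizeof(int_fast16_t));
    printf("int_fast32_t : %zu bytes\n", sizeof(int_fast32_t));

    return 0;
}
```

It's also worth noting that the exact-width intN_t types are optional in C99 (they only exist where the hardware has a matching type), whereas the int_fastN_t variants are always provided.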
Having never seen this usage before, I wanted to know:
- Have you seen this in any projects, embedded or otherwise?
- Are there any reasons to avoid this sort of optimization in typedefs?