+3  Q: 

Smart typedefs

I've always used typedef in embedded programming to avoid common mistakes:

int8_t - 8 bit signed integer
int16_t - 16 bit signed integer
int32_t - 32 bit signed integer
uint8_t - 8 bit unsigned integer
uint16_t - 16 bit unsigned integer
uint32_t - 32 bit unsigned integer

The recent Embedded Muse (issue 177, not on the website yet) introduced me to the idea that it's useful to have some performance-specific typedefs. It suggests having typedefs that indicate you want the fastest type that has at least a given minimum size.

For instance, one might declare a variable using int_fast16_t, but it would actually be implemented as an int32_t on a 32-bit processor, or an int64_t on a 64-bit processor, as those would be the fastest types of at least 16 bits on those platforms. On an 8-bit processor it would be int16_t, to meet the minimum size requirement.
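To make that concrete, here is a minimal sketch (assuming a C99 toolchain that provides <stdint.h>) of how such a typedef might be used for a loop counter; the actual width of int_fast16_t is chosen by the implementation, not by this code:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* At least 16 bits wide, but the implementation is free to pick
           a wider, faster type (e.g. 32 bits on a 32-bit CPU). */
        int_fast16_t counter;
        int_fast16_t total = 0;

        for (counter = 0; counter < 100; counter++) {
            total += counter;
        }

        printf("total = %ld (int_fast16_t is %zu bytes here)\n",
               (long)total, sizeof(int_fast16_t));
        return 0;
    }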

Having never seen this usage before, I wanted to know:

  • Have you seen this in any projects, embedded or otherwise?
  • Any possible reasons to avoid this sort of optimization in typedefs?
+3  A: 

Check out stdint.h from C99.
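Alongside the exact-width types, C99's <stdint.h> also declares the int_leastN_t and int_fastN_t families, plus matching limit macros. A quick sketch to see what a given toolchain actually picks for them:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Exact width, minimum width ("least"), and fast minimum width. */
        printf("int16_t:        %zu bytes\n", sizeof(int16_t));
        printf("int_least16_t:  %zu bytes\n", sizeof(int_least16_t));
        printf("int_fast16_t:   %zu bytes\n", sizeof(int_fast16_t));
        printf("INT_FAST16_MAX: %jd\n", (intmax_t)INT_FAST16_MAX);
        return 0;
    }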

rlbond
A: 

I'm actually not much of a fan of this sort of thing.

I've seen this done many times (in fact, we even have these typedefs at my current place of employment)... For the most part, I doubt their true usefulness... It strikes me as change for change's sake... (and yes, I know the sizes of some of the built-ins can vary)...

dicroce
+3  A: 

The main reason I would avoid this typedef is that it allows the type to lie to the user. Take int16_t vs int_fast16_t. Both type names encode the size of the value into the name. This is not an uncommon practice in C/C++. I personally use the size specific typedefs to avoid confusion for myself and other people reading my code. Much of our code has to run on both 32 and 64 bit platforms and many people don't know the various sizing rules between the platforms. Types like int32_t eliminate the ambiguity.

If I had not read the 4th paragraph of your question and had instead just seen the type name, I would have assumed it was some scenario-specific way of having a fast 16-bit value. And I obviously would have been wrong :(. For me it would violate the "don't surprise people" rule of programming.

Perhaps if it had another distinguishing word, letter, or acronym in the name it would be less likely to confuse users. Maybe int_fast16min_t?
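To illustrate the kind of surprise described above, here is a hypothetical snippet (a sketch, assuming a platform where uint_fast16_t is wider than 16 bits): code that quietly relies on 16-bit wraparound behaves differently once the "fast" type is actually wider.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t      exact = 0xFFFF;
        uint_fast16_t fast  = 0xFFFF;

        exact += 1;   /* wraps to 0: uint16_t is exactly 16 bits          */
        fast  += 1;   /* becomes 0x10000 if uint_fast16_t is 32+ bits wide */

        printf("exact = %u, fast = %lu\n",
               (unsigned)exact, (unsigned long)fast);
        return 0;
    }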

JaredPar
the standard specifies those values... most programmers will know what they mean.
rmeador
@rmeador, I disagree. Only programmers familiar with that standard would know, and the number of programmers who are is almost certainly much less than half. On the other hand, knowledge of the standard is not required to understand int32_t and the like.
JaredPar
Size-specific values are potentially inefficient. There is a need for something like "fastest integral datatype of at least 16 bits", which is the old definition for "int".
David Thornley
+2  A: 

For instance, one might declare a variable using int_fast16_t, but it would actually be implemented as an int32_t on a 32-bit processor, or an int64_t on a 64-bit processor, as those would be the fastest types of at least 16 bits on those platforms.

That's what int is for, isn't it? Are you likely to encounter an 8-bit CPU any time soon, where that wouldn't suffice?

How many unique datatypes are you able to remember?

Does it provide so much additional benefit that it's worth effectively doubling the number of types to consider whenever I create a simple integer variable?

I'm having a hard time even imagining the possibility that it might be used consistently.

Someone is going to write a function which returns an int_fast16_t, and then someone else is going to come along and store that value in an int16_t.

Which means that in the obscure case where the fast variants are actually beneficial, it may change the behavior of your code. It may even cause compiler errors or warnings.
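A hypothetical illustration of that mixing (the function name read_sensor_sum is made up for the example): the narrowing assignment compiles, may draw a conversion warning, and can silently lose information whenever the "fast" value exceeds 16 bits.

    #include <stdint.h>

    /* Hypothetical API written against the "fast" flavour. */
    int_fast16_t read_sensor_sum(void)
    {
        return 40000;   /* representable only if int_fast16_t is 32+ bits */
    }

    int main(void)
    {
        /* Implicit narrowing: the value is out of range for int16_t,
           so the stored result is implementation-defined. */
        int16_t stored = read_sensor_sum();

        return (stored > 0) ? 0 : 1;
    }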

jalf
+1 for the function return type issue. The compiler would complain about the different types if they resolve to different sizes internally, but still it's pertinent. I use 8-bit CPUs all the time in embedded work though, and typedef issues are a nightmare (app developers assuming int is 32 bits).
Adam Davis
Yeah, definitely use something like int32_t if you need a datatype that is 32 bits wide. Making assumptions about the general int type is bad bad bad. :) I'm just saying that splitting the typedefs up into int32 and int32fast seems like it is going to add little more than additional confusion.
jalf
A: 

I commonly use size_t; it happens to be the fastest address-sized type, a tradition I picked up in embedded work. It never caused any issues or confusion in embedded circles, but it actually began causing me problems when I started working on 64-bit systems.
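The kind of trouble this can cause (serialization, as mentioned in the comments below) is easy to sketch. The struct and function names here are hypothetical; the point is that writing a size_t directly into a binary stream bakes the platform's word size into the file format, since sizeof(size_t) is typically 4 on 32-bit targets and 8 on 64-bit targets:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical record header written straight to disk; the two
       builds (32-bit and 64-bit) produce incompatible files.        */
    struct record_header {
        size_t payload_length;
    };

    int write_header(FILE *f, size_t length)
    {
        struct record_header h = { length };
        return fwrite(&h, sizeof(h), 1, f) == 1 ? 0 : -1;
    }

    int main(void)
    {
        FILE *f = fopen("records.bin", "wb");   /* hypothetical file name */
        if (f == NULL)
            return 1;

        int rc = write_header(f, 1234);
        fclose(f);
        return rc == 0 ? 0 : 1;
    }

    /* A portable alternative is to serialize a fixed-width type such
       as uint32_t and convert explicitly at the boundary.            */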

Robert Gould
oof! That smacks of danger... Further, is it always the fastest? On segmented architectures the size_t might be larger than the data bus, requiring two reads per use minimum...
Adam Davis
What problems did it cause? The fact that it was larger than the (usually 32-bit) int? I converted lots of ints to size_ts in the conversion to 64 bits, and had no problems with them.
David Thornley
Mostly caused issues with serialization code, nothing a few days of fixing and testing couldn't solve, but it was some extra work, and required tracking down many variables.
Robert Gould
On our devices things were aligned. But you illustrated my point, Adam: in embedded circles people think about this stuff, so it's not dangerous, but most PC programmers will miss bugs related to this usage, so it's more of an issue there. Anyway, we used a single fast type.
Robert Gould
+2  A: 

When I am looking at int_fast16_t and I am not sure about the native width of the CPU on which it will run, it may make things complicated, for example with the ~ operator.

    int_fast16_t i = 10;
    int16_t      j = 10;

    if (~i != ~j) {
        // scary !!!
    }

Instead, I would rather explicitly use a 32-bit or 64-bit type based on the native width of the processor.
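One way to get that explicit control (a sketch relying only on the limit macros <stdint.h> defines, and assuming the target provides int64_t; the name native_int is hypothetical) is to select the width at compile time instead of using int_fast16_t directly:

    #include <stdint.h>

    /* Pick a documented, explicit width that matches what the target's
       "fast" 16-bit type would have been. */
    #if INT_FAST16_MAX >= INT64_MAX
    typedef int64_t native_int;   /* hypothetical project typedef */
    #elif INT_FAST16_MAX >= INT32_MAX
    typedef int32_t native_int;
    #else
    typedef int16_t native_int;
    #endif

    /* Usage: the declared width is now visible at the point of use. */
    static native_int counter = 0;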

Alphaneo