views: 449, answers: 3
While researching how to write cross-platform printf() format strings in C (that is, taking into account the expected bit width of each integer argument to printf()), I ran across this section of the Wikipedia article on printf(). The article discusses non-standard options that can be passed to printf() format strings, such as (what seems to be a Microsoft-specific extension):

printf("%I32d\n", my32bitInt);

It goes on to state that:

ISO C99 includes the inttypes.h header file that includes a number of macros for use in platform-independent printf coding.

... and then lists a set of macros that can be found in said header. Looking at the header file, to use them I would have to write:

 printf("%"PRId32"\n", my32bitInt);

My question is: am I missing something? Is this really the standard C99 way to do it? If so, why? (Though I'm not surprised that I have never seen code that uses the format strings this way, since it seems so cumbersome...)
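For reference, here is a complete, compilable version of that snippet; the format string is built from adjacent string literals, and PRId32 expands to the right conversion (e.g. "d" or "ld") for the platform:

    #include <inttypes.h>   /* also pulls in <stdint.h> */
    #include <stdio.h>

    int main(void) {
        int32_t my32bitInt = 42;
        /* "%" PRId32 "\n" concatenates at compile time to e.g. "%d\n" */
        printf("%" PRId32 "\n", my32bitInt);
        return 0;
    }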

+4  A: 

Correct, this is how the C99 standard says you should use them. If you want truly portable code that is 100% standards-conformant to the letter, you should always print an int using "%d" and an int32_t using "%" PRId32.

Most people won't bother, though, since there are very few cases where failure to do so would matter. Unless you're porting your code to Win16 or DOS, you can assume that sizeof(int32_t) <= sizeof(int), so it's harmless to accidentally printf an int32_t as an int. Likewise, a long long is pretty much universally 64 bits (although it is not guaranteed to be so), so printing an int64_t as a long long (e.g. with a %llx specifier) is safe as well.
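A minimal sketch of those shortcuts (assuming, as above, that int is at least 32 bits and long long is 64 bits):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t a = 42;
        int64_t b = 123456789012345LL;

        /* Passing an int32_t where an int is expected: harmless
           wherever sizeof(int32_t) <= sizeof(int). */
        printf("%d\n", a);

        /* Printing an int64_t via the long long specifier; the cast
           makes the assumption explicit (%llx wants unsigned long long). */
        printf("%llx\n", (unsigned long long)b);
        return 0;
    }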

The types int_fast32_t, int_least32_t, et al. are hardly ever used, so you can imagine that their corresponding format specifiers are used even more rarely.
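(For completeness, those rarely-seen specifiers look like this:)

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        int_fast32_t f = 100;   /* fastest type with at least 32 bits */
        int_least32_t l = 200;  /* smallest type with at least 32 bits */
        printf("%" PRIdFAST32 " %" PRIdLEAST32 "\n", f, l);
        return 0;
    }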

Adam Rosenfield
"My question is: am I missing something? Is this really the standard C99 way to do it? **If so, why?**"
John Kugelman
Though I would really like to know the "why" part, I realize that it is somewhat of a rhetorical question. I doubt anyone would know unless they attended the discussions at the standards organizations when C99 was being talked about. I am imagining a bunch of engineers discussing the merits of requiring changes to printf() when they already had format strings to print just about anything. They probably just decided to make it a #define and be done with it. So unless someone else has some profound insight here, I will likely accept this answer; it answers most of my question.
Mike
+8  A: 

The C Rationale seems to imply that <inttypes.h> is standardizing existing practice:

<inttypes.h> was derived from the header of the same name found on several existing 64-bit systems.

but the remainder of the text doesn't mention those macros, and I don't remember them being existing practice at the time.

What follows is just speculation, but educated by experience of how standardization committees work.

One advantage of the C99 macros over standardizing additional format specifiers for printf (note that C99 also did add some) is this: if you already have an implementation supporting the required features in an implementation-specific way, providing <inttypes.h> and <stdint.h> is just a matter of writing two files with the appropriate typedefs and macros. That reduces the cost of making an existing implementation conformant, reduces the risk of breaking existing programs that rely on implementation-specific features (the standard way doesn't interfere with them), and facilitates porting conformant programs to implementations that don't have these headers (they can be provided by the program itself). Additionally, if the implementation-specific approaches already varied at the time, it doesn't favor one implementation over another.
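To illustrate how cheap that is, here is a hypothetical fragment of such vendor-provided headers, assuming an ILP32 platform where int is 32 bits and long long is 64:

    /* stdint.h (fragment) */
    typedef int       int32_t;
    typedef long long int64_t;

    /* inttypes.h (fragment) */
    #define PRId32 "d"
    #define PRId64 "lld"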

AProgrammer
+1  A: 

I can only speculate about why. I like AProgrammer's answer above, but there's one aspect overlooked: what are you going to add to printf as a format modifier? There are already two different ways that numbers are used in a printf format string (width and precision). Adding a third kind of number to say how many bits of precision are in the argument would be great, but where are you going to put it without confusing people? Unfortunately, one of the flaws in C is that printf was not designed to be extensible.
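For concreteness, here are the two kinds of numbers a conversion specification already carries:

    #include <stdio.h>

    int main(void) {
        /* 10 is the field width, 4 is the precision; a third number,
           for the argument's bit width, has no obvious slot left. */
        printf("%10.4f\n", 3.14159);
        return 0;
    }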

The macros are awful, but when you have to write code that is portable across 32-bit and 64-bit platforms, they are a godsend. Definitely saved my bacon.

I think the answer to your "why" question is either

  • Nobody could think of a better way to do it, or
  • The standards committee couldn't agree on anything they felt was clearly better.
Norman Ramsey
Good point, but Microsoft added a way to do it using a format modifier (which is ugly, but it works). One way could have been to decouple the current "d = int, u = unsigned int" thinking and re-assign the type specifiers such that "d = int32_t, u = uint32_t". That might have been less disruptive to C than the Java route (specifying the bit width of every type explicitly, which they maybe should have done initially...). But it still would have required changes to printf() implementations. I think they just took the easy way out.
Mike
@Mike: do you have a pointer to the Microsoft way? I hate the macros. And I have most of a printf implementation kicking around already.
Norman Ramsey
It was mentioned on the Wikipedia article I linked, but here it is straight from the source: http://msdn.microsoft.com/en-us/library/tcxf1dw6.aspx
Mike
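For reference, the MSDN page above documents the I32 and I64 size modifiers; a sketch of their use (MSVC-specific, not standard C):

    #include <stdio.h>

    int main(void) {
        __int64 big = 123456789012345LL;  /* __int64 is MSVC-specific */
        printf("%I64d\n", big);           /* I64: 64-bit integer argument */
        printf("%I32d\n", 42);            /* I32: 32-bit integer argument */
        return 0;
    }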