There was a bit of a surprise with some code today. I was compiling it on AIX, with the warning level set to anal to see what rogue issues might be lurking. Something new crawled out of the code.

 1540-2837 (W) '0' flag is disregarded when combined with 
               precision and 'i' printf format.

After looking at the offending line, I put together a little program to reproduce it. Testing it on several platforms showed that it is not AIX-specific.

The first printf below mimics what was found in the program.

#include <stdio.h>
int main(void)
{
    int x = 3;

    printf("Format 0.3i <%0.3i>\n", x);  // prints <003>, and AIX warns
    printf("Format  .3i <%.3i>\n", x);   // prints <003>, with no warning
    printf("Format   3i <%3i>\n", x);    // prints <  3>, with no warning

    return 0;
}

Normally, if leading zeros were needed, a format of "%03i" would do the job nicely.
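
For instance, a quick standalone check of that form:

#include <stdio.h>
int main(void)
{
    printf("Format  03i <%03i>\n", 3);  // prints <003>: width 3, zero-padded by the '0' flag
    return 0;
}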

What does "%.3i" really mean here?

Why does it have the behavior that it does?

+4  A: 

.X, where X is a number, means "print at least X digits", so %.3i means print at least 3 digits. If the number has fewer than 3 digits (i.e. is less than 100), it is left-padded with zeros.

From a doc on printf:

"For integer specifiers (d, i, o, u, x, X): precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros. The value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0."

There is another concept, "width" (e.g. "%3i"), which causes at least a certain number of characters to be output (not necessarily digits). The 0 flag says those pad characters should be zeros, as in "003", rather than spaces, as in "  3".
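
A small sketch contrasting width with precision (again, standard C behavior):

#include <stdio.h>
int main(void)
{
    printf("<%3i>\n", 3);    // <  3>     : width 3 pads with spaces by default
    printf("<%7.3i>\n", 3);  // <    003> : precision builds "003", then width pads it to 7 with spaces
    return 0;
}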

Daniel LeCheminant
Great Globs of Glue. I forgot to RTFM. Daniel, thank you.
EvilTeach
@EvilTeach: No problem :]
Daniel LeCheminant
A: 

From man 3 printf:

If a precision is given with a numeric conversion (d, i, o, u, x, and X), the 0 flag is ignored.

The . specifies the precision, and therefore the 0 is ignored. As to the "Why?" of it, you'd have to ask the authors of the C standard :)
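
A quick way to see the flag being ignored; both lines below print the same thing:

#include <stdio.h>
int main(void)
{
    printf("<%7.3i>\n", 3);   // <    003>
    printf("<%07.3i>\n", 3);  // <    003> : the '0' flag is ignored, not <0000003>
    return 0;
}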

Sean Bright