When I use %g as the format specifier in printf(), it sometimes rounds to 2 places after the decimal point, sometimes to 3 places, sometimes to 4 places... How does it decide? And when should we use %g instead of %f or %e for floating-point numbers?
%g automatically 'flips' between using %e and %f depending on the value, in an attempt to display as much information as possible, in the same way as hand-held calculators do. Also, with %g, trailing zeros and the decimal point are not included.
From the printf manual:
"The double argument is converted in style f or e (or F or E for G conversions). The precision specifies the number of significant digits. If the precision is missing, 6 digits are given; if the precision is zero, it is treated as 1. Style e is used if the exponent from its conversion is less than -4 or greater than or equal to the precision. Trailing zeros are removed from the fractional part of the result; a decimal point appears only if it is followed by at least one digit."
I don't mean to "RTFM", but you'll probably find what you're looking for in the manual sections on controlling precision and length.
The %g format specifier does its rounding just like %f would, but if %f would result in 4.234000, then %g will omit the trailing zeros and print 4.234.
%g should be used when it makes the most sense in your output format that some numbers are printed as 12345.6, while a slightly bigger number would be printed in exponent style as 1.235e+04 (that particular pair of outputs implies a precision of 4, i.e. %.4g).
For the %f conversion, the "precision" is the number of digits after the decimal point, while for %g it is the number of significant digits.
The default precision is 6 in both cases.