I've been trying to reduce implicit type conversions when I use named constants in my code. For example rather than using

const double foo = 5;

I would use

const double foo = 5.0;

so that a type conversion doesn't need to take place. However, in expressions where I do something like this...

const double halfFoo = foo / 2;

and so on. Is that 2 evaluated as an integer and then implicitly converted? Should I use 2.0 instead?

+5  A: 

The 2 is implicitly converted to a double because foo is a double. You do have to be careful, because if foo were, say, an int, integer division would be performed first and the truncated result would then be converted and stored in halfFoo.
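For illustration, here is a minimal sketch of that difference (bar and halfBar are made-up names, used only for contrast with the question's foo):

#include <stdio.h>

int main(void)
{
    const double foo = 5.0;
    const int bar = 5;

    const double halfFoo = foo / 2;  /* 2 is converted to 2.0, so halfFoo == 2.5 */
    const double halfBar = bar / 2;  /* integer division runs first: 5 / 2 == 2, and only then is 2 converted to 2.0 */

    printf("%f %f\n", halfFoo, halfBar);  /* prints 2.500000 2.000000 */
    return 0;
}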

I think it is good practice to always use floating-point literals (e.g. 2.0 or 2.) wherever you intend for them to be used as floating-point values. It's more consistent and can help you find the pernicious bugs that can crop up with this sort of thing.

James McNellis
On second thought, what about comparisons, like testing whether a double is < or > 0? I'm wondering whether to use 0 or 0.0.
Anonymous
I'd use 0.0, just to stay in the habit, although it really doesn't matter in that case.
David Thornley
@Person: The conversions occur at compile time, not run-time, so (run-time) performance is not an issue. If the LHS of the comparison is producing a float or double, the RHS should be a float or double constant too, for clarity to humans reading the code. But to the compiler, there is negligible difference.
Jonathan Leffler
Thanks for all the info.
Anonymous
A: 

This is known as Type Coercion. Wikipedia has a nice bit about it:

Implicit type conversion, also known as coercion, is an automatic type conversion by the compiler. Some languages allow, or even require, compilers to provide coercion.

In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly.

...

This behavior should be used with caution, as unintended consequences can arise. Data can be lost when floating-point representations are converted to integral representations, as the fractional components of the floating-point values will be truncated (rounded toward zero). Conversely, converting from an integral representation to a floating-point one can also lose precision, since the floating-point type may be unable to represent the integer exactly (for example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can). This can lead to situations such as storing the same integer value into two variables, one of integer type and one of floating-point type, which then compare unequal.
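To make the float example concrete, here is a minimal sketch assuming float is an IEEE 754 single precision type with a 24-bit significand (as it is on most current platforms):

#include <stdio.h>

int main(void)
{
    int n = 16777217;  /* 2^24 + 1, exactly representable in a 32-bit int */
    float f = n;       /* not representable in single precision; rounded to a neighbour
                          (16777216.0 under round-to-nearest) */

    printf("%d %f\n", n, f);      /* typically prints 16777217 16777216.000000 */
    printf("%d\n", (int)f == n);  /* prints 0: the round trip no longer compares equal */
    return 0;
}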

In the case of C and C++, when an expression involves only integral types (long, int, short, char), the operands are first promoted to at least int and then converted to the widest integral type present, and the result has that type. Something similar happens when a floating-point value is involved: floating-point types rank above the integer types, so the integer operand is converted to the floating-point type.
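A minimal sketch of those conversions (the variable names and values are arbitrary):

#include <stdio.h>

int main(void)
{
    int i = 7;
    long l = 2;
    double d = 2.0;

    printf("%ld\n", i / l);  /* both operands integral: i is converted to long, result is 3 */
    printf("%f\n", i / d);   /* mixed integer/floating-point: i is converted to double, result is 3.5 */
    return 0;
}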

Conrad Meyer
A: 

Strictly speaking, what you are trying to achieve seems to be counterproductive.

Normally, one would strive to reduce the number of explicit type conversions in a C program and, generally, to reduce all and any type dependencies in the source code. Good C code should be as type-independent as possible. That generally means it is a good idea to avoid, wherever possible, explicit syntactic elements that spell out specific types. It is better to do

const double foo = 5; /* better */

than

const double foo = 5.0; /* worse */

because the latter is redundant. The implicit type conversion rules of the C language will make sure that the former works correctly. The same can be said about comparisons. This

if (foo > 0)

is better than

if (foo > 0.0)

because, again, the former is more type-independent.
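As a minimal sketch of that type-independence (the typedef name real is made up purely for illustration), the following compiles and behaves the same whether real is double or float, precisely because the literals carry no type of their own that would need updating:

#include <stdio.h>

typedef double real;  /* change this one line to float and nothing else needs editing */

int main(void)
{
    const real foo = 5;  /* 5 is converted to whatever type real is */

    if (foo > 0)         /* 0 is converted the same way */
        printf("foo is positive\n");

    return 0;
}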

Implicit type conversions in this case are a very good thing, not a bad thing. They help you write generic, type-independent code. Why are you trying to avoid them?

It is true that in some cases you have no other choice but to spell out the type explicitly (like using 2.0 instead of 2, and so on). But normally one would do that only when one really has to; why someone would do it without a real need is beyond me.
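For illustration, a minimal sketch of that unavoidable case: when both operands are integers there is no floating-point operand in sight to trigger the conversion, so the literal itself has to be floating-point (the ratio names are made up):

#include <stdio.h>

int main(void)
{
    const double badRatio  = 1 / 2;    /* integer division: the result is 0, and only then converted to 0.0 */
    const double goodRatio = 1.0 / 2;  /* here the floating-point literal really is needed */

    printf("%f %f\n", badRatio, goodRatio);  /* prints 0.000000 0.500000 */
    return 0;
}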

AndreyT