views: 131
answers: 2

The C# language (and I'm sure other languages as well) requires suffixes at the end of some numeric literals. These suffixes indicate the type of the literal: for example, 5m is a decimal and 5f is a float.

My question is: are these suffixes really necessary, or is it possible to infer the type of a literal from its context?

For example, given decimal d = 5.0, the compiler should be able to infer that 5.0 is meant to be a decimal, not a double. Does that kind of grammar cause problems?
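
Today that line does not compile at all; a minimal sketch of the current behaviour:

decimal a = 5.0m;   // compiles: the m suffix makes the literal a decimal
double  b = 5.0;    // compiles: an unsuffixed real literal is a double
// decimal c = 5.0; // error CS0664: a literal of type double cannot be
//                  // implicitly converted to decimal without the M suffix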

+6  A: 

That's fine for simple cases like:

float f = 7;

but it's far better to be explicit so that you don't have to worry about statements like:

float f = (float)(int)(1 / 3 + 6e22 / (double)7);

Yes, I know that's a contrived example, but we can't tell the coder's intent from the type on the left alone, especially if the value isn't being assigned to a variable at all (for example, when it's passed as an argument to an overloaded function that has overloads taking int, float, double, decimal, and so on).
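
For instance, with a set of overloads like the hypothetical Print methods sketched below (OverloadDemo and Print are names used only for illustration), the suffix is the only thing that tells the compiler, and the reader, which overload was intended:

using System;

class OverloadDemo
{
    static void Print(int x)     { Console.WriteLine("int"); }
    static void Print(float x)   { Console.WriteLine("float"); }
    static void Print(double x)  { Console.WriteLine("double"); }
    static void Print(decimal x) { Console.WriteLine("decimal"); }

    static void Main()
    {
        Print(5);    // prints "int"     -- plain integer literal
        Print(5f);   // prints "float"   -- f suffix
        Print(5.0);  // prints "double"  -- unsuffixed real literal
        Print(5m);   // prints "decimal" -- m suffix
    }
}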

paxdiablo
+2  A: 

In general, we try to design the language so that information about types flows "out", not "in". That is, the type of an expression is determined by first analyzing each of its subexpressions, and then we see if it is consistent with its context -- we don't go the other way usually, reasoning from context to subexpressions. You can get into some very hard-to-analyze situations when type information can flow both ways, as it can with lambda expressions.
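
A rough sketch of the two directions, using the standard Func delegate type (FlowDemo is just an illustrative name); the lambda line is the case where type information has to flow inward:

using System;

class FlowDemo
{
    static void Main()
    {
        // "Out": 1.5 + 2 is typed as double from its operands first,
        // and only then checked against the declared type on the left.
        double d = 1.5 + 2;

        // "In": x has no type of its own; it is inferred from the
        // Func<int, int> target type on the left.
        Func<int, int> f = x => x * 2;

        Console.WriteLine(d);     // 3.5
        Console.WriteLine(f(21)); // 42
    }
}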

Now, in the specific situation you mention, we could have written a rule that says "floating point literals that can be converted to decimal without loss of precision or magnitude do not require the m suffix", just as literal ints that fit into short convert automatically. But it's easy enough to add the "m"; it would be confusing if some literals converted automatically and others did not, and requiring that the types be consistent makes the code easier to understand and more self-documenting.
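
To illustrate the analogy, a minimal sketch of the existing rule for integer constants:

short s = 100;        // compiles: the constant int 100 fits in a short
byte  b = 200;        // compiles: the constant 200 fits in a byte
// short t = 100000;  // error: the constant value does not fit in a short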

Eric Lippert