In general, we try to design the language so that information about types flows "out", not "in". That is, the type of an expression is determined by first analyzing each of its subexpressions, and then we check whether that type is consistent with its context; we do not usually reason the other way, from the context to the subexpressions. You can get into some very hard-to-analyze situations when type information can flow both ways, as it can with lambda expressions.
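A small C# sketch of the contrast (the variable names here are just illustrative):

```csharp
using System;

// "Out": the type of 1 + 2 is computed from its subexpressions (int, int),
// and only then checked against the context it appears in.
int sum = 1 + 2;

// "In": a lambda expression has no type of its own; the compiler must push
// the delegate type from the context into the lambda to type-check its body.
Func<int, int> square = x => x * x;     // x is inferred to be int
Func<double, double> half = x => x / 2; // the same lambda text; now x is double
```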
Now, in the specific situation you mention, we could have written a rule that says "floating-point literals that can be converted to decimal without loss of precision or magnitude do not require the m suffix", just as literal ints that fit into a short convert automatically. But it's easy enough to add the "m"; it would be confusing if some literals converted automatically and others did not, and requiring that the types be consistent makes the code easier to understand and more self-documenting.
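To make the trade-off concrete, here is what the chosen rule looks like in practice (the values are just illustrative):

```csharp
decimal price = 9.95m; // OK: the m suffix makes the literal a decimal

// decimal price = 9.95;
// error CS0664: a literal of type double cannot be implicitly converted to
// decimal; the compiler requires the 'm' suffix rather than converting.

// By contrast, a constant int expression that fits does convert implicitly:
short s = 42; // OK: 42 is an int literal, but the constant fits in a short
```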