I'm going through the "Head First C#" book, and in one of the chapters I created a program that uses variables declared as ints and decimals. Visual Studio got cranky with me a couple of times about mixing and matching the two. For example:

dinnerParty.NumberOfPeople = (int) numericUpDown1.Value;

NumberOfPeople is declared as an int, and apparently a NumericUpDown's Value is a decimal.

Also, the book puts an M after some numbers when adding them together. For example:

public void SetHealthyOption(bool healthy)
{
    if (healthy)
    {
        CostOfBeveragesPerPerson = 5.00M;
    }
    else
    {
        CostOfBeveragesPerPerson = 20.00M;
    }
}

CostOfBeveragesPerPerson is declared as a decimal.

So I have two specific questions:

1) How can you know when you need to cast something? I'm sure there is quite a bit to casting... can anyone provide some good links to learn about casting?

2) What does the M after the numbers do?

EDIT

So the M denotes that the number is a decimal and not a double. Why not just cast the number as a decimal, like (decimal) 50.00? And what is that "function" called? If I wanted to see what "letters" were available, what would I google?

+6  A: 
  1. Explicit casts are generally needed when the conversion might lose information. For example, if you assign an int to a long, no cast is necessary, since a long can hold every value an int can. Assigning a long to an int, however, requires a cast, because an int can hold fewer values than a long, so the conversion can lose data (see the sketch below).
  2. The M marks the number as a decimal literal. If you omit it, the number is interpreted as a double.
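A minimal sketch of both directions (the names and values are just for illustration; the overflow result assumes the default unchecked context):

int small = 123;
long big = small;             // implicit conversion: every int value fits in a long

long large = 5000000000;      // too large for an int
int narrowed = (int) large;   // explicit cast required; here the upper bits are
                              // silently discarded, leaving 705032704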
Turnor
Ok, sounds good. So what happens if I cast a decimal, say 10.1, as an int? Does that then become 10? What if it was 10.9? Does it become 11, or is it still 10? As far as the M defining the number as a decimal, what other options are there? What's that called, if I wanted to google it?
Pete
Converting a decimal (or a float or double) to an int always truncates the fractional part; no rounding is done. Both 10.1 and 10.9 become 10 as an int. As for the literal values: 10 is an int, 10.0 is a double, 10.0F is a float, and 10.0M is a decimal. No idea what that's officially called, though.
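A quick sketch of the truncation:

decimal a = 10.1M;
decimal b = 10.9M;
Console.WriteLine((int) a);             // 10
Console.WriteLine((int) b);             // 10, not 11: the fraction is dropped
Console.WriteLine((int) Math.Round(b)); // 11, if rounding is what you want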
Turnor
Here's a link that explains it a bit better than I can: http://www.blackwasp.co.uk/CSharpNumericLiterals.aspx
Turnor
Sorry if I'm starting to sound dense, but if you declare something as a float, why do you also need to add the F (or the M if you declared it as a decimal)? Why declare the type if you can just use a letter to denote what it is?
Pete
Nice! I will mark this as the answer. I guess what I really needed to know is what a literal actually was. Using the F or M is simply casting a hard-coded number on the fly.
Pete
It just seems to be the way the language is designed. Though in C# 3.0, what you suggest is actually possible: if you write var number = 1.0f, the compiler infers the type of number from the literal itself.
Turnor
+2  A: 
  1. Here's a good link on casting straight from MSDN.
  2. The M tells the compiler that the number is a decimal; otherwise it will be treated as a double (see the error below).
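For reference, this is the compile error you get without the suffix (a minimal sketch; the message is abbreviated):

decimal cost = 20.00;   // error CS0664: literal of type double cannot be implicitly
                        // converted to type 'decimal'; use an 'M' suffix
decimal cost2 = 20.00M; // OK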
Alexander Kahoun
Thanks for the link! I bookmarked that MSDN site as there is a lot of helpful info in there.
Pete
+3  A: 
Type     Suffix      Example
uint     U or u      100U
long     L or l      100L
ulong    UL or ul    100UL
float    F or f      123.45F
double   D or d      123.45D
decimal  M or m      123.45M

There are a lot of pages that explain C# numeric literals. The letter at the end is not a cast or any kind of function; it is syntax indicating that the value you are writing has a particular type. So writing (decimal) 5.0 uses a cast, but writing 5.0m does not; the value is a decimal from the start.
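A small sketch of the difference:

decimal fromLiteral = 5.0m;        // a decimal constant from the start: no conversion
decimal fromCast = (decimal) 5.0;  // 5.0 is a double literal; the cast converts it

// The suffix avoids the detour through double entirely, which matters because
// many fractions (such as 0.1) have no exact double representation:
decimal tenth = 0.1m;              // exactly one tenth as a decimal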

RossFabricant