views:

124

answers:

2

I have to say I was pleased when I opened up C# to see the integer data types being Int16, Int32 and Int64. It removed any ambiguity, e.g. int increasing in size with age.

What surprises me is that there isn't, or doesn't seem to be, Float16, Float32 and Float64, at least not in normal use: a quick search of MSDN refers to Float64 as R8 (an unmanaged type). Isn't this the same as a double?

My guess would be that there isn't as much ambiguity in Single and Double (or even Extended (Float80), which doesn't exist in C# as far as I know; I'm not sure how it could be marshalled, for that matter). Although Decimal seems to be a Float128, and I've seen it referred to as "Extended Floating Point Precision", should we see an Int128 to match it?
EDIT: There isn't any ambiguity at all in Single or Double (this was a guess, but it appears to be true, and I thought I'd add it for clarity).

Should we expect to see this kind of naming convention? Would you appreciate it if we did?

Or should we go one step further and have Int&lt;N&gt; for arbitrary number sizes? (Yes, I realise there are libraries out there which support this kind of thing.)
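For reference, the C# keywords are just aliases for the CLR struct types discussed above, which a small sketch can confirm:

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        // Each C# keyword is an alias for the corresponding System type.
        Console.WriteLine(typeof(int)     == typeof(Int32));   // True
        Console.WriteLine(typeof(long)    == typeof(Int64));   // True
        Console.WriteLine(typeof(float)   == typeof(Single));  // True
        Console.WriteLine(typeof(double)  == typeof(Double));  // True
        Console.WriteLine(typeof(decimal) == typeof(Decimal)); // True
    }
}
```

So the Int16/Int32/Int64 names already exist at the CLR level; it is only the floating-point keywords that don't follow the width-in-the-name convention.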

+1  A: 

Well, there are various different things to consider here:

  • The types exposed by .NET
    • What's available
    • What each type is called
  • What ends up being aliased by other languages, e.g. C# using int to mean System.Int32

Personally I would have preferred Float32 and Float64 as the CLR types. I can certainly see some confusion in F# naming the types "float" (for System.Double) and "float32" (for System.Single). I wouldn't want Decimal to be called Float128; possibly Decimal128 to allow for other similar types though.

Note that Byte isn't UInt8, by the way - presumably because bytes are usually used for arbitrary binary storage rather than for genuinely numeric quantities.

I don't think there's very much reason to have arbitrary values for Int<N> though. At least, I suspect the usage is sufficiently specialised to relegate it to a custom class library rather than making it part of the framework. (Note, however, that BigInteger is part of .NET 4.0.)
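As a quick illustration of the BigInteger mentioned above (System.Numerics.BigInteger in .NET 4.0, which requires a reference to System.Numerics.dll):

```csharp
using System;
using System.Numerics; // .NET 4.0; add a reference to System.Numerics.dll

class BigIntDemo
{
    static void Main()
    {
        // BigInteger grows as needed rather than having a fixed bit width,
        // so it covers the cases an Int<N> family would otherwise serve.
        BigInteger big = BigInteger.Pow(2, 128); // too large for any built-in integer type
        Console.WriteLine(big); // 340282366920938463463374607431768211456
    }
}
```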

Jon Skeet
+1 because this is a poll-style question and I would also have preferred Float32/Float64... very slightly. There's something to be said for ada-style ranged integers as an even further step to `Int<N>`, but eiffel-style contracts seem to cover those kinds of situations better in practice.
romkyns
A: 

Well, it's float and double in C#, instead of Single and Double, but it's the same general difference. Keep in mind there's an alias in C# for every built-in type. As for arbitrary number sizes, F# is introducing BigInteger, which should take care of that, but an arbitrary-size type where you specify how many bits the integer can hold would most likely be a performance hit, as the built-in types are all finely tuned for performance in the CLR.

Also, decimal isn't a floating point type in the IEEE sense. Sure, it's a high-precision type that allows for a floating decimal point, but it's not a "floating point type" in the IEEE sense of the word. Other than its limited precision, it won't do the strange things binary floating point will do (such as 0.1 + 0.2 not quite equalling 0.3), and for that reason it's frequently used in financial calculations.
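The difference is easy to demonstrate: double is binary floating point, so it cannot represent 0.1 exactly, while decimal stores a base-10 significand and handles the same sum exactly:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // Binary floating point: 0.1 and 0.2 have no exact base-2 representation.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // decimal: the same sum is exact, because the significand is base-10.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}
```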

If you're really looking for something that will allow you to crunch numbers heavily, you may want to look at a functional language like F# instead of C#. Functional languages are typically better at heavy number crunching anyway; they tend to be more concise.

As for naming convention... I started on C#... C# is my standard, and so to me, other languages are weird. It's all in the perspective the developer comes from, I suppose.

David Morton