I would like to understand why on .NET there are nine integer types: `Char`, `Byte`, `SByte`, `Int16`, `UInt16`, `Int32`, `UInt32`, `Int64`, and `UInt64`; plus other numeric types: `Single`, `Double`, and `Decimal`; and why none of these types has any relation to the others.
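To see concretely that they are unrelated, here is a minimal sketch (a plain console program; the variable names are mine) showing that each built-in numeric type derives directly from `System.ValueType`, with no shared numeric base type and no subtype relationship between any two of them:

```csharp
using System;

// Each built-in numeric type is an independent struct; its base type
// is System.ValueType, not a common "Number" type or another numeric type.
object[] samples = { 'a', (byte)1, (sbyte)1, (short)1, (ushort)1,
                     1, 1u, 1L, 1ul, 1.0f, 1.0, 1.0m };

foreach (object v in samples)
    Console.WriteLine($"{v.GetType().Name,-7} base = {v.GetType().BaseType}");
// Every line prints "base = System.ValueType".
```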
When I first started coding in C# I thought, "Cool, there's a `uint` type; I'm going to use that when negative values are not allowed." Then I realized that no API used `uint`; they all used `int` instead, and since `uint` is not derived from `int`, a conversion was needed.
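For example (a sketch; `List<string>` stands in here for any of the many BCL members that take `int`), the compiler rejects the implicit conversion in both directions:

```csharp
using System.Collections.Generic;

uint count = 42;

// int i = count;        // error CS0266: cannot implicitly convert uint to int
int i = (int)count;      // an explicit cast is required

// BCL APIs such as collection capacities, indexes, and String.Length use int:
var list = new List<string>(capacity: (int)count);

// uint u = i;           // the reverse direction also needs an explicit cast
uint u = (uint)i;
```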
What are the real-world applications of these types? Why not have, instead, `integer` and `positiveInteger`? Those are types I can understand. A person's age in years is a `positiveInteger`, and since `positiveInteger` is a subset of `integer`, there's no need for a conversion whenever an `integer` is expected.
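Here is a minimal sketch of what such a subset type could look like if you wrote it yourself today (the `PositiveInteger` struct is hypothetical, not part of the BCL): an implicit widening conversion to `int` removes the cast in the direction the question cares about, while the narrowing direction stays explicit.

```csharp
using System;

PositiveInteger age = (PositiveInteger)33; // explicit: not every int is positive
int years = age;                           // implicit: no conversion needed
Console.WriteLine(years);

// Hypothetical "subset of int" type; not part of the BCL.
public readonly struct PositiveInteger
{
    private readonly int value;

    public PositiveInteger(int value)
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value));
        this.value = value;
    }

    // Widening: every PositiveInteger is a valid int,
    // so no cast is needed wherever an int is expected.
    public static implicit operator int(PositiveInteger p) => p.value;

    // Narrowing: requires an explicit cast plus a runtime range check.
    public static explicit operator PositiveInteger(int i) => new PositiveInteger(i);
}
```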
The following is a diagram of the type hierarchy in XPath 2.0 and XQuery 1.0. If you look under `xs:anyAtomicType` you can see the numeric hierarchy `decimal` > `integer` > `long` > `int` > `short` > `byte`. Why wasn't .NET designed like this? Will the new framework "Oslo" be any different?