I have a project with many calculations involving a lot of real-world units:
- Distance;
- Temperature;
- Flow rate;
- ...
The project involves numerous, complicated calculation formulas.
That's why I thought custom types like Temperature and Distance could improve code readability. For example:
Temperature x = -55.3;
Meter y = 3;
or
var x = new Temperature(-55.3);
I tried to make a Temperature class that wraps an internal double value.
public class Temperature
{
    double _Value = double.NaN; // default to NaN when no value is supplied

    public Temperature() { }

    public Temperature(double v)
    {
        _Value = v;
    }

    // Allows assignments like: Temperature x = -55.3;
    public static implicit operator Temperature(double v)
    {
        return new Temperature(v);
    }
}
But classes are nullable. This means that something like:
Temperature myTemp;
is "correct" and will be null. I dont want this. I dont want to use structs because they are too limited :
- They cannot have a parameterless constructor or instance field initializers like
  double _Value = double.NaN;
  to define a default value (I want the default underlying double value to be NaN; the closest workaround I can see is sketched after this list).
- They cannot inherit from classes; they can only implement interfaces.
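To show what I mean, here is the kind of workaround I would need to get NaN-as-default out of a struct: an extra flag field that maps the struct's zeroed default back to NaN (a minimal sketch; the flag-field trick is my own workaround, not anything built in):

public struct Temperature
{
    private readonly double _value;
    private readonly bool _hasValue; // false for default(Temperature)

    public Temperature(double v)
    {
        _value = v;
        _hasValue = true;
    }

    // default(Temperature) zeroes both fields, so Value yields NaN.
    public double Value => _hasValue ? _value : double.NaN;
}

It works, but it costs an extra field per value and feels clumsy for something a class gives me for free.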
Then I wondered whether there is a way to tell C#:
Temperature myTemp = 23K; // C# has no syntax for a custom K unit suffix...
but I know C# does not support custom unit literals.
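The closest approximation I know of is an extension method on the numeric types (a sketch, reusing the Temperature class above; the Kelvin method name is just my invention):

public static class TemperatureExtensions
{
    // Hypothetical helper: lets me write 23.0.Kelvin() instead of a unit suffix.
    public static Temperature Kelvin(this double v)
    {
        return new Temperature(v);
    }
}

// Usage: not quite "23K", but close.
Temperature myTemp = 23.0.Kelvin();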
Or, with dedicated unit classes:
Temperature myTemp = new Kelvin(23); // This might work
So I imagined creating Celsius and Kelvin classes that inherit from Temperature, but then I started to wonder whether the idea is really worth it, because it involves a lot of coding and testing.
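For concreteness, that hierarchy would look roughly like this (a sketch; I am assuming kelvins as the canonical internal storage, and 273.15 is the standard Celsius offset):

public class Temperature
{
    protected double _kelvins = double.NaN; // canonical storage in kelvins

    public Temperature() { }
    public Temperature(double kelvins) { _kelvins = kelvins; }

    public double Kelvins { get { return _kelvins; } }
}

public class Kelvin : Temperature
{
    public Kelvin(double k) : base(k) { }
}

public class Celsius : Temperature
{
    // 0 °C corresponds to 273.15 K (standard conversion).
    public Celsius(double c) : base(c + 273.15) { }
}

Every additional unit means another class, more conversions, and more tests, which is exactly the cost that makes me hesitate.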
That's the discussion I would like to start:
Would using real-world unit types in my code instead of plain .NET types be a good thing or not? Has anyone done this already? What are the pitfalls and the best practices? Or should I rather stay away from this and stick with standard .NET types?