Consider the following code:
void f(byte x)  { print("byte");  }
void f(short x) { print("short"); }
void f(int x)   { print("int");   }

void main() {
    byte b1 = 1, b2 = 2;
    short s1 = 1, s2 = 2;
    f(b1 + b2); // byte + byte promotes to int: calls f(int)
    f(s1 + s2); // short + short promotes to int: calls f(int)
}
In C++, C#, D, and Java, both function calls resolve to the "int" overload: the operands are promoted to int before the addition takes place, so the result of the expression is an int. I already realize this behavior is "in the specs"; my question is why these languages were designed this way. I'm looking for a deeper reason.
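For concreteness, here is the same experiment in compilable C# (a minimal sketch; the class name and initial values are arbitrary), showing that both sums already have static type int before overload resolution even enters the picture:

using System;

class Promotion {
    static void Main() {
        byte b1 = 1, b2 = 2;
        short s1 = 1, s2 = 2;
        // Both sums have compile-time type int:
        Console.WriteLine((b1 + b2).GetType()); // prints System.Int32
        Console.WriteLine((s1 + s2).GetType()); // prints System.Int32
    }
}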
To me, it would make more sense for the result type to be the smallest type able to represent all possible values of both operands, for example (a hypothetical sketch of this rule in code follows the table):
byte + byte     --> byte
sbyte + sbyte   --> sbyte
byte + sbyte    --> short   // union of ranges is -128..255, which needs short
short + short   --> short
ushort + ushort --> ushort
short + ushort  --> int     // union of ranges is -32768..65535, which needs int
// etc...
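Here is what that rule could look like in C# (SmallestCommonType and its range parameters are my own invention, purely to make the table above concrete):

using System;

static class ProposedRule {
    // Hypothetical: pick the smallest type whose range covers the union
    // of both operands' ranges. Among same-width types, the unsigned one
    // is checked first; the order only matters when both would fit.
    static string SmallestCommonType(long min1, long max1,
                                     long min2, long max2) {
        long min = Math.Min(min1, min2);
        long max = Math.Max(max1, max2);
        if (min >= byte.MinValue   && max <= byte.MaxValue)   return "byte";
        if (min >= sbyte.MinValue  && max <= sbyte.MaxValue)  return "sbyte";
        if (min >= ushort.MinValue && max <= ushort.MaxValue) return "ushort";
        if (min >= short.MinValue  && max <= short.MaxValue)  return "short";
        return "int";
    }

    static void Main() {
        // Reproduces two rows of the table above:
        Console.WriteLine(SmallestCommonType(byte.MinValue, byte.MaxValue,
                                             sbyte.MinValue, sbyte.MaxValue));  // short
        Console.WriteLine(SmallestCommonType(short.MinValue, short.MaxValue,
                                             ushort.MinValue, ushort.MaxValue)); // int
    }
}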
This would eliminate inconvenient code such as short s3 = (short)(s1 + s2), and IMO it would be far more intuitive and easier to understand.
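To illustrate the inconvenience in today's C# (a minimal sketch; the class and variable names are placeholders):

using System;

class CastDemo {
    static void Main() {
        short s1 = 1, s2 = 2;
        // short s3 = s1 + s2;       // error CS0266: cannot implicitly
        //                           // convert type 'int' to 'short'
        short s4 = (short)(s1 + s2); // compiles only with the explicit cast
        Console.WriteLine(s4);       // 3
    }
}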
Is this a leftover legacy from the days of C, or are there better reasons for the current behavior?