Regarding the motivation: let's imagine the alternatives to this behaviour and see why they don't work.
Alternative 1: the result should always be the same type as the inputs.
What should the result be for adding an int and a short? The inputs already have different types, so there is no single type to match.
What should the result be for multiplying two shorts? The true product will in general fit into an int, but truncating it to a short makes most multiplications fail silently, and casting the truncated result to an int afterwards cannot recover the lost bits.
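Here's a minimal Java sketch of that silent failure (the variable names are mine):

```java
short a = 300;
short b = 300;
// Under Alternative 1, short * short would itself yield a short,
// as if every multiplication were written like this:
short product = (short) (a * b); // 24464 instead of 90000: high bits silently dropped
int widened = product;           // still 24464: widening afterwards can't restore lost bits
```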
Alternative 2: the result should always be the smallest type that can represent all possible outputs.
If the return type were a short, the answer would not always be representable as a short, even for division, where the quotient's magnitude never exceeds the dividend's. A short can hold values from -32,768 to 32,767, and exactly one short division escapes that range:
short result = -32768 / -1; // mathematically 32,768: not representable as a short
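(In Java that exact line is rejected at compile time, because the constant 32,768 doesn't fit in a short; with runtime values, the overflow would be silent. A sketch:)

```java
short min = Short.MIN_VALUE;               // -32768
short divisor = -1;
int quotient = min / divisor;              // 32768: fine, because the operands are promoted to int
short truncated = (short) (min / divisor); // -32768 again: wraps around with no warning
```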
So the question just moves up a level: why does adding two ints not return a long? What should multiplying two ints return? A long? A BigInteger, to cover the case of squaring Integer.MIN_VALUE?
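For comparison, this is what int multiplication already does in Java today; the same "accept the wraparound" trade-off is simply being made one level up:

```java
int min = Integer.MIN_VALUE;
System.out.println(min * min);        // 0: the true square is 2^62, which wraps to 0 in 32 bits
System.out.println((long) min * min); // 4611686018427387904 (2^62): correct, but needed a wider type
```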
Alternative 3: choose the thing most people probably want most of the time.
So the result should be:
- int for multiplying two shorts, or for any operation on two ints.
- short if adding or subtracting shorts, dividing a short by any integer type, multiplying two bytes, ...
- byte if bitshifting a byte to the right, int if bitshifting to the left.
- etc...
Remembering all these special cases would be difficult because there is no fundamental logic to them. It's simpler to just say: the result of integer arithmetic is always an int.
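In Java, for example, that is exactly the rule: byte, short, and char operands are promoted to int before arithmetic, and getting back to the smaller type requires an explicit cast, so any truncation is visible in the code:

```java
short a = 20000;
short b = 20000;
// short sum = a + b;             // does not compile: a + b is an int
int sum = a + b;                  // 40000: exact, thanks to the promotion
short narrowed = (short) (a + b); // -25536: truncated, but the cast makes that explicit
```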