It's that way by design in C#, and, in fact, it dates back all the way to C/C++ - the latter also promotes operands to `int`, you just usually don't notice because the `int` -> `char` conversion there is implicit, while it's not in C#. This doesn't just apply to `|` either, but to all arithmetic and bitwise operators - e.g. adding two `byte`s will give you an `int` as well. I'll quote the relevant part of the spec here:
> Binary numeric promotion occurs for the operands of the predefined `+`, `-`, `*`, `/`, `%`, `&`, `|`, `^`, `==`, `!=`, `>`, `<`, `>=`, and `<=` binary operators. Binary numeric promotion implicitly converts both operands to a common type which, in case of the non-relational operators, also becomes the result type of the operation. Binary numeric promotion consists of applying the following rules, in the order they appear here:
>
> - If either operand is of type `decimal`, the other operand is converted to type `decimal`, or a compile-time error occurs if the other operand is of type `float` or `double`.
> - Otherwise, if either operand is of type `double`, the other operand is converted to type `double`.
> - Otherwise, if either operand is of type `float`, the other operand is converted to type `float`.
> - Otherwise, if either operand is of type `ulong`, the other operand is converted to type `ulong`, or a compile-time error occurs if the other operand is of type `sbyte`, `short`, `int`, or `long`.
> - Otherwise, if either operand is of type `long`, the other operand is converted to type `long`.
> - Otherwise, if either operand is of type `uint` and the other operand is of type `sbyte`, `short`, or `int`, both operands are converted to type `long`.
> - Otherwise, if either operand is of type `uint`, the other operand is converted to type `uint`.
> - Otherwise, both operands are converted to type `int`.
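To see those rules in action, here's a minimal sketch (the variable names are just for illustration) that prints the result type of a few mixed-type expressions:

```csharp
byte b = 1;
int i = 3;
uint u = 4;
long l = 5;
float f = 6f;

Console.WriteLine((b + b).GetType());  // System.Int32  - last rule: both bytes promoted to int
Console.WriteLine((i + l).GetType());  // System.Int64  - long rule
Console.WriteLine((i + u).GetType());  // System.Int64  - uint + int: both promoted to long
Console.WriteLine((u + u).GetType());  // System.UInt32 - uint rule
Console.WriteLine((i + f).GetType());  // System.Single - float rule
```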
I don't know the exact rationale for this, but I can think of one. For arithmetic operators especially, it might be a bit surprising for people to find that `(byte)200 + (byte)100` suddenly equals `44`, even if it makes some sense once one carefully considers the types involved. On the other hand, `int` is generally considered a type that's "good enough" for arithmetic on most typical numbers, so by promoting both operands to `int`, you get a kind of "just works" behavior for most common cases.
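To make that `44` concrete, here's a small sketch of what promotion buys you versus what the wrapped-around result looks like when you force it back into a `byte` (assuming the default unchecked context):

```csharp
byte x = 200;
byte y = 100;

int promoted = x + y;          // 300 - operands are promoted to int, so nothing is lost
byte wrapped = (byte)(x + y);  // 44  - squeezing 300 back into a byte wraps around (300 % 256)

Console.WriteLine(promoted);   // 300
Console.WriteLine(wrapped);    // 44
```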
As to why this logic was also applied to bitwise operators - I imagine this is mostly for consistency. It results in a single simple rule that is common to all non-boolean binary operators.
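The practical consequence is the same as for arithmetic: `byte | byte` is an `int`, so an explicit cast is needed to store the result back in a `byte`. A small sketch (identifiers are just illustrative), including the compound-assignment case, where the language inserts that cast for you:

```csharp
byte flags = 0b0000_0001;
byte mask  = 0b0000_0100;

// byte combined = flags | mask;       // compile-time error: the result of | is int
byte combined = (byte)(flags | mask);  // explicit cast needed, just as with +

flags |= mask;  // compiles: compound assignment is evaluated as flags = (byte)(flags | mask)
```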
But this is all mostly guessing. Eric Lippert would probably be the one to ask about the real motives behind this decision for C# at least (though it would be a bit boring if the answer is simply "it's how it's done in C/C++ and Java, and it's a good enough rule as it is, so we saw no reason to change it").