Looking at this C# code:
byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'
The result of any math performed on byte (or short) types is implicitly promoted to int. The solution is to explicitly cast the result back to a byte:
byte z = (byte)(x + y); // this works
What I am wondering is why? Is it architectural? Philosophical?
We have:
int + int = int
long + long = long
float + float = float
double + double = double
So why not:
byte + byte = byte
short + short = short
?

A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte casts spread through the code make it that much more unreadable.
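For a sense of what that looks like, here is a minimal sketch of the kind of code I mean; the array name, size, and the particular operations are invented for illustration:

byte[] results = new byte[1000000];   // large intermediate-result buffer (size invented)
byte a = 3, b = 5;                    // "small numbers", always < 8

// Every arithmetic expression on bytes yields an int,
// so each store into the byte array needs its own cast.
results[0] = (byte)(a + b);
results[1] = (byte)(results[0] - a);
results[2] = (byte)(results[1] * 2);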
The third line of your code snippet:
byte z = x + y;
actually means
byte z = (int) x + (int) y;
So there is no + operation on bytes: the operands are first promoted to int, and the result of adding two ints is a (32-bit) int, which cannot be implicitly converted back to byte.
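If you want to see the promotion for yourself, here is a small stand-alone snippet (not from the question) that shows both the resulting type and what the explicit cast does:

using System;

class PromotionDemo
{
    static void Main()
    {
        byte x = 200;
        byte y = 100;

        var sum = x + y;                    // byte + byte yields an int
        Console.WriteLine(sum.GetType());   // System.Int32

        // The explicit cast truncates: 300 does not fit in a byte,
        // so in the default unchecked context it wraps to 44.
        Console.WriteLine((byte)(x + y));   // 44
    }
}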