byte + byte = int... why?
Looking at this C# code:
byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'
The result of any math performed on byte (or short) types is implicitly cast back to an integer. The solution is to explicitly cast the result back to a byte:
byte z = (byte)(x + y); // this works
What I am wondering is why? Is it architectural? Philosophical?
We have:
int + int = int
long + long = long
float + float = float
double + double = double
So why not:
byte + byte = byte
short + short = short
? A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster because of cache hits, but the extensive byte casts spread through the code make it that much less readable.
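To illustrate (a hypothetical sketch, not the question's actual code), this is the pattern described above; every store into the byte array needs an explicit cast because the addition produces an int:
// Hypothetical example of the pattern described: small-number arithmetic
// stored in a byte array, with the cast C# forces on every store.
byte[] results = new byte[1_000_000];
byte a = 3, b = 4;
for (int i = 0; i < results.Length; i++)
{
    results[i] = (byte)(a + b); // a + b is computed as int, so a cast is required
}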
The third line of your code snippet:
byte z = x + y;
actually means
byte z = (int) x + (int) y;
So there is no + operator defined on bytes; the bytes are first cast to int, and the result of adding two ints is a (32-bit) int.
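You can see this promotion directly by letting the compiler infer the type of the sum (a minimal sketch to demonstrate the point):
byte x = 1;
byte y = 2;
var z = x + y;                  // the compiler infers int, not byte
Console.WriteLine(z.GetType()); // prints System.Int32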
In terms of "why it happens at all" it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.
I believe it's basically for the sake of performance. Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.
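As a sketch of the trade-off: the explicit cast makes visible the truncation that an automatic conversion back to byte would perform silently, and a checked context turns it into a runtime error instead:
byte b = 200;
byte c = 100;
int full = b + c;             // 300: the int result keeps every bit
byte wrapped = (byte)(b + c); // 44: 300 mod 256, high bits discarded
// In a checked context the narrowing conversion throws instead:
// byte boom = checked((byte)(b + c)); // OverflowException at run time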
I think this is mentioned in one of the annotated C# standards. Looking...
EDIT: Annoyingly, I've now looked through the annotated ECMA C# 2 spec, the annotated MS C# 3 spec and the annotated CLI spec, and none of them mention this as far as I can see. I'm sure I've seen the reason given above, but I'm blowed if I know where. Apologies, reference fans :(
I thought I had seen this somewhere before. From this article on The Old New Thing:
Suppose we lived in a fantasy world where operations on 'byte' resulted in 'byte'.
byte b = 32;
byte c = 240;
int i = b + c; // what is i?
In this fantasy world, the value of i would be 16! Why? Because the two operands to the + operator are both bytes, so the sum "b+c" is computed as a byte, which results in 16 due to integer overflow. (And, as I noted earlier, integer overflow is the new security attack vector.)
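For contrast, in actual C# the operands are promoted to int, so the sum keeps its full value unless you explicitly cast it back down (a minimal sketch using the quote's numbers):
byte b = 32;
byte c = 240;
int i = b + c;                // real C#: 272, because b and c are promoted to int
byte wrapped = (byte)(b + c); // the fantasy-world result: 272 mod 256 == 16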
EDIT: Raymond is defending, essentially, the approach C and C++ took originally. In the comments, he defends C# taking the same approach on the grounds of language backward compatibility.