Why does the smallest int, −2147483648, have type 'long'?

This question already has an answer here:

  • Why is 0 < -0x80000000? 6 answers
  • (-2147483648 > 0) returns true in C++? 4 answers

  • In C, -2147483648 is not an integer constant. 2147483648 is an integer constant, and - is just a unary operator applied to it, yielding a constant expression. The value 2147483648 does not fit in an int (it's one too large; 2147483647 is typically the largest value an int can hold), so the integer constant has type long, which causes the problem you observe. If you want to refer to the lower limit of an int, either use the macro INT_MIN from <limits.h> (the portable approach) or carefully avoid writing 2147483648:

    printf("PRINTF(d) \t: %d\n", -1 - 2147483647);
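
    If you want the compiler itself to report the types involved, a C11 _Generic selection (not part of the original answer, just an illustration) can name the type of each constant. A minimal sketch, assuming C11 and a platform where int is 32 bits and long is 64 bits:

    #include <stdio.h>
    #include <limits.h>

    /* Map the type of an expression to a printable name. */
    #define TYPE_NAME(x) _Generic((x),      \
        int:       "int",                    \
        long:      "long",                   \
        long long: "long long",              \
        default:   "something else")

    int main(void) {
        printf("2147483648  has type %s\n", TYPE_NAME(2147483648));   /* long here */
        printf("-2147483648 has type %s\n", TYPE_NAME(-2147483648));  /* still long: unary - keeps the type */
        printf("INT_MIN     has type %s\n", TYPE_NAME(INT_MIN));      /* int */
        return 0;
    }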
    

    The problem is that -2147483648 is not an integer literal. It's an expression consisting of the unary negation operator - and the integer constant 2147483648, which is too big to be an int if ints are 32 bits. Since the compiler will choose an appropriately sized signed integer type to represent 2147483648 before applying the negation operator, the type of the result will be larger than int.

    If you know that your ints are 32 bits and want to avoid the warning without hurting readability, use an explicit cast:

    printf("PRINTF(d) \t: %d\n", (int)(-2147483648));
    

    That's defined behaviour on a 2's-complement machine with 32-bit ints.
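
    If you'd rather have that assumption checked at compile time than taken on faith, a one-line guard works; a small sketch, assuming C11's _Static_assert is available:

    #include <limits.h>

    /* Refuse to compile unless int is 32-bit two's complement, which is
       what makes (int)(-2147483648) well-defined in the cast above. */
    _Static_assert(INT_MAX == 2147483647 && INT_MIN == -INT_MAX - 1,
                   "expected a 32-bit two's-complement int");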

    For increased theoretical portability, use INT_MIN instead of the number, and let us know where you found a non-2's-complement machine to test it on.


    To be clear, that last paragraph was partly a joke. INT_MIN is definitely the way to go if you mean "the smallest int", because int varies in size. There are still lots of 16-bit implementations, for example. Writing out -2^31 is only useful if you definitely always mean precisely that value, in which case you would probably use a fixed-width type like int32_t instead of int.
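
    For comparison, here is a short sketch (not from the original answer) showing both spellings side by side: INT_MIN from <limits.h> when you mean "the smallest int", and int32_t/INT32_MIN when you mean exactly -2^31:

    #include <stdio.h>
    #include <limits.h>
    #include <inttypes.h>   /* int32_t, INT32_MIN, PRId32 */

    int main(void) {
        int     a = INT_MIN;    /* the smallest int, whatever width int has */
        int32_t b = INT32_MIN;  /* exactly -2^31, on implementations that provide int32_t */
        printf("%d %" PRId32 "\n", a, b);
        return 0;
    }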

    You might want some alternative to writing out the number in decimal to make it clearer for those who might not notice the difference between 2147483648 and 2174483648, but you need to be careful.

    As mentioned above, on a 32-bit 2's-complement machine, (int)(-2147483648) will not overflow and is therefore well-defined, because -2147483648 will be treated as a wider signed type. However, the same is not true for (int)(-0x80000000). 0x80000000 will be treated as an unsigned int (since it fits in the unsigned representation); -0x80000000 is well-defined (but the - has no effect if int is 32 bits), and the conversion of the resulting unsigned int value 0x80000000 to int involves an overflow. To avoid the overflow, you would need to cast the hex constant to a signed type: (int)(-(long long)(0x80000000)).
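
    A quick sketch of that difference (again just an illustration, assuming C11 _Generic and a 32-bit int); the last line shows the detour through long long that keeps the negation signed:

    #include <stdio.h>

    int main(void) {
        /* With a 32-bit int, 0x80000000 does not fit in int but does fit in
           unsigned int, so the hex constant gets type unsigned int. */
        printf("0x80000000 has type %s\n",
               _Generic(0x80000000,
                        int: "int", unsigned int: "unsigned int",
                        long: "long", default: "something else"));
        /* Widen to long long first, then negate, then convert: nothing goes out of range. */
        printf("%d\n", (int)(-(long long)0x80000000));
        return 0;
    }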

    Similarly, you need to take care if you want to use the left-shift operator. 1<<31 is undefined behaviour on 32-bit machines with 32-bit (or smaller) ints; it will only evaluate to 2^31 if int is at least 33 bits, because a left shift by k bits is only well-defined if k is strictly less than the number of non-sign bits of the integer type of the left-hand operand.

    1LL<<31 is safe, since long long int is required to be able to represent 2^63 - 1, so its bit size must be greater than 32. So the form

    (int)(-(1LL<<31))
    

    is possibly the most readable. YMMV.
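
    Putting that together, a minimal sketch (assuming, as above, a 32-bit two's-complement int; long long is guaranteed to be at least 64 bits):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* 1 << 31 would be undefined with a 32-bit int, but 1LL << 31 is fine:
           long long has at least 63 value bits. */
        long long big      = 1LL << 31;           /* 2147483648 */
        int       smallest = (int)(-(1LL << 31)); /* well-defined where INT_MIN == -2^31 */
        printf("%lld %d %s\n", big, smallest,
               smallest == INT_MIN ? "== INT_MIN" : "!= INT_MIN");
        return 0;
    }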


    For any passing pedants, this question is tagged C, and the latest C draft (n1570.pdf) says, with respect to E1 << E2, where E1 has a signed type, that the value is defined only if E1 is nonnegative and E1 × 2^E2 "is representable in the result type" (§6.5.7 para. 4).

    That's different from C++, in which the application of the left-shift operator is defined if E1 is nonnegative and E1 × 2^E2 "is representable in the corresponding unsigned type of the result type" (§5.8 para. 2, emphasis added).

    In C++, according to the most recent draft standard, the conversion of an integer value to a signed integer type is implementation-defined if the value cannot be represented in the destination type (§4.7 para. 3). The corresponding paragraph of the C standard -- §6.3.1.3 para. 3 -- says that "either the result is implementation-defined or an implementation-defined signal is raised".
