Why is this the output when comparing a float and a double?

This question already has an answer here:

  • Difference between decimal, float and double in .NET?

  • When you do the first assignment, the constant is truncated to fit the `float`. When you do the second assignment, the `float`-precision literal 1.1111...11F is converted to `double`. Since `c` contains the value of the 1.1111...11F literal, the initialization of `d` is equivalent to

    double d = ((double)c);
    

    Both assignments change the precision of the constant from the literal, but they change it differently. That is why you see different printouts from the first two `WriteLine` calls.
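
    A minimal sketch of the two assignments (the question's full literal is abbreviated above, so the long value below is a hypothetical stand-in):

    ```csharp
    using System;

    class TruncationDemo
    {
        static void Main()
        {
            // Hypothetical stand-in for the question's long literal.
            float c = 1.11111111111111111F;   // truncated to float's ~7 significant decimal digits
            double d = 1.11111111111111111F;  // same float literal, then widened to double

            Console.WriteLine(c);  // prints the float-precision value
            Console.WriteLine(d);  // prints the double image of the already-truncated float
        }
    }
    ```

    Note that `d` does not get the full precision of the written digits either: the truncation to `float` happens first, and only then is the result widened.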

    When you compare `c` and `d`, the value with the lower precision, i.e. `c`, is converted to the type with the higher precision, i.e. `double`. That is the same conversion that was performed when you assigned the 1.1111...11F literal to variable `d`, so the two values compare equal in the `==` operation. In other words, when you write

    Console.WriteLine(c == d);
    

    the compiler does this:

    Console.WriteLine(((double)c) == d);
    

    That is why the comparison returns `true`.
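
    Putting it together, a short sketch of the comparison (again using a hypothetical stand-in for the abbreviated literal):

    ```csharp
    using System;

    class ComparisonDemo
    {
        static void Main()
        {
            // Hypothetical stand-in for the question's literal.
            float c = 1.11111111111111111F;
            double d = 1.11111111111111111F;  // equivalent to (double)c, as explained above

            // Before comparing, the compiler implicitly widens c to double --
            // the very same conversion that produced d's value.
            Console.WriteLine(c == d);           // True
            Console.WriteLine((double)c == d);   // True: this is what the compiler emits
        }
    }
    ```

    The equality holds because `float`-to-`double` widening is exact and deterministic: converting the same `float` value twice always yields the same `double`.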
