Is floating point math consistent in C#? Can it be?

No, this is not another "Why is (1/3.0)*3 != 1" question.

I've been reading a lot about floating point lately; specifically, how the same calculation might give different results on different architectures or with different optimization settings.

This is a problem for video games that store replays or are networked peer-to-peer (as opposed to client-server), because these designs rely on all clients generating exactly the same results every time they run the program. A small discrepancy in one floating-point calculation can lead to a drastically different game state on different machines (or even on the same machine!).

This happens even among processors that "follow" IEEE 754, primarily because some processors (namely x86) use double extended precision. That is, they use 80-bit registers to do all the calculations, then truncate to 64 or 32 bits, leading to rounding results different from machines that use 64 or 32 bits throughout.

I've seen several solutions to this problem online, but all for C++, not C#:

  • Disable double extended-precision mode (so that all double calculations use IEEE 754 64-bit values) using _controlfp_s (Windows), _FPU_SETCW (Linux?), or fpsetprec (BSD). A sketch of this approach appears after this list.
  • Always run the same compiler with the same optimization settings, and require all users to have the same CPU architecture (no cross-platform play). Because my "compiler" is actually the JIT, which may optimize differently every time the program is run, I don't think this is possible.
  • Use fixed-point arithmetic, and avoid float and double altogether. decimal would work for this purpose, but would be much slower, and none of the System.Math library functions support it.
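
For reference, here is roughly what the first approach might look like from C# via P/Invoke. This is only a sketch under a few assumptions: the constants are the usual values from the CRT's float.h, msvcrt.dll is assumed to export _controlfp, and even if the call succeeds the JIT may emit SSE instructions (which ignore the x87 control word) or the runtime may reset the control word, so this alone does not guarantee determinism.

    using System;
    using System.Runtime.InteropServices;

    static class FpuPrecision
    {
        // _controlfp from the Microsoft CRT; constants taken from float.h.
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern uint _controlfp(uint newControl, uint mask);

        private const uint _MCW_PC = 0x00030000; // precision-control mask
        private const uint _PC_53  = 0x00010000; // round x87 results to 53-bit (double) precision

        public static void ForceDoublePrecision()
        {
            // Request 53-bit precision instead of the 80-bit extended default.
            _controlfp(_PC_53, _MCW_PC);
        }
    }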

So, is this even a problem in C#? What if I only intend to support Windows (not Mono)?

If it is, is there any way to force my program to run at normal double-precision?

If not, are there any libraries that would help keep floating-point calculations consistent?


    I know of no way to make normal floating point math deterministic in .NET. The JITter is allowed to create code that behaves differently on different platforms (or between different versions of .NET), so using normal floats in deterministic .NET code is not possible.

    The workarounds I considered:

  • Implement FixedPoint32 in C#. While this is not too hard (I have a half-finished implementation), the very small range of values makes it annoying to use. You have to be careful at all times so that you neither overflow nor lose too much precision. In the end I found this no easier than using integers directly. A minimal sketch of the idea appears after this list.
  • Implement FixedPoint64 in C#. I found this rather hard to do. For some operations, 128-bit intermediate integers would be useful, but .NET doesn't offer such a type.
  • Implement a custom 32-bit floating point type. The lack of a BitScanReverse intrinsic causes a few annoyances when implementing this, but currently I think this is the most promising path.
  • Use native code for the math operations. Incurs the overhead of a delegate call on every math operation.
  • I've just started a software implementation of 32-bit floating point math. It can do about 70 million additions/multiplications per second on my 2.66 GHz i3: https://github.com/CodesInChaos/SoftFloat. Obviously it's still very incomplete and buggy.
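
    To illustrate the FixedPoint32 idea, here is a minimal sketch assuming a Q16.16 layout and no overflow or divide-by-zero checking (this is only an outline of the approach, not my half-finished implementation):

    public struct Fixed32
    {
        public const int FractionalBits = 16;
        public readonly int Raw;                 // stored as value * 2^16

        private Fixed32(int raw) { Raw = raw; }

        public static Fixed32 FromInt(int value) { return new Fixed32(value << FractionalBits); }
        public static Fixed32 FromRaw(int raw)   { return new Fixed32(raw); }
        public double ToDouble()                 { return Raw / (double)(1 << FractionalBits); }

        // Addition and subtraction are plain integer operations on the raw values.
        public static Fixed32 operator +(Fixed32 a, Fixed32 b) { return new Fixed32(a.Raw + b.Raw); }
        public static Fixed32 operator -(Fixed32 a, Fixed32 b) { return new Fixed32(a.Raw - b.Raw); }

        // Multiply in 64 bits, then shift the product back down by the fractional bits.
        public static Fixed32 operator *(Fixed32 a, Fixed32 b)
        {
            return new Fixed32((int)(((long)a.Raw * b.Raw) >> FractionalBits));
        }

        // Pre-shift the dividend so the quotient keeps its fractional bits.
        public static Fixed32 operator /(Fixed32 a, Fixed32 b)
        {
            return new Fixed32((int)(((long)a.Raw << FractionalBits) / b.Raw));
        }
    }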


    The C# specification (§4.1.6, Floating point types) specifically allows floating point computations to be done using precision higher than that of the result. So, no, I don't think you can make those calculations deterministic directly in .NET. Others have suggested various workarounds, so you could try them.


    The following page may be useful in the case where you need absolute portability of such operations. It discusses software for testing implementations of the IEEE 754 standard, including software for emulating floating point operations. Most information is probably specific to C or C++, however.

    http://www.math.utah.edu/~beebe/software/ieee/

    A note on fixed point

    Binary fixed point numbers can also work well as a substitute for floating point, as is evident from the four basic arithmetic operations:

  • Addition and subtraction are trivial. They work the same way as integers. Just add or subtract!
  • To multiply two fixed point numbers, multiply their raw values (ideally in a wider intermediate type to avoid overflow), then shift the product right by the defined number of fractional bits.
  • To divide two fixed point numbers, shift the dividend left by the defined number of fractional bits, then divide by the divisor's raw value. A short worked example of both rules follows this list.
  • Chapter four of this paper has additional guidance on implementing binary fixed point numbers.
  • Binary fixed point numbers can be implemented on any integer data type such as int, long, and BigInteger, and the non-CLS-compliant types uint and ulong.
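
    As a quick check of the multiplication and division rules above, here is a small worked example; the format (8 fractional bits) and the variable names are just for illustration:

    const int Frac = 8;                        // 8 fractional bits, scale = 256
    int a = 3 * (1 << Frac) / 2;               // 1.5  -> raw 384
    int b = 2 * (1 << Frac);                   // 2.0  -> raw 512
    int sum  = a + b;                          // 1.5 + 2.0 = 3.5  -> raw 896
    int prod = (int)(((long)a * b) >> Frac);   // 1.5 * 2.0 = 3.0  -> raw 768
    int quot = (int)(((long)a << Frac) / b);   // 1.5 / 2.0 = 0.75 -> raw 192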

    As suggested in another answer, you can use lookup tables, where each element in the table is a binary fixed point number, to help implement complex functions such as sine, cosine, square root, and so on. If the lookup table is less granular than the fixed point number, round the input by adding half of the lookup table's granularity to it before shifting:

    // Assume each number has a 12-bit fractional part (units of 1/4096).
    // Each entry in the lookup table corresponds to a fixed point number
    // with an 8-bit fractional part (units of 1/256).
    input += 1 << 3;  // add half of 2^4 so the truncating shift below rounds to nearest
    input >>= 4;      // shift right by 4 to reduce the 12-bit fraction to 8 bits
    // --- clamp or restrict input to the table's range here ---
    // Look up the value.
    return lookupTable[input];
    