Floating Point Div/Mul > 30 times slower than Add/Sub?

I recently read this post: Floating point vs integer calculations on modern hardware, and was curious how my own processor would fare on this quasi-benchmark, so I put together two versions of the code, one in C# and one in C++ (Visual Studio 2010 Express), and compiled both with optimizations to see what would fall out. The output from my C# version is fairly reasonable:

int add/sub: 350ms
int div/mul: 3469ms
float add/sub: 1007ms
float div/mul: 67493ms
double add/sub: 1914ms
double div/mul: 2766ms

When I compiled and ran the C++ version something completely different shook out:

int add/sub: 210.653ms
int div/mul: 2946.58ms
float add/sub: 3022.58ms
float div/mul: 172931ms
double add/sub: 1007.63ms
double div/mul: 74171.9ms

I expected some performance differences, but not this large! I don't understand why division/multiplication in C++ is so much slower than addition/subtraction, whereas the managed C# version is much closer to my expectations. The code for the C++ version of the function is as follows:

template<typename T> void GenericTest(const char *typestring)
{
    T v = 0;
    T v0 = (T)((rand() % 256) / 16) + 1;
    T v1 = (T)((rand() % 256) / 16) + 1;
    T v2 = (T)((rand() % 256) / 16) + 1;
    T v3 = (T)((rand() % 256) / 16) + 1;
    T v4 = (T)((rand() % 256) / 16) + 1;
    T v5 = (T)((rand() % 256) / 16) + 1;
    T v6 = (T)((rand() % 256) / 16) + 1;
    T v7 = (T)((rand() % 256) / 16) + 1;
    T v8 = (T)((rand() % 256) / 16) + 1;
    T v9 = (T)((rand() % 256) / 16) + 1;

    HTimer tmr = HTimer();
    tmr.Start();
    for (int i = 0 ; i < 100000000 ; ++i)
    {
        v += v0;
        v -= v1;
        v += v2;
        v -= v3;
        v += v4;
        v -= v5;
        v += v6;
        v -= v7;
        v += v8;
        v -= v9;
    }
    tmr.Stop();

      // I removed the bracketed values from the results above; printing them just makes
      // the compiler assume I am using v for something so it doesn't optimize it out.
    cout << typestring << " add/sub: " << tmr.Elapsed() * 1000 << "ms [" << (int)v << "]" << endl;

    tmr.Start();
    for (int i = 0 ; i < 100000000 ; ++i)
    {
        v /= v0;
        v *= v1;
        v /= v2;
        v *= v3;
        v /= v4;
        v *= v5;
        v /= v6;
        v *= v7;
        v /= v8;
        v *= v9;
    }
    tmr.Stop();

    cout << typestring << " div/mul: " << tmr.Elapsed() * 1000 << "ms [" << (int)v << "]" << endl;
}

The code for the C# tests is not generic, and is implemented as follows:

static double DoubleTest()
{
    Random rnd = new Random();
    Stopwatch sw = new Stopwatch();

    double v = 0;
    double v0 = (double)rnd.Next(1, int.MaxValue);
    double v1 = (double)rnd.Next(1, int.MaxValue);
    double v2 = (double)rnd.Next(1, int.MaxValue);
    double v3 = (double)rnd.Next(1, int.MaxValue);
    double v4 = (double)rnd.Next(1, int.MaxValue);
    double v5 = (double)rnd.Next(1, int.MaxValue);
    double v6 = (double)rnd.Next(1, int.MaxValue);
    double v7 = (double)rnd.Next(1, int.MaxValue);
    double v8 = (double)rnd.Next(1, int.MaxValue);
    double v9 = (double)rnd.Next(1, int.MaxValue);

    sw.Start();
    for (int i = 0; i < 100000000; i++)
    {
        v += v0;
        v -= v1;
        v += v2;
        v -= v3;
        v += v4;
        v -= v5;
        v += v6;
        v -= v7;
        v += v8;
        v -= v9;
    }
    sw.Stop();

    Console.WriteLine("double add/sub: {0}", sw.ElapsedMilliseconds);
    sw.Reset();

    sw.Start();
    for (int i = 0; i < 100000000; i++)
    {
        v /= v0;
        v *= v1;
        v /= v2;
        v *= v3;
        v /= v4;
        v *= v5;
        v /= v6;
        v *= v7;
        v /= v8;
        v *= v9;
    }
    sw.Stop();

    Console.WriteLine("double div/mul: {0}", sw.ElapsedMilliseconds);
    sw.Reset();

    return v;
}

Any ideas here?


For the float div/mul tests, you're probably getting denormalized values, which are much slower to process than normal floating point values. This isn't an issue for the int tests and would crop up much later for the double tests.

You should be able to add this to the start of the C++ code to flush denormals to zero:

#include <float.h>   // declares _controlfp in the MSVC CRT

_controlfp(_DN_FLUSH, _MCW_DN);

I'm not sure how to do it in C# though (or if it's even possible).

Some more info here: Floating Point Math Execution Time
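
A quick way to confirm the denormal theory (a minimal standalone sketch in C++, not part of the original benchmark) is to classify the running value as it is divided repeatedly; once it drops below FLT_MIN, std::fpclassify reports it as subnormal and the slow path is in play:

#include <cfloat>
#include <cmath>
#include <cstdio>

// Minimal sketch: show how quickly repeated division drives a float below
// FLT_MIN and into the (slow) denormal/subnormal range.
int main()
{
    float v = 1.0f;
    for (int i = 1; i <= 200; ++i)
    {
        v /= 3.0f;
        if (std::fpclassify(v) == FP_SUBNORMAL)
        {
            std::printf("v is denormal after %d divisions (v = %g, FLT_MIN = %g)\n",
                        i, v, FLT_MIN);
            break;
        }
    }
    return 0;
}

On x86 with SSE, setting flush-to-zero/denormals-are-zero via _MM_SET_FLUSH_ZERO_MODE and _MM_SET_DENORMALS_ZERO_MODE (from <xmmintrin.h>/<pmmintrin.h>) should have a similar effect to the _controlfp call above, if you prefer the intrinsics.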


It's possible that C# optimized the division by vx to multiplication by 1 / vx since it knows those values aren't modified during the loop and it can compute the inverses just once up front.

You can do this optimization yourself and time it in C++.
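
A minimal sketch of that experiment (my own std::chrono harness and variable names, not the question's HTimer code) might look like this:

#include <chrono>
#include <cstdio>

// Compare repeated division against multiplication by a precomputed reciprocal.
static double TimeLoopMs(bool useReciprocal, float d)
{
    float inv = 1.0f / d;   // hoisted out of the loop, as the JIT may have done
    float v = 1.0f;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 100000000; ++i)
    {
        if (useReciprocal)
            v *= inv;       // multiply by 1/d
        else
            v /= d;         // genuine division
        v *= d;             // pull v back toward 1 so it never overflows or denormalizes
    }
    auto stop = std::chrono::steady_clock::now();

    std::printf("[%g] ", v);  // print v so the compiler can't discard the loop
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    volatile float seed = 3.0f;   // volatile: keep the compiler from constant-folding d
    float d = seed;
    double divMs = TimeLoopMs(false, d);
    double recipMs = TimeLoopMs(true, d);
    std::printf("div loop:        %.1f ms\n", divMs);
    std::printf("reciprocal loop: %.1f ms\n", recipMs);
    return 0;
}

If the reciprocal version runs in roughly the time of the add/sub loop while the division version does not, that would support the idea that the C# numbers reflect this kind of transformation.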


If you're interested in floating point speed and possible optimizations, read this book: http://www.agner.org/optimize/optimizing_cpp.pdf

You can also check this: http://msdn.microsoft.com/en-us/library/aa289157%28VS.71%29.aspx

Your results could depend on things such as the JIT and the compilation flags (debug vs. release, what kind of FP optimizations are allowed, and which instruction set may be used).

Try setting these flags to maximum optimization, and change your program so that it definitely won't produce overflows or NaNs, because they affect computation speed. (Even something like "v += v1; v += v2; v -= v1; v -= v2;" is fine, because it won't be reduced away under the "strict" or "precise" floating point modes.) Also try not to use more variables than you have FP registers.
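
As a concrete illustration of that last point (my own rearrangement, not code from the question), a loop in which every pair of operations cancels keeps the running value bounded no matter how long it runs:

#include <cstdio>

// Rearranged add/sub loop: each pair of operations cancels (up to rounding),
// so v stays bounded and no overflow, NaN, or denormal can distort the timing,
// yet under /fp:precise or /fp:strict the compiler still has to execute
// every add and subtract.
int main()
{
    double v = 0.0;
    double v0 = 3.5, v1 = 7.25;   // arbitrary values; nothing here can overflow

    for (int i = 0; i < 100000000; ++i)
    {
        v += v0;
        v += v1;
        v -= v0;
        v -= v1;
    }

    std::printf("v = %g\n", v);   // use v so the loop isn't discarded
    return 0;
}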
