How do I achieve the theoretical maximum of 4 FLOPs per cycle?

How can the theoretical peak performance of 4 floating point operations (double precision) per cycle be achieved on a modern x86-64 Intel CPU?

As far as I understand, it takes three cycles for an SSE add and five cycles for a mul to complete on most modern Intel CPUs (see, for example, Agner Fog's 'Instruction Tables'). Due to pipelining, one can get a throughput of one add per cycle if the algorithm has at least three independent summations. Since that is true for the packed addpd as well as the scalar addsd version, and SSE registers can hold two doubles, the throughput can be as much as two flops per cycle.

Furthermore, it seems (although I've not seen any proper documentation on this) that adds and muls can be executed in parallel, giving a theoretical maximum throughput of four flops per cycle.
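
To spell out the arithmetic behind that claim (assuming one add port and one mul port, each able to retire one packed instruction per cycle):

    1 addpd/cycle x 2 doubles = 2 flops/cycle
    1 mulpd/cycle x 2 doubles = 2 flops/cycle
                        total = 4 flops/cycle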

However, I've not been able to replicate that performance with a simple C/C++ programme. My best attempt resulted in about 2.7 flops/cycle. If anyone can contribute a simple C/C++ or assembler programme which demonstrates peak performance, that'd be greatly appreciated.

My attempt:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sys/time.h>

double stoptime(void) {
   struct timeval t;
   gettimeofday(&t,NULL);
   return (double) t.tv_sec + t.tv_usec/1000000.0;
}

double addmul(double add, double mul, int ops){
   // Need to initialise differently otherwise compiler might optimise away
   double sum1=0.1, sum2=-0.1, sum3=0.2, sum4=-0.2, sum5=0.0;
   double mul1=1.0, mul2= 1.1, mul3=1.2, mul4= 1.3, mul5=1.4;
   int loops=ops/10;          // We have 10 floating point operations inside the loop
   double expected = 5.0*add*loops + (sum1+sum2+sum3+sum4+sum5)
               + pow(mul,loops)*(mul1+mul2+mul3+mul4+mul5);

   for (int i=0; i<loops; i++) {
      mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
      sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
   }
   return  sum1+sum2+sum3+sum4+sum5+mul1+mul2+mul3+mul4+mul5 - expected;
}

int main(int argc, char** argv) {
   if (argc != 2) {
      printf("usage: %s <num>n", argv[0]);
      printf("number of operations: <num> millionsn");
      exit(EXIT_FAILURE);
   }
   int n = atoi(argv[1]) * 1000000;
   if (n<=0)
       n=1000;

   double x = M_PI;
   double y = 1.0 + 1e-8;
   double t = stoptime();
   x = addmul(x, y, n);
   t = stoptime() - t;
   printf("addmul:t %.3f s, %.3f Gflops, res=%fn", t, (double)n/t/1e9, x);
   return EXIT_SUCCESS;
}

Compiled with

g++ -O2 -march=native addmul.cpp ; ./a.out 1000

produces the following output on an Intel Core i5-750, 2.66 GHz.

addmul:  0.270 s, 3.707 Gflops, res=1.326463

That is, just about 1.4 flops per cycle. Looking at the assembler code with g++ -S -O2 -march=native -masm=intel addmul.cpp, the main loop seems kind of optimal to me:

.L4:
inc    eax
mulsd    xmm8, xmm3
mulsd    xmm7, xmm3
mulsd    xmm6, xmm3
mulsd    xmm5, xmm3
mulsd    xmm1, xmm3
addsd    xmm13, xmm2
addsd    xmm12, xmm2
addsd    xmm11, xmm2
addsd    xmm10, xmm2
addsd    xmm9, xmm2
cmp    eax, ebx
jne    .L4

Replacing the scalar versions with the packed versions (addpd and mulpd) would double the flop count without changing the execution time, and so I'd get just short of 2.8 flops per cycle. Is there a simple example which achieves four flops per cycle?
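
For reference, a minimal sketch of one way the packed variant of that loop could be written with SSE2 intrinsics (illustrative only - the helper name and layout are hypothetical; each __m128d carries two doubles, so the loop body does 20 flops per iteration):

#include <emmintrin.h>

// Packed (addpd/mulpd) variant of addmul(): the same 5 mul chains and 5 add
// chains, but each __m128d holds two doubles, i.e. 20 flops per iteration.
static double addmul_packed(double add, double mul, int loops) {
   __m128d vadd = _mm_set1_pd(add), vmul = _mm_set1_pd(mul);
   __m128d s1 = _mm_set1_pd(0.1),  s2 = _mm_set1_pd(-0.1), s3 = _mm_set1_pd(0.2),
           s4 = _mm_set1_pd(-0.2), s5 = _mm_setzero_pd();
   __m128d m1 = _mm_set1_pd(1.0),  m2 = _mm_set1_pd(1.1),  m3 = _mm_set1_pd(1.2),
           m4 = _mm_set1_pd(1.3),  m5 = _mm_set1_pd(1.4);
   for (int i = 0; i < loops; i++) {
      m1 = _mm_mul_pd(m1, vmul); m2 = _mm_mul_pd(m2, vmul); m3 = _mm_mul_pd(m3, vmul);
      m4 = _mm_mul_pd(m4, vmul); m5 = _mm_mul_pd(m5, vmul);
      s1 = _mm_add_pd(s1, vadd); s2 = _mm_add_pd(s2, vadd); s3 = _mm_add_pd(s3, vadd);
      s4 = _mm_add_pd(s4, vadd); s5 = _mm_add_pd(s5, vadd);
   }
   // Reduce everything to a single double so the compiler cannot drop the loop.
   __m128d r = _mm_add_pd(_mm_add_pd(s1, s2), _mm_add_pd(s3, s4));
   r = _mm_add_pd(r, _mm_add_pd(_mm_add_pd(m1, m2), _mm_add_pd(m3, m4)));
   r = _mm_add_pd(r, _mm_add_pd(s5, m5));
   double out[2];
   _mm_storeu_pd(out, r);
   return out[0] + out[1];
}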

Nice little programme by Mysticial; here are my results (run just for a few seconds though):

  • gcc -O2 -march=nocona: 5.6 Gflops out of 10.66 Gflops (2.1 flops/cycle)
  • cl /O2, OpenMP removed: 10.1 Gflops out of 10.66 Gflops (3.8 flops/cycle)

It all seems a bit complex, but my conclusions so far:

  • gcc -O2 changes the order of independent floating point operations with the aim of alternating addpd's and mulpd's if possible. The same applies to gcc-4.6.2 -O2 -march=core2.

  • gcc -O2 -march=nocona seems to keep the order of floating point operations as defined in the C++ source.

  • cl /O2, the 64-bit compiler from the SDK for Windows 7, does loop-unrolling automatically and seems to try to arrange operations so that groups of three addpd's alternate with groups of three mulpd's (well, at least on my system and for my simple programme).

  • My Core i5 750 (Nehalem architecture) doesn't like alternating adds and muls and seems unable to run both operations in parallel. However, if grouped in threes, it suddenly works like magic.

  • Other architectures (possibly Sandy Bridge and others) appear to be able to execute add/mul in parallel without problems if they alternate in the assembly code.

  • Although it's difficult to admit, on my system cl /O2 does a much better job of low-level optimising and achieves close to peak performance for the little C++ example above. I measured between 1.85 and 2.01 flops/cycle (I used clock() on Windows, which is not that precise; I guess I need to use a better timer - thanks Mackie Messer).

  • The best I managed with gcc was to manually unroll the loop and arrange additions and multiplications in groups of three. With g++ -O2 -march=nocona addmul_unroll.cpp I get at best 0.207 s and 4.825 Gflops, which corresponds to 1.8 flops/cycle, which I'm quite happy with now.

  • In the C++ code I've replaced the for loop with

       for (int i=0; i<loops/3; i++) {
           mul1*=mul; mul2*=mul; mul3*=mul;
           sum1+=add; sum2+=add; sum3+=add;
           mul4*=mul; mul5*=mul; mul1*=mul;
           sum4+=add; sum5+=add; sum1+=add;
    
           mul2*=mul; mul3*=mul; mul4*=mul;
           sum2+=add; sum3+=add; sum4+=add;
           mul5*=mul; mul1*=mul; mul2*=mul;
           sum5+=add; sum1+=add; sum2+=add;
    
           mul3*=mul; mul4*=mul; mul5*=mul;
           sum3+=add; sum4+=add; sum5+=add;
       }
    

    And the assembly now looks like

    .L4:
    mulsd    xmm8, xmm3
    mulsd    xmm7, xmm3
    mulsd    xmm6, xmm3
    addsd    xmm13, xmm2
    addsd    xmm12, xmm2
    addsd    xmm11, xmm2
    mulsd    xmm5, xmm3
    mulsd    xmm1, xmm3
    mulsd    xmm8, xmm3
    addsd    xmm10, xmm2
    addsd    xmm9, xmm2
    addsd    xmm13, xmm2
    ...
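
    On the timer remark above: a minimal sketch of a higher-resolution replacement for stoptime() (assumes POSIX clock_gettime is available; older glibc needs -lrt, and on Windows QueryPerformanceCounter plays the same role):

    #include <time.h>

    // Drop-in replacement for stoptime() using a monotonic clock instead of
    // gettimeofday(), so the measurement isn't affected by wall-clock adjustments.
    double stoptime(void) {
       struct timespec ts;
       clock_gettime(CLOCK_MONOTONIC, &ts);
       return (double)ts.tv_sec + ts.tv_nsec/1e9;
    }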
    

    I've done this exact task before, but it was mainly to measure power consumption and CPU temperatures. The following code (which is fairly long) achieves close to optimal performance on my Core i7 2600K.

    The key thing to note here is the massive amount of manual loop-unrolling as well as interleaving of multiplies and adds...

    The full project can be found on my GitHub: https://github.com/Mysticial/Flops

    Warning:

    If you decide to compile and run this, pay attention to your CPU temperatures!!!
    Make sure you don't overheat it. And make sure CPU-throttling doesn't affect your results!

    Furthermore, I take no responsibility for whatever damage that may result from running this code.

    Notes:

  • This code is optimized for x64. x86 doesn't have enough registers for this to compile well.
  • This code has been tested to work well on Visual Studio 2010/2012 and GCC 4.6.
    ICC 11 (Intel Compiler 11) surprisingly has trouble compiling it well.
  • These are for pre-FMA processors. In order to achieve peak FLOPS on Intel Haswell and AMD Bulldozer processors (and later), FMA (Fused Multiply Add) instructions will be needed. These are beyond the scope of this benchmark.
    #include <emmintrin.h>
    #include <omp.h>
    #include <stdlib.h>     // malloc/free and system()
    #include <iostream>
    using namespace std;
    
    typedef unsigned long long uint64;
    
    double test_dp_mac_SSE(double x,double y,uint64 iterations){
        register __m128d r0,r1,r2,r3,r4,r5,r6,r7,r8,r9,rA,rB,rC,rD,rE,rF;
    
        //  Generate starting data.
        r0 = _mm_set1_pd(x);
        r1 = _mm_set1_pd(y);
    
        r8 = _mm_set1_pd(-0.0);
    
        r2 = _mm_xor_pd(r0,r8);
        r3 = _mm_or_pd(r0,r8);
        r4 = _mm_andnot_pd(r8,r0);
        r5 = _mm_mul_pd(r1,_mm_set1_pd(0.37796447300922722721));
        r6 = _mm_mul_pd(r1,_mm_set1_pd(0.24253562503633297352));
        r7 = _mm_mul_pd(r1,_mm_set1_pd(4.1231056256176605498));
        r8 = _mm_add_pd(r0,_mm_set1_pd(0.37796447300922722721));
        r9 = _mm_add_pd(r1,_mm_set1_pd(0.24253562503633297352));
        rA = _mm_sub_pd(r0,_mm_set1_pd(4.1231056256176605498));
        rB = _mm_sub_pd(r1,_mm_set1_pd(4.1231056256176605498));
    
        rC = _mm_set1_pd(1.4142135623730950488);
        rD = _mm_set1_pd(1.7320508075688772935);
        rE = _mm_set1_pd(0.57735026918962576451);
        rF = _mm_set1_pd(0.70710678118654752440);
    
        uint64 iMASK = 0x800fffffffffffffull;
        __m128d MASK = _mm_set1_pd(*(double*)&iMASK);
        __m128d vONE = _mm_set1_pd(1.0);
    
        uint64 c = 0;
        while (c < iterations){
            size_t i = 0;
            while (i < 1000){
                //  Here's the meat - the part that really matters.
    
                r0 = _mm_mul_pd(r0,rC);
                r1 = _mm_add_pd(r1,rD);
                r2 = _mm_mul_pd(r2,rE);
                r3 = _mm_sub_pd(r3,rF);
                r4 = _mm_mul_pd(r4,rC);
                r5 = _mm_add_pd(r5,rD);
                r6 = _mm_mul_pd(r6,rE);
                r7 = _mm_sub_pd(r7,rF);
                r8 = _mm_mul_pd(r8,rC);
                r9 = _mm_add_pd(r9,rD);
                rA = _mm_mul_pd(rA,rE);
                rB = _mm_sub_pd(rB,rF);
    
                r0 = _mm_add_pd(r0,rF);
                r1 = _mm_mul_pd(r1,rE);
                r2 = _mm_sub_pd(r2,rD);
                r3 = _mm_mul_pd(r3,rC);
                r4 = _mm_add_pd(r4,rF);
                r5 = _mm_mul_pd(r5,rE);
                r6 = _mm_sub_pd(r6,rD);
                r7 = _mm_mul_pd(r7,rC);
                r8 = _mm_add_pd(r8,rF);
                r9 = _mm_mul_pd(r9,rE);
                rA = _mm_sub_pd(rA,rD);
                rB = _mm_mul_pd(rB,rC);
    
                r0 = _mm_mul_pd(r0,rC);
                r1 = _mm_add_pd(r1,rD);
                r2 = _mm_mul_pd(r2,rE);
                r3 = _mm_sub_pd(r3,rF);
                r4 = _mm_mul_pd(r4,rC);
                r5 = _mm_add_pd(r5,rD);
                r6 = _mm_mul_pd(r6,rE);
                r7 = _mm_sub_pd(r7,rF);
                r8 = _mm_mul_pd(r8,rC);
                r9 = _mm_add_pd(r9,rD);
                rA = _mm_mul_pd(rA,rE);
                rB = _mm_sub_pd(rB,rF);
    
                r0 = _mm_add_pd(r0,rF);
                r1 = _mm_mul_pd(r1,rE);
                r2 = _mm_sub_pd(r2,rD);
                r3 = _mm_mul_pd(r3,rC);
                r4 = _mm_add_pd(r4,rF);
                r5 = _mm_mul_pd(r5,rE);
                r6 = _mm_sub_pd(r6,rD);
                r7 = _mm_mul_pd(r7,rC);
                r8 = _mm_add_pd(r8,rF);
                r9 = _mm_mul_pd(r9,rE);
                rA = _mm_sub_pd(rA,rD);
                rB = _mm_mul_pd(rB,rC);
    
                i++;
            }
    
            //  Need to renormalize to prevent denormal/overflow.
            r0 = _mm_and_pd(r0,MASK);
            r1 = _mm_and_pd(r1,MASK);
            r2 = _mm_and_pd(r2,MASK);
            r3 = _mm_and_pd(r3,MASK);
            r4 = _mm_and_pd(r4,MASK);
            r5 = _mm_and_pd(r5,MASK);
            r6 = _mm_and_pd(r6,MASK);
            r7 = _mm_and_pd(r7,MASK);
            r8 = _mm_and_pd(r8,MASK);
            r9 = _mm_and_pd(r9,MASK);
            rA = _mm_and_pd(rA,MASK);
            rB = _mm_and_pd(rB,MASK);
            r0 = _mm_or_pd(r0,vONE);
            r1 = _mm_or_pd(r1,vONE);
            r2 = _mm_or_pd(r2,vONE);
            r3 = _mm_or_pd(r3,vONE);
            r4 = _mm_or_pd(r4,vONE);
            r5 = _mm_or_pd(r5,vONE);
            r6 = _mm_or_pd(r6,vONE);
            r7 = _mm_or_pd(r7,vONE);
            r8 = _mm_or_pd(r8,vONE);
            r9 = _mm_or_pd(r9,vONE);
            rA = _mm_or_pd(rA,vONE);
            rB = _mm_or_pd(rB,vONE);
    
            c++;
        }
    
        r0 = _mm_add_pd(r0,r1);
        r2 = _mm_add_pd(r2,r3);
        r4 = _mm_add_pd(r4,r5);
        r6 = _mm_add_pd(r6,r7);
        r8 = _mm_add_pd(r8,r9);
        rA = _mm_add_pd(rA,rB);
    
        r0 = _mm_add_pd(r0,r2);
        r4 = _mm_add_pd(r4,r6);
        r8 = _mm_add_pd(r8,rA);
    
        r0 = _mm_add_pd(r0,r4);
        r0 = _mm_add_pd(r0,r8);
    
    
        //  Prevent Dead Code Elimination
        double out = 0;
        __m128d temp = r0;
        out += ((double*)&temp)[0];
        out += ((double*)&temp)[1];
    
        return out;
    }
    
    void test_dp_mac_SSE(int tds,uint64 iterations){
    
        double *sum = (double*)malloc(tds * sizeof(double));
        double start = omp_get_wtime();
    
    #pragma omp parallel num_threads(tds)
        {
            double ret = test_dp_mac_SSE(1.1,2.1,iterations);
            sum[omp_get_thread_num()] = ret;
        }
    
        double secs = omp_get_wtime() - start;
        uint64 ops = 48 * 1000 * iterations * tds * 2;
        cout << "Seconds = " << secs << endl;
        cout << "FP Ops  = " << ops << endl;
        cout << "FLOPs   = " << ops / secs << endl;
    
        double out = 0;
        int c = 0;
        while (c < tds){
            out += sum[c++];
        }
    
        cout << "sum = " << out << endl;
        cout << endl;
    
        free(sum);
    }
    
    int main(){
        //  (threads, iterations)
        test_dp_mac_SSE(8,10000000);
    
        system("pause");
    }
    

    Output (1 thread, 10000000 iterations) - Compiled with Visual Studio 2010 SP1 - x64 Release:

    Seconds = 55.5104
    FP Ops  = 960000000000
    FLOPs   = 1.7294e+010
    sum = 2.22652
    

    The machine is a Core i7 2600K @ 4.4 GHz. Theoretical SSE peak is 4 flops * 4.4 GHz = 17.6 GFlops. This code achieves 17.3 GFlops - not bad.

    Output (8 threads, 10000000 iterations) - Compiled with Visual Studio 2010 SP1 - x64 Release:

    Seconds = 117.202
    FP Ops  = 7680000000000
    FLOPs   = 6.55279e+010
    sum = 17.8122
    

    Theoretical SSE peak is 4 flops * 4 cores * 4.4 GHz = 70.4 GFlops. Actual is 65.5 GFlops.


    Let's take this one step further. AVX...

    #include <immintrin.h>
    #include <omp.h>
    #include <stdlib.h>     // malloc/free and system()
    #include <iostream>
    using namespace std;
    
    typedef unsigned long long uint64;
    
    double test_dp_mac_AVX(double x,double y,uint64 iterations){
        register __m256d r0,r1,r2,r3,r4,r5,r6,r7,r8,r9,rA,rB,rC,rD,rE,rF;
    
        //  Generate starting data.
        r0 = _mm256_set1_pd(x);
        r1 = _mm256_set1_pd(y);
    
        r8 = _mm256_set1_pd(-0.0);
    
        r2 = _mm256_xor_pd(r0,r8);
        r3 = _mm256_or_pd(r0,r8);
        r4 = _mm256_andnot_pd(r8,r0);
        r5 = _mm256_mul_pd(r1,_mm256_set1_pd(0.37796447300922722721));
        r6 = _mm256_mul_pd(r1,_mm256_set1_pd(0.24253562503633297352));
        r7 = _mm256_mul_pd(r1,_mm256_set1_pd(4.1231056256176605498));
        r8 = _mm256_add_pd(r0,_mm256_set1_pd(0.37796447300922722721));
        r9 = _mm256_add_pd(r1,_mm256_set1_pd(0.24253562503633297352));
        rA = _mm256_sub_pd(r0,_mm256_set1_pd(4.1231056256176605498));
        rB = _mm256_sub_pd(r1,_mm256_set1_pd(4.1231056256176605498));
    
        rC = _mm256_set1_pd(1.4142135623730950488);
        rD = _mm256_set1_pd(1.7320508075688772935);
        rE = _mm256_set1_pd(0.57735026918962576451);
        rF = _mm256_set1_pd(0.70710678118654752440);
    
        uint64 iMASK = 0x800fffffffffffffull;
        __m256d MASK = _mm256_set1_pd(*(double*)&iMASK);
        __m256d vONE = _mm256_set1_pd(1.0);
    
        uint64 c = 0;
        while (c < iterations){
            size_t i = 0;
            while (i < 1000){
                //  Here's the meat - the part that really matters.
    
                r0 = _mm256_mul_pd(r0,rC);
                r1 = _mm256_add_pd(r1,rD);
                r2 = _mm256_mul_pd(r2,rE);
                r3 = _mm256_sub_pd(r3,rF);
                r4 = _mm256_mul_pd(r4,rC);
                r5 = _mm256_add_pd(r5,rD);
                r6 = _mm256_mul_pd(r6,rE);
                r7 = _mm256_sub_pd(r7,rF);
                r8 = _mm256_mul_pd(r8,rC);
                r9 = _mm256_add_pd(r9,rD);
                rA = _mm256_mul_pd(rA,rE);
                rB = _mm256_sub_pd(rB,rF);
    
                r0 = _mm256_add_pd(r0,rF);
                r1 = _mm256_mul_pd(r1,rE);
                r2 = _mm256_sub_pd(r2,rD);
                r3 = _mm256_mul_pd(r3,rC);
                r4 = _mm256_add_pd(r4,rF);
                r5 = _mm256_mul_pd(r5,rE);
                r6 = _mm256_sub_pd(r6,rD);
                r7 = _mm256_mul_pd(r7,rC);
                r8 = _mm256_add_pd(r8,rF);
                r9 = _mm256_mul_pd(r9,rE);
                rA = _mm256_sub_pd(rA,rD);
                rB = _mm256_mul_pd(rB,rC);
    
                r0 = _mm256_mul_pd(r0,rC);
                r1 = _mm256_add_pd(r1,rD);
                r2 = _mm256_mul_pd(r2,rE);
                r3 = _mm256_sub_pd(r3,rF);
                r4 = _mm256_mul_pd(r4,rC);
                r5 = _mm256_add_pd(r5,rD);
                r6 = _mm256_mul_pd(r6,rE);
                r7 = _mm256_sub_pd(r7,rF);
                r8 = _mm256_mul_pd(r8,rC);
                r9 = _mm256_add_pd(r9,rD);
                rA = _mm256_mul_pd(rA,rE);
                rB = _mm256_sub_pd(rB,rF);
    
                r0 = _mm256_add_pd(r0,rF);
                r1 = _mm256_mul_pd(r1,rE);
                r2 = _mm256_sub_pd(r2,rD);
                r3 = _mm256_mul_pd(r3,rC);
                r4 = _mm256_add_pd(r4,rF);
                r5 = _mm256_mul_pd(r5,rE);
                r6 = _mm256_sub_pd(r6,rD);
                r7 = _mm256_mul_pd(r7,rC);
                r8 = _mm256_add_pd(r8,rF);
                r9 = _mm256_mul_pd(r9,rE);
                rA = _mm256_sub_pd(rA,rD);
                rB = _mm256_mul_pd(rB,rC);
    
                i++;
            }
    
            //  Need to renormalize to prevent denormal/overflow.
            r0 = _mm256_and_pd(r0,MASK);
            r1 = _mm256_and_pd(r1,MASK);
            r2 = _mm256_and_pd(r2,MASK);
            r3 = _mm256_and_pd(r3,MASK);
            r4 = _mm256_and_pd(r4,MASK);
            r5 = _mm256_and_pd(r5,MASK);
            r6 = _mm256_and_pd(r6,MASK);
            r7 = _mm256_and_pd(r7,MASK);
            r8 = _mm256_and_pd(r8,MASK);
            r9 = _mm256_and_pd(r9,MASK);
            rA = _mm256_and_pd(rA,MASK);
            rB = _mm256_and_pd(rB,MASK);
            r0 = _mm256_or_pd(r0,vONE);
            r1 = _mm256_or_pd(r1,vONE);
            r2 = _mm256_or_pd(r2,vONE);
            r3 = _mm256_or_pd(r3,vONE);
            r4 = _mm256_or_pd(r4,vONE);
            r5 = _mm256_or_pd(r5,vONE);
            r6 = _mm256_or_pd(r6,vONE);
            r7 = _mm256_or_pd(r7,vONE);
            r8 = _mm256_or_pd(r8,vONE);
            r9 = _mm256_or_pd(r9,vONE);
            rA = _mm256_or_pd(rA,vONE);
            rB = _mm256_or_pd(rB,vONE);
    
            c++;
        }
    
        r0 = _mm256_add_pd(r0,r1);
        r2 = _mm256_add_pd(r2,r3);
        r4 = _mm256_add_pd(r4,r5);
        r6 = _mm256_add_pd(r6,r7);
        r8 = _mm256_add_pd(r8,r9);
        rA = _mm256_add_pd(rA,rB);
    
        r0 = _mm256_add_pd(r0,r2);
        r4 = _mm256_add_pd(r4,r6);
        r8 = _mm256_add_pd(r8,rA);
    
        r0 = _mm256_add_pd(r0,r4);
        r0 = _mm256_add_pd(r0,r8);
    
        //  Prevent Dead Code Elimination
        double out = 0;
        __m256d temp = r0;
        out += ((double*)&temp)[0];
        out += ((double*)&temp)[1];
        out += ((double*)&temp)[2];
        out += ((double*)&temp)[3];
    
        return out;
    }
    
    void test_dp_mac_AVX(int tds,uint64 iterations){
    
        double *sum = (double*)malloc(tds * sizeof(double));
        double start = omp_get_wtime();
    
    #pragma omp parallel num_threads(tds)
        {
            double ret = test_dp_mac_AVX(1.1,2.1,iterations);
            sum[omp_get_thread_num()] = ret;
        }
    
        double secs = omp_get_wtime() - start;
        uint64 ops = 48 * 1000 * iterations * tds * 4;
        cout << "Seconds = " << secs << endl;
        cout << "FP Ops  = " << ops << endl;
        cout << "FLOPs   = " << ops / secs << endl;
    
        double out = 0;
        int c = 0;
        while (c < tds){
            out += sum[c++];
        }
    
        cout << "sum = " << out << endl;
        cout << endl;
    
        free(sum);
    }
    
    int main(){
        //  (threads, iterations)
        test_dp_mac_AVX(8,10000000);
    
        system("pause");
    }
    

    Output (1 thread, 10000000 iterations) - Compiled with Visual Studio 2010 SP1 - x64 Release:

    Seconds = 57.4679
    FP Ops  = 1920000000000
    FLOPs   = 3.34099e+010
    sum = 4.45305
    

    Theoretical AVX peak is 8 flops * 4.4 GHz = 35.2 GFlops. Actual is 33.4 GFlops.

    Output (8 threads, 10000000 iterations) - Compiled with Visual Studio 2010 SP1 - x64 Release:

    Seconds = 111.119
    FP Ops  = 15360000000000
    FLOPs   = 1.3823e+011
    sum = 35.6244
    

    Theoretical AVX peak is 8 flops * 4 cores * 4.4 GHz = 140.8 GFlops. Actual is 138.2 GFlops.


    Now for some explanations:

    The performance-critical part is obviously the 48 instructions inside the inner loop. You'll notice that it's broken into 4 blocks of 12 instructions each. Each of these 12-instruction blocks is completely independent of the others and takes on average 6 cycles to execute.

    So there are 12 instructions and 6 cycles between issue and use. The latency of multiplication is 5 cycles, so that's just enough to avoid latency stalls.
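
    Spelled out with the port assignments (assuming one mul port and one add/sub port, each taking one instruction per cycle):

        6 muls      -> mul port : 6 cycles
        6 adds/subs -> add port : 6 cycles (in parallel)
        => each 12-instruction block issues in ~6 cycles
        => ~6 cycles between consecutive updates of any one register, which covers the 5-cycle mul latency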

    The normalization step is needed to keep the data from overflowing or underflowing, since the do-nothing code would otherwise slowly increase or decrease the magnitude of the data.
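
    (A scalar sketch of what the MASK / vONE step does, purely for illustration - the helper below is hypothetical. The AND keeps the sign and mantissa bits and clears the exponent; the OR with the bits of 1.0 then forces the exponent field back to that of 1.0, so the magnitude lands in [1, 2) while the mantissa keeps changing.)

    #include <stdint.h>
    #include <string.h>

    // Scalar analogue of the SSE renormalization above.
    double renormalize(double x) {
       uint64_t bits, one_bits;
       double one = 1.0;
       memcpy(&bits, &x, sizeof bits);
       memcpy(&one_bits, &one, sizeof one_bits);
       bits &= 0x800fffffffffffffull;   // keep sign + 52 mantissa bits, clear exponent
       bits |= one_bits;                // set the exponent field to that of 1.0 (0x3ff)
       memcpy(&x, &bits, sizeof x);
       return x;                        // magnitude is now in [1, 2)
    }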

    So it's actually possible to do better than this if you just use all zeros and get rid of the normalization step. However, since I wrote the benchmark to measure power consumption and temperature, I had to make sure the flops were on "real" data rather than zeros, as the execution units may very well have special-case handling for zeros that uses less power and produces less heat.
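
    (As the notes above say, FMA is what it takes to reach peak on Haswell, Bulldozer and later. Purely as an illustration outside the scope of this benchmark, one block could be written with FMA3 intrinsics roughly as below; one 256-bit fmadd counts as 8 double-precision flops, and Haswell can retire two of them per cycle.)

    #include <immintrin.h>

    // Illustrative FMA3 variant of one 12-instruction block (compile with -mfma);
    // r0..rB would be the same independent __m256d chains as in the code above.
    static inline void fma_block(__m256d r[12], __m256d rC, __m256d rD) {
       for (int k = 0; k < 12; k++) {
          r[k] = _mm256_fmadd_pd(r[k], rC, rD);   // r[k] = r[k]*rC + rD
       }
    }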


    More Results:

  • Intel Core i7 920 @ 3.5 GHz
  • Windows 7 Ultimate x64
  • Visual Studio 2010 SP1 - x64 Release
  • Threads: 1

    Seconds = 72.1116
    FP Ops  = 960000000000
    FLOPs   = 1.33127e+010
    sum = 2.22652
    

    Theoretical SSE Peak: 4 flops * 3.5 GHz = 14.0 GFlops. Actual is 13.3 GFlops.

    Threads: 8

    Seconds = 149.576
    FP Ops  = 7680000000000
    FLOPs   = 5.13452e+010
    sum = 17.8122
    

    Theoretical SSE Peak: 4 flops * 4 cores * 3.5 GHz = 56.0 GFlops. Actual is 51.3 GFlops.

    My processor temps hit 76°C on the multi-threaded run! If you run these, make sure the results aren't affected by CPU throttling.


  • 2 x Intel Xeon X5482 Harpertown @ 3.2 GHz
  • Ubuntu Linux 10 x64
  • GCC 4.5.2 x64 - (-O2 -msse3 -fopenmp)
  • Threads: 1

    Seconds = 78.3357
    FP Ops  = 960000000000
    FLOPs   = 1.22549e+10
    sum = 2.22652
    

    Theoretical SSE Peak: 4 flops * 3.2 GHz = 12.8 GFlops. Actual is 12.3 GFlops.

    Threads: 8

    Seconds = 78.4733
    FP Ops  = 7680000000000
    FLOPs   = 9.78676e+10
    sum = 17.8122
    

    Theoretical SSE Peak: 4 flops * 8 cores * 3.2 GHz = 102.4 GFlops. Actual is 97.9 GFlops.


    There's a point about the Intel architecture that people often forget: the dispatch ports are shared between Int and FP/SIMD. This means that you only get a certain number of bursts of FP/SIMD before the loop logic creates bubbles in your floating-point stream. Mysticial got more flops out of his code because he used longer strides in his unrolled loop.

    If you look at the Nehalem/Sandy Bridge architecture here (http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937&p=6), it's quite clear what happens.

    In contrast, it should be easier to reach peak performance on AMD (Bulldozer) as the INT and FP/SIMD pipes have separate issue ports with their own scheduler.

    This is only theoretical as I have neither of these processors to test.


    Branches can definitely keep you from sustaining peak theoretical performance. Do you see a difference if you manually do some loop-unrolling? For example, if you put 5 or 10 times as many ops per loop iteration:

    for(int i=0; i<loops/5; i++) {
          mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
          sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
          mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
          sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
          mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
          sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
          mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
          sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
          mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
          sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
       }
    