How to use fused multiply-add (FMA) instructions with SSE/AVX

I have learned that some Intel/AMD CPUs can do simultaneous multiply and add with SSE/AVX:
FLOPS per cycle for sandy-bridge and haswell SSE2/AVX/AVX2.

I'd like to know how best to do this in code, and I also want to know how it's done internally in the CPU, i.e. with the super-scalar architecture. Let's say I want to do a long sum such as the following in SSE:

// sum = a1*b1 + a2*b2 + a3*b3 + ...  where each a[i] is a scalar broadcast
// across a SIMD vector and each b row is a SIMD vector (e.g. from matrix
// multiplication)
__m128 sum = _mm_set1_ps(0.0f);

__m128 a1 = _mm_set1_ps(a[0]);
__m128 b1 = _mm_load_ps(&b[0]);
sum = _mm_add_ps(sum, _mm_mul_ps(a1, b1));

__m128 a2 = _mm_set1_ps(a[1]);
__m128 b2 = _mm_load_ps(&b[4]);
sum = _mm_add_ps(sum, _mm_mul_ps(a2, b2));

__m128 a3 = _mm_set1_ps(a[2]);
__m128 b3 = _mm_load_ps(&b[8]);
sum = _mm_add_ps(sum, _mm_mul_ps(a3, b3));
...

My question is: how does this get converted to a simultaneous multiply and add? Can the data be dependent? That is, can the CPU execute _mm_add_ps(sum, _mm_mul_ps(a1, b1)) simultaneously, or do the registers used in the multiply and the add have to be independent?

Lastly, how does this apply to FMA (with Haswell)? Is _mm_add_ps(sum, _mm_mul_ps(a1, b1)) automatically converted to a single FMA instruction or micro-operation?


The compiler is allowed to fuse a separate add and multiply, even though this changes the final result (by making it more accurate).

An FMA has only one rounding (it effectively keeps infinite precision for the internal temporary multiply result), while an ADD + MUL has two.

The IEEE and C standards allow this when #pragma STDC FP_CONTRACT ON is in effect, and compilers are allowed to have it ON by default (but not all do). GCC contracts into FMA by default (with the default -std=gnu*, but not -std=c*, e.g. -std=c++14). For Clang, it's only enabled with -ffp-contract=fast. (With just the #pragma enabled, contraction happens only within a single expression like a+b*c, not across separate C++ statements.)
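
For example, a minimal sketch of the pragma in use (the function name maybe_fused is made up for illustration; note that GCC does not implement the pragma itself and is controlled by -ffp-contract instead, while Clang does honor it):

    /* Example only. The pragma is standard C (C99); it may appear at
     * file scope or at the start of a block. */
    #pragma STDC FP_CONTRACT ON

    float maybe_fused(float a, float b, float c) {
        return a*b + c;   /* one expression: eligible to contract to FMA */
    }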

This is different from strict vs. relaxed floating point (or, in GCC terms, -ffast-math vs. -fno-fast-math), which would allow other kinds of optimizations that could increase the rounding error depending on the input values. This one is special because of the infinite precision of the FMA internal temporary; if there were any rounding at all in the internal temporary, this wouldn't be allowed in strict FP.
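
To make the single-rounding difference concrete, here is a small self-contained demo (my own sketch, not from the original answer) using the standard C fma() function from <math.h>. The inputs are chosen so the exact product needs more bits than a double holds; build with contraction disabled (e.g. gcc -O2 -ffp-contract=off demo.c -lm) so the plain expression isn't itself fused:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* (1 + 2^-27)^2 = 1 + 2^-26 + 2^-54. The 2^-54 term is lost
         * when the product is rounded to double before the add. */
        double a = 1.0 + 0x1p-27;
        double b = 1.0 + 0x1p-27;
        double c = -1.0;

        double two_roundings = a*b + c;      /* MUL rounds, then ADD rounds */
        double one_rounding  = fma(a, b, c); /* exact product, one rounding */

        printf("mul+add: %a\n", two_roundings); /* prints 0x1p-26 */
        printf("fma:     %a\n", one_rounding);  /* prints 0x1.0000001p-26 */
        return 0;
    }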

Even if you enable relaxed floating point, the compiler might still choose not to fuse, since it might expect you to know what you're doing if you're already using intrinsics.


So the best way to make sure you actually get the FMA instructions you want is to use the provided intrinsics for them:

FMA3 Intrinsics: (AVX2 - Intel Haswell)

  • _mm_fmadd_pd(), _mm256_fmadd_pd()
  • _mm_fmadd_ps(), _mm256_fmadd_ps()
  • and about a gazillion other variations...

FMA4 Intrinsics: (XOP - AMD Bulldozer)

  • _mm_macc_pd(), _mm256_macc_pd()
  • _mm_macc_ps(), _mm256_macc_ps()
  • and about a gazillion other variations...
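
As an illustration (a sketch of my own, not from the original answer; the helper name sum3 is made up), the question's accumulation collapses each multiply+add pair into a single intrinsic. _mm_fmadd_ps(a, b, c) computes a*b + c with one rounding:

    #include <immintrin.h>

    /* Example: the question's accumulation with each multiply+add fused
     * explicitly. Build with FMA3 enabled, e.g. gcc -mfma.
     * b must be 16-byte aligned, as _mm_load_ps requires. */
    __m128 sum3(const float *a, const float *b) {
        __m128 sum = _mm_setzero_ps();
        sum = _mm_fmadd_ps(_mm_set1_ps(a[0]), _mm_load_ps(&b[0]), sum);
        sum = _mm_fmadd_ps(_mm_set1_ps(a[1]), _mm_load_ps(&b[4]), sum);
        sum = _mm_fmadd_ps(_mm_set1_ps(a[2]), _mm_load_ps(&b[8]), sum);
        return sum;
    }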

I tested the following code in GCC 5.3, Clang 3.7, ICC 13.0.1 and MSVC 2015 (compiler version 19.00).

    float mul_add(float a, float b, float c) {
        return a*b + c;
    }
    
    __m256 mul_addv(__m256 a, __m256 b, __m256 c) {
        return _mm256_add_ps(_mm256_mul_ps(a, b), c);
    }
    

With the right compiler options (see below), every compiler will generate a vfmadd instruction (e.g. vfmadd213ss) from mul_add. However, only MSVC fails to contract mul_addv to a single vfmadd instruction (e.g. vfmadd213ps).

The following compiler options are sufficient to generate vfmadd instructions (except for mul_addv with MSVC).

    GCC:   -O2 -mavx2 -mfma
    Clang: -O1 -mavx2 -mfma -ffp-contract=fast
    ICC:   -O1 -march=core-avx2
    MSVC:  /O1 /arch:AVX2 /fp:fast
    

GCC 4.9 does not contract mul_addv to a single fma instruction, but GCC 5.1 and later (at least) do. I don't know when the other compilers started doing this.
