What is the worst case accuracy for dot product?
Suppose the processor has only 'fadd' and 'fmul' operations (no 'dot' or 'fma' instructions), both IEEE-754 compliant. What is the worst-case accuracy achieved by the trivial implementation of the dot product operation? For example, for a vector of length 3:
dot(vec_a, vec_b) = vec_a.x*vec_b.x + vec_a.y*vec_b.y + vec_a.z*vec_b.z
Here is my analysis, but I am not sure it is correct: for a vector of length N, there are N multiplications and N-1 additions, i.e. 2N-1 floating-point operations. In the worst case, the representation is too small to hold each intermediate result exactly, so every operation rounds. Each rounding contributes up to 0.5 ULP of error. So is the maximum error (2N-1) * 0.5 = N - 1/2 ULP?
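For concreteness, the trivial implementation under discussion can be sketched as follows (the name `dot_naive` is mine, not from the question); it performs exactly the N multiplications and N-1 additions counted above:

```c
#include <stddef.h>

/* Trivial dot product: N fmul and N-1 fadd operations, each of which
 * may round its result. */
static float dot_naive(const float *a, const float *b, size_t n) {
    float p = a[0] * b[0];           /* first multiplication */
    for (size_t i = 1; i < n; i++)
        p = p + a[i] * b[i];         /* one fmul + one fadd per step */
    return p;
}
```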
As with many FP error analyses, the error is strongly dependent on the maximum magnitude of the inputs. In this case, a rough-and-ready error bound is 2 * FLT_EPS * dot(abs(vec_a), abs(vec_b)), where abs denotes the elementwise absolute value of a vector.
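As a sanity check, this bound can be compared numerically against a double-precision reference. The function name and vector values below are mine; note that for a length-4 vector the standard theoretical bound of n rounding errors of at most FLT_EPSILON/2 each works out to exactly this 2 * FLT_EPSILON factor, so the assertion is guaranteed to hold for this example:

```c
#include <float.h>
#include <math.h>
#include <stddef.h>

/* Returns 1 if the naive single-precision dot product of a and b is
 * within 2 * FLT_EPSILON * dot(abs(a), abs(b)) of a double-precision
 * reference. */
int bound_holds(const float *a, const float *b, size_t n) {
    float p = 0.0f;              /* naive float accumulation */
    double ref = 0.0;            /* double-precision reference */
    double mag = 0.0;            /* dot(abs(a), abs(b)) */
    for (size_t i = 0; i < n; i++) {
        p += a[i] * b[i];
        ref += (double)a[i] * (double)b[i];
        mag += fabs((double)a[i] * (double)b[i]);
    }
    return fabs((double)p - ref) <= 2.0 * FLT_EPSILON * mag;
}
```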
Your reasoning does not work for additions: if a and b are already inaccurate by 0.5 ULP each and a is close to -b, then the relative accuracy of a + b can be terrible, much worse than 1.5 ULP. In fact, without further information about the vectors you are computing the dot product of, there are no guarantees one can provide about the relative accuracy of the result.
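Here is a small example of this cancellation effect (the vectors are my own illustration). 1e8f is exactly representable in single precision, but 1 ULP at 1e8 is 8, so adding 1.0f there is lost entirely, and the computed dot product is 0 while the exact answer is 1, an unbounded relative error:

```c
/* Catastrophic cancellation: the exact dot product is 1, but the naive
 * single-precision evaluation returns 0. */
float cancel_demo(void) {
    float a[] = {1e8f, 1.0f, -1e8f};
    float b[] = {1.0f, 1.0f, 1.0f};
    float p = a[0] * b[0];   /* 1e8, exactly representable */
    p += a[1] * b[1];        /* 1e8 + 1 rounds back to 1e8 */
    p += a[2] * b[2];        /* 1e8 - 1e8 = 0; exact answer is 1 */
    return p;
}
```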
Your line of reasoning is okay-ish when there are only multiplications, but it ignores compounded errors.
Consider the expansion: (a + e_a)(b + e_b) = ab + a*e_b + b*e_a + e_a*e_b.
If you assume that both a and b are between 1 and 2, the total relative error after multiplying two values that were each already accurate to 0.5 ULP can, in a rough first approximation, be estimated as 1 ULP, and that still ignores the second-order term e_a*e_b and the error of the multiplication itself. Call it about 1.5 ULP total relative error for the result of the floating-point multiplication, and even that is only a rough estimate, not a sound maximum.
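The error-propagation terms in that expansion can be evaluated directly. In the sketch below (function name and inputs are mine), a and b lie in [1, 2), where 1 ULP equals FLT_EPSILON, and the assumed input errors are 0.5 ULP each; the propagated relative error comes out between 0.5 and 1 ULP for these inputs, consistent with the rough 1-ULP estimate above:

```c
#include <float.h>

/* Propagate assumed 0.5-ULP input errors through one multiplication.
 * With a, b in [1, 2), one ULP is FLT_EPSILON.
 * Returns the propagated relative error in units of FLT_EPSILON. */
double propagated_rel_error(float a, float b) {
    double ea = 0.5 * FLT_EPSILON;   /* assumed error in a */
    double eb = 0.5 * FLT_EPSILON;   /* assumed error in b */
    /* (a + ea)(b + eb) - ab = a*eb + b*ea + ea*eb */
    double abserr = a * eb + b * ea + ea * eb;
    return abserr / ((double)a * b) / FLT_EPSILON;
}
```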
These colleagues of mine have formalized and proved a notion of accuracy of the double-precision floating-point dot product. Translated into English, their result says that if each vector component is bounded by 1.0, then the end result of the dot product is accurate to NMAX * B, where NMAX is the dimension of the vectors and B is a constant depending on NMAX. A few values are provided on the linked page:
NMAX   10          100         1000
B      0x1.1p-50   0x1.02p-47  0x1.004p-44
In their result, you can substitute for 1.0 any power of two P low enough to ensure the absence of overflow, and the absolute error of the dot product then becomes NMAX * B * P^2. The loop invariants become respectively:
@ loop invariant abs(exact_scalar_product(x,y,i)) <= i * P * P;
@ loop invariant abs(p - exact_scalar_product(x,y,i)) <= i * B * P * P;