Optimizing x64 assembler MUL loop
I'm writing math code which needs to multiply large numbers fast. It boils down to multiplying an array of integers by a single integer. In C++ this looks like the following (operating on unsigneds):
void muladd(unsigned* r, const unsigned* a, unsigned len, unsigned b) {
    unsigned __int64 of = 0;  // overflow
    unsigned i = 0;           // loop variable
    while (i < len) {
        of += (unsigned __int64)a[i] * b + r[i];
        r[i] = (unsigned)of;
        of >>= 32;
        ++i;
    }
    r[i] = (unsigned)of;      // save overflow
}
I unrolled this loop manually, converted it to 64-bit, and worked on the compiler's .asm output to optimize it further. The main .asm loop now looks like this:
mov rax, rdi ; rdi = b
mul QWORD PTR [rbx+r10*8-64] ; rdx:rax = a[i] * b; r10 = i
mov rsi, QWORD PTR [r14+r10*8-64] ; r14 = r; rsi = r[i]
add rax, rsi
adc rdx, 0
add rax, r11 ; r11 = of (low part)
adc rdx, 0
mov QWORD PTR [r14+r10*8-64], rax ; save result
mov r11, rdx
; this repeats itself 8 times with different offsets
When I benchmark this, I find that it takes about 6.3 cycles per multiplication on average on my Core2 Quad.
My question is: can I speed this up somehow? Unfortunately, I see no way to avoid one of the additions, and the multiplication always needs RDX:RAX, so I need to move the data around and cannot "multiply in parallel", so to speak.
Any ideas anyone?
Update: After some more testing, I've managed to bring the speed up to about 5.4 cycles per 64-bit MUL (that includes all add, move and loop overhead). I guess this is about the best you can get on a Core2, since the Core2 does not have a very fast MUL instruction: it has a throughput of 3 cycles and a latency of 6 (resp. 7) cycles. Sandy Bridge will be much better, with a throughput of 1 and a latency of 3 (resp. 4) cycles.
Regarding the much lower number for GMP: I got that from their source code, and it seems to me that it is a theoretical number. What's certain is that it was calculated for an AMD K9 CPU, and from what I have read, the AMDs have a faster MUL unit than the (older) Intel chips.
I once wrote a loop that looked rather like this, with a minimal amount of processing on a lot of data, and the result was that the loop was limited by memory speed.
I'd try prefetching a[i] and r[i]. If you're using GCC, use the __builtin_prefetch() function; in assembler, use the PREFETCHT0 instruction.
http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Other-Builtins.html
When this works, the results can be dramatic. As long as the loop runs for a thousand or so iterations, I'd prefetch a[i+64] and r[i+64] as a starting point and see how much difference it makes on your CPU. You may need to try larger prefetch distances.
It looks like your routine could benefit from SSE. PMULLD and PADDD seem like relevant instructions. Not sure why your compiler does not produce SSE from that code.
I'd just like to point out that cycle-counting is rather useless, as your instructions will be converted to microcode that is executed out of order or stalled depending on everything else the CPU is doing. If you have a fast routine, which you do, it's not really fruitful to try and shave off a theoretical cycle unless you know your routine will always run in complete isolation.