Optimizing linear access to arrays with prefetching

Disclosure: I have already tried a similar question on programmers.stackexchange, but that place has far less activity than Stack Overflow.

Introduction

I tend to work with large numbers of big images. They also come in sequences of more than one, and have to be processed and played back repeatedly. Sometimes I use the GPU, sometimes the CPU, sometimes both. Most of the access patterns are linear in nature (back and forth), which got me thinking about the more fundamental properties of arrays, and about how one would write code optimized for maximum memory bandwidth on given hardware (allowing computation not to block reads/writes).

Test specs

  • I ran the tests on a 2011 MacBookAir4,2 (i5-2557M) with 4 GB of RAM and an SSD. Nothing else was running during the tests except iTerm2.
  • gcc 5.2.0 (Homebrew) with the flags -pedantic -std=c99 -Wall -Werror -Wextra -Wno-unused -O0, plus additional include, library, and framework flags so I could use the GLFW timer I tend to use. I could have done without it; it doesn't matter. All 64-bit, of course.
  • I also tried the tests with the optional -fprefetch-loop-arrays flag, but it seemed to have no effect on the results.
  • Tests

  • Two arrays of n bytes are allocated on the heap, where n is 8, 16, 32, 64, 128, 256, 512 and 1024 MB
  • array is initialized to 0xff, one byte at a time
  • Test 1 - linear copy
  • Linear copy:

    for(uint64_t i = 0; i < ARRAY_NUM; ++i) {
            array_copy[i] = array[i];
        }
    
  • Test 2 - copying with a stride. This is where things get confusing. I have tried playing the prefetch game here. I tried various combinations of how much work to do per iteration, and 40 per iteration seemed to yield the best performance. Why? I have no idea. I know that malloc of uint64_t in c99 gives me memory-aligned blocks. I also see the sizes of my L1 through L3 caches, which are bigger than these 320 bytes, so what am I hitting? A clue might be later, in the charts. I would really like to understand this.
  • Copying with a stride:

    for(uint64_t i = 0; i < ARRAY_NUM; i=i+40) {
                array_copy[i] = array[i];
                array_copy[i+1] = array[i+1];
                array_copy[i+2] = array[i+2];
                array_copy[i+3] = array[i+3];
                array_copy[i+4] = array[i+4];
                array_copy[i+5] = array[i+5];
                array_copy[i+6] = array[i+6];
                array_copy[i+7] = array[i+7];
                array_copy[i+8] = array[i+8];
                array_copy[i+9] = array[i+9];
                array_copy[i+10] = array[i+10];
                array_copy[i+11] = array[i+11];
                array_copy[i+12] = array[i+12];
                array_copy[i+13] = array[i+13];
                array_copy[i+14] = array[i+14];
                array_copy[i+15] = array[i+15];
                array_copy[i+16] = array[i+16];
                array_copy[i+17] = array[i+17];
                array_copy[i+18] = array[i+18];
                array_copy[i+19] = array[i+19];
                array_copy[i+20] = array[i+20];
                array_copy[i+21] = array[i+21];
                array_copy[i+22] = array[i+22];
                array_copy[i+23] = array[i+23];
                array_copy[i+24] = array[i+24];
                array_copy[i+25] = array[i+25];
                array_copy[i+26] = array[i+26];
                array_copy[i+27] = array[i+27];
                array_copy[i+28] = array[i+28];
                array_copy[i+29] = array[i+29];
                array_copy[i+30] = array[i+30];
                array_copy[i+31] = array[i+31];
                array_copy[i+32] = array[i+32];
                array_copy[i+33] = array[i+33];
                array_copy[i+34] = array[i+34];
                array_copy[i+35] = array[i+35];
                array_copy[i+36] = array[i+36];
                array_copy[i+37] = array[i+37];
                array_copy[i+38] = array[i+38];
                array_copy[i+39] = array[i+39];
        }
    
  • Test 3 - reading with a stride. The same as copying with a stride.
  • Reading with a stride:

        const int imax = 1000;
        for(int j = 0; j < imax; ++j) {
            uint64_t tmp = 0;
            performance = 0;
            time_start = glfwGetTime();
            for(uint64_t i = 0; i < ARRAY_NUM; i=i+40) {
                    tmp = array[i];
                    tmp = array[i+1];
                    tmp = array[i+2];
                    tmp = array[i+3];
                    tmp = array[i+4];
                    tmp = array[i+5];
                    tmp = array[i+6];
                    tmp = array[i+7];
                    tmp = array[i+8];
                    tmp = array[i+9];
                    tmp = array[i+10];
                    tmp = array[i+11];
                    tmp = array[i+12];
                    tmp = array[i+13];
                    tmp = array[i+14];
                    tmp = array[i+15];
                    tmp = array[i+16];
                    tmp = array[i+17];
                    tmp = array[i+18];
                    tmp = array[i+19];
                    tmp = array[i+20];
                    tmp = array[i+21];
                    tmp = array[i+22];
                    tmp = array[i+23];
                    tmp = array[i+24];
                    tmp = array[i+25];
                    tmp = array[i+26];
                    tmp = array[i+27];
                    tmp = array[i+28];
                    tmp = array[i+29];
                    tmp = array[i+30];
                    tmp = array[i+31];
                    tmp = array[i+32];
                    tmp = array[i+33];
                    tmp = array[i+34];
                    tmp = array[i+35];
                    tmp = array[i+36];
                    tmp = array[i+37];
                    tmp = array[i+38];
                    tmp = array[i+39];
            }
            time_end = glfwGetTime();
            /* per-iteration timing and printing omitted - see the full source below */
        }
    
  • Test 4 - linear reading. Byte by byte. I was surprised that -fprefetch-loop-arrays achieved nothing here; I thought it was meant for these cases.
  • Linear reading:

    for(uint64_t i = 0; i < ARRAY_NUM; ++i) {
                tmp = array[i];
            }
    
  • Test 5 - memcpy as a point of comparison.
  • memcpy:

    memcpy(array_copy, array, ARRAY_NUM*sizeof(uint64_t));
    

    Results

  • Sample output:

    Init done in 0.767 s - size of array: 1024 MBs (x2)
    Performance: 1304.325 MB/s
    
    Copying (linear) done in 0.898 s
    Performance: 1113.529 MB/s
    
    Copying (stride 40) done in 0.257 s
    Performance: 3890.608 MB/s
    
    [1000/1000] Performance stride 40: 7474.322 MB/s
    Average: 7523.427 MB/s
    Performance MIN: 3231 MB/s | Performance MAX: 7818 MB/s
    
    [1000/1000] Performance dumb: 2504.713 MB/s
    Average: 2481.502 MB/s
    Performance MIN: 1572 MB/s | Performance MAX: 2644 MB/s
    
    Copying (memcpy) done in 1.726 s
    Performance: 579.485 MB/s
    
    --
    
    Init done in 0.415 s - size of array: 512 MBs (x2)
    Performance: 1233.136 MB/s
    
    Copying (linear) done in 0.442 s
    Performance: 1157.147 MB/s
    
    Copying (stride 40) done in 0.116 s
    Performance: 4399.606 MB/s
    
    [1000/1000] Performance stride 40: 6527.004 MB/s
    Average: 7166.458 MB/s
    Performance MIN: 4359 MB/s | Performance MAX: 7787 MB/s
    
    [1000/1000] Performance dumb: 2383.292 MB/s
    Average: 2409.005 MB/s
    Performance MIN: 1673 MB/s | Performance MAX: 2641 MB/s
    
    Copying (memcpy) done in 0.102 s
    Performance: 5026.476 MB/s
    
    --
    
    Init done in 0.228 s - size of array: 256 MBs (x2)
    Performance: 1124.618 MB/s
    
    Copying (linear) done in 0.242 s
    Performance: 1057.916 MB/s
    
    Copying (stride 40) done in 0.070 s
    Performance: 3650.996 MB/s
    
    [1000/1000] Performance stride 40: 7129.206 MB/s
    Average: 7370.537 MB/s
    Performance MIN: 4805 MB/s | Performance MAX: 7848 MB/s
    
    [1000/1000] Performance dumb: 2456.129 MB/s
    Average: 2435.556 MB/s
    Performance MIN: 1496 MB/s | Performance MAX: 2637 MB/s
    
    Copying (memcpy) done in 0.050 s
    Performance: 5095.845 MB/s
    
    -- 
    
    Init done in 0.100 s - size of array: 128 MBs (x2)
    Performance: 1277.200 MB/s
    
    Copying (linear) done in 0.112 s
    Performance: 1147.030 MB/s
    
    Copying (stride 40) done in 0.029 s
    Performance: 4424.513 MB/s
    
    [1000/1000] Performance stride 40: 6497.635 MB/s
    Average: 6714.540 MB/s
    Performance MIN: 4206 MB/s | Performance MAX: 7843 MB/s
    
    [1000/1000] Performance dumb: 2275.336 MB/s
    Average: 2335.544 MB/s
    Performance MIN: 1572 MB/s | Performance MAX: 2626 MB/s
    
    Copying (memcpy) done in 0.025 s
    Performance: 5086.502 MB/s
    
    -- 
    
    Init done in 0.051 s - size of array: 64 MBs (x2)
    Performance: 1255.969 MB/s
    
    Copying (linear) done in 0.058 s
    Performance: 1104.282 MB/s
    
    Copying (stride 40) done in 0.015 s
    Performance: 4305.765 MB/s
    
    [1000/1000] Performance stride 40: 7750.063 MB/s
    Average: 7412.167 MB/s
    Performance MIN: 3892 MB/s | Performance MAX: 7826 MB/s
    
    [1000/1000] Performance dumb: 2610.136 MB/s
    Average: 2577.313 MB/s
    Performance MIN: 2126 MB/s | Performance MAX: 2652 MB/s
    
    Copying (memcpy) done in 0.013 s
    Performance: 4871.823 MB/s
    
    -- 
    
    Init done in 0.024 s - size of array: 32 MBs (x2)
    Performance: 1306.738 MB/s
    
    Copying (linear) done in 0.028 s
    Performance: 1148.582 MB/s
    
    Copying (stride 40) done in 0.008 s
    Performance: 4265.907 MB/s
    
    [1000/1000] Performance stride 40: 6181.040 MB/s
    Average: 7124.592 MB/s
    Performance MIN: 3480 MB/s | Performance MAX: 7777 MB/s
    
    [1000/1000] Performance dumb: 2508.669 MB/s
    Average: 2556.529 MB/s
    Performance MIN: 1966 MB/s | Performance MAX: 2646 MB/s
    
    Copying (memcpy) done in 0.007 s
    Performance: 4617.860 MB/s
    
    --
    
    Init done in 0.013 s - size of array: 16 MBs (x2)
    Performance: 1243.011 MB/s
    
    Copying (linear) done in 0.014 s
    Performance: 1139.362 MB/s
    
    Copying (stride 40) done in 0.004 s
    Performance: 4181.548 MB/s
    
    [1000/1000] Performance stride 40: 6317.129 MB/s
    Average: 7358.539 MB/s
    Performance MIN: 5250 MB/s | Performance MAX: 7816 MB/s
    
    [1000/1000] Performance dumb: 2529.707 MB/s
    Average: 2525.783 MB/s
    Performance MIN: 1823 MB/s | Performance MAX: 2634 MB/s
    
    Copying (memcpy) done in 0.003 s
    Performance: 5167.561 MB/s
    
    --
    
    Init done in 0.007 s - size of array: 8 MBs (x2)
    Performance: 1186.019 MB/s
    
    Copying (linear) done in 0.007 s
    Performance: 1147.018 MB/s
    
    Copying (stride 40) done in 0.002 s
    Performance: 4157.658 MB/s
    
    [1000/1000] Performance stride 40: 6958.839 MB/s
    Average: 7097.742 MB/s
    Performance MIN: 4278 MB/s | Performance MAX: 7499 MB/s
    
    [1000/1000] Performance dumb: 2585.366 MB/s
    Average: 2537.896 MB/s
    Performance MIN: 2284 MB/s | Performance MAX: 2610 MB/s
    
    Copying (memcpy) done in 0.002 s
    Performance: 5059.164 MB/s
    
  • Linear reads are about 3x slower than strided reads. The strided reads max out in the 7500-7800 MB/s range. Two things baffle me, though. At DDR3 1333 MHz, maximum memory throughput should be 10,664 MB/s, so why am I not hitting it? And why aren't the read speeds more consistent, and how would I optimize for that (cache misses?)? It's more apparent from the charts, especially the linear reads, where performance dips frequently.
  • Charts

    [chart: 8-16 MB]

    [chart: 32-64 MB]

    [chart: 128-256 MB]

    [chart: 512-1024 MB]

    [chart: all sizes together]

    Here's the full source for anyone interested:

    /*
    gcc -pedantic -std=c99 -Wall -Werror -Wextra -Wno-unused -O0 -I "...path to glfw3 includes ..." -L "...path to glfw3 lib ..." arr_test_copy_gnuplot.c -o arr_test_copy_gnuplot -lglfw3 -framework OpenGL -framework Cocoa -framework IOKit -framework CoreVideo
    
    optional: -fprefetch-loop-arrays
    */
    
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h> /* memcpy */
    #include <inttypes.h>
    #include <GLFW/glfw3.h>
    
    #define ARRAY_NUM 1000000 * 128 /* GIG */
    int main(int argc, char *argv[]) {
    
        if(!glfwInit()) {
            exit(EXIT_FAILURE);
        }
    
        int cx = 0;
        char filename_stride[50];
        char filename_dumb[50];
        cx = snprintf(filename_stride, 50, "%lu_stride.dat", 
                        ((ARRAY_NUM*sizeof(uint64_t))/1000000));
    if(cx < 0 || cx >= 50) { exit(EXIT_FAILURE); }
        FILE *file_stride = fopen(filename_stride, "w");
        cx = snprintf(filename_dumb, 50, "%lu_dumb.dat", 
                        ((ARRAY_NUM*sizeof(uint64_t))/1000000));
    if(cx < 0 || cx >= 50) { exit(EXIT_FAILURE); }
        FILE *file_dumb   = fopen(filename_dumb, "w");
        if(file_stride == NULL || file_dumb == NULL) {
            perror("Error opening file.");
            exit(EXIT_FAILURE);
        }
    
    uint64_t *array = malloc(sizeof(uint64_t) * ARRAY_NUM);
    uint64_t *array_copy = malloc(sizeof(uint64_t) * ARRAY_NUM);
    if(array == NULL || array_copy == NULL) {
        perror("malloc failed");
        exit(EXIT_FAILURE);
    }
    
        double performance  = 0.0;
        double time_start   = 0.0;
        double time_end     = 0.0;
        double performance_min  = 0.0;
        double performance_max  = 0.0;
    
        /* Init array */
        time_start = glfwGetTime();
        for(uint64_t i = 0; i < ARRAY_NUM; ++i) {
            array[i] = 0xff;
        }
        time_end = glfwGetTime();
    
        performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
    printf("Init done in %.3f s - size of array: %lu MBs (x2)\n", (time_end - time_start), (ARRAY_NUM*sizeof(uint64_t)/1000000));
    printf("Performance: %.3f MB/s\n\n", performance);
    
        /* Linear copy */
        performance = 0;
        time_start = glfwGetTime();
        for(uint64_t i = 0; i < ARRAY_NUM; ++i) {
            array_copy[i] = array[i];
        }
        time_end = glfwGetTime();
    
        performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
    printf("Copying (linear) done in %.3f s\n", (time_end - time_start));
    printf("Performance: %.3f MB/s\n\n", performance);
    
        /* Copying with wide stride */
        performance = 0;
        time_start = glfwGetTime();
        for(uint64_t i = 0; i < ARRAY_NUM; i=i+40) {
                array_copy[i] = array[i];
                array_copy[i+1] = array[i+1];
                array_copy[i+2] = array[i+2];
                array_copy[i+3] = array[i+3];
                array_copy[i+4] = array[i+4];
                array_copy[i+5] = array[i+5];
                array_copy[i+6] = array[i+6];
                array_copy[i+7] = array[i+7];
                array_copy[i+8] = array[i+8];
                array_copy[i+9] = array[i+9];
                array_copy[i+10] = array[i+10];
                array_copy[i+11] = array[i+11];
                array_copy[i+12] = array[i+12];
                array_copy[i+13] = array[i+13];
                array_copy[i+14] = array[i+14];
                array_copy[i+15] = array[i+15];
                array_copy[i+16] = array[i+16];
                array_copy[i+17] = array[i+17];
                array_copy[i+18] = array[i+18];
                array_copy[i+19] = array[i+19];
                array_copy[i+20] = array[i+20];
                array_copy[i+21] = array[i+21];
                array_copy[i+22] = array[i+22];
                array_copy[i+23] = array[i+23];
                array_copy[i+24] = array[i+24];
                array_copy[i+25] = array[i+25];
                array_copy[i+26] = array[i+26];
                array_copy[i+27] = array[i+27];
                array_copy[i+28] = array[i+28];
                array_copy[i+29] = array[i+29];
                array_copy[i+30] = array[i+30];
                array_copy[i+31] = array[i+31];
                array_copy[i+32] = array[i+32];
                array_copy[i+33] = array[i+33];
                array_copy[i+34] = array[i+34];
                array_copy[i+35] = array[i+35];
                array_copy[i+36] = array[i+36];
                array_copy[i+37] = array[i+37];
                array_copy[i+38] = array[i+38];
                array_copy[i+39] = array[i+39];
        }
        time_end = glfwGetTime();
    
        performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
    printf("Copying (stride 40) done in %.3f s\n", (time_end - time_start));
    printf("Performance: %.3f MB/s\n\n", performance);
    
        /* Reading with wide stride */
        const int imax = 1000;
        double performance_average = 0.0;
        for(int j = 0; j < imax; ++j) {
            uint64_t tmp = 0;
            performance = 0;
            time_start = glfwGetTime();
            for(uint64_t i = 0; i < ARRAY_NUM; i=i+40) {
                    tmp = array[i];
                    tmp = array[i+1];
                    tmp = array[i+2];
                    tmp = array[i+3];
                    tmp = array[i+4];
                    tmp = array[i+5];
                    tmp = array[i+6];
                    tmp = array[i+7];
                    tmp = array[i+8];
                    tmp = array[i+9];
                    tmp = array[i+10];
                    tmp = array[i+11];
                    tmp = array[i+12];
                    tmp = array[i+13];
                    tmp = array[i+14];
                    tmp = array[i+15];
                    tmp = array[i+16];
                    tmp = array[i+17];
                    tmp = array[i+18];
                    tmp = array[i+19];
                    tmp = array[i+20];
                    tmp = array[i+21];
                    tmp = array[i+22];
                    tmp = array[i+23];
                    tmp = array[i+24];
                    tmp = array[i+25];
                    tmp = array[i+26];
                    tmp = array[i+27];
                    tmp = array[i+28];
                    tmp = array[i+29];
                    tmp = array[i+30];
                    tmp = array[i+31];
                    tmp = array[i+32];
                    tmp = array[i+33];
                    tmp = array[i+34];
                    tmp = array[i+35];
                    tmp = array[i+36];
                    tmp = array[i+37];
                    tmp = array[i+38];
                    tmp = array[i+39];
            }
            time_end = glfwGetTime();
    
            performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
            performance_average += performance;
            if(performance > performance_max) { performance_max = performance; }
            if(j == 0) { performance_min = performance; }
            if(performance < performance_min) { performance_min = performance; }
    
        printf("[%d/%d] Performance stride 40: %.3f MB/s\r", j+1, imax, performance);
        fprintf(file_stride, "%d\t%f\n", j, performance);
            fflush(file_stride);
            fflush(stdout);
        }
        performance_average = performance_average / imax;
    printf("\nAverage: %.3f MB/s\n", performance_average);
    printf("Performance MIN: %3.f MB/s | Performance MAX: %3.f MB/s\n\n", 
                performance_min, performance_max);
    
        /* Linear reading */
        performance_average = 0.0;
        performance_min     = 0.0;
        performance_max      = 0.0;
        for(int j = 0; j < imax; ++j) {
            uint64_t tmp = 0;
            performance = 0;
            time_start = glfwGetTime();
            for(uint64_t i = 0; i < ARRAY_NUM; ++i) {
                tmp = array[i];
            }
            time_end = glfwGetTime();
    
            performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
            performance_average += performance;
            if(performance > performance_max) { performance_max = performance; }
            if(j == 0) { performance_min = performance; }
            if(performance < performance_min) { performance_min = performance; }
        printf("[%d/%d] Performance dumb: %.3f MB/s\r", j+1, imax, performance);
        fprintf(file_dumb, "%d\t%f\n", j, performance);
            fflush(file_dumb);
            fflush(stdout);
        }
        performance_average = performance_average / imax;
    printf("\nAverage: %.3f MB/s\n", performance_average);
    printf("Performance MIN: %3.f MB/s | Performance MAX: %3.f MB/s\n\n", 
                performance_min, performance_max);
    
        /* Memcpy */
        performance = 0;
        time_start = glfwGetTime();
        memcpy(array_copy, array, ARRAY_NUM*sizeof(uint64_t));
        time_end = glfwGetTime();
    
        performance = ((ARRAY_NUM * sizeof(uint64_t))/1000000) / (time_end - time_start);
    printf("Copying (memcpy) done in %.3f s\n", (time_end - time_start));
    printf("Performance: %.3f MB/s\n", performance);
    
        /* Cleanup and exit */
        free(array);
        free(array_copy);
        glfwTerminate();
        fclose(file_dumb);
        fclose(file_stride);
    
        exit(EXIT_SUCCESS);
    }
    

    Summary

  • How should I write code for arrays, when linear access is their most common pattern, to get maximum and (nearly) constant speed?
  • What can I learn from this example about caches and prefetching?
  • Are these charts telling me something I should know that I haven't noticed?
  • How else can I unroll loops? I tried -funroll-loops to no effect, so I resorted to writing out the unrolls by hand.
  • Thanks for reading.

    Edit:

    It looks like -O0 gives different performance than when the -O flag is absent entirely! What gives? As the chart shows, the absence of the flag yields better performance.

    [chart: O flag absent]

    EDIT 2:

    I have finally hit the ceiling with AVX.

    === READING WITH AVX ===
    [1000/1000] Performance AVX: 9868.912 MB/s
    Average: 10029.085 MB/s
    Performance MIN: 6554 MB/s | Performance MAX: 11464 MB/s
    

    The average is close to 10664. I had to switch compilers to clang, because gcc was giving me a hard time about using AVX (-mavx). That is also why the chart has more pronounced dips. I would still like to know how to get consistent performance. I presume it is due to caching/cache lines. That would also explain performance above DDR3 speed here and there (MAX was 11464 MB/s).

    Please excuse my gnuplot-fu and its keys. Blue is SSE2 (_mm_load_si128) and orange is AVX (_mm256_load_si256). Purple is strided as before, and green is dumb reading one element at a time.

    [chart: AVX is king]

    So, the final two questions are:

  • What is causing the dips, and how do I get consistent performance?
  • Is it possible to hit the ceiling without intrinsics?
  • Gist with the latest version: https://gist.github.com/Keyframe/1ed9062ec52fc4a0d14b and charts from that version: http://imgur.com/a/cPeor


    Your value for the peak bandwidth from main memory is off by a factor of two. Instead of 10664 MB/s it should be 21.3 GB/s (more precisely, it should be (21333⅓) MB/s - see the derivation below). The fact that you see more than 10664 MB/s sometimes should have told you that something was probably wrong with your peak bandwidth calculation.

    In order to get the maximum bandwidth on Core2 through Sandy Bridge, you need to use non-temporal stores. Additionally, you need multiple threads. You do not need AVX instructions or to unroll the loop.

    void copy(char *x, char *y, int n)
    {
        #pragma omp parallel for schedule(static)
        for(int i=0; i<n/16; i++)
        {
            _mm_stream_ps((float*)&y[16*i], _mm_load_ps((float*)&x[16*i]));
        }
    }
    

    The arrays need to be 16-byte aligned and also a multiple of 16. The rule of thumb for non-temporal stores is to use them when the memory you are copying is larger than half the size of the last-level cache. In your case, half of the L3 cache size is 1.5 MB, and the smallest array you copy is 8 MB, so this is much larger than half the last-level cache size.

    Here is some code to test this.

    //gcc -O3 -fopenmp foo.c
    #include <stdio.h>
    #include <x86intrin.h>
    #include <string.h>
    #include <omp.h>
    
    void copy(char *x, char *y, int n)
    {
        #pragma omp parallel for schedule(static)
        for(int i=0; i<n/16; i++)
        {
            _mm_stream_ps((float*)&x[16*i], _mm_load_ps((float*)&y[16*i]));
        }
    }
    
    void copy2(char *x, char *y, int n)
    {
        #pragma omp parallel for schedule(static)
        for(int i=0; i<n/16; i++)
        {
            _mm_store_ps((float*)&x[16*i], _mm_load_ps((float*)&y[16*i]));
        }
    }
    
    int main(void)
    {
        unsigned n = 0x7fffffff;
        char *x = _mm_malloc(n, 16);
        char *y = _mm_malloc(n, 16);
        double dtime;
    
        memset(x,0,n);
        memset(y,1,n);
    
        dtime = -omp_get_wtime();
        copy(x,y,n);
        dtime += omp_get_wtime();
    printf("time %f\n", dtime);
    
        dtime = -omp_get_wtime();
        copy2(x,y,n);
        dtime += omp_get_wtime();
    printf("time %f\n", dtime);
    
        dtime = -omp_get_wtime();
        memcpy(x,y,n);
        dtime += omp_get_wtime();
    printf("time %f\n", dtime);  
    }
    

    On my system, a Core2 (before Nehalem) P9600@2.53GHz, it gives

    time non temporal store 0.39
    time SSE store          1.10
    time memcpy             0.98
    

    for copying 2 GB.

    Note that it is very important to "touch" the memory you are going to write to first (I used memset to do this). Your system does not necessarily allocate your memory until you access it. If the memory has not been accessed by the time you do the memory copy, the overhead of doing so can significantly bias the results.


    According to Wikipedia, DDR3-1333 has a memory clock of 166⅔ MHz. DDR transfers data at twice the memory clock rate. Additionally, DDR3 has a bus clock multiplier of four. So DDR3 has a total of eight lines per memory clock per channel. Additionally, your motherboard has two memory channels. So the total transfer rate is

     21333⅓ MB/s = (166⅔ * 1E6 clocks/s) * (8 lines/clock/channel) * (2 channels) * (64 bits/line) * (1 byte / 8 bits) * (1 MB / 1E6 bytes)
    

    For what you are doing, I would look at SIMD (single instruction, multiple data); google GCC Compiler Intrinsics for details.


    You should be compiling with a recent GCC (so compiling your GCC 5.2 was a good idea, in November 2015), and you should enable optimizations for your particular platform, so I suggest compiling with gcc -Wall -O2 -march=native (also try -O3 instead of -O2).

    (Don't benchmark your programs without enabling optimizations in the compiler.)

    If you care about cache effects, you might play with __builtin_prefetch, but see this.

    Also read about OpenMP, OpenCL, and OpenACC.
