Structure of arrays and array of structures

I have a class like this:

//Array of Structures
class Unit
{
  public:
    float v;
    float u;
    float i;
    float t;
    //And similarly many other variables of float type, up to 10-12 of them.
    void update()
    {
       v+=u;
       v=v*i*t;
       //And many other equations
    }
};

I create an array of Unit objects and call update() on them.

int NUM_UNITS = 10000;
void ProcessUpdate()
{
  Unit *units = new Unit[NUM_UNITS];
  for(int i = 0; i < NUM_UNITS; i++)
  {
    units[i].update();
  }
  delete[] units;
}

To speed things up, and to possibly get the loop auto-vectorized, I converted the AoS to a structure of arrays.

//Structure of Arrays:
class Unit
{
  public:
  Unit(int num_units) : NUM_UNITS(num_units)
  {
    v = new float[NUM_UNITS];
    u = new float[NUM_UNITS];
  }
  int NUM_UNITS;
  float *v;
  float *u;
  //Many other variables
  void update()
  {
    for(int i = 0; i < NUM_UNITS; i++)
    {
      v[i]+=u[i];
      //Many other equations
    }
  }
};

When the loop fails to auto-vectorize, I get very bad performance from the structure of arrays. For 50 units, SoA's update is slightly faster than AoS. But from 100 units onwards, SoA is slower than AoS. At 300 units, SoA is almost twice as slow. At 100K units, SoA is 4x slower than AoS. While cache might be an issue for SoA, I didn't expect the performance difference to be this high. Profiling with Cachegrind shows a similar number of misses for both approaches. The size of a Unit object is 48 bytes. L1 cache is 256K, L2 is 1MB and L3 is 8MB. What am I missing here? Is this really a cache issue?

Edit: I am using gcc 4.5.2. Compiler options are -O3 -msse4 -ftree-vectorize.

I did another experiment with SoA. Instead of dynamically allocating the arrays, I allocated "v" and "u" at compile time. With 100K units, this gives performance that is 10x faster than the SoA with dynamically allocated arrays. What's happening here? Why is there such a performance difference between statically and dynamically allocated memory?
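
Roughly, what I mean by the two versions is this (just a sketch, with the size fixed at the 100K from my test):

//Dynamically allocated SoA (the slow version):
v = new float[NUM_UNITS];
u = new float[NUM_UNITS];

//"Compile time" allocation (the 10x faster version),
//with the size a compile-time constant:
static float v[100000];
static float u[100000];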


Structure of arrays is not cache friendly in this case.

You use both u and v together, but since they live in two different arrays they will not be loaded into the same cache line, and the resulting cache misses cost a huge performance penalty.

_mm_prefetch can be used to make the AoS representation even faster.
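
For example, you could prefetch a few units ahead of the one being updated. A minimal sketch, assuming the AoS layout from the question (the prefetch distance of 8 is a guess and would need tuning on your hardware):

#include <xmmintrin.h>  //for _mm_prefetch and _MM_HINT_T0

void ProcessUpdate(Unit *units, int num_units)
{
  const int DIST = 8;  //prefetch distance, tune for your hardware
  for(int i = 0; i < num_units; i++)
  {
    if(i + DIST < num_units)
      _mm_prefetch((const char*)&units[i + DIST], _MM_HINT_T0);
    units[i].update();
  }
}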


Prefetches are critical for code that spends most of its execution time waiting for data to show up. Modern front-side buses have enough bandwidth that prefetching should be safe, provided that your program isn't running too far ahead of its current set of loads.

For various reasons, structures and classes can create numerous performance issues in C++, and may require more tweaking to get acceptable levels of performance. When code is large, use object-oriented programming. When data is large (and performance is important), don't.

const int N = 100000;
float v[N];
float u[N];
float i[N];
float t[N];
//And similarly many other variables of float type, up to 10-12 of them.
//Either using an inlined function or just adding this code in main():
for (int j = 0; j < N; j++)
{
  v[j] += u[j];
  v[j] = v[j] * i[j] * t[j];
}

Certainly, if you don't achieve vectorization, there's not much incentive to make an SoA transformation.

Besides the fairly wide de facto acceptance of __restrict, gcc 4.9 has adopted #pragma GCC ivdep to break assumed aliasing dependencies.
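
A sketch of both applied to the SoA update loop (assuming the v/u arrays from the question; #pragma GCC ivdep needs gcc 4.9 or later):

//restrict-qualified pointers let the compiler assume no aliasing:
void update(float *__restrict v, float *__restrict u, int n)
{
  for(int i = 0; i < n; i++)
    v[i] += u[i];
}

//Alternatively, on gcc 4.9+, assert there are no loop-carried dependencies:
void update_ivdep(float *v, float *u, int n)
{
#pragma GCC ivdep
  for(int i = 0; i < n; i++)
    v[i] += u[i];
}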

As to the use of explicit prefetch: if it is useful at all, you would of course need more prefetches with SoA. The primary point might be to accelerate DTLB miss resolution by fetching pages ahead, so your algorithm could become more cache hungry.
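
With SoA, every member array is a separate stream, so each one may need its own prefetch. A sketch (the prefetch distance is again a tunable assumption, not a measured value):

#include <xmmintrin.h>

void update(float *v, float *u, int n)
{
  const int DIST = 16;  //tunable prefetch distance
  for(int i = 0; i < n; i++)
  {
    if(i + DIST < n)
    {
      _mm_prefetch((const char*)&v[i + DIST], _MM_HINT_T0);
      _mm_prefetch((const char*)&u[i + DIST], _MM_HINT_T0);
    }
    v[i] += u[i];
  }
}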

I don't think intelligent comments could be made about whatever you call "compile time" allocation without more details, including specifics about your OS. There's no doubt that the tradition of allocating at a high level and re-using the allocation is important.
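
Applied to the code in the question, that pattern would look something like this (a sketch, not a measured fix): allocate once outside the hot path and pass the buffer in, instead of calling new inside ProcessUpdate() on every call.

//Allocate once, at a high level...
Unit *units = new Unit[NUM_UNITS];

//...then re-use that buffer on every call:
void ProcessUpdate(Unit *units, int num_units)
{
  for(int i = 0; i < num_units; i++)
    units[i].update();
}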
