#include all .cpp files into a single compilation unit?

I recently had cause to work with some Visual Studio C++ projects with the usual Debug and Release configurations, but also 'Release All' and 'Debug All', which I had never seen before.

It turns out the author of the projects has a single ALL.cpp which #includes all the other .cpp files. The *All configurations just build this one ALL.cpp file; the regular configurations exclude it and build each .cpp individually.
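For illustration, the file looked roughly like this (the included file names here are made up, not the actual project's):

    // ALL.cpp -- rough sketch; file names are hypothetical.
    // The *All configurations compile only this file; the regular
    // configurations exclude it and compile each .cpp on its own.
    #include "Widget.cpp"
    #include "Renderer.cpp"
    #include "Network.cpp"
    // ...one #include per .cpp file in the project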

I just wondered if this was a common practice? What benefits does it bring? (My first reaction was that it smelled bad.)

What kinds of pitfalls are you likely to encounter with this? One I can think of is that anonymous namespaces in your .cpps are no longer 'private' to that .cpp; they become visible to the other .cpps as well.

All the projects build DLLs, so having data in anonymous namespaces wouldn't be a good idea, right? But functions would be OK?
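To make the anonymous-namespace concern concrete (file and function names here are invented, not from the real project): two .cpps that each keep a same-named helper in an anonymous namespace compile fine on their own, but once both are #included into ALL.cpp they share one translation unit and clash.

    // a.cpp -- hypothetical; builds fine as its own translation unit
    namespace {
        int scale(int x) { return x * 2; }   // "private" helper for a.cpp
    }
    int doubled(int x) { return scale(x); }

    // b.cpp -- hypothetical; also fine on its own
    namespace {
        int scale(int x) { return x * 3; }   // unrelated helper, same name
    }
    int tripled(int x) { return scale(x); }

    // ALL.cpp
    #include "a.cpp"
    #include "b.cpp"   // error: redefinition of 'scale' -- within one
                       // translation unit the two anonymous namespaces
                       // are the same namespace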

Cheers.


It's referred to by some (and google-able) as a "Unity Build". It links insanely fast and compiles reasonably quickly as well. It's great for builds you don't need to iterate on, like a release build from a central server, but it isn't necessarily for incremental building.

And it's a PITA to maintain.

EDIT: here's the first google link for more info: http://buffered.io/posts/the-magic-of-unity-builds/

The thing that makes it fast is that the compiler only needs to read everything in once, compile it, then link, rather than doing that for every .cpp file.

Bruce Dawson has a much better write up about this on his blog: http://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/


Unity builds improve build speeds for three main reasons. The first reason is that all of the shared header files only need to be parsed once. Many C++ projects have a lot of header files that are included by most or all CPP files, and the redundant parsing of these is the main cost of compilation, especially if you have many short source files. Precompiled header files can help with this cost, but usually there are a lot of header files which are not precompiled.
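A rough illustration of that first point (header and struct names are hypothetical): because every .cpp becomes part of one translation unit, the include guard in a shared header means it is parsed once for the whole ALL.cpp instead of once per source file.

    // Common.h -- hypothetical shared header, included by most .cpp files
    #pragma once            // or classic #ifndef/#define include guards
    #include <string>
    #include <vector>
    struct Config { std::vector<std::string> args; };

    // In a normal build, Common.h (plus <string>, <vector>, ...) is parsed
    // again for every .cpp that includes it. In ALL.cpp the guard means it
    // is parsed only once, however many of the #included .cpp files use it.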

The next main reason that unity builds improve build speeds is because the compiler is invoked fewer times. There is some startup cost with invoking the compiler.

Finally, the reduction in redundant header parsing means a reduction in redundant code-gen for inlined functions, so the total size of object files is smaller, which makes linking faster.

Unity builds can also give better code-gen.
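A hypothetical illustration of why (file and function names are made up): a function defined in one .cpp and called from another can only be inlined if the compiler can see its body at the call site, and a unity build makes that possible without link-time code generation.

    // math_utils.cpp (hypothetical)
    int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

    // caller.cpp (hypothetical)
    int clamp01(int v);        // a separate build sees only this declaration,
    int normalize(int v) {     // so the call below can't be inlined without LTO/LTCG
        return clamp01(v);
    }

    // In a unity build both files end up in one translation unit, so the
    // compiler sees the body of clamp01 and can inline it into normalize.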

Unity builds are NOT faster because of reduced disk I/O. I have profiled many builds with xperf and I know what I'm talking about. If you have sufficient memory then the OS disk cache avoids the redundant I/O; subsequent reads of a header come from the cache rather than from disk. If you don't have enough memory then unity builds could even make build times worse, by causing the compiler's memory footprint to exceed available memory and get paged out.

Disk I/O is expensive, which is why all operating systems aggressively cache data in order to avoid redundant disk I/O.


I wonder if that ALL.cpp is attempting to put the entire project within a single compilation unit, to improve the compiler's ability to optimize the program for size?

Normally some optimizations are only performed within a single compilation unit, such as removal of duplicate code and inlining.

That said, I seem to remember that recent compilers (Microsoft's, Intel's, but I don't think this includes GCC) can do this optimization across multiple compilation units, so I suspect that this 'trick' is unnecessary.

Still, it would be interesting to see if there is indeed any difference.
