The other day Roland asked me why PCH is not a bigger win. While discussing this I realized that I didn’t try to time the very best case for PCH. So, I looked at a couple more things.
First, I spent some time munging the output of
-ftime-report and making graphs with gnuplot. I thought it might be interesting to visualize the problem areas. All this told me, though, was that in my earlier tests, the PCH build still spent an inordinate amount of time in the parser.
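The munging step can be sketched like this. This is only an illustrative reconstruction, not the script I actually used: the sample log lines are made up, and the exact -ftime-report layout varies between GCC versions, but the idea is to pull each "phase" name and its user time into a tab-separated file that gnuplot can plot directly.

```shell
# Illustrative -ftime-report output (real logs come from
# `g++ -ftime-report ... 2> report.log`; format differs by GCC version).
cat > report.log <<'EOF'
 phase parsing           :   4.10 (51%) usr
 phase opt and generate  :   3.20 (40%) usr
 TOTAL                   :   8.00
EOF

# Extract "name<TAB>seconds" rows for the phase lines only,
# trimming the padding awk sees around the colon-separated fields.
awk -F: '/^ phase/ { name=$1; gsub(/^ +| +$/, "", name);
                     split($2, f, " "); print name "\t" f[1] }' report.log > phases.tsv
cat phases.tsv
```

From there, something like `plot "phases.tsv" using 2:xtic(1) with boxes` inside gnuplot gives a quick bar chart of where the time goes.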
Back when I wrote gcjx (my test case here), I tried to make it PCH-friendly. I made a master file,
"typedefs.hh", which declared most things and which included a number of other files. But, it turns out, it didn’t include absolutely everything — which the
"all.cc" approach does. So, I made a .gch file that did include everything (not entirely trivial, as there are ordering dependencies between headers to consider) and re-ran my test. This did speed up the PCH case — but the
all.cc case is still much faster. A look at the graph here shows the same thing, namely that the PCH build is spending a lot of time in the parser.
I didn’t look much deeper than this. I’ve learned that PCH is not a big win even in its best case, but I don’t know why. Given PCH’s many restrictions, I don’t think it is too interesting to pursue this further. The approach I’m investigating is faster and has fewer limitations than PCH. Also, it will not require changes to your Makefiles.