The last couple weeks uncovered a few problems in the incremental compiler.

First, suppose you compile a program with the incremental compiler, then recompile it. You would expect the second compilation to emit the same warnings. But — whoops — I never thought about this until a week or two ago: because reused declarations are not re-parsed, their warnings are silently dropped.

I hate that awful moment of realization. It reminds me of getting in trouble as a kid. “Oh shit”, I think. “What am I going to do? Does this sink the project?”

In this case, there are some options. If the set of warning flags does not change between compilations, I think I can modify GCC to store the warnings with their corresponding declarations. This is a bit of a pain, but nothing too awful — and I think I can avoid imposing a cost on the non-warning case by representing the warnings as tree objects and storing them in the hunk with the other declarations.

If the user does change the warning flags, then what? Record it and recompile, I guess. A similar idea applies to options that change the ABI — because ABI decisions get baked into the tree objects we create, if the ABI changes, we cannot reuse the trees.

My other uh-oh moment has to do with inlining. I got bored by the tedious sub-projects I was working on — integrating pragmas (by the way. If you design a language, don’t design pragmas. Thanks) into the dependency computation, fixing the remaining test suite failures — so I decided today to start looking at incremental code generation. Something fun!

I tried out a quick implementation. If a function is parsed, we arrange to compile it; if it is not parsed, we don’t bother. This won’t work on real programs, of course, since those “missing” functions have to come from somewhere, but this should give a good idea of the possible speedup.

After testing on my typical small test program (zenity), I noticed something odd, namely that recompilations were not as blazingly fast as I thought they should be. (I first estimated the absolute lower bound as the time it takes to preprocess the source files.)

Hmm. A mystery. But first, a brief aside about tools. The compile server forks and runs code generation in the subprocess, and I wanted to debug that child process.

  • Plan A: use gdb and set follow-fork-mode to child. That fails because, although my program does not use threads, it still links in the thread library (a relic of my failed threading experiment), and gdb does not seem to handle this well.
  • Plan B: maybe ftrace from frysk can help me. All I want is a stack trace at a particular function call, which is perfect for ftrace. But the ftrace I have aborts at startup, and when I update and rebuild, I hit a build error.
  • Plan C: stick in a sleep() call and attach, just like I did 15 years ago. I suppose I could have gone with this.
  • Plan D: printf. This is what I actually picked. Not quite as good, since I still need some of that stack information.
  • Plan E: rip out the threading code and start over at Plan A. Somehow I didn’t feel like it.
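For reference, the gdb settings behind Plan A (these are real gdb commands, even though they did not help in my case):

```
(gdb) set follow-fork-mode child   # debug the child after a fork
(gdb) set detach-on-fork off       # keep the parent under gdb's control too
```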

Right now I’m doing a lot of debugging and pretty much every week has a vignette like that. I didn’t do that python stuff in gdb purely for fun.

Anyway. What is going on in the compile server?

What I found is that the code generation process still does some processing on every function, even functions that we intend to drop. In particular it is lowering each function to GIMPLE. I think what is going on here is that GCC is lowering functions and running local optimizations on them so that they can be considered as candidates for inlining. At least, that’s my working theory until I get back to Plan C and dig around a bit.

I’m not totally sure yet what to do about this. I think I will have to go back and rip out the decl re-smashing pass I wrote a while back, and instead find a way to perform gimplification in the server. That way, the compile server can keep the gimplified form for use by the back end. Other than the work involved, and some tricky details in lowering without smashing, I think this will work.

This isn’t going to be pretty, but at least it isn’t a total disaster. I’d like to think this isn’t totally an accident. GCC has undergone a lot of changes in the last five years to make it more flexible internally, and I’ve pushed a little bit more in that direction on the branch. This makes it a bit simpler to change the point at which we put a fork in the pipeline.

It feels a bit strange to write about the mistakes I make. On the plus side, I know how to fix these problems; writing about really unknown problems would, of course, be beyond the pale.

One Comment

  • …writing about your mistakes is perhaps one of the most useful things you can do, IMHO. By doing this consistently, you will have a record over time of both your failures and successes: presumably at some point your successes will be checked in and that is what will be remembered.

    So, the successes will take care of themselves.

    But, five or ten years down the road, other people might wonder why things are done or if you’d thought of some other approach. By reviewing a log of mistakes and failed approaches, you’ll be helping them out. I guess this is the purpose of a laboratory notebook.

    I think of this sometimes in my own work, where I try to remember what approaches I took to making C/C++ headers work together five years ago. Sometimes I have to re-invent my old failures, which is actually even more heartbreaking than expected.
