Archive for November, 2007

Replacing Automake

Inspired by Ian, this past weekend I took a look at replacing the autotools with GNU make code.

Replacing automake turns out to be very easy. And, given that GNU make is everywhere now, I think it is well past time to do this.

It took about 250 lines of code to get C and C++ compilation (with dependency tracking), plus install, clean, libraries, and programs working.

Of course, this is just the beginning. There are still shared libraries (ugh), dist, distcheck, and a host of smaller things.

Still, the result is quite nice. The user Makefiles that result are pretty similar to their automake counterparts, though unfortunately not identical. With a bit of help it could be ready for real use by Christmas. Write me if you’re interested.

The long term goal, of course, is to unify all three tools. I’ve also got some prototype GNU make code to do some configure-style checking, based on some earlier prototyping by Ben Elliston. This is less fully baked, and there are some problems I haven’t solved. But, I think the idea is sound.

In my view the plan would be to finish automake and libtool functionality in the new code, then move on to adding configuration support. The latter can be done incrementally. In the end, we will have replaced all the tools with something better, without undue disruption.

The most important decision — what to name it — is still open. Submit your ideas.

Anti-dependencies

Last week I implemented anti-dependencies in the C front end. This wasn’t difficult enough to write about, but it provides an excuse to describe dependencies and incrementalism in general.

The fundamental idea in speeding up the front end is that it should avoid parsing whenever possible. If we’ve already parsed a function definition in some earlier compilation, it is faster to reuse that definition than to try to parse it again. (At least, that is true if noticing that we can reuse it is faster than parsing. But it always is. Note also that this will be a much bigger win for C++ than for C.)

Reuse works by computing a checksum of a run of tokens (one of these runs is called a “hunk” in the implementation, a name that vaguely embarrasses me). If we’ve seen the checksum before, we reuse the parsed results, which we store in a table.
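To make that concrete, here is a little C sketch of the idea. This is not the real implementation, just an illustration: the token type, the FNV-1a checksum, and the fixed-size table are all stand-ins.

#include <stdint.h>
#include <stddef.h>

/* Stand-in token; the real front end’s tokens carry much more. */
typedef struct token { const char *text; size_t len; } token;

/* Checksum the spellings of a run of tokens (a “hunk”).  FNV-1a is
   used here purely for illustration. */
static uint64_t hunk_checksum (const token *toks, size_t n)
{
  uint64_t h = 14695981039346656037ULL;   /* FNV-1a offset basis */
  for (size_t i = 0; i < n; i++)
    for (size_t j = 0; j < toks[i].len; j++)
      {
        h ^= (unsigned char) toks[i].text[j];
        h *= 1099511628211ULL;            /* FNV-1a prime */
      }
  return h;
}

/* Map a checksum to previously parsed results.  A real compiler
   would use a proper hash table; this array, which simply overwrites
   on collision, is just to show the lookup. */
struct hunk_entry { uint64_t sum; void *parsed_decls; };
static struct hunk_entry reuse_table[4096];

static void *find_reusable_hunk (uint64_t sum)
{
  struct hunk_entry *e = &reuse_table[sum % 4096];
  return e->sum == sum ? e->parsed_decls : NULL;
}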

Of course, the situation is more complicated than that. A bit of code may depend on other bits of code, for instance:

typedef int myint;
extern myint variable;

Here we can’t reuse variable in isolation — we can only reuse it if we’ve already reused the previous line. So, each hunk carries with it some dependency information. Before last week, this was just a list of all the checksums of hunks on which the current hunk depends.
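In sketch form (with invented names, not the actual data structures), that old scheme amounted to this:

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct hunk
{
  uint64_t checksum;   /* checksum of this hunk’s token run */
  uint64_t *deps;      /* checksums of the hunks it depends on */
  size_t n_deps;
};

/* Hypothetical query: was the hunk with this checksum already
   reused in the current compilation? */
extern bool hunk_was_reused (uint64_t checksum);

/* A hunk is reusable only if every prerequisite hunk was reused. */
static bool hunk_is_reusable (const struct hunk *h)
{
  for (size_t i = 0; i < h->n_deps; i++)
    if (!hunk_was_reused (h->deps[i]))
      return false;
  return true;
}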

Even this dependency tracking is not enough for proper operation. Suppose we already compiled this program:

struct x;
struct x { int field; };

… and then try to compile:

struct x;
struct x { int bug; };
struct x { int field; };

With a naive dependency scheme, this would compile without error: we would erroneously reuse the hunk for struct x { int field; }; from the earlier compilation, and so miss the redefinition error. After all, all of that hunk’s dependencies have been met.

Anti-dependencies are the fix for this. When considering a hunk for reuse, we check not only that its dependencies have been met, but also that any declarations we might reuse have the correct “prior value”. In fact, these two checks amount to the same thing: rather than record the checksums of prerequisite hunks, we now simply record declarations.
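Sketched out, again with invented names, the reuse check becomes a comparison against each recorded binding instead of a walk over prerequisite checksums:

#include <stdbool.h>
#include <stddef.h>

struct decl;   /* stand-in for the front end’s declaration node */

/* Each hunk records the names it relied on and the binding each one
   had when the hunk was originally parsed (possibly NULL, meaning
   the name had no declaration at that point). */
struct decl_dep
{
  const char *name;
  struct decl *prior_value;
};

/* Hypothetical lookup of a name’s current binding. */
extern struct decl *current_binding (const char *name);

static bool prior_values_match (const struct decl_dep *deps, size_t n)
{
  for (size_t i = 0; i < n; i++)
    if (current_binding (deps[i].name) != deps[i].prior_value)
      return false;   /* e.g. “struct x” is now already defined */
  return true;
}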

Given all this, how will incremental compilation work? My current plan is not to introduce any sort of special case. Instead, the parser will notify the backend when it parses a function. A function which comes from a reused hunk will simply not be compiled again.

If you think about it a bit, you’ll see that the prerequisites form a dependency model of a compilation unit. If the developer changes a declaration, anything using that declaration will be invalidated, and this invalidation will itself propagate outward. So, the amount of recompilation will be proportional to the “importance” of the change. Change a central typedef, and many things may need recompilation. Change a comment, and nothing will (though handling debug info properly may be a challenge).
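The propagation itself is nothing exotic. As a sketch (made-up structures once more), it is just a mutual recursion over the graph:

#include <stdbool.h>
#include <stddef.h>

struct hunk;

struct decl
{
  struct hunk **users;    /* hunks whose reuse depended on this decl */
  size_t n_users;
};

struct hunk
{
  bool valid;             /* false once invalidated */
  struct decl **provides; /* declarations this hunk supplied */
  size_t n_provides;
};

static void invalidate_decl (struct decl *d);

static void invalidate_hunk (struct hunk *h)
{
  if (!h->valid)
    return;               /* already handled; stops the recursion */
  h->valid = false;
  for (size_t i = 0; i < h->n_provides; i++)
    invalidate_decl (h->provides[i]);
}

/* A changed declaration invalidates every hunk that used it, so the
   cost of a change is proportional to how widely the decl is used. */
static void invalidate_decl (struct decl *d)
{
  for (size_t i = 0; i < d->n_users; i++)
    invalidate_hunk (d->users[i]);
}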

A future refinement is the idea of a “semantic dependency”. The idea here is that, rather than check prerequisite declarations by identity, check them by ABI compatibility. There are a few tricks here, so this is a bit lower on my priority list for the moment.

You might have wondered how we compute the boundaries of a hunk, or how we can actually avoid recompiling functions. I’ll answer both of these in future posts.

Fedora 8

Anthony was in town this past Thursday, and after talking to him I was inspired to follow his example and upgrade to Fedora 8. Like him, I did a live upgrade; but I only upgraded my laptop, which was running Fedora 7. My main machine still runs 6… I’m more cautious about upgrading it, but after playing with Fedora 8 a bit, I can feel the pressure rising.

The biggest change I’ve noticed so far is the inclusion of IcedTea. I’m happily using the browser plugin for the all-important chess applet.

Speaking of which… I’ve been looking at other browser plugins lately. Any reports on the Flash support in Fedora 8? I’m using the (boo hiss) proprietary Flash plugin right now. Also, any experiences with Firebug or Adblock?

I’m impressed with the Fedora maintainers. They manage to release a solid operating system every six months… not an easy task. Upgrades for me have always gone pretty smoothly; I’ve done a mix of upgrades from CD and yum upgrades, and haven’t been burnt (a couple minor singes, but always known bugs with simple fixes). Also, each release has had some compelling benefit making me want to upgrade.

Elyn’s Practice

Elyn’s new web site for her therapy practice is up and running. We’ve read (mostly in Psychotherapy Networker) that a web site is the second most important advertising resource for therapists, after word-of-mouth. The days of people looking for therapists in the phone book are over… another little detail of how the internet has changed things.

I thought I’d do my part and link to her. If you’re near Boulder, and want a therapist, and don’t know either of us personally, give her a call :-). She’s also started a therapy-related blog; the link is in my blogroll.

GCC 4.3 Hacking

I haven’t talked about the incremental compiler in a couple of weeks: first I was out of town, and then I was sick. And then yesterday, I put it off… I don’t want to be that way, but the truth is that for the last couple of weeks I haven’t been working on this project much.

Instead, I’ve been doing a bit of work fixing bugs on the trunk, to help make it so that GCC 4.3 can be released in a timely way. I don’t really know much about most of GCC, though, so I’ve pretty much been working on only the bugs in parts I do know: more or less the C front end and the preprocessor.

Working on bugs this way is a humbling experience. Last week I think I fixed four bugs. Looking through the ChangeLog, I think Jakub fixed twenty. Hmm… I don’t even know how he can do that many bootstrap-and-check cycles in a week.

I also put a bit of work into making GCC emit better column number information. There is some new location code that is more memory efficient (it saves about 5%) and that enables column number output. Unfortunately, the parsers, constant folders, and debug output generators know nothing about columns, and fixing this is a big enough job that it won’t happen in 4.3.

The more I work on parts of GCC like this, the more I realize how awesome gcjx really was, if I may say so myself. GCC has several bad design decisions baked into things as basic as location handling. Sad. It will be a lot of work to clean this up… and when I look at GCC I think my eyes are bigger than my stomach. Or in other words, I need help.

I did do a little work on the incremental compiler starting this week: a few cleanups to undo breakage I introduced earlier. My plan is to merge what I’ve got to the trunk during the next Stage 1. So, I’m cleaning up the little details I intentionally ignored during the experimentation phase.

My thinking behind the merge is that, first, the old compile server partly failed due to being on a branch too long, a mistake I don’t want to repeat; and second, even though this code does not do everything, it is reaching a good milestone, and at the very least it greatly speeds up --combine builds. I’ve heard that the Linux kernel build uses --combine; benchmarking this is on my to-do list.

The Darjeeling Limited

Wes Anderson seems to be converging on his own formula — an outstanding palette and excellent visual composition, meandering adventures, humor mixed with sorrow, and protagonists disconnected from the world and, often, their fathers.

This one moved me emotionally a bit, though not as much as Aquatic. I didn’t enjoy the “short” at the beginning.