Speedup

I added the needed multi-hunk processing to my front-end hack, so that I can try to compile parts of GCC. What this means is that if a construct (say, an enum) spans multiple hunks, the parser will now notice this and record it properly. If a subsequent parse sees the same starting hunk, it checks whether the following hunks match the previously recorded chain, and if so, maps in all the bindings.
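The chain check described above might be sketched like this. To be clear, the structure and names here are my own invention for illustration, not the actual front-end code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical representation of one preprocessed hunk in the chain
   that a multi-hunk construct spans.  Names are illustrative only.  */
struct hunk
{
  const char *checksum;   /* digest of this hunk's token stream */
  struct hunk *next;      /* next hunk in the construct's chain */
};

/* Return true if a previously saved chain matches the chain seen in
   the current parse.  Only on a full match is it safe to skip
   reparsing and map in the old bindings.  */
static bool
chain_matches (const struct hunk *saved, const struct hunk *current)
{
  while (saved && current)
    {
      if (strcmp (saved->checksum, current->checksum) != 0)
	return false;
      saved = saved->next;
      current = current->next;
    }
  /* Both chains must end at the same point.  */
  return saved == NULL && current == NULL;
}
```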

Unfortunately this was not the only problem — GCC also has an enum defined in two different header files, meaning that the token stream actually seen depends on the order of header inclusion. Gross stuff! Anyway, handling this was always on the agenda; but it means writing more code and so I am putting it off a little bit.
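A reduced illustration of the problem (the guard and enum names here are made up, not the actual GCC headers): when two headers both define the same guarded enum, the definition whose tokens the compiler actually sees comes from whichever header happens to be included first, so the hunk containing the construct changes with inclusion order.

```c
/* --- contents of a hypothetical first.h --- */
#ifndef GCC_MODE_H
#define GCC_MODE_H
enum machine_mode { VOIDmode, QImode, SImode };
#endif

/* --- contents of a hypothetical second.h --- */
#ifndef GCC_MODE_H
#define GCC_MODE_H
/* If second.h is included first, these tokens are the ones seen
   instead, and the construct lives in a different hunk.  */
enum machine_mode { VOIDmode, QImode, SImode };
#endif
```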

Meanwhile I compared my hacked compiler with a roughly equivalent unmodified version. I compiled zenity (a Gnome program; the part I compiled is about 5KLOC according to wc — but 475KLOC after preprocessing) with both, using the same command line arguments, including --combine.

The results: the modifications are a big win:

Compiler   Time       Memory (used at exit)
Hacked     11.98 sec  17 MB
Trunk      23.87 sec  62 MB

The hacked compiler with --combine is faster than running a make on the same program (though to be fair I didn't try to measure any of the build overhead; that overhead would have to exceed 4.5 seconds to close the gap, since the normal build took 16.55 sec against 11.98 sec for the hacked --combine build).

2 Comments

  1. Taras wrote:

    Out of curiosity, are you getting paid to hack on this or doing this in your spare time? It’s impressive stuff, I just wonder how you find time to work on it.

    Thursday, June 28, 2007 at 12:35 am | Permalink
  2. tom wrote:

    Yes, I’m doing this for my job right now.

    Thursday, June 28, 2007 at 2:38 pm | Permalink
