[TUHS] Happy birthday, Niklaus Wirth!

Tim Bradshaw tfb at tfeb.org
Fri Feb 16 21:27:50 AEST 2018


On 16 Feb 2018, at 02:48, Larry McVoy <lm at mcvoy.com> wrote:
> 
> I'd like to be told, as a systems guy, what those things are.

This is a story I was told in the 1980s about something which happened in the 1960s.  But I was told it by the person who did the thing in the 1960s, so I believe it to be true, subject to the limitations of my fading memory.

In General Relativity, one of the things you need to do is, starting from a metric (which is a specification, in terms of some basis -- usually a coordinate basis -- of how distances work), compute various geometrically and physically interesting properties.  Some of these are things like curvature, which tells you whether the spacetime the metric describes is flat, and some are about deciding whether two apparently different metrics in fact describe the same spacetime, or parts of the same spacetime.
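(For anyone who hasn't done this by hand: the standard textbook recipe -- not anything specific to this story -- is that from the metric components g_{ab} you first build the Christoffel symbols and then the Riemann curvature tensor, in one common sign convention:

    \Gamma^a_{bc} = (1/2) g^{ad} (\partial_b g_{dc} + \partial_c g_{db} - \partial_d g_{bc})

    R^a_{bcd} = \partial_c \Gamma^a_{bd} - \partial_d \Gamma^a_{bc}
                + \Gamma^a_{ce} \Gamma^e_{bd} - \Gamma^a_{de} \Gamma^e_{bc}

In four dimensions the Riemann tensor has 20 independent components, each a sum of derivatives and products of the metric components, every one of which then needs simplifying.  Hence 'laborious'.)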

These calculations are laborious: really, seriously laborious.  They're also not algorithmic: it has been known for a long time (this is Richardson's theorem, which belongs to the same family of undecidability results as Gödel incompleteness) that for two quite simple algebraic expressions you can't in general decide whether they are equal.  But they matter if you want to make progress in GR, because unless you can solve problems like this you end up with people churning out endless metrics and never knowing whether they describe the same spacetimes.  An early concrete example of this is the Schwarzschild metric (which describes nonrotating, uncharged things and is a very good approximation for the gravitational field around stars and, famously, black holes).  In the obvious expression for this metric, it blows up in a horrible way at a particular radius (the Schwarzschild radius).  It took people a very long time to understand that this blowing up was not because something physically horrible happened at that radius, but because the coordinate system in which the metric was expressed went bad there.  Eventually people found equivalent forms of the metric which did not blow up at the Schwarzschild radius.
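(Concretely, and again just standard textbook material rather than anything from the original story: in Schwarzschild coordinates, with G = c = 1, the line element is

    ds^2 = -(1 - 2M/r) dt^2 + (1 - 2M/r)^{-1} dr^2 + r^2 d\Omega^2

and the (1 - 2M/r)^{-1} factor blows up at r = 2M.  But a coordinate-independent curvature invariant such as the Kretschmann scalar, R_{abcd} R^{abcd} = 48 M^2 / r^6, is perfectly finite there -- the tell-tale sign that it is the coordinates, not the geometry, going bad, which the later Eddington-Finkelstein and Kruskal-Szekeres forms of the metric made explicit.)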

There's an important metric which I have always known as the 'Bondi metric' but I think is more properly known as the 'Lemaître–Tolman–Bondi metric', which was under investigation in the 1960s, towards the start of the GR renaissance.  Various properties of this thing needed to be computed, and this was assigned to a student as a PhD project (this is how laborious the computations are: they take a human several years).  I'm not sure who this person was, but they duly got their PhD for it.

At the same time another person (Ray d'Inverno) got interested in whether these computations could be done by machine.  He wrote a program on the Atlas (a second-generation British machine), and he may also have written the Lisp in which it was implemented.  The program was called ALAM (Atlas Lisp Algebraic Manipulation), and it gave rise to a slightly later thing called LAM (just dropping the 'Atlas': perhaps after it was ported somewhere else).

And it turns out that yes, these calculations can be done by machine: I was told that (A)LAM replicated the calculation that the PhD student had done over several years, and it took seven minutes (on, I think I was told, the Atlas).  Better: *it found mistakes in the original computation*.
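To give a flavour of why Lisp suits this work so well -- and to be clear, this is a toy sketch in modern Common Lisp, nothing resembling the actual ALAM source: algebraic expressions are just lists, and the heart of the machinery, symbolic differentiation, is a small recursive function.

    ;; Toy sketch: expressions are lists like (* r r) or (+ m (expt r 2)).
    ;; Handles only sums, binary products and constant powers.
    (defun d (expr var)
      "Differentiate EXPR with respect to the symbol VAR."
      (cond ((eql expr var) 1)                ; dv/dv = 1
            ((atom expr) 0)                   ; constants, other variables
            ((eq (first expr) '+)             ; linearity
             `(+ ,@(mapcar (lambda (e) (d e var)) (rest expr))))
            ((eq (first expr) '*)             ; product rule
             (destructuring-bind (u v) (rest expr)
               `(+ (* ,(d u var) ,v) (* ,u ,(d v var)))))
            ((eq (first expr) 'expt)          ; power rule, constant exponent
             (destructuring-bind (u n) (rest expr)
               `(* ,n (expt ,u (- ,n 1)) ,(d u var))))
            (t (error "don't know how to differentiate ~S" expr))))

So (d '(* r r) 'r) returns (+ (* 1 R) (* R 1)) -- i.e. 2r, once a simplifier has been at it, and the simplifier is where all the difficulty described above actually lives.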

LAM grew up and became, inevitably, SHEEP, in which form I used it in the 1980s.  SHEEP and its derivatives (such as CLASSI) were widely used for computations in GR (and may still be), often extended by using another algebra system, REDUCE, to do some of the harder simplification -- REDUCE was also written in Lisp, of course.  One of my early explorations in retrocomputing was to find the source of the original LAM, persuade it to run on a PC (running muLisp) which I discovered I could use at night, and use it to redo this original calculation.  I remember it took much longer than seven minutes: 8088-based PCs were still much slower than the Atlas in the early 1980s.  (This had all already been done on the mini we had; the interesting thing was making it work on a PC.)

Now of course, someone will make the silly argument that these systems could be written in C.  Yes, indeed they could be, and the way you do this is *by writing a Lisp system in C*.  Indeed there is a famous aphorism about this, Greenspun's tenth rule of programming:

"Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."

There is a corollary to this due to Morris (of the worm):

"Including Common Lisp."

There are some very well-known examples of Greenspun's tenth: I don't want to describe the best one here because it would involve being rude about famous people with extremely large egos in a public forum, and I also own a copy of the thing so I'd be being at least a bit hypocritical.

However, I currently work with a very large Fortran program and I can confirm at first hand that it -- specifically its runtime configuration system -- contains a grotty implementation of half of CL (actually less than half, but the configuration system would be *so much nicer* if it were in Lisp).  If only the people who had written this thing had *known* about Lisp rather than knowing only about Fortran and Python, the world would be a better place.
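For instance -- and this is a hypothetical sketch of the general point, not the actual configuration system, whose keys I've invented here -- a Lisp-syntax configuration needs no parser at all, because READ already is one:

    ;; Hypothetical sketch: a config file holds one (key value ...) form
    ;; per entry; READ-CONFIG returns them as a list of Lisp forms.
    (defun read-config (pathname)
      (with-open-file (in pathname)
        (let ((*read-eval* nil))      ; forbid #. so configs can't run code
          (loop for form = (read in nil nil)
                while form
                collect form))))

    ;; A file containing, say,
    ;;   (grid :nx 512 :ny 512)
    ;;   (solver :scheme :rk4 :cfl 0.4)
    ;; comes back directly as Lisp data, ready to destructure.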