[TUHS] The Unix shell: a 50-year view

Theodore Ts'o tytso at mit.edu
Mon Jul 5 02:07:55 AEST 2021


On Sat, Jul 03, 2021 at 09:36:15PM -0700, Larry McVoy wrote:
> In my opinion you couldn't be more wrong.  We still have the same problems,
> we are still trying to grep an answer out of a bunch of info that has just
> gotten bigger.
> 
> We still want to do the same things and we are doing them better with faster
> CPUs, memory, disks, etc.

I'm not sure I agree with you.  The same problems still exist, to be
sure.  But new problems have also arisen, for which Unix might not be
the best match.  For example, the original Unix never had to deal with
devices appearing after the system has booted, or disappearing again
afterwards, nor with services that need to be started, reconfigured,
or stopped as those devices come and go.
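
To make that concrete, here is a minimal Python sketch of the kind of
hotplug listener a modern Linux system needs.  It uses the third-party
pyudev library, and the "start a service" reaction in the comments is
purely a hypothetical example, not any particular system's design:

    import pyudev  # third-party; Linux only

    # Listen on the kernel's netlink socket for device add/remove events.
    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem='block')  # only block devices, for brevity

    for device in iter(monitor.poll, None):
        if device.action == 'add':
            print('device appeared:', device.device_node)
            # A real system would start or reconfigure services here,
            # e.g. something along the lines of
            # subprocess.run(['systemctl', 'start', 'backup.service'])
        elif device.action == 'remove':
            # the device node may already be gone; sys_path still names it
            print('device vanished:', device.sys_path)

The point isn't the dozen lines of code; it's that nothing in the
classical Unix design hands you that event stream at all.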

The original Unix systems didn't have the requirement of automatically
detecting failed hardware or software, and of taking action to mitigate
those faults without depending on operators in data centers.  Structured
logging systems, whether Ultrix/OSF's uerf, Solaris's SMF, or Tuttle
(an internal system I designed for $WORK), exist at least in part
because scraping /var/log/messages and having cluster management
systems parse textual log entries using an unholy mess of regexes is
error-prone and not necessarily reliable or maintainable.
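
A toy Python sketch makes the contrast obvious.  (The log line, the
regex, and the field names below are invented for the example; they
are not what uerf, SMF, or Tuttle actually emit.)

    import json
    import re

    # The scraping approach: reverse-engineer a free-form syslog line.
    # Any change to the daemon's message wording silently breaks this.
    SYSLOG_RE = re.compile(
        r'(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) sshd\[(?P<pid>\d+)\]: '
        r'Failed password for (?P<user>\S+) from (?P<ip>\S+)')

    line = ('Jul  5 02:07:55 myhost sshd[4242]: '
            'Failed password for root from 203.0.113.7 port 22 ssh2')
    m = SYSLOG_RE.search(line)
    if m:
        print(m.group('user'), m.group('ip'))

    # The structured approach: the producer emits key/value records,
    # so consumers read fields instead of guessing at text layout.
    event = {'event': 'auth.failed_password', 'user': 'root',
             'ip': '203.0.113.7', 'pid': 4242}
    record = json.dumps(event)     # what gets written to the log
    fields = json.loads(record)    # what the cluster manager reads back
    print(fields['user'], fields['ip'])

The first half works right up until someone rewords the message; the
second half survives that, because the contract is the fields, not
the prose.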

There are also new problems, such as Machine Learning, which has
driven the introduction of coprocessors; that's a great example of
something which is *not* doing the "same things" but with better
CPUs, memory, disks, etc.

You can of course define away some of these new problems as "not
interesting", or "not the true spirit of Unix", but the reality is
that there is a good reason why systems have been changing: it's in
response to the pressures of new requirements and new problems being
added, even if the old problems haven't gone away.

> I maybe think the reason you think that things aren't relevant anymore are
> because young people don't get Unix, they just pile on to this framework
> and that framework, NONE OF WHICH THEY UNDERSTAND, they just push more
> stuff onto the stack.

This is perhaps an unavoidable response to increasing complexity.
When I was an undergraduate we started by building a computer using
TTL chips on a breadboard, and then we went on to learn Scheme and the
Lambda calculus, and then built up from there.  But a few years ago,
MIT decided this wasn't sufficiently relevant to how most software
engineers work today, so the intro to computing class was redone to
use Python.  One stated goal was to give students experience building
on top of a framework which is not sufficiently documented, where they
need to figure out what the #?$!@? is going on via experimentation,
supplemented with whatever documentation and sources they can
find/read/understand.

(And if you look at the depth of a stack trace after a typical Java
application crashes due to an unhandled exception, MIT has probably
gotten it right in terms of what most CS students will experience
after they graduate.  I remember looking at 80-90 Java function names
in a Lotus Notes stack dump after a crash, involving the Eclipse *and*
the standard Java library frameworks, with awe and horror...)

As a result, I've found that most new college grads don't have much in
the way of Systems training or experience, and for those groups that
need that kind of Systems thinking at $WORK, it's something we need to
train, not something I can assume new hires will have.  (This is also
true of many candidates with significant industry experience whom I've
interviewed, alas.)

						- Ted

