[TUHS] Interesting commentary on Unix from Multicians.

Noel Chiappa jnc at mercury.lcs.mit.edu
Mon Apr 11 03:14:22 AEST 2022


    > From: Rich Morin <rdm at cfcl.com>

    > I'd love to hear from some folks who have used both Multics and
    > Unix(ish) systems about things that differ and how they affect the
    > user experience.

{This is a bit behind the flow of the conversation, because I wanted to
ponder for a while what I was going to say on this, to me, important topic.}

Technically, I don't quite qualify (more below), but I do have an interesting
perspective: I was the very first Unix person in the 'Multics' group at
MIT-LCS - the Computer Systems Group, which contained Corby and Jerry Saltzer.

The interesting thing, which may surprise some people, is that I _never_ got
any 'anti-Unix' static from anyone in the group, that I can remember. (Maybe
one grad student, who was a bit abrasive, but he and I had a run-in that was
mostly? caused by my fairly rapid assumption, as an unpaid undergrad, of a
significant role in the group's on-going research work. So that may have bled
across a bit, to Unix, which was 'my' baby.)

I'm not sure what _they_ all made of Unix. None of us knew, of course, where
it was going to go. But I don't recall getting any 'oh, it's just a toy
system' (an attitude I'm _very_ familiar with, since it was what the TCP/IP
people got from _some_ members of what later became the ISO effort). Of
course, our Unix was a tiny little PDP-11/40 - not a building-sized
multi-processor 'information utility' mainframe - so they may well have not
thought of it in the same frame as Multics. Also, by the time I arrived the
group was doing nothing on Multics (except below); everyone was focused on
networks. So OS's were no longer a topic of much research interest, which may
also have contributed.


Anyway, technically, I don't count for the above, because I never actually
wrote code on Multics. However, I had studied it extensively, and I worked
very closely with Dave Clark (a younger Multics light, later a leading figure
in the early TCP/IP work) on projects that involved Multics and my machine,
so I got to see up close what Multics was like as a system environment, as he
worked on his half of the overall project. I've also pondered Multics in the
decades since; so here's my take.

I really, really liked Unix (we were running what turns out to have been
basically a PWB1 system - V6, with some minor upgrades). I learned about it
the way many did; I read the manuals, and then dove into the system source
(which I came to know quite well, as I was tasked with producing a piece that
involved a major tweak - asynchronous 'raw' DMA I/O directly to user
processes).

Unfortunately, one of the two (to me) best things about Unix is something it
has since lost - its incredible bang/buck ratio; to be more precise, the
functionality/complexity ratio of the early versions of the system.

(Its other important aspect, I think, was its minimal demands on the
underlying hardware [unlike Multics, which was irretrievably bound to the
segmentation, and the inter-segment data and code connection], along with its
implementation in what turned out to be a fairly portable language (except
for the types; I had to make up what amounted to new ones).)


So, what was Multics' major difference from other systems - and why
was it good?

I'd say that it was Multics' overall structuring mechanisms - the
single-level store, with the ability to have code and data pointers between
segments - and what that implied for both the OS itself, and major
'applications' (some of them part of the OS, although not the 'kernel' - like
networking code).

Unix had (and still may have; I'm not up on Linux, etc.) a really major, hard
boundary between 'user' code, in processes, and the kernel. There are
'instructions' that invoke system primitives - but not too many, and limited
interactions across that boundary. So, restricted semantics.

Which was good in that it helped keep the system simple and clear - but it
limited the flexibility and richness of the interface. (Imagine building a
large application which had a hard boundary across the middle of it, with
extremely limited interactions across the boundary. Just so with the
interface in Unix between 'user' code, and the kernel.)
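
(To make 'restricted semantics' concrete, here is a rough sketch - mine, in
modern-ish C rather than the actual V6 code - of what crossing that boundary
looks like: a descriptor, a flat buffer, and a count, with the kernel copying
bytes and nothing more.)

    /* Sketch only: the whole conversation with the kernel goes through a
       handful of flat-argument system calls.  The kernel copies bytes across
       the boundary; it will not follow pointers inside whatever structures
       the program keeps on its own side of the line. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[512];
        int fd = open("/etc/motd", O_RDONLY);   /* small, fixed vocabulary... */
        if (fd < 0)
            return 1;
        ssize_t n = read(fd, buf, sizeof buf);  /* ...flat buffer, byte count */
        if (n > 0)
            write(1, buf, (size_t)n);           /* same restricted shape out  */
        close(fd);
        return 0;
    }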

Multics is very different. The interface to the OS is subroutine calls, and
one can easily pass complex data structures, including pointers to other
data, any of which can be in the 'user's' memory, as arguments to them. The
_exact_ same _kind_ of interface was available to _other_ major subsystems,
not just the OS kernel.
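
(Again a sketch of mine, and only an analogy - Multics was PL/I and segments,
not C - but the feel of it was that calling the OS, or any other major
subsystem, was like calling an ordinary library routine, which can be handed
a structure full of pointers into the caller's own data and simply follow
them. All the names below are invented for illustration.)

    #include <stddef.h>
    #include <stdio.h>

    struct piece {               /* caller-owned data, reached by pointer    */
        const char *base;
        size_t      len;
    };

    struct request {
        const char   *name;
        struct piece *parts;     /* pointer to more of the caller's memory   */
        int           nparts;
    };

    /* Imagine this entry point living in another subsystem/'segment',
       not in the caller's code at all. */
    static size_t subsystem_entry(const struct request *req)
    {
        size_t total = 0;
        for (int i = 0; i < req->nparts; i++)
            total += req->parts[i].len;       /* chases the caller's pointers */
        printf("%s: %zu bytes in %d parts\n", req->name, total, req->nparts);
        return total;
    }

    int main(void)
    {
        struct piece parts[] = { { "hello, ", 7 }, { "world", 5 } };
        struct request req = { "demo", parts, 2 };
        subsystem_entry(&req);   /* just a call - no flattening, no copying   */
        return 0;
    }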

As I've mentioned before, Dave's TCP/IP for Multics was not in the kernel -
it was ordinary user code! And he was able to work on it, test and install
new versions - while the system was up for normal usage!

Dave's TCP/IP subsystem included:

  i) a process, which received all incoming packets, and also
     managed/handled a lot of the timers involved (e.g. retransmission
     timeouts);
  ii) data segment(s), which included things like buffered un-acknowledged
     data (so that if a retransmission timer went off, the process would
     wake up and retransmit the data);
  iii) code segment(s):
     iiia) some for use by the process, like incoming packet processing;
     iiib) some which were logically part of the subsystem, but which were
           called by subroutine calls from the _user_'s process; and
     iiic) some which were commands (called by the user's shell), which
           called those iiib) procedures.
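
(A very rough structural sketch of how those pieces relate - invented names,
C instead of PL/I, globals standing in for shared segments, and no real
networking; it only shows the shape, not Dave's actual code:)

    #include <stdio.h>
    #include <string.h>

    /* ii) the shared data: in Multics this sat in data segment(s) visible
       both to the TCP/IP process and to code running in the user's process;
       here a couple of globals stand in for it. */
    static char   unacked[4096];              /* data sent but not yet ACKed */
    static size_t unacked_len;

    /* iiia) code used by the TCP/IP *process* itself: incoming-packet
       handling, and what happens when a retransmission timer goes off. */
    static void on_packet_arrival(const char *pkt, size_t len) { (void)pkt; (void)len; }
    static void on_retransmit_timer(void)
    {
        if (unacked_len > 0)
            printf("retransmitting %zu buffered bytes\n", unacked_len);
    }

    /* iiib) code logically part of the subsystem, but entered by an ordinary
       subroutine call from the *user's* process. */
    static int tcp_send(const char *data, size_t len)
    {
        if (len > sizeof unacked - unacked_len)
            return -1;
        memcpy(unacked + unacked_len, data, len);    /* buffer until ACKed */
        unacked_len += len;
        return 0;
    }

    /* iiic) a command - what the user's shell would invoke - which is itself
       just a caller of the iiib) entry points. */
    int main(void)
    {
        tcp_send("hello", 5);
        on_packet_arrival("", 0);    /* pretend a packet arrived              */
        on_retransmit_timer();       /* pretend the retransmission timer fired */
        return 0;
    }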

(There were security issues, because Multics didn't have the equivalent of
'set-user-id' on cross-subsystem subroutine calls - although I think there
had been plans early on for that. When Dave's code was taken over by
Honeywell, for 'production' use, the whole thing was moved into a lower
ring, so the database didn't have to be world-writeable in the user ring.)

This is typical of the kind of structure that was relatively easy to build in
Multics. Yes, one can build similar applications without all those underlying
mechanisms; but Turing-completeness says one doesn't need stacks to compute
anything - yet would any of us want to work in a system that didn't have
stacks? We have stacks because they are useful.

True, simple user applications don't need all that - but as one builds more
and more complex support subsystems, that kind of environment turns out to be
good to have. Think of a window system, in that kind of environment. Those
'tools' (inter-segment subroutine calls, etc) were created to build the OS,
but they turned out to be broadly useful for _other_ complicated subsystems.

I'm not sure I've explained this all well, but I'm not sure I can fully
cover the topic with less than a book.

	Noel

