I've assembled some notes from old manuals and other sources
on the formats used for on-disk file systems through the
Additional notes, comments on style, and whatnot are welcome.
(It may be sensible to send anything in the last two categories
directly to me, rather than to the whole list.)
I've seen a couple of less than flattering references here; what was the
problem with them?
At UNSW, we couldn't afford the DH-11, so ended up with the crappy DJ-11
instead (the driver for the DH-11 had the guts ripped out of it in an
all-nighter by Ian Johnston as I recall), and when the DZ-11 came along we
thought it was the bees' knees.
Sure, the original driver was as slow as hell, but the aforesaid IanJ
reworked it and made it faster by at least 10x; amongst other things, I
think he did away with the character queues and used the buffer pool
instead, getting 9600 on all eight channels simultaneously, possibly even more.
Dave Horsfall DTM (VK2KFU) "Bliss is a MacBook with a FreeBSD server."
http://www.horsfall.org/spam.html (and check the home page whilst you're there)
> From: Clem Cole
A few comments on aspects I know something of:
> BTW: the Arpanet was not much better at the time
The people at BBN might disagree with you... :-)
But seriously, throughout its life, the ARPANET had 'load-dependent routing',
i.e. paths were adjusted not just in response to links going up or down, but
depending on load (so that traffic would avoid loaded links).
The first attempt at this (basically a Destination-Vector algorithm, i.e. like
RIP but with non-static per-hop costs) didn't work too well, for reasons I
won't get into unless anyone cares. The replacement, the first Link-State
routing algorithm, worked much, much, better; but it still had minor issues
damping fixed most of those too).
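To make the Destination-Vector idea concrete, here is a toy sketch (my own illustration, not the actual IMP algorithm, which was far more involved): routes are computed by repeated Bellman-Ford relaxation over per-hop costs, and when a link's advertised cost rises with load, traffic shifts to a path that was previously "longer". The topology and numbers are invented.

```python
# Toy Destination-Vector ("RIP-like") routing with load-dependent
# per-hop costs, in the spirit of the early ARPANET scheme described
# above.  Purely illustrative; names and topology are invented.

def distance_vector(links, nodes, rounds=16):
    """links: {(a, b): cost} (undirected); returns dist[node][dest]."""
    dist = {n: {d: (0 if d == n else float("inf")) for d in nodes}
            for n in nodes}
    neighbors = {n: [] for n in nodes}
    for (a, b), c in links.items():
        neighbors[a].append((b, c))
        neighbors[b].append((a, c))
    for _ in range(rounds):          # repeated exchange of vectors
        for n in nodes:
            for m, c in neighbors[n]:
                for d in nodes:
                    # Bellman-Ford relaxation: try routing via neighbor m
                    if c + dist[m][d] < dist[n][d]:
                        dist[n][d] = c + dist[m][d]
    return dist

nodes = ["A", "B", "C"]
# Quiet network: the two-hop path A-B-C is cheapest (cost 2).
quiet = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}
print(distance_vector(quiet, nodes)["A"]["C"])    # -> 2
# Link A-B heavily loaded, so its cost rises; traffic avoids it.
loaded = {("A", "B"): 10, ("B", "C"): 1, ("A", "C"): 5}
print(distance_vector(loaded, nodes)["A"]["C"])   # -> 5
```

The oscillation problem mentioned above falls straight out of this scheme: once everyone moves off the loaded link, it becomes cheap again, and without damping the traffic can chase itself back.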
> DH11's which were a full "system unit"
Actually, two; they were double (9-slot, I guess?) backplanes.
> The MIT guys did ARP for ChaosNet which quickly migrated down the street
> to BBN for the 4.1 IP stack.
Actually, ARP was jointly designed by David Plummer and me for use on both
TCP/IP and CHAOS (which is why it has that whole multi-protocol thing going);
we added the multi-hardware thing because, well, we'd gone half-way to making
it totally general by adding multi-protocol support, so why stop there?
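That generality is visible right in the packet layout: every address field is parameterized by an explicit hardware type, protocol type, and the two address lengths, so the same framework carries IPv4, CHAOS, or anything else. A small sketch in Python, packing the standard IPv4-over-Ethernet request case (the constants are the well-known ones from the ARP spec; the addresses are made up):

```python
import struct

def arp_request(sha, spa, tpa):
    """Build an ARP who-has request for IPv4 over Ethernet.

    The htype/ptype and hlen/plen fields are what make ARP hardware-
    and protocol-independent: a CHAOS ARP would carry a different
    ptype and protocol-address length, but the same framework.
    """
    HTYPE_ETHERNET = 1
    PTYPE_IPV4 = 0x0800
    OPER_REQUEST = 1
    fixed = struct.pack("!HHBBH", HTYPE_ETHERNET, PTYPE_IPV4,
                        6, 4, OPER_REQUEST)      # 8-byte fixed header
    tha = b"\x00" * 6        # target hardware address not yet known
    return fixed + sha + spa + tha + tpa

pkt = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",   # sender MAC (invented)
                  bytes([192, 168, 1, 10]),      # sender IP  (invented)
                  bytes([192, 168, 1, 1]))       # target IP  (invented)
print(len(pkt))   # 8 fixed bytes + 2*(6+4) address bytes -> 28
```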
As soon as it was done it was used on a variety of IP-speaking MIT machines
that were connected to a 10MBit Ethernet; I don't recall them all, but one
kind was the MIT C Gateway multi-protocol routers.
> Hey it worked just fine at the time.
For some definition of 'work'! (Memories of wrapping protocol A inside
protocol B, because some intervening router/link didn't support protocol A ...)
Wanting to set up an 11/34 or 11/23 with a unix that's at least
contemporary enough to run telnet and ftp. From what I can gather online,
I guess 2.10 is the best shot, but it's apparently a little less popular
and I can't find enough docs about it to determine if it'll run with the
hardware I have. Am I on the right track here, or should I be considering
backporting the programs to 2.9? Pointers to 2.10 Setup manual would be
most welcome as well as suggestions on where to find other resources to
meet this goal.
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
> The replacement, the first Link-State routing algorithm, worked much,
> much, better; but it still had minor issues
> damping fixed most of those too).
Oops, the editor 'ate' a line there (or, rather, the editor's operator spaced
out :-): it should say "it still had minor issues, such as oscillation
between two equal-cost paths, with the traffic 'chasing itself' from path to
path; proper damping fixed most of those too".
> I always give Dave Clark credit (what I call "Clark's Observation") for
> the most powerful part of the replacement for the ARPAnet - aka the
> idea of a network of networks.
Not sure exactly what you're referring to here (the concept of an internet
as a collection of networks seems to have occurred to a number of people;
see the Internet Working Group notes from the early 70s).
> Dave once quipped: "Why does a change at CMU have to affect MIT?"
Subnets (which first appeared at MIT, due to our, ah, fractured
infrastructure) again were an idea which occurred to a number of people all
at the same time; in part because MIT's CHAOSNET already had a collection of
subnets (the term may in fact come from CHAOSNET, I'd have to check) inside MIT.
> I've forgotten what we did at CMU at the time, but I remember the MIT
> folk were not happy about it.
I used to know the answer to that, but I've forgotten what it was! I have this
bit set that CMU did something sui generis, not plain ARP sub-netting, but I
just can't remember the details! (Quick Google search...) Ah, I see, it's
described in RFC-917 - it's ARP sub-netting, but instead of the first-hop
router answering the ARP based on its subnet routing tables, it did something
where ARP requests were flooded across the entire network.
No wonder we disapproved! :-)
> Thought, didn't you guys have the 3Mbit stuff like we did at CMU and
> UCB first?
MIT, CMU and Stanford all got the 3Mbit Ethernet at about the same time, as
part of the same university donation scheme. (I don't recall UCB being part
of that - maybe they got it later?)
The donation included a couple of UNIBUS 3Mbit Ethernet cards (a hex card,
IIRC). The first 3MB connections at MIT were i) kludged into one of the MIT-AI
front-end 11's (I forget the details, but I know it just translated CHAOS
protocol packets into EFTP so they could print things on the Dover laser
printer), and ii) a total kludge I whipped up which could forward IP packets
back and forth between the 3M Ethernet, and the other MIT IP-speaking LANs.
(It was written in MACRO-11, and with N interfaces, it used hairy macro
expansions to create separate code for each of the 'N^2' possible forwarding
paths!) Dave Clark did the Alto TCP/IP implementation (which was used to
create a TFTP->EFTP translating spooler for IP access to the Dover).
I can give you the exact date, if you care, because Dave Clark and I had
a competition to see who could be done first, and the judge (Karen Sollins)
declared it a draw, and I still have the certificate! :-)
> From: Clem Cole <clemc(a)ccc.com>
> two issues. first DEC subsetted the modem control lines so running
> modems - particularly when you wanted hardware flow control like the
> trailblazers - did not work.
?? We ran dialup modems on our DZ11s (1200 bps Vadics, IIRC) with no problems,
so you must be speaking of some sort of high-speed application where you
needed the hardware flow control, or something, when you say "running modems
... did not work".
Although, well, since the board didn't produce an interrupt when a modem
status line (e.g. 'carrier detect') changed state, we did have to do a kludge
where we polled the device to catch such modem control line changes. Maybe
that's what you were thinking of?
> To Dave the DZ was great because it was two boards to do what he
> thought was the same thing as a DH
To prevent giving an incorrect impression to those who 'were not there', each
single DZ hex board supported 8 lines (fully independent of any other cards);
the full DH replacement did need two boards, though.
> From: Ronald Natalie
>> each single DZ hex board supported 8 lines (fully independent of any
>> other cards); the full DH replacement did need two boards, though.
> Eh? The DH/DM from Able was a single hex board and it supported 16 lines.
To be read in the context of Clem's post which I was replying to: to replace
the line capacity of a DH (16 lines), one needed two DZ cards.
> From: Dave Horsfall <dave(a)horsfall.org>
> what was the problem with them?
Well, not a _problem_, really, but.... 'one interrupt per output character'
(and no way around that, really). So, quite a bit of overhead when you're
running a bunch of DZ lines, at high speeds (e.g. 9600 baud).
I dunno, maybe there was some hackery one could pull (e.g. only enabling
interrupts on _one_ line which was doing output, and on a TX interrupt,
checking all the other lines to see if any were i) ready to take another
character, and ii) had a character waiting to go), but even so it's still going
to be more CPU overhead than DMA (which is what the DH used).
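The hackery sketched above is easier to see in code. Here is a toy software model of it (purely illustrative, not real DZ11 driver code): a single shared transmit interrupt, on which we scan every line and hand one character to each line that is both ready and has output queued, instead of taking one interrupt per character per line.

```python
# Toy model of the "one TX interrupt, then scan all lines" trick
# described above -- an illustration only, not real DZ11 driver code.
# Each line has an output queue and a "transmitter ready" flag.

from collections import deque

class Line:
    def __init__(self, text):
        self.queue = deque(text)   # characters waiting to go out
        self.tx_ready = True       # transmitter can take a character
        self.sent = []             # what "went out on the wire"

def tx_interrupt(lines):
    """Service one shared transmit interrupt: scan every line and
    feed a character to each that is ready and has output pending."""
    for ln in lines:
        if ln.tx_ready and ln.queue:
            ln.sent.append(ln.queue.popleft())

lines = [Line("hi"), Line("ok"), Line("")]
while any(ln.queue for ln in lines):
    tx_interrupt(lines)

print(["".join(ln.sent) for ln in lines])   # -> ['hi', 'ok', '']
```

Even in this best case the CPU touches every character individually, which is why it can never match the DMA transfers the DH did.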
> From: Clem Cole <clemc(a)ccc.com>
> an old Able "Enable" board which will allow you to put 4Megs of memory
> in an 40 class processor (you get a cache plus a new memory MAP with 22
> bits of address like the 45 class processors).
But it doesn't add I/D to a machine without it, though, right? (I tried
looking for Enable documentation online, couldn't find any. Does anyone know
of any?)
I recall at MIT we had a board we added to our 11/45 which added a cache, and
the ability to have more than 256KB of memory, but I am unable to remember
much more about it (e.g. who made it, or what it was called) - it must have
been one of these.
I recall we had to set up the various memory maps inside the CPU to
permanently point to various ranges of UNIBUS address space (so that, e.g.
User I space was assigned 400000-577777), and then the memory map inside the
board mapped those out to the full 4MB space; the code changes were (IIRC)
restricted to m45.s; for the rest of the code, we just redefined UISA0 to
point to the one on the added board, etc. And the board had a UNIBUS map to
allow UNIBUS devices access to all of real memory, just like on an 11/70.
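The two-level arrangement described above is easier to follow with the arithmetic written out. Stage 1 is the stock PDP-11 MMU (a 16-bit virtual address splits into a 3-bit page number and 13-bit displacement, relocated by a per-page PAR held in 64-byte units); stage 2 stands in for the add-on board's map from the fixed 18-bit UNIBUS ranges out to the full 22-bit space. The particular base values below are invented for illustration:

```python
# Sketch of the two-level mapping described above.  Stage 1 is the
# stock PDP-11 relocation; stage 2 models the add-on board's map.
# The PAR values and board map bases here are invented examples.

def mmu_relocate(vaddr, pars):
    """Stock PDP-11 MMU: 16-bit virtual -> 18-bit UNIBUS address."""
    page = (vaddr >> 13) & 0o7      # which of the eight 8KB pages
    disp = vaddr & 0o17777          # 13-bit displacement within page
    return pars[page] * 64 + disp   # PAR holds base in 64-byte units

def board_relocate(ubaddr, board_map):
    """Add-on board: map a fixed UNIBUS range into 22-bit memory."""
    page = ubaddr >> 13
    return board_map[page] + (ubaddr & 0o17777)

# Address-space sizes: 18 bits = 256 KB, 22 bits = 4 MB.
print((1 << 18) // 1024, (1 << 22) // 1024)    # -> 256 4096

# E.g. CPU PARs pinned so user I space starts at UNIBUS 0400000,
# one consecutive 8KB page per PAR (0o200 clicks = 8KB):
pars = [0o4000 + i * 0o200 for i in range(8)]
ub = mmu_relocate(0o0, pars)
print(oct(ub))                                  # -> 0o400000

# The board then maps that fixed range wherever it likes in 4MB:
print(oct(board_relocate(ub, {16: 0o10000000})))  # -> 0o10000000
```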
> From: Jacob Ritorto <jacob.ritorto(a)gmail.com>
> So does that single board contain the memory and everything, or is this
> a backplane mod/special memory kind of setup?
I honestly don't recall much about how the board we had at MIT worked, but i)
the memory was not on the board itself, and ii) there had to be some kind of
special arrangements for the memory, since the stock UNIBUS only has 18 bits
of address space. IIRC, the thing we had could use standard Extended UNIBUS memory.
I don't recall how the mapping board got access to the memory - whether the
whole works came with a small custom backplane which one stuck between the
CPU and the rest of the system, and into which the new board and the EUB
memory got plugged, or what. I had _thought_ it somehow used the FastBUS
provision in the 11/45 backplane for high-speed memory (since with the board
in, the machine had a basic instruction time of 300nsec if you hit the cache,
just like an 11/70), and plugged in there somewhere, but maybe not, since
apparently this board you have is for a /34? Or maybe there were two
different models, one for the /45 and one for the /34?
> With the enable34 board, do I have some hope of getting 2.11bsd on this
Since I doubt it adds I/D to machines that don't already have it, I would
guess no. Unless somehow one can use overlays, etc, to fit 2.11 into 56KB of
address space (note, not 'memory').
> I do have an 11/73 I'm working on that could run that build much more
> easily and appropriately..
That's where I'd go.
I do have that MIT V6 Unix with TCP/IP, where the TCP is almost entirely in
user space (only incoming packet demux, etc is in the kernel), and I have
found backup tapes for it, which are off at an old tape specialist being
read, and the interim reports on readability are good, but until that
happens, I'm not sure we'll be seeing TCP/IP on non-split-I/D machines.