Exciting development in the process of finding lost documentation, just secured this one on eBay: https://www.ebay.com/itm/385266550881?mkcid=16&mkevt=1&mkrid=711-127632-235…
The link is to a (now closed) auction for a Western Electric 3B20S UNIX User's Manual, Release 4.1, something I thought I'd never see and wasn't sure actually existed: print manuals for 4.x.
Once it arrives I'll be curious to see what differences are obvious between this and the 3.0 manual, and it should be easy to scan given the comb binding. What a nice cover too! I always expected that if a 4.x manual of some kind popped up it would feature the falling-blocks motif of the two starter package sets of technical reports, but the picture of a 3B20S is nice. How auspicious given the recent discussion on the 3B series. I'm particularly curious to see what makes it specifically a 3B20S manual: whether that means it only includes commands relevant to that machine, or that it omits commands/info specific to DEC machines.
Either way, exciting times. This is one of those things I had originally set out simply to verify existed when I first started seriously studying the history of UNIX documentation, so it's vindicating to have found one floating around out there in the wild. Between this and the 4.0 docs we should now have a much clearer picture of the gulf between System III and System V.
More to come once I receive it!
- Matt G.
I finally got myself a decent scanner, and have scanned my most prized
relic from my summer at Bell Labs - Draft 1 of Kernighan and Ritchie's "The
C Programming Language".
It's early enough that there are no tables of contents or index; of
particular note is that "chapter 8" is a "C Reference Manual" by Dennis
dated May 1, 1977.
This dates from approx July 1977; it has my name on the cover and various
scribbles pointing out typos throughout.
Enjoy!
https://drive.google.com/drive/folders/1OvgKikM8vpZGxNzCjt4BM1ggBX0dlr-y?us…
p.s. I used a Fujitsu FI-8170 scanner, VueScan on Ubuntu, and pdftk-java
to merge front and back pages.
(Recently I mentioned to Doug McIlroy that I had infiltrated IBM East
Fishkill, reputedly one of the largest semiconductor fabs in the world,
with UNIX back in the 1980s. He suggested that I write it up and share it
here, so here it is.)
In 1986 I was working at IBM Research in Yorktown Heights, New York. I had
rejoined in 1984 after completing my PhD in computer science at CMU.
One day I got a phone call from Rick Dill. Rick, a distinguished physicist
who had, among other things, invented a technique that was key to
economically fabricating semiconductor lasers, had been my first boss at
IBM Research. While I’d been in Pittsburgh he had taken an assignment at
IBM’s big semiconductor fab up in East Fishkill. He was working to make
production processes there more efficient. He was about to initiate a
major project, with a large capital cost, that involved deploying a bunch
of computers and he wanted a certified computer scientist at the project
review. He invited me to drive up to Fishkill, about half an hour north of
the research lab, to attend a meeting. I agreed.
At the meeting I learned several things. First of all, the chipmaking
process involved many steps - perhaps fifty or sixty for each wafer full of
chips. The processing steps individually were expensive, and the amount
spent on each wafer was substantial. Because processing was imperfect, it
was imperative to check the results every few steps to make sure everything
was OK. Each wafer included a number of test articles, landing points for
test probes, scattered around the surface. Measurements of these test
articles were carried out on a special piece of equipment, I think bought
from Fairchild Semiconductor. It would take in a boat of wafers (identical
wafers were grouped together on special ceramic holders called boats for
automatic handling, and all processed identically), feed each wafer to
the test station, and probe each test article in turn. The result was
about a megabyte of data covering all of the wafers in the boat.
At this point the data had to be analyzed. The analysis program comprised
an interpreter called TAHOE along with a test program, one for each
different wafer being fabricated. The results indicated whether the wafers
in the boat were good, needed some rework, or had to be discarded.
These were the days before local area networking at IBM, so getting the
data from the test machine to the mainframe for analysis involved numerous
manual steps and took about six hours. To improve quality control, each
boat of wafers was only worked during a single eight-hour shift, so getting
the test results generally meant a 24-hour pause in the processing of the
boat, even though the analysis only took a couple of seconds on the
mainframe.
IBM had recently released a physically small mainframe based on customized
CPU chips from Motorola. This machine, the size of a large suitcase and
priced at about a million dollars, was suitable for siting next to each test
machine, thus eliminating the six-hour wait to see results.
Because there were something like 50 of the big test machines at the
Fishkill site, the project represented a major capital expenditure. Getting
funding of this size approved would take six to twelve months, and this
meeting was the first step in seeking this approval.
At the end of the meeting I asked for a copy of the manual for the TAHOE
test language. Someone gave me a copy and I took it home over the weekend
and read through it.
The following Monday I called Rick up and told him that I thought I could
implement an interpreter for the TAHOE language in about a month of work.
That was a tiny enough investment that Rick simply wrote a letter to Ralph
Gomory, then head of IBM Research, to requisition me for a month. I told
the Fishkill folks that I needed a UNIX machine to do this work and they
procured an RT PC running AIX 1. AIX 1 was based on System V. The
critical thing to me was that it had lex, yacc, vi, and make.
They set me up in an empty lab room with the machine and a work table.
Relatively quickly I built a lexical analyzer for the language in lex and
got an approximation to the grammar for the TAHOE language working in
yacc. The rest was implementing the functions for each of the TAHOE
primitives.
I adopted rigorous test automation early, a practice people now call
test-driven development. Each time I added a capability to the interpreter I
wrote a scrap of TAHOE code to test it along with a piece of reference
input. I created a test target in the testing Makefile that ran the
interpreter with the test program and the reference input. There were four
directories, one for test scripts, one for input data, one for expected
outputs, and one for actual outputs. There was a big makefile that had a
target for each test. Running all of the tests was simply a matter of
typing ‘make test’ in the root of the testing tree. Only if all of the
tests succeeded would I consider a build acceptable.
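To make that layout concrete, here is a minimal sketch of what such a
makefile-driven harness might have looked like. The directory names, the test
names, and the command-line interface of the interpreter (called mono here)
are my own placeholders for illustration; the original Fishkill setup is not
described in that much detail.

    # Four directories: tests/ (TAHOE test scraps), input/ (reference input
    # data), expected/ (known-good output), actual/ (output from the current
    # build).  Recipe lines must be indented with tabs.
    TESTS = probe_basic sheet_resistance leakage

    test: $(TESTS)

    $(TESTS):
    	./mono tests/$@.tahoe input/$@.dat > actual/$@.out
    	cmp expected/$@.out actual/$@.out

Typing ‘make test’ at the root of the testing tree would then run every test
in turn and stop at the first mismatch between expected and actual output.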
As I developed the interpreter I learned to build tests for bugs, too, as I
encountered them. This was because I discovered that I would occasionally
reintroduce bugs, so these tests, with the same structure (test scrap,
input data, reference output, make target) were very useful at catching
backsliding before it got away from me.
After a while I had implemented the entire TAHOE language. I named the
interpreter MONO after looking at the maps of the area near Lake Tahoe and
seeing Mono Lake, a small lake nearby.
[Image: Lake Tahoe and Mono Lake, with walking routes between them. Source: Google Maps]
At this point I asked my handler at Fishkill for a set of real input data,
a real test program, and a real set of output data. He got me the files
and I set to work.
The only tricky bit at this stage was the difference in floating point
between the RT PC, which used the recently adopted IEEE 754 floating-point
standard, and the idiosyncratic floating point implemented in the
System/370 mainframes of the time. The problem was that the LSB
rounding rules were different in the two machines, resulting in mismatches
in results. These mismatches were way below the resolution of the actual
data, but deciding how to handle this was tricky.
At this point I had an interpreter, MONO, for the TAHOE language that took
one specific TAHOE program and some real data, and produced output that
matched the TAHOE output. Almost done.
I asked my handler, a lovely guy whose name I am ashamed I do not remember,
to get me the regression test suite for TAHOE. He took me over and
introduced me to the woman who managed the team that was developing and
maintaining the TAHOE interpreter. The TAHOE interpreter had been under
development, I gathered, for about 25 years and was a large amount of
assembler code. I asked her for the regression test suite for the TAHOE
interpreter. She did not recognize the term, but I was not dismayed - IBM
had their own names for everything (disk was DASD and a boot program was
IPL) and I figured it would be Polka Dot or something equally evocative. I
described what my regression test suite did and her face lit up. “What a
great idea!” she exclaimed.
Anyway, at that point I handed the interpreter code over to the Fishkill
organization. C compilers were available for the PC by that time, so they
were able to deploy it on PC-AT machines that they located at each testing
machine. Since a PC-AT could be had for about $5,000 in those days, the
savings over the original proposal were about $50 million and about a year
of elapsed time. The analysis of a boat’s worth of data on the PC-AT took
perhaps a minute or two, quite a bit slower than on the mainframe, but
the elimination of the six-hour delay meant that a boat could progress
forward in its processing on the same day rather than a day later.
One of my final conversations with my Fishkill handler was about getting
them some UNIX training. In those days the only way to get UNIX training
was from AT&T. Doing business with AT&T at IBM in those days involved very
high-level approvals - I think it required either the CEO or a direct
report to the CEO. He showed me the form he needed to get approved in
order to take this course, priced at about $1,500 at the time. It required
twelve signatures. When I expressed horror he noted that I shouldn’t worry
because the first six were based in the building we were standing in.
That’s when I began to grasp how big IBM was in those days.
Anyway, about five years later I left IBM. Just before I resigned the
Fishkill folks invited me up to attend a celebratory dinner. Awards were
given to many people involved in the project, including me. I learned that
there was now a department of more than 30 people dedicated to maintaining
the program that had taken me a month to build. Rick Dill noted that one
of the side benefits of the approach that I had taken was the production of
a formal grammar for the TAHOE language.
At one point near the end of the project I had a long reflective
conversation with my Fishkill minder. He spun a metaphor about what I had
done with this project. Roughly speaking, he said, “We were a bunch of
guys cutting down trees by beating on them with stones. We heard that
there was this thing called an axe, and someone sent a guy we thought would
show us how to cut down trees with an axe. Imagine our surprise when he
whipped out a chainsaw.”
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
All, thank you all for all the congratulations! I was going to pen an e-mail
to the list last night but, after a few celebratory glasses of wine, I demurred.
It still feels weird that Usenix chose me for the Flame award, given that
such greats as Doug, Margo, Radia, and others have previously received the
same award. In reality, the award belongs to every TUHS member who has
contributed documents, source code, tape images, anecdotes, knowledge,
and wisdom, and who has given their time and energy to help others
with problems. I've been a steward of a remarkable community over three
decades, and I feel honoured and humbled to receive recognition for it.
Casey told me the names of the people who nominated me. Thank you for
putting my name forward. Getting the e-mail from Casey sure was a surprise :-)
https://www.tuhs.org/Images/flame.jpg
Many thanks for all your support over the years!
Warren
Hello all,
I'm giving a presentation on the AT&T 3B2 at a local makerspace next month, and while I've been preparing the talk I became curious about an aspect that I don't know has been discussed elsewhere.
I'm well aware that the 3B2 was something of a market failure with not much penetration into the wider commercial UNIX space, but I'm very curious to know more about what the reaction was at Bell Labs. I get the impression that when AT&T entered the computer hardware market after the 1984 breakup, there wasn't very much interest in any of it at Bell Labs. Is that true?
Can anyone recall what the general mood was regarding the 3B2 (and the 7300 and the 6300, I suppose!)?
-Seth
--
Seth Morabito
Poulsbo, WA
web(a)loomcom.com