Rob Pike, re the spelling corrector in V8 and later Research
versions of sh:
> That was done by Tom Duff, I believe before he came to Bell Labs. I might
> have brought the idea with me from Toronto.
Very likely, since you left it behind at Caltech as well; it was
in sh on cithep (a hostname meaningless to many but rob will remember)
when I arrived in 1980.
It was in the version of p you left behind there as well.
I can confirm that spname remained in the shell through V10
(it's still in my copy), but it seems to have disappeared from p.
Norman Wilson
Toronto ON
Warner Losh:
Less ugly would be to declare time_t to be unsigned instead of signed...
It would break less code... Making time_t 64 bits also breaks code, even if
you declare you don't care about binary compat since many older apps know
time_t is 32-bits.
===
I remember chatting in 1998 with a consultant who worked with
clients in the financial industry. They still used 32-bit systems
at the time, and had already converted critical programs (I don't
know whether that included parts of libc or they had their own
conversion routines) to make time_t unsigned.
It mattered early to those folks because of 40-year bonds.
That suggests to me that the financial-services world may have
a head start on the 2038 problem, but I fear many others are
still lagging behind. 64 bits will help but not as much for
embedded systems and legacy stuff.
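(For anyone who wants to see the two horizons concretely, here is a small
illustrative C program; it is mine, not from any of the systems discussed,
and assumes you run it on a machine whose time_t is 64 bits so both values fit.)

    #include <stdio.h>
    #include <time.h>

    static void show(const char *label, time_t t)
    {
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("%-30s %s\n", label, buf);
    }

    int main(void)
    {
        show("signed 32-bit time_t ends:",   (time_t)2147483647);   /* 2^31 - 1 */
        show("unsigned 32-bit time_t ends:", (time_t)4294967295U);  /* 2^32 - 1 */
        return 0;
    }

The first prints a date in January 2038, the second one in February 2106;
those extra 68 years are exactly the headroom the unsigned conversion bought.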
We hobbyists will doubtless have fun too, as we already do
(ask Warren) when running the earliest existing UNIX images,
in which times are stored in sixtieths of a second.
Norman Wilson
Toronto ON
I remember the spell check on 'cd' commands from SCO UNIX 3.2.
The 'sh' manual page has:
Spelling checker
When using cd(C) the shell checks spelling. For example, if you change to
a different directory using cd and misspell the directory name, the shell
responds with an alternative spelling of an existing directory. Enter
``y'' and press <Return> (or just press <Return>) to change to the
offered directory. If the offered spelling is incorrect, enter ``n'',
then retype the command line. In this example the sh(C) response is
boldfaced:
$ cd /usr/spol/uucp
cd /usr/spool/uucp?y
ok
Cheers,
uncle rubl
>Date: Mon, 04 Jan 2021 02:08:09 -0700
>From: arnold(a)skeeve.com
>To: m.douglas.mcilroy(a)dartmouth.edu, egbegb2(a)gmail.com
>Cc: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Question
>
>The spelling corrector in the shell rings a vague bell. I think
>it's in the 8th or 9th edition Bourne shell. You should be able to
>find those in the archives.
>
>Geoff Collyer has a modern port of the V9 shell at
>http://www.collyer.net/who/geoff/v9sh.tar.
>
>HTH,
>
>Arnold
> I was a BTL person for 8 years between 1976 and 1984. During
> that time there was a spelling corrector that was better than
> anything I see today. There was a concept of "spelling distance"
> that corrected a whole bunch of stuff that even today cannot be
> corrected.
> Who in that era worked on spelling correction at BTL? I was at
> Columbus BTL (1976-1979) and Whippany BTL (1979-1984).
Peter Nelson made an interface to spell(1) that showed putative errors in
context. I believe it could suggest corrections. I remember the project; I
installed hooks for it in spell(1). I don't remember the date, but it would
probably not have been early enough for you to have used it in Columbus.
If there's a chance that Peter's program is the one you remember
and you'd like to get in touch with him, I can give you his
email address.
Doug
Sandy Fraser was kind enough to share some papers from his archive that give further background to early networking at Bell Labs. One of these is about the “File Store”.
For context I refer to an article that Sandy wrote back in 1975 describing the network setup at Murray Hill at that time:
https://drive.google.com/file/d/1_kg4CEsbGucsU8-jxi0ptfUdsznWcKWm/view?usp=…
In the figure and legend at the bottom of page 52/53, the "File Store" is item 10.
The File Store paper itself is (temporarily) here:
https://drive.google.com/file/d/1AhLmjJcHXFtfoIUlfvl0bzbB0zSPzpSq/view?usp=…
The paper has a very interesting introduction: “Five machines currently use the file store. Three of them use it as if it were a peripheral device not part of their main backing store. In these cases there exists programs to transmit complete files between the user’s machine and the file store."
This was known and Noel Chiappa found the source for the main transfer program “nfs” (this program is also mentioned in Doug McIlroy’s manual compendium):
https://chiselapp.com/user/pnr/repository/Spider/tree?ci=tip
The introduction continues: “In the other two cases the file store is treated as an extension of the user machine’s backing store. Once a user has opened a file his program does reads, writes, and seeks without being aware of the file’s actual location.”
This -- to me at least -- is a new fact and as such it would predate various other projects for a distributed Unix file system (the paper is dated December 1974). Unfortunately, the paper is short on how the integration was achieved.
On one hand the work may have been related to “Peripheral Unix” as developed by Heinz Lycklama (https://www.tuhs.org/Archive/Documentation/TechReports/Heinz_Tech_Memos/TM-…) at the same time -- the memos are dated just a month apart. In essence the approach is to forward system calls over the network (see note 1).
Another possibility is that it worked much like the 1979 “RIDE” system (https://www.computer.org/csdl/proceedings-article/cmpsac/1979/00762533/12Om…). Here a modified C library recognises certain path names and maps these to file server calls.
A third possibility is that it was a precursor to the work on distributed Unix by Luderer et al. in 1980/81 (https://dl.acm.org/doi/abs/10.1145/1067627.806604). Here file system calls are redirected at the kernel level using the concept of a remote/forwarding inode.
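Of the three, the library-level approach is the easiest to sketch. Purely as
an illustration (none of these names, nor the "/fs/" prefix, come from RIDE,
the File Store, or any real source), the idea is roughly:

    #include <stdio.h>
    #include <string.h>

    /* Stand-ins for the two back ends. */
    static int local_open(const char *path, int mode)   /* the ordinary system call */
    {
        printf("local open:  %s (mode %d)\n", path, mode);
        return 3;                                        /* pretend file descriptor */
    }

    static int remote_open(const char *path, int mode)  /* talks to the file server */
    {
        printf("remote open: %s (mode %d)\n", path, mode);
        return 4;                                        /* pretend file descriptor */
    }

    /* The interposed library routine: in a real system this would replace
       the C library's open(), so programs need not know where a file lives. */
    static int my_open(const char *path, int mode)
    {
        if (strncmp(path, "/fs/", 4) == 0)               /* path names the file server? */
            return remote_open(path + 4, mode);
        return local_open(path, mode);
    }

    int main(void)
    {
        my_open("/usr/lib/font", 0);
        my_open("/fs/project/data", 0);
        return 0;
    }

The kernel-level approach of Luderer et al. achieves the same transparency one
layer down, by redirecting operations on a remote inode instead of inspecting
path names in the library.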
The File Store paper mentions that the server is a modified Unix. At first glance it would seem that the modifications are modest, with the file system partly rewritten to account for storage usage, and an automatic backup feature added.
I am much interested in any recollections, insights and materials about these topics.
Many thanks in advance,
Paul
Note 1)
The tech report on the “high speed serial loop” (the Weller loop) has not surfaced yet, but the document for the Glance terminal gives a quick, high level overview on page 3/4:
https://www.tuhs.org/Archive/Documentation/TechReports/Heinz_Tech_Memos/TM-…
The recent 516 documents include a detailed description of how it worked:
https://www.tuhs.org/Archive/Documentation/Other/516-TSS/516-10-11-12-Ring-…
I was a BTL person for 8 years between 1976 and 1984. During that time
there was a spelling corrector that was better than anything I see today.
There was a concept of "spelling distance" that corrected a whole bunch of
stuff that even today cannot be corrected.
Who in that era worked on spelling correction at BTL? I was at Columbus BTL
(1976-1979) and Whippany BTL (1979-1984).
Whoever did that stuff should patent it and sell it. Today there is
nothing like it.
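(For readers who never met the idea, "spelling distance" is just a crude
metric over pairs of names: identical is best, a transposition of two
adjacent letters is nearly as good, one letter wrong, extra or missing is
next, and anything else is hopeless. Here is a from-memory sketch in C in
the spirit of the shell's spname/spdist routines; it is not the original
Bell Labs code.)

    #include <stdio.h>

    /* Crude spelling distance: 0 exact, 1 adjacent transposition,
       2 one char wrong/extra/missing, 3 anything worse. */
    int spdist(const char *s, const char *t)
    {
        while (*s++ == *t)
            if (*t++ == '\0')
                return 0;                        /* exact match */
        if (*--s) {
            if (*t) {
                if (s[1] && t[1] && *s == t[1] && *t == s[1]
                    && spdist(s + 2, t + 2) == 0)
                    return 1;                    /* adjacent letters swapped */
                if (spdist(s + 1, t + 1) == 0)
                    return 2;                    /* one letter replaced */
            }
            if (spdist(s + 1, t) == 0)
                return 2;                        /* extra letter in s */
        }
        if (*t && spdist(s, t + 1) == 0)
            return 2;                            /* letter missing from s */
        return 3;                                /* too different to guess */
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               spdist("spool", "spool"),         /* 0 */
               spdist("spool", "sopol"),         /* 1 */
               spdist("spool", "spol"),          /* 2 */
               spdist("spool", "uucp"));         /* 3 */
        return 0;
    }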
--
Advice is judged by results, not by intentions.
Cicero
Happy new year to all!
Over the holidays I found some time to port the recovered version of
speak(6) to modern Unix systems and make its output compatible with the
espeak system. The resulting sound is barely intelligible. I assume
this is due to the imperfect matching between the speak and espeak
phonemes. The process's details are available at
https://www.spinellis.gr/blog/20210102#tu and the code at
https://github.com/dspinellis/speak.
Diomidis
His autobiography, Go West, Young German! states it is luderer at asu.edu -- but that was published a decade ago.
Paul Ruizendaal (pnr at planet.nl) wrote:
>
> I am looking for some more background on the 1980/81 "S/F-Unix" system.
>
> Gottfried W. R. Luderer, H. Che, J. P. Haggerty, Peter A. Kirslis, W. T. Marshall:
> A Distributed UNIX System Based on a Virtual Circuit Switch. SOSP 1981: 160-168
> I have that paper, and Bill Marshall was kind enough to provide a lot of info. I think that Gottfried Luderer may have some further background, in particular on how this work relates to earlier distributed file system projects at Bell Labs.
>
> Would anybody have a current e-mail address for him?
>
> Paul
I am looking for some more background on the 1980/81 "S/F-Unix" system.
Gottfried W. R. Luderer, H. Che, J. P. Haggerty, Peter A. Kirslis, W. T. Marshall:
A Distributed UNIX System Based on a Virtual Circuit Switch. SOSP 1981: 160-168
I have that paper, and Bill Marshall was kind enough to provide a lot of info. I think that Gottfried Luderer may have some further background, in particular on how this work relates to earlier distributed file system projects at Bell Labs.
Would anybody have a current e-mail address for him?
Paul
> I'll be (G-d willing) 79 then; I hope around, but I also hope not
> overly involved with computers.
From the vantage point of 88, I can attest to the permanence of
computing's grip. I guess the key word is "overly". The only code
I've written in the last couple of weeks is a few lines of PostScript
to touch up my seasonal map/greeting card, the creative part of
which is at www.cs.dartmouth.edu/~doug/2020map.pdf.
Doug
> Bill passed away this summer - you may have seen his epic farewell
> message.
Anyone have a copy of this? I did a Web search, but all I could find was the
Subject line ("public static final void goodbye ()").
holding back the night
with its increasing brilliance
the summer moon
-- Yoshitoshi's death poem
Noel
All, a while back Tom Lyon sent me the following e-mail and I'd been
sitting on my hands, but I've finally placed these two theses in the
Unix Archive at https://www.tuhs.org/Archive/Documentation/Theses/
Thanks Tom!
Cheers, Warren
-------- Forwarded Message --------
Hi, Warren - as you may know, Bill Shannon and Sam Leffler ported UNIX
to the Harris /6 minicomputer at Case Western.
Bill passed away this summer - you may have seen his epic farewell
message. Anyways, that prompted me to get the CWRU librarians to scan
copies of Shannon's and Leffler's theses where they describe the work.
It took a while due to Covid, but I have them now (attached).
There is a copyright disclaimer at the end of each, but it allows for
research purposes, which is all that I can imagine this being used for.
Please give them a home in the archives, and announce as you please.
- Tom
> From: Tyler Adams
> We all know and love the UNIX beard, but I can't find anything on how
> the beards started other than an old photo of Ken and Dennis
I'm not sure the term is Unix-specific. At a fairly early stage, people who
worked on the ARPANET/Internet (when PDP-10's were still the 'usual' host, and
Unix systems were just starting to become common) were jokingly known as
'network grey-beards': the prototypical one being Jon Postel. (Vint Cerf's
beard was too tidy to qualify, IIRC!)
Noel
> Sometimes I wonder what would have happened if A68 had become the medium-level language of Unix
My knowledge of A68 comes from reading the official definition back in
the day. It took effort to see the clarity of the design through the
fog of the description. Until more accessible descriptions came along
(which I admit to not having seen) it would have been a big barrier to
acceptance.
A68 was very much in the air (though not much on the ground) in the
early days of Unix, as was PL/I. Although we had implemented and used
PL/I for Multics, it was never considered for Unix due to its size and
the rise of other attractive languages during the 6-year gestation of
Multics. BCPL had the most influence, particularly in its clever
identity a[i] = i[a] = *(a+i). It was OK to write 2[x], which served
to implement structs like this: field=2; field[x]=3. (Actually the
BCPL indirection operator was not *. Dennis borrowed the more
perspicuous * from the SAP assembler for IBM 700-series machines.)
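(The identity survives unchanged in C, so a small illustration -- mine, not
Dennis's -- may help readers who haven't seen it:)

    #include <stdio.h>

    int main(void)
    {
        int x[4] = { 10, 11, 12, 13 };
        int field = 2;                 /* a "field" is just an offset */

        /* a[i], i[a] and *(a+i) are all the same thing */
        printf("%d %d %d\n", x[2], 2[x], *(x + 2));   /* 12 12 12 */

        field[x] = 3;                  /* same as x[field] = 3 */
        printf("%d\n", x[2]);          /* 3 */
        return 0;
    }

In typeless BCPL and B the same trick needed no declarations at all, which is
what made it a cheap substitute for structs.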
From Algol 68 Dennis borrowed addition operators like +=, though at
first he unwisely reversed their spelling, underestimating the
inherent hazard of a=-b. He rejected A68's automatic dereferencing as
too inflexible. He considered whether semicolons should be separators
as in Algol, or terminators as in PL/I. (Separators are more
convenient for macro substitution, but macros were not in the original
design.) He also considered making statements and expressions mutually
recursive as in A68. My recollection is that his choices were finally
based on a guess about user acceptance--how many radical innovations
would prospective users buy into. Perhaps Ken would have more to say
about this.
I tried to persuade Dennis to provide simultaneous assignments like
a,b = b,a. In the end, the comma operator got hijacked for a partial
realization of embedded statements. (We could still get parallel
assignment by interpreting the existing {a,b,c} syntax as an lvalue
thus: {a,b,c}={b,c,a}.)
Then came Steve Bourne, with real experience in A68. Its influence on
the shell, including the story of do...done, has often been told. It
shows up most vividly in the condition part of if and while
statements.
Doug
This pair of commands exemplifies a weakness in the way Unix evolved.
Although it was the product of a shared vision, it was not a
product-oriented project. Unix commands work well together, but they
don't necessarily work alike.
It would be nice if identifiable families of commands had similar user
interfaces. However, cron and at were written by different
individuals, apparently with somewhat different tastes. Unix folks
were close colleagues, but had no organized design committee.
Time specs in cron and at are markedly different. A more consequential
example is data-field specs (or lack thereof) in sort, join, cut, comm
and uniq. The various specs were recognized as "wildly incongruent" in
a BUG remark. However, there was no impetus for unification. To
paraphrase John Cocke (speaking about Fortran): one must understand
that Unix commands are not a logical language. They are a natural
language--in the sense that they developed by organic evolution, not
"intelligent design".
Doug
Message: 4
Date: Mon, 30 Nov 2020 19:59:18 -0800 (PST)
From: jason-tuhs@shalott.net
To: tuhs@minnie.tuhs.org
Subject: Re: [TUHS] The UNIX Command Language (1976)
> "The UNIX Command Language is the first-ever paper published on the Unix
> shell. It was written by Ken Thompson in 1976."
>
> https://github.com/susam/tucl
Thanks for that.
This reminded me that the Thompson shell used goto for flow control, which
I had forgotten.
Bourne commented on the omission of goto from the Bourne shell, "I
eliminated goto in favour of flow control primitives like if and for.
This was also considered rather radical departure from the existing
practice."
Was this decision contentious at all? Was there a specific reason for
goto's exclusion in the Bourne shell?
Thanks.
-Jason
At the time it may have raised a few eyebrows but I don't recall much discussion about it then. My email tracks at the time don't mention it.
Doug McIlroy or Steve Johnson (or Ken) on this forum might recall differently. At the time scripts were not that complicated and so error recovery to a far off place in the script was not common. As an aside I did persuade Dennis to add "setjmp" and "longjmp" so the shell code itself could recover from some kinds of script errors.
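(For readers who haven't met the pattern, the kind of recovery that enables
looks roughly like the sketch below; it is an illustration only, with names
invented for the example, not the shell's actual code.)

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf errjmp;

    static void error(const char *msg)
    {
        fprintf(stderr, "sh: %s\n", msg);
        longjmp(errjmp, 1);              /* unwind straight back to the main loop */
    }

    static void run_command(const char *line)
    {
        /* parse and execute; on trouble, deep inside, just call error() */
        if (line[0] == '\0')
            error("syntax error");
        printf("ran: %s\n", line);
    }

    int main(void)
    {
        const char *script[] = { "echo hello", "", "echo world" };
        int i;

        for (i = 0; i < 3; i++) {
            if (setjmp(errjmp) == 0)
                run_command(script[i]);  /* an error longjmps back here... */
            /* ...and the loop simply carries on with the next command */
        }
        return 0;
    }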
So I did not have a "religious" aversion to "goto" at the time.
Steve
> Bourne commented on the omission of goto from the Bourne shell
...
> Was this decision contentious at all? Was there a specific reason for goto's exclusion in the Bourne shell?
I don't believe I ever used a shell goto, because I knew
how it worked--maybe even spinning a tape drive, not too
different from running a loop on the IBM CPC. There you
stood in front of the program "read head" (a card reader),
grabbed the "used" cards at the bottom and put them back
in the top feed. Voila, a program loop. The shell goto
differed in that it returned to the beginning by running the
tape backward rather than going around a physically
looped path. And you didn't have to spin the tape by hand.
It also reminds me of George Stibitz's patent on the goto.
The idea there was to stop reading this tape and read
that one instead. The system library was a bank of
tape readers with programs at the ready on tape loops.
(This was in the late 1940s. I saw the machine but never
used it. I did have hands-on experience with CPC cards.)
Doug
Off topic, but too much fun to pass up.
>> You wrote your algorithm in Pascal, debugged it, and then rewrote it in your favourite language (in my case, ALGOL-W).
> Now why didn't Don Knuth think of that for TeX?
I'm glad he didn't. He might have written it in Mix. Knuth once said
he didn't believe in higher-level languages. Of course he knew more
about them than anybody else and was CACM's associate editor for the
subject--like a minister who doesn't believe in God.
Doug
> From: "Theodore Y. Ts'o"
> Having a clean architecture is useful in so far as it reduces
> maintenance overhead and improves reliability.
I would put it differently, hence my aphorism that: "the sign of great
architecture is not how well it does the things it was designed to do, but how
well it does things you never imagined it would be used for".
I suppose you could say that reducing maintenance and improving reliability
are examples of the natural consequences of that, but to me those are limited
special cases of the more general statement. My sense is that systems decline
over time because of what I call 'system cancer': as they are modified to do
more and more (new) things, the changes are not usually very cleanly
integrated, and eventually one winds up with a big pile. (Examples of this
abound; I'm sure we can all think of several.)
Noel
Sorta relevant to both groups...
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron),
was born on this day in 1815; arguably the world's first computer
programmer and a highly independent woman, she saw the potential in
Charles Babbage's new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer,
he not only had a hand in developing Unix but also gave us the ROFF
series.
Who'd've thought that two computer greats would share the same birthday?
-- Dave
> From: Niklas Karlsson
> Just consider Multics, or IBM's "Future System".
Here's a nice irony for you: one of the key people in killing off FS was
reported to me (by someone who should have known) to be Jerry Saltzer (of
Multics fame). That wasn't the only time he did something like that, either;
when MIT leaned on him to take over Athena, the first thing he did was to take
a lot of their ambitious system plans, and ditch them; they fell back to
mostly 'off the shelf' stuff: pretty much vanilla 4.2, etc.
Multics itself has an interesting story, quite different from the popular
image among those who know little (or nothing) of it. The system, as it was
when Honeywell pulled the plug on further generations of new hardware (in the
mid-80's) was very different from the system as originally envisaged by MIT
(in the mid-60's); it had undergone a massive amount of 'experience-based
evolution' during those 20 years.
For instance, the original concept was to have a process per command (like
Unix), but that was dropped early on (because Multics processes were too
'expensive'); they wound up going with doing commands with inter-segment
procedure calls. (Which has the nice benefit that when a command faults, you
can get dropped right into the debugger, with the failed instance there to
look at.) If you read the Organick book on Multics, it describes a much
different system: e.g. in Organick there's a 'linkage segment' (used for
inter-segment pointers, in pure-code segments) per code segment, but in
reality Multics, as distributed, used a single 'combined linkage segment'
(which also contained the stack, also unlike the original design, where the
stack was a separate segment).
There were also numerous places where major sub-systems were re-implemented
from scratch, with major changes (often great simplifications): one major
re-do was the New Storage System, but that one had major new features (based
on operationally-shown needs, like the 4.1/.2 Fast File System), so it's not a
'simplification' case. There's one I read about which was much simpler the second
time it was implemented; I think it was IPC?
Noel
At 09:40 AM 12/9/2020, Clem Cole wrote:
>My own take on this is what I call "Cole's Law": Simple economics always beats sophisticated architecture.
I thought that was finely sliced cabbage with a tart salad dressing.
- John
When I got into Unix in 1976 cron and at were both there.
I got to wondering for no particular reason which came first -- I had
always assumed cron, but ...?
Anyone know?
Last night I stumbled upon this speech by Doug McIlroy at Dijkstra's retirement banquet @ UT Austin where he tells the story of his first encounter with EWD and the origins of programming without goto.... Given that we recently had a discussion on "Goto considered harmful", you all may enjoy it. Make sure you watch the start of part 10 as well as that is where the story ends.
https://youtu.be/5OUPBwrufKA?list=PL328C7EFFC1F41674&t=326
Hoi,
I'm wondering what the name of the B compiler was.
Doug's ``Unix Reader'' lists:
1 2 3 4 5 6 7 8 9
+ + + . . . . . . b compile b program
. . . . . + + + + bc arbitrary-precision arithmetic language
Via Wikipedia I found a scan of the ``Users' Reference to B'',
a technical memorandum by Ken, dated 1972-01-07 (which is between
the releases of the 1st and 2nd Edition).
https://web.archive.org/web/20150317033259/https://www.bell-labs.com/usr/dm…
There on page 25:
10.0 Usage
Currently on UNIX, there is no B command. The B compiler phases
must be executed piecemeal. The first phase turns a B source
program into an intermediate language.
/etc/bc source interm
The next phase turns the intermediate language into assembler
source, at which time the intermediate language can be removed.
/etc/ba interm asource
rm interm
The next phase assembles the assembler source into the object
file a.out. After this the a.out file can be renamed and the
assembler source file can be removed.
as asource
mv a.out object
rm asource
The last phase loads the various object files with the necessary
libraries in the desired order.
ld object /etc/brtl -lb /etc/bilib /etc/brt2
Now a.out contains the completely bound and loaded program and
can be executed.
a.out
A canned sequence of shell commands exists invoked as follows:
sh /usr/b/rc x
It will compile, convert, assemble and load the file x.b into the
executable file a.out.
It lists /etc/bc as a command to convert B source into the intermediate
language, and /etc/ba to convert the intermediate language into
assembler source, but lists no `b' command. The wrapper script is
/usr/b/rc.
Can someone clarify?
I came to this question because I was looking for one letter
commands. I always thought them to be a reserved namespace for the
user ... Any background on that topic is appreciated as well. ;-)
meillo
There's a back story. The paper appears in the proceedings of a
conference held in London in 1973, a few months after the advent of
pipes. While preparing the presentation, Ken was inspired to invent
and install the pipe operator. His talk wouldn't have been nearly as
compelling had it been expressed in the original pipeline syntax (for
which I take the blame).
References to eqn (v5), bc (v6), and ratfor (v7) obviously postdate
the London conference. Ken must have edited--or re-created--the
transcript for the proceedings sometime after v6 (May, 1975).
Bibliographic citations are missing. Can they be resurrected?
Reference 137, about Unix itself, probably refers to a
presentation by Ken and Dennis at SOSP in January 1973. Alas, only an
abstract of the talk appears in the conference proceedings. But the
abstract does contain the potent and often-repeated sentence, "It
offers a number of features seldom found even in larger operating
systems, including ... inter-process IO ..." The talk--in Yorktown
Heights--was memorable, and so was a ride to the same place in Ken's
'vette. (I can't recall whether the two happened on the same
occasion.)
Given that the talk at SOSP preceded the talk in London, and that the
Unix manual was widely distributed by the time (1976) the revised London
talk was printed, the claim that "The Unix command language" was the
first publication of Unix seems hyperbolic. In no way, though, does
this detract from the inherent interest of the paper.
Doug
Hi All,
So, I'm about to get my very own Apple IIe and while it's an incredibly
versatile machine for assembly language and hardware hackery, I'm not
aware of any Unices that run on the machine, natively. Does anybody know
of any from back in the day?
It's got a 65c02 processor and somewhere around 128k of RAM, but it's
also pretty expandable w/7 slots and a huge amount of literature about
how to do stuff w/those slots.
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I'm also forwarding this message that arrived in my mailbox. Is this
**THE** Ken Thompson?
Yes, that's the real Ken. Those who have communicated
with him in the past (and didn't get a direct note as
some of us did) should notice the new e-mail address,
kenbob(a)gmail.com.
Norman Wilson
Toronto ON
Somewhat late to the discussion, but GeckOS may be another curious
contender. You can find more information at http://www.6502.org/users/andre/index.html
j
--
Scientific Computing Service
Centro Nacional de Biotecnología, CSIC.
c/Darwin, 3. 28049 Madrid
+34 91 585 45 05
+34 659 978 577
I'm also forwarding this message that arrived in my mailbox. Is this
**THE** Ken Thompson?
Joachim
-------- Forwarded Message --------
Subject: Re: [TUHS] The UNIX Command Language (1976)
Date: Sun, 29 Nov 2020 21:47:16 -0800
From: Ken Thompson <kenbob(a)gmail.com>
To: Joachim <j(a)tllds.com>
yes. the unix manual was published before that,
but i dont think it was released that early. it had
a manual page on the shell.
this paper might be the first paper ever released
outside bell labs.
On Sun, Nov 29, 2020 at 7:18 PM Joachim via TUHS <tuhs(a)minnie.tuhs.org> wrote:
Apologies if this has already been linked here.
"The UNIX Command Language is the first-ever paper published on the
Unix shell. It was written by Ken Thompson in 1976."
https://github.com/susam/tucl
Joachim
> From: Steve Nickolas
> there's no easy way to do preemptive multitasking without extra
> hardware.
Perhaps you're using some idiosyncratic definition of "preemptive" and
"multitasking", but to me that statement's not accurate.
Let's take the "multitasking" part first: that just means 'two or more
computations can run at the same time, sharing the machine' - and that's not
hard to do, without special hardware, provided there's some way (in the
organization of the software) to save the state of one when the other is
running. Many simple systems do this; e.g. the MOS system that I used on
LSI-11's, BITD.
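(A toy illustration of that point, with names invented for the example rather
than taken from MOS or anything else: each "task" keeps its whole state in an
ordinary struct, and a plain loop interleaves them -- no special hardware
required.)

    #include <stdio.h>

    struct task {
        const char *name;
        int counter;                     /* the task's saved state */
    };

    /* Run one "time slice" of a task: do a little work, then return,
       leaving the task's state parked in its struct. */
    static void step(struct task *t)
    {
        t->counter++;
        printf("%s: step %d\n", t->name, t->counter);
    }

    int main(void)
    {
        struct task tasks[2] = { { "task A", 0 }, { "task B", 0 } };
        int i;

        for (i = 0; i < 6; i++)          /* a trivial round-robin "scheduler" */
            step(&tasks[i % 2]);
        return 0;
    }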
"Preemptive" is a bit trickier, because things have to be organized so that
one 'task' can be temporarily stopped arbitrarily (i.e. without it explicitly
giving up the processor, which is what e.g. MOS did) to let another run. There
does need to be some asynchronous way of inciting the second 'task' to run,
but interrupts (either clock, or device) do that, and pretty much every
machine has those. MINI-UNIX, for example, has preemptive multitasking.
The thing that takes special hardware is _protecting_ one task from a bug in
another - a bug which could trash the first task's (or the system's!)
memory. One has to have memory management of some kind to do that.
> From: Dave Horsfall
> I would start with something like Mini-Unix
MINI-UNIX would be a good place to start if one wanted to bring up a system on
a machine without memory management; there's nothing in the kernel which is
PDP-11 dependent that I can think of (unlike V6, which had a fairly heavy
dependency on the PDP-11 memory management hardware - although one could of
course rip that all out, as MINI-UNIX did).
However, one's still looking at a fair amount of work, both to get rid of any
traces of PDP-11isms (e.g. stack growth direction), and translate the
assembler part (startup, and access to non-C operations). Something like FUZIX
might be an easier option.
Noel
I'm trying to find out why compress(1) uses .Z as filename extension.
My theory is that it was inspired by pack(1), which uses the .z extension.
However, I haven't been able to find any info on why pack(1) uses that
extension. Does anyone here know?
Some searching led me to [1] which is a man page for pack from AUSAM.
It's written by Steve Zucker in 1975, so perhaps the extension is z for
Zucker?
Was Zucker's pack(1) the first, though? This message [2] talks about a
Bell version.
Does anyone here have any information about this?
Cheers,
Hans
1. https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/doc/man/man1/pack.1
2. https://tech-insider.org/unix/research/1984/0319.html
I was making coffee into the cup I've had for decades that talks about
his networking books. I realized I might have something that would
help his wife a bit. I had her email but long since lost it. Anyone
have it?
--lm
Back in mid-January, I posted a note saying:
> TL; DR. I'm trying to find the best possible home for some dead trees. ...
A lot (far too much, IMNSHO!) has happened since then. In any case, I thought folks here might appreciate an update. In brief, Iain Maoileoin offered to pay for shipping a largely unknown amount of technical (mostly computer-related) books to his repurposed missile silo (!) near Inverness, Scotland.
Early this Fall, I packed up 16 cardboard boxes (designated 0-F, of course :-) and the shippers hauled them off. Dunno when they'll arrive, let alone in what condition, but trying to save them from recycling seemed worth the effort. FYI, the total weight was a bit over a ton.
Meanwhile, my spouse and I gave away and/or packed up the rest of our things and drove ourselves and five cats up to Seattle, WA, USA. Somewhere in a shipping container, there is still a cubic foot or so of historical Unix papers from Jim Joyce; when it surfaces, I'll post again about rehoming it.
-r
Well, it's a very rainy day and since COVID is keeping me home I just
fed my 516-TSS notebooks into the scanner. It's about 17MB of stuff.
Not sure what to do with it since I don't have a place to serve it and
since they're scanned images they're too big to post. Here's the list
of documents; email me if you're wanting something in a hurry while the
archive stuff is figured out. Note that the smell of mildew wasn't
preserved in the scanning process.
516-1-516-DOCUMENTATION.pdf
516-3-DDP-516-PRICE-LIST.pdf
516-4-Disk-Layout.pdf
516-5-System-Table-Formats.pdf
516-6-Segment-Format.pdf
516-8-Disk-Hole-Format.pdf
516-7-DMA-Mnemonics.pdf
516-9-Addresses.pdf
516-10-11-12-Ring-Formats.pdf
516-12-Specifications-For-The-Node-Modem-Interface.pdf
516-13-Trac-Character-Strings.pdf
516-14-GMAP-Assembler-for-the-Multi-Programmed-516.pdf
516-15-A-Suggested-Graphic-Display-with-Keyboard-for-Graphic-Terminals.pdf
516-16-516-Assembler-And-Post-Processor-For-Unsegmented-Programs.pdf
516-18-Format-For-Ring-Interrupt.pdf
516-19-Thread-Save-Blocks.pdf
516-20-Card-Reader-Bootstrap-and-Programs.pdf
516-21-Octal-Package.pdf
516-22-A-Repeater-For-The-Node-Modem.pdf
516-23-CLEAR-CORE-CARD.pdf
516-24-SOROBAN-CARD-READER-TEST-PROGRAM.pdf
516-25-IO-Table.pdf
516-26-Disk-DMA-Queue.pdf
516-27-GE-Disc-Files-For-516-Programming.pdf
516-27-Thread-Table.pdf
516-28-IO-Ring_Device-Codes.pdf
516-29-Five-Bit-Character-Codes.pdf
516-30-Text-Editor.pdf
516-31-Relocatable-Segment-Octal-Package.pdf
516-32-ASCII-Character-Mnemonics.pdf
516-34-Display-List-For-Glance.pdf
516-35-Internal-Megacycle-Clock.pdf
516-36-Node-Modem-Interface-For-Computer-Terminals.pdf
516-38-P8SYS.pdf
516-39-Resource-Monitor-Meters.pdf
516-40-SNAP-Time-Sharing-Calculator.pdf
516-41-516-Segment-Assembler.pdf
516-42-Memory-Service-Unit-Format.pdf
516-43-GEBKUP-and-FLOAD.pdf
516-44-FSNAP-Floating-Point-Time-Sharing-Calculator.pdf
516-45-516-316-Assembler-and-Binder.pdf
516-46-CALC-A-Desk-Calculator-Program.pdf
516-47-Remote-Data-Plotting.pdf
516-48-CODING-FOR-GLANCE-G-Graphics.pdf
516-49-516-Segment-Assembler.pdf
516-50-Use-Of-The-516-Segment-Assemblers-Macros-In-Application-Programs.pdf
516-51-FSNAP-Designers-Guide.pdf
516-51-FSNAP-Users-Guide.pdf
516-52-DESK-A-Desk-Calculator.pdf
516-53-FSEOF-Flag-End-Of-File.pdf
516-54-Context-Editing.pdf
516-55-One-Card-Core-Save-Program.pdf
516-56-PRIME-An-Integer-Factoring-Program.pdf
516-57-Format-For-The-516-Node-T-I-U-Spider-Interface.pdf
516-59-Calling-Procedures-For-Math-Routines.pdf
516-59-INITIALIZATION-OF-THE-516-TSS-SYSTEM.pdf
516-60-SORT-SUBR-FOR-SEGMENTED-PROGRAMS.pdf
516-61-516-TSS-SYSTEM-BOLTED-IN-CORE-SUBROUTINES.pdf
516-63-Display-Controller-Glance-G.pdf
516-65-SOME-DIGITAL-FILTER-APPLICATION-PROGRAMS.pdf
516-66-TSS-516-GE-Communication.pdf
516-67-Node-Format-For-PDP-11.pdf
516-68-DFILE-N-A-Program-for-TSS-516.pdf
516-69-GLANCE-G-COMMUNICATION-FORMAT-TSS-516-TO-SCOPE.pdf
516-70-Routines-to-Perform-Character-String-IO-in-a-FSNAP-Program.pdf
516-71-FSNAP-Debugging-Aids.pdf
516-72-Node-Test.pdf
516-73-Node-IO-Software.pdf
516-75-Display-Text-Editor-DTE.pdf
516-76-LOCAL-DATA-PLOTTING.pdf
516-77-GLANCE-PLOTTING-ROUTINES-GPLOT-GLANCE-CHRGEN.pdf
516-77-V2-GLANCE-PLOTTING-ROUTINES-GPLOT-GLANCE-CHRGEN.pdf
516-78-DUMP.pdf
516-79-New-File-Features-in-FSNAP.pdf
516-81-OPTION-CHANGING-IN-GPLOT.pdf
516-86-MODIFICATIONS-TO-201-DATAPHONE-SOFTWARE.pdf
DDP-516-PROGRAMMERS-REFERENCE-CARD.pdf
DDP-516-Instruction-Set-Summary.pdf
Index.pdf
README
> From: "Theodore Y. Ts'o"
> was there anything that had similar functionality which pre-dated Bill
> Joy and termcap in late 70's?
Is your question purely in Unix, or more general?
If the latter, there's the terminal-independent support of video terminals in
ITS; that dates to the mid-1970's (i.e. circa V5 or so). User programs output
device-independent display control codes (I have this memory that they were
called P-Codes, but that could be my memory failing), and the OS translated
them to the appropriate screen-control characters.
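(The scheme is easy to caricature in C: a per-terminal table turns abstract
operations into the real control sequences. The two entries below use the
commonly quoted VT100 and ADM-3A codes purely for illustration; nothing here
is taken from ITS.)

    #include <stdio.h>

    enum { OP_CLEAR, OP_HOME, OP_NCODES };   /* device-independent operations */

    struct termtab {
        const char *name;
        const char *seq[OP_NCODES];          /* what to send for each operation */
    };

    static const struct termtab terms[] = {
        { "vt100", { "\033[2J", "\033[H" } },
        { "adm3a", { "\032",    "\036"   } },
    };

    static void emit(const struct termtab *t, int op)
    {
        fputs(t->seq[op], stdout);           /* translate the abstract op */
    }

    int main(void)
    {
        const struct termtab *t = &terms[0]; /* pretend the terminal type says vt100 */
        emit(t, OP_CLEAR);
        emit(t, OP_HOME);
        return 0;
    }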
One additional hack was that the number of terminal types supported in the OS
was limited; there was however a protocol called SUPDUP which sent (basically)
those device-independent codes over a remote login (originally over NCP) from
the server machine to the client. The User SUPDUP client supported a lot more
terminal types; so people with odd-ball terminals used to log in, SUPDUP
_back_ to their machine, and away they went.
Noel
Hello All,
I know I have asked this before, but I am curious about any new replies or
insight. How did package management start? Were sites keeping track of
packages installed in a flat file that you could grep (as god intended)
somewhere, or were upgrades and additions simply done without significant
announcement? At what point did someone decide, 'Hey, we need to have a
central way to track additional software'?
I know of DEC's setld and SGI's inst in the latter half of the '80s. What
was the mechanism before that?
-Henry
> On Mon, Nov 23, 2020 at 12:28 PM Erik E. Fair <fair-tuhs(a)netbsd.org> wrote:
> The Honeywell DDP-516 was the computer (running specialized software
> written by Bolt, Beranek & Newman (BBN)) which was the initial model of
> the ARPANET Interface Message Processors (IMP).
The IMPs had a lot of custom interface hardware; sui generis serial
interlocked host interfaces (so-called 1822), and also the high-speed modem
interfaces. I think there was also a watchdog timer, IIRC (this is all from
memory, but the ARPANET papers from JCC cover it all).
Noel
These are in Warren's hands now and he'll let us know where their permanent
home ends up being. Since these are pretty much uncirculated unlike the UNIX
documents I wrote a README to go along with them which Heinz reviewed so it's
the best that two aging sets of memories can do. Here it is:
- - -
516-TSS is a little-known but groundbreaking and influential operating system
that was developed at Bell Telephone Laboratories. I came across this system
because Carl Christensen and later Heinz Lycklama were major contributors to
it, and they were also advisors for the Bell Labs Explorer Scout Post at
Murray Hill. I was a member of that post which allowed us to play with
computers on Monday evenings, and 516-TSS was what most of us used. Through
a series of amazingly lucky events, I ended up working as a summer student
for Carl and Heinz and got to contribute to the system. Long before the term
"code spelunking" was coined Carl and Heinz taught us both code and spelunking.
This is not a complete set of 516-TSS documents, it's a couple of notebooks
that I found in a box in the basement. Probably my ancient work-at-home copy.
I don't know enough history to know if it was the first, but 516-TSS was an
early department-level time-sharing system. It was built around a Honeywell
DDP-516. While other time-sharing systems predate 516-TSS, they weren't
systems that one's department could afford. CTSS certainly came earlier,
but it used a monster IBM 7090 mainframe. In round numbers, a 7090 cost
$3,000,000, a DDP-516 cost $50,000.
516-TSS was also a virtual memory system; again not the first but a rarity
in that era. My recollection is that it used the 516's index register as
the base address register, and there was some complicated mucking around
that a program had to do if it needed to use the index register including
disabling interrupts and eventually restoring the register from .PRESB
(present base address), one of those weird things stuck in my memory from
long ago.
I believe that the system's development predated UNIX although I remember
our department getting a PDP-11/45 running UNIX Version 3 in the summer of
1973. This machine was acquired so that Doug Bayer and Heinz Lycklama could
develop the MERT operating system.
The 516 was a testbed for a lot of novel technologies. It had a local area
network called the ring which was later made to work on PDP-11s including
Ken and Dennis's machine up in the attic of building 2. It was also used
to develop the GLANCE graphics terminals. My recollection is that one of
the main drivers behind getting the ring to work on PDP-11s and UNIX was so
that Ken could get a GLANCE-G terminal for playing chess. Sandy Fraser's
Spider network was developed there. It supported a number of novel
applications including Dick Hause's DTE graphics editor; way ahead of its
time. I remember that one GLANCE terminal was fitted with an array of LEDs
and photodiodes to make an early version of a touch screen.
While it wasn't exactly work related, a number of the people in the department
had purchased property up in Vermont for ski cabins. An important use of the
516-TSS system and GLANCE-G terminals was to figure out survey closures. The
property surveys were ancient, of the "from the big rock to the left of the
tree that's no longer there" sorts of things, so figuring out the actual
property lines was an interesting problem.
The 516 also had a wide area network which consisted of picking up the phone
and calling the computer center. It had a monster GE-635 or maybe 645 left
over from the Multics project. It may have been renamed to be a Honeywell
6070 with Honeywell's acquisition of GE's computer business. The computer
center kept department costs down by hoarding all of the really expensive
peripherals. For example, we didn't have a card punch; that was effectively
done via remote job entry. We didn't have a graphics printer either, so when
I was working on GPLOT I'd submit remote jobs to the computer center for
printing. Matter of fact, I don't think that we even had a printer in our
department; we sent stuff up to the computer center for printing. Although,
in those days many terminals used paper. The 516 console was an ASR-33.
There was also the ability to send jobs to the computer center and have it
call back with results. This early approach to a WAN showed up as the tss
command in UNIX.
One of the missions of the department was the development of an all-digital
telephone exchange which is why some of the documents describe programs that
assist with digital filter design. Both Jim Kaiser and Hal Alles were in the
department. One of the side-effects of all this was Hal figuring out how to
use the filter hardware connected to a LSI-11/03 to make sound, followed by
Dave Hagelbarger building a very interesting keyboard for it, culminating in
a visit by Stevie Wonder trailed by a large number of screaming secretaries.
No sexism intended, it was a different world back then. The LSI-11 was one
of the motivations for Heinz to create the LSX operating system.
My recollection is that on Dave's keyboard each key was an antenna, and that
there was strip of ribbon cable underneath where each wire was driven by a
different bit on a binary counter. This allowed the position of each key to
be determined which I think was way ahead of its time. I don't think that
any commercially available keyboards did this at the time, they were all just
on/off. Dave also designed the GLANCE keyboard which spoiled me for life.
I don't remember how he did it, but the keys had a really good feel where once
they got pushed past a certain point they snapped down. I do recall that there
was a small solenoid mounted on the circuit board so that the keys gave a
satisfying click that you could feel in your fingers. Another of Dave's gizmos
was the chess board that he made for Ken. My recollection is that there was a
tuned circuit in the base of each chess piece and an antenna grid in the board
so that the PDP-11 could read the position of each piece.
Some of the success of the 516 system was that other departments used it. I
spent some time working on a 516-based integrated circuit test system where
the test equipment stations were on the ring. Seems really dumb now, it's hard
to believe that there was a time in which a computer cost more than a wafer
stepper.
In addition to his work on 516-TSS, Carl Christensen was one of the people who
interviewed Ken Thompson for a job at the labs and gave a thumbs up.
The 516-TSS documents don't have author names, just initials. Here's who they
are to the best of my recollection.
ADH Dick Hause
CC Carl Christensen
DJB Doug Bayer
DRW Dave Weller
EPR ?
HL Heinz Lycklama
JCS John Schwartzwelder
JES Jon Steinhart
JFK Jim Kaiser
JHC Joe Condon
JVC John Camlet
LIS ?
MAS ?
RFG Rudy Garcia
There is one mysterious document in the collection about a "memory service unit".
I had this filed under "zapper". To the best of my recollection it was the PROM
programmer that we used to burn the microcode PROMs for the GLANCE terminals.
Jon Steinhart, 11/20/2020
While cleaning up a few shelves of old USENIX proceedings, I found a
mysterious manila envelope full of xeroxed copies of all the original
UNIX NEWS newsletters from 1975 thru 1977. It was renamed to ;login:
in 1977 and has continued publication to this day. The envelope also
contained ;login: issues v2n6 thru v3n8 (1977-1978).
I scanned those all in today and put them up on my website, here:
http://www.toad.com/early-usenix-newsletters/
These have not been OCR'd, and many of the pages were rotated by 90
degrees in the original publication, to fit two pages of typewritten
correspondence (or recipient address lists) into one page of newsletter.
Still, in a quick web search I was unable to find copies of these
anywhere else, so I invested a few hours to scan them in and post them
for historical interest. As an example, Sixth Edition (v6) UNIX was
announced in issue number 1.
These are all free to publish nowadays. USENIX was one of the first
technical organizations to establish an Open Access policy for its
publications, a step which distinguishes them from ACM and many academic
publishers who favor revenue for themselves over the progress of
science. (I voted for this policy decades ago when I was a USENIX board
member.) This page, for example, says:
https://www.usenix.org/conference/usenixsecurity20/presentation/schwarz
"USENIX is committed to Open Access to the research presented at our
events. Papers and proceedings are freely available to everyone once
the event begins. Any video, audio, and/or slides that are posted
after the event are also free and open to everyone."
The ;login: archives at USENIX.org are complete from October 1997 to today:
https://www.usenix.org/publications/login
Also, most but not all issues of ;login: from 1983 to 1997 have been
scanned by USENIX and uploaded to the Internet Archive here:
https://archive.org/details/usenix-login?&sort=date
The USENIX Association apparently has paper copies of the stuff I
scanned in today, but they are still trying to locate ;login: issues
from 1979 and parts of 1980 and 1981. In addition, they are backlogged
on scanning in their old materials (including copies of ;login: between
1978/09 and 1983/02). If you have old copies of ;login: that you don't
see visible in these places, please scan them, or offer them to USENIX.
Also, if you have old proceedings of USENIX conferences, there are still
three that the USENIX staff do not have any copy of:
XFree86 Technical Conference
https://www.usenix.org/legacy/publications/library/proceedings/xfree86/
2001-11-08
5th Annual Linux Showcase & Conference
https://www.usenix.org/legacy/publications/library/proceedings/als01/tech.h…
2001-11-08
WORLDS '04
https://www.usenix.org/legacy/events/worlds04/tech/
2004-12-05
If you have any of these three, please let <info(a)usenix.org> know. They
also lack about twenty more for which they have posted the academic
papers, but don't have the covers or front-matter, so if you have other
proceedings from between 1989 and 2004 that you'd be willing to part
with or scan, also let them know. Thanks!
John
A couple of my friends from UC Berkeley were musing on another email
thread. The question from one of them came up: *"I'm teaching the
undergrad OS course this semester ... Mention where ~ comes."*
This comment begets a discussion among the 4 of us about where it showed up in
the UNIX heritage and whether it was taken from somewhere else.
Using the tilde character as a short cut for $HOME was purely a userspace
convention and not part of the nami() kernel routine when it came into
being. We know that it was supported by Mike Lesk in UUCP and by Bill Joy
in cshell. The former was first widely released as part of Seventh Edition
but was working on V6 before that inside of BTL. Joy's cshell came out as
part of 2BSD (which was V7 based), but he had released "ashell" before that
and included it in the original BSD (*a.k.a.* 1BSD) which was for V6 [what
I don't remember is whether it supported the convention, and I cannot easily
un-ar(1) the cont.a files in the 1BSD tar image in Warren's archives].
In our exchange, someone suggested that Joy might have picked it
up because the HOME key was part of the tilde key on the ADM3A, which was
popular at UCB [*i.e.* the reason hjkl are the movement keys on vi is that
they were embossed on the tops of those keys on the ADM3A]. It also was noted
that the ASR-33 lacks a ~ key on its keyboard. But Lesk definitely needed
something to represent a remote user's home directory because each system
was different, so he was forced to use something.
It was also noted that there was plenty of cross-pollination going on as
students and researchers moved from site to site, so it could have been BTL
to UCB, vice-versa, or some other path altogether.
So two questions for this august body are:
1. How did the ~-as-$HOME convention come to UNIX?
2. Did UNIX create the idiom, or did an earlier system such as
CTSS, TENEX, ITS, MTS, TSS, or the like support it first?
Fun read and it's totally wild that people's emotional comfort with *text*
drives a lot of their love or hate of unix.
http://theody.net/elements.html
Tyler
Ken's (?) Plan 9 assemblers are well known for their idiosyncratic
syntax, which favours identical behaviour across platforms over
looking familiar to people used to conventional assemblers. While I am
aware of Rob's talk [1] on the basic design ideas and have read both
the Plan 9 [2] and Go [3] assembler manuals, many aspects of the
design (such as the strange way to specify static data) are
unclear and seem poorly documented.
Is there some document or other piece of information I can read on
the history of these assemblers? Or maybe someone recalls more bits
about these details?
Yours,
Robert Clausecker
[1]: https://talks.golang.org/2016/asm.slide
[2]: https://9p.io/sys/doc/asm.html
[3]: https://golang.org/doc/asm
--
() ascii ribbon campaign - for an 8-bit clean world
/\ - against html email - against proprietary attachments
This memory just came back to me. There was a UNIX distribution
(PWB/UNIX?) that had a program called 1.
It printed this quaint bit of propaganda.
One Bell System. It works.
This was fine until one day I’m at work in a big bull pen computer room
when Bernie, one of my co-workers, shouts.
“What’s all this Bell System crud in the editor?”
My reaction is, “Well, it’s all Bell System crud.” I walk over to his
terminal and find he is typing 1 repeatedly at the shell prompt and
getting the above message. (This was back in the old /bin/ed days where
1 got you to the top of the file). I had to point out he wasn’t in
the editor.
Later that day, the program was changed to say:
You’re not in the editor, Bernie.
This I think made it into one of the BRL releases and occasionally got
inquiries as to who Bernie is.
Warner Losh and I have been discussing the early history of John
Lions' "A commentary on the Sixth Edition UNIX Operating System".
I've been hosting Warren Toomey's version (with some correction of
scan errors) at http://www.lemis.com/grog/Documentation/Lions/ for
some years now, and my understanding had been that the book hadn't
been published, just photocopied, until Warren posted it on
alt.folklore.computers in 1994. But now it seems that the "book" had
been published by UNSW when Lions held the course, and only later was
the license revoked. Does anybody have any insights? What
restrictions were there on its distribution? What was the format?
Was it a real book, or just bound notes?
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
There also exists a latter-day AT&T version of the Lions
book. White cover with deathstar logo; standard US letter-
sized paper, perfect-bound along the short edge. Two
volumes: one for the source code, one for the commentary.
I have a copy, and I bet Andrew does too: as I remember,
he got a handful of them from Judy Macor (who used to
handle licensing requests--I remember speaking to her
on the phone once in my pre-Labs days) when she was
clearing old stuff out of her office, and I nabbed one.
Norman Wilson
Toronto ON
Has anyone gotten Xinu running in SIMH? It seems like it should be straightforward to run the "support" utilities under BSD on an emulated VAX and then run Xinu itself on an emulated LSI-11. If anyone's done so, I'd be interested to learn what all you had to do to set it up and get it working.
-- Chris
-- who needs to figure out SIMH config file syntax to match the board set he wants to simulate
Does anyone have an email for Eric Schmidt? My vibe is he is super
private so contact me off list if you need to know why I am looking.
I overlapped with him at Sun and talked to him a few times but I doubt
he remembers me.
--lm
I've made a number of 'improvements' to the LSI-11 version of MINI-UNIX.
(I'm starting to be fairly impressed with MINI-UNIX; for people who have a
hardware PDP-11 with no memory management, it's a very capable system; most
of V6, and very good source compatibility.)
First, with help from feedback from Paul Riley, I've improved the "Running
MINI-UNIX on the LSI-11" page:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/Mini.html
It should be pretty usable at this point, but more feedback on further
improvements gratefully accepted! (Hint, hint :-)
In code changes, I have a new version of mch.s:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/mch.s
The main improvements are a tiny prs() and prn(), to allow systems to leave
out prf.c to save space, but still be able to print messages (rather than
simply dying silently, as is MINI-UNIX's wont). The prs() also saves and
restores the console's CSR, and prints with console interrupts off (to prevent
spurious interrupts).
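(The real routine is PDP-11 assembler in mch.s; purely for illustration, a
C-level sketch of the same idea, using the standard DL11 console register
addresses rather than anything taken from that file, looks like this.)

    #define XCSR ((volatile unsigned *)0177564)   /* console transmit status register */
    #define XBUF ((volatile unsigned *)0177566)   /* console transmit data buffer     */
    #define XRDY 0200                             /* transmitter-ready bit            */
    #define XIE  0100                             /* transmit-interrupt-enable bit    */

    void prs(const char *s)
    {
        unsigned saved = *XCSR;       /* remember the caller's CSR state   */

        *XCSR = saved & ~XIE;         /* print with console interrupts off */
        while (*s) {
            while ((*XCSR & XRDY) == 0)
                ;                     /* poll until the transmitter is idle */
            *XBUF = *s++;
        }
        *XCSR = saved;                /* restore the CSR on the way out     */
    }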
An idea from Milo Velimirovic (use the top of the stack!) resulted in minor
improvements in two places where there wasn't a register free to use
MFPS/MTPS.
Also, I have a working RL driver for MINI-UNIX now (I was able to attach a V6
filesystem to RL0 and then could do "icheck /dev/rl0" and it worked); I'll be
up-loading that, and adding directions for using it, 'soon'. (It pretty much
just worked; pulled out the XMem bits, and the raw I/O calls, and it worked
right off.)
To make an RL the root filesystem, I need to tweak a few more things; the
parameters ROOTDEV, etc - crucially, including SWPLO and NSWAP - are currently
set in param.h, so you'd have to recompile the OS to switch disk types. I'm
going to put them back as externals in conf.c, the way they are in V6; that
way you'll only need an 'rlconf.c' to switch roots. (I'm not sure why they
were moved; it only saves one word each to make them #define's.)
Noel
[ Warning: you need to be an OF to understand the references ]
Time to re-post this... And trust me, an IBM 3090 was really big iron in
those days.
I don't recall the author, but I found it on the 'net.
-----
VAXen, my children, just don't belong some places. In my business, I am
frequently called by small sites and startups having VAX problems. So when a
friend of mine in an Extremely Large Financial Institution (ELFI) called me one
day to ask for help, I was intrigued because this outfit is a really major VAX
user - they have several large herds of VAXen - and plenty of sharp VAXherds to
take care of them.
So I went to see what sort of an ELFI mess they had gotten into. It seems they
had shoved a small 750 with two RA60s running a single application, PC style,
into a data center with two IBM 3090s and just about all the rest of the disk
drives in the world. The computer room was so big it had three street
addresses. The operators had only IBM experience and, to quote my friend, they
were having "a little trouble adjusting to the VAX", were a bit hostile towards
it and probably needed some help with system management. Hmmm, hostility...
Sigh.
Well, I thought it was pretty ridiculous for an outfit with all that VAX muscle
elsewhere to isolate a dinky old 750 in their Big Blue Country, and said so
bluntly. But my friend patiently explained that although small, it was an
"extremely sensitive and confidential application." It seems that the 750 had
originally been properly clustered with the rest of a herd and in the care of
one of their best VAXherds. But the trouble started when the Chief User went
to visit his computer and its VAXherd.
He came away visibly disturbed and immediately complained to the ELFI's
Director of Data Processing that, "There are some very strange people in there
with the computers." Now since this user person was the Comptroller of this
Extremely Large Financial Institution, the 750 had been promptly hustled over
to the IBM data center which the Comptroller said, "was a more suitable place."
The people there wore shirts and ties and didn't wear head bands or cowboy
hats.
So my friend introduced me to the Comptroller, who turned out to be five feet
tall, 85 and a former gnome of Zurich. He had a young apprentice gnome who was
about 65. The two gnomes interviewed me in whispers for about an hour before
they decided my modes of dress and speech were suitable for managing their
system and I got the assignment.
There was some confusion, understandably, when I explained that I would
immediately establish a procedure for nightly backups. The senior gnome seemed
to think I was going to put the computer in reverse, but the apprentice's son
had an IBM PC and he quickly whispered that "backup" meant making a copy of a
program borrowed from a friend and why was I doing that? Sigh.
I was shortly introduced to the manager of the IBM data center, who greeted me
with joy and anything but hostility. And the operators really weren't hostile
- it just seemed that way. It's like the driver of a Mack 18 wheeler, with a
condo behind the cab, who was doing 75 when he ran over a moped doing its best
to get away at 45. He explained sadly, "I really warn't mad at mopeds but to
keep from runnin' over that'n, I'da had to slow down or change lanes!"
Now the only operation they had figured out how to do on the 750 was reboot it.
This was their universal cure for any and all problems. After all it works on a
PC, why not a VAX? Was there a difference? Sigh.
But I smiled and said, "No sweat, I'll train you. The first command you learn
is HELP" and proceeded to type it in on the console terminal. So the data
center manager, the shift supervisor and the eight day-operators watched the
LA100 buzz out the usual introductory text. When it finished they turned to me
with expectant faces and I said in an avuncular manner, "This is your most
important command!"
The shift supervisor stepped forward and studied the text for about a minute.
He then turned with a very puzzled expression on his face and asked, "What do
you use it for?" Sigh.
Well, I tried everything. I trained and I put the doc set on shelves by the
750 and I wrote a special 40 page doc set and then a four page doc set. I
designed all kinds of command files to make complex operations into simple
foreign commands and I taped a list of these simplified commands to the top of
the VAX. The most successful move was adding my home phone number.
The cheat sheets taped on the top of the CPU cabinet needed continual
maintenance, however. It seems the VAX was in the quietest part of the data
center, over behind the scratch tape racks. The operators ate lunch on the CPU
cabinet and the sheets quickly became coated with pizza drippings, etc.
But still the most used solution to hangups was a reboot and I gradually got
things organized so that during the day when the gnomes were using the system,
the operators didn't have to touch it. This smoothed things out a lot.
Meanwhile, the data center was getting new TV security cameras, a halon gas
fire extinguisher system and an immortal power source. The data center manager
apologized because the VAX had not been foreseen in the plan and so could not
be connected to immortal power. The VAX and I felt a little rejected but I
made sure that booting on power recovery was working right. At least it would
get going again quickly when power came back.
Anyway, as a consolation prize, the data center manager said he would have one
of the security cameras adjusted to cover the VAX. I thought to myself,
"Great, now we can have 24 hour video tapes of the operators eating Chinese
takeout on the CPU." I resolved to get a piece of plastic to cover the cheat
sheets.
One day, the apprentice gnome called to whisper that the senior was going to
give an extremely important demonstration. Now I must explain that what the
750 was really doing was holding our National Debt. The Reagan administration
had decided to privatize it and had quietly put it out for bid. My Extremely
Large Financial Institution had won the bid for it and was, as ELFIs are wont
to do, making an absolute bundle on the float.
On Monday the Comptroller was going to demonstrate to the board of directors
how he could move a trillion dollars from Switzerland to the Bahamas. The
apprentice whispered, "Would you please look in on our computer? I'm sure
everything will be fine, sir, but we will feel better if you are present. I'm
sure you understand?" I did.
Monday morning, I got there about five hours before the scheduled demo to check
things over. Everything was cool. I was chatting with the shift supervisor
and about to go upstairs to the Comptroller's office. Suddenly there was a
power failure.
The emergency lighting came on and the immortal power system took over the load
of the IBM 3090s. They continued smoothly, but of course the VAX, still on
city power, died. Everyone smiled and the dead 750 was no big deal because it
was 7 AM and gnomes don't work before 10 AM. I began worrying about whether I
could beg some immortal power from the data center manager in case this was a
long outage.
Immortal power in this system comes from storage batteries for the first five
minutes of an outage. Promptly at one minute into the outage we hear the gas
turbine powered generator in the sub-basement under us automatically start up
getting ready to take the load on the fifth minute. We all beam at each other.
At two minutes into the outage we hear the whine of the backup gas turbine
generator starting. The 3090s and all those disk drives are doing just fine.
Business as usual. The VAX is dead as a door nail but what the hell.
At precisely five minutes into the outage, just as the gas turbine is taking
the load, city power comes back on and the immortal power source commits
suicide. Actually it was a double murder and suicide because it took both
3090s with it.
So now the whole data center was dead, sort of. The fire alarm system had its
own battery backup and was still alive. The lead acid storage batteries of the
immortal power system had been discharging at a furious rate keeping all those
big blue boxes running and there was a significant amount of sulfuric acid
vapor. Nothing actually caught fire but the smoke detectors were convinced it
had.
The fire alarm klaxon went off and the siren warning of imminent halon gas
release was screaming. We started to panic but the data center manager shouted
over the din, "Don't worry, the halon system failed its acceptance test last
week. It's disabled and nothing will happen."
He was half right, the primary halon system indeed failed to discharge. But the
secondary halon system observed that the primary had conked and instantly did
its duty, which was to deal with Dire Disasters. It had twice the capacity and
six times the discharge rate.
Now the ear splitting gas discharge under the raised floor was so massive and
fast, it blew about half of the floor tiles up out of their framework. It came
up through the floor into a communications rack and blew the cover panels off,
decking an operator. Looking out across that vast computer room, we could see
the air shimmering as the halon mixed with it.
We stampeded for exits to the dying whine of 175 IBM disks. As I was escaping
I glanced back at the VAX, on city power, and noticed the usual flickering of
the unit select light on its system disk indicating it was happily rebooting.
Twelve firemen with air tanks and axes invaded. There were frantic phone calls
to the local IBM Field Service office because both the live and backup 3090s
were down. About twenty minutes later, seventeen IBM CEs arrived with dozens
of boxes and, so help me, a barrel. It seems they knew what to expect when an
immortal power source commits murder.
In the midst of absolute pandemonium, I crept off to the gnome office and
logged on. After extensive checking it was clear that everything was just fine
with the VAX and I began to calm down. I called the data center manager's
office to tell him the good news. His secretary answered with, "He isn't
expected to be available for some time. May I take a message?" I left a
slightly smug note to the effect that, unlike some other computers, the VAX was
intact and functioning normally.
Several hours later, the gnome was whispering his way into a demonstration of
how to flick a trillion dollars from country 2 to country 5. He was just
coming to the tricky part, where the money had been withdrawn from Switzerland
but not yet deposited in the Bahamas. He was proceeding very slowly and the
directors were spellbound. I decided I had better check up on the data center.
Most of the floor tiles were back in place. IBM had resurrected one of the
3090s and was running tests. What looked like a bucket brigade was working on
the other one. The communication rack was still naked and a fireman was
standing guard over the immortal power corpse. Life was returning to normal,
but the Big Blue Country crew was still pretty shaky.
Smiling proudly, I headed back toward the triumphant VAX behind the tape racks
where one of the operators was eating a plump jelly bun on the 750 CPU. He saw
me coming, turned pale and screamed to the shift supervisor, "Oh my God, we
forgot about the VAX!" Then, before I could open my mouth, he rebooted it. It
was Monday, 19-Oct-1987. VAXen, my children, just don't belong some places.
-- Dave
Just for completeness: I have one OpenBSD 6.7 system at
home and look after several more at work, and yes, OpenBSD
still has raw disk devices.
Norman Wilson
Toronto ON
> I noticed a place where I used R0 as a temp ... and was being bashed.
> So I fixed it, and now the shell starts OK, but attempting to do any
> command (e.g. "echo foo"), things hang
Well, I had 'fixed' it; it turned out my 'fix' had a bug. :-( (The code I had
to change for the /03 there was pushing the old PS, and that and the temp I
had to push got intermangled.)
Anyway, with that fixed, the /03 Mini-Unix works now. The old user command
binaries seem to work OK on the /03; not that I've tried them all, but the ones
I have tried (including the C compiler) all worked. They should all work
(there's nothing in user code that's model-dependent). I have tweaked the
shell (to allow 'cd') and init (to get rid of the annoying long rights
message), but that's all.
The latest, greatest mch.s is uploaded:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/mch.s
Although a couple of files (bio.c, clock.c, slp.c, and tty.c) had minor
changes (to remove direct references to the PS; they now call getps() and
putps() for that), and main.c has minor changes to work when there's no KW11
or switch register, really the only file with significant changes for the /03
is mch.s. It's the only one where the object code is model-dependent; all the
other changed ones use the same object code for all CPU models.
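As a sketch of what that change amounts to (the getps()/putps() names are the
ones mentioned above; the before/after lines here are illustrative, not a
literal diff from those files):

	int ps;

	/* old: V6 C reaches the PS word directly at 0177776 */
	ps = PS->integ;
	PS->integ = ps | 0340;		/* raise processor priority to 7 */

	/* new: the same spots call two small mch.s helpers instead,
	 * which use MFPS/MTPS underneath on the /03 */
	ps = getps();
	putps(ps | 0340);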
I'll put up a Web page with details, links to sources, etc, 'soon'.
A couple of other things.
Mini-Unix has removed 'raw' devices (not sure why, probably seemed un-needed),
so other disk drivers (e.g. the RL11 driver) aren't straight drop-ins. Minimal
tweaks needed, though; just remove the read and write routines, I think.
If there was a real use for 'raw' devices, they could probably be added back,
but physio() would have to be modified (simplified). Not sure if anything else
special would be needed; the process can't be swapped while raw I/O is
ongoing, and so on Mini-Unix no other process could run. Probably OK, but
needs to be checked.
I recommend that everyone trying to run Mini-Unix on a hardware /03 invest in
a KEF11 chip. (There are a few on eBait.) That way, you can leave the EIS
emulator out of the build, which will save some space, and allow more room for
device drivers. I added kernel printf() into the build, to help with
debugging, but it can be removed to save space.
You can change the system to use more room for the kernel (see the Mini-Unix
docs), but that involves re-linking _every single user command_, including the
shell and init. Not recommended.
Noel
can stanley lieber contact me please (regarding an 8th edition manual)?
i can’t contact you via sl(a)stanleylieber.com (because of an SPF error),
so perhaps via another email address.
thanks
> From: Paul Riley
> port Mini-Unix will create some demand for device drivers on the /03
> systems, so may be worthwhile to implement RAW device.
I'm not sure I understand this ("worthwhile to implement RAW device"); let me
explain what the removal of 'raw' I/O devices from MINI-UNIX really means, and
then ask what it is that you are after.
Early Unix (no idea about later ones) supports two classes of devices: 'block'
devices, which can be used to hold file-systems, and 'character' devices,
which cannot. (I seem to recall a paper, perhaps from the Unix BSTJ issue,
which talks about them in some detail.)
The former are those where the underlying physical device has restricted
semantics; they are block-addressable mass storage devices. All access to them
is via the system's block buffer pool, so reads/writes by the user of
arbitrary size and location are possible. 'Character' devices are everything
else.
'Raw' devices are an interface to the same underlying devices as 'block'
devices, but one which does not go through the system's buffer pool; I/O
operations to them perform DMA
directly to the user process' memory. They are 'character' devices, in the
Unix device taxonomy. The only semantics available are those supported by the
hardware - e.g. seeks only to block boundaries.
So when I say that MINI-UNIX doesn't have 'raw' devices, it just means that
e.g. the RK disk controller device _only_ talks to the Unix block buffer
system; if a user program wants to look at the disk contents, it has to go
via that system.
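(As a concrete illustration, the raw entry points in a V6 block driver are
tiny; this is roughly the shape of the V6 RK driver's raw read/write routines,
shown here only as a sketch:

rkread(dev)
{
	physio(rkstrategy, &rrkbuf, dev, B_READ);	/* DMA straight to/from the
							 * user's memory, bypassing
							 * the block buffer cache */
}

rkwrite(dev)
{
	physio(rkstrategy, &rrkbuf, dev, B_WRITE);
}

Removing the raw interface from a driver basically means deleting these two
routines and the buffer header (rrkbuf) they use.)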
So, with that in hand, what exactly is the need you foresee for raw devices
in MINI-UNIX?
Also, I've started to work on getting the V6 RL driver to work in MINI-UNIX;
it should have been easy (just delete the character device interface), but
for some reason it didn't work when I tried it. I'll look at it in more
detail 'soon'.
Noel
> From: Jay Logue
> Are your other changes available anywhere?
Yeah, they're all up-loaded, and linked to from my 'Running MINI-UNIX on the
LSI-11' page:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/Mini.html
It's mostly done, but I'll probably continue to tweak it a bit.
If anyone notices any errors, or has questions that aren't answered there, etc.,
please let me know.
> Also, I was wondering if it might be useful to have a github repo with
> these changes. I'd be happy to help set this up.
Feel free to go for it, if people think it would be useful. (I'm not sure
there is a 'basic' MINI-UNIX repo to start on.)
Noel
How did globbing come about in unix?
Related, as regexes were already well known because of qed/ed, why wasn't a
subset of regular expressions used instead?
Tyler
> From: Paul Riley
> Always darkest before the dawn.
Well, we'll see.
I found _that_ one; process 1 managed to exec() init, do a fork(), and then
the child process exec()'d the shell - then that apparently died, and the code
in init falls through into:
termall();
execl(init, minus, 0);
when the single-user shell exits, so then init restarted; rinse, repeat.
So a _lot_ of the code in mch.s seemed to be working correctly; system calls
(2 exec()'s, and a fork()) worked, process switching worked, device
interrupts (for the disk and console tty) seemed to be working. Not sure
what's left!
So I looked at mch.s again, to see what else _was_ there, and I noticed a
place where I used R0 as a temp - with the MFPS/MTPS thing to get to the PS,
instructions like:
BIS #340, PS
need to change to:
MFPS R0
BIS #340, R0
MTPS R0
and R0 was being used to hold an arg (in pword:), and was being bashed.
So I fixed it, and now the shell starts OK, but attempting to do any command
(e.g. "echo foo"), things hang (the shell doesn't fork). If I type the
interrupt character, the shell exits, and init restarts.
Oh well, hopefully this one won't be too painful to work out. The system's
mostly working, which I think will really help.
Noel
> Now to see if 'cc' works on an '-11/40'
Yeah, the C compiler works fine on a /40; so the SOB bug (or perhaps some
_other_ one I haven't found yet) must have affected it too.
The thing that's odd about that bug is that SOB works _sometimes_ on an /05;
the 'rkmx' on the distro tape will boot (which if the SOB _never_ worked, it
wouldn't). So there must be a data dependency somehow. John Wilson says the
SOB is very optimized, so maybe there's a bug.
> then back to the /03.
Well, I tried the /03 version, and it doesn't work; /etc/init continually
restarts.
The thing is that _every_ file except mch.s is identical between the '05'
version (which runs fine on the /40; above), and the /03 version. So the bug
must be in mch.s - unless there's somehow an /03 dependency somewhere else I
missed. I looked through init.c, didn't see anything.
Here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/mch.s
is the mch.s source, if anyone wants to look through it and see if they see
anything. It's conditionalized for the /03; there's a very simple header file
(here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/Mini/03mch.s
for the 03 version) to set the flags (so one doesn't have to edit the main file
to change the flag settings).
I had a careful look at mch.s, before I started, checking all the added
conditionalized code; found one that could have been an issue (I was using a
temporary register, r0, that had needed data in it), but the rest all looked
OK. I have some ideas on how to proceed on working out what's going on, but
I'm done for today, gotta do other stuff.
Noel
> I suspect there must be an issue with the -11/05 emulation in Ersatz-11
There is. The C compiler on MiniUnix emits SOB instructions. The -11/05 doesn't
implement SOB; however, the instruction emulator in MiniUnix (emul.s) is
prepared to emulate it. (So MiniUnix should work fine on real -11/05's.)
However, when set to an -11/05, Ersatz-11 treats SOBs as NOPs; they fall
through without any effect. Without a trap, they can't be emulated either.
This caused the problem with namei() failing (namei() calls bcopy(), which has
a SOB in it), and probably caused the problem I was seeing with the C
compiler, but I'm too burned out to check right now. Tomorrow.
Noel
> I just realized that the _first_ entry, #'9', is actually #0 _in the
> directory_; u_count counts _down_, whereas the code looks through dir
> entries going _up_.
Fixed that (kept my own index of which entry it was working on, and
calculated the name location in the buffer), and got:
chk 0 15 2 '..'
chk 1 14 18 '.'
chk 2 13 34 'bin'
chk 3 12 50 'dev'
chk 4 11 66 'etc'
etc... and it blew right past the 'etc' entry, to the end of the
directory. WTF?
> Still don't understand why I can't print the dir entries out of the u
> area, though.
Now that my brain has turned on, I'll bet that's not _my_ bug, I'll bet it's
_the_ bug! If the directory entry in u.u_dent.u_name is all 0's, _of course_
the match is going to fail.
Just for grins, I set the CPU type in Ersatz-11 to "40 EIS NOMMU" and... it
booted up fine! I suspect there must be an issue with the -11/05 emulation in
Ersatz-11, since MiniUnix worked fine on _real_ -11/05's.
Now to see if 'cc' works on an '-11/40' - then back to the /03.
Noel
> it looks at the following entries:
> Anyway, after printing the entry for 5, it goes to 'done'
Uh, I just realized that the _first_ entry, #'9', is actually #0 _in the
directory_; u_count counts _down_, whereas the code looks through dir entries
going _up_.
Still don't understand why I can't print the dir entries out of the u area,
though.
Noel
> From: jay-tuhs9915
> Are there any notes you can share on how to get to the point you're at?
Well, there are three areas where the /03 version needs work, over the /05:
- No LTC clock register
- No switch register
- PS access only via MFPS/MTPS instructions
For the first two, the needed changes are identical to the ones detailed here:
http://gunkies.org/wiki/Running_UNIX_V6_on_an_-11/23
These have all been tested on the /23. Rather than anyone make the exact same
changes independently, I can put up the modified versions of the MiniUnix
files for them (low.s, main.c and param.h).
For the third, I have an mch.s with a conditional assembly flag that _should_
do it all; like I said, there are also really minor edits to bio.c, clock.c,
slp.c, and tty.c. Again, I can upload the mch.s which is already done.
I haven't been able to _confirm_ that these work, but it should be mostly good;
the changes are pretty straight-forward.
> The code is doing a string comparison between the name in the current
> directory entry (u_dent) and the current pathname component (in
> u_dbuf). The expression in brackets is the relative distance between the
> two name fields within the u struct.
Yeah, I'd worked that out (the immediately preceding comment in the code -
"String compare the directory entry and the current component" - indicates
what it's doing), and as my "the term inside the []'s seems to be an offset
.. into the copy of the current directory entry" indicates, I'd worked out how
it did that. I was still puzzled by some other aspects of the code; I just
included that to give a flavour of the code.
> In what way does it fail? Is it simply that namei() doesn't find the
> file its looking for?
Right. It's looking for 'etc' in the root directory (only one block), and
it looks at the following entries:
9 146 '05mx'
8 130 'usr'
7 114 'tmp'
6 98 'mnt'
5 82 'lib'
(I put a printf() in the loop; I've added prf.c to the load so I can do
that. The numbers are the index, u.u_count - although it's already been
decremented at that point, so it will be '0' when doing the last entry - and
location of that entry in the directory, given next to it. For some reason, I
can't get the entry to print from u.u_dent.u_name, so I'm printing it straight
from the block buffer, bp->b_addr[]. I can print the _inode number_,
u.u_dent.u_ino as a string, but not the dir entry. Weird.)
Anyway, after printing the entry for 5, it goes to 'done', with u_error
containing '2'. I can't see how it could do that.
I'm using printf() because I'm too lazy to figure out how to build a kernel
with a debugger like DDT included. (We never did that when we were working
on V6 at MIT BITD; ISR we mostly just used printf() back then, too.)
Noel
>> Then on to trying to find out why MiniUnix crashes whenever I try and do
>> anything significant.
> I decided I wasn't up to tackling that, so instead I did all the edits
> to produce an LSI-11 version of MX. ... I need to go
> back and put conditional assembly flags in mch.s so there's only one
> source file for both kinds of system. Doesn't boot, though.
So this has turned into a big swamp.
I went back and did the conditionals, and I can turn off the -11/03 flag and
produce the identical binary to the original mch.o file. -11/05 systems built
with that still won't boot, though!
So I had made some minor changes elsewhere in the system; e.g. a few files
(bio.c, clock.c, slp.c, and tty.c) refer to the PS explicitly (a no-no on the
-11/03) so I changed them all to call getps() and putps(), and added /05
versions of those to mch.s; so I backed them out, and re-built the system
using the original binaries of those; _still_ won't boot.
I then tried the original 'rkmx', and that _does_ boot; so there's no
mysterious damage to the file system. But now I'm deeply puzzled, since the
new system (which won't boot) should be basically identical (OK, not
bit-for-bit identical, but close).
So then I started trying to see why the new /05 system won't boot; the exec()
call in process 1 that starts /etc/init seems to be failing. Digging into
that, the call to namei() (in sys1$exec()) seems to be failing? Huh? The
file-system is OK (see above)?
So I'm trying to work out how that is happening. Which is non-trivial;
namei() is pretty convoluted. I can deal with the fact that there are two
nested loops using goto's (not the best form, but I can grok it), but then I
run into things like this:
for(cp = &u.u_dbuf[0]; cp < &u.u_dbuf[DIRSIZ]; cp++)
if(*cp != cp[u.u_dent.u_name - u.u_dbuf])
Check out that second line! (And Heinz didn't touch it; this copy is from the
V6 source.) I'm not sure I 100% grok it, but I think I get roughly what it's
doing: 'cp' seems to be a (moving) pointer into the filename being matched,
and the term inside the []'s seems to be an offset from there into the copy of
the current directory entry in the 'u' structure. (Which is a constant, it
doesn't need to be recomputed each time around the loop, though.) It seems to
check most of the (wrong) directory entries OK, but then inexplicably (to me)
fails.
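For what it's worth, here is the same test written out long-hand (a sketch
only, to show what I think that line is doing; not a proposed change to the
code):

	register char *cp;
	int off;

	off = u.u_dent.u_name - u.u_dbuf;	/* constant: distance from the pathname-
						 * component buffer to the name field of
						 * the directory-entry copy in the u area */
	for (cp = &u.u_dbuf[0]; cp < &u.u_dbuf[DIRSIZ]; cp++)
		if (*cp != cp[off])		/* i.e. *cp != u.u_dent.u_name[cp - u.u_dbuf] */
			goto eloop;		/* mismatch: on to the next entry */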
At this point I'm getting a bad feeling that there could be a sim issue; that
could also explain the problem I'm seeing with the crashes, when trying to run
'cc' under the as-distributed -11/05 version.
I'm not a SIMH user, though (I'm an Ersatz-11 person); is there anyone who is,
who'd like to play with MiniUnix with me?
Noel
Hi All.
I know this is off topic, but this list is full of the people who
can help me with this.
I'd like to nominate Chet Ramey for the USENIX lifetime achievement
award (yes, he's aware). I need 2-4 letters of support for this;
anyone who agrees that he deserves it should send me such a letter,
please (PDF, I guess) and I'll send in the whole package. See the
details below.
The deadline is October 19; please send a letter sooner rather
than later.
Let's start by sending me "yes, I'll write a letter" and then if
there are lots, I'll pick four.
PLEASE reply directly to me; no need to inundate TUHS with mail
on this.
Thanks,
Arnold
> Date: Wed, 30 Sep 2020 16:00:19 -0700 (PDT)
> From: Casey Henderson <casey.henderson(a)usenix.org>
> To: Arnold Robbins <arnold(a)skeeve.com>
> Subject: Call for Nominations for the USENIX Lifetime Achievement Award
>
> Dear Arnold:
>
> USENIX offers several awards that recognize public service and technical
> excellence in the fields, including the USENIX Lifetime Achievement Award
> ("The Flame"): https://www.usenix.org/about/flame
>
> The Flame recognizes and celebrates singular contributions to the USENIX
> community of both intellectual achievement and service that are not
> recognized in any other forum. Please consider nominating a deserving
> colleague!
>
> We invite you to submit nominations for USENIX's awards at any
> time. However, to help us offer the award this year, we strongly encourage
> nominations for The Flame by October 19.
>
> A nomination should include:
> *Name and contact information of the person making the nomination
> *Name(s) and contact information of the nominee(s)
> *A citation, approximately 100 words long
> *A statement, at most one page long, on why the candidate(s) should receive the award
> *Between two and four supporting letters, no longer than one page each
>
> Please submit your nominations to the Awards Committee via awards(a)usenix.org.
>
> Best Wishes,
> Casey Henderson
> Executive Director
> USENIX Association
> At some point, I'll produce a 'MiniUnix ld' on vanilla V6, so I can
> build MiniUnix versions of applications there; the first will be the
> shell, so I don't have to keep typing 'chdir' instead of 'cd'! :-)
OK, that was pretty smooth. I now have (on the main V6 system) a linker in
V6 binary form that outputs MX binary files, so I can do things like:
mld sh.o /mnt/lib/lib[ca].a
to produce a new shell (which worked fine). (I think to build 'mld' all I
had to do was 'cc ld.c', in usr/sys/source on the MX disk.)
This whole 'futz with MX by mounting the MX disk under V6' thing works really
well.
> Then on to trying to find out why MiniUnix crashes whenever I try and do
> anything significant.
I decided I wasn't up to tackling that, so instead I did all the edits to
produce an LSI-11 version of MX. Doesn't boot, though; tries to do a panic, I
think. I'm too burned out to keep going, I will continue tomorrow morning (US
East Coast time).
Once I get it running, before I make it available for download, I need to go
back and put conditional assembly flags in mch.s so there's only one source
file for both kinds of system; I had originally planned on doing that, but I
was in such a 'code attack' mode I forgot all about it.
Noel
> From: Paul Riley
> So mounting Mini-Unix on a running V6 system I guess allows you to run
> Mini-Unix user mode binaries stuff
Ah, no. They are all link-loaded to run at TOPSYS (060000), so won't run on
V6 native.
> Or do you plan to recompile on a stable system?
Well, I need to find out what the problem is, first.
Still, notable progress: using my 'mount the Mini-Unix RK pack on a V6 system'
hack (which woked fine; the native V6 'icheck' and 'dcheck' work on that
pack), I was able to sucessfully compile a few tweaked system modules (to get
my usual line-editing chars, and turn off that irritating lower-case mode),
and then build an OS load which could sucessfully boot. So good progress
there. A couple of things I learned:
- MiniUnix tools use the 'new' archive format, so the V6 vanilla 'ar' doesn't
grok MiniUnix archives (e.g. lib1/lib2). I have a 'nar', which I found on the
'Shoppa disks', which can deal with them. (We don't have source for it, but
I'll bet the MIT PWB1 system has that; I'll get to that eventually. There's also
an 'ar.c' on the MiniUnix disk; between the two, we'll be able to reconstitute
source for 'nar'.)
- Also, the vanilla V6 linker, 'ld', _also_ doesn't understand new archives;
so the shell file to build a new system, 'shld':
ld -a -x low.o conf.o lib1 lib2
blows out because it doesn't grok the libraries. Also, the '-a' flag, which
says 'link starting at 0, not TOPSYS', doesn't exist in the V6 'ld'.
I got around all that by unpacking lib1 (using 'nar', above) - it only contains
two files - and then listing the files to link directly:
ld -x low.o conf.o syso emulo dev/kl.o dev/devstart.o dev/rk.o
The vanilla V6 linker of course produces an output linked at 0 without
the -a flag.
At some point, I'll produce a 'MiniUnix ld' on vanilla V6, so I can build
MiniUnix versions of applications there; the first will be the shell, so I
don't have to keep typing 'chdir' instead of 'cd'! :-)
Then on to trying to find out why MiniUnix crashes whenever I try and do
anything significant.
Noel
> I think before I try debugging it directly, I'll try one of the other
> Mini-Unix repositories; the one I've been using (from Bob Eager's site)
> may have some bit-rot.
Well, foo, the one from TUHS has the same symptom: random re-starts. Looks
like I'm going to have to actually debug this.
I guess the first step is to work out how the re-start is happening; I suspect
it's not a trap (I'll check, but I think Mini-Unix catches them all). My best
guess is a jump to 0 (somehow); if so, that should be easy to catch. Then the
next thing is to try and work out where/how/why that is happening.
Following a suggestion of Warner Losh, I think there's a good idea on how to
make progress, which is to mount the Mini-unix pack under V6 (running on a
simulator); that's rock-solid - and the V6 tool-chain can be used to build
Mini-Unix binaries.
And here I thought it was going to be easy to convert Mini-unix to run on
an -11/03. Well, it still might - if I can get Mini-Unix to run reliably!
Noel
David C. Brock posted on Twitter links to the Bell Labs January 1968
Organizational Directory scans. The Research, Communications Sciences
Division is available at
https://drive.google.com/file/d/171jywFyIDyyWUMX4jYl3Czblqe5VGX2q/view
In it the Computing Science Research Center appears on tab 13, page 15
(PDF page 6). Most of the names are very familiar to members of this
list; some are even posting here.
Diomidis - https://www.spinellis.gr
> It is definite, though, that Q22 memory won't work with an LSI-11/2
> (M7270) ... I'll try an LSI-11 (M7264) tomorrow, make sure it's the
> same; it and the LSI-11/2 are similar enough that it should, but it'd be
> good to confirm it.
Yes, it too won't work with Q22 memory (tried it, no go - ODT won't even start).
> Then back to trying to work out why Mini-Unix is so fragile for me.
I tried some changes in the simulator set-up, to see if that would fix my
issue; no luck.
It's quite problematic: if I boot Mini-Unix on a clean copy of the RK pack, cd
to /usr/sys/dev, cp kl.c to nkl.c, and 'cc -c nkl.c', Mini-Unix reliably
restarts (trashing the disk in the process). (If I omit the 'cp', I can 'cc -c
kl.c' 3 times, and Mini-Unix restarts on the third.) Something's seriously
wrong.
I think before I try debugging it directly, I'll try one of the other
Mini-Unix repositories; the one I've been using (from Bob Eager's site) may
have some bit-rot.
> From: Paul Riley <paul(a)rileyriot.com>
> I'm checking with Peter Schranz about whether or not his RLV12/RL02
> boards can run on the '03.
Dave Bridgham and I discussed whether the QSIC would work with an -11/03, and
that analysis should apply equally to the RLV12. Our conclusion was that it
should; here's our reasoning:
The device registers on the board should work fine on either Q18 or
Q22. That's because when going to the I/O page, the CPU asserts a special bus
line, BBS7 (that says 'this cycle is to the I/O page'). The device doesn't
look at the higher address lines when that's on, just BDAL00-12; so if the
LSI-11 is messing with some of BDAL18-21, it shouldn't matter.
For DMA cycles from the device to memory, since the CPU requires Q18 memory
to work, the device too should be able to read/write to Q18 memory. At least,
that's our theory.
I guess all this PDP-11 hardware detail isn't really on-topic for this list; I
should move it to Classic Computers, or something. Sorry all; it's
intermingled with early Unix stuff, and it was easier to keep it all in one
place.
Noel
> From: Paul Riley
> Can you clarify something for me regarding memory? I understand the
> bottom area of memory in a Unix system is for the Kernel and it's stuff,
> and that the top 8kB is set aside for device I/O
Well, technically, the top 8KB of _address space_, not of memory - it's mostly
used for device registers, etc. Here:
http://gunkies.org/wiki/Unix_V6_kernel_memory_layout
is a bit more detail on how the memory is laid out.
> The LSI-11 board has 4kW of RAM on it, and I have already a 16KW
> board. If I want to further expand the RAM, and say I buy another 16kW
> board, that makes an arithmetic sum of 32kW for the boards, making 36kW
> total. Can the 4kW of on-board RAM be disabled, and only the 32kW on
> the boards be used?
Yeah, if you look at LSI-11 documentation, there are jumpers that allow
configuration of the on-board memory. The details depend on the etch revision; for my F
revision, jumper W11 (at the top, towards the handle edge, in the middle of
that edge; just below the W1/W2 jumper pair) should be out to disable the
on-board memory.
Or you could configure the two 32KB boards to be at 020000 and 0120000; there
will be 72KB of memory total on the QBUS, but the LSI-11 CPU (no memory
management) will only be able to 'see' the bottom 56KB.
> Is it ok for the installed RAM to overlap the 8kW at the high memory
> area?
Yeah, what the CPU sees as the I/O page (at 0160000-0177776 in its address
space) is actually at 0760000-0777776 on the QBUS (on a Q18 QBUS); the CPU
automagically translates the 0160000-0177776 addresses up. On a PDP-11
with memory management, the MMU has to be set up to do that. E.g. in V6,
in:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V6/usr/sys/conf/m40.s
there is the following code:
/ initialize io segment
mov $IO,(r0)+
mov $77406,(r1)+ / rw 4k
to set the I/O page in kernel address space to point to the I/O page on the
bus.
Noel
> From: Pete Turnbull
> The SRUNL signal that was mentioned isn't likely to cause a problem
That was just a guess on my part as to the exact cause. It is definite,
though, that Q22 memory won't work with an LSI-11/2 (M7270); I just tried it.
I didn't touch any of the other boards; just swapped the LSI-11/2 for a
KDF11-A (M8186), everything worked fine; put the LSI-11/2 back, dead again.
I'll try an LSI-11 (M7264) tomorrow, make sure it's the same; it and the
LSI-11/2 are similar enough that it should, but it'd be good to confirm it.
Then back to trying to work out why Mini-Unix is so fragile for me. Has
anyone else ever worked with Mini-Unix, and if so, any tips?
Noel
In a brief discussion with a coworker today the question of formatting shell scripts came up.
I believed that in the past the preferred format (if there ever were any such thing) was
if [ test ]
then
    statements
else
    statements
fi
I can find nothing specific to back this up. More appropriate for COFF maybe would
be a discussion of what format is better
if [ test ]; then
    statements
else
    statements
fi
or the above.
No intention to start any kind of flame war about which is better, just want to see
if there is any historical option for one over the other.
David
if test; then
    stuff
and
if test
then
    stuff
are functionally equivalent. I wouldn't say one or the
other `is preferred.' I use the former because I think
it's a little more readable because more compact. But
it's really a matter of style, like whether you write
if (test) {
    (multi-statement block)
or
if (test)
{
    (multi-statement block)
I have a stronger opinion about those who use overly-
cryptic constructions like
test && {
    shell commands
}
because it means exactly the same thing as
if test; then
    shell commands
but is more obscure to read. But again it's a question
of style, not of dogma.
As an aside, I think one excuse that is sometimes used
for that sort of construct is when it's
test || {
    commands
}
because Bourne's original shell had no not operator.
For a long time after shell functions appeared, I would
add this function to any of my shell scripts that needed
it:
not() {
    if "$@"; then
        return 1
    else
        return 0
    fi
}
so I could say
if not test; then
    commands
fi
Modern Bourne-shell descendants have a built-in ! operator:
if ! test; then
    commands
fi
I'm not keen on most of what has been stuffed into bash and
ksh and the like, but ! is a real improvement. I believe
POSIX mandates it, and I think they're right.
Norman Wilson
Toronto ON
> From: Paul Riley
> I also have a DEC 256KB board, but I doubt I could use it on the
> '03.
Yes, DEC 256KB boards are what's called Q22, and those don't seem to work with
LSI-11's; not even CPU ODT works. I just tried a 256KB MSV11-L with an LSI-11,
and it definitely doesn't work; the MSV11-P is definitely a Q22 board, and so
probably also won't work.
What the Q22 means is that early in the lifetime of the QBUS, it only had 18
address lines - so-called Q18. (Technically the LSI-11 used only 16 address lines,
so it's actually Q16.) DEC later snarfed some of the 'unused' pins, and
made them QDAL18-21. So boards that use those pins for QDAL18-21 are 'Q22'
boards.
My theory on what the problem is is that the LSI-11 uses some of those pins
for other things - I think the 'run' light, IIRC. So that confuses Q22 memory.
If one tries to use one with an LSI-11, the machine is totally dead - not even
ODT. It doesn't do any harm, though; unplug the Q22 memory, and plug in a Q18
card like an MSV11-D, and it'll be fine.
If you need memory for the LSI-11, MSV11-D boards are pretty common on eBait,
for not much. They tend to be flaky, though; sometimes they come back to life
if you leave them sit for a bit after you plug them in.
> I believe the [memory] board is non-DEC.
Well, if it's Q22 it won't work either. Both that and the DEC board should
work in the /23, though. (If you have the part number on the memory chips, a
little arithmetic should give you the board size. 256K and up are generally
Q22; if you have a manual for that card it might say.)
I'm still working with Mini-Unix; it's very fragile. When I got it running,
the first thing I tried to do was change the line editing characters (since
my normal ones are burned into ROM). Alas, in stock V6, DEL is hard-wired to
be 'interrupt process', so I can't just 'stty [mumble]', I have to rebuild the
kernel to change that. Not a problem, necessarily - but I edited tty.h and
said 'cc -c tty.c', and it crashed and re-started - and roached the disk. So
I'm still trying to make progress. I might have mis-configured the simulator,
I'll see.
Noel
> Turing save us from 1-complement machines!
Users of Whirlwind II were warned about the quirks of -0.
We were not warned, though, about integer divide with remainder 0.
The result of dividing 6 by 3, for example, was binary 1.111111... -
a valid answer when carried to infinity. Unfortunately, the
"fractional" part was dropped.
Most people used Whirlwind for floating-point computation, and
blithely dismissed printed answers like 1.999999 as "roundoff
errors".
Doug
> So that confuses Q22 memory. If one tries to use one with an LSI-11,
> the machine is totally dead - not even ODT.
Oh, that's another LSI-11 'feature' (only discovered after someone sent in a
help request on CCalk for a dead LSI-11): if there's no working memory at 0,
ODT won't start/run. So if the Q22 memory is confused, the whole machine is dead.
Noel
> From: Paul Riley
> I have two RLV-12/RL02 emulator boards I had made from Peter Schranz's
> design (5volts.ch). They take an SD card
Ah, you're all set, then - doubly so, in fact. Not only do you have reliable
mass storage, but you should be able to put the Unix filesystem on an SD card,
to get the bits into the machine.
I'm not familiar with that board, but it sounds pretty good; QBUS<->SD. I
don't know how that board uses the SD card, in terms of where it keeps the
RL02 image, but if you can find that out, SD<->USB3 adaptors are cheap and
plentiful, and it shouldn't be too hard to load the disk image into it using
one of them. (For the QSIC, I found a 'dd' for Winsdoze and used that to write
the disk image onto the SD card.)
> I don't have any PROMs other than what would be on the '03 or '23+
> boards now.
Not a problem: if you hook up the -11's console to another computer, you
can download a bootstrap into it over the serial line, using the -11's ODT.
(There's a page here:
http://gunkies.org/wiki/Running_an_LSI-11_from_Unix_V6
which talks briefly about how to do that. Things like PDP11GUI can do it too,
I think.) I don't use an RK bootstrap in ROM to boot from the emulated RK11 on
the QSIC; I just load in a short RK bootstrap. I don't know of one lying
around for the RL11, but one would be trivial to whip up.
Speaking of booting, I have Mini-Unix booting under an -11/05 simulator
(Ersatz-11); I used the RK image from here:
http://www.tavi.co.uk/unixhistory/mini-unix/munixrks.zip
and it just started right up. So that's the big hurdle; been busy with other
stuff, but I'll work on getting it to boot on an '03 'soon'.
You probably want to do the same; having it running under a simulator will
make it easy to build new OS images, e.g. for a system with RL02's. Build the
new system, name it 'rlmx', copy the simulator disk image into the SD card,
and away you go.
Oh, I recently realized how to make a bit more room on an -11/03: most
DEC small QBUS memory cards allow you to use half the 'I/O page' for memory,
if you need it. I.e. instead of having 56KB of memory, and 8 KB of
address space for device registers (a lot more than is really needed), the
memory can be configured to be 60KB of memory.
It can be a bit of a hassle to use it; to have more room for the OS (for more
drivers, or disk buffers, or whatever), some pieces of Mini-Unix need to be
recompiled, to move up the address where user processes are loaded. Larger
user processes are the same thing; they aren't automatically enabled when
there's more memory, you have to change a config file, recompile some things,
and build a new system.
What kind of memory card(s) do you have for the -11/03?
Noel
When someone mentioned that they'd ported V6 to the 11/23, I recalled that
I did the same thing (well, V6 + the bits of AUSAM that I liked + the bits
of V7 that I could shoe-horn in), and went looking for the article that I
could've sworn I'd published, using "pdfgrep 23 AUUGN*" in my TUHS mirror.
And yes, I recall some hardware peculiarity which had to be worked around,
but I've forgotten the details (which is why I went looking).
I didn't find it (is there an index of articles anywhere?), but I did find
some peculiar typos, and I was wondering whether they were a result of
Google's (destructive) scanning, or were in the originals.
Here's a quick sample:
AUUGN-V04.5.pdf: tailored for smaller. PDP11s (such as the 11/23 or 11/34) in an
A period after "smaller".
AUUGN-V04.6.pdf: Unfortunately. the clever Code comes unstuck as the LSI-II/23 doesn't’t
The phrase "Unfortunately. the clever Code" looks wrong.
AUUGN-V04.6.pdf:LSI-II/23 was changing the value of r! if the V-bit gets set. It seemed
"r!" should be "r1" (a possible typo, as they are the same key)..
AUUGN-V04.6.pdf: bio23 (662 2668) Elec. Eng., UNSW,
This is a weirdie; "bio23" (one of my clients) was never a part of Elec
Eng (they were their own school), so I suspect a mistake here; it's
possible, however, that they were in the same building.
AUUGN-V04.6.pdf: PDP 11/23 + FPU. RK05, RL02, DRIIb
I believe that "DR11b" should be "DR11B".
AUUGN-V05.1.pdf: PDP 11/23 - System III (Ausam)., 256k, 2xR102, 16 lines
AUSAM under System III, on a mere 11/23? I very much doubt it... Also,
"R102" should probably be "RL02".
AUUGN-V05.1.pdf: PDP 11/23, Q bus + qniverter, RK05, Pertec dual RK05, DEC dual cassette,
I suspect that "qniverter" was a typo on the part of the author.
As a bit of a Unix historian it would be a shame if those AUUGN scans were
less than accurate; I no longer have my hard copies (lost in a house
move), so perhaps someone could check their copies?
Thanks.
-- Dave
> From: Warner Losh
> I'm pretty sure PDP-10 wasn't 1's compliment / was 2's compliment..
Just to confirm, I pulled out my PDP-10 Hardware Reference Manual; Vol I - CPU
(EK-10/20-HR-001), and it does indeed say (pg. 1-12): "The fixed-point
arithmetic instructions use 2's complement representations to do binary
arithmetic." Selah.
Noel
> From: John Cowan
> if you are not messing with the kernel or drivers, I find apout to be
> delightful.
Pretty much all of what little I do with V6 anymore is kernel hacking! :-)
Noel
> From: Paul Riley
> On my physical '03 I have twin Sykes floppy drives. I note that in the
> LSX archives there is a Sykes driver, so I can adapt that I guess.
Yes, here:
https://www.tuhs.org/cgi-bin/utree.pl?file=LSX/sys/sykfd.c
It looks like it should be a straight drop-in, to run it on Mini-Unix. Not
sure if your controller is the exact same model, though? Is there any
documentation on yours? (I haven't done any searching.)
If you want to boot from it, you'll need to write a bootstrap for it; I poked
around, but didn't see one. (Not sure how they booted machines with one, back
in the day; maybe it wasn't the only drive, and they booted off something
else.) You can probably modify the RX one:
https://www.tuhs.org/cgi-bin/utree.pl?file=LSX/src/rxboot.s
https://www.tuhs.org/cgi-bin/utree.pl?file=LSX/src/rxboot2.s
Note that this is a 2-stage bootstrap, apparently as a result of the small
hardware block size on the RX.
And of course there's still the issue of 'how to get bits onto it'. Can
floppies for it be written on some other kind of machine? If so, someone on
the Classic Computers list:
http://www.classiccmp.org/mailman/listinfo/cctalk
may be able to help you write those, or an RL02 pack.
You should start by getting some experience building V6 OS loads (Mini-Unix
will be _very_ similar); use a simulator. I have a lengthy tutorial here:
http://www.chiappa.net/~jnc/tech/V6Unix.html
It's in terms of Ersatz-11, which I prefer because it has that nice DOS device,
which makes it easy to get files into the Unix (so I can use my normal editor on
the host machine). However, I gather most people prefer SIMH; there is a tutorial
here:
https://gunkies.org/wiki/Running_Unix_v6_in_SIMH
(I didn't write it; I know nothing of SIMH) for that option.
How do people using SIMH get files into a Unix running on one? Larry Allen
just wrote a PDP-11 simulator in Rust, and he's thinking about adding a
paper-tape reader (connectable to a file), so that if he installs the stock
V6 PTR driver, he can just do 'cat /dev/ptr > myfile'; sort of like how
VM/370 used the virtual card reader.
Noel
> From: Paul Ruizendaal
> I'd love to have a look at that and compare and contrast the
> approaches.
OK; the system is somewhat disorganized, and stuff is in directories here,
there and everywhere in it (much is in the home dirs of some of the people who
worked on some pieces), so it will take me a fair amount of work to curate it
all into an accessible form, but I have put _some_ of it up here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/net/
If you want to slurp up the whole thing:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/net/net.tar
I should explain that the kernel only contains inbound de-multiplexing, and
not much else; the TCP is in the user process, along with the
application. (Great for V6 on a small machine...) Most of what
documentation I could find is in the 'doc' folder.
There were at least two different TCP's in use on that system - maybe three. I
have currently only included one (gotta do some more work to get the rest; for
Server Telnet, and User/Server FTP), along with a couple of applications
which use it: SMTP, Finger, and User Telnet.
The kernel code is mostly there, but there are some minor dribs and drabs of
changes/additions elsewhere in the kernel which I'll have to sort out in the
future. The main thing that's not there is the Pronet driver; not very useful!
There's also an ICMP daemon (and 'ping'), IIRC, but that's something else I'll
have to find.
Any questions, or stuff you'd really like to see, let me know.
Noel
We've had three prospects respond (who knew???), so unless they all drop out, I think we're set.
-r
> A while back, there was some discussion of A/UX. We have an Apple Quadra which was able to boot and run A/UX a few decades ago. If someone wants to pay for shipping from the San Francisco Bay Area, this charming little machine could be theirs. Please respond off-list, to avoid annoying the neighbors...
>
> -r
>
> P.S. We're packing and moving, so a prompt response is critical.
A while back, there was some discussion of A/UX. We have an Apple Quadra which was able to boot and run A/UX a few decades ago. If someone wants to pay for shipping from the San Francisco Bay Area, this charming little machine could be theirs. Please respond off-list, to avoid annoying the neighbors...
-r
P.S. We're packing and moving, so a prompt response is critical.
> Would also be great if supported RL02 drives. ;)
_Support_ for RL02's will not be a problem (but there are probably some issues
- see below). I have an RL11 driver for V6, which will be easy to include in a
system build. There is also an RL V6 bootstrap (which lives in block 0 - it
loads the OS from a V6 filesystem); how to load it off the RL pack on actual
hardware, I'm not sure; do you have some PROM device that has an RL bootstrap
in it?
Or do you have some other drive which is going to be your boot device? If not,
how are you going to get the bits onto an RL pack? This was a bit of an issue
for Fritz Mueller with his -11/45 (with an RK05 drive); he finally wound up
having to load it over a serial line, which took several hours. He used
something called PDP11GUI, and you're in luck, that does support RL02's.
Also, I'm not sure if you've had any experience with an old removable-pack
drive. If not, you have to be very careful with them; if you have a head
crash, the heads are now unobtainium, so a head crash will turn the drive into
junk. (Which is a big part of why Dave Bridgham and I are doing the QSIC RK11
emulator...) The packs need to be absolutely clean; a number of people have
experience with them, so you should probably get some lessons from them before trying
to use it.
> I'd imagine the V6 TTY driver would support boards with multiple serial
> ports. Guess that's what's needed for multi user access.
The TTY code in V6 consists of two levels of driver. The bottom layer is a
driver which is specific to the particular type of card one's using; DL-11,
DZ-11, etc. If the card supports multiple lines, that driver will too. Then
there's a layer above that, tty.c, which the low-level driver uses to interface
to the OS; the user talks to that. That layer is multi-line.
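As a sketch of the receive side (loosely modeled on the V6 KL11 driver; KLADDR
here is an assumed helper for getting at the line's receive-buffer register,
and details like error checking are omitted):

klrint(dev)
{
	register int c;
	register struct tty *tp;

	tp = &kl11[dev.d_minor];	/* per-line tty structure */
	c = KLADDR(tp)->rbuf;		/* pull the character from the hardware */
	ttyinput(c, tp);		/* hand it to the common tty.c layer, which
					 * does the echoing, line editing and queueing */
}

Everything above ttyinput() is shared by all the line drivers, which is why
supporting a multi-line card is mostly a matter of writing that small bottom
layer.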
Noel
> From: John Foust
> Mark Riordan has a copy and some docs...
> http://www.60bits.net/msu/mycomp/terak/termedia.htm
Alas, the only thing online there is a list of which floppies he had, not any
actual content (other than one document). I've sent him an email (at the
address given on that site), we'll see if I get a reply. He might not have a
way to read those floppies...
If not, no biggie; in asking I more wanted to make sure I wasn't wasting my
time, I don't really need him to come through. Having done the /40->/23 move,
I'm sure the /05->/03 move won't present any major problems.
BTW, looking around for a copy of a document he mentioned, I found this:
http://www.tavi.co.uk/unixhistory/mini-unix.html
so between them and the stuff at TUHS, I think we have everything we'd need.
Noel
Hello,
Perhaps this has been discussed here before, but I
couldn't find a definitive answer as to the origin of
"Charlie Root".
https://www.geeklan.co.uk/?p=2457 includes links to
some of the /etc/passwd files from 4.1cBSD, 4.2BSD,
and 2.9BSD, where we see root changing from being "The
Man" to "Charlie &".
Speculations on the internet about Charlie Root the
baseball player are easy enough to find, but no
confirmation or official origin story.
So I thought I'd ask here: who can (authoritatively)
shed light on how we ended up with Charlie Root?
-Jan
I wrote:
>There was one for the Terak (an 11/03) in May 1979 but I've never found a copy.
I take it back... Mark Riordan has a copy and some docs...
http://www.60bits.net/msu/mycomp/terak/termedia.htm
- John
> If you want multiple users on an -11/03, Mini-Unix would be an option;
> it doesn't support the -11/03 'out of the box', but looking at it, it
> shouldn't be too hard. (Heinz mentioned that it had been done before.)
On thinking about it, I might do the -11/03 port of Mini-Unix for the hack
value; it looks like it should be a quick project (a couple of hours, much of
which would be getting Mini-Unix set up; I'd use a simulator, my QBUS RK11
emulator is broken at the moment).
I think it should mostly just be some fairly straight-forward changes to
mch.s; I think all the C code would be fine. (Unless there's a 'PS->integ' or
something hiding somewhere.) Also a few odds and ends, like a software console
switch register (been there, done that).
That would make the full power of Mini-Unix available to people with -11/03's;
those are still fairly common, and reasonably cheap. (Unlike -11/05's.) It's a
considerably more capable system than LSX: e.g. the tty driver is the full V6
one, and supports an arbitrary number of devices.
So my question is: had anyone else already done this (I don't want to waste
time replicating already-done work)? Also, would anyone have a use for it if I
did it? If so, I'll put it up on a Web page when I'm done. (No, I _don't_ use
GitHub, and have zero interest in learning how. I'd rather spend my remaining
un-committed neurons improving my ability to read feudal Japanese.)
Noel
Doug McIlroy:
To put it more strongly. this is not a legal C source file.
char *s = NULL;
But this is.
char *s = 0;
Clem Cole:
67)The macro NULL is defined in <stddef.h> (and other headers) as a null
pointer constant; see 7.19.
====
$ cat null.c
char *s = NULL;
$ cat zero.c
char *s = 0;
$
zero.c is a legal C program. null.c is not. Create
files exactly as shown and compile them if you don't
believe me.
Prepend `#include <stddef.h>' (or <stdlib.h> or <stdio.h>)
to null.c and it becomes legal, but I think that's Doug's
point: you need an include file.
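That is, with the header it compiles cleanly (a hedged illustration; any of
the three headers named above will do):

#include <stddef.h>	/* supplies the definition of NULL */

char *s = NULL;		/* now a legal C file */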
Personally I prefer to use NULL instead of 0 when spelling
out a null pointer, because I think it's clearer:
if ((buf = malloc(SIZE)) == NULL)
        error("dammit andrew");
though I am willing to omit it when there's no confusion
about = vs ==:
if (*p)
        dammit(*p, "andrew");
But that's just a question of style, and Doug's is fine too.
The language does not require the compiler to pre-define
NULL or to recognize it as a keyword; you have to include
an appropriate standard header file.
Norman Wilson
Toronto ON (not 0N nor NULLN)
> From: Paul Riley <pdr0663(a)icloud.com>
>> the tty driver:
>>
>> https://minnie.tuhs.org//cgi-bin/utree.pl?file=LSX/sys/tv.c
>>
>> only supports a single line.
> The second serial port would not be to support another user, but to
> communicate with peripherals.
Ah. Well, LSX will still have an issue (above).
Noel
You're welcome. Other theoretical alternatives are one of the later 2.9BSD
patches, though I have had no success getting those to run on anything, and
maybe BRL UNIX from the CSRG DVD? I would appreciate hearing from anyone
who has tried to get BRL UNIX running.
-Henry
On Mon, 21 Sep 2020 at 19:36, Paul Riley <pdr0663(a)icloud.com> wrote:
> Thanks Henry, that’s an interesting alternative.
>
> I’m trying to get a photo of the drive label.
>
> Paul
>
> Sent from my iPhone
>
> On 22 Sep 2020, at 7:28 am, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>
>
> On Fri, 18 Sep 2020 at 23:24, Paul Riley <paul(a)rileyriot.com> wrote:
>
>>
>> Team,
>>
>> I’ve read thru the FAQ and other sources regarding compatibility of
>> Research and other flavours on the PDP-11. I have two physical machines, an
>> 11/03 and an 11/23+. I’m choosing which version to use on each machine.
>>
>> Is LSX the only option on the 11/03, or could I run V6 or Mini-Unix with
>> more RAM?
>>
>> From the FAQ, it says V7 would support 11/23 with kernel recompilation, I
>> assume includes 11/23+. I see 2.11BSD would also run on a ‘23 (and + I
>> guess) with 1MB or more of RAM, so that would be preferred. I suppose 2.11
>> would be preferable.
>>
>> I also have found another 11/23+ system from a seller here in China.
>> There’s the system, and a VT100, and a hard drive I can’t identify. Here’s
>> a photo, does anyone know what it is? I may bid for it...
>>
>> Paul
>>
>> *Paul Riley*
>>
>
> Ultrix 3.1 should support the 11/23+, which would give you memory support
> up to 4MB as well as support for TCP/IP if you have a DEQNA. I don't think
> 2.11BSD will run on anything without split I/D which the 23 does not have.
>
> Without a closer view of the label I doubt that anyone could give you a
> definite identification of that hard drive.
>
> -Henry
>
>
Team,
I’ve read thru the FAQ and other sources regarding compatibility of
Research and other flavours on the PDP-11. I have two physical machines, an
11/03 and an 11/23+. I’m choosing which version to use on each machine.
Is LSX the only option on the 11/03, or could I run V6 or Mini-Unix with
more RAM?
From the FAQ, it says V7 would support 11/23 with kernel recompilation, I
assume includes 11/23+. I see 2.11BSD would also run on a ‘23 (and + I
guess) with 1MB or more of RAM, so that would be preferred. I suppose 2.11
would be preferable.
I also have found another 11/23+ system from a seller here in China.
There’s the system, and a VT100, and a hard drive I can’t identify. Here’s
a photo, does anyone know what it is? I may bid for it...
Paul
*Paul Riley*
> From: Paul Riley
> Using LSX on the 11/03... Will LSX handle cards with multiple serial
> ports?
Ah, I just read this carefully; LSX only supports a single user at a time, so
there's no use for multiple serial lines? (But see below.) (I thought Heinz'
reply message to this referred to Mini-Unix, which does support multiple
users, but on reading it again I see it does not.)
If you want multiple users on an -11/03, Mini-Unix would be an option; it
doesn't support the -11/03 'out of the box', but looking at it, it shouldn't
be too hard. (Heinz mentioned that it had been done before.) Change all cases
of 'mov xx, PS' in mch.s:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=Mini-Unix/usr/sys/mxsys/mch.s
to 'MTPS xx' (PS access needs a special instruction in the /03); that might
be all you need to do. (Installing a KEV11-A, so you can avoid using the
instruction emulation package for EIS instructions, would be nice, but
apparently isn't required.)
If Mini-Unix supports multiple users, it ought to be possible to do the same
with LSX. (I'm not sure what the rationale was for making LSX single-user:
perhaps the RX was too slow; perhaps there was no need in their use case;
etc.)
But it would probably be more work than going the Mini-Unix route; e.g.
to start with, init only supports a single user:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=LSX/src/init.c
and the tty driver:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=LSX/sys/tv.c
only supports a single line. One could cross-port the needed 'stuff' from
Mini-Unix, but it'd probably be easier to do the Mini-Unix -11/03 conversion.
Noel
> From: Heinz Lycklama
> The DLV-11 supported only one serial port in 1975. Other DEC interface
> devices may have been added to the Qbus after the mid 1970's.
The DLV11-J:
http://gunkies.org/wiki/DLV11-J_asynchronous_serial_line_interface
is basically 4 DLV11's rolled into a single dual-width card; that's definitely
an option for Mini-Unix (which apparently supports up to 4 simultaneous
users). They are program compatible with the DL11, so the driver should 'just
work'. The boards are readily available on eBait; 'glitchworks' (of this
parish) sells replacement cables (quite good, I have several).
Another option for serial lines on QBUS machines are DZ11 and DH11 versions
for QBUS. (The DZ11 is interrupt-per-character on output; the DH11 is DMA on
output.) There are two generations of each: the DZV11 (quad-width) and DZQ11
(dual-width), and DHV11 (quad) and DHQ11 (dual). I think they are pretty much
program compatible with their UNIBUS brethren, and should be easy to get
running. The boards are easy to find on eBait, the breakout panels are
somewhat rarer (although there are some DH ones up at the moment).
Noel
Brantley Coile:
The fact that a pointer of zero generates a hardware trap is not
defined in the language, whereas a 0 is is defined to be a null pointer.
=====
The language doesn't require that dereferencing a null pointer
cause a trap, either. There's no way to guarantee that behaviour
in all environments unless every pointer dereference must include
instructions to check for the null-pointer value, because C can
run in environments in which any pointer value might be a valid
address.
On modern machines it's conventional for the null-pointer value
in C, what you get when you assign 0 to a pointer, to be all-zeroes;
and for operating systems to arrange that that address is unmapped.
But that wasn't always so (on the PDP-7 there was no memory map;
on the PDP-11 once memory-mapping was added, address space was
too dear to throw away an eighth of it just to block null-pointer
dereferencing), and it may still not be (consider a C program
on an embedded system running without a memory map).
It's good that modern systems usually whap you in the head if you
deference a null pointer, but it's not required, and those who
rely on it are as foolish as those who used to rely on the
accident that the byte at address 0 on early VAX UNIX was a zero.
Norman Wilson
Toronto ON
> Thanks so much for your reply.
That's what we're here for... :-)
> I have an 11/23+ does that make a difference?
No. The KDF11-B of the 23+:
http://gunkies.org/wiki/KDF11-B_CPU
is the same CPU as all the other KDF11 CPUs; it just has a couple of extra
peripherals on the board (2 async serial lines, and some PROMs, IIRC).
> From the manual it seems to have an MMU
Like all KDF11 CPUs, it has a socket for an MMU chip, but the chip may or may
not be there (I don't know if it was standard on the 23+; and in any event, it
may have been pulled - the CPU will work without it). The main CPU is a DIP
carrier which holds two chips; the optional KTF11-A MMU has one (see the
image, above); the optional KEF11-A FPU is also a carrier with two chips. (The
KDF11-B can also hold the large 6-chip carrier of the optional KEF11-B CIS
chip - a rara avis indeed, if you're lucky enough to have it.)
If yours doesn't have the MMU chip, you're probably SOL; those are very rare.
KEF11-A FPU chips are available on eBay for modest amounts.
> I'm not sure if it's split I/D.
None of the KDF11 CPUs support split I/D.
Noel
You're a bit harsh on the developers but I think in most cases it was
the marketing/finance part of companies which decided on such mundane
matters as licensing.
My 2-1/2 cents.
Cheers,
uncle rubl
>Date: Sat, 19 Sep 2020 12:42:39 -0700
>From: John Gilmore <gnu(a)toad.com>
>To: Clem Cole <clemc(a)ccc.com>
>Cc: "Nelson H. F. Beebe" <beebe(a)math.utah.edu>, tuhs
> <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Unix on DEC AlphaServer 4000
>Message-ID: <32401.1600544559(a)hop.toad.com>
... snip ...
>License managers now count as DRM, under the Digital Millennium
>Copyright Act (though no such laws had been passed when the license
>managers were first created). So: is it worth breaking the law in many
>countries, to maintain a historical curiosity?
>Personally, I would throw DRM-encrusted software, and the hardware that
>is dependent on it, into the dustbin of history. Its creators had fair
>warning that they were making their products unusable after they stopped
>caring to maintain them. They didn't care about their place in history,
>nor about their users. They did it anyway, for short-term profit and to
>harass those people foolish enough to be their customers. Their memes
>should not be passed to future generations. As Sir Walter Scott
>suggested in another context, they "doubly dying, shall go down, to the
>vile dust, from whence [they] sprung, unwept, unhonour'd, and unsung".
John
I have an MVME121 that I’d like to run some stuff on. I’m planning what I’ll need to do to port MINIX 1.5 but since this has a 68451 segmented MMU, I’d like to actually make use of it.
Have any historical sources been published for UNIX on the various 68010 + 68451 systems from the early-mid 1980s? I’m curious how they used segmented MMUs.
I figure at minimum I could have several segments set up to enforce protections and a stable per-process address space, but it’d be good to have an example.
— Chris
Chris Hanson asks about historical sources for Unix on the Motorola
68K processor.
From my bibliography at
http://www.math.utah.edu/pub/tex/bib/unix.bib
I find these Motorola contributions
The Dynamics of a Semi-Large Software Project with Specific
Reference to a UNIX System Port
USENIX Conference Proceedings, Summer, 1984. Salt Lake City, UT
pp. 332--342
[I think that I have a printed copy in my campus office, but
won't be there for another 10 days or so.]
Latent Source Bugs and UNIX System Portability
Proceedings: USENIX Association Winter Conference, January
23--25, 1985, Dallas, Texas, USA
pp. 125--130
Co-Resident Operating System: UNIX and Real-Time Distributed
Processing
Fifth Real-Time Software and Operating Systems Workshop
Proceedings, May 12--13, 1988. Washington, DC
pp 47--53
Co-Resident Operating System: UNIX and Real-Time Distributed Processing
[Fifth RTOS... as above]
pp. 47--53
A Faster fsck for BSD UNIX
Proceedings of the Winter 1989 USENIX Conference: January
30--February 3, 1989, San Diego, California, USA
pp. 173--185
Also take a look at the 200 entries in
http://www.math.utah.edu/pub/tex/bib/minix.bib
I have made attempts to install Debian 10 on the MC68K on QEMU from an
ISO image at
https://cdimage.debian.org/cdimage/ports/2020-05-30/
Source code is, of course, available, so it could be a useful resource
in porting Minix to the MC68K.
However, while I can get the ISO image to boot, I get grub update
failures, and when I try to run the installer, I get "No PCI buses
available". For now, I have given up on that platform until new ideas
for workarounds appear.
I have similar emulated VMs for ARM64, RISC-V64, PowerPC (big and
little endian), and IBM System 390x, all of which run nicely, have
up-to-date O/Ses and binary software package repositories, and are
used for routine software build testing. My attempts for other VMs
for HPPA, Alpha, and SPARC64 CPUs have failed with install or network
problems.
Debian ISO images are available for IA-64, but QEMU has no support for
the Itanium CPU family. We have a single physical IA-64 system that
runs fine, but is currently powered off due to machine-room
air-conditioning issues that will be resolved in a couple of months.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Hi,
Is there a repository of historical versions of Eric Allman's -me macro
set for troff?
(For some context: the macro set has been forked to operate with two
modern troff implementations: GNU groff and Heirloom troff. According
to the header blocks of the respective files, groff's -me macros are
forked from version 2.31 of Allman's, and Heirloom's from version 2.14.
For help in debugging -me problems in these troff implementations,
I'm trying to locate at least these versions of the -me package as they
existed before forking. I posted this query on the troff email list,
but no one there knew the answer, and one person suggested I ask here.)
Thanks for any pointers.
FYI. UNESCO call for a study on the future institutional structure for
Software Heritage.
---------- Forwarded message ----------
Dear all,
I do hope you are all safe, and could take some time off to recharge the
batteries that these hectic times have drained quite a bit.
Some of you know already Software Heritage (https://www.softwareheritage.org)
it is a nonprofit initiative, started by Inria and supported by UNESCO, whose
mission is to ensure that software source code, as part of the common heritage
of humankind, is preserved over time and made available to all, building,
maintaining and developing a universal source code archive, providing
persistent identifiers for all software artifacts, and creating the largest
shared knowledge base about software artifacts ever built.
This is a long term undertaking, and UNESCO has just published a call for
advice, via a small feasibility study providing options for establishing the
future independent, non profit, multi-stakeholder organization that will host
Software Heritage for the long run.
As Software Heritage is a shared infrastructure that will support use cases of
interest to the members of this list, I take the liberty to bring this call to
your attention, and I'd be very grateful if you could also forward it to
whomever you believe could be interested in answering.
Detailed information on the expected advice and procedures to answer the call
is online at:
https://careers.unesco.org/job/Paris-Consultant-on-Software-Heritage-CIMID/…
The deadline for the answer is September 26th.
Thank you for your help
Roberto Di Cosmo (roberto(a)dicosmo.org)
_______________________________________________
foundations mailing list
foundations(a)lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/foundations
> From: Paul Guertin
> I teach math in college ... Sometimes, during an exam, a student who
> forgot to bring their calculator will ask if they can borrow mine I
> always say "sure, but you'll regret it" and hand them the calculator
> After wasting one or two minutes, they give it back
Maybe I'm being clueless/over-asking, but to me it's appalling that any
college student (at least all who have _any_ math requirement at all; not sure
how many that is) doesn't know how an RPN calculator works. It's not exactly
rocket science, and any reasonably intelligent high-schooler should get it
extremely quickly; just tell them it's simply a representational thing, number
number operator instead of number operator number. I know it's not a key
intellectual skill, but it does seem to me to be part of common intellectual
heritage that everyone should know, like musical scales or poetry
rhyming. Have you ever considered taking two minutes (literally!) to cover it
briefly, just 'someone tried to borrow my RPN calculator, here's the basic
idea of how they work'?
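For anyone who wants the two-minute version in code rather than words: a
postfix evaluator is just a stack and a loop. The sketch below is my own
toy (single-digit operands and + - * only), not anything from a real
calculator:

    #include <stdio.h>

    int rpn(const char *s)
    {
        int stack[64], sp = 0;

        for (; *s; s++) {
            if (*s >= '0' && *s <= '9') {
                stack[sp++] = *s - '0';       /* operand: push it */
            } else if (*s == '+' || *s == '-' || *s == '*') {
                int b = stack[--sp];          /* operator: pop two, */
                int a = stack[--sp];          /* apply, push result */
                stack[sp++] = *s == '+' ? a + b :
                              *s == '-' ? a - b : a * b;
            }
        }
        return stack[sp - 1];
    }

    int main(void)
    {
        printf("%d\n", rpn("34+2*"));         /* (3 4 +) 2 * = 14 */
        return 0;
    }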
Noel
Dennis Ritchie's ACM Turing Award lecture paper in Communications of
the ACM 27(8) 758--760 (August 1984), doi:10.1145/358198.358207 was
reprinted in UNIX Review 3(1) 28, 118--120, 122, (January 1985) [no
DOI or URL yet found], and more recently, in Resonance 17(8) 810--816
(August 2012) doi:10.1007/s12045-012-0091-y.
There are two other UNIX-related papers in that issue of Resonance:
Pramod Chandra P. Bhatt
UNIX: Genesis and design features
Resonance 17(8) 727--747 (August 2012)
doi:10.1007/s12045-012-0084-x
K. Bhaskar
C --- Past, present, and future --- A perspective
Resonance 17(8) 748--758 (August 2012)
doi:10.1007/s12045-012-0085-9
I do not have access to that journal's archives from my campus
library, so I have not seen the articles.
In his paper, Dennis Ritchie referred to another UNIX article that I
did manage to track down and record in unix.bib:
Donald Arthur Norman
The Truth about UNIX
Datamation 27(12) 139--150 (November 1981)
https://tinyurl.com/yyselmxq
The original URL is 200+ characters long, and is a freely-downloadable
PDF of a reprint in AUUGN volume IV number I. The PDF file has
searchable text.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
For yonks I've been seeing "XXX" as a flag to mean "needs more work" or
"look at this carefully, just in case" etc, and I use it myself.
Whence did it come? I think I saw it as early as PWB, but can't be
sure.
-- Dave, wondering how many nanny-filters he triggered with "XXX"
Score! I'm not planning to scan all of these in unless someone really cares
and has a better scanner than I do. I'm going to prioritize the following:
516-10, -11, -18 Ring Formats
516-12 Specifications for the Node Modem Interface
516-22 A Repeater for the Node Modem
516-28 I/O Ring Device Codes
516-36 Node Modem Interface for Computer Terminals
516-72 Node Test
516-73 Node I/O Software
516-57 Format for the 516 Node - TIU Spider Interface
516-67 Node Format for PDP-11
Jon
In my humble-but-correct opinion*, Linux and its
origins fit into the general topic of UNIX history
just as well as those of Research UNIX or BSD or
SVr4.2.2.2.2.2.2.2 or SunOS or IRIX or Ultrix or
Tru64-compaqted-HPSauce or whatever. It all stems
from the same roots, despite the protestations of
purists from all sides.
Warren gets final say, of course, but to encourage
him I will say: Ploooogie!
Norman Wilson
Toronto ON
* One of Peter Weinberger's sayings that I still
enjoy overusing.
Honestly, I'm not quite sure if this is a TUHS, COFF, or IH question. But
since my background with respect to such things is largely Unix centric, I
thought I'd ask in that context, hence asking on TUHS.
I assume some of the regulars on this list have authored RFCs (of the IETF
etc variety). The RFC format seems fairly well fixed: table of contents,
fixed number of lines per page, page numbers and dates in the footer, and
so forth. The format is sufficiently complex that it seems like some
tooling could be usefully employed to aid in producing these documents.
So I'm curious: what tools did people use to produce those documents?
Perhaps `nroff` with custom macros or something?
- Dan C.
https://www.youtube.com/watch?v=ww60o940kEk
You may be surprised :)
Warner
P.S. Hope this is relevant enough to share here. Also, if I botched
anything I've not yet mentioned in the comments, please let me know...
A modestly corrected and improved version of my bare-m4
program, which quickly builds from nothing to arithmetic on
binary numbers using no builtins other than `define', is
posted at www.cs.dartmouth.edu/~doug/barem4.txt. (.txt
because browsers balk at .m4)
Doug
Another question at the intersection of the Internet-History and TUHS
lists...
I was wondering about the early history of BIND. I started off wondering
about the relative ages of JEEVES (the original PDP10 DNS server) and
BIND. Judging by the dates on RFCs 881 - 883, the DARPA contract
commissioning BIND, and the Berkeley tech reports, it seems there wasn't
much time when the DNS was without BIND.
But I was struck by the resolver API set out on page 8 of Berkeley tech
report UCB/CSD-84-182: it looks nothing like the familiar API that ended
up in 4.3BSD.
https://www2.eecs.berkeley.edu/Pubs/TechRpts/1984/5957.html
https://minnie.tuhs.org/cgi-bin/utree.pl?file=4.3BSD/usr/src/lib/libc/net/n…
So I'm wondering if there's anything out there recording the history
between the 1984 tech reports and the 4.3BSD release in 1986.
(It's also noteworthy that early BIND supported incremental DNS updates
and zone transfers, which didn't reappear in standard form until RFC 2136
(1997) and RFC 1995 (1996)...)
Tony.
--
f.anthony.n.finch <dot(a)dotat.at> http://dotat.at/
public services of the highest quality
Doug,
“In fact Dennis's compiler did not use AID instructions for that purpose.”
Whilst local variables are indeed accessed as an offset from the base pointer, I’m not sure that the above statement is correct. In the V6 compiler (-sp) was certainly used to push arguments to the stack, and when the register allocation overflowed, the interim results were pushed to the stack as well with (-sp).
See c10.c, the case CALL in rcexpr(), the function comarg() and sptab (which is right at the end of table.s)
Links:
https://www.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/c/c10.c
https://www.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/c/table.s
For interim result pushing/popping I refer to the FS and SS macros. Dennis discusses these in his “A tour of the C compiler” paper.
https://www.jslite.net/cgi-bin/9995/doc/tip/doc/old-ctour.pdf
Of course this is all implementational detail, not a core design aspect - as Richard Miller showed in his port to the Interdata architecture (including a port of the Ritchie C compiler). Maybe the sentence should have read: "In fact Dennis's compiler did not rely on having AID instructions for that purpose."
Paul
@Warren: at moments like these I really like having the line highlight feature that we discussed before Summer.
> From: John Cowan
> That's always true on the PDP-11 and Vax ... because the processor
> architecture (which has pre-increment and post-decrement instructions,
> but not their counterparts)
After Doug's message, I carefully re-read this, and I'm not sure it's correct?
The PDP-11 has pre-decrement and post-increment, not the other way around (as
above) - unless I'm misunderstanding what you meant by those terms?
That's why:
*p++ = 0;
turns (if p is in R2) into
CLR (R2)+
R2 is used, and then incremented after it has been used.
Noel
> That's always true on the PDP-11 and Vax, no matter what the OS,
> because the processor architecture (which has pre-increment and
> post-decrement instructions, but not their counterparts) makes
> anything but a downward-growing stack unmanageable.
I hadn't heard this urban legend before. A stack is certainly
manageable without auto-increment/decrement (AID) instructions. In
fact Dennis's compiler did not use AID instructions for that purpose.
AID instructions are nice for a LIFO stack, in which a stacked
item disappears as soon as it's used--as do intermediate
results in expression evaluation. But when the stack contains
local variables that are accessed multiple times, the accesses
are offset from the stack pointer. If AID instructions set the
pointer, then the offset of a particular local varies throughout
the code. The compiler can cope with that (I once wrote a
compiler that did so), but a debugger will have a devil of a
time doing a backtrace when the offset of the return address
in each stack frame depends on the location counter.
AID instructions are fine for sequentially accessing arrays, and
in principle Ken's ++ and -- operators can exploit them. Yet
idiomatic C typically wants post-increment and pre-decrement
instructions--the opposite of what DEC offered. Examples:
    char a[N], b[N];
    char *ap = a;
    char *bp = b;
    int i;
    for(i=0; i<N; i++)
        *ap++ = *bp++;

    int a[N], b[N];
    int i = N;
    while(--i >= 0)
        a[i] = b[i];
Doug
Hi,
I have a project to revive the C compiler from V7/V10.
I wanted to check if anyone here knows about the memory management in
the compiler (c0 only for now). I am trying to migrate the memory
management to malloc/free but I am struggling to understand exactly
how memory is being managed.
Thanks and Regards
Dibyendu
I am fairly sure the Interdata port from Wollongong used the V6 C compiler, and this lived on in the “official” V7 port from Perkin-Elmer; it still used the V6 compiler.
i remember the pain of the global namespace for structure members.
-Steve
> >> Even high-school employees could make lasting contributions. I am
> >> indebted to Steve for a technique he conceived during his first summer
> >> assignment: using macro definitions as if they were units of associative
> >> memory. This view of macros stimulated previously undreamed-of uses.
>
> > Can you give some examples of what this looked like?
>
See attached for an answer to Arnold's question
Doug
> Did the non-Unix people also pull pranks like the watertower?
One of my favorites was by John Kelly, a Texas original,
who refused the department-head perk of a rug so he
could stamp his cigarettes out on the vinyl floor.
John came from Visual and Acoustics Research, where
digital signal processing pressed the frontiers of
computing. Among his publications was the completely
synthetic recording of "Daisy, Daisy" released
circa 1963.
Kelly electrified the computer center with a
blockbuster prank a year or two before that. As
was typical of many machine rooms, a loudspeaker
hooked to the low-order bit of the accumulator
played gentle white noise in the background. The
noise would turn into a shriek when the computer
got into a tight loop, calling the operators to
put the program out of its misery.
Out of the blue one day, the loudspeaker called
for help more articulately: "Help, I'm caught in
a loop. Help, I'm caught in a loop. ..." it
intoned in a slow Texas drawl. News of the talking
computer spread instantly and folks crowded into
the machine room to marvel before the operators
freed the poor prisoner.
Doug
Dan Cross:
I'll confess I haven't looked _that_ closely, but I rather imagine that the
V10 compiler is a descendant of PCC rather than Dennis's V6/V7 PDP-11
compiler.
====
Correct. From 8/e to the end of official Research UNIX,
cc was pcc2 with a few research-specific hacks.
As Dan says, lcc was there too, but not used a lot.
I'm not sure which version of lcc it was; probably it
was already out-of-date.
In my private half-baked 10/e descendant system, which
runs only on MicroVAXes in my basement, cc is an lcc
descendant instead. I took the lcc on which the book
was based and re-ported it to the VAX to get an ISO-
compliant C compiler, and made small changes to libc
and /usr/include to afford ISO-C compliance there too.
The hardest but most-interesting part was optimizing.
lcc does a lot of optimization work by itself, and
initially I'd hoped to dispense with a separate c2
pass entirely, but that turns out not to be feasible
on machines like the VAX or the PDP-11: internally
lcc separates something like
    c = *p++;
into two operations
    c = *p;
    p++;
and makes two distinct calls to the code generator.
To sew them back together from
    cvtbl (p),c
    incl p
to
    cvtbl (p)+,c
requires external help; lcc just can't see that
what it thinks of as two distinct expressions
can be combined.
It's more than 15 years since I last looked at any
of this stuff, but I vaguely remember that lcc has
its own interesting (but ISO/POSIX-compatible)
memory-allocation setup. It allows several nested
contexts' worth of allocation, freeing an inner
context when there's no longer any need for it.
For example, once the compiler has finished with
a function and has no further need for its local
symbols, it frees the associated memory.
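The flavour of that nested-context scheme is easy to sketch. This is my
own illustration of the general idea, not lcc's actual interface or code:

    #include <stdlib.h>

    /* Each context is a chain of large blocks; freeing a context releases
     * everything allocated in it at once.  Assumes each request fits in
     * one block and ignores malloc failure.
     */
    struct block { struct block *next; size_t used; char mem[4096]; };

    static struct block *context[10];    /* e.g. permanent, per-function, ... */

    void *arena_alloc(int c, size_t n)
    {
        struct block *b = context[c];

        n = (n + 7) & ~(size_t)7;        /* keep returned pointers aligned */
        if (b == NULL || b->used + n > sizeof b->mem) {
            b = malloc(sizeof *b);       /* start a fresh block */
            b->used = 0;
            b->next = context[c];
            context[c] = b;
        }
        b->used += n;
        return b->mem + b->used - n;
    }

    void arena_free(int c)               /* drop a whole context at once */
    {
        struct block *b, *next;

        for (b = context[c]; b != NULL; b = next) {
            next = b->next;
            free(b);
        }
        context[c] = NULL;
    }

Freeing a whole context in one call is what lets the compiler discard all
of a function's local symbols the moment it is done with them.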
See the lcc book for details. Read the book anyway;
it's the one case I know of in which the authors
followed strict Literate Programming rules and made
a big success of it. Not only is the compiler well-
documented, but the result is a wonderful tour
through the construction and design decisions of a
large program that does real work.
Norman Wilson
Toronto ON
> From: Dibyendu Majumdar
> the C compiler from V7/V10. I wanted to check if anyone here knows
> about the memory management in the compiler (c0 only for now). I am
> trying to migrate the memory management to malloc/free but I am
> struggling to understand exactly how memory is being managed.
Well, I don't know much about the V7 compiler; the V6 one, which I have looked
at, doesn't (except for the optimizer, C2) use allocated memory at all.
The V7 compiler seems to use sbrk() (the system call to manage the location of
the end of a process' data space), and manage the additional data space
'manually'; it does not seem to use a true generic heap. See gblock() in
c01.c:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V7/usr/src/cmd/c/c01.c
which seems to use two static variables (curbase and coremax) to manage
that additional memory:
    p = curbase;
    if ((curbase =+ n) >= coremax) {
        if (sbrk(1024) == -1) {
            error("Out of space");
            exit(1);
        }
        coremax =+ 1024;
    }
    return(p);
My guess is that there's no 'free' at all; each C source file is processed
by a new invocation of C0, and the old 'heap' is thrown away when the
process exits at end of file.
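If the goal is simply to get c0 onto malloc(), one low-risk first step is
to keep gblock()'s interface and still never free anything, since the
process throws the whole heap away at exit anyway. A sketch only, untested
against the real source, and it assumes nothing in c0 depends on
successive gblock() chunks being contiguous (which the sbrk() version
happens to provide):

    #include <stdlib.h>

    extern void error(const char *msg, ...);   /* c0's own diagnostic
                                                  routine; signature assumed */

    char *gblock(int n)
    {
        char *p = malloc((size_t)n);

        if (p == NULL) {
            error("Out of space");
            exit(1);
        }
        return p;
    }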
Noel
> From: Larry
> It's possible the concept existed in some other OS but I'm not aware of
> it.
It's pretty old. Both TENEX and ITS had the ability to map file pages into a
process' address space. Not sure if anything else of that vintage had it (not
sure what other systems back then _had_ paging, other than Atlas; those two
both had custom KA10 homebrew hardware mods to support paging). Of course,
there's always Multics... (sorry, Ken :-).
Noel
fyi warner,
in your talk, you referred to Alex Fraser at bell labs.
he was my first director, and always went by “sandy”.
i asked my wife (who was a secretary at the time) and she
said he was occasionally referred to as Alex. certainly, every
time i saw anything written by him or references to a talk by him
always used “sandy”.
andrew
>
> Message: 6
> Date: Fri, 5 Jun 2020 16:51:27 -0600
> From: Warner Losh <imp(a)bsdimp.com>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: [TUHS] My BSDcan talk
> Message-ID:
> <CANCZdfpq8tiDYe2iVeFh1h0VMDK+4B=kXuGSJ3iNmtjbzHQT6Q(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> OK. Must be off my game... I forgot to tell people about my BSDcan talk
> earlier today. It was streamed live, and will be online in a week or
> three...
>
> It's another similar to the last two. I've uploaded a version to youtube
> until the conference has theirs ready. It's a private link, but should work
> for anybody that has it. Now that I've given my talk it's cool to share
> more widely... https://www.youtube.com/watch?v=NRq8xEvFS_g
>
> The link at the end is wrong. https://github.com/bsdimp/bsdcan2020-demos is
> the proper link.
>
> Please let me know what you think.
>
> Warner
>
Just saw your BSDcan talk. Great stuff, so much progress in the last five years. Just wanna say thanks. When I started looking into ancient systems, it was hard finding anything coherent on the historical side beyond manuals and this list (thankful to Warren & co for the list). Your talk is packed with interesting information and really pulls together the recent pieces.
Great job, Warner.
I needed to look up something in the Bell System Technical Journal
(Wikipedia didn't have it) and discovered that the old Alcatel-Lucent
site that used to host a free archive of BSTJ no longer seems extant.
(No surprise, the Web is nothing if not ephemeral.)
After a bit of Googling, I did find that the archives are now residing
at <https://archive.org/details/bstj-archives> and found what I was
looking for there.
Hope others find this link useful. At least until it too "sublimates".
Kirk McKusick
Looking at the 6th edition man page tty(2), I see
Carriage-return delay type 1 lasts about .08 seconds and is
suitable for the Terminet 300. Delay type 2 lasts about .16
seconds and is suitable for the VT05 and the TI 700. Delay
type 3 is unimplemented and is 0.
New-line delay type 1 is dependent on the current column and
is tuned for Teletype model 37's. Type 2 is useful for the
VT05 and is about .10 seconds. Type 3 is unimplemented and
is 0.
Why would the VT05 (a VDU) need a delay for carriage return?
I can just about imagine that it might need one for linefeed
if it shifted the characters in memory.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
On 2020-07-29 15:17, Paul Koning wrote:
>
>
>> On Jul 29, 2020, at 5:50 AM, Johnny Billquist <bqt(a)softjar.se> wrote:
>>
>> Just a small comment. Whoever it was that thought DECtape was a tape was making a serious mistake. DECtapes are very different from magtapes.
>>
>> Johnny
>
> Depends on what you're focusing on. Most tapes are not random-write. DECtape and EL-X1 tape are exceptional in that respect. But tapes, DECtape included, have access time proportional to delta block number (and that time is large) unlike disks.
>
> From the point of view of I/O semantics, the first point is significant and the second one not so much.
True. But seek times are in the end only relevant as an aspect of the
speed of the thing, nothing else.
However, seek times on DECtape aren't really comparable to magtape
either. Because DECtape deals with absolute block numbers. So you can
always, no matter where you are, find out where you are, and how far you
will need to move to get to the correct block.
With magtapes, this is pretty much impossible. You'll have to rewind,
and then start seeking.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
OK, I was able to locate 2bsd.tar.gz and spencer_2bsd.tar.gz in the
archive. Neither is an installation tape. It appears that they are just
tarballs of their respective systems (there are very minor differences
between the two).
In the TAPE file in the tarball, it talks about reading the tar program
off of the tape using:
dd if=/dev/mt0 bs=1b skip=1 of=tar
Well, tar is definitely not located at that address, which implies that
the tarball isn't a distro tape. This note in the archive used to read:
...
The remaining gzipped tar files are other 2BSD distributions supplied by
Keith Bostic, except for spencer_2bsd.tar.gz which came from Henry Spencer.
They do not contain installation tape images. The 2.9BSD-Patch directory
contains patches to 2.9BSD dated August 85, and again supplied by Keith Bostic.
...
now it reads:
...
2.11BSD 2.11BSD-pl195.tar is a copy of 2.11BSD at patch level 195, supplied
by Tom Ivar Helbekkmo. spencer_2bsd.tar.gz is a version of 2BSD which came
from Henry Spencer.
...
I recall having to do something with cont.a files, which are not present
on these images. So, my questions is, does anyone know of or have an
actual 2bsd tape/tape image?
Thanks,
Will
Here's where I found the tarballs:
https://www.tuhs.org/Archive/Distributions/UCB/
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
All,
About a week ago, Bill English passed away. He was a Xerox guy, who
along with Douglas Engelbart of "Mother of all demos" fame, created our
beloved mouse:
https://www.bbc.com/news/technology-53638033
I remember, back in the mid-1980's being part of a focus group
evaluating Microsoft's mouse. Wow, time flies.
-Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Lars Brinkhoff
> I haven't investigated it thoroughly, but I do see a file .DOVR.;.SPOOL
> 8 written in C by Eliot Moss.
> ...
> When sending to the DOVER, the spooler waits until Spruce is
> free before sending another file.
Ah, so there was a spooler on the ITS machine as well; I didn't know/remember
that.
I checked on CSR, and it did use TFTP to send it to the Alto spooler:
HOST MIT-SPOOLER, LCS 2/200,SERVER,TFTPSP,ALTO,[SPOOLER]
I vaguely recall the Dover being named 'Spruce', but that name wasn't in the
host table... I have this vague memory that 'MIT-Spooler' was the Alto which
prove the Dover, but now that I think about it, it might have been another one
(which ran only TFTP->EFTP spooler software). IIRC the Dover as a pain to run,
it required a very high bit rate, and the software to massage it was very
tense; so it may have made sense to do the TFTP->EFTP (I'm pretty sure the
vanilla Dover spoke EFTP, but maybe I'm wrong, and it used the PUP stream
protocol) in another machine.
It'd be interesting to look at the Dover spooler on ITS, and see if/how one
got to the CHAOS network from C - and if so, how it identified the protocol
translating box.
Noel
> From: Will Senn
> $c
> 0177520: ~signal(016,01) from ~sysinit+034
> 0177542: ~sysinit() from ~main+010
> 0177560: _main() from start+0104
> If this means it got signal 16... or 1 from the sysinit call (called
> from main)
I'm not sure that interpretation is correct. I think that trace shows signal()
being called from sysinit().
On V6, signal() was a system call which one could use to set the handlers for
signals (or set them to be ignored, or back to the default action). In 2.11
it seems to be a shim layer which provides the same interface, but uses
the Berserkly signal system interface underneath:
https://www.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/include/signal.h
https://www.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/man/cat3/signal.0
So maybe the old binary for kermit is still trying to use the (perhaps
now-removed) signal system call?
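For illustration, here is the general shape of such a compatibility shim,
written against POSIX sigaction() purely as an example; it is not the
actual 2.11BSD signal.h code:

    #include <signal.h>

    /* Old-style signal() entry point layered over a newer interface. */
    void (*compat_signal(int sig, void (*handler)(int)))(int)
    {
        struct sigaction sa, old;

        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;              /* add SA_RESTART for BSD-style restarts */
        if (sigaction(sig, &sa, &old) < 0)
            return SIG_ERR;
        return old.sa_handler;
    }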
Noel
> From: Lars Brinkhoff
> the Dover printer spooler was written using Snyder's C compiler
I'm not sure if that's correct. I don't remember with crystal clarity all the
details of how we got files to the Dover, but here's what I recall (take with
1/2 a grain of salt, my memory may have dropped some bits). To start with,
there were different paths from the CHAOS and TCP/IP worlds. IIRC, there was a
spooler on the Alto which ran the Dover, and the two worlds had separate paths
to get to it.
From the CHAOS world, there was a protocol translation which ran on whatever
machine had the AI Lab's 3Mbit Ethernet interface - probably MIT-AI's
CHAOS-11? If you look at the Macro-11 code from that, you should see it - IIRC
it translated (on the fly) from CHAOS to EFTP, the PUP protocol which the
spooler ran 'natively'.
From the IP world, IIRC, Dave Clark had adapted his Alto TCP/IP stack (written
in BCPL) to run in the spooler alongside the PUP software; it included a TFTP
server, and people ran TFTP from TCP/IP machines to talk to it. (IP access to
the 3Mbit Ethernet was via another UNIBUS Ethernet interface which was plugged
into an IP router which I had written. The initial revision was in Macro-11; a
massive kludge which used hairy macrology to produce N^2 discrete code paths,
one for every pair of interfaces on the machine. Later that was junked, and
replaced with the 'C Gateway' code.)
I can, if people are interested, look on the MIT-CSR machine dump I have
to see how it (a TCP/IP machine) printed on the Dover, to confirm that
it used TFTP.
I don't recall a role for any PDP-10 C code, though. I don't think there was a
spooler anywhere except on the Dover's Alto. Where did that bit about the
PDP-10 spooler in C come from, may I enquire? Was it a CMU thing, or something
like that?
Noel
> My unscientific survey of summer students was that they either came
> from scouts, or were people working on advanced degrees in college.
Not all high-school summer employees were scouts (or scout equivalents -
kids who had logins on BTL Unix machines). I think in particular of Steve
Johnson and Stu Feldman, who eventually became valued permanent employees.
The labs also hired undergrad summer employees. I was one.
Even high-school employees could make lasting contributions. I am
indebted to Steve for a technique he conceived during his first summer
assignment: using macro definitions as if they were units of associative
memory. This view of macros stimulated previously undreamed-of uses.
Doug
I'm running 211bsd pl 431 in SimH on FreeBSD. I've got networking
working on a tap interface both inbound and outbound. I still have a few
issues hanging around that are bugging me, but I'll eventually get to
them. One that is of concern at the moment is kermit. It is in the
system under /usr/new/kermit. When I call it, I get:
kermit
Bad system call - core dumped
I don't see core anywhere and if I did, I'd need to figure out what to
do with it anyway (mabye adb), but I'm wondering if anyone's used kermit
successfully who is on pl 431 or knows what's going on?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I've always been intrigued with regexes. When I was first exposed to
them, I was mystified and lost in the greediness of matches. Now, I use
them regularly, but still have trouble using them. I think it is because
I don't really understand how they work.
My question for y'all has to do with early unix. I have a copy of
Thompson, K. (1968). Regular expression search algorithm. Communications
of the ACM, 11(6), 419-422. It is interesting as an example of
Thompson's thinking about regexes. In this paper, he presents a
non-backtracking, efficient, algorithm for converting a regex into an
IBM 7094 (whatever that is) program that can be run against text input
that generates matches. It's cool. It got me to thinking maybe the way
to understand the unix regex lies in a careful investigation into how it
is implemented (original thought, right?). So, here I am again to ask
your indulgence as the latecomer wannabe unix apprentice. My thought is
that ed is where it begins and might be a good starting point, but I'm
not sure - what say y'all?
I also have a copy of the O'Reilly Mastering Regular Expressions book,
but that's not really the kind of thing I'm talking about. My question
is more basic than how to use regexes practically. I would like to
understand them at a parsing level/state change level (not sure that's
the correct way to say it, but I'm really new to this kind of lingo).
When I'm done with my stepping through the source, I want to be able to
reason that this is why that search matched that text and not this text
and why the search was greedy, or not greedy because of this logic here...
If my question above isn't focused or on topic enough, here's an
alternative set to ruminate on and hopefully discuss:
1. What's the provenance of regex in unix (when did it appear, in what
form, etc)?
2. What are the 'best' implementations throughout unix (keep it pre 1980s)?
3. What are some of the milestones along the way (major changes, forks,
disagreements)?
4. Where, in the source, or in a paper, would you point someone to
wanting to better understand the mechanics of regex?
Thanks!
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> 1. What's the provenance of regex in unix (when did it appear, in what form, etc)?
> 2. What are the 'best' implementations throughout unix (keep it pre1980s)?
> 3. What are some of the milestones along the way (major changes, forks, disagreements)?
The editor ed was in Unix from day 1. For the necessarily tiny
implementation, Ken discarded various features
from the ancestral qed. Among the casualties was alternation
in regular expressions. It has never fully returned.
Ken's original paper described a method for simulating all paths
of a nondeterministic finite automaton in parallel, although he
didn't describe it in these exact terms. This meant he had to
keep track of up to n possible states, where n is the number of
terminal symbols in the regular expression.
"Computing Reviews" published a scathing critique of the paper:
everyone knows a deterministic automaton can recognize regular
expressions with one state transition per input character; what
a waste of time to have to keep track of multiple states! What the
review missed was that the size of the DFA can be exponential in n.
For one-shot use, as in an editor, it can take far longer to construct
the DFA than to run it.
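A crude illustration of "simulating all paths in parallel" (my own toy,
nothing like Ken's 7094 code): keep the set of live NFA states in a bit
mask and advance every one of them on each input character. The NFA here
is hand-built for the regular expression a(b|c)*d:

    #include <stdio.h>

    /* States: 0 = start, 1 = seen 'a' (looping on b or c), 2 = accept. */
    static int match(const char *s)
    {
        unsigned cur = 1u << 0;              /* start state only */

        for (; *s; s++) {
            unsigned next = 0;

            if ((cur & (1u << 0)) && *s == 'a')
                next |= 1u << 1;
            if (cur & (1u << 1)) {
                if (*s == 'b' || *s == 'c')  /* the (b|c)* loop */
                    next |= 1u << 1;
                if (*s == 'd')
                    next |= 1u << 2;
            }
            cur = next;
            if (cur == 0)
                return 0;                    /* every path has died */
        }
        return (cur & (1u << 2)) != 0;
    }

    int main(void)
    {
        printf("%d %d %d\n", match("abcbd"), match("ad"), match("abx"));
        return 0;
    }

The work per character is bounded by the number of NFA states, never by
the number of paths, which is the point of the construction.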
This lesson came home with a vengeance when Al Aho wrote egrep,
which implemented full regular expressions as DFA's. I happened
to be writing calendar(1) at the same time, and used egrep to
search calendar files for dates in rather free formats for today
and all days through the next working day. Here's an example
(egrep interprets newline as "|"):
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*1)([^0123456789]|$)
(^|[ (,;])((\* *)0*1)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*2)([^0123456789]|$)
(^|[ (,;])((\* *)0*2)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*3)([^0123456789]|$)
(^|[ (,;])((\* *)0*3)([^0123456789]|$)
Much to Al's chagrin, this regular expression took the better
part of a minute to compile into a DFA, which would then run in
microseconds. The trouble was that the DFA was enormously
bigger than the input--only a tiny fraction of the machine's
states would be visited; the rest were useless. That led
him to the brilliant idea of constructing the machine on
the fly, creating only the states that were pertinent to
the input at hand. That innovation made the DFA again
competitive with an NFA.
Doug
This topic is still primarily UNIX but is getting near the edge of COFF, so
I'll CC there if people want to follow up.
As I mentioned to Will, during the time Research was doing the work/put out
their 'editions', the 'releases' were a bit more ephemeral - really a set
of bits (binary and hopefully matching source, but maybe not always)
that become a point in time. With 4th (and I think 5th) Editions it was a
state of a disk pack when the bits were copied, but by 6th edition, as Noel
points out, there was a 'master tape' that the first site at an
institution received upon executing of a signed license, so the people at
each institution (MIT, Purdue, CMU, Harvard) passed those bits around
inside.
But what is more, as Noel pointed out, we all passed source code and
binaries between each other, so DNA was fairly mixed up [sorry Larry - it
really was 'Open Source' between the licensees]. Sadly, it means some
things that actually were sourced at one location and one system are
sometimes credited to some other place because the >>wide<< release was
in USG or BSD [think Jim Kulp's job control, which ended up in the kernel
and csh(1) as part of 4BSD, our recent discussions on the list about
more/pg/less, the different networking changes from all of MIT/UofI/Rand,
Goble's FS fixes to make the thing more crash resilient, the early Harvard
ar changes - a.k.a. newar(1), which became ar(1) - CMU fsck, etc.].
Eventually, the AT&T Unix Support Group (USG) was stood up in Summit, as I
understand it, originally for the Operating Companies as they wanted to use
UNIX (but not for the licenses, originally). Steve Johnson moved from
Research over there and can tell you many more of the specifics.
Eventually (i.e., post-Judge Green), distribution to the world moved from
MH's Research and the Patent Licensing teams to USG and AT&T North Carolina
business folks.
That said, when the distribution of UNIX moved to USG in Summit, things started
to get a bit more formal. But there were still differences inside, as we have
tried to unravel. PWB/TS and eventually System x. FWIW, BSD went
through the same thing. The first BSD's are really the binary state of the
world on the Cory 11/70, later 'Ernie.' By the time CSRG gets stood
up because their official job (like USG) is to support Unix for DARPA, Sam
and company are acting a bit more like traditional SW firms with alpha/beta
releases and a more formal build process. Note that 2.X never really
went through that, so we are all witnessing the wonderful efforts to try to
rebuild early 2.X BSD, and see that the ephemeral nature of the bits has
become more obvious.
As a side story ... the fact is that even for professional SW houses, it
was not as pure as it should be. To be honest, knowing the players and
processes involved, I highly doubt DEC could rebuild early editions of VMS,
particularly since the 'source control' system was a physical flag in
Cutler's office.
The fact is that the problem of which bits were used to make what other
bits was widespread enough throughout the industry that in the mid-late 80s
when Masscomp won the bid to build the system that NASA used to control the
space shuttle post-Challenger, a clause of the contract was that we had to
put an archive of the bits running on the build machine ('Yeti'), a copy of
the prints and even microcode/PAL versions so that Ford Aerospace (the
prime contractor) could rebuild the exact system we used to build the
binaries for them if we went bankrupt. I actually had a duplicate of that
Yeti as my home system ('Xorn') in my basement when I made some money for a
couple of years as a contract/on-call person for them every time the
shuttle flew.
Anyway - the point is that documentation and actual bits not being 100% in sync
is nothing new. Companies work hard to try to keep it together, but
different projects work at different speeds. In fact, the 'train release'
model is usually what people fall into. You schedule a release of
some piece of SW and anything that goes with it, has to be on the train or
it must wait for the next one. So developers and marketing people in firms
argue about what gets to be the 'engine' [hint: often it's HW releases, which are a
terrible idea, but that's a topic for COFF].
> From: Warner Losh
> 8 I think was the limit.
IIRC, you could use longer names than that (in C), but external references
only used the first 7 (in C - C symbols had a leading '_' tacked on; I used to
know why), 8 (in assembler).
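(To make the truncation concrete - my example, not Adam's code - ld sees
the same 8-character name, "_verylon", for both of these externals:)

    int verylongname_one;    /* first 7 characters plus the '_' prefix */
    int verylongname_two;    /* ld cannot tell these two apart */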
> Could that cause this error?
Seems unlikely - see below.
> The error comes from lookloc. This is called for external type
> relocations. It searches the local symbol table for something that
> matches the relocation entry. This error happens when it can't find
> it...
Someone who actually looked at the source:
https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/ld.c
instead of just guessing. Give that man a star!
I spent a while looking at the code, trying to figure out i) how it works, and
ii) what's going wrong with that message, but I don't have a definitive
answer. The code is not super well commented, so one has to actually
understand what it's doing! :-)
It seems to my initial perusal that it maintains two symbol tables, one for
globals (which accumulates as each file is processed), and one for locals
(which is discarded/reset for each file). As Warner mentioned, the message
appears when a local symbol referenced in the relocation information in the
current file can't be found (in the local symbol table).
It's not, I think, simply due to too many local symbols in an input file -
there seems to be a check for that as it's reading the input file symbol
table:
    if (lp >= &local[NSYMPR])
        error(1, "Local symbol overflow");
    *lp++ = symno;
    *lp++ = sp;
although of course there could be a bug which breaks this check. It seems to
me that this is an 'impossible' error, one which can only happen due to i) a
bug in the loader (a fencepost error, or something), or ii) an error in the
input a.out file.
I don't want to spend more time on it, since I'm not sure if you've managed to
bypass the problem. If not, let me know, and we'll track it down. (This may
involve you adding some printf's so we have more info about the details.)
Noel
I finally munged lbforth.c (https://gist.github.com/lbruder/10007431) into
compiling cleanly on mostly-stock v7 with the system compiler (lbforth
itself does fine on 211BSD, but it needs a little help to build in a real
K&R environment).
Which would be nice, except that when it gets to the linker....
$ cc -o 4th forth.c
ld:forth.o: Local symbol botch
WTF?
How do I begin to debug this?
Adam
> From: Will Senn
> it finally clicked that it is just one (of many) bit buckets out there
> with the moniker v6. ... I am coming from a world where OS version
> floppy/cd/dvd images are copies of a single master ... These tape things
> could be snapshots of the systems they originate from at very different
> times and with different software/sources etc.
Well, sort of. Do remember that everyone with V6 had to have a license, which
at that point you could _only_ get from Western Electric. So every
_institution_ (which is not the same as every _machine_) had to have had
dealings with them. However, once your institution was 'in the club', stuff
just got passed around.
E.g. the BBN V6 system with TCP/IP:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-V6
I got that by driving over to BBN, and talking to Jack Haverty, and he gave us
a tape (or maybe a pack, I don't recall exactly). But we had a V6 license, so
we could do that.
But my particular machine, it worked just the way you described: we got our V6
from the other V6 machine in the Tech Sq building (RTS/DSSR), including not
only Bell post-V6 'leakage' like adb, but their local hacks (e.g. their TTY
driver, and the ttymod() system call to get to its extended features; the
ability to suspend selected applications; yadda, yadda). We never saw a V6
tape.
Noel
> From: Will Senn
> I don't think adb was in v6, where the fcreat function and buf struct
> are used... Were Maranzano and Bourne using some kind of hybrid 6+ system?
In addition to the point about skew between the released and internal development,
it's worth remembering just how long it was between the V6 and V7 releases, and
how much ground was covered technically during that period.
A lot of that stuff leaked out: we've talked about the upgraded 'Typesetter C'
(and compilers), which a lot of people had, and the V6+ system at MIT
(partially sort of PWB1) had both 'adb' and the stdio library. The latter also
made its way to lots of places; in my 'improved V6 page':
http://www.chiappa.net/~jnc/tech/ImprovingV6.html
it talks about finding the standard I/O stuff in several later V6 repositories,
including a UNSW tape. But it and Typesetter C, also on the Shoppa pack, were
clearly quite widespread.
Noel
I've done research on this, but I'm confused and would appreciate some
help to understand what's going on. In the 7th edition manual, vol 2,
there's an ADB tutorial (pp. 323-336). In the tutorial, the authors,
Maranzano and Bourne, walk the reader through a debugging session. The
first example is predicated on a buffer overflow bug and the code includes:
    struct buf {
        int fildes;
        int nleft;
        char *nextp;
        char buff[512];
    } bb;
    struct buf *obuf;
    ...
    if((fcreat(argv[1],obuf)) < 0){
    ...
Well, this isn't v7 code. As discussed in the v7 manual vol 1 (p. VII):
Standard I/O. The old fopen, getc, putc complex and the old –lp package
are both dead, and even getchar has changed. All have been replaced by
the clean, highly efficient, stdio(3) package. The first things to know
are that getchar(3) returns the integer EOF (–1), which is not a
possible byte value, on end of file, that 518-byte buffers are out, and
that there is a defined FILE data type.
The buffers are out, fcreat is gone, etc. So, what's up with this? I
don't think adb was in v6, where the fcreat function and buf struct are
used... Were Maranzano and Bourne using some kind of hybrid 6+ system?
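For comparison, the V7 stdio way of doing that job (my own minimal
example, not text from the tutorial) needs no hand-rolled struct buf at
all:

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        FILE *obuf;

        if (argc < 2)
            return 1;
        if ((obuf = fopen(argv[1], "w")) == NULL) {
            fprintf(stderr, "cannot create %s\n", argv[1]);
            return 1;
        }
        putc('x', obuf);    /* stdio does the buffering for you */
        fclose(obuf);
        return 0;
    }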
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
OK. I'm plowing through a lot of issues with the putative 2.11BSD
reconstructions I've done to date. I keep finding things dated too new to
be right.
And it turns out that a few patches "snuck in" when the patch 80 catch up
was done. I've outlined the ones I've found so far at
https://bsdimp.blogspot.com/2020/08/missing-211bsd-patches.html but I'm
sure there's at least one more. There was much ambiguity over /usr/new and
/usr/local that lead to some of these, but others look like they were in
the master tree, but never formally published that have all the hallmarks
of legit bug fixes...
I've also detailed the issues in going backwards. 2.11BSDpl195 had a
different .o format than 2.11BSDpl0. And to make matters worse, its
assembler couldn't assemble the assembler from the initial release, so I
had to get creative (using apout, thanks to all who contributed to that!).
I've also blogged about how to walk back a binary change when the old
programs no longer build on the new system. I think I got lucky that it was
possible at all :).
https://bsdimp.blogspot.com/2020/08/bootstrapping-211bsd-no-patches-from.ht…
has the blow by blow. There are a lot of steps to building even a normal
system... Let alone walking through the minefield of errors that you need
to do when stepping back...
And neither of these even begin to get into the issues with the build
system itself requiring workarounds for that...
But anyway, I keep making "ur2.11BSD" tapes, installing them and fixing the
issues I find... While much information was destroyed in the process,
there's a surprising amount of redundancy that one can use to 'test'
putative tapes.
Warner
P.S. ur2.11BSD is from urFOO in linguistics, meaning the original FOO
that's been lost and about which some amount of reconstruction / speculation
is offered. Still looking for a good name for the reconstructed
2.11BSD release....
Probably the same as others do: when I'm implementing some 'trace'
messages in a new program, or one just 'under investigation', I try to
make sure the message has a nice format so that I can process a few
megabytes of logfile easily.
Cheers,
uncle rubl
John Gilmore:
Yes -- but [Bell Labs'] administration was anything but egalitarian or
meritocratic. I know someone who had immense trouble getting inside the
door at the Labs because "all" he had was a bachelor's degree. Let
their character be judged by how they treated a stranger.
Sign me proud to have succeeded in life with no degrees at all,
====
That was where local management came in.
I have no degrees at all. I haven't been nearly as
successful in many ways as John, but I was recruited
and hired by 1127. That I had no degree meant I was
initially hired as a `special technical assistant'
rather than a `member of technical staff,' but my
department head and director and executive director
(the last was the legendary Vic Vyssotsky) worked
tirelessly on my behalf, without my pushing them at
all, to get me upgraded, and succeeded after I'd been
there about a year. It was only later that I realized
just how much work they'd done on my behalf.
The upgrade gave me a big raise in pay, but I was
young enough and nerdy enough not to notice.
Within the 1127 culture there was no perceptible
difference; it was very much an egalitarian culture.
I felt respected as an equal from the start (really
from the day and a half I spent interviewing there).
Not every part of the Labs, let alone AT&T, was like
that, especially outside of the Research area. I
didn't realize it initially but that was one of the
ways I benefited from the success of UNIX (that 1127's
and 112's management could push past such bureaucratic
barriers).
After all, Ken never had more than an MS.
Norman Wilson
Toronto ON
The Computer History Museum has an interesting blog post about
Dennis Ritchie's lost dissertation:
https://computerhistory.org/blog/discovering-dennis-ritchies-lost-dissertat…
An interesting fact is that Dennis never received his PhD because he failed
to provide a bound copy of his dissertation to the Harvard library.
Kirk McKusick
> The use of honorifics was subtly discouraged at the Labs. I never saw a
policy statement, but nobody I knew used "Dr" (except those in the medical
department)
With the sole exception of the president's office, secretaries were
instructed not to say "Dr so-and-so's office" when they picked up an
unanswered phone call. (When that happened you could be sure that
the party you were calling was genuinely unavailable. Part of the
AT&T ethos--now abandoned--was that everybody, right up to the
president, answered their own phones.)
Doug
On another front: I know I've asked this before about v6, and possibly in
relation to v7, but I can't find the notes anywhere. vi doesn't come with
v7. So, has anybody put it on v7 in simh? I saw a thread some time back
where vi on v7 wasn't the main topic, in which Warren (I think it was) said
he'd done it and it was "easy." I don't suppose there are any notes
lying around telling how this might be accomplished?
I do see vi in 2bsd.tar; I don't suppose there is a 'how to install
2bsd on v7' note around either?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Is there a full BSDI git repo anywhere?
I've vague recollections that parts were merged into FreeBSD in the early
2000s, so I assume it was open-sourced?
There is a tarball of BSDI 2 on venus wetware, but that's the best I
can do with searching.
--
Steve Mynott <steve.mynott(a)gmail.com>
cv25519/ECF8B611205B447E091246AF959E3D6197190DD5
> From: Will Senn
> I don't really understand how they work.
> ...
> maybe the way to understand the unix regex lies in a careful
> investigation into how it is implemented
I'm not sure what I did, but it wasn't the latter, since I have no idea how
they are done!
I just mentally break the regex search string up into substrings (I use them
most in Epsilon, which has syntax to do substrings of search strings, which
helps a lot); past that, I think it's just using them and getting used to
them.
> an IBM 7094 (whatever that is)
IBM's last 36-bit scientific mainframe before the System/360's. CTSS (which
DMR held out as the ancestor of Unix) ran on 7094's.
Noel
> "My graduate school experience convinced me that I was not smart enough to
> be an expert in the theory of algorithms and also that I liked procedural
> languages better than functional ones."
>
> Amen to that. Me too, I tried functional languages and my head hurt. C
> seems so natural to me.
Dennis made quite a generalization from a sample of one--Lisp,
the only functional language that existed when he was in grad
school. I'm sure he'd agree today that functional languages
shine for applications rooted in algebraic domains. I
immodestly point to www.cs.dartmouth.edu/~doug/powser.html,
which has nothing to do with Unix, but certainly would have
appealed to Dennis.
Doug
> From: Angelo Papenhoff
> I believe 11/20 UNIX also needs the EAE.
Some applications might have used it (the story about the KS11 bug with the
KW11-A confirms they did use it on that machine), but I found no trace of use
of it in a quick scan of the entire Version 1 source (the one which is
extant).
Also, the first file in the OS source:
https://www.tuhs.org/cgi-bin/utree.pl?file=V1/u0.s
lists the addresses of all device registers, and the KE11-A isn't there.
If the KE11 is needed to run some application on the -11/04, there are
KE11-B's (program compatible, but a single hex card) available, ISTR. For
emulation, something (SIMH?) supports it, since the TV-11 on ITS (now running
in emulation, I'm pretty sure) uses it.
Noel
> From: Will Senn
> can I run Unix on a PDP 11/04?
No, it doesn't have memory management, so not any of the well-known 'stock'
versions (V5/V6/etc).
Two choices, though:
- If you get the V1 that ran on an -11/20 (which is mostly compatible with
the /04 and /05), it should run on an /04. (Not sure what you'd use for mass
storage, on a physical /04, though.) I'm not sure when they dropped the /20 -
I think V4 (at the latest)? But V2 and V3 are lost.
- There's a 'Unix' for the LSI-11, and with minor changes (the LSI-11 isn't
100% compatible with other MMU-less 11's, but the changes are minor, e.g.
MOS, written in MACRO-11, was conditionalized to run on both the LSI-11
and the -11/20) it should run on an /04.
Noel
> From: Clem Cole
> And if you could find a KS-11 MMU that Ken and Dennis had for the 11/20
> ... we can't even find documentation about it (Ken's surviving code is
> the best doc we have).
Where is that code? The Version 1 at TUHS appears to pre-date it.
It would be great to have a look at it, we might be able to partially document
the KS11 using it. (Ken had only vague memories of the KS11.)
> From: Ronald Natalie
> There's always MiniUnix.
Ah; I didn't realize that was something different from LSX (the LSI-11
system).
Noel
The question is can I run Unix on a PDP 11/04? I've dug around and it's
unclear to me, so I'm asking y'all.
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I'm trying to get 2bsd.tar extracted into v7. Does anyone have any
recollection how to do this. Here's what I'm seeing:
in simh:
simh> att tm0 -V -F TAR 2bsd.tar
Tape Image '2bsd.tar' scanned as TAR format.
contains 4935680 bytes of tape data (482 records, 1 tapemarks)
c
and in v7:
tar xv0
tar: bin/ - cannot create
directory checksum error
# ls
bin
What gives - it says it can't create the dir, but it does, it's there...
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> There was quite some communication between Peter Nilson (npn, known
> for picasso) and bwk.
In the interest of accuracy npn's full name is Nils-Peter Nelson.
He honchoed the Bell Labs Cray and originated <string.h>.
Doug
Hi folks,
I used thack to typeset my dissertation on v7 circa 1988. It converts C/A/T
codes to PostScript. I have no idea if it will cope with eqn or pic, but it
was enough for nicely formatted text, as I remember.
This is the patch/bugfix, with a link to the original package:
https://www.tuhs.org/Usenet/comp.sources.misc/1989-July/001073.html
-Steve
Nemu Nusquam:
When was dpost born?
=====
CSTR 97, A Typesetter-Independent TROFF by Brian W Kernighan
was issued in 1981 and revised the next year. So that's the
earliest possible date.
I vaguely remember the existence of Postscript support in
general, including at least one Apple Laserwriter kicking
around somewhere, starting at some point during my time at
1127 in the latter 1980s. There was even a Postscript
display engine that ran on 5620 terminals under mux, though
it wasn't normally used for troff previewing because the
troff-specific proofer was faster (mainly, I think, it
didn't send nearly as much data down the serial line to
the terminal).
My personal snapshot of V10, and the TUHS archive copy,
include dpost; see src/cmd/postscript/dpost. Everything
in the postscript directory came from USG, who had
packaged everything troff into a separately-licensed
Documenter's Workbench package. That may have made us
exclude it from the officially-distributed V8 tape and
V9 snapshots. In any case, the only V9 snapshot I know
of offhand (which is in Warren's archive) has no dpost.
Both my copy of V10 and the TUHS copy show dpost's
source files with dates in 1991, but it was certainly
there earlier if I used it in New Jersey (I left in
mid-1990). Dpost is documented in man8/postscript.8;
my copy of that file is dated October 1989.
Digging around in documents available on the web,
I found a bundle of DWB 2.0 docs:
http://www.bitsavers.org/pdf/att/unix/Documentors_Workbench_1989/UNIX_Syste…
It's a scanned-image PDF so I can't search it by
machine, but it includes such things as listings of
the source-code directory and manifests of various
binary distributions, and dpost doesn't appear anywhere
I can see. As the URL implies, the docs seem to
be dated 1989. So maybe dpost wasn't part of the
product until DWB 3.0; but maybe we in Research got
an early copy of the postscript stuff (I think bwk
was in regular communication with the USG-troff
folks), perhaps in 1989.
I confess I've lost track of the original question
that spawned this thread, but if it is whether
dpost is easily back-ported to PDP-11 UNIX, I don't
think that's likely without a good bit of work.
It would very likely require a post-1980-type C
compiler, since it was written in the late 1980s.
It might or might not fit on a PDP-11; I don't
remember whether USG's system still officially
ran there by the late 1980s.
Norman Wilson
Toronto ON
> From: Paul Riley
> I'm struggling however with how C processes the IO. It seems that when I
> type at the console, my typing is immediately echoed to my terminal
> window. ... nothing appears on the terminal until I press enter, when
> the system displays the whole line of input ... How
> can I suppress the original C/Unix echo, and get my output to appear
> immediately?
This is not a C issue; it's the Unix I/O system (and specifically, terminal I/O).
Normally, Unix terminal input is done line-at-a-time: i.e. the read() call to
the OS (whether for 1 character, or a large number) doesn't return until an
entire line has been typed, and [Return] has been hit; then the entire line is
available. While it's being buffered by the OS, echoing is done, and rubout
processing is also performed.
One can suppress all this; there's a mode called 'raw' (the normal mode is
sometimes labelled 'cooked') which suppresses all that, and just gives one the
characters actually typed, as they are typed. The stty() system call can be
used to turn this on.
See the V6 tty(IV) manual entry for more. stty() is in stty(II).
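To make that concrete, here's a sketch of the sort of call sequence involved
(my illustration, not from the manual; I believe the V6 mode bits are raw=040
and echo=010, but treat the exact values as an assumption and check tty(IV)):

/* sketch: put the controlling terminal into raw, no-echo mode, V6 style */
int ttys[3];            /* speeds, erase/kill characters, mode flags */

rawmode()
{
        gtty(0, ttys);          /* fetch current modes; see stty(II) */
        ttys[2] =| 040;         /* set the RAW bit */
        ttys[2] =& ~010;        /* clear the ECHO bit */
        stty(0, ttys);          /* install the new modes */
}

(The =| and =& spellings are V6 C; in V7 C they become |= and &=.)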
Noel
> From: Clem Cole <clemc(a)ccc.com>
> another old V6 trick is the use file redirection from the different
> terminal to unlock a hosed tty.
'stty {mumble} > /dev/ttyX', in case that wasn't clear.
Note that this only works if you have a working shell on _another_ terminal,
so if you're e.g. working with an emulation which has only one working
terminal, and your raw-mode program on it has gone berserk, 'See Figure 1'.
It really is advised to have another working terminal if you want to play
with raw-mode programs.
Noel
I got a diff for adding actual backspace and delete to v7, linked off of
gunkies... Anyhow, I can manually edit the referenced files and rebuild,
but I would rather do it canonically. I don't see patch anywhere, so did
v7 users use diffs to patch source and if so what's the magic?
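(One period-plausible way to do it, offered as a sketch rather than the
canonical answer: pre-patch(1), fixes were often shipped or applied as ed
scripts. The file names below are made up.)

# if you have both old and new copies, make an ed script instead of a diff:
diff -e tty.c.orig tty.c.new > fix.ed
# apply it to a working copy; the appended 'w' makes ed write the result out:
cp tty.c.orig tty.c
(cat fix.ed; echo w) | ed - tty.c

Failing that, I suspect it was simply a matter of reading the diff and making
the changes by hand in ed.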
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Echoing other answers, I regularly use groff in Cygwin.
If you're into Unix/Linux, Cygwin is a great tool with a
remarkably clean installation process. I use default
PostScript output by choice, because I can tinker with
PostScript but not with PDF. ps2pdf (available from
Cygwin) has always worked when I need PDF.
I must admit, though, that this approach will be pretty
onerous if you do not want Cygwin for any other
reason. And I should add that PostScript requires a
special viewer; I use gsview.
Doug
As y'all know, I'm a relative latecomer to the world of Unix, but I do
try to figure out how y'all did it back when. So, sometimes, as in this
case, I can figure out how to do something, but I'm curious how it was
done back in the day, more so than how I can get it done today. I'm
looking at the patching of my shiny new 2.11 BSD pl 431 system running
on my speedy little virtual PDP-11/70, so I grabbed patch 432 (here's a
portion of the patch):
...
To install the update cut where indicated below and save to a file
(/tmp/432) and then:
cd /tmp
sh 432
./432.sh
./432.rm
sh 432.shar
patch -p0 < 432.patch
Watch carefully for any rejected parts of the patch. Failure of a
patch typically means the system was not current on all preceeding
updates _or_ that local modifications have been made.
...
====== cut here
#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create:
# 432.rm
# 432.sh
# 432.shar
# 432.patch
...
# End of shell archive
This seems straightforward. Just follow the directions et voila magic
happens.
My questions for y'all are how would you go about doing this? Use vi to
delete everything through the ==== cut here line? Use some combination
of standard unix tools to chop it up? What was your workflow for
patching up the system using these files?
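(For what it's worth, a sketch of how I'd expect the extraction to go with
standard tools; the saved-message filename is just something I made up.)

# save the mail message, then drop everything up to and including the cut line:
sed '1,/^====== cut here/d' 432.mail > /tmp/432
cd /tmp
sh 432          # the shell archive unpacks 432.rm, 432.sh, 432.shar, 432.patch
./432.sh
./432.rm
sh 432.shar
patch -p0 < 432.patch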
In my world, if I screw something up, it's 15 seconds to run a restore
script in my simh directory and I can try again, so my level of concern
for a mistake is pretty low. If I were doing this in 1980, on real
hardware, I would have had many concerns. As I'm sure some of y'all can
remember, how did you prepare and protect yourselves so that a patch was
successful?
BTW, I thought .shar was an archive format, so when I saw the patch was
a shar file, I was worried it would be in some binary form, lo and
behold, it looks like text to me... not even b64. So much to learn, so
little time.
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Larry McVoy
> Yeah, write is unbuffered though I think Noel is correct, it's going to
> a tty and the tty will buffer until \n
The 'wait until newline' is on the input side.
Output is buffered (in the sense that characters are held in the kernel until
the output device can take them); but normally output will start to happen as
soon as the device is able to take them. Only a certain amount can be
buffered though, after that (the 'high water', I think it's called), the
process is blocked if it tries to do output, and awakened when the buffered
output level goes past the 'low water' mark.
Note that getchar() and putchar() are subroutines in a library; looking
at the source:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s4/getchr.s
you can see how they relate to the actual read/write calls to the OS.
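To make the library/OS split concrete, here is a deliberately simplified C
rendering of the unbuffered case (my sketch only; the real V6 putchar is
assembly, putchr.s in that same directory if I recall correctly, and handles
fout/buffering details glossed over here):

int fout = 1;           /* output file descriptor; stdout by default */

putchar(c)
{
        char ch;

        ch = c;
        write(fout, &ch, 1);    /* hand one byte to the kernel */
        return(c);
}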
> So you probably have to set the tty in raw mode
Probably best to run such programs from something other than the main console,
because if there's a bug in the program and the terminal is stuck in raw mode,
and you're on the console, you may have to reboot to regain control of
the system. (Interrupt characters, ^D, etc. won't work.)
> (sorry that I'm vague, I never ran V6).
That's OK, I pretty much have the V6 kernel memorized, from working with
it back in the day... :-)
Noel
I am using C in V6 to create a Forth compiler. I don't have any interest in
Forth as a general-purpose language; however, I do like it. It's satisfying
to create a language from the ground up. I'm interested in it as a simple and
extensible interpreted language for toying with my PDP-11s, so I'll have
some fun with it in the future.
I have a very basic problem. I am simply using getchar and putchar for
console IO, which is convenient indeed. I'm struggling however with how C
processes the IO. It seems that when I type at the console, my typing is
immediately echoed to my terminal window. When I press backspace the system
responds with the character that was deleted, which is fine, as I assume it
was from the paper teletype days. However, my code is receiving input and
also echoing the output, but nothing appears on the terminal until I press
enter, when the system displays the whole line of input, which is
essentially a duplicate of what the terminal originally displayed, but with
the consolidated edits. My code is reading and echoing the input character
by character.
Here's my question. How can I suppress the original C/Unix echo, and get my
output to appear immediately? This simple sample code from the C
programming manual behaves the same.
int main() {
        int c;

        while ((c = getchar()) != EOF) {
                putchar(c);
        }
}
Paul
*Paul Riley*
Mo: +86 186 8227 8332
Email: paul(a)rileyriot.com
I'm hearing that 50% of what's left of AT&T research got the axe today.
I'm hoping to hear from friends about details.
God's gift to google, as we have said in the past.
Henry Bent:
Are there any former Bell Labs sysadmins on this list? My father was the
sysadmin for hoh* (Crawford Hill, mostly the radio astronomy folks) in the
mid-late '80s and early '90s and I would be especially interested to hear
from hou* (Holmdel, what a building!) folks but also ihn* (Indian Hill) and
the like. I'm very curious about what life was like on the ground, so to
speak.
=====
It is worth pointing out that, like many universities, Bell
Labs had at least two layers of computing and therefore of
sysadmins. There were official central Comp Centers, which
is the world Henry asks specifically about; but there were
also less-central computing facilities at the divisional and
center and department level.
I never worked for a comp center, but my role in 1127 was
basically that of sysadmin for our center's own systems;
the ones used both for our day-to-day computing (including
that known to the uucp world as research!) and for OS and
other experiments and research and hacking.
There were also two large VAXes called alice and rabbit,
which afforded general-use computing for other departments
in Division 112. Us 1127 sysadmins (I wasn't the only one
by any means) ran those too; I don't know the full history
but I gather that happened because there was some desire
to run the Research version of UNIX rather than the comp-center
one for that purpose.
One system I had hands in straddled the researcher/comp-center
boundary: 3k, the Cray X-MP/24 bought specifically for
researchers. It was physically located in the comp center,
but managed jointly: it ran Cray's UNICOS but with
substantial additions from the Research world, including
the stream I/O system (which is rather self-contained so
it was not too hard to graft into a different UNIX) and
Datakit support (using a custom interface board designed
and built by Alan Kaplan and debugged by me and Alan
over several late-night sessions). (I remember that
the string "feefoefum\n", which is an obscure cultural
reference from one of my previous workplaces, was
particularly effective at shaking out bugs!)
Once UNICOS-a-la-Research was running stably, staff from
the Murray Hill Comp Center looked after day-to-day
operations, with Research involved more for software
support as needed.
Norman Wilson
Toronto ON
Are there any former Bell Labs sysadmins on this list? My father was the
sysadmin for hoh* (Crawford Hill, mostly the radio astronomy folks) in the
mid-late '80s and early '90s and I would be especially interested to hear
from hou* (Holmdel, what a building!) folks but also ihn* (Indian Hill) and
the like. I'm very curious about what life was like on the ground, so to
speak.
I'll start off with a quick anecdote. When I started college I began
working for the computing center doing menial jobs. There was an older,
ex-army guy leading the networking department who extolled the virtues of
the VAX up and down; I think Oberlin would have kept the VAX 6620 running
VMS 5.5 forever if he had his way. Anyway, I mentioned his position to my
father and he told me that the best thing he ever did was replace the VAX
systems (and the endless mounting/remounting of RL02s for user data) with
early generation Sun 4 systems.
-Henry
Coming upon this dedication in W.B. Yeats's book, "Irish Folk
stories and Fairy Tales", I felt a frisson of connection: "Inscribed
to my mystical friend, G. R." Mystically present in the Unix room
and on the 1127 org chart, G. R. Emlin took a place in our little
community akin to that of fairies in Irish peasant culture. First
encountered by Fred Grampp, G. R. manifested to others in various
guises, ranging from Grace Emlin, whom I remember as a security guru,
to a Labs-badged apparition now housed in the corporate archives
(www.peteradamsphoto.com/unix-folklore/). My most memorable personal
encounter was when I received from G. R. a receipt for paint for the
water-tower project (spinrooot.com/pico/pjw.html) as part of a request
for reimbursement. I passed the voucher up the chain of command to
our executive director, Vic Vyssotsky. Unfortunately for G. R., Vic,
despite his masterful ability to bypass bureaucratic obstacles,
declared he wasn't authorized to approve plant improvements.
Doug
Hi All,
I'm currently doing some work with 211BSD and the best version that I've
come across for my investigations is the one put together by Andre
Luvisi, based on the distro in the Unix Archive at
https://tuhs.v6shell.org/UnixArchiveMirror/Distributions/UCB/2.11BSD
So far as I can figure out (and I'm a little bit fuzzy around the
edges), this appears to be patch level 431, at least according to
https://tuhs.v6shell.org/UnixArchiveMirror/Distributions/UCB/2.11BSD/VERSION.
I have a number of questions that hopefully, someone can shed some light on:
1. Is it really pl 431?
2. How can I tell?
3. Is it the latest tape image available (I've seen plenty of disk
images, but those are already installed)?
4. Is there a howto bring it up to the next patch level document lying
around somewhere?
I've seen Warner's work on going the other direction and that's
fascinating in its own right, but I'd like to see about patching up to
the latest.
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Well, I figured out number 4, duh!
4. Is there a howto bring it up to the next patch level document lying
around somewhere?
Each patch is self documenting :). Just do what it says and it should
work... we'll see!
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
On 7/19/20, emanuel stiebler <emu(a)e-bbes.com> wrote:
>
> That's why DEC made also the MicroVAX. I had once a MVII/BA23 in my
> samsonite. Weird look at customs, but worked ;-)
>
By the early 1980s it was apparent that some of the more complicated
VAX instructions weren't worth the space they took up in firmware.
Especially POLY and EMOD, which turned out to be both slower and less
accurate than coding them up as subroutines. And the PL/I and COBOL
compilers were implementing packed decimal using decimal shadowing.
Chucking out those instructions and doing them by emulation in the OS
freed up enough chip real estate to allow the remaining VAX
architecture to be implemented on a chip. All the later VAXen
supported only the MicroVAX subset architecture in hardware/firmware.
I don't recall which was the last VAX to support the whole
architecture in hardware/firmware. Perhaps the VAX 8200/8300
(Scorpio)? That was a single-board implementation. It could be
paired with a high-end Evans & Sutherland 3D graphics monitor. DEC
tried unsuccessfully to use that combination to compete with Apollo in
the workstation market, but it was too little too late. One reviewer
said that coupling the E&S graphics to the VAX 8200 was like
turbocharging a lawn mower. Did Unix support that configuration, or
was it VMS-only?
-Paul W.
IMMSMC, early Linux had problems with some good old programs, e.g. Sendmail.
Newborn Linux wavered between POSIX, BSD, and Solaris semantics,
e.g. in file locking, ptys, network interface binding, etc.
From the early Sendmail README:
===
Linux
Something broke between versions 0.99.13 and 0.99.14 of Linux:
the flock() system call gives errors. If you are running .14,
you must not use flock. You can do this with -DHASFLOCK=0.
Around the inclusion of bind-4.9.3 & linux libc-4.6.20, the
initialization of the _res structure changed. If /etc/hosts.conf
was configured as "hosts, bind" the resolver code could return
"Name server failure" errors. This is supposedly fixed in
later versions of libc (>= 4.6.29?), and later versions of
sendmail (> 8.6.10) try to work around the problem.
Some older versions (< 4.6.20?) of the libc/include files conflict
with sendmail's version of cdefs.h. Deleting sendmail's version
on those systems should be non-harmful, and new versions don't care.
Sendmail assumes that libc has snprintf, which has been true since
libc 4.7.0. If you are running an older version, you will need to
use -DHASSNPRINTF=0 in the Makefile. You may be able to use -lbsd
(which includes snprintf) instead of turning this off on versions
of libc between 4.4.4 and 4.7.0 (snprintf improves security, so
you want to use this if at all possible).
NOTE ON LINUX & BIND: By default, the Makefiles for linux include
header files in /usr/local/include and libraries in /usr/local/lib.
If you've installed BIND on your system, the header files typically
end up in the search path and you need to add "-lresolv" to the
LIBS line in your Makefile. Really old versions may need to include
"-l44bsd" as well (particularly if the link phase complains about
missing strcasecmp, strncasecmp or strpbrk). Complaints about an
undefined reference to `__dn_skipname' in domain.o are a sure sign
that you need to add -lresolv to LIBS. Newer versions of linux
are basically threaded BIND, so you may or may not see complaints
if you accidentally mix BIND headers/libraries with virginal libc.
If you have BIND headers in /usr/local/include (resolv.h, etc)
you *should* be adding -lresolv to LIBS. Data structures may change
and you'd be asking for a core dump.
I've managed to reach an important milestone in my efforts to recreate
2.11BSD pl 0 from sources. I've managed to unwind the patches and recreate
some programs that were lost (the patches destroyed data and weren't modern
context diffs).
So I've managed to unwind back to pl 0, I've managed to bootstrap the
assembler (it wouldn't build on pl 195), recreate ld, ranlib and ar. I've
rebuilt the libraries and many binaries.
https://bsdimp.blogspot.com/2020/07/211bsd-original-tapes-recreation.html
No tapes yet, but I thought people here would like to know.
Warner
(if this is better suited for COFF, that'd be fine too)
I've been trying to set up UUCP on my V7 system and its raspberry Pi host.
This plus the "s" editor (already working) are really all that's needed to
make this something pretty close to a daily driver, if all I wanted to do
was write text files (which in some sense is all my job _is_, but to be
fair I get a much more immediate feedback loop in my current environment).
I was following
https://github.com/jwbrase/pdp11-tools/blob/master/howtos/V7%20UUCP%20Insta…
more or less--I had already rebuilt v7 with the DZ terminal driver and was
using it for interactive sessions (albeit, before I started trying to get
UUCP running, with 7-bit line discipline--but I've since changed that).
I have 16 DZ lines, I've set them to 8-bit mode. They're working fine,
because I can use them for terminal sessions.
I've built UUCP, set a node name, and set it up on the pi.
I can execute uucico to send files, and it, frustratingly, almost works.
From the Pi side, I see (with uulog):
uucico v7 - (2020-07-03 08:11:34.97 23106) Calling system v7 (port TCP)
uucico v7 - (2020-07-03 08:11:42.25 23106) Login successful
uucico v7 - (2020-07-03 08:11:44.44 23106) Handshake successful (protocol
'g' sending packet/window 64/3 receiving 64/7)
uucico v7 adam (2020-07-03 08:11:51.61 23106) Sending
/home/adam/git/simh/sim_scsi.h (6780 bytes)
uucico v7 adam (2020-07-03 08:16:21.79 23106) ERROR: Timed out waiting for
packet
uucico v7 - (2020-07-03 08:16:21.80 23106) Protocol 'g' packets: sent 86,
resent 6, received 1
uucico v7 - (2020-07-03 08:16:21.80 23106) Errors: header 2, checksum 0,
order 0, remote rejects 0
uucico v7 - (2020-07-03 08:16:22.51 23106) Call complete (283 seconds 5440
bytes 19 bps)
So it's clearly logging in, and if I telnet in directly, the v7 end is
starting uucico as expected:
login: pi-uucp
Password:
Shere
uulog -x on the v7 side has no output, and nothing ever appears in the
spool directory, which I suspect is a direct result of the timeout waiting
for packet.
So my question is, what else do I do to debug this? Clearly the pi (Taylor
UUCP) side is expecting something else--maybe an acknowledgement?--from the
v7 side to let it know the transmission was successful.
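One more thing worth trying (a suggestion from memory of the V7 uucp write-up,
so treat the exact option letters and paths as assumptions to check against the
source): drive a call from the v7 side by hand with debugging turned up, and
watch how far the 'g' handshake gets, e.g.

# on the v7 machine, as the uucp owner; "pi" is a hypothetical system name
/usr/lib/uucp/uucico -r1 -spi -x7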
Any help would be appreciated.
Adam
John Linderman:
Every "divestiture" had an adverse effect on critical mass. The split
between AT&T and Bellcore was a big hurt.
The split between AT&T and Lucent was another. When I joined the Labs in
1973, it was an honor to work there.
====
Maybe I'm blinded because I wasn't there earlier, but
to me, joining Bell Labs in 1984, just after the original
divestiture that split off Bellcore, was still an honour.
There were certainly good people I never had a chance to
work with because they went to Bellcore, but in 1127 at
least, morale was good, management stayed out of our way
and encouraged researchers to work on whatever interested
them, and a lot of good work was done even if that group
was no longer the source of All UNIX Truth. (In fact I
think we missed the boat on some things by being too
inwardly-focussed, TCP/IP in particular, but divestiture
didn't cause that.)
It seemed to me that the rot didn't really begin to show
until around 1990, the time I left (though not for that
reason; this is hindsight). Upper management were
visibly shifting focus from encouraging researchers to
do what they did best to treating researchers as a source
of new products to be marketed. The urge to break the
company up further seems to me to have been a symptom,
not a cause; the cause was a general corporate shift
toward short-term profits rather than AT&T's traditional
long-term view. AT&T was far from alone in making this
mistake, and research in the US has suffered greatly all
over as a result.
I remember visiting a couple of years after I left, and
chatting with my former department head. He said 1127
was having trouble convincing new researchers to join up
because they'd heard (correctly) that the physics and
chemistry research groups were being cut back, and feared
computing science would have its own reckoning soon enough.
In fact the corporate direction of the time was to cut
back on the physical sciences and push to expand software
research and development, but I don't blame the new
researchers for being concerned (nor did my ex-DH), and
in the long term they turned out to be more right than
wrong.
Nothing lasts forever, but the classic Bell Labs lasted
a long time. We have nothing like it now. I don't think
we'll have anything like it any time soon. That's sad.
Norman Wilson
Toronto ON
Preferring fewer emails, but not wanting to miss out on topics
that had not occurred to me, I would continue to subscribe
to the digest and not switch to a topic-filtering option.
Doug
On 7/6/20 7:30 PM, Greg 'groggy' Lehey wrote:
> People (not just Clem), when you change the topic, can you please
> modify the Subject: to match? I'm not overly interested in uucp,
> but editors are a completely different matter. I'm sure I'm not the
> only one, so many interested parties will miss these replies.
I see this type of change happen — in my not so humble opinion — /way/
/too/ /often/.
So, I'm wondering if people are interested in configuring TUHS and / or
COFF mailing list (Mailman) with topics. That way people could
subscribe to the topics that they are interested in and not receive
copies of topics they aren't interested in.
I'm assuming that TUHS and COFF are still on Mailman mailing lists and
that Warren would be amicable to such.
To clarify, it would still be the same mailing list(s) as they exist
today. They would just have an optional facility for picking
interesting topics, where topics would be based on keywords in the
message body.
I'm just trying to gauge people's interest in this idea.
--
Grant. . . .
unix || die
Dave Horsfall:
A boss of mine insisted that we all learned "ed", because one day it might
be the only editor available to you after a crash; he was right...
====
Besides which, as The Blessed Manual said in every
Research edition:
ed is the standard text editor.
Norman Wilson
Toronto ON
(typing this in Toronto qed)
John Cowan:
Very much +1. Part of the trouble is that Gmail and similar clients don't
routinely show you the Subject: line to make it easy to edit it; you have
to take affirmative action when you want to change the subject matter.
====
Perhaps we should take a leaf from 1980s Rob Pike, and
just automatically change every message to be
Subject: The content of this message
(There is an actual story behind that, but I'll leave it
to Rob to tell.)
Norman Wilson
Toronto ON
When googling for File System Switch or Virtual File System most sources mention Sun NFS and SysVr3 as the earliest implementations. Some sources mention 8th Edition.
I did a (short) search on FSS/VFS in earlier, non-Unix OS’s (Tenex, Multics, CTSS, etc.), but none of those seem to have had a comparable concept.
Does anybody recall prior art (prior to 1984) in this area?
Paul
All, I just received this e-mail from a non-TUHS list member. If you have
an answer for Michael, could you reply to him and pop a cc here as well?
Thanks, Warren
----- Forwarded message from Michael Siegel <msi(a)malbolge.net> -----
Date: Sun, 14 Jun 2020 16:37:59 +0200
From: Michael Siegel <msi(a)malbolge.net>
To: wkt(a)tuhs.org
Subject: Origins and life of the pg pager
Hi there,
I'm trying to find out where the pg pager originated.
The research I've done so far vaguely suggests it came with one of the
System V versions, though the Internet occasionally claims it to be “the name
of the historical utility on BSD UNIX systems”.[1]
I think System V because the source code of pg.c in the util-linux
package says that this utility is “a clone of the System V CRT paging
utility.”[2]
I'd also like to find out when pg was discarded and if it ever made it
into POSIX before that. Linux still has pg to the very day, but none of
the current major BSDs (Free/Net/Open) offer it. POSIX 2001, 2004
Edition lists it as an excluded utility.[3] I've not been able to get
the text of any prior POSIX documents. It seems they aren't freely
available.
Any ideas on how to proceed?
Best
Michael
[1] This one's from Wikipedia (https://en.wikipedia.org/wiki/Pg_(Unix))
but I've also found other sites stating the same.
[2]
https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/tree/text-ut…
[3] https://pubs.opengroup.org/onlinepubs/009696899/xrat/xcu_chap04.html
----- End forwarded message -----
Could someone point me to some information about the s editor?
Googling didn't help.
Grant Taylor:
I'm a little surprised that you're trying to use the 'g' protocol to
talk to v7. I thought the 'g' protocol came out later for TCP over
Ethernet connections. As such I wonder if UUCP on v7 supports the 'g'
protocol.
=====
You're mis-remembering. g was the original protocol,
intended for use over possibly-noisy serial lines (e.g.
modems on POTS). It does error checking of various
sorts with retransmission. I believe it is named g
after the protocol's original designer, Greg Chesson.
Later protocols meant to work over reliable, error-
checked links like a TCP/IP circuit were t and e.
Norman Wilson
Toronto ON