I was just reminded of this and thought others might enjoy reading it...
Customer Review
https://www.amazon.com/gp/customer-reviews/R2VDKZ4X1F992Q/
> PING! The magic duck!
> Using deft allegory, the authors have provided an insightful and intuitive explanation of one of Unix's most venerable networking utilities. Even more stunning is that they were clearly working with a very early beta of the program, as their book first appeared in 1933, years (decades!) before the operating system and network infrastructure were finalized. ...
-r
Perhaps the best irrefutable source for Lorinda's contribution
to dc is that cited by https://en.wikipedia.org/wiki/Lorinda_Cherry?
It is, by the way, Doug's A Research UNIX Reader. Those who
subscribe to this list and haven't read it ought to do so; it's
full of tidbits of history.
Norman Wilson
Toronto ON
All,
1. I have a physical copy of the V7 UPM published by Holt, Rinehart and
Winston (HRW) from 1983 (2 volume phone book). In it, there is a C
Reference Manual (pp. 247-277, reprinted with minor changes from the
first C book by K&R and including a Recent Changes to C addendum). I
also have a PDF that was supposedly created from sources that has a C
Reference Manual in it, but, in /usr/doc/cman, there's an inscription
that reads, "Sorry, but for copyright reasons, the source for the
C Reference Manual is not distributed." and the one in the pdf appears
to be identical to the one in the V6 UPM (which I have a print-on-demand
version of). Are the *roff sources (or a clean PDF) available for the
reprint? I have located a PDF copy of the HRW edition, but it's got the
usual deficiencies of being a scanned copy.
2. In the same manual, there is an article entitled UNIX Programming --
Second Edition by K&R. Where is the first edition located? It isn't in
the V6 UPM.
Regards,
Will
I got this from a friend today (15 February):
===========
I'm sorry to report that Lorinda passed away a few days ago. I got a call
from her sister today. Apparently the dog walker hadn't seen her for a few
days and called the police. The police entered the house and found her
there. Her sister says they are assuming either a heart attack or a stroke.
A week or so after spending an entire day meticulously mapping manpages
from version 0 to version 7, I came across Doug's combined table of
contents. I love recreating the wheel <<shakes head, ruefully>>, only
saving grace is that the geniuses who came before, in this case Doug,
had the same idea :). In it, he mentions an addendum for v7 that has
pages for unexported software for local use: apl, cflow, cpio, cref, and
a slew of other useful commands get mentioned. By unexported, I'm
gathering this means not included on the distro tapes -- I certainly
don't see them in my installed system. Were they distributed at all, or
just used internally at Bell, or what? Are there extant copies around?
To be clear, I'm talking about the unexported versions, not later
versions that might be fitted onto v7. Oh, and a bonus question: why
weren't they exported?
Will
All,
I have been doing some language exploration in v7/4.3bsd and came across
Software Tools (not the pascal version). It's written using ratfor,
which I had seen in the v7 UPM. I fired up v7 and tried my hand at the
first example:
# copy - copy input characters to output
integer getc
integer c
while(getc(c) != EOF)
call putc(c)
stop
end
The first thing I noticed was that it read more like C than Fortran (I
know C quite well, Fortran just a smidge)... awesome, right? So I ran
the preprocessor on it and got the following f77 code:
c copy - copy input characters to output
integer getc
integer c
c while
23000 if(.not.(getc(c) .ne. eof))goto 23001
call putc(c)
goto 23000
c endwhile
23001 continue
stop
end
Cool. The way it translated the EOF test is weird, but ok. Trying to
compile it results in complaints about missing getc and putc:
$ f77 copy.f
copy.f:
MAIN:
Undefined:
_getc_
_putc_
_end
Ah well, no worries. I know that they're in the c lib, but don't know about
fortran libs... Meanwhile, over on 4.3BSD, it compiles without issue.
But running it is no joy:
$ ./a.out
This is a test
$
I remembered that the authors mentioned something about EOF, so I
tweaked the code (changed EOF to -1) and rebuilt/reran:
$ ./a.out
This is a test
This is a test
$
Fascinating. Dunno why no complaints from F77 about the undefined EOF
(or maybe mis-defined), but hey, it works and it's fun.
I'm curious how much ratfor was used in bell labs and other locations
running v6, v7, and the BSD's. When I first came across it, I was under
the impression that it was a wrapper to make f66 bearable, but the
manpage says it's best used with f77, so that's not quite right. As
someone coming from c, I totally appreciate what it does to make the
control structures I know and love available, but that wasn't the case
back then, was it? C was pretty new... Was it just a temporary fix to a
problem that just went away, or is there tons of ratfor out there in the
wild that I just haven't seen? I found ratfor77 and it runs just fine on
my mac with a few tweaks, so it's not dead:
ratfor77 -C copy.r | tee copy.f
C Output from Public domain Ratfor, version 1.0
C copy - copy input characters to output
integer getc
integer c
23000 if(getc(c) .ne. eof)then
call putc(c)
goto 23000
endif
23001 continue
stop
end
What's the story? Oh, and in v6 it looks like it was rc - ratfor
compiler, which is not present in v7 or 4.3BSD - is there a backstory
there, too?
Will
I'm reading in, Kernighan & Plauger's 1981 edition of Software Tools in
Pascal and in the book, the author's mention Bill Joy's Pascal and Andy
Tanenbaum's as being rock solid. So, a few related questions:
1. What edition of UNIX were they likely to be using?
2. What versions of "Standard Pascal" were in vogue on UNIX at the time
(1981)?
3. What combinations of UNIX/Pascal were popular?
Thanks,
Will
All,
I did my research on this, but it's still a bit fuzzy (why is it that
people's memories from 40 years ago are so malleable?).
1. What are y'all's recollections regarding BSD 4.1's releases, vis a
vis the VAX. In McKusick's piece, Twenty Years of Berkeley Unix, I get
one perspective, and from Sokolov's Quasijarus project, I get quite
another. In terms of popularity and in terms of stable performance, what
say you? Was 4.1 that much better than 4BSD? Was 4.1a as immediately
obsolete as McKusick says? 4.1b sounds good with FFS, was it? 4.1c's
the last pre 4.2 release, but it sounds like it was nearly a beta
version of 4.2...
2. Sokolov implies that the CSRG mission started going off the rails
with the 4.3/4.3BSD-Tahoe and it all went pear shaped with the 4.3-Reno
release, and that Quasijarus puts the mission back on track, is that so?
3. I've gotten BSD 4.2 and BSD 4.3 releases built from tape and working
very well. I just can't decide whether to go back to one of the 4.x
releases (hence question 1), or go get Quasijarus0c - thoughts on why
one might be more interesting than another?
4. Is Quasijarus0c the end of the line for VAX 4.xBSD? Why does TUHS only
have Quasijarus0 and 0a? Was there something wrong with 0b and 0c?
5. Has anyone unearthed an original 4.1 tape, or is Haertel's
reconstruction of the 1981 tape 1 release as close as it gets?
Later,
Will
Back in September I was having serious DNS issues with my MX records. Finally was able to move to a new DNS provider.
Could someone add me (jra(a)andrusk.com) back to the mailing list?
Thanks,
Justin
Sent from ProtonMail mobile
The discussion of why the CSRG disbanded in 1995 has come up elsewhere.
My memory is that the reason was pretty simple: DARPA ended their
funding at that time.
Hoping for corrections to my memory :-)
> From: Clem Cole
> So by the late 70s/early 80s, [except for MIT where LISP/Scheme reigned]
Not quite. The picture is complicated, because outside the EECS department,
they all did their own thing - e.g. in the mid-70's I took a programming
intro course in the Civil Engineering department which used Fortran. But in
EECS, in the mid-70's, their intro programming course used assembler
(PDP-11), Algol, and LISP - very roughly, a third of the time in each. Later
on, I think it used CLU (hey, that was MIT-grown :-). I think Scheme was used
later. In both of these cases, I have no idea if it was _only_ CLU/Scheme, or
if they did part of it in other languages.
Noel
Some of the folks here might like this FB group...
Internet Old Farts Club
https://www.facebook.com/groups/internetoldfarts/
> This group is for self-declared Internet Old Farts, who want to discuss any aspect of the Internet or its history. People in this group had their bits walk uphill both ways.
> Welcome to the Internet Old Farts group.
The purpose of this group is both social and technical. Feel free to revisit the past, explore the future, grouse about technical problems that you or others created. Feel free to self-aggrandize, complain about your least favorite standards organization or its politics, and how those young whippersnappers are running the show today.
By participating in this group you are admitting or proclaiming that you are indeed an Internet Old Fart. Perhaps we should give a prize for the youngest and oldest Old Fart.
-r
There is a new book from MIT Press, edited by Harry Lewis, with a
collection of classic papers in computer science, among them
37: The Unix Time-Sharing System (1974)
Dennis Ritchie, Kenneth Thompson
DOI: https://doi.org/10.7551/mitpress/12274.003.0039
The book Web site is at
https://direct.mit.edu/books/book/5003/Ideas-That-Created-the-FutureClassic…
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Has anyone seen Fraser's original ratfor source for the s editor for unix on the PDP-11? It was a screen editor front-end built on top of Software Tools's edit. I've seen a c version, but I'm interested in the 375 line version :).
Will
Sent from my iPhone
Resending this as realised accidentally replied off list
Silas
On 30 Jan 2022, at 18:39, silas poulson <silas8642(a)hotmail.co.uk<mailto:silas8642@hotmail.co.uk>> wrote:
On 30 Jan 2022, at 18:07, Dan Stromberg <drsalists(a)gmail.com<mailto:drsalists@gmail.com>> wrote:
And is Java? They both have a byte code interpreter.
My understanding is Java is both a compiled and interpreted language -
with javac compiling java code to byte code and then JVM interpreting
and executing the byte code.
And then there's the CPython implementation of Python. <snip>
Granted, it has an implicit, cached compilation step, but is it less compiled for that?
I would say no - in my mind compiling analyses the entire source and
then translates it whilst interpreters only explore a single line or
expression. Simply because the compilation happens only Just In Time,
doesn’t make it any less of a compilation step.
Hope that helps,
Silas
One of the architectures supported by 4.4BSD, luna68k, now has a
compiled binary set available:
http://www.netside.co.jp/~mochid/comp/bsd44-build/
luna68k is the OMRON LUNA, an m68k-CPU workstation. This binary set
works on the "nono" emulator software:
http://www.pastel-flower.jp/~isaki/nono/
Its author, Isaki-san, has made some minor modifications for 4.4BSD,
and the binary set for luna68k runs rather well.
OMRON has since dropped its workstation products. The LUNA-I and
LUNA-II were equipped with the m68k, the same CPU as CSRG's main
target arch, hp300, so userland programs may be binary compatible.
-mochid
I'm working through 4.3BSD setup and configuration and came across this:
"There is no equivalent service for network names yet. The full host and network name databases are normally derived from a file retrieved from Internet Network Information Center at SRI... use gettable to retrieve the NIC host database and htable to convert it to the format used by the libraries."
Does this mean I should expect functionality like resolv.conf and ping yahoo.com not to work in 4.3, or by some miracle is gettable still a functional system?
Will
Sent from my iPhone
Hi all,
I've been hard at work on my retro-fuse project over the past few
months, and so I thought I'd update the list with my progress.
I have just released version 7 of retro-fuse on github
(https://github.com/jaylogue/retro-fuse) This version adds support for
initializing and mounting 2.9 and 2.11BSD filesystems on modern
systems. It also includes fixes for a number of bugs in v6 and v7 support.
Beyond the work on 2.11 support, I also spent a significant amount of
time building an automated test framework. I'm a pretty big fan of
automated testing. So I'm happy to say that the project now includes a
series of tests verifying basic file I/O functionality as seen from the
modern system. While not exhaustive (because filesystem testing is
/hard/) the new tests give me reasonable confidence that things are
behaving as they should.
Additionally (in what was perhaps the most fun part of the project to
date) I have also created tests to verify the integrity of the generated
filesystems as seen from the historical systems. In particular, for each
of the supported Unix versions I've built tests that: launch the os
under simulation (simh), mount the generated filesystems, verify the
filesystems using the original integrity check tools (icheck/fsck), and
enumerate and compare the filesystem contents to that generated on the
modern system. As you might imagine, this involved a lot of
learning--from how to build size-reduced system images from the original
distribution tapes, to how to implement a modern POSIX cksum command
with old dev tools. All thoroughly enjoyable.
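(If anyone wonders what a from-scratch cksum involves, here is roughly
the shape of it in portable C. This is my own from-memory sketch of the
POSIX CRC-32 algorithm, not the code I actually ran under the old
toolchains, so treat the details as assumptions.)

#include <stdio.h>
#include <stdint.h>

/* POSIX cksum: CRC-32 with polynomial 0x04C11DB7, fed MSB first,
   the byte count folded in at the end, and the result complemented. */
static uint32_t crc_byte(uint32_t crc, unsigned char b)
{
    int i;

    crc ^= (uint32_t)b << 24;
    for (i = 0; i < 8; i++)
        crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
    return crc;
}

int main(void)
{
    uint32_t crc = 0;
    unsigned long long n = 0, len;
    int c;

    while ((c = getchar()) != EOF) {
        crc = crc_byte(crc, (unsigned char)c);
        n++;
    }
    for (len = n; len != 0; len >>= 8)          /* length, LSB first */
        crc = crc_byte(crc, (unsigned char)(len & 0xff));
    printf("%lu %llu\n", (unsigned long)(~crc & 0xffffffffu), n);
    return 0;
}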
With this under my belt, I'll probably take a break from retro-fuse to
concentrate on other things. If anyone has any problems (or successes!)
using it, please drop me a line.
--Jay
Wow. Brings back memories
On Mon, Jan 24, 2022 at 1:32 PM Robert Diamond <rob(a)robdiamond.com> wrote:
> Just found this program from 1979 of an Explorer Club (aka “The Scouts”)
> Family Night, which included a talk from Ken Thompson on Computer Chess.
> There’s a few notable names in there.
>
> See PDF at
> https://drive.google.com/file/d/15fXhkk9KlmJNrhGlFnWuH23XQ09vLG4o/view?usp=…
--
Aaron Cohen
908-759-9069
> Did roff do all of what troff and nroff did?
No way. Ossanna deserves all the praise you give him. Roff extended
runoff in various ways:
relative numeric operators, e.g. .in +8
tabbing (left, right and centered)
underlining
tripartite headers and footers
arabic and roman page numbering
adjustable head and foot margins
automatic hyphenation, thanks to Molly Wagner
footnotes
merge patterns for change marks, column separators, etc.
various special requests: .ne, .ti, .tr, .po, .op (odd page)
But roff did NOT have conditionals, traps, special characters,
environments, or arbitrary motion control. Crucially (and ironically,
because I was Mr. Macro), it did not have anything like macros,
strings and diversions until after Joe pioneered them in nroff.
So there was a gaping disparity: nroff was Turing complete, roff
wasn't. Roff merely added features to runoff; nroff leapt into a
different universe.
-----------------------
The features listed above are in the January 1971 manual for BCPL
roff, which is probably the anonymous reference cited in the November
1971 v1 manual. The v1 manual lists Ossanna, Ken and Dennis as authors
of the Unix implementation. I believe Ossanna is named because he
added line-numbering--and maybe more--to entice the patent department
to switch to roff.
BCPL roff allowed all four arithmetic operators in contexts like .ls
*3. Only + and - were allowed in nroff. Eventually both BCPL roff and
nroff got number registers (defined by different commands); I don't
recall which came first. BCPL roff also got a weak macro facility,
definitely after nroff.
Doug
Hello All.
If anyone is interested in struct, I have completed updating it
for modern day systems. Thanks to Jay Logue for invaluable help in
completing the work and to Bakul Shah for his interest and support.
See https://github.com/arnoldrobbins/struct; I have merged the
modernization work into the master branch. The README.md describes
what was done in more detail.
Doug McIlroy - if you want me to add anything to the README.md, please send
it on and I will do so, quoting you as appropriate.
Jay Logue and Bakul Shah - if you want me to add anything to the
README.md, please let me know (privately).
Thanks,
Arnold
Hi all,
Has anybody ever seen a console floppy image anywhere on the internet
labeled:
/"RX11 VAX DSK LD DEV #1"/
It is referenced in BSD 4 documentation with respect to formatting disks
(edited):
USING DEC SOFTWARE TO FORMAT
Warning: These instructions are for people with 11/780 CPU’s.
You should shut down UNIX and halt the machine to do any disk
formatting.
Formatting an RP06. Load the console floppy labeled, "RX11 VAX DSK
LD DEV #1" in the console disk drive, and type the following commands:
>>>BOOT
DIAGNOSTIC SUPERVISOR. ZZ-ESSAA-X5.0-119 23-JAN-1980 12:44:40.03
DS>ATTACH RH780 SBI RH0 8 5
DS>ATTACH RP06 RH0 DBA0
DS>SELECT DBA0
DS>LOAD EVRAC
DS>START/SEC:PACKINIT
This is for drive 0 on mba0; use 9 instead of 8 for mba1, etc.
> I've just watched an interesting presentation given last Friday via
> video link to the Linux Conference in Australia:
> Brian Kernighan
> The early days of Unix at Bell Labs
> https://www.youtube.com/watch?v=ECCr_KFl41E
Here's an earlier incarnation of the talk:
https://www.youtube.com/watch?v=nS-0Vrmok6Y
I rather enjoyed seeing it with closed captions in Spanish and
speakers turned off. Aided by the slides, I was pretty well able to
read the Spanish, which otherwise would have been quite mysterious.
Doug
I've just watched an interesting presentation given last Friday via
video link to the Linux Conference in Australia:
Brian Kernighan
The early days of Unix at Bell Labs
https://www.youtube.com/watch?v=ECCr_KFl41E
48 minutes
While most of the talk subjects are well known to TUHS list members,
there are nice things said about various people, and about the value
of TUHS.
Other talks at the conference may be of interest as well: see the
schedule at
https://linux.conf.au/schedule/
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Any takers for a (free) two-volume 7th Ed manual (1983), or ring-bound 8th Ed (1985), or PDP11 processor handbook (1981)? These would need to be picked up in Lindfield, Sydney, Australia. Condition is fair, but they've been in storage for 35 years so are slightly mouldy, but still perfectly usable. Images at http://jon.es/other/7th-ed.jpg and http://jon.es/other/8th-ed.jpg If you’d like them, let me know in email ASAP please.
Regards,
Terry Jones
> From: Angelo Papenhoff
> to my knowledge no troff version before the C rewrite in v7
Apologies if I missed something, but between this list and COFF there's so
much low S/N traffic I skip a lot of it. Having said that, was there ever a
troff in assembler? I'd always had the impression that the first one was in C.
> The v6 distribution has deleted directory entries for troff source but
> not the files themselves. I hope it is not lost. Maybe someone here has
> an idea where it could be found?
The MIT 'V6+' (I think it's probably basically PWB1) system had troff -
I guess it 'fell off the back of a truck', like a lot of other stuff MIT had,
such as 'typesetter C', the Portable C Compiler, etc.
Theirs was modified to produce output for a Varian (I forget which model,
maybe the docs or driver say).
nroff on that system seems to have been generated from the troff sources; the
assembler nroff sources aren't present.
I looked at its n1.c, and compared it to the V7 one:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/troff/n1.c
and this one appears to be slightly earlier; e.g. it starts:
#include "tdef.h"
#include "t.h"
#include "tw.h"
/*
troff1.c
consume options, initialization, main loop,
input routines, escape function calling
*/
extern int stdi;
and in the argument processing, it has quite a lot fewer.
So that one is a "troff version before the C rewrite in .. v7", but it is in
C. Is that of any interest?
Noel
Most of y'all are aware of Brian Kernighan's troff involvement. My
understanding is that he pretty much took over nroff/troff after Joe Ossanna
died, and came out with ditroff.
But Brian had much earlier involvement with non-UNIX *roff. When he was
pursuing his PhD at Princeton, he spent a summer at MIT using CTSS and
RUNOFF. When he came back to P'ton, he wrote a ROFF for the IBM 7094,
later translated to the IBM 360. Many generations of students, myself
included, used the IBM ROFF (batch, not interactive) as a much friendlier
alternative to dumb typewriters. I don't know if 360 ROFF spread beyond
Princeton, but I wouldn't be surprised.
BTW, during my summer at Bell, nroff/troff was one of the few programs I
could not port to the Interdata 8/32 - it was just a mess of essentially
typeless code. I don't think Joe Ossanna got around to it either before he
died.
--
- Tom
Hello All.
We recently discussed Brenda Baker's struct program, that read Fortran
and generated Ratfor. Many of us remarked as to what a really cool
program it was and how much we admired it, myself included.
For fun (for some definition of "fun") I decided to try to bring the code
into the present. I set up a GitHub repo with the V7, V8 and V10 code,
and then started work in a separate branch.
(https://github.com/arnoldrobbins/struct, branch "modernize".)
The program has three main parts:
- structure, which reads Fortran and outputs something that is
almost Ratfor on standard output.
- beautify, which reads the output of structure and finishes the job,
primarily making conditions readable (.not. --> !, removing double
negatives, etc.)
- struct.sh - a simple shell script that runs the above two components.
This is what the user invokes.
The code was written in 1974. As such, it is rife with "type punning"
between int, int *, int **, and char *. These produce a lot of warnings
from a modern day C compiler. The code also uses a, er, "unique" bracing
style, making it nearly illegible to my stuck-in-a-rut programming brain.
Here is what I've accomplished so far:
* Converted every function definition and declaration to use modern (ANSI)
C style, adding a header file with function declarations that is
included everywhere.
* Run all the code through the indent program, formatting it as traditional
K&R bracing style, with tabs.
* Fixed some macros to use modern style for getting parameter values as strings
into the macros.
* Fixed a few small bugs here and there.
* Fixed beautify to work with modern byacc/bison (%union) and to work with
flex instead of lex. This latter was a challenge.
In structure, only three files still generate warnings, but they all relate
to integer <--> pointer assignments and uses. However, when compiled in
32 bit mode (gcc -m32), where sizeof(int) is the same as sizeof(pointer),
despite the warnings, structure works!!
Beautify works, whether compiled in 32 or 64 bit mode.
What I've done so far has been mostly mechanical. I hereby request help from
anyone who's interested in making progress on "the hard part" --- those three
files that still generate warnings.
I think the right way to go is to replace int's with a union that holds an
int, a char*, and an int*. But I have not had the quiet time to dive into
the code to see if this can be done.
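To make the idea concrete, here is a minimal sketch of what I have in
mind (the member names are hypothetical, not anything from the repo):

/* Replace the punned bare ints with a union, so that pointer values
   survive on LP64 systems where a pointer no longer fits in an int. */
typedef union value {
    int   i;    /* plain integer */
    char *s;    /* string pointer */
    int  *p;    /* pointer into one of the int tables */
} value;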
Anyone who has some time to devote to this and is interested, please drop
me a note off-list.
Thanks,
Arnold Robbins
This is clearly getting off track of TUHS. I'll stop
after this reply.
> *From:* Blake McBride <blake1024(a)gmail.com>
> *Date:* January 11, 2022 at 2:48:23 PM PST
> *To:* Jon Forrest <nobozo(a)gmail.com>
> *Cc:* TUHS main list <tuhs(a)minnie.tuhs.org>
> *Subject:* *[TUHS] TeX and groff (was: roff(7))*
> Although I'm not connected with the TeX community, I don't agree with
> much of what you said.
>
> 1. TeX source to C is fine - stable and works. It would be
> impossible to rewrite TeX in any other language without introducing
> bugs and incompatibilities. Leaving TeX as-is means that it can be
> converted to other languages too if/when C goes out of style. TeX
> as-is is exactly what it is. Anything else wouldn't be TeX.
I agree that Web->C works but it's a major obstacle in doing any
development work on TeX. Try making a major change in the Web source
that requires debugging.
Anything that can pass the TeX Trip Test can be called TeX. I know of
a full C reimplementation that passes the test but the author doesn't
want to make it free software.
There are other rewrites out there that could be candidates but someone
with enough power will have to proclaim one as the official TeX
alternative.
> 2. Drop DVI? Are you kidding me? Although PDF may be popular now,
> that may not be the case 20 years from now. A device-independent
> format is what is needed, and that's what DVI is. TeX is guaranteed
> to produce the exact same output 100 years from now.
And .PDF isn't?
.DVI was great until .PDF matured. .DVI has almost no penetration
these days, whereas .PDF is everywhere. I'm not saying that .PDF
will always be the proper alternative but a properly rewritten TeX
should make it much easier to replace .PDF with whatever comes
next.
> 3. I am curious about memory limitations within TeX.
TeX has various fixed sized memory pools, and contains clever code
to work around limited memory. Some of the newer TeXs,
like LuaTeX, use dynamic allocation but this isn't official.
Given how primitive things were when TeX was developed it's a
miracle it works as well as it does.
> 4. Knuth is getting up in age. Someone will have to take over.
Exactly. I don't follow the TeX community so I don't know what
they're doing about this.
Jon Forrest
I've been meaning to ask about this for a while....
"... The reason why is because there was tremendous antagonism between New
York and L.A. L.A. was, you know, full of color, full of acid, full of
hippies, and we were not like that.
We dressed in black and white. We did not like free love. ..... We took
amphetamine; they took LSD. They were, you know, sort of loving and happy,
and we were - we weren't really evil, we were more intellectual, more about
art."
[Mary Woronov, in an interview with NPR's Terry Gross on "Fresh Air",
talking about New York City, Warhol's Factory and shows in Los Angeles
while touring with the Velvet Underground:
http://www.npr.org/templates/transcript/transcript.php?storyId=241437872]
Note: I am not suggesting that anyone involved with Unix ever took
amphetamines, nor, despite the usual crack about LSD and BSD, that anyone
on the west coast was taking acid, though Markov's "What the Dormouse Said"
would indicate that many of you WERE tripping.
It seems like Unix is largely a child of the coasts. Notable work in Utah,
Colorado and Chicago aside, it seems the bulk of early Unix work happened
in either the greater New York metro area in northern New Jersey or the
greater Bay area around San Francisco. Notable work was also done in
Massachusetts, but again, that's a coastal state and I think it's fair to
say that most of that was inside the route 128 corridor. Of course work was
done internationally, but I'm particularly curious about differences in US
culture here, and how they influenced things.
The question is, to what extent did differences in coastal cultures
influence things like design aesthetics? I think it is accurate to
characterize early BTL Unix by its minimalism, and others have echoed this
(cf. Richard Gabriel in the "Worse is Better" papers). But similarly, BSD
has always felt like a larger system -- didn't Lions go as far as to quip
about the succinctness of 6th Edition being "fixed" by 4BSD?
Anyway, I believe it is fair to say that early Unix has a rather distinct
feel from later BSD-derived systems and the two did evolve in different
geographic locations. Furthermore, the world was not as connected then as
it is now.
So to what extent, if any, was this a function of the larger cultural
forces at play near where that work was taking place?
- Dan C.
> If I can be so bold as to offer an interpretation: Doug's approximations
> treat ellipses as mathematical objects and algorithmically determine what
> pixels are closest to points on the infinitesimally-thin curves, while
> Knuth's (or one his students') method acknowledges that the curve has a
> width defined by the nib
Just so.
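For the curious, the flavor of the technique shows in the textbook
integer algorithm for a circle. This sketch is the generic midpoint
method, not the algorithm from my papers, and ellipses need a more
careful decision variable:

#include <stdio.h>

/* Midpoint circle rasterizer: at each step choose whichever of the
   two candidate pixels lies closer to the true, zero-width curve,
   using integer arithmetic only. */
static void plot(int x, int y)
{
    printf("(%d,%d)\n", x, y);    /* one octant; mirror for the rest */
}

int main(void)
{
    int r = 10;                   /* radius in pixels */
    int x = r, y = 0;
    int err = 1 - r;              /* decision variable at the midpoint */

    while (x >= y) {
        plot(x, y);
        y++;
        if (err < 0)
            err += 2 * y + 1;
        else {
            x--;
            err += 2 * (y - x) + 1;
        }
    }
    return 0;
}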
> I find it impossible that neither Knuth nor Hobby were unaware of McIlroy's
> work and vice-versa; of course he would have known about and examined troff
> just as the Bell Labs folks knew about TeX.
We were generally aware of each other's work. My papers on drawing
lines, circles, and ellipses on rasters, though, were barely connected
to troff. Troff did not contain any drawing algorithms. That work was
relegated to the rendering programs that interpreted ditroff output.
Thus publication-quality rendering with support for thick lines was
outsourced to Adobe and Mergenthaler.
Various PostScript or ditroff postprocessors for screen-based
terminals were written in house. These programs paid little or no
attention to fonts and line widths. But the blit renderers made a
tenuous connection between my ellipse algorithm and troff, since my
work on the topic was stimulated by Rob's need for an ellipse
generator.
Doug
> From: Bakul Shah
> My guess is *not* storing a path instead of a ptr to the inode was done
> to save on memory.
More probably speed; those old disks were not fast, and on a PDP-11, disk
caches were so small that converting the path to the current directory to its
in memory inode could take a bunch of disk reads.
> Every inode has a linkcount so detecting when the last conn. is severed
> not a problem.
Depends; if a directory _has_ to be empty before it can be deleted, maybe; but
if not, no. (Consider if /a/b/c/d exists, and /a/b is removed; the tree
underneath it has to be walked and the components deleted. That could take a
while...) In the general case (e.g. without the restriction to a tree), it's
basically the same problem as garbage collection in LISP.
Noel
> From: Dan Cross
> a port of the _CTSS_ BCPL ROFF sources purportedly written by Doug. I
> wonder if that was actually a thing, or an error?
> ...
> Fortunately, the source [of the original CTSS runoff] is online:
> ...
> Indeed; one finds the following in at least one of the Multics RUNOFF
> source files:
It sounds like all the steps in the chain have pretty definitive evidence -
_except_ whether there was ever a CTSS RUNOFF in BCPL, from Doug.
Happily, we have someone here who should be able to answer that! :-)
Noel
Been reading the heirloom docs. Remember one thing that I disliked
about troff which maybe Doug can explain. It's the language in the
docs. I never understood "interpolating a register" to have any
relation to the definition of interpolate that I learned in math.
Made it a bit hard to learn initially.
Any memory of why that term was used?
> most, if not all of these things were after I arrived.
That may indicate the youth of the narrator more than a lightening of
the culture. Some practical jokes and counter-culture customs from an
earlier day:
When I joined the Labs, everyone talked about the escapades of Claude
Shannon and Dave Hagelbarger--unicycle, outguessing machines, the
finger-on-the-switch box, etc.
When John Kelly became a department head he refused to have his office
carpeted. That would have kept him from stubbing out cigarettes on the
floor.
Bill Baker may have worn a coat and tie, but he kept a jalopy in his
VP parking space. Another employee had a rusty vehicle with weeds
growing out of the fenders.
As early as 1960 BESYS began appending fortune cookies to every
printout. The counter where printouts were delivered got messed up by
people pawing around to see others' fortunes.
One day the audio monitor on the low bit of the 7090 accumulator
stopped producing white noise (with an occasional screech for an
infinite loop) and intoned in a Texas drawl, "Help, I'm caught in a
loop. Help. I'm caught in a loop. Help ..."
A pixelated nude mural appeared in Ed David's office. (Maybe this no
longer counts as a prank. It is now regarded as a foundational event
in computer art.)
Ed Gilbert had a four-drawer filing cabinet labeled integers,
rationals, reals, and balloon. The latter held the tattered remains of
lunchtime hot-air experiments. He also had a chalkboard globe with a
world map on it. It sometimes took several spins before a visitor
realized that you really shouldn't be able to see all the continents
at once--the map appeared twice around the circumference of the globe.
CS had a Gilbert-and-Sullivan duo, Mike Lesk and Peter Neumann, who
produced original entertainment for department parties.
Doug
> Ken and Dennis were teaching [the Votrax] to swear
"Speak" being a phonetics-based program, I suspect they were exploring
multiple spellings. Out of context, lots of spellings were
indistinguishable. For example,
cheap, cheat, cheek, chief was hard to tell from cheep, cheep, cheep, cheep..
At the risk of repeating myself, the fuck, fuck, fuck, fuck example
came to the fore when a "speak" kiosk was installed at Epcot. PR folks
were worried that people would try it on bad words in this public
setting and asked me to block them. I said I'd block whatever words
they told me to. Duly, I was sent a list--on the letterhead of an AT&T
vice president. (Was that dictated to a secretary?) Later I heard
that girls would often try friends' names, while boys would try bad
words and exclaim that the machine didn't know them. In fact, those
were among the few words the machine *did* know. Fortunately nobody
ever complained that I hadn't blocked misspellings.
Doug
> Later Brian's work was updated after V7 and included some new tools, and became known as Writer's Workbench, which eventually was entered in the 'toolchest.'
WWB wouldn't exist if text had not routinely existed in
machine-readable form, thanks to word-processing. But the impetus for
WWB came from "style", not from troff.
Style was a spinoff of Lorinda Cherry's "parts", which assigned parts
of speech to the words of a document. Style provided a statistical
profile of the text: measures such as average word length: frequency
of passives, adjectives and compound sentences, reading level, etc.
WWB in turn offered writing advice based on such profiles.
Style was stimulated by Bill Vesterman, a professor of English at
Rutgers, who brought the idea to me. I introduced him to Lorinda, who
had it running in a couple of weeks. Then Nina McDonald at USG
conceived and packaged WWB as a distinct product, not just a
collection of entries in man 1.
Wikipedia reports a surmise that WWB sank out of sight because it was
not a standard part of Unix distributions.
Doug
Steffen Nurpmeso writes:
> Note that heirloom doctools (on github) is a SysV-derived *roff
Wow, thanks for mentioning this. I was unaware of it. When I
recently wrote that it would be nice to add TeX's 2D formatting
to troff I didn't realize that it had already been done.
Something new to play with.
Jon
On the subject of documentation of [nt]roff, no one seems to have
mentioned Narain Gehani's two editions of ``Document Formatting and
Typesetting on the UNIX System'' (700+ pages), and a second two-author
volume that covers grap, mv, ms, and troff. There is a table of
contents of the second edition recorded here:
http://www.math.utah.edu/pub/tex/bib/typeset.html#Gehani:1987:DFT
There is an entry in that file for the first edition too
http://www.math.utah.edu/pub/tex/bib/typeset.html#Gehani:1986:DF
The second volume, co-authored with Steven Lally, is covered here:
http://www.math.utah.edu/pub/tex/bib/typeset.html#Gehani:1988:DFT
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Today I was looking around for more v7 stuff of interest that I might
find on the web and came across a tape image in the ATT bits directory:
http://www.bitsavers.org/bits/ATT/ labeled X7252.2015_UNIX_V7.tap and an
image of the reel with original and added markings. I downloaded it and
sure enough, it's a bootable v7. I then compared it with my recreated
tape image from the files in the Keith Bostic folder on tuhs. The 11.7MB
tapes are nearly identical, with only a handful of bytes that differ at
the very end of the tape:
ATT tape:
54532000 000000 000000 024000 000000 000000 000000 000077 000000
54532020 052135 014020 010000 034113 056720 023524 072143 122062
54532040 141401 000000 000000 000000 000000 000000 000000 000000
54532060 000000 000000 000000 000000 000000 000000 000000 000000
54532100 000000 000000 000000 000000 000000 000000 000000 037400
54532120 000000 000000
54532123
Bostic recreated tape:
54532000 000000 000000 024000 000000 000000 000000 177777 177777
54532020
I'm wondering - 1) Does anyone know the provenance of the
X7252.2015_UNIX_V7.tap 2) Do the bytes at the end of the tapes look
familiar or particularly meaningful? My knowledge of 40+ y.o. tape
formats is woefully lacking, but I'm curious.
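In case it helps anyone else poke at these images, here's a quick
record walker I sketched for the SimH-style .tap container. The format
details are my assumption from reading around (a 4-byte little-endian
record length before and after the padded data, 0 for a tape mark,
0xffffffff for end of medium), and it assumes a little-endian host:

#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    FILE *f;
    uint32_t len;

    if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: tapwalk file.tap\n");
        return 1;
    }
    while (fread(&len, 4, 1, f) == 1) {
        if (len == 0) {
            printf("tape mark\n");
        } else if (len == 0xffffffffu) {
            printf("end of medium\n");
            break;
        } else {
            printf("record: %u bytes\n", (unsigned)len);
            fseek(f, (long)((len + 1) & ~1u), SEEK_CUR); /* skip data */
            if (fread(&len, 4, 1, f) != 1)  /* trailing length word */
                break;
        }
    }
    fclose(f);
    return 0;
}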
Will
On Jan 10, 2022, at 12:33 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> TeX looks better but you instantly know it is
> TeX, it has a particular look.
Perhaps you’re thinking of documents using Computer Modern fonts,
typeset using LaTeX’s document classes. Check out the examples here:
https://tex.stackexchange.com/questions/1319/showcase-of-beautiful-typograp…
Hello All.
I am pleased to announce that, after a multi-year effort, Chris Ramming's
awkcc is now once again available for download, and this time with
a more permissive license.
I would like to thank Brian Kernighan, Chris Ramming and Doug McIlroy
for contributing letters of support to my efforts to get this program
re-released.
The lion's share of the thanks must go to Martin Carroll of Nokia
Bell Labs, who fought the uphill battle within Bell Labs to get permission
to release the code, and who uploaded it to GitHub.
Me? I pushed here and there, and contributed the actual code snapshots;
it seems that Bell Labs had misplaced the code in the meantime. :-)
The code, both the 1988 and 2011 versions, may be found at
https://github.com/nokia/awkcc.
The code is primarily of historical interest; I think it would take
a significant effort to build it on a more modern system, although I
think it could be done. It'd also be an effort to bring it up to date
with the current Unix version of awk. Again most likely doable, but not
necessarily trivial.
In any case, enjoy!
Arnold
Hello,
I’m looking for photographs of university computer labs from 1985 until 1995, particularly labs full of unix workstations, of course. Does anyone here have photos like that in their collection?
I’m also thinking of reaching out to university archivists, but I don’t have any direct connections to any.
Thanks much!
- Alex
I have a copy of a spiral-bound booklet with yellow covers called "The C
Programmer's Reference" by Morris I. Bolsky of the Systems Training
Center, AT&T Bell Laboratories, (C) 1985. A curious little snapshot of
1980s pre-ANSI C.
I posted a picture of the front cover (with table of contents) at
https://twitter.com/fanf/status/1475407500946157570
I think I rescued it from the office clear-out in 2013 when Cambridge
University Computing Service moved out of the old city-centre offices. I
probably picked it up from a stack of old books that were to be chucked;
wherever I found it, I can't remember who it belonged to. And now I no
longer work for the University, it has come home with me.
Tony.
--
f.anthony.n.finch <dot(a)dotat.at> https://dotat.at/
Southwest Forties, Cromarty, Forth, Tyne, Dogger: Southerly or
southeasterly, backing easterly or northeasterly later, 4 to 6,
becoming variable 3 for a time in Cromarty and Forth. Moderate,
occasionally rough at first in southwest Forties, Cromarty and Dogger.
Rain or showers, fog patches developing. Moderate or good,
occasionally very poor.
> It would be nice to hear about the rationale from a primary source.
Assembly language was deemed a last resort, especially as portability
was coming to the fore. As I wrote in A Research Unix Reader,
"Assembly language and magic constants gradually declined from the
status of the 'real truth' (v4) to utterly forgotten (v8)." In v7,
assembler usage was demoted to the bottom of syscall man pages. It
could also be found in /usr/src/libc/sys/*.s
Doug
Well, hallelujah, after much travail (I've tried this every Christmas
for at least 5 years now), I have succeeded in building vi on v7 from
2bsd. Had to patch the c compiler to enlarge the symbol table, tweak
some stuff here and there, but it finally built and installed without
any errors, yay.
Now, I just want it to do some editing, preferably in visual mode. I can
call it as ex or vi:
# ex
:
or
# vi
:
and q will exit, yay, again!
I have at least two problems:
1. It's not going "full" screen, even with TERM=vt100 or TERM=ansi set
(not that I was surprised, but it'd be nice)...
2. If I type a for append and type something, there's no way to get back
to the : prompt (ESC doesn't seem to work).
3. I'd like manpages (figure they might help with the above), but
they're on the tape as .u files?
I'm hoping this triggers some, oh yeah I remember that, type responses.
Thanks,
Will
So,
in v6, it was possible to use the mesg function from the system library
with:
ed hello.s
/ hello world using external mesg routine
.globl mesg
mov sp,r5
jsr r5,mesg; <Hello, World!\n\0>; .even
sys exit
as hello.s
ld -s a.out -l
a.out
Hello, World!
This was because v6 included mesg in the library. In v7, mesg doesn't
appear to be included, so doing the same thing as above requires that
the code to write the message out be included as well; in addition,
system call names are not predefined, so exit and write have to be
looked up in /usr/include/sys.s, resulting in the v7 equivalent file:
ed hello2.s
/ hello world using internal mesg routine
mov sp,r5
jsr r5,mesg; <Hello, World!\n\0>; .even
sys 1
mesg:
mov r0,-(sp)
mov r5,r0
mov r5,0f
1:
tstb (r5)+
bne 1b
sub r5,r0
com r0
mov r0,0f+2
mov $1,r0
sys 0; 9f
.data
9:
sys 4; 0:..; ..
.text
inc r5
bic $1,r5
mov (sp)+,r0
rts r5
as hello2.s
a.out
Hello, World!
My questions are:
1. Is mesg or an equivalent available in v7?
2. If not, what was the v7 way of putting strings out?
3. Why aren't the system call names defined?
4. What was the v7 way of naming system calls?
Will
All,
I've completed a new version of my "Installing and Using Research Unix
Version 7 in the SimH PDP-11/45 and 11/70 Emulators" tutorial/document
(wow, that's a mouthful). I'm calling it Version 2.0 - It is a
completely reorganized, updated, and edited version of the document.
The blog post announcement is on the blog: https://decuser.blogspot.com/
and the pdf is on google drive:
https://drive.google.com/file/d/1gDBxULlpLwezH-1RO_3ou_W7trElgSgT/view?usp=…
The new doc covers building a working v7 instance from tape files that
will run on the SimH emulator. First, the reader is led through the
restoration of a pristine v7 instance from tape to disk. Next, the
reader is led through adding a regular user, making the system
multi-user. Then, the reader is shown how to make the system
multi-session capable, allowing multiple simultaneous sessions. Finally,
the system is put to use with hello world, DMR style, and the learn
system is enabled. It also includes a hyperlinked table of contents.
My hope is that the new version will be more useful than the prior
version, as well as more accurate. I really appreciate the input and
feedback y'all have given me over the intervening years.
Regards,
Will
> From: John Cowan
> Why use C syntax? What was wrong with Fortran, Lisp, or Cobol syntax,
> extended to do what you wanted?
Why do all hammers look basically the same? Because there's an 'ideal
hammer', and over time hammer design has asymptoted toward that 'ideal hammer'
design. One can't just keep improving the design indefinitely - diminishing
returns set in.
So I suspect there is, to some degree, a Platonic 'ideal syntax' for a
'classic block-structured' programming language, and to me, C came pretty
close to it.
I except LISP from that assessment, because LISP is built around a
fundamentally different model of how computations/algorithms are organized,
and C and LISP aren't directly comparable.
But that realization points to a deeper bug with the 'Platonic ideal
language' concept above, which is that languages are fundamentally, albeit at
a very deep conceptual level, tied to the basic concept of the computing
hardware they are to run on. C/COBOL/FORTRAN/etc are all for von Neumann-like
(in a broad sense) computing engines - a single thread of computation, which
happens sequentially.
But we've pushed that paradigm about as far as it can go, we're into
diminishing returns territory on that one. The future, starting with the
hardware, will be very different - and will need quite different languages.
(Go, from what little I know of it, is a baby step in this direction - it is
intended to make it easy to use multiple simultaneous loci of execution,
making use of the multiple cores that are common now.)
I suspect we'll be shifting to radically different paradigms, 50 years from
now - massively parallel computing meshes (think Connection Machines on
steroids - or the human brain), and those will use fundamentally different
computing paradigms, and programming languages for them, which in turn will
need very different syntax.
Noel
> Lisp, _that's_ elegant.
The machine shines through Lisp even more brightly than it does
through C. Lisp attains incredible power from a tiny base: car, cdr,
cons, cond, T, F, null, lambda, def, exuding elegance that survives
even in a raging sea of parentheses.
For Lisp-friendly applications nowadays, I prefer Haskell, which is
much further away from the machine. Haskell code approaches--and
sometimes surpasses--the cleanliness of good mathematical notation.
For string processing, I remember Snobol 3 with great fondness.
But for everyday work with arrays and numbers, C is the workhorse.
Still, I wish that C would evaluate comma expressions in parallel
rather than in series, as in (a,b) = (b,a).
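To spell out the wish with an example: (a,b) = (b,a) is not legal C at
all, and serial evaluation is why the familiar temporary is needed. A
minimal sketch of today's workaround:

#include <stdio.h>

int main(void)
{
    int a = 1, b = 2;
    int t;

    t = a;                        /* the temporary that parallel */
    a = b;                        /* evaluation would make */
    b = t;                        /* unnecessary */
    printf("%d %d\n", a, b);      /* prints 2 1 */
    return 0;
}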
Doug
I enabled user accounting on my v7 instance and I noticed it "growing
without bound" and while this is noted as a possibility in the ac(1) man
page, I was pretty sure the original authors didn't mean 30k a second. I
scratched my head and thought for a while and then started experimenting
to see what the heck was going on. I removed /usr/adm/wtmp (which I had
created to enable accounting in the first place) and the little red disk
write arrow on my mac went away, but not the little green disk read
arrow... hmm. Something was keeping my v7 instance very busy reading
disk, that was for sure. I went through a few (dozens) more tests and
experiments, reread a bunch of man pages, Ritchie's v7 install note, and
thought some more and here's what I came up with...
If you modify your system to add dci lines, enable some ttys in
/etc/ttys, and enable user accounting, then the next time you boot
into a kernel that doesn't have dci support, init or some other process
will try and fail to read the enabled ttys, log something in
/usr/adm/wtmp, if it exists, and then loop (very quickly), over and over
and over. If you aren't paying attention, this will hardly be noticeable
on modern hardware running simh, but I'm guessing this would have been
disastrous, back in the day.
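To convince myself of the mechanism, I sketched the shape of the loop
in modern C. This is a bounded demonstration of my theory, not the
actual v7 init source:

#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Fork a child per "enabled tty", let it die at once because the
   device can't be opened, and note where init would append a wtmp
   record each time around -- hence the runaway growth. */
int main(void)
{
    const char *tty = "/dev/tty99";    /* hypothetical line, no driver */
    int i;

    for (i = 0; i < 5; i++) {          /* the real loop never ends */
        pid_t pid = fork();
        if (pid == 0) {
            int fd = open(tty, O_RDWR);
            if (fd < 0)
                _exit(1);              /* getty dies immediately */
            close(fd);
            _exit(0);
        }
        wait(NULL);
        printf("respawn %d: a wtmp record would be appended here\n",
            i + 1);
    }
    return 0;
}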
The simple solution is to boot w/dci enabled when you have ttys enabled,
and only boot w/o dci enabled when you have disabled the ttys.
I'm guessing that this wasn't really ever an issue, back in the day, as
folks prolly didn't just yank their dci's and reboot a different kernel?
But, such are the joys of simulation.
Anyhow, if this doesn't sound like a very likely or reasonable analysis
of what was happening, I'd appreciate your letting me know, or if you've
experienced something like it before, it'd be great to know that I'm not
alone in this silliness.
Will
I'm pretty sure that I asked about learn ages back, but I couldn't find
any reference to it in the archives. So, I thought I would close the
possibly imaginary loop on it. Cuz, I figured it out, and it may prove
useful to others or with my track record, even myself in the future :).
Learn works fine in v7. It just needs to be properly installed. The
command is there already, so you may not need to follow all of the steps
below, but it doesn't hurt:
I did this as root, but it could possibly be done as another user, I'm
not sure.
cd /usr/src/cmd/learn
make
make lessons
make play; make log
That's it. make will complain about missing files that it tries to
delete, but these can be safely ignored, since make then creates them
anyway.
Here's the result run as a normal user:
$ learn
These are the available courses -
files
editor
morefiles
macros
eqn
C
If you want more information about the courses,
or if you have never used 'learn' before,
type 'return'; otherwise type the name of
the course you want, followed by 'return'.
macros
If you were in the middle of this subject
and want to start where you left off, type
the last lesson number the computer printed.
To start at the beginning, just hit return.
This script deals with the use of the "-ms" macro
package to produce Bell Laboratories style documents.
Before trying it, you should be familiar with the
editor. To test that, please enter the file
typed below, exactly as is, into file "decl". Then
type "ready".
.PP
When in the course of human events, it becomes
necessary for one people to dissolve the political bands which have
connected them with another, and to assume among the
powers of the earth the separate and equal station to which
the laws of Nature and of Nature's God entitle them, a decent
respect to the opinions of mankind requires that they should
declare the causes which impel them to the separation.
$ ed decl
?decl
a
.PP
When in the course of human events, it becomes
necessary for one people to dissolve the political bands which have
connected them with another, and to assume among the
powers of the earth the separate and equal station to which
the laws of Nature and of Nature's God entitle them, a decent
respect to the opinions of mankind requires that they should
declare the causes which impel them to the separation.
.
w
410
q
$ ready
Good. Lesson 1.1a (1)
When you have some document typed in "-ms" style,
you run it off on your terminal by saying:
nroff -ms file
where "file" is the name of the file it is on. For example,
the file "decl" in this directory is in a suitable format
for running off this way. Do so. Then type "ready".
$
Interrupt.
Want to go on? n
Bye.
$
Pretty slick, really, once you realize that the $ prompt isn't really
your shell, it's a shell within learn. Also, there's no learn manpage
although there is a document in vol2 of the programmer's manual that
describes the program. I couldn't figure out the canonical way to exit,
so I just CTRL-DELETE on my mac, which I figure is CTRL-BREAK (^C?).
That seems to work.
Oh, and according to /usr/src/cmd/learn/README, if you have any trouble:
Please report problems, bad lessons, etc., to
Brian Kernighan, MH 6021, 2C-518, or
Mike Lesk, MH 6377, 2C-572. Thanks.
Enjoy, and happy New Year, folks!
Will
> Joy’s original 2BSD tape will give you UCB Pascal.
While I agree with Kernighan that Pascal is not my favorite
programming language, UCB Pascal is my favorite compiler because of
its spectacularly good syntax diagnostics. The diagnostics are
automatically generated, so they have a completely consistent style
and never go down rabbit holes trying to explain an error.
The UCB trick is to report the exact spot where LR parsing chokes and
suggest a canonical alternate token that allows progress. This simple
strategy is startlingly effective; the compiler taught me Pascal in an
evening.
It occurred to me that Pascal would be a suitable language in which to
express a certain algorithm. Having skimmed the Pascal report a year
or more earlier, I knew it was a pretty typical language, so I grabbed
a sample program from somewhere and plowed ahead. I made mistake after
mistake, but every diagnostic was instantly suggestive. By the end of
the session I had a polished working program. In due time it was
accepted for publication.
Doug
I'm tooling around doing my annual dive into operating systems ancient
and not-so-ancient and I've gotten back around to v7 because it has a
working f77 in baseline. The 3b2 has f77 as an installable package but
it crashes and burns with read statements like: read *,var - in both
sysvr2 and sysvr3. After consulting with the fortran expert, I'm gonna
chalk it up to "man there's a lot of backstory to these seemingly simple
issues" and just work with v7... in full disclosure f77 also seems to
work fine on 211bsd, but that's too new for today's dive :).
Anyhow, I ran through my install notes from back in 2017 and did a few
updates on them to update urls (would everyone just go ahead and move to
https already?), fix some clunky examples, fix some typos, and update to
a more recent host environment (although some would argue that Mojave is
out of date - just give me a drop in replacement for Adobe Acrobat Pro X
(a 32 bit app) that doesn't phone home every 5 minutes and I'll move to
Monterey). Version 1.7 of the doc is posted on the blog:
https://decuser.blogspot.com/
Anyway, now I'm ready to add stuff to my shiny new v7 instance and
document the additions. So, on to the question of the hour... I did some
looking around and couldn't easily locate any v7 software archives for
additional software that will run on v7 (not the distros, which are
adequately hosted in the Unix Archive). Stuff like pascal, fortran iv,
fortran 90, basic, lisp and the ilk. Do y'all know of any good caches?
Later,
Will
So, I was bemoaning the fact that I couldn't really make sense of the
bas command in v7. In sysv2, it works and is similar enough to modern
dialects that I was able to get a simple example working, but with v7,
the best I was able to do was use it interactively, as a calculator.
Then I went looking for v7 videos on youtube and came across this Dr.
Dave's Diversions video: https://www.youtube.com/watch?v=LZUMNZTUJos
In it Dr. Dave demos a few simple versions of 99 bottles of beer in v7.
Wow! I wondered how he'd glommed the inner workings of bas - did he read
a book, just know it naturally, phone a friend, or what? Well, as it
turned out, he... wait for it... read the bas(1) man page! Wha?!
Ridiculous. So, I pulled out my v7 phone book and read the bas(1) man
page and sure enough, it was all there... how '_' is used for negation,
how those arcane for loops work, arrays?!, function calls?!, etc and so
on, all in 3 short pages.
So, two observations:
1. Those man pages from back in the day - wow. So terse, yet so well
written. With a little help from a friend (thanks, Dr Dave), they're
really all you need, sometimes.
2. Those early apps - wow. So obscure these days, but so ahead of their
time, too (thanks, Ken for making Dec 2021 interesting by doing what you
did back in the early 70's).
And a question (you knew it was coming): Besides the bas(1) page, is
there anything else written on Ken's basic out there in the wild? Oh,
and a bonus question, draw and erase from v5 appear to still be around
in v7, but not documented in the man page. Did it work and has anyone
written a Tektronix 611 emulator that works with v7?
Thanks,
Will
I'm a little flummoxed in trying to move some directories around in
svr2. Shouldn't the following work?
mkdir a
mkdir b
mv a b
I get the following error:
mv: b exists
I tried many of the possible variants including:
mv a b/
mv: b/ exists
mv a b/a
mv: directory rename only
cd b
mv ../a .
mv: . exists
mv ../a ./
mv: ./ exists
mv ../a ./a
mv: directory rename only
If moving directories into existing directories wasn't allowed in those
days, 1) how were directories managed? and 2) when did moving
directories into directories become a thing?
Are there any extant 2.11 manuals other than the online manual and various install and configure PDFs, available for download?
Also, am I understanding correctly that 2.11 is 2.10 plus some 4.3 backports and a ton of patches? If there aren’t any 2.11 specific docs, what would be the closest of what is available?
Will
Sent from my iPhone
> From: Clem Cole
> Try it on V6 or V7 and you will get 'directory exists' as an error.
The V6 'mv' is prepared to move a directory _within_ a directory (i.e.
'rename' functionality). I'm not sure why it's not prepared to move within
a partition; probably whoever wrote it wasn't prepared to deal with all the
extra work for that (unlink from the old '..', then link to the '..' in the
new directory, etc, etc).
(The MIT PWB1 had a 'mvdir' written by Jeff Schiller, so PWB1 didn't have
'move directory' functionality before that. MIT must have been using the PWB1
system for 6.031, which I didn't recall; the comments in 'mvdir' refer to it
being used there.)
The V6 'mv' is fairly complicated (as I found out when I tried to modify it
to use 'smdate()', so that moving a file didn't change its 'last write'
date). Oddly enough, it is prepared to do cross-partition 'moves' (it forks a
'cp' to do the move). Although on V6, 'cp' only does one file; 'cp *.c
{dest}' was not supported, there was 'cpall' for that. (Why no 'mvall', I
wonder? It would have been trivial to clone 'cpall'.)
Fun fact: the V6 'mv' is the place that has the famous (?) "values of B will
give rise to dom!" error message (in the directory-moving section).
> if the BSD mv command for 4.1 supported it. If it did then it was not
> atomic -- it would have had to create the new directory, move the
> contents independently and then remove the old one.
Speaking of atomic operation, in V6 'mkdir' (not being a system call) was
not atomic, so if interrupted at 'just the right time', it could leave
the FS in an inconsistent state. That's the best reason I've come across
to make 'mkdir' a system call - it can be made atomic that way.
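To make the non-atomicity concrete, here is a minimal sketch of the shape of a pre-syscall mkdir (assumed code, not the literal V6 source; a modern kernel refuses link(2) on directories, so this compiles but is illustrative only):

/* mkdir as three separate steps -- a sketch, not the real V6 source.
 * An interrupt after mknod() but before the links leaves a directory
 * with no "." or "..": the inconsistency icheck had to repair. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	char dot[512], dotdot[512];
	if (argc != 2) {
		fprintf(stderr, "usage: mkdir name\n");
		return 1;
	}
	snprintf(dot, sizeof dot, "%s/.", argv[1]);
	snprintf(dotdot, sizeof dotdot, "%s/..", argv[1]);
	mknod(argv[1], S_IFDIR | 0777, 0);  /* step 1: directory inode */
	link(argv[1], dot);                 /* step 2: enter "." */
	link(".", dotdot);                  /* step 3: enter ".." */
	return 0;
}

With mkdir(2) a system call, all three steps happen under the kernel's control, so a signal can no longer land between them.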
Noel
> what was the last Unix version
> that let users make arbitrary links, such that the file system was no
> longer a DAG? I recall in v6 days hearing that earlier Unix allowed
> this, and that cleanup (via icheck and friends) got to be near
> impossible.
From v1 on (I'm not sure about PDP-7 Unix) only the superuser could do
that, so what you heard strikes me as urban legend. Perhaps some
installations abused root privilege to scramble the file system, but I
certainly didn't see that happen in Research systems.
Doug
On Dec 29, 2021, at 8:01 AM, Bakul Shah <bakul(a)iitbombay.org> wrote:
> On Dec 29, 2021, at 7:18 AM, Clem Cole <clemc(a)ccc.com> wrote:
>>
>> Think about the UNIX FS and the link system call. How is mv implemented? You link the file to the new directory and then unlink it from the old one. But a directory file can not be in two directories at the same time as the .. link would fail.
>
> Don’t see why linking a dir in two places is a problem.
To expand on this a bit, the “cd ..” case can be handled by not storing a ..
link in a dir. in the first place! Store the $PWD path in the u struct. Then
cd .. would simply lop off the last component, if one exists. Thus .. takes
you back only on the path you used! This also takes care of the issue with
symlinks (& does what csh did in user code).
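A toy illustration of that "lop off the last component" idea (assumed code, akin to what csh did in user space):

/* Resolve ".." textually against a remembered path, instead of
 * following a ".." link stored on disk -- so ".." always undoes
 * the path you actually took, even through a symlink. */
#include <stdio.h>
#include <string.h>

static char cwd[1024] = "/usr/src/cmd";   /* stand-in for $PWD */

static void cd_dotdot(void)
{
	char *slash = strrchr(cwd, '/');
	if (slash == NULL || slash == cwd)
		strcpy(cwd, "/");    /* at or just below the root */
	else
		*slash = '\0';       /* lop off the last component */
}

int main(void)
{
	cd_dotdot();
	printf("%s\n", cwd);     /* prints /usr/src */
	return 0;
}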
The first specific mention of moving directories in Research is in
v10, but I'm sure that was implemented considerably earlier. The only
things special about moving a directory were that it needed root
privilege and checked against moving a directory into itself. As with
ordinary files, copying (with loss of hard links) happened only when
moving between different file systems. As far as I know, no atomicity
precautions were taken.
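For reference, once rename(2) existed as a system call (it arrived with 4.2BSD), the within-filesystem case became a single atomic operation; a minimal sketch:

/* Move directory "a" into existing directory "b" in one step. */
#include <stdio.h>

int main(void)
{
	if (rename("a", "b/a") != 0)   /* atomic within one filesystem */
		perror("rename");
	return 0;
}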
The core system of svr2 distributed by ATT for the 3b2-400 doesn't come
with manpages installed. Does anybody know of a set of manpages for SVR2
that can be installed into the system? It's not the end of the world, if
not... the user guide is available as a pdf, but it'd be handy to have
man on the system.
Will
Is it possible to use echo to send a vt-100 escape sequence in v6/v7/sysvr2?
I can write a c program to clear the screen and go home in sysvr2:
#define ASCII_ESC 27
main()
{
printf( "%c[2J", ASCII_ESC );
printf( "%c[H", ASCII_ESC );
}
and it works fine. I can type the escape sequences in as well, but I'd
just as soon write a shell script with an echo '[[2J;[[H' or something
similar without having to compile a clear command. Is it possible and
what do I need to know :)?.
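One hedged sketch of an answer, relying on the System V echo's octal escapes (an assumption worth checking against your particular echo(1); v7's echo has no escapes, so there the usual trick is to embed a literal ESC byte in the script, e.g. by typing it in with ed):

# clear the screen and home the cursor on a vt100;
# \033 is ESC in System V echo's \0nn octal notation,
# \c suppresses the trailing newline
echo '\033[2J\033[H\c'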
Thanks,
Will
> From: Will Senn
> anything similar to modern behavior when handling the delete/backspace
> key where the character is deleted from the input and rubbed out? The
> default, like in v6/v7 for erase and kill is # and @. I can live with
> this, if I can't get it to do the rubout, because at least you can see
> the # in the input
I use ASCII 'backspace' (^H) on my V6, and it 'sort of' works; it doesn't
erase the deleted character on the screen, but if one then types corrected
characters, they overlay the deleted ones, leaving the corrected input. That
should work on everything later than V6.
The MIT PWB1 tty handler (link in a prior message) not only supported a 'kill
line' (we generally used '^U') which actually visibly deleted the old line
contents (on screen terminals, of course; on printing terminals you're
stuck), it also had support for '^R' (re-type line) and some other stuff.
Noel
Did svr2 have anything similar to modern behavior when handling the
delete/backspace key where the character is deleted from the input and
rubbed out? The default, like in v6/v7 for erase and kill is # and @. I
can live with this, if I can't get it to do the rubout, because at least
you can see the # in the input, but if I can figure out how to get it to
rubout the last character, I'd map erase to DEL, which I believe to be
^U (but since it's invisible, it's confusing when it doesn't rubout).
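For comparison, a sketch of what later stty(1)s accept (assuming a version that takes control-character arguments; note DEL is ASCII 0177, while ^U is the conventional line-kill):

stty erase '^h'    # rub out with backspace
stty erase '^?'    # or with DEL (ASCII 0177)
stty kill '^u'     # kill the line with ^U instead of @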
Will
All,
Are there any bootable media available for any SVR 2 systems available
online? Or are they all under IP lock and key? If so, what's the closest
system that is available to get a feel for that variety of OS?
Happy holidays, folks.
Will
Hi all, I received an e-mail looking for the ksh-88 source code. A quick
search for it on-line doesn't reveal it. Does anybody have a copy?
Cheers, Warren
Original e-mail:
I recently built a PiDP11 and have been enjoying going back in time
to 2.11BSD.. I was at UC Davis in the the early 1980's and we had
a few PDP-11/70's running 2.8/2.9 BSD. Back then we reached out to
David Korn and he sent us the source for KSH -- this would have been
in 1985ish if I remember, and we compiled it for 2.9 & 4.1BSD, Xenix,
and some other variants that used K&R C. It may have been what was
later called ksh88. I wish I still had the files from then..
I was wondering if you might know if there's an older version like this
or one that's been ported for 2.11BSD?
Many thanks,
Joe
Hey Warren,
First and foremost; Thank you so much for maintaining this mailing list, and for including me within the subscribers list. I find myself intrigued by some of the topics that transfer over to the “COFF” mailing list. Could you include me on that mailing list as well?
Peace.
Thomas Paulsen:
bash is clearly more advanced. ksh is retro computing.
====
Shell wars are, in the end, no more interesting than editor wars.
I use bash on Linux systems because it's the least-poorly
supported of the Bourne-family shells, besides which bash
is there by default. Ksh isn't.
I use ksh on OpenBSD systems because it's the least-poorly
supported of the Bourne-family shells, besides which ksh
is there by default. Bash isn't.
I don't actually care for most of the extra crap in either
of those shells. I don't want my shell to do line editing
or auto-completion, and I find the csh-derived history
mechanisms more annoying than useful so I turn them off
too. To my mind, the Research 10/e sh had it about right,
including the simple way functions were exported and the
whatis built-in that told you whether something was a
variable or a shell function or an external executable,
and printed the first two in forms easily edited on the
screen and re-used.
Terminal programs that don't let you easily edit input
or output from the screen and re-send it, and programs
that abet them by spouting gratuitous ANSI control
sequences: now THAT's what I call retro-computing.
Probably further discussion of any of this belongs in
COFF.
Norman Wilson
Toronto ON
John Cowan:
Unfortunately, approximately nobody except you has access to
[the 10/e sh] man page. Can you post or email it?
===
I am happy to remind you that you're a few years out of date:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V10/man/man1/sh.1
Norman Wilson
Toronto ON
So a number of Unix luminaries were photographed as part of the "Faces of
Open Source" project. I have to admit, the photos themselves are quite
good: https://www.facesofopensource.com/collect/
It seems that the photographer is now selling NFTs based on those photos,
which is...a thing.
- Dan C.
> From: Paul Ruizendaal
> Does anyone remember, was this a real life bug back in 6th edition
The 'V6' at MIT (actually, PWB1) never had an issue, but then again,
its TTY driver (here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/mit/dmr/tty.c
if anyone wants to see it) was heavily re-written. But from the below,
it's almost certainly nothing to do with the TTY code...
> From: Dave Plonka
> one experiment we did was to redirect the bas(1)ic program's output
> to a file and what we found was that (a) characters would still
> sometimes be lost
Good test.
If you all want to chase this down (I can lend V6 expertise, if needed), I'd
say the first step is to work out whether it's the application, or the
system, losing the characters. To do that, I'd put a little bit of code in
write() to store a copy of data sent through that in a circular buffer, along
with tagging it with the writing process, etc.
Once you figure out where it's getting lost, then you can move on to
how/why.
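A sketch of the sort of instrumentation meant here (assumed names throughout, not actual V6 kernel code):

/* Keep the last TBUFSZ characters passed through write(), each
 * tagged with the writing process, so a post-mortem can show
 * whether the application ever handed the lost bytes to the kernel. */
#define TBUFSZ 4096

struct tent {
	int  t_pid;   /* process that issued the write */
	char t_c;     /* one character of its data */
};

static struct tent tbuf[TBUFSZ];
static int tidx;

void trace_write(int pid, const char *buf, int n)
{
	int i;
	for (i = 0; i < n; i++) {
		tbuf[tidx].t_pid = pid;
		tbuf[tidx].t_c = buf[i];
		tidx = (tidx + 1) % TBUFSZ;   /* circular: overwrite oldest */
	}
}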
> From: Clem Cole
> First Sixth Edition does not have support for either the 11/23
Yeah, but it's super-trivial to add /23 support to V6:
http://gunkies.org/wiki/Running_UNIX_V6_on_an_-11/23
The only places where change is needed (no LKS register, no switch register,
and support for more than 256KB of main memory - and that last one can be
done without) are such that it's hard to see how they could cause this problem.
> One other thought, I'm pretty sure that Noel's V6+ system from MIT can
> support a 23
No, we never ran that on a /23 BITD (no need, no mass storage); and I have
yet to bring the V6+ system up (although I have all the bits, and intend,
at some point, to get its TCP/IP running). I've been using stock (well,
hacked a bit, in a number of ways - e.g. 8-bit serial line output) V6.
Noel
I am making some slow progress on the topic of Tom Reiser’s 32V with virtual memory.
Two more names popped up of folks who worked with his virtual memory code base at Bell Labs / USG in the early 80’s: Robert (Bob) Baron and Jim McCormick. Bob Baron was later working on Mach at CMU.
If anybody on this list has contact suggestions for these two folks, please send a private message.
Paul
> While doing some end of year retrocomputing revisiting, I thought some
> of you might enjoy this - there is hope for the next generation(s)! ;)
> https://www.youtube.com/watch?v=_Zyng5Ob-e8 <https://www.youtube.com/watch?v=_Zyng5Ob-e8>
Thanks for that video link!
I noticed the bit at the end about V6 and the occasional dropped character and that this was not a serial line issue. I have the same issue in my V6 port to the TI-990 and always assumed that it was a bug I introduced myself when hacking the tty driver.
Does anyone remember, was this a real-life bug in 6th edition back in the 1970’s? Maybe only showing at higher baud rates?
Paul
> there was a commercial package called Spag which claimed to un-spaghetti-ify your code, which i always wanted but could never afford.
You needed struct(1) in v7. It did precisely that, converting Fortran
to Ratfor. Amazingly (to me, anyway) it embodied a theorem: a Fortran
program has a canonical form. People found the converted code to be
easier to understand--even when they had written the original code
themselves.
Doug
hi,
having supported Pafec and then in a different job flow3d, i was most interested in anything that could make large fortran packages more manageable.
there was a commercial package called Spag which claimed to un-spaghetti-ify your code, which i always wanted but could never afford.
the best i managed was sed and awk scripts to split huge fortran files into one file per function and build a makefile. this at least made rebuilds quicker.
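Roughly this sort of thing -- a sketch in POSIX awk, assuming one routine per block with END on a line by itself (which real fixed-form Fortran does not always honor):

# split each SUBROUTINE/FUNCTION/PROGRAM into its own .f file
/^[ \t]*(SUBROUTINE|FUNCTION|PROGRAM)/ {
	name = $2
	sub(/\(.*/, "", name)          # strip the argument list
	out = tolower(name) ".f"
}
out != "" { print >> out }             # lines outside a routine are dropped
/^[ \t]*END[ \t]*$/ {
	if (out != "") close(out)
	out = ""
}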
i do not miss maintaining fortran code hacked by dozens of people over many decades.
-Steve
Hi folks!
While doing some end of year retrocomputing revisiting, I thought some
of you might enjoy this - there is hope for the next generation(s)! ;)
https://www.youtube.com/watch?v=_Zyng5Ob-e8
In this video I share my personal pick for "best" demo at VCF
Midwest: Gavin's PDP 11/23 running UNIX Version 6! We write and run a
simple BASIC program in Ken Thompson's bas(1), finding some quirks
with this (currently) entirely floppy-based system, possibly having to
do with a glitch in disk I/O. (We discovered bas(1) uses a temporary
file as backing store.)
Filmed at the Vintage Computer Festival Midwest: VCF Midwest 16,
September 11, 2021
http://vcfmw.org/
Here's the source code to the simple program we wrote; you can also
run it on modern machines if you install a Research UNIX version using
SimH (pdp-11 simulator).
5 goto 30
10 for col = 1 arg(1)
12 prompt " "
14 next
20 print "Welcome to VCF Midwest!"
25 return
30 for x = 0 55
40 10(x)
50 next
60 for x = _56 _1
70 10(_x)
80 next
--
dave(a)plonka.us http://www.cs.wisc.edu/~plonka/
Hi TUHS folks!
After having reincarnated ratfor, I am wondering about Stuart Feldman's
efl (extended fortran language). It was a real compiler that let you
define structs, and generated more or less readable Fortran code.
I have the impression that it was pretty cool, but that it just didn't
catch on. So:
- Did anyone here ever use it personally?
- Is my impression that it didn't catch on correct? Or am I ignorant?
Thoughts etc. welcome. :-)
Thanks,
Arnold
Spurred on by Bryan, I thought I should properly introduce myself:
I am a fairly young Unix devotee, having gotten my start with System V on a Wang word processing system (believe it or not, they made one!), at my mother’s office, in the late 1980s. My first personal system, which ran SLS Linux, came about in 1992.
I am a member of the Vintage Computing Federation, and have given talks and made exhibits on Unix history at VCF’s museum, in Wall, New Jersey. I have also had the pleasure to show Brian Kernighan and Ken Thompson, who are two of my computing heroes, my exhibit on the origins of BSD Unix on the Intel 386. I learned C from Brian’s book, as probably did many others here.
I have spent my entire professional career supporting Unix, in some form or another. I started with SunOS at the National Institutes of Health, in Bethesda, Maryland, and moved on to Solaris, HP-UX, SCO, and finally Linux. I worked for AT&T, in Virginia, in the early 2000s, but there were few vestiges of Unix present, other than some 3b1 and 3b2 monitors and keyboards.
I currently work for Red Hat, in Tyson’s Corner, Virginia, as a principal sales engineer, where I spend most of my time teaching and presenting at conferences, both in person and virtual.
Thank you to everyone here who created the tools that have enabled my career and love of computing!
- Alexander Jacocks
Hello!
I have just joined this mailing list recently, and figured I would give
an introduction to myself.
My first encounter with Unix took place in 2006 when I started my
undergraduate studies in Computer Science. The main servers all ran
Solaris, and we accessed them via thin clients. Eventually I wanted a
similar feeling operating system for my personal computer, so that I
could do my assignments without having to always log into the school
servers, and so I came across Linux. I hopped around for a while, but
eventually settled with Slackware for my personal computers. Nowadays I
run a mixture of Linux and BSD operating systems for various purposes.
Unfortunately my day job has me writing desktop software for Windows (no
fun there :(), so I'm thankful to have found a group of people with
similar computing interests as myself, and I look forward to chatting
with you all!
Regards,
Bryan St. Amour
OK, this is my last _civil_ request to stop email-bombing both lists with
traffic. In the future, I will say publicly _exactly_ what I think - and if
screens still had phosphor, it would probably peel it off.
I can see that there are cases when one might validly want to post to both
lists - e.g. when starting a new discussion. However, one of the two should
_always_ be BCC'd, so that simple use of reply won't generate a copy to
both. I would suggest that one might say something like 'this discussion is
probably best continued on the <foo> list' - which could be seeded by BCCing
the _other_.
Thank you.
Noel
http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fission.pdf
Duncan Mak wrote
> Haskell's powerful higher-level functions
> make middling fragments of code very clear, but can compress large
> code to opacity. Jeremy Gibbons, a high priest of functional
> programming, even wrote a paper about deconstructing such wonders for
> improved readability.
>
I went looking for this paper by Jeremy Gibbons here:
https://dblp.org/pid/53/1090.html but didn't find anything resembling it.
What's the name of the paper?
All, I got this e-mail forwarded on from John Fox via Eric S. Raymond.
Cheers, Warren
Hi Eric, I think you might find this interesting.
I have a 2001 copy of your book. I dog-eared page 9 twenty years ago
because of this section:
It spread very rapidly within AT&T, in spite of the lack of any
formal support program for it. By 1980 it had spread to a large
number of university and research computing sites, and thousands
of hackers considered it home.
Regarding the "spread", I believe one of the contributing factors
was AT&T's decision to give the source code away to universities.
And in doing so, unwittingly provided the fertile soil for open
source development.
I happen to know the man who made that decision. He was my
father-in-law. He died Tuesday. He had no idea what UNIX was, and
had no idea what his decision helped to create. Funny when things we
do have such a major impact without us even knowing. That was
certainly true in this case.
Anyway, I thought you'd be interested to know. His name is John
(Jack) H. Bolt. He was 95.
PS, before making the decision, he called Ken Olson at DEC to see if
he'd be interested in buying it, lock, stock, and barrel. Jack's
opening offer was $250k. Olson wasn't interested. And on that,
Jack's decision was made.
John Fox
>> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
> You seem to have a gift for notation. That's rare. Curious what you think of APL?
I take credit as a go-between, not as an inventor. Ken Knowlton
introduced the notation ABC in BEFLIX, a pixel-based animation
language. Ken didn't need an operator because identifiers were single
letters. I showed Ken's scheme to Bud Lawson, the originator of PL/I's
pointer facility. Bud liked it and came up with the vivid -> notation
to accommodate longer identifiers.
If I had a real gift of notation I would have come up with the pipe
symbol. In my original notation ls|wc was written ls>wc>. Ken Thompson
invented | a couple of months later. That was so influential that
recently, in a paper that had nothing to do with Unix, I saw |
referred to as the "pipe character"!
APL is a fascinating invention, but can be so compact as to be
inscrutable. (I confess not to have practiced APL enough to become
fluent.) In the same vein, Haskell's powerful higher-level functions
make middling fragments of code very clear, but can compress large
code to opacity. Jeremy Gibbons, a high priest of functional
programming, even wrote a paper about deconstructing such wonders for
improved readability.
Human impatience balks at tarrying over a saying that puts so much in
a small space. Yet it helps once you learn it. Try reading transcripts
of medieval Arabic algebra carried out in words rather than symbols.
Iverson's hardware descriptions in APL are another case where
symbology pays off.
Doug
Hi All.
Mainly for fun (sic), I decided to revive the Ratfor (Rational
Fortran) preprocessor. Please see:
https://github.com/arnoldrobbins/ratfor
I started with the V6 code, then added the V7, V8 and V10 versions
on top of it. Each one has its own branch so that you can look
at the original code, if you wish. The man page and the paper from
the V7 manual are also included.
Starting with the Tenth Edition version, I set about to modernize
the code and get it to compile and run on a modern-day system.
(ANSI style declarations and function headers, modern include files,
use of getopt, and most importantly, correct use of Yacc yyval and
yylval variables.)
You will need Berkeley Yacc installed as byacc in order to build it.
I have only touch-tested it, but so far it seems OK. 'make' runs in like 2
seconds, really quick. On my Ubuntu Linux systems, it compiles with
no warnings.
I hope to eventually add a test suite also, if I can steal some time.
Before anyone asks, no, I don't think anybody today has any real use
for it. This was simply "for fun", and because Ratfor has a soft
spot in my heart. "Software Tools" was, for me, the most influential
programming book that I ever read. I don't think there's a better
book to convey the "zen" of Unix.
Thanks,
Arnold
I believe that the PDP-11 ISA was defined at a time when DEC was still using
random logic rather than a control store (which came pretty soon
thereafter). Given a random logic design it's efficient to organize the ISA
encoding to maximize its regularity. Probably also of some benefit to
compilers in a memory-constrained environment?
I'm not sure at what point in time we can say "lots of processors" had moved
to a control store based implementation. Certainly the IBM System/360 was
there in the mid-60's. HP was there by the late 60's.
-----Original Message-----
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> On Behalf Of Larry McVoy
Sent: Monday, November 29, 2021 10:18 PM
To: Clem Cole <clemc(a)ccc.com>
Cc: TUHS main list <tuhs(a)minnie.tuhs.org>; Eugene Miya <eugene(a)soe.ucsc.edu>
Subject: Re: [TUHS] A New History of Modern Computing - my thoughts
On Sun, Nov 28, 2021 at 05:12:44PM -0800, Larry McVoy wrote:
> I remember Ken Witte (my TA for the PDP-11 class) trying to get me to
> see how easy it was to read the octal. If I remember correctly (and I
> probably don't, this was ~40 years ago), the instructions were divided
> into fields, so instruction, operand, operand and it was all regular,
> so you could see that this was some form of an add or whatever, it got
> the values from these registers and put it in that register.
I've looked it up and it is pretty much as Ken described. The weird thing
is that there is no need to do it like the PDP-11 did it, you could use
random numbers for each instruction and lots of processors did pretty much
that. The PDP-11 didn't, it was very uniform to the point that Ken's
ability to read octal made perfect sense. I was never that good but a
little google and reading and I can see how he got there.
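A worked example of that regularity (a sketch; 010203 really is MOV R2,R3 in the PDP-11 double-operand format, and the octal digits read straight off as fields):

/* Pull apart a PDP-11 double-operand instruction word: a 4-bit
 * opcode, then 3-bit mode / 3-bit register for both source and
 * destination -- which is what makes the octal legible. */
#include <stdio.h>

int main(void)
{
	unsigned w = 010203;   /* MOV R2,R3 */
	printf("opcode %o  src mode %o reg %o  dst mode %o reg %o\n",
	       (w >> 12) & 017,   /* 01 = MOV */
	       (w >> 9) & 07,     /* source mode 0 = register */
	       (w >> 6) & 07,     /* source register 2 */
	       (w >> 3) & 07,     /* destination mode 0 */
	       w & 07);           /* destination register 3 */
	return 0;
}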
...
--lm
For DEC memos on designing the PDP-11, see bitsavers:
http://www.bitsavers.org/pdf/dec/pdp11/memos/
(thank you Bitsavers! I love that archive)
Ad van de Goor (author of a few of the memos) was my MSc thesis professor. I recall him saying in the early 80’s that in his view the PDP-11 should have been an 18-bit machine; he reasoned that even in the late 60’s it was obvious that 16 bits of address space was not enough for the lifespan of the design.
---
For those who want to experiment with FPGAs and ancient ISAs, here is my plain Verilog code for the TI 9995 chip, which has an instruction set that is highly reminiscent of the PDP-11:
https://gitlab.com/pnru/cortex/-/tree/master
The actual CPU code (TMS99095.v) is less than 1000 lines of code.
Paul
Eugene Miya visited me last week and accidentally left his copy of the
book here so I decided to read it before he came back to pick it up.
My overall impression is that while it contained a lot of information,
it wasn't presented in a manner that I found interesting. I don't know
the intended target audience, but it's not me.
A good part of it is that my interest is in the evolution of technology.
I think that a more accurate title for the book would be "A New History
of the Business of Modern Computing". The book was thorough in covering
the number of each type of machine sold and how much money was made, but
that's only of passing interest to me. Were it me I would have just
summarized all that in a table and used the space to tell some engaging
anecdotes.
There were a number of things that I felt the book glossed over or missed
completely.
One is that I didn't think that they gave sufficient credit to the symbiosis
between C and the PDP-11 instruction set and the degree to which the PDP-11
was enormously influential.
Another is that I felt that the book didn't give computer graphics adequate
treatment. I realize that it was primarily in the workstation market segment
which was not as large as some of the other segments, but in my opinion the
development of the technology was hugely important as it eventually became
commodified and highly profitable.
Probably due to my personal involvement I felt that the book missed some
important steps along the path toward open source. In particular, it used
the IPO of Red Hat as the seminal moment while not even mentioning the role
of Cygnus. My opinion is that Cygnus was a huge icebreaker in the adoption
of open source by the business world, and that the Red Hat IPO was just the
culmination.
I also didn't feel that there was any message or takeaways for readers. I
didn't get any "based on all this I should go and do that" sort of feeling.
If the purpose of the book was to present a dry history then it pretty much
did its job. Obviously the authors had to pick and choose what to write
about and I would have made some different choices. But, not my book.
Jon
> The ++ operator appears to have been.
One would expect that most people on this list would have read "The
Development of the C Language", by Dennis Ritchie, which makes perfectly clear
(at 'More History') that the PDP-11 had nothing to do with it:
Thompson went a step further by inventing the ++ and -- operators, which
increment or decrement; their prefix or postfix position determines whether
the alteration occurs before or after noting the value of the operand. They
were not in the earliest versions of B, but appeared along the way. People
often guess that they were created to use the auto-increment and
auto-decrement address modes provided by the DEC PDP-11 on which C and Unix
first became popular. This is historically impossible, since there was no
PDP-11 when B was developed.
https://www.bell-labs.com/usr/dmr/www/chist.html
thereby alleviating the need for Ken to chime in (although they do allow a
very efficient implementation of it).
Too much to hope for, I guess.
Noel
> From: "Charles H. Sauer" <sauer(a)technologists.com>
> I haven't done anything with 9-track tapes for a long time ...
> I don't recall problems reading any of them. ...
> IMNSHO, it all depends on the brand/formulation of the tape. I've been
> going through old audio tapes and digitizing them
The vintage computer community has considerable experience with old tapes; in
fact Chuck Guzis has a business reading them (which often includes converting
old file formats to something modern software can grok).
We originally depended heavily on the work of the vintage audio community, who
pioneered working with old tapes, including the discovery of 'baking' them to
improve their mechanical playability. ("the binder used to adhere the magnetic
material to the backing ... becomes unstable" - playing such a tape will
transfer much of the magnetic material to the head, destroying the tape's
contents.)
It's amazing how bad a tape can be, and still be readable. I had a couple of
dump tapes of the CSR PWB1 machine at MIT, which I had thoughtlessly stored in
my (at one period damp) basement, and they were covered in mold - and not just
on the edges! Chuck had to build a special fixture to clean off the mold, but
we read most of the first tape. (I had thoughtfully made a second copy, which
read perfectly.)
Then I had to work out what the format was - it turned out that even though
the machine had a V6 filesystem, my tape was a 'dd' of a BSD4.1c filesystem
(for reasons I eventually worked out, but won't bore you all with). Dave
Bridgham managed to mount that under Linux, and transform it into a TAR
file. That was the source of many old treasures, including the V6 NCP UNIX.
Noel
> What, if any, features does PL/I have that are not realized in a modern language?
Here are a few dredged from the mental cranny where they have
mouldered for 50+ years.
1. Assignment by name. If A and B are structs (not official PL/I
terminology), then A = B, BY NAME, copies similarly named fields of B to
corresponding fields in A.
2. Both binary and decimal data with arithmetic rounded to any
specified precision.
3. Bit strings of arbitrary length, with bitwise Boolean operations
plus substr and catenation.
4. A named array is normally passed by reference, as in F(A). But if
the argument is not a bare name, as in F((A)), it is passed by value.
5. IO by name. On input this behaves like assignment from a constant,
with appropriate type conversion.
6. A SORT statement.
7. Astonishingly complete set of implicit data conversions. E.g. if X
is floating-point and S is a string, the assignment X = S works when S
= "2" and raises an exception (not PL/I terminology) when S = "A".
My 1967 contribution to ACM collected algorithms exploited 3 and 4. I
don't know another language in which that algorithm is as succinct.
Doug
DEC's VAX/VMS group got a customer bug report that was accompanied by
a 9-track tape containing the programs and data necessary to reproduce
the problem. When the engineer mounted the tape, it contained
completely different data. He tried a different tape drive and this
time he got the expected data. It turned out that the customer had
reused the tape and recorded the reproducer at 1600 bpi on top of
previous data recorded at 800 bpi. If the tape was mounted such that
the drive didn't see the PE burst, it could still read the
NRZI-encoded 800 bpi data.
-Paul W.
The following remark stirred old memories. Apologies for straying off
the path of TUHS.
> I have gotten the impression that [PL/I] was a language that was beloved by no one.
As I was a designer of PL/I, an implementer of EPL (the preliminary
PL/I compiler used to build Multics), and author of the first PL/I
program to appear in the ACM Collected Algorithms, it's a bit hard to
admit that PL/I was "insignificant". I'm proud, though, of having
conceived the SIGNAL statement, which pioneered exception handling,
and the USES and SETS attributes, which unfortunately sank into
oblivion. I also spurred Bud Lawson to invent -> for pointer-chasing.
The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
After the ACM program I never wrote another line of PL/I.
Gratification finally came forty years on when I met a retired
programmer who, unaware of my PL/I connection, volunteered that she
had loved PL/I above all other programming languages.
Doug
I remember getting an early RT without being given the root password. Now, there hadn’t been too many Unix boxes I wasn’t able to break into. On the RT you could turn the key to the wrench symbol and get into the maintenance menus, select some document to view (it was displayed with more), and bang a root shell out of that.
> On Nov 24, 2021, at 14:42, Clem Cole <clemc(a)ccc.com> wrote:
>
>
>
>
>> On Wed, Nov 24, 2021 at 1:43 PM Larry McVoy <lm(a)mcvoy.com> wrote:
>> SMIT - just say no.
> There were a number of things IBM did well with AIX so I'm not quite so glib knocking everything from them.
>
>
Larry McVoy:
Say hi to Barry for me, I knew him back in the UUCP days, he was always
pleasant.
====
And for me. Knew him back in my DECUS days. Also tell him
that since he runs the World, it's all his fault.
Norman Wilson
Toronto ON
Here are two anecdotes that Doug suggested I share with TUHS (I am new to
TUHS, having joined just last month).
*First*:
*The creation of access(2).*
[Marc Rochkind documented a version of this on page 56 of his book *Advanced
Unix Programming* (1985, First Edition) discussing link(2). The footnote
on that page says "Alan L. Glasser and I used this scheme to break into
Dennis Ritchie and Ken Thompson's system back in 1973 or 1974."]
Doug pointed out that the timing was probably later, as access(2) was not
in the Sixth Edition release, but probably right after the release (after
May 1975?).
It arose from a discussion I was having with Marc, with whom I worked on
SCCS and other PWB tools. We were discussing some mechanism that would
require moving directories (actually, simple renaming) in a shell
procedure. I told Marc that only root could make links to directories or
unlink directories, but he told me that he has renamed directories with the
mv command. I said then mv must be setuid to root, so we looked, and, of
course, it was. I then looked at the source code for mv and quickly saw
that there was no attempt to check permission on the real uid. So I told
Marc it would allow anyone to become root. He wanted to see it in action,
so I logged into research (I don’t remember what our organization's shared
login was). No one in our organization had root access on research. Marc
and I didn't have root access on our organization's machines; Dick Haight
et. al. didn't share that privilege (Dick was the manager of the
super-users). I think the actual sequence of commands was:
cd /
cp etc/passwd tmp
ed tmp/passwd
1s/^root:[^:]*:/root::/
w
q
mv etc etc2
mv tmp etc
su
mv etc tmp
mv etc2 etc
mv etc/as2 etc/.as2
{logout, hangup and wonder}
The last bit was a test to see what was noticed about what I did.
Marc and I talked for a while about it and discussed if we had any need to
be root on our local machines, but couldn't think of any pressing need, but
knowing we could was a bit of a comfort. After a short time, Marc
suggested logging back in to see what, if anything, had been done.
/bin/mv had lost setuid to root
/etc/as2 was restored
/etc/.as2 was nonexistent
And the next day, the motd on research mentioned that there's a new
syscall: access. And mv(1) now used it.
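The point of the new call, sketched below: access(2) checks permission against the *real* uid/gid, which is exactly the question a setuid-root mv needs to ask before acting with root privilege.

/* Before doing privileged work, a setuid program can now ask:
 * could the user who invoked me write this file on his own? */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	if (argc != 2) {
		fprintf(stderr, "usage: canwrite file\n");
		return 2;
	}
	if (access(argv[1], W_OK) == 0) {   /* checks the real uid/gid */
		printf("%s: writable by the real user\n", argv[1]);
		return 0;
	}
	printf("%s: NOT writable by the real user\n", argv[1]);
	return 1;
}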
*Second*:
Our organization was one subject (there may have been others) of Ken's
*codenih*, which he documented in his Turing Award article in CACM.
As previously described, root access was closely guarded in the PWB
organization and, according to Doug, freely available in research. Ken had
given us a login that was shared by PWB development and we had given Ken a
login to our systems. We had no root access on research and Ken had no root
access on our systems.
Our C compiler guy, Rich Graveman, who kept in close contact with Dennis
and was always getting us the latest compiler to install, had gone to MH
and came back with a tape of a new compiler. Rich, being a careful fellow,
did a size on c0, c1, c2 on the files from the tape and did the same on the
running compiler pieces in /lib.
Lo and behold, he discovered that the new compiler from Dennis was smaller
than the old compiler even though it had a whole new feature (I think it
was union). So Rich did nm's on the two different c0's and discovered a
name "codenih" in the old compiler that wasn't in the new one from Dennis.
He logged into research, cd'ed to /usr/ken and did an ls -ld codenih,
followed by a cd codenih. Was he surprised! Then he went back to a local
machine and tried to login as root/codenih, which, of course, worked. He
was even more surprised and told a number of colleagues, myself included.
(I logged into research and printed out the source in /usr/ken/codenih. I
was super impressed.)
I think you misunderstood the codenih bit.
As Ken had given us a (shared among a few of us) login, we had given him
one.
And Dick Haight refused him root access.
And no one in PY had root access on research.
So much for denying Ken root access on the PWB systems.
Ken "infected" the PWB C compiler with codenih, which gave him free rein.
I don't know how or when he first installed it, but I suspect he was aware
of any extant security holes (e.g., the mv setuid root) to allow him to
replace the compiler the first time.
I don't know if the PWB crowd was the impetus for Ken writing codenih or if
it was something he had used on others prior to us or if he ever used it on
anyone else.
Needless to say, Dick Haight was beside himself.
I just thought it was a great feat of programming and was thrilled when he
described it in CACM.
Alan
> I was deeply motivated by TMG. The good news is that you could say what
> you wanted and it just did it. The bad news was the error handling.
> Because it was recursive if you made a syntax error the program
> backtracked and tried again. And again. And again. And eventually was
> back at the first character of the input with nowhere to go. So it
> issued one of its two messages -- "Syntax Error".
This is somewhat of a caricature. If one's compilation strategy were
to build an abstract syntax tree of the whole source program before
emitting any object code, then things would go as Steve describes.
In reality, TMG wasn't used this way. For one thing, ASTs needed
too much memory.
In TMG one could emit code while parsing. Typically code
was generated on a statement-by-statement basis. This limited
the backtracking, so even a naive "syntax error" diagnostic could
be localized to the statement level. Long-distance connections, such
as the branch labels in a nest of if statements, could nevertheless
be realized recursively.
Thus, in actual use, a TMG "grammar" was partly a parsing program
and partly abstract specification. The balance of viewpoints was
left to the discretion of the grammar's author. Yacc swung the
pendulum toward the abstract.
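A toy illustration of the difference (assumed grammar and code; nothing here is actual TMG): parse and emit statement by statement, so a failure backtracks only to the start of the offending statement, never to the first character of the program.

#include <stdio.h>
#include <string.h>

static const char *p;   /* cursor into the source text */

/* Try each alternative; on failure restore p (backtrack) -- but
 * never further back than the start of the current statement. */
static int stmt(void)
{
	const char *save = p;
	if (strncmp(p, "inc ", 4) == 0) {
		printf("ADD 1,%c\n", p[4]);   /* emit code immediately */
		p += 6;
		return 1;
	}
	p = save;
	if (strncmp(p, "dec ", 4) == 0) {
		printf("SUB 1,%c\n", p[4]);
		p += 6;
		return 1;
	}
	p = save;
	return 0;
}

int main(void)
{
	p = "inc a;dec b;foo c;inc d;";
	while (*p) {
		if (!stmt()) {
			fprintf(stderr, "syntax error near: %.6s\n", p);
			while (*p && *p != ';')
				p++;              /* resync at the next ';' */
			if (*p)
				p++;
		}
	}
	return 0;
}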
Doug
I can say unequivocally that I asked Ken for grep, and the next day it
was in /bin.
But ...
I understand now that grep already existed in /usr/ken/bin. My request
was the last straw that tipped the balance from private to public.
Hopefully Ken can answer whether he made it originally for himself or
for Lee.
Doug
The discussions under this subject line have somewhat strayed from
Unix heritage issues, but because several people have contributed
views of assorted programming languages that mostly grew up on
Unix-family systems, I decided to add this memory.
Several years ago, I attended a talk by Dan McCracken (1930--2011),
noted book author in computer areas, and VP and later President of the
ACM (1978--1980). His talk was about programming languages, and was
done in a question/answer format, with Dan offering both parts.
He began:
Q: In 10 years, which of the following programming languages will
still be in use: Basic, Cobol, Fortran, PL/I, ....
A: First of all, Basic is not a programming language. Then ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Hello,
I tried asking question below awhile back - would anyone know the proper answer?
Many Thanks,
Silas
>
> Hello,
>
> Recently watched couple of videos[1][2] from Ken and Brian Kernighan
> on the origin of grep.
>
> In [1], Brian suggests Lee McMahon asked Ken for grep to help with
> federalist papers analysis.
>
> However Ken states in [2] that Doug Mcllroy asked for someway to “look
> for things in files”.
>
> Is this just Brian misremembering or multiple people asking around similar
> times?
>
> Many thanks,
> Silas
>
> [1]Where GREP Came From - Computerphile
> <youtube.com/watch?v=NTfOnGZUZDk>
>
> [2]VCF East 2019 -- Brian Kernighan interviews Ken Thompson
> <https://www.youtube.com/watch?v=EY6q5dv_B-o>
Here’s another angle on Perl, perhaps more on topic for TUHS. Let’s accept for a minute that Perl filled the void between C and shell scripts, and that there was a latent need for a scripting language like this on Unix.
The shell, awk, sed, etc. had arrived at more or less fully formed versions by 1980. Perl (and TCL) did not appear until the very end of the 1980’s. What filled the gap in that decade, if anything?
Ancient Unix has ‘bs’ https://en.wikipedia.org/wiki/Bs_(programming_language) but this seems to have had very little use.
Paul
I was a member of the team that typed in scans of PDP-7 UNIX (the
first batch of scans done didn't include the shell, so I cobbled one
together in March of 2016).
Scans of a second batch of listings turned up and were entered two
years ago (October 2019), including the original shell, and appeared
to be part of Doug McIlroy's implementation of TMG (TransMoGrifier),
the compiler compiler first used to implement B.
In January 2020 we got confirmation that the files t1.s thru t8.s
were, in fact, for TMG, but that we were missing the compiler for the
TMGL language, written in TMGL and the generated code.
In what is perhaps best described as a crazed act, over the past two
months I've worked to recreate a working TMG environment on PDP-7
UNIX, including a B compiler in TMGL, currently available at:
https://github.com/philbudne/pdp7-unix/tree/tmg
A good starting place is
https://github.com/philbudne/pdp7-unix/blob/tmg/misc/tmg-notes.txt
which started as my collected notes, questions and findings, and I've
expanded it with prose, observations and thoughts that could, at least
conceivably, be of interest to those not as oriented towards
self-punishment as I am.
(and on that topic, if you're looking for someone to expand, contract,
or otherwise deal with some seemingly intractable legacy code, let me
know: http://www.regressive.org/phil/resume.html)
>>I take credit as a go-between, not as an inventor. Ken Knowlton
>>introduced the notation ABC in BEFLIX, a pixel-based animation
>>language.
> In BEFLIX, 'ABC' meant what, exactly? Offsets from pixel locations?
It meant exactly what A->B->C means in C. Pixels had properties,
represented in data structures. One valid data type was pointer.
Incidentally, another BEFLIX innovation was the buddy system for
dynamic storage allocation.
Doug
While waiting to see the full text, I've poked around the index for
subjects of interest. It certainly is copious, and knows about a lot
of things that I don't.
The authors make a reasonable choice in identifying the dawn of
"modern computing" with Eniac and relegating non-electronic machines
to prehistory. Still, I was glad to find the prophetic Konrad Zuse
mentioned, but disappointed not to find the Bell Labs pioneer, George
Stibitz.
Among programming languages, Fortran, which changed the nature of
programming, is merely hinted at (buried in the forgettable Fortran
Monitoring System), while its insignificant offspring PL/I is present.
(Possibly this is an indexing oversight. John Backus, who led the
Fortran project, is mentioned quite early in the book.) Algol, Lisp,
Simula and Smalltalk quite properly make the list, but Basic rates
more coverage than any of them. C, obscurely alphabetized as "C
language", is treated generously, as is Unix in general.
Surprisingly there's almost nothing in the index about security or
privacy. The litany of whiggish chapters about various uses of
computers needs a cautionary complement. "The computer attracts crooks
and spies" would be a stylistically consistent title.
Doug
> My belief is that perl was written to replace a lot of Unix pipelines,
I understand Perl's motive to have been a lot like PL/I's: subsume
several popular styles of programming in one language. PL/I's ensemble
wasn't always harmonious. Perl's was cacophony.
Doug
At 11:00 AM 11/16/2021, Douglas McIlroy wrote:
>>> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
>
>> You seem to have a gift for notation. That's rare. Curious what you think of APL?
>
>I take credit as a go-between, not as an inventor. Ken Knowlton
>introduced the notation ABC in BEFLIX, a pixel-based animation
>language.
In BEFLIX, 'ABC' meant what, exactly? Offsets from pixel locations?
- John
A private message with Uh, Clem reminds me of another quaint piece of
UNIX group history: JHU Ownership.
The original V6 kernel and file systems used a char for UID and GID.
This meant that you could only have 255 (plus the root user) distinct
users on the machine. The JHU PDP-11/45 was used for the EE classes
and we had more than that many users. The kernel was modified to
check if the GID was 200 or greater. If it was, that was taken along
with the UID to be part of the user identity. We gave all the class
accounts such GIDs.
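A sketch of the identity check described (assumed names; the actual change was spread through the kernel's permission tests):

/* With uid and gid each only a char, treat (uid,gid) together as
 * the identity whenever the gid is a class gid (200 or greater). */
struct ident {
	unsigned char i_uid;
	unsigned char i_gid;
};

int same_user(struct ident a, struct ident b)
{
	if (a.i_uid != b.i_uid)
		return 0;
	if (a.i_gid >= 200 || b.i_gid >= 200)
		return a.i_gid == b.i_gid;   /* class accounts: gid counts too */
	return 1;
}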
Of course, we had to be careful about newgrp and fun and games with
setuid/setgid (both the system call and the bits on the executables).
I spent a lot of my time looking for exploits there and fixing them once
I (or someone else) found them.
Hi,
Will someone please explain the history and usage of gpasswd / newgrp / sg?
I've run across them again recently while reading old Unix texts. I've
been aware of them for years, but I've never actually used them for
anything beyond kicking the tires. So I figured that I'd inquire of the
hive mind that is TUHS / COFF.
When was the concept of group passwords introduced?
What was the problem that group passwords were the solution for?
How common was the use of group passwords?
I ran into one comment indicating that they used newgrp to work around a
limitation in the number of (secondary) groups in relation to an NFS
implementation. Specifically that the implementation of NFS they were
using didn't support more than 16 groups. So they would switch their
primary group to work around this limit.
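A sketch of that workaround in use (the group name is hypothetical):

id            # note the primary gid and the supplementary list
newgrp proj   # start a new shell whose primary group is proj
id            # gid is now proj -- visible even to an NFS server
              # that ignores supplementary groups past the 16th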
Does anyone have any interesting stories related to group passwords /
gpasswd / newgrp / sg?
--
Grant. . . .
unix || die
Can people **please** send posts to one of these two lists, only? Having to go
through and delete every other post (yeah, I know, I could relete _all_
messages to either list, since they are archived, but old habits are hard to
break) is _really_ annoying.
OK, I can see sending an _initial_ query to both lists, to get it to as wide
a circle as possible: _but_ BCC at least one of them, to prevent lazy people
just hitting 'reply all' and thereby sending out multiple copies of their
reply.
Thank you.
Noel
I wanted to pass on a recommendation of a new book from MIT Press called:
“A New History of Computing” by Thomas Haigh and Paul Ceruzzi, ISBN
978-0-262-54299-0
Full disclosure, I reviewed a bit of it for them and have been eagerly
awaiting final publication.
I do expect a lot of the readers of this mailing list will enjoy it. They
did a super job researching it and it’s very complete and of course,
interesting. FWIW: the work of a number of people who are part of this list
is nicely chronicled.
Clem
--
Sent from a handheld expect more typos than usual
I have been looking for some time for a C Reference Manual from early 1973 (or late 1972) where Dennis comments that multiple array subscripts will eventually have Fortran-like syntax with commas separating rather than multiple sets of square brackets. That was the first C manual I had back when I first learned the language. Silly me, I discarded it when a newer one was issued, not realizing the historical significance of the earlier one.
- Alan
> Is there a trick to make a macro or string return a value?
I know what you mean. Though a string does return a value, it
can't compute an arithmetic result. Alternatively, a macro,
which can use arithmetic, can only return the result as a distinct
input line. (That might be overcome by a trick with \c, but I don't
see one right off.)
Though I have no useful advice about this dilemma, it does spur
memories. I wrote the pre-Unix roff that was reimplemented on
Unix and then superseded by Joe Ossanna's nroff. Joe introduced
macros. Curiously, I had implemented macros in an assembler so
early on (1959) that I have (incorrectly) been cited as the father of
macros, yet it never occurred to me to put them in roff.
Joe's work inspired me to add macros to pre-Unix roff. I did
one thing differently. A macro could be called the usual way or
it could be called inline like an nroff string. The only difference
was that a macro's final newline was omitted when it was
expanded inline. That implementation might have helped with
the dilemma.
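To make the dilemma concrete, a small sketch (register and macro names assumed): the string interpolates inline but cannot compute, while the macro can bump a number register but delivers its result as a separate input line.

.nr Ct 7
.ds hi result
.de UP
.nr Ct +1
The count is now \\n(Ct
..
A string gives its \*(hi inline, mid-sentence.
.UP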
Doug
Just a quick note to announce that the retro-fuse project now supports
mounting seventh-edition file systems for read and write on Linux and
MacOS. As was done for v6, the project incorporates the actual
filesystem code from v7 Unix, lightly modernized to run in user space on
current systems.
The code is available on github for anyone who's interested:
https://github.com/jaylogue/retro-fuse
--Jay
Hello all,
I was wondering if there exists a book on Unix administration, specifically
for v7. I have the Unix programmer's book already.
Regards
Joseph Turco
> From: "Ron Natalie"
> However, the last NCP host table shows this statistic for DEC machines
> on the NCP Arpanet
> ...
> PDP11 (MOS): 11
> PDP11 (MINITS); 10
Hi, which host table was this that you're looking at?
I'm pretty sure there was no MINITS NCP ('NCP' in the sense of 'Initial
Connection Protocol (ICP)' and 'ARPANET Host-to-Host Protocol (AHHP)' - see
below). There was _certainly_ no MINITS machine on the ARPANET at MIT (the
birthplace of MINITS).
To confirm, I looked at a major MINITS source repository, here:
https://github.com/PDP-10/its/tree/master/src/mits_s
and saw nothing like that. (Not even an 1822 interface driver.)
If you look there, you _will_ see things labelled 'NCP', but this is just a
terminological affliction among the CHAOS people, to whom 'NCP' apparently
meant 'protocol implementation' or 'network code'.
Also, implementations of the 'Host-to-IMP Protocol (HIP)' are _not_ NCP
either; there was an HIP implementation in the C Gateway, but that was
an IP router, one that could connect to an IMP.
IF IT DOESN'T HAVE AHHP, IT'S NOT NCP.
Also, I was intimately familiar with MOS, and neither of the two earliest
applications that ran on it (the TIU, and the Port Expander, both of which I
have the source code for) had any NCP. I looked at a lot of the MOS 'NCP'
listings in an old host table (see here:
https://gunkies.org/wiki/Talk:Network_Control_Program
for details) and concluded that the MOS 'NCP' entries were all 'confused'.
> From: Clem Cole
> I was under the impression, that you folks at MIT did a ChaosNet
> interface, IIRC, so there may have been some sort of conversion on
> your LAN, but I really doubt there was a real NCP running.
The AI Lab did both i) a LAN called CHAOS (4 Mbit/second CSMA-CD over CATV
cable) and ii) a protocol family called CHAOS (which later ran over XDI
Ethernet). I'm not sure that any of it has any relevance to what's under
discussion here.
> But there was a Rand stack around the same time and I think
> Holmgren ended up at UCSB after his time at UIUC. I'm fairly sure there
> was cross-pollination but I don't know how much.
I looked through my V6 Unix NCP, but although there were some RAND #ifdefs, I
didn't see anything about Rand (except that the MMDF is noted as being based
on something done at Rand). I retain the distinct impression that all V6 Unix
NCP machines were running some descendant of the UIUC code. NOSC seems to have
served as a distro center at one point, see:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC/new/dist.log
but I can't tell who they were sending it to.
(We never tried to get it running at MIT since we were out of IMP ports.
By the time we got another IMP, we had IP running on the -11 and
NCP was done anyway.)
As for UCB, there are a bunch of UCBUFMOD #ifdef's, not sure what that
was about.
> As for other NCPs, PARC had MAXC on the net, but I thought it had
> originally a DG Nova front end that was replaced with an Alto.
No, Maxc1 had a Nova, Maxc2 had an Alto.
> From: Paul Ruizendaal
> they started with 32V in the Fall of 1979, and ported UIUC's NCP code
> to it
Thanks for straightening that out. I had a vague memory that there were a
few VAXen that ran NCP, but wasn't sure.
> 2. Note that the BBN TCP worked over NCP as its primary transport.
Your terminology is confused. TCP _never_ ran 'on' NCP; they were
_alternative_ protocol stacks on top of HIP (on the ARPANET). No
AHHP, no NCP.
> The driver is still there if you look
That acc.c is a driver for the ACC 1822 interface; it includes bits of HIP
("Try to send 3 no-ops to IMP") but I don't think it includes the complete HIP.
There are other BSD 1822 device drivers, e.g.:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/sys/pdpif/if_sri.c
That's the BSD2.11 Stanford/SRI 1822 device driver.
Noel
I'm experimenting with my PiDP-11; I think I may have my modem set up properly to accept incoming calls, but with only one phone line I'm unable to test it. If anyone with a modem is willing to help me test, send me a message off-list and I'll give you my phone number & some login details.
john
Has anyone other than the owner of m88k.com preserved Motorola System V/68? Besides that, there’s the SVR1 (v2.8) for the Motorola VME/10 on Bitsavers and that’s about it.
I’m especially curious as to whether anyone has preserved the SVR2 1.1 binary+sources distribution, since there might be useful information in it—or derivable from it—about much of the early VME hardware.
— Chris
Sent from my iPad
Noel wrote:
>> 2. Note that the BBN TCP worked over NCP as its primary transport.
>
> Your terminology is confused. TCP _never_ ran 'on' NCP; they were
> _alternative_ protocol stacks on top of HIP (on the ARPANET). No
> AHHP, no NCP.
Yes, of course you are right. I meant BBN TCP used *Arpanet* as its primary transport and hence has drivers for the IMP interface hardware.
Lars wrote:
> Here's the rub. Some hosts may have jumped the TCP/IP gun ahead of the
> 1/1 1983 flag day. The host tables don't say. Could it be that all
> those VAXen were running experimental TCP/IP in January 1982?
From Mike Muuss’ TCP digest mailing list and a mail conversation with Vint Cerf a few years ago I understood the following. “Flag day” wasn’t as black and white as we remember it now. During 1982 there was a continuous push to move systems to TCP, and over the year more and more systems became dual protocol capable and later even TCP only. Because all TCP traffic used the same, dedicated Arpanet link number, BBN’s network control team could monitor the level of usage. From memory, in the Summer of 1982 traffic was about 50% TCP and by October 70%. Presumably it reached 80-90% by the end of the year.
During 1982 on 3 occasions, network control activated a feature in the IMP’s that refused traffic on link #0, which NCP used to negotiate a connection. This caused NCP applications to stop working. Again from memory, the first outage was a few hours, the second one a day and third one, late in 1982, for two days. This highlighted systems that needed conversion and helped motivate higher ups to approve conversion resources. It seems that making the switch often involved upgrading PDP11 to VAX.
From what I can tell flag day went well, although there were issues with mail gateways that lasted for several weeks.
At the start of 1982 there was no (usable) VAX Unix TCP code that I am aware of. There were several options for the PDP11, but of those I think only the 3COM code worked well. Around March/April there was code from BBN (see TUHS 4.1BSD) and from CSRG (4.1a BSD). A special build of PDP11 2.8BSD with TCP arrived somewhat later. My impression is that this was still the state of play on flag day, with 4.1cBSD only arriving well into 1983.
> I have searched the TUHS archive and elsewhere, but all I
> find for Unix is a copy of the PDP-11 Unix V6 NCP from Illinois.
>
> Has any other NCP implementation for Unix survived? From old host
> tables I think there may have been some VAXen online before the switch
> to TCP/IP.
Lars,
You may want to look at the 4 surviving BBN tapes on Kirk McKusick’s DVD software collection. A small part of that is on the TUHS Unix tree page - see the 4.1BSD entry.
1. A history of NCP on the VAX at BBN can be found in the change log:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/history
In short they started with 32V in the Fall of 1979, and ported UIUC’s NCP code to it in May 1980. They then moved to 4.1BSD in August and ported yet again. It would seem that the ports were fairly straightforward. Coding for TCP begins in January 1981.
2. Note that the BBN TCP worked over NCP as its primary transport. The driver is still there if you look through the surviving BBN tapes. Part of that code is on TUHS:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/dev/acc.c
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/bbnnet-oct82/imp_io.c
It will take some effort, but probably the NCP VAX code can be reconstructed from the surviving PDP11 UIUC code and these BBN tapes (the file names in the change log match).
3. The BBN tapes also have some user level software: telnet, ftp, mtp. This code consists of straight NCP to TCP conversions and the source code has #ifdef’s for NCP and TCP. An example is here:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/src/telnet/netser.c
Hope this helps.
Paul
PS - Info on the DVD is here (bottom of the page):
https://www.mckusick.com/csrg/
Hello,
I'm working on setting up an emulated ARPANET using the original IMP
software recovered some years ago. It turns out, the greatest challenge
is finding the NCP software on the host side that implements the ARPANET
protocols. I have searched the TUHS archive and elsewhere, but all I
find for Unix is a copy of the PDP-11 Unix V6 NCP from Illinois.
Has any other NCP implementation for Unix survived? From old host
tables I think there may have been some VAXen online before the switch
to TCP/IP.
Best regards,
Lars Brinkhoff
hello all,
i am a new unix user, so please excuse my ignorance.
I am trying to setup using unixV7 with simh pdp11 emulator. The guide i am
following is by Will Senn (in PDF form). I have been able to successfully
get the machine to boot with unix, and login as root. what i am having
problems with, is trying to get telnet access via dci to work. when i
follow the guide and do the following:
# cd /usr/sys/conf
# rm l.o c.o
# cp hptmconf myconfnf
# echo 4dc >> myconf
# mkconf < myconf
# make
as - -o l.o l.s
cc -c c.c
ld -o unix -X -i l.o mch.o c.o ../sys/LIB1 ../dev/LIB2
# sum unix
10314
106
# ls -l unix
-rwxrwxr-x 1 root
54122 Dec 31 19:09 unix
etc...
when i issue the mkconf < myconf command, i get a bunch of text printed
out, but with a 'root device not found'. the sum unix value is different,
as well as the size of the ls -l unix file size.. now when i try booting it
with the newly created mboot.ini file (as per the guide), i go to start up
the system with 'hp(0,0)munix' and it starts but hangs with the text 'fault
devtab'
what am I doing wrong?
regards,
Joseph Turco
I’ve gotten Minix 1.5 up and running on Hatari, the Atari ST emulator, and I’d like to update it to the latest in the 1.5 series (1.5.10.7).
The patch sets used to be quite readily available, but the only patch sets I’ve been able to find have been the 1.5.10.3 to 1.5.10.4 patches posted to Usenet (via minnie, thanks!) which won’t apply cleanly to my sources because I’m only running 1.5.
(I know about the 1.6.25-on-Atari efforts, I’m trying to do something different and also fill in some git history…)
— Chris
On Wed, Sep 29, 2021 at 09:40:23AM -0700, Greg A. Woods wrote:
> I think perhaps the problem was that mmap() came too soon in a narrow
> sub-set of the Unix implementations that were around at the time, when
> many couldn't support it well (especially on 32-bit systems -- it really
> only becomes universally useful with either segments or 64-bit and
> larger address spaces). The fracturing of "unix" standards at the time
> didn't help either.
>
> Perhaps these "add-on hack" problems are the reason so many people think
> fondly of the good old Unix versions where everything was still coming
> from a few good minds that could work together to build a cohesive
> design. The add-ons were poorly done, not widely implemented, and
> usually incompatible with each other when they were adopted by
> additional implementations.
mmap() did come from those days and minds.
The first appearance of mmap() was in 32V R3, done by John Reiser in 1981. This is the version of 32V with full demand paging; it implemented a unified buffer cache. According to John, that version of mmap() already had the modern 6 argument API. John added mmap() because he worked with Tenex a lot during his PhD days and missed PMAP. He needed some 6 months to design, implement and debug this version of 32V as a skunkworks project.
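For reference, that six-argument interface is the one that survives in POSIX today: void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off). A minimal C sketch of its use; the file name and the threadbare error handling here are purely illustrative:

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.txt", O_RDONLY);    /* placeholder file name */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
            return 1;
        /* the six arguments: address hint, length, protection, flags, fd, offset */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        write(1, p, st.st_size);                /* the file is now ordinary memory */
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }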
I am trying to revert early VAX SVr1/r2 code to get a better view of what 32V R3 looked like, but unfortunately I have not had much time for this effort in the last couple of months. It would seem that 32V R3 assumed that disk blocks and memory pages were the same size (true on a 1980 VAX), and with that assumption a unified buffer cache comes naturally in this code base.
For 4.2BSD, Joy and colleagues initially had a different approach to memory-mapped files in mind (see the 1981 tech report #4 from CSRG). By the time of 4.2BSD’s release the manual defined an mmap() system call, but it was not implemented, and it appears to have been largely forgotten until SunOS 4 and dynamic libraries six years later.
In the SysV lineage it is less clear. Certainly mmap() is not there, but the first implementation of the shmem IPC feature might derive from the 32V R3 code. On the inside, SVr2 virtual memory appears to implement the segments (now called regions) that Joy envisaged for 4.2BSD but did not implement.
CB Unix had a precursor to shmem as well, where a portion of system core was reserved for shared memory purposes and could be accessed either via the /dev/mem device or could be mapped into the PDP-11 address space (using 1 of the 8 segment registers for each map). Here too the device and the map were unified.
So far, I have not come across any shared library implementations or precursors in early Unix prior to SunOS 4.
Paul
Dear TUHS members,
The IEEE Annals of the History of Computing magazine, the primary
publication for recording, analyzing, and debating the history of
computing, is seeking a new Editor in Chief [2]. The EiC term begins on
January 1st 2022 and is for three years, renewable for two years. The
application deadline is October 31st 2021.
It would be valuable for those of us interested in our discipline's history to
serve in the publication's leadership. You can contact Carrie Clark at
c.clark(a)computer.org to submit an application. Alternatively, if you
have connections in the community and some time to spare to head the EiC
selection committee, please drop me a note.
[1] https://www.computer.org/csdl/magazine/an
[2]
https://www.computer.org/press-room/2021-news/ieee-cs-publications-seek-app…
Kind regards,
Diomidis Spinellis
I have received a message from his family that Jörg Schilling has
passed away from complications related to kidney cancer this Sunday
around noon (CEST).
He will be remembered for his open source projects including
- cdrtools, the first portable CD burning program
- star, a powerful and fast tar implementation, the first to
use two processes with a shared ring buffer for better
performance.
- smake, a make implementation with autoconf features
- sformat, a versatile SCSI disk formatting program
- SING, an autoconf fork with a comprehensive set of libc
shims, providing a uniform API across operating systems
- ved, an early visual editor for the UNOS operating system (I believe)
- bosh, a carefully maintained fork of the Bourne shell
- sccs, a carefully maintained fork of SCCS. His attempts
to teach it projects and networking will remain unfinished.
- libfind, an implementation of find(1) as a library for
integration into other software.
- libxtermcap, an extended termcap library
- libscg, an early portable SCSI driver and library
He is also remembered for his commitment to open source, portability,
and his work on POSIX. He was working on adapting his software to
Z/OS and introducing message catalogues just weeks before his death.
Jörg worked for the Berthold typesetting company, one of the first
European customers of Sun Microsystems. It was there that his love
for UNIX, and SunOS in particular, was kindled. [1]
His interest in SunOS culminated in SchilliX, one of the first
open source Solaris distributions.
We will of course also remember him for his flames.
[1]: https://web.archive.org/web/20061201103910/http://www.opensolaris.org/os/ar…
May his software immortalise him.
Robert Clausecker
--
() ascii ribbon campaign - for an 8-bit clean world
/\ - against html email - against proprietary attachments
John Cowan:
"Between each" has been part of Standard English for a thousand years, and
still is today.
====
As in between each pair of elements, or between each element?
The latter strikes me rather like the currently-in-vogue phrase
`one of the only': it may have a defined meaning, but it sure
sounds distractingly stupid. (If it's one of the group at all,
it's by definition one of the only members; if what is meant is
one of the few, then say so, dammit.)
It's rather like obfuscated C, or nearly any use of Perl: sure,
you can write it to require extra mental effort to make sense of
it, but there are simpler ways to be rude.
Norman Wilson
Toronto ON
Please, sir, I'd like to join The Few.
I'm sorry, there are far too many.
Apropos of "finding the right exposition", consider the cited wiki article:
Separator: There is a symbol between each element.
The more carefully you read this the more it becomes nonsense.
1, "Each element" is an individual. You can't put something between an
individual.
2 The defining sentence states a property of a representation of a
sequence. It fails to indicate that "separator" is the symbol's role.
In fact what's being defined is "separator notation", not the bare
word "separator". This usage appears only later in the article. It
should be employed throughout--most importantly in the title and the
definition. The same goes for "terminator".
Doug
> Doug, if you insist on applying your superb editing skills on wiki material, we will never hear from you again!
Thanks, Bill, for the wise advice. If I'm putting out stuff like this
you shouldn't hear from me again.
Apologies for again(!) posting to the wrong mailing list.
Doug
Hello All,
I am attempting to restore 4.3BSD-Tahoe to a usable state on the VAX. It
appears, based on the work that I have done already, that this is
possible. Starting with stock 4.3BSD I got a source tree into /usr/tahoe
and using it I replaced all of /usr/include and /sys, recompiled and
replaced /bin, /lib, and /etc, recompiled a GENERIC kernel, and from there
I was able to successfully reboot using the new kernel. As far as I can
tell (fingers crossed!) the hardest part is over and I'm in the process of
working on /usr.
My question is: how was this sort of thing done in the real world? If I
was a site running stock 4.3BSD, would I have received (or been able to
request) updated tapes at regular intervals? The replacement process that
I have been using is fairly labor intensive and on a real VAX would have
been very time intensive too. Fortunately two to three years' worth of
changes were not so drastic that I ever found myself in a position where
the existing tools were not able to compile pieces of Tahoe that I needed
to proceed, but I could easily imagine finding myself in such a place.
(This was, by the way, what I ran into when attempting to upgrade from
2.9BSD to 2.10BSD, despite a fully documented contemporary upgrade
procedure).
-Henry
I can't speak to the evolution and use of specific
groups; I suspect it was all ad-hoc early on.
Groups appeared surprisingly late (given how familiar
they seem now): they don't show up in the manual
until the Sixth Edition. Before that, chown took
only two arguments (filename and owner), and
permission modes had three bits fewer.
I forget how it came up, but the late Lee McMahon
once told me an amusing story about their origin:
Ken announced that he was adding groups.
Lee asked what they were for.
Ken replied with a shrug and `I dunno.'
Norman Wilson
Toronto ON
Groups appeared surprisingly late (given how familiar
they seem now): they don't show up in the manual
until the Sixth Edition.
Mea culpa; read too hastily. The change actually
came with the Fourth Edition, at the same time as
several other landmark system changes:
-- Time changing from a 32-bit count of 60Hz clock
ticks (which ran out so quickly that its epoch kept
moving) to the modern 32 bits of whole seconds, based
at 1970-01-01T00:00:00 GMT (which takes much longer
to run out, though the horizon is now visible).
-- The modern super-block. In 4/e, the super-block
began at block 0, not 1 (so bootstrapping was rather
more complicated); the free list was a bitmap rather
than the later list of blocks containing lists of
free block numbers.
-- The super-block contained a bitmap of free
i-numbers too. All told, the free block and free
i-node map made up a 1024-byte super-block.
-- I-numbers 1-40 were device files; the root
directory was i-number 41. The only file-type
indication in the mode word was a single bit to
denote directory.
It was already clear that the lifetime of the
bitmaps was running out: the BUGS section says
two blocks isn't enough for the bitmaps for a
20-megabyte RP02.
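(Back-of-the-envelope, my arithmetic rather than the manual's: a 32-bit count of 60 Hz ticks wraps after 2^32/60, about 71.6 million seconds, roughly 2.3 years, hence the wandering epoch, while 2^31 whole seconds reaches about 68 years past 1970. As for the bitmaps: two 512-byte blocks hold 8192 bits, enough to map 8192 512-byte blocks, only 4 megabytes; a 20-megabyte RP02 has some 40,000 blocks and would need roughly 5 kilobytes of bitmap.)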
Norman Wilson
Toronto ON
Hi all,
I was reading a recent thread over on the FreeBSD forums about groups that quickly devolved into a discussion of the origin of the operator group:
https://forums.freebsd.org/threads/groups-overview.82303/
I thought y’all would be the best place to ask the questions that arose in me during my read of the thread.
Here they are in no special order:
1. Where did operator come from and what was it intended to solve?
2. How has it evolved?
3. What’s a good place to look/ref to read about groups, generally?
I liked one respondent’s answer about using find, hier(7), and the files themselves to learn about the groups in use on a running system, paying attention to the owner, group, etc. along the way. This is how I do it now, but this approach doesn’t account for the history and evolution.
Thanks!
Willu
Greg wrote:
> I guess pipe(2) kind of started this mess, [...] Maybe I'm
> going too far with thinking pipe() could/should have just been a library
> call that used open(2) internally, perhaps connecting the descriptors by
> opening some kind of "cloning" device in the filesystem.
At times I’ve been pondering this as well. All of creat/open/pipe could have been rolled into just open(). It is not clear to me why this synthesis did not happen around the time of the 7th Edition, although it seems the creat/open merger happened in BSD around that time.
As to pipe(), the very first implementation returned just a single fd where writes echoed to reads. It was backed by a single disk buffer, so could only hold ~500 bytes, which was probably not enough in practice. Then it was reimplemented using an anonymous file as backing store and got the modern two fd system call. The latter probably arose as a convenient hack to store the two file pointers needed.
It would have been possible to implement the anonymous file solution still using a single fd, storing the second file pointer in the inode. Maybe this felt like a worse hack at the time (the conceptual vnode/inode split was still a decade in the future).
With a single fd, it would also have been possible to have a cloning device for pipes as you suggest (e.g. /dev/pipe, somewhat analogous to the implementation of /dev/stdin in 10th edition). Arguably, in total code/data size this would not have been much different from pipe().
My guess is that from a 1975 perspective, creat/open/pipe was not perceived as something that needed fixing.
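To make the two-file-pointer point concrete, the call has kept the same shape ever since; a minimal C sketch (illustrative only):

    #include <unistd.h>

    int main(void)
    {
        int fd[2];          /* the two file pointers surface as two descriptors */
        char buf[16];
        if (pipe(fd) < 0)
            return 1;
        write(fd[1], "hello", 5);                  /* written on fd[1] ...    */
        ssize_t n = read(fd[0], buf, sizeof buf);  /* ... comes back on fd[0] */
        if (n > 0)
            write(1, buf, n);
        close(fd[0]);
        close(fd[1]);
        return 0;
    }

A single-fd variant, as discussed above, would instead return one descriptor open for both reading and writing.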
Dan wrote:
> 3BSD and I think 4.1BSD had vread() and vwrite(), which looked like
> regular read() and write() but accessed pages only on demand. I was a
> grad student at Berkeley at the time and remember their genesis. Bill
> and I were eating lunch from Top Dog on the Etcheverry Hall plaza, and
> were talking about memory-mapped I/O. I remember suggesting the actual
> names, perhaps as a parallel to vfork(). I had used both TENEX and
> Multics, which both had page mapping. Multics' memory-mapped segments
> were quite fundamental, of course. I think we were looking for something
> vaguely upward compatible from the existing system calls. We did not
> leap to an mmap() right away just because it would have been a more
> radical shift than continuing the stream orientation of UNIX. I did not
> implement any of this: it was just a brainstorming session.
Thank you for reminding me of these.
On a substrate with a unified buffer cache and copy-on-write, vread/vwrite would have been very close to regular read/write and maybe could have been subsumed into them, using flags to open() as the differentiator. The user discernible effect would have been the alignment requirement on the buffer argument.
John Reiser wrote that he "fretted” over adding a 6 argument system call. Perhaps he was considering something like the above as the alternative, I never asked.
I looked at the archives and vread/vwrite were introduced with 3BSD, present in 4BSD but marked deprecated, and absent from 4.1BSD. This short lifetime suggests that using vread and vwrite wasn’t entirely satisfactory in 1980/81 practice. Maybe the issue was that there was no good way to deallocate the buffer after use.
Hello,
I've recently started to implement a set of helper functions and
procedures for parsing Unix-like command-line interfaces (i.e., POSIX +
GNU-style long options, in this case) in Ada.
While doing that, I learned that there is a better way to approach
this problem – beyond using getopt(s) (which never really made sense to
me) and having to write case statements in loops every time: Define a
grammar, let a pre-built parser do the work, and have the parser
provide the results to the program.
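For contrast, here is roughly the "case statements in loops" boilerplate referred to above, in C with getopt_long(); the option names are invented for illustration:

    #include <getopt.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        static const struct option longopts[] = {
            { "verbose", no_argument,       0, 'v' },
            { "output",  required_argument, 0, 'o' },
            { 0, 0, 0, 0 }
        };
        int c;
        while ((c = getopt_long(argc, argv, "vo:", longopts, NULL)) != -1) {
            switch (c) {          /* one case per option, in every program */
            case 'v': puts("verbose on"); break;
            case 'o': printf("output file: %s\n", optarg); break;
            default:  return 1;   /* getopt_long already printed a message */
            }
        }
        return 0;
    }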
Now, defining such a grammar requires a thoroughly systematic approach
to the design of command-line interfaces. One problem with that is
whether that grammar should allow for sub-commands. And that leads to
the question of how task-specific tool sets should be designed. These
seem to be a relatively new phenomenon in Unix-like systems that POSIX
doesn't say anything about, as far as I can see.
So, I've prepared a bit of a write-up, pondering on the pros and cons
of two different ways of having task-specific tool sets
(non-hierarchical command sets vs. sub-commands) that is available at
https://www.msiism.org/files/doc/unix-like_command-line_interfaces.html
I tend to think the sub-command approach is better. But I'm neither a UI
nor a Unix expert and have no formal training in computer things. So, I
thought this would be a good place to ask for comment (and get some
historical perspective).
This is all just my pro-hobbyist attempt to make some people's lives
easier, especially mine. I mean, currently, the "Unix" command line is
quite a zoo, and not in a positive sense. Also, the number of
well-thought-out command-line interfaces doesn't seem to be a growing
one. But I guess that could be changed by providing truly easy ways to
make good interfaces.
--
Michael
> one other thing that SLS breaks, for data files, is the whole Unix 'pipe'
> abstraction, which is at the heart of the whole Unix tools paradigm.
Multics had an IO system with an inherent notion of redirectable data
streams. Pipes could have--and eventually did (circa 1987)--fit into
that framework. I presume a pipe DIM (device interface manager)
was not hard to build once it was proposed and accepted.
Doug
> From: Larry McVoy
> If you read(2) a page and mmap()ed it and then did a write(2) to the
> page, the mapped page is the same physical memory as the write()ed
> page. Zero coherency issues.
Now I'm confused; read() and write() semantically include a copy operation
(so there are then two copies of that data chunk, and possible consistency
issues between them), and the copied item is not necessarily page-sized (so
you can't ensure consistency between the original+copy by mapping it in). So
when one does a read(file, &buffer, 1), one gets a _copy of just that byte_
in the process' address space (and similar for write()).
Yes, there's no coherency issue between the contents of an mmap()'d page, and
the system's idea of what's in that page of the file, but that's a
_different_ coherency issue.
Or am I confused?
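A minimal C sketch of the distinction being drawn (placeholder file name, error handling mostly omitted): the read() copy is a snapshot, while the mapping tracks the file:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("f", O_RDWR);     /* "f" is a placeholder */
        char byte;
        if (fd < 0 || read(fd, &byte, 1) != 1)
            return 1;
        char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        /* 'byte' is a private copy; p[0] is the file itself. If another
           process overwrites the first byte now, p[0] changes, 'byte' not. */
        write(1, &byte, 1);
        write(1, p, 1);
        return 0;
    }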
PS:
> From: "Greg A. Woods"
> I now struggle with liking the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Oh, one other thing that SLS breaks, for data files, is the whole Unix 'pipe'
abstraction, which is at the heart of the whole Unix tools paradigm. So no
more 'cmd | wc' et al. And since SLS doesn't have the 'make a copy'
semantics of pipe output, it would be hard to trivially work around it.
Yes, one could build up a similar framework, but each command would have to
specify an input file and an output file (no more 'standard in' and 'out'),
and then the command interpreter would have to i) take command A's output file
and feed it to command B, and ii) delete A's output file when the whole works
was done. Yes, the user could do it manually, but compare:
cmd aaa | wc
and
cmd aaa bbb
wc bbb
rm bbb
If bbb is huge, one might run out of room, but with today's 'light my cigar
with disk blocks' life, not a problem - but it would involve more disk
traffic, as bbb would have to be written out in its entirety, not just have a
small piece kept in the disk cache as with a pipe.
Noel
> From: "Greg A. Woods"
> the elegance of fork() is incredible!
That's because in PDP-11 Unix, they didn't have the _room_ to create a huge
mess. Try reading the exec() code in V6 or so.
(I'm in a bit of a foul mood today; my laptop sorta locked up when a _single_
Edge browser window process grew to almost _2GB_ in size. Are you effing
kidding me? If I had any idea what today would look like, back when I was 20 -
especially the massive excrement pile that the Internet has turned into - I
never would have gone into computers - cabinetwork, or something, would have
been an infinitely superior career choice.)
> I now struggle with liking the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Well, files a la Unix, instead of the SLS, are OK for a _lot_ of data storage
- pretty much everything except less-common cases like concurrent access to a
shared database, etc.
Where the SLS really shines is _code_ - being able to just do a subroutine
call to interact with something else has incredible bang/buck ratio - although
I concede doing it all securely is hard (although they did make a lot of
progress there).
Noel
>> > It's part of my academic project to work on provable compiler security.
>> > I tried to do it according to the "Reflections on Trusting Trust" by Ken
>> > Thompson, not only to show a compiler Trojan horse but also to prove that
>> > we can discover it.
>>
>> Of course it can be discovered if you look for it. What was impressive about
>> the folks who got Thompson's compiler at PWB is that they found the horse
>> even though they weren't looking for it.
> I had not heard this story. Can you elaborate, please? My impression from having
> read the paper (a long time ago now) is that Ken did the experiment locally only.
Ken did it locally, but a vigilant person at PWB noticed there was an
experimental compiler on the research machine and grabbed it. While they
weren't looking for hidden stuff, they probably were trying to find what
was new in the compiler. Ken may know details about what they had in the
way of source and binary.
Doug
> It's part of my academic project to work on provable compiler security.
> I tried to do it according to the "Reflections on Trusting Trust" by Ken
> Thompson, not only to show a compiler Trojan horse but also to prove that
> we can discover it.
Of course it can be discovered if you look for it. What was impressive about
the folks who got Thompson's compiler at PWB is that they found the horse
even though they weren't looking for it.
Then there was the first time Jim Reeds and I turned on integrity control in
IX, our multilevel-security version of Research Unix. When it reported
a security
violation during startup we were sure it was a bug. But no, it had snagged Tom
Duff's virus in the act of replication. It surprised Tom as much as it did us,
because he thought he'd eradicated it.
Doug
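For readers who haven't seen the trick spelled out, a toy C sketch of the first half of it; this is purely illustrative and in no way Ken's code (the marker pattern and the injected line are invented), and a real version would also recognize the compiler's own source and re-insert both tests:

    #include <stdio.h>
    #include <string.h>

    /* Toy "compiler" pass: copies source through unchanged, except that
       when it sees the password check it injects a master password. */
    int main(void)
    {
        char line[1024];
        while (fgets(line, sizeof line, stdin)) {
            fputs(line, stdout);
            if (strstr(line, "check_password("))    /* hypothetical marker */
                puts("if (strcmp(pw, \"backdoor\") == 0) return 1; /* injected */");
        }
        return 0;
    }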
This is FYI. No comment on whether it was a good idea or not. :-)
Arnold
> From: Niklas Rosencrantz <niklasro(a)gmail.com>
> Date: Sun, 19 Sep 2021 17:10:24 +0200
> To: tinycc-devel(a)nongnu.org
> Subject: Re: [Tinycc-devel] Can tcc compile itself with Apple M1?
>
>
> Hello!
>
> For demonstration purpose I put my experiment with a compiler backdoor in a
> public repository
> https://github.com/montao/ddc-tinyc/blob/857d927363e9c9aaa713bb20adbe99ded7…
>
> It's part of my academic project to work on provable compiler security.
> I tried to do it according to the "Reflections on Trusting Trust" by Ken
> Thompson, not only to show a compiler Trojan horse but also to prove that
> we can discover it.
> What it does is inject arbitrary code to the next version of the compiler
> and so on.
>
> Regards \n
One of the things I really appreciate about participating in this community
and studying Unix history (and the history of other systems) is that it
gives one firm intellectual ground from which to evaluate where one is
going: without understanding where one is and where one has been, it's
difficult to assert that one isn't going sideways or completely backwards.
Maybe either of those outcomes is appropriate at times (paradigms shift; we
make mistakes; etc) but generally we want to be moving mostly forward.
The danger when immersing ourselves in history, where we must consider and
appreciate the set of problems that created the evolutionary paths leading
to the systems we are studying, is that our thinking can become calcified
in assuming that those systems continue to meet the needs of the problems
of today. It is therefore always important to reevaluate our base
assumptions in light of either disconfirming evidence or (in our specific
case) changing environments.
To that end, I found Timothy Roscoe's (ETH) joint keynote address at
ATC/OSDI'21 particularly compelling. He argues that what we consider the
"operating system" is only controlling a fraction of a modern computer
these days, and that in many ways our models for what we consider "the
computer" are outdated and incomplete, resulting in systems that are
artificially constrained, insecure, and with separate components that do
not consider each other and therefore frequently conflict. Further,
hardware is ossifying around the need to present a system interface that
can be controlled by something like Linux (used as a proxy more generally
for a Unix-like operating system), simultaneously broadening the divide and
making it ever more entrenched.
Another theme in the presentation is that, to the limited extent
the broader systems research community is actually approaching OS topics at
all, it is focusing almost exclusively on Linux in lieu of new, novel
systems; where non-Linux systems are featured (something like 3 accepted
papers between SOSP and OSDI in the last two years out of $n$), the
described systems are largely Linux-like. Here the presentation reminded me
of Rob Pike's "Systems Software Research is Irrelevant" talk (slides of
which are available in various places, though I know of no recording of
that talk).
Roscoe's challenge is that all of this should be seen as both a challenge
and an opportunity for new research into operating systems specifically:
what would it look like to take a holistic approach towards the hardware
when architecting a new system to drive all this hardware? We have new
tools that can make this tractable, so why don't we do it? Part of it is
bias, but part of it is that we've lost sight of the larger picture. My own
question is, have we become entrenched in the world of systems that are
"good enough"?
Things he does NOT mention are system interfaces to userspace software; he
doesn't seem to have any quibbles with, say, the Linux system call
interface, the process model, etc. He's mostly talking about taking into
account the hardware. Also, in fairness, his highlighting a "small" portion
of the system and saying, "that's what the OS drives!" sort of reminds me
of the US voter maps that show vast tracts of largely unpopulated land
colored a certain shade as having voted for a particular candidate, without
normalizing for population (land doesn't vote, people do, though in the US
there is a relationship between how these things impact the overall
election for, say, the presidency).
I'm curious about other people's thoughts on the talk and the overall topic.
https://www.youtube.com/watch?v=36myc8wQhLo
- Dan C.
> Maybe there existed RE notations that were simply copied ...
Ed was derived from Ken's earlier qed. Qed's descendant in Multics was
described in a 1969 GE document:
http://www.bitsavers.org/pdf/honeywell/multics/swenson/6906.multics-condens….
Unfortunately it describes regular expressions only sketchily by
example. However, alternation, symbolized by | with grouping by
parentheses, was supported in qed, whereas alternation was omitted
from ed. The GE document does not mention character classes; an
example shows how to use alternation for the same purpose.
Beginning-of-line is specified by a logical-negation symbol. In
apparent contradiction, the v1 manual says the meanings of [ and ^ are
the same in ed and (an unspecified version of) qed. My guess about the
discrepancies is no better than yours.
(I am amused by the title "condensed guide" for a manual in which each
qed request gets a full page of explanation. It exemplifies how Unix
split from Multics in matters of taste.)
Doug
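Both roles of the circumflex survive unchanged today and can be seen side by side through the POSIX regex API; a minimal C sketch:

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t anchor, negclass;
        regcomp(&anchor, "^abc", REG_NOSUB);      /* ^ as beginning-of-line anchor */
        regcomp(&negclass, "[^abc]", REG_NOSUB);  /* ^ as character-class negation */
        printf("%d\n", regexec(&anchor, "abcdef", 0, NULL, 0) == 0);   /* 1: match    */
        printf("%d\n", regexec(&negclass, "aabbcc", 0, NULL, 0) == 0); /* 0: no match */
        regfree(&anchor);
        regfree(&negclass);
        return 0;
    }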
> From: Roland Huisman
> I have a PDP11/20 and I would love to run an early Unix version on
> it. ... But it seems that the earliest versions of Unix do not need the
> extra memory. Does anyone have RK05 disk images for these early Unix
> versions?
Although the _kernel_ source for V1 is available:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V1
most of the rest is missing; only 'init' and 'sh' are available. So one would
have to write almost _everything_ else. Some commands are available in PDP-11
assembler in later versions, and might be movable without _too_ much work -
but one would have to start with the assembler itself, which is luckily in
assembler.
If I were trying to run 'UNIX' on an -11/20, I think the only reasonable
choice would be MINI-UNIX:
https://gunkies.org/wiki/MINI-UNIX
It's basically V6 UNIX with all use of the PDP-11 memory management
removed. The advantage of going MINI-UNIX is that almost all V6 source
(applications, drivers, etc) will run on it 'as is'.
It does need ~56KB of main memory. If you don't have that much on the -11/20,
LSX (links in the above) would be an option; it's very similar to MINI-UNIX,
but is trimmed down some, to allow its use on systems with less main memory.
I'm not sure if MINI-UNIX has been run on the -11/20, but it _should_ run
there; it runs on the -11/05, and the only differences between the /20 and the
/05 are that the /20 does not have the RTT instruction (and I just checked,
and MINI-UNIX doesn't use RTT), and SWAB doesn't clear the V condition code
bit. (There are other minor differences, such as OP Rn, (Rn)+ behaving
differently on the -11/20, but that shouldn't be an issue.)
Step 1 would be to get MINI-UNIX running on an -11/20 under a simulator; links
in the above to get you there.
Noel
> From: Clem Cole
> The KS11 MMU for the 11/20 was built by CSS ... I think Noel has done
> more investigation than I have.
I got a very rough description of how it worked, but that was about it.
> I'm not sure if the KS11 code is still there. I did not think so.
No, the KS11 was long gone by later Vn. Also, I think not all of the -11/20
UNIX machines had it, just some.
> The V1 work was for a PDP-7
Actually, there is a PDP-11 version prior to V2, canonically called V1.
The PDP-7 version seems to be called 'PDP-7 UNIX' or 'V0'.
> I'm fairly sure that the RK05, used the RK11-D controller.
Normally, yes. I have the impression that one could finagle RK03's to work on
the RK11-D, and vice versa for RK05's on the RK11-C, but I don't recall the
details. The main difference between the RK11-C and -D (other then the
implementation) was that i) the RK11-C used one line per drive for drive
selection (the -D used binary encoding on 3 lines), and ii) it had the
'maintenance' capability and register (all omitted from the -D).
> The difference seems to have been in drive performance.
Yes, but it wasn't major. They both did 1500RPM, so since they used
the same pack format, the rotational delay, transfer rate, etc were
identical. The one peformance difference was in seeks; the
average on the RK01 was quoted as 70 msec, and 50 msec on the
RK05.
> Love to see the {KT11-B prints] and if you know where you found them.
They were sold on eBait along with an -11/20 that allegedly had a KT11-B. (It
didn't; it was an RK11-C.) I managed to get them scanned, and they and the
minimal manual are now in Bitsavers. I started working on a Tech Manual for
it, but gave up with it about half-way done.
> I wonder if [our V1 source] had the KS-11 stuff in it.
No; I had that idea a while back and looked carefully: our V1 listings
pre-date the KS11.
> From: Roland Huisman
> There is a KT11-B paging option that makes the PDP11/20 an 18-bit
> machine.
Well, it allows 2^18 bytes of main memory, but the address space of the
CPU is still 2^16 bytes.
> It looks a bit like the TC11 DECtape controller.
IIRC, it's two backplanes high, the TC11 is one. So more like the RK11-C...
:-)
> I have no idea how it compares to the later MMU units from the
> software perspective.
Totally different; it's real paging (with page tables stored in main
memory). The KT11-B provides up to 128 pages of 512 bytes each, in both Exec
and User mode. The KT11-C, -D etc are all segmentation, with all the info
stored in registers in the unit.
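(Back-of-the-envelope, my arithmetic: 128 pages x 512 bytes = 65,536 bytes = 2^16, i.e. the full 16-bit address space can be mapped, page by page, into as much as 2^18 bytes of physical memory.)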
> I wonder is there is some compatibility with the KT11-B [from the KS11]
I got the impression that the KS11 was more a 'base and bounds' kind
of thing.
Noel
Hello Unix fanatics,
I have a PDP11/20 and I would love to run an early Unix version on it. I've been working on the hardware for a while and I'm getting more and more of the pieces back online again. The configuration will be two RK05 hard disks, TU56H tape, PC11 paper tape reader/puncher and an RX01 floppy drive. Unfortunately I don't have an MMU or paging option. But it seems that the earliest versions of Unix do not need the extra memory.
Does anyone have RK05 disk images for these early Unix versions? That would be a great help. Otherwise it would be great to have some input about how to create a bootable Unix pack for this machine.
A bit about the hardware restoration is on the vcfed forum:
https://www.vcfed.org/forum/forum/genres/dec/78961-rk05-disk-drive-ve…
Booting RT11 from RK05: https://youtu.be/k0tiUcRBPQA
TU56H tape drive back online: https://youtu.be/_ZJK3QP9gRA
Thanks in advance!
Roland Huisman
Hoi,
I'm interested in the early design decisions for meta characters
in REs, mainly regarding Ken's RE implementation in ed.
Two questions:
1) Circumflex
As far as I see, the circumflex (^) is the only meta character that
has two different special meanings in REs: First being the
beginning of line anchor and second inverting a character class.
Why was it chosen for the second one? Why not the exclamation mark
in that case? (Sure, C didn't exist by then, but the bang probably
was used to negate in other languages of the time, I think.)
2) Symbol for the end of line anchor
What is the reason that the beginning of line and end of line
anchors are different symbols? Is there a reason why not only one
symbol, say the circumflex, was chosen to represent both? I
currently see no disadvantages of such a design. (Circumflexes
aren't likely to end lines of text, neither.)
I would appreciate if you could help me understand these design
decisions better. Maybe there existed RE notations that were simply
copied ...
meillo
You can check the Computer History Museum's holdings on line. If they don't
have the documents already, they would probably like them.
The Living Computer Museum in Seattle had a working blit on display. If
they don't already have the manual, I'm sure they would love to have one.
Alas, their website says they've "suspended all operations for now", a
result of the double whammy of Covid and the death of their principal
angel, Paul Allen.
more garage cleaning this last weekend. i came across some memorabilia
from my time at Bell Labs, including a lovely article titled
The Electrical Properties of Infants
Infants have long been known to grow into adults. Recent experiments
show they are useful in applications such as high power circuit breakers.
Not to mention a lovely article from the “Medical Aspects of Human Sexuality”
(July 1991) titled “Scrotum Self-Repair”.
the two items are
1) “Documents for UNIX Volume 1” by Dolotta, Olson and Petrucelli (Jan 1981)
2) The complete manual for the Blit. this comes in a blue Teletype binder and includes
the full manual (including man pages) and circuit diagrams.
i’d prefer to have them go to some archival place, but send me a private email
if you interested and we’ll see what we can do.
andrew
I’d be interested in a scan of the Blit schematics, and it seems that a few others might be as well:
https://minnie.tuhs.org/pipermail/tuhs/2019-December/thread.html#19652
https://github.com/aiju/fpga-blit
(for clarity: I’m not ‘aiju')
Paul
> From: Andrew Hume <andrew(a)humeweb.com>
> Date: Wed, 8 Sep 2021 01:29:13 -0700
> Subject: [TUHS] desiderata
>
> more garage cleaning this last weekend.
[...]
> 2) The complete manual for the Blit. this comes in a blue Teletype binder and includes
> the full manual (including man pages) and circuit diagrams.
>
> i’d prefer to have them go to some archival place, but send me a private email
> if you interested and we’ll see what we can do.
>
> andrew
I recently upgraded my machines to fc34. I just did a stock
uncomplicated installation using the defaults and it failed miserably.
Fc34 uses btrfs as the default filesystem so I thought that I'd give it
a try. I was especially interested in the automatic checksumming because
the majority of my storage is large media files and I worry about bit
rot in seldom used files. I have been keeping a separate database of
file hashes and in theory btrfs would make that automatic and transparent.
I have 32T of disk on my system, so it took a long time to convert
everything over. A few weeks after I did this I went to unload my
camera and couldn't because the filesystem that holds my photos was
mounted read-only. WTF? I didn't do that.
After a bit of poking around I discovered that btrfs SILENTLY remounted the
filesystem because it had errors. Sure, it put something in a log file,
but I don't spend all day surfing logs for things that shouldn't be going
wrong. Maybe my expectation that filesystems just work is antiquated.
This was on a brand new 16T drive, so I didn't think that it was worth
the month that it would take to run the badblocks program which doesn't
really scale to modern disk sizes. Besides, SMART said that it was fine.
Although it's been discredited by some, I'm still a believer in "stop and
fsck" policing of disk drives. Unmounted the filesystem and ran fsck to
discover that btrfs had to do its own thing. No idea why; I guess some
think that incompatibility is a good thing.
Ran "btrfs check" which reported errors in the filesystem but was otherwise
useless BECAUSE IT DIDN'T FIX ANYTHING. What good is knowing that the
filesystem has errors if you can't fix them?
Near the top of the manual page it says:
Warning
Do not use --repair unless you are advised to do so by a developer
or an experienced user, and then only after having accepted that
no fsck successfully repair all types of filesystem corruption. Eg.
some other software or hardware bugs can fatally damage a volume.
Whoa! I'm sure that operators are standing by, call 1-800-FIX-BTRFS.
Really? Is this a ploy by the developers to form a support business?
Later on, the manual page says:
DANGEROUS OPTIONS
--repair
enable the repair mode and attempt to fix problems where possible
Note there’s a warning and 10 second delay when this option
is run without --force to give users a chance to think twice
before running repair, the warnings in documentation have
shown to be insufficient
Since when is it dangerous to repair a filesystem? That's a new one to me.
Having no option other than not being able to use the disk, I ran btrfs
check with the --repair option. It crashed. Lesson so far is that
trusting my data to an unreliable unrepairable filesystem is not a good
idea. Since this was one of my media disks I just rebuilt it using ext4.
Last week I was working away and tried to write out a file to discover
that /home and /root had become read-only. Charming. Tried rebooting,
but couldn't since btrfs filesystems aren't checked and repaired. Plugged
in a flash drive with a live version, managed to successfully run --repair,
and rebooted. Lasted about 15 minutes before flipping back to read only
with the same error.
Time to suck it up and revert. Started a clean reinstall. Got stuck
because it crashed during disk setup with anaconda giving me a completely
useless big python stack trace. Eventually figured out that it was
unable to delete the btrfs filesystem that had errors so it just crashed
instead. Wiped it using dd; nice that some reliable tools still survive.
Finished the installation and am back up and running.
Any of the rest of you have any experiences with btrfs? I'm sure that it
works fine at large companies that can afford a team of disk babysitters.
What benefits does btrfs provide that other filesystem formats such as
ext4 and ZFS don't? Is it just a continuation of the "we have to do
everything ourselves and under no circumstances use anything that came
from the BSD world" mentality?
So what's the future for filesystem repair? Does it look like the past?
Is Ken's original need for dsw going to rise from the dead?
In my limited experience btrfs is a BiTteR FileSystem to swallow.
Or, as Saturday Night Live might put it: And now, linux, starring the
not ready for prime time filesystem. Seems like something that's been
under development for around 15 years should be in better shape.
Jon
...
DEC Diagnostics would run on a beached whale
?
Anyone remember and/or know?
(It seems to apply to other manufacturer's diagnostics as well, even today.)
Thanks,
Arnold
I hope that this does not start any kind of language flaming and that if
something starts the moderator will shut it down quickly.
Where did the name for abort(3) and SIGABRT come from? I believe it was
derived from the IBM term ABEND, but would like to know one way or the
other.
Clem Cole:
I believe the line was: *"running **DEC Diagnostics is like kicking a dead
whale down the beach.*"
As for who said it, I'm not sure, but I think it was someone like Rob
Kolstad or Henry Spencer.
=====
The nearest I can remember encountering before was a somewhat
different quote, attributed to Steve Johnson:
Running TSO is like kicking a dead whale down the beach.
Since scj is on this list, maybe he can confirm that part.
I don't remember hearing it applied to diagnostics. I can
imagine someone saying it, because DEC's hardware diags were
written by hardware people, not software people; they required
a somewhat arcane configuration language, one that made more
sense if you understood how the different pieces of hardware
connected together.
I learned to work with it and found it no less usable than,
say, the clunky verbose command languages of DEC's operating
systems; but I have always preferred to think in low levels.
DEC's diags were far from perfect, but they were a hell of a
lot better than the largely-nonexistent diags available for
modern Intel-architecture systems. I am right now dealing
with a system that has an intermittent fault, that causes
the OS to crash in the middle of some device driver every
so often. Other identical systems don't, so I don't think
it's software. Were it a PDP-11 or a VAX I'd fire up the
diagnostics for a while, and have at least a chance of spotting
the problem; today, memtest is about the only such option,
and a solid week of running memtest didn't shake out anything
(reasonably enough, who says it's a memory problem?).
Give me XXDP, not just the Blue Screen of Death.
Norman Wilson
Toronto ON
Not to get into what is something of a religious war,
but this was the paper that convinced me that silent
data corruption in storage is worth thinking about:
http://www.cs.toronto.edu/~bianca/papers/fast08.pdf
A key point is that the character of the errors they
found suggests it's not just the disks one ought to worry
about, but all the hardware and software (much of the latter
inside disks and storage controllers and the like) in the
storage stack.
I had heard anecdotes long before (e.g. from Andrew Hume)
suggesting silent data corruption had become prominent
enough to matter, but this paper was the first real study
I came across.
I have used ZFS for my home file server for more than a
decade; presently on an antique version of Solaris, but
I hope to migrate to OpenZFS on a newer OS and hardware.
So far as I can tell ZFS in old Solaris is quite stable
and reliable. As Ted has said, there are philosophical
reasons why some prefer to avoid it, but if you don't
subscribe to those it's a fine answer.
I've been hearing anecdotes since forever about sharp
edges lurking here and there in BtrFS. It does seem
to be eternally unready for production use if you really
care about your data. It's all anecdotes so I don't know
how seriously to take it, but since I'm comfortable with
ZFS I don't worry about it.
Norman Wilson
Toronto ON
PS: Disclosure: I work in the same (large) CS department
as Bianca Schroeder, and admire her work in general,
though the paper cited above was my first taste of it.
This may be due to logic similar to that of a classic feature that I
always deemed a bug: troff begins a new page when the current page is
exactly filled, rather than waiting until forced by content that
doesn't fit. If this condition happens at the end of a document, a
spurious blank page results. Worse, if the page header happens to
change just after the exactly filled page, the old heading will be
produced before the new heading is read.
Doug
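A hypothetical sketch of the two page-break policies in C, not troff's actual internals (the names and page length are invented):

    #include <stdio.h>

    enum { PAGE = 66 };     /* page length in lines, for illustration */
    int used = 0;

    void eager(int h)       /* the troff way */
    {
        used += h;
        if (used == PAGE) {         /* break the moment the page is exactly  */
            puts("-- new page --"); /* full, so a document ending on a full  */
            used = 0;               /* page emits a spurious blank page      */
        }
    }

    void lazy(int h)        /* the alternative */
    {
        if (used + h > PAGE) {      /* break only when the next item won't fit */
            puts("-- new page --");
            used = 0;
        }
        used += h;
    }

    int main(void)
    {
        for (int i = 0; i < PAGE; i++)
            eager(1);       /* ends with a page break though nothing follows */
        return 0;
    }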
> fork() is a great model for a single-threaded text processing pipeline to do
> automated typesetting. (More generally, anything that is a straightforward
> composition of filter/transform stages.) Which is, y'know, what Unix is *for*.
> It's not so great for a responsive GUI in front of a multi-function interactive program.
"Single-threaded" is not a term I would apply to multiple processes in
a pipeline. If you mean a single track of data flow, fine, but the
fact that that's a prevalent configuration of cooperating processes in
Unix is an artifact of shell syntax, not an inherent property of
pipe-style IPC. The cooperating processes in Rob Pike's 20th century
window systems and screen editors, for example, worked smoothly
without interrupts or events - only stream connections. I see no
abstract distinction between these programs and "stuff people play
with on their phones."
It bears repeating, too, that stream connections are much easier to
reason about than asynchronous communication. Thus code built on
streams is far less vulnerable to timing bugs.
At last a prince has come to awaken the sleeping beauty of stream
connections. In Go (Pike again) we have a widely accepted programming
language that can fully exploit them, "[w]hich is, y'know, what Unix
is 'for'."
(If you wish, you may read "process" above to include threads, but
I'll stay out of that.)
Doug
Steve Simon:
once again i am taken aback at the good taste of the residents of the unix room.
As a whilom denizen of that esteemed playroom, I question
both the accuracy and the relevance of that metric.
Besides, what happened to the sheep shelf? Was it scrubbed
away after I left? And, Ken, whatever happened to Dolly the
Sheep (after she was hidden to avoid upsetting visitors)?
Norman Wilson
Toronto ON
No longer a subscriber to sheep! magazine
> I don't think anyone knows. Nobody relevant, I believe.
>
> -rob
I understand that Dave Presotto bought that photo at a garage sale for $1. The photo hung in
the Unix Room for years, at one point labeled “Peter Weinberger.”
One day I removed it from its careful mounting and scanned in the photo. It bore the label
“what, no steak?”
The photo was stolen from a wall sometime after I left. The scanned image is at
https://cheswick.com/ches/tmp/whatnosteak.jpeg
ches
At 07:24 PM 8/6/2021, Rob Pike wrote:
>I don't think anyone knows. Nobody relevant, I believe.
Indeed, a clipped and cleaned version in reverse image search on
Google, Bing, Tineye and Yandex found nothing. Feels like a
head shot for a theater major.
- John
I sent a picture (actually two at different resolutions; keep reading) to
the list, but being images they are larger than the address space of a
PDP-11 so not allowed here.
Is it really necessary to have such a low message size limit in an era when
I can buy a terabyte of storage for less than a hundred bucks?
Here is a Google Drive link, for the adventurous.
20180123-UnixSkeleton.jpg
<https://drive.google.com/file/d/1aS8ZmzwPUawIa8WXGoXOK9jDiYtJETGG/view?usp=…>
-rob
On Sat, Aug 7, 2021 at 7:44 AM Rob Pike <robpike(a)gmail.com> wrote:
> I sent a higher-res version in which you can read all the text but it was
> "moderated".
>
> This is the Unix room as of the year 2000 or so.
>
> -rob
>
>
> On Sat, Aug 7, 2021 at 4:34 AM ron minnich <rminnich(a)gmail.com> wrote:
>
>> The story of the mice, one of which I gave to John:
>>
>> I ran a program called FAST-OS for LANL/Sandia for 6 years starting
>> 2005. Think of it as "Plan 9 on petaflop supercomputers" -- it may
>> seem strange now, but in that era when some top end systems ran custom
>> kernels, there was a strong case to be made that plan 9 was a good
>> choice. By 2011, of course, the Linux tsunami had swept all before it,
>> which is why you no longer hear about custom HPC kernels so much --
>> though in some places they still reign. In any event, this program
>> gave me 6 years to work with "the Unix room", or what was left of it.
>> I had been in the Unix Room in 1978, and even met Dennis, so this
>> prospect was quite a treat.
>>
>> We funded Charles Forsyth to write the amd64 compilers for Plan 9,
>> which if you used early Go you ran into (6c 6a 6l); we also funded the
>> amd64 port of Plan 9 (a.k.a. k10) as well as the port to Blue Gene.
>> That amd64 port is still out and about. You can find the Blue Gene
>> kernel on github.
>>
>> I had lots of fun spending time in the Unix room while working with
>> the late Jim McKie, and others. I saw the tail end of the traditions.
>> They had cookie day once a week, if memory serves, on Thursday at 3. I
>> got to see the backwards-running clock, Ken's chess trophies, his
>> pilot's license, pictures of Peter everywhere, a "Reagan's view of the
>> world" map, the American Legion award for Telstar (which was rescued
>> from a dumpster!), and so on. The "Unix room" was more than one room,
>> all built on a raised floor, as I assume it was former old school
>> machine room space. If memory serves, it filled the entire width of
>> the end of the top floor of the building it was in (4th floor?) --
>> maybe 50 ft x 50 ft -- maybe a bit more. There was a room with desks,
>> and a similar-sized room with servers, and a smaller room containing a
>> lab-style sink, a very professional cappucinno machine, decades of old
>> proceedings, and a sofa. I fixed the heavy-duty coffee grinder one
>> year; for some reason the Italian company that produced it had seen
>> fit to switch BOTH hot and neutral, and the fix was to only switch
>> hot, as the neutral switch had failed; I guess in the EU, with 220v,
>> things are done differently.
>>
>> It was fun being there. A few years later the whole room, and all its
>> history, was trashed, and replaced with what Jim called a "middle
>> management wxx dream" (Jim was never at a loss for words); Jim found
>> some yellow Police crime scene tape and placed it in front of the
>> doors to the new space. It was redubbed "the innovation space" or some
>> such, and looked kind of like an ikea showroom. Much was lost. I tried
>> to find a way to save the contents of the room; I had this dream of
>> recreating it at Google, much as John Wanamaker's office was preserved
>> in Philadelphia for so many decades, but I was too late. I have no
>> idea where the contents are now. Maybe next to the Ark.
>>
>> One day in 2008 or so jmk took me for a tour of the buildings, and we
>> at one point ended up high in the top floor of what I think was
>> Building One (since torn down?), in what used to be Lab Supply. Nobody
>> was there, and not much supply was there either. Finally somebody
>> wandered in, and Jim asked where everyone was. "Oh, they closed lab
>> supply, maybe 4 years ago?"
>>
>> Bell Labs had seen hard times since the Lucent split, and it was clear
>> it had not quite recovered, and Lab Supply was just one sign of it. I
>> think the saddest thing was seeing the visitor center, which I first
>> saw in 1976. In 1976, it was the seat of the Bell System Empire, and
>> it was huge. There was a map of the US with a light lit for every
>> switching office in the Bell Labs system. There was all kinds of Bell
>> Labs history in the visitor center museum.
>>
>> The museum had shrunk to a much smaller area, and felt like a closet.
>> The original transistor was still there in 2010, but little else. The
>> library was, similarly, changed: it was dark and empty, I was told.
>> Money was saved. At that time, Bell Labs felt large, strangely quiet,
>> and emptied of people. It made me think of post-sack Rome, ca. 600,
>> when its population was estimated to be 500. I have not been back
>> since 2011 so maybe things are very different. It would be nice if so.
>>
>> As part of this tour, Jim gave me 3 depraz mice. I took one, gutted
>> it, (sorry!), and filled its guts with a USB mouse innards, and gave
>> it back to Jim. He then had a Depraz USB mouse. jmk's mouse did not
>> have any lead in it, as John's did, however. The second I gave to
>> someone at Google who had worked at the labs back in the day. The
>> third mouse I gave to John, and he made it live again, which is cool.
>>
>> In spite of their reputation, I found Depraz mice hard to use. I have
>> gone through all kinds of mice, and am on an evoluent, and as far as
>> Depraz go, I guess "you had to be there". I don't recall if jmk used
>> his "usb depraz" or it ended up on a shelf. Sadly, I can no longer ask
>> him.
>>
>> I'll be interested to see what John thinks of the Depraz.
>>
>> ron
>>
>> On Fri, Aug 6, 2021 at 9:52 AM John Floren <john(a)jfloren.net> wrote:
>> >
>> > Ah, right. I opened the mouse because one of the encoders didn't seem
>> to be working (it worked fine again this morning, who knows...) and
>> discovered that there was something duct taped inside the plastic shell:
>> >
>> > http://jfloren.net/content/depraz/inside.jpg
>> >
>> > Peeling back the tape, I saw what I first took to be chunks of
>> flattened beer cans:
>> >
>> > http://jfloren.net/content/depraz/reveal.jpg
>> >
>> > A closer look showed that they were the wrappers which cover the corks
>> of wine bottles. Up into the 1980s, these were made out of lead, and by
>> flattening five of them, a previous owner of the mouse was able to add
>> quite a bit of extra weight to it:
>> >
>> > http://jfloren.net/content/depraz/wrapper.jpg
>> >
>> >
>> > john
>> >
>> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> >
>> > On Friday, August 6th, 2021 at 9:34 AM, ron minnich <rminnich(a)gmail.com>
>> wrote:
>> >
>> > > john, don't forget to mention the beer can
>> > >
>> > > On Fri, Aug 6, 2021 at 9:29 AM John Floren john(a)jfloren.net wrote:
>> > >
>> > > > I stuck an Arduino on it and with surprisingly little code I have
>> it acting like a 3-button USB mouse.
>> > > >
>> > > > The only problem is that the pointer doesn't move smoothly. It does
>> OK left-to-right, and can move down pretty well, but going up is a problem.
>> I think pushing the mouse forward tends to move the ball away from the
>> Y-axis wheel, and the old spring on the tensioner just doesn't have the
>> gumption to hold that heavy ball bearing in any more.
>> > > >
>> > > > john
>> > > >
>> > > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> > > >
>> > > > On Wednesday, August 4th, 2021 at 9:12 PM, ron minnich
>> rminnich(a)gmail.com wrote:
>> > > >
>> > > > > John, you can see that "stick a bird on it" -> "stick an arduino
>> on
>> > > > >
>> > > > > it" -> "stick a pi on it" has gone as you once predicted :-)
>> > > > >
>> > > > > On Wed, Aug 4, 2021 at 8:59 PM John Floren john(a)jfloren.net
>> wrote:
>> > > > >
>> > > > > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> > > > > >
>> > > > > > On Wednesday, August 4th, 2021 at 6:12 PM, Henry Bent
>> henry.r.bent(a)gmail.com wrote:
>> > > > > >
>> > > > > > > On Wed, 4 Aug 2021 at 20:52, John Floren john(a)jfloren.net
>> wrote:
>> > > > > > >
>> > > > > > > > Having just been given a Depraz mouse, I thought it would
>> be fun to get it working on my modern computer. Since the DE9 connector is
>> male rather than female as you usually see with serial mice, and given its
>> age, I speculate that it might have a custom protocol; at any rate,
>> plugging it into a USB-serial converter and firing up picocom has given
>> me nothing.
>> > > > > > > >
>> > > > > > > > Does anyone have a copy of a manual for it, or more
>> information on how to interface with it? If I knew how it was wired and
>> what the protocol looked like, I expect I could make an adapter pretty
>> trivially using a microcontroller.
>> > > > > > >
>> > > > > > > This might be of some help?
>> > > > > > >
>> > > > > > >
>> https://www.vcfed.org/forum/forum/technical-support/vintage-computer-hardwa…
>> > > > > > >
>> > > > > > > -Henry
>> > > > > >
>> > > > > > This looks great, thank you!
>> > > > > >
>> > > > > > john
>>
>
The mouse with wine-bottle lead foil in the top may have
been my fault. I did that to two of them--at home and in
my office--because I found a little more pressure made
the ball track better.
I've never been an alcohol-consumer; the lead came from
a friend, who used to save it (back in the 1980s) to
mail to Republicans. Apparently he had, many years
before, registered to vote in a Republican primary solely
to oppose a particularly-poor candidate. That somehow
got him on a GOP mailing list that sent him endless
funding appeals with post-paid envelopes. He used to
fill the envelopes with lead and drop them in the mail,
in the hope that he would cost the party even more in
excess postage than they were already spending to send
the funding pitches.
By the time I was at Bell Labs, he had moved to Canada,
and was no longer receiving unwanted political funding
pitches, but he was glad to save a few bits of lead for
me when I thought of the trick and asked him. Only too
glad, it turned out; he kept saving it and saving it
and saving it even though I neither needed nor wanted
any more. Eventually I managed to get the message
through to him.
He has since moved back to the US. He is still fond of
wine. I don't know what he does with the cork wrappers.
Norman Wilson
Toronto ON
I uploaded the high-resolution one to https://jfloren.net/content/unix_skeleton.jpg if anyone wants to check it out in all its glory.
Thanks, Rob, this is a great picture. I don't think things were *too* different by the time I visited for IWP9 in 2007, but it's been a long time and I guess I didn't take any pictures then.
john
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, August 6th, 2021 at 2:44 PM, Rob Pike <robpike(a)gmail.com> wrote:
> I sent a higher-res version in which you can read all the text but it was "moderated".
>
> This is the Unix room as of the year 2000 or so.
>
> -rob
Having just been given a Depraz mouse, I thought it would be fun to get it working on my modern computer. Since the DE9 connector is male rather than female as you usually see with serial mice, and given its age, I speculate that it might have a custom protocol; at any rate, plugging it into a USB-serial converter and firing up picocom has given me nothing.
Does anyone have a copy of a manual for it, or more information on how to interface with it? If I knew how it was wired and what the protocol looked like, I expect I could make an adapter pretty trivially using a microcontroller.
Thanks,
john
What do folks think about event-driven programming as a substitute for threads in UI and process control settings?
I wrote the service processor code for the Sicortex Machines using libevent.a and I thought it was very lightweight and fairly easy to think about. (This was a thing running on ucLinux on a tiny 16 MB coldfire that managed the consoles and power supplies and temp sensors and JTAG access and booting and so forth.)
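For anyone who never used it, the shape of such a libevent program is roughly this -- a minimal sketch against the current libevent 2.x API (the Sicortex code would have used the older event_set() interface; the fd and callback here are invented):

#include <event2/event.h>
#include <stdio.h>
#include <unistd.h>

/* Called by the event loop whenever fd becomes readable. */
static void on_console_input(evutil_socket_t fd, short what, void *arg)
{
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, n, stdout);  /* echo; a real handler would parse */
}

int main(void)
{
    struct event_base *base = event_base_new();

    /* One persistent read event per input source; consoles, power
       supplies, temp sensors and timers would each get their own. */
    struct event *ev = event_new(base, STDIN_FILENO, EV_READ | EV_PERSIST,
                                 on_console_input, NULL);
    event_add(ev, NULL);

    event_base_dispatch(base);      /* one thread, no locks */

    event_free(ev);
    event_base_free(base);
    return 0;
}

One thread, one loop; each console or sensor is just another callback registered with the same base.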
Tk (IIRC) has a straightforward event driven model for UI interactions.
Meanwhile, the dropbox plugin for my Mac has 120 threads running. WTF?
This was triggered by the fork/spawn discussion.
-Larry
(started with Unix at V6 on an 11/34)
> spawn() beats fork()[;] fork() should be deprecated
Spawn is a further complication of exec, which tells what signals and
file descriptors to inherit in addition to what arguments and
environment variables to pass.
Fork has a place. For example, Program 1 in
www.cs.dartmouth.edu/~doug/sieve/sieve.pdf forks like crazy and never
execs. To use spawn, the program would have to be split in three (or
be passed a switch setting).
While you may dismiss Program 1 as merely a neat demo, the same idea
applies in parallelizing code for use in a multiprocessor world.
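For readers without the paper at hand, the style being described looks roughly like this -- a sketch of the familiar fork-only pipelined sieve, not Program 1 itself:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Each stage reads numbers from 'in', announces the first one as
   prime, forks the next stage, and passes along non-multiples. */
static void sieve(int in)
{
    int p, n, fd[2];

    if (read(in, &p, sizeof p) != sizeof p)
        _exit(0);
    printf("%d\n", p);
    fflush(stdout);             /* don't let the fork duplicate output */

    pipe(fd);
    if (fork() == 0) {          /* child: the next stage */
        close(fd[1]);
        close(in);
        sieve(fd[0]);           /* never returns */
    }
    close(fd[0]);
    while (read(in, &n, sizeof n) == sizeof n)
        if (n % p != 0)
            write(fd[1], &n, sizeof n);
    close(fd[1]);
    wait(NULL);
    _exit(0);
}

int main(void)
{
    int fd[2], n;

    pipe(fd);
    if (fork() == 0) {          /* first sieve stage */
        close(fd[1]);
        sieve(fd[0]);
    }
    close(fd[0]);
    for (n = 2; n <= 1000; n++) /* the generator: all candidates */
        write(fd[1], &n, sizeof n);
    close(fd[1]);
    wait(NULL);
    return 0;
}

One process per prime, and no exec anywhere.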
Doug
Another issue I have run into is recursion, when a reference includes
another reference. This comes up in various forms:
Also published as ...
Errata available at ...
Summarized [or reviewed] in ...
Preprint available at ...
Often such a reference takes the form of a URL or a page in a journal
or in a proceedings. This can be most succinctly placed in situ --
formatted consistently with other references. If the reference
identifies an alternate mode of publication, it may lack %A or %T
fields.
Partial proposal: a %O field may contain a reference, with no further
recursion. The contained reference will be formatted inline in the %O
text unless references are accumulated and the contained reference is
not unique.
Doug
> Go gets us part of the way there, but cross-machine messaging is still a mess.
Shed a tear for Plan 9 (Pike yet again). While many of its secondary
innovations have been stuffed into Linux, its animating
principle--transparently distributable computing--could not overcome
the enormous inertia of installed BSD-model systems.
Doug
I have considerable sympathy with the general idea of formally
specifying and parsing inputs. Langsec people make a strong case
for doing so. The white paper, "A systematic approach to modern
Unix-like command interfaces", proposes to "simplify parsing by
facilitating the creation of easy-to-use 'grammar-based' parsers".
I'm not clear on what is meant by "parser". A plain parser is a
beast that builds a parse tree according to a grammar. For most
standard Unix programs, the parse tree has two kinds of leaves:
non-options and options with k parameters. Getopt restricts
k to {0,1}.
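As a concrete baseline, the getopt version of that two-leaf grammar is short -- a minimal sketch with invented option letters (-v takes no parameter, -o takes one):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int c, vflag = 0;
    char *outfile = NULL;

    /* "v" takes no parameter (k = 0); "o" takes one (k = 1) */
    while ((c = getopt(argc, argv, "vo:")) != -1) {
        switch (c) {
        case 'v':
            vflag = 1;
            break;
        case 'o':
            outfile = optarg;
            break;
        default:
            fprintf(stderr, "usage: demo [-v] [-o file] arg ...\n");
            return 2;
        }
    }
    if (vflag)
        printf("output file: %s\n", outfile ? outfile : "(stdout)");

    /* argv[optind] .. argv[argc-1] are the non-option leaves */
    for (; optind < argc; optind++)
        printf("non-option: %s\n", argv[optind]);
    return 0;
}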
Aside from fiddling with argc and argv, I see little difference
in working with a parse tree for arguments that could be
handled by getopt and using getopt directly.
A more general parser could handle more elaborate grammatic
constraints on options, for example, field specs in sort(1),
requirements on presence of options in tar(1), or representation
of multiple parameters in cut(1).
In realizing the white paper's desire to "have the parser
provide the results to the program", it's likely that the mechanism
will, like Yacc, go beyond parsing and invoke semantic actions
as it identifies tree nodes.
Pioneer Yaccification of some commands might be a worthy demo.
Doug
> On 7/31/21, Michael Siegel <msi(a)malbolge.net> wrote:
> The TOPS-20 COMND JSYS implemented both of these features, and I
> think that command completion was eventually added to the VMS command
> interpreter, too.
FYI, there is also a unix version of the COMND JSYS capability. It was
developed at Columbia University as part of their "mm" mail manager. It
is located in the ccmd subdirectory in the mm.tar.gz file.
urls: https://www.kermitproject.org/mm/ and ftp://ftp.kermitproject.org/kermit/mm/mm.tar.gz
-ron
Besides C-Kermit on Unix systems, the TOPS-20 command interface is
used inside the mm mail client, which I've been using for decades on
TOPS-20, VMS, and several flavors of Unix:
http://www.math.utah.edu/pub/mm
mm doesn't handle attachments, or do fancy display of HTML, and thus,
cannot do anything nasty in response to incoming mail messages.
I rarely need to extract an attachment, and I then save the message in
a temporary file and run munpack on it.
Here are some small snippets of its inline help:
MM] read (messages) ? message number
or range of message numbers, n:m
or range of message numbers, n-m
or range of message numbers, n+m (m messages beginning with n)
or "." to specify the current message
or "*" to specify the last message
or message sequence, one of the following:
after all answered before
current deleted flagged from
inverse keyword last longer
new on previous-sequence recent
seen shorter since subject
text to unanswered undeleted
unflagged unkeyword unseen
or "," and another message sequence
R] read (messages) flagged since yesterday
[message(s) appear here]
MM] headers (messages) since monday longer (than) 100000
[list of long messages here]
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> TUHS list (Rob Pike, Aug 2019)
>
> I think it was slightly later. I joined mid-1980 and VAXes to replace the
> 11/70 were being discussed but had not arrived. We needed to convert a lab
> into a VAX machine room and decide between BSD and Reiser, all of which
> happened in the second half of 1980.
>
> Reiser Unix got demand paging a little later, and it was spectacularly
> fast. I remember being gobsmacked when I saw a demo in early 1981.
>
> Dead ends everywhere.
I think I have figured out why 32V R3 was so fast (assuming my current understanding of how 32V R3 must have worked is correct).
Its VM subsystem tags each memory frame with its disk mirror location, be it in swap or in the file system. Pages can be found quickly, as they are hashed on device and block number. This is true both for pages in the working set and pages on the 2nd chance list. Effectively, most core is a disk cache.
In a unified buffer design, the buffer code would first look for an existing buffer header for the requested disk block, as in V7. If not found, it would check the page frame list for that block and if found it would connect the frame to an empty buffer header, increment its use count and move it to the working set. If not found there either, it would be loaded from disk as per usual. When a buffer is released, the frame use count would be decremented and if zero the page frame would be put back on the 2nd chance list and the buffer header would be marked empty. With this approach, up to 4MB of the disk could be cached in RAM.
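Spelled out as a toy, the lookup hypothesized above might be (all names invented; this is a reconstruction of the description, not recovered 32V r3 code):

#include <stdio.h>
#include <stdlib.h>

#define NHASH 64

struct frame {                   /* one core page frame */
    int dev, blkno;              /* its disk mirror location */
    int count;                   /* 0 means: on the 2nd chance list */
    struct frame *hnext;         /* hash chain on (dev, blkno) */
};

static struct frame *hash[NHASH];

static struct frame *pfind(int dev, int blkno)
{
    struct frame *f;

    for (f = hash[(dev + blkno) % NHASH]; f != NULL; f = f->hnext)
        if (f->dev == dev && f->blkno == blkno)
            return f;
    return NULL;
}

/* bread: look for the block in core before going to the disk */
static struct frame *bread(int dev, int blkno)
{
    struct frame *f = pfind(dev, blkno);

    if (f != NULL) {
        f->count++;              /* hit: back into the working set, no I/O */
        return f;
    }
    f = calloc(1, sizeof *f);    /* miss: stands in for real disk I/O */
    f->dev = dev;
    f->blkno = blkno;
    f->count = 1;
    f->hnext = hash[(dev + blkno) % NHASH];
    hash[(dev + blkno) % NHASH] = f;
    return f;
}

static void brelse(struct frame *f)
{
    /* count drops to 0: conceptually the frame moves to the 2nd
       chance list, but it stays hashed, so a later bread() can
       still reclaim it without touching the disk */
    f->count--;
}

int main(void)
{
    struct frame *f = bread(1, 100);    /* first touch: "disk I/O" */

    brelse(f);
    f = bread(1, 100);                  /* second touch: core hit */
    printf("(dev 1, block 100): count = %d, no second read\n", f->count);
    return 0;
}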
Early in 1981 most binaries and files were a few dozen KB in size. All of the shell, editor, compiler tool chain, library files, intermediate files, etc. would have fitted in RAM all at once. In a developer focused demo and once memory was primed, the system would effectively run from RAM, barely hitting the disk, even with tens of concurrent logins. Also something like “ls -l /bin” would have been much faster on its second run.
It puts a comment from JFR in a clearer context:
<quote>
Strict LRU on 8,192 pages, plus Copy-On-Write, made the second reference to a page "blindingly fast".
<unquote>
So far I had read this in the context of the paging algorithm, where it is hard to understand (is LRU really that much better than NRU?). In the context of a unified buffer and disk pages, it makes a lot more sense. Even the CoW part: as the (clean) data segment of executables would still be in core, they could start without reloading from disk - CoW would create copies as needed.
===
The interesting question now is: if this buffer unification was so impressive, why was it abandoned in SVr2-vax? I can think of 3 reasons:
1. Maybe there was a subtle bug that was hard to diagnose. "Research" opting for the BSD memory system "as it did not want to do the maintenance" suggests that there may have been lingering issues.
1b. A variation of this: JFR mentioned that his implementation of unified buffers broke conceptual layering. USG do not strike me as purists, but maybe they thought the code was too messy to maintain.
2. Maybe there was an unintended semantic issue (e.g. you can lock a buffer, but not an mmap'ed page).
3. Maybe it was hard to come up with a good sync() policy, making database work risky (and system crashes more devastating to the file system).
JFR mentioned that he did the design and implementation for 32V R3 in about 3 months, with 3 more months for bug fixing and polishing. That is not a lot of time for such a big and complex kernel mod (for its time).
Currently reading through IPC in CB Unix 3.0, SVr1-vax and SVr1-pdp11.
The IPC primitives in CB Unix (maus, event and msg) are quite different from those in SVr1 (although maus survives in SVr1-pdp11).
It made me wonder: is it still known who designed the IPC primitives for SysV?
> Sat Aug 31 06:21:47 AEST 2019
> John Reiser did do his own paging system for UNIX 32/V.
> I heard about it from friends at Bell Labs ca. 1982-83,
> when I was still running systems for physicists at Caltech.
> It sounded very interesting, and I would love to have had
> my hands on it--page cache unified with buffer cache,
> copy-on-write from the start.
>
> [...]
>
> It is in any case a shame that jfr's code never saw the light
> of day. I really hope someone can find it on an old tape
> somewhere and we can get it into the archive, if only because
> I'd love to look at it.
>
> Norman Wilson
> Toronto ON
Norman,
I am getting more optimistic that much of this version of 32V can be ‘triangulated’ from the surviving sources. For convenience I call this version “32V r3” (with the first swapping version being "32V r1" and the scatter loading version being "32V r2").
I’ve been reading my way through the surviving 32V r2, SysIII-Vax, SVr1-Vax and SVr2-Vax sources. There seems to be a lot of continuous progression in these versions. From a communication with Keith Kelleman I thought that VM in SVr2 was unrelated, but that appears to have been a misunderstanding. In short, it seems that the basic paging algorithms and data structures in SVr2 (other than the region abstraction) come from 32V r3.
The strongest clue for the source link is in the SVr2 “page.h” file. In it, the union describing a page table entry matches with JFR’s description and is otherwise (partially) unused in SVr2. There are other, similar clues in the other source trees.
To explain more clearly, have a look at this code section: https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax…
In this union, the 2nd struct “pgd” is never used, nor is the bit pg_disk in the “pgm” struct ever used. It matches with JFR’s recollection:
<quote>
My internal design strategy was to use the hardware page table entry
(4 bytes per page, with "page not present" being one bit which freed
the other 31 bits for use by software) as the anchor to track everything
about a page. This resulted in a complicated union of bitfield structs,
which became a headache. When other departments took over the work,
the first thing they did was use 8 bytes per page, which meant shuffling
between the software and the hardware descriptors: its own mess.
<unquote>
In the pte as given, a pte can be in two states: (i) the pg_disk bit is reset in which case it is in “pgm” format - this format is recognized by the mmu hardware; (ii) the pg_disk bit is set in which case it is in “pgd” format. When a page is paged in, the disk form of the pte was I think saved in the page frame descriptor (the “pfdat" table) and the pte converted to the memory form. Upon page-out the reverse happened. In the SVr2 version of this code the “pgd” form is abandoned and replaced by the separate disk descriptors (see https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax…)
The “pgd” structure is a bit puzzling. There is a 19 bit device block number, which is less than the 24 bits allowed by the filesystem code and also less than the 20 bits that would fit in the pte. Maybe this is because the RP04/RP05 disks of the early 80’s worked with 19 bits. I am not sure what the “pg_iord” field is about. The comment “inode index” may be misleading. My hypothesis is that it is a short form of the device code, for instance an index into the mount table; magic values could have been used to identify the swap device, “demand zero”, etc.
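Written out as C, the two-state pte hypothesized here would look something like this (a sketch only: field names, positions and widths are guesses from JFR's description and the SVr2 header linked above):

#include <stdio.h>

/* Hypothetical reconstruction; the real code is in the SVr2 header
   linked above, and all widths below are guesses. */
union pte {
    struct pgm {                 /* memory form, what the VAX MMU sees */
        unsigned pg_pfnum :21;   /* page frame number */
        unsigned pg_sw    :4;    /* software bits */
        unsigned pg_m     :1;    /* modified */
        unsigned pg_prot  :4;    /* protection */
        unsigned pg_disk  :1;    /* 0: memory form */
        unsigned pg_v     :1;    /* valid */
    } pgm;
    struct pgd {                 /* disk form, software only */
        unsigned pg_blkno :19;   /* device block number */
        unsigned pg_iord  :11;   /* "inode index" -- width a guess */
        unsigned pg_disk  :1;    /* 1: disk form */
        unsigned pg_v     :1;    /* always 0: any touch must fault */
    } pgd;
};

int main(void)
{
    /* both forms fit the one 4-byte word the hardware walks */
    printf("sizeof(union pte) = %zu\n", sizeof(union pte));
    return 0;
}

The trick only works if pg_v and pg_disk sit at the same bit positions in both forms: the pte is invalid, the reference faults, and the fault handler inspects pg_disk to see which state it caught.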
It seems probable to me that the paging algorithm in SVr2 was derived from the 32V r3 algorithm. John's recollection:
<quote>
Our machine started with 512KB of RAM, but within a few months was upgraded
to 4 MB with boards that used a new generation of RAM chips.
The hardware page size was 512 bytes, which is quite small. Strict LRU
on 8,192 pages, plus Copy-On-Write, made the second reference to a page
"blindingly fast".
<unquote>
In a follow up mail conversation with JFR we (I?) arrived at the conclusion that literal “strict LRU” is not likely on VAX hardware, but that an algorithm that maintains a small working set combined with a large second chance list amounts to about the same. It seems to me that this description also applies to the surviving SVr2 implementation: every 4 seconds sweep through the page tables of all processes and pages that were not referenced are moved to the free/2nd chance list.
To implement this it seems likely to me that 32V r3 used a structure similar to this: https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax… Such a structure is also in line with viewing core as a cache for disk, similar to TENEX that JFR had in mind.
The big change from SysIII to SVr1 in kernel page table management is the introduction of the “sptmap” (https://chiselapp.com/user/pnr/repository/paging/file?name=os/machdep.c&ci=…) In 32V r2 and SysIII the user page tables are swapped in and out of kernel space along with the _u area. This makes it impractical to do the working set sweep every few seconds. The “sptmap” innovation effectively creates a global page table space that fits with the needs of the working set sweep. In SVr1 it seems to have no real benefit, and it seems likely to me that it came from 32V r3.
In general it seems plausible to me that SVr1 derives from 32V r3, but was regressed to SysIII code where necessary to keep the code PDP-11 compatible. Another clue for this is in the buffer code of SVr1: https://chiselapp.com/user/pnr/repository/paging/file?name=os/main.c&ci=fba…
Here, the disk buffers are allocated as virtual memory pages, not as an array. This too is otherwise unused in SVr1, but makes perfect sense in the context of 32V r3.
So, in summary, it would seem to me that the 32V r3 virtual memory system:
- used sptmap code similar to SVr1/r2 to manage page tables in the kernel
- used the pte structure from SVr2 and a pfdat table similar to SVr2 to manage mappings
- used page stealer and fault code similar to SVr2
Phrased the other way around: SVr2-vax seems to use the 32V r3 virtual memory code with the region abstraction added on top, and the unified buffer removed.
At the moment I have not found clear clues for the unified buffer algorithms or mmap implementation. Perhaps careful reading of the IPC shared memory code in SVr1 will yield some.
To be continued …
In 1992, 386BSD is released by Lynne and William Jolitz, starting the open
source operating system movement (Linux didn't come along until later).
-- Dave
Unlike most here, I always pronounced Mt Xinu with an
eye, not an eee. I don't know where I got that, though.
I did know Ed Gould via USENIX and DECUS, but that doesn't
make my pronunciation correct.
As an aside, anyone know where Ed is these days or how he's
doing? I last saw him at a USENIX conference, probably in
San Jose in 2013 but I'm not sure. He showed up just for the
reception; he'd retired, and had cut away most of his famous
beard because he was spending a lot of time sailing and it
got in the way.
Norman Wilson
Toronto ON
Nelson H. F. Beebe:
P.S. Jay was the first to get Steve Johnson's Portable C Compiler,
pcc, to run on the 36-bit PDP-10, and once we had pcc, we began the
move from writing utilities in Pascal and PDP-10 assembly language to
doing them in C.
======
How did that C implementation handle ASCII text on the DEC-10?
Were it a from-scratch UNIX port it might make sense to store
four eight- or nine-bit bytes to a word, but if (as I sense it
was) it was C running on TOPS-10 or TOPS-20, it would have had
to work comfortably with DEC's convention of five 7-bit characters
(plus a spare bit used by some programs as a flag).
Norman Wilson
Toronto ON
More from Yost below.
My purpose in relating this was to point out that the original unix
implementation choices were mostly fine; they just had to be tweaked a
bit. Clearly an independent implementation such as in Linux would veer
off in a different direction, done in a different era and with different prior
experience. I was a bit surprised that Bruce didn't make this same
tweak to cblock size, but there's no way of knowing his reasons now.
> Begin forwarded message:
>
> From: Dave Yost
> Subject: Re: [TUHS] 386BSD released
> Date: July 16, 2021 at 9:21:53 AM PDT
> To: Bakul Shah
>
> Plz forward this
> thanks
>
> This was in early 1983 or late 1982.
>
> We got the serial driver to go 19200 out and 9600 in.
>
> I did 2 things in the Fortune Systems 68k serial driver:
> • hand-coded asm pseudo-DMA, suggested by Robert P Warnock III
> • cblock size 128 bytes instead of 8, count ’em, 8.
>
> From Lyons,
> https://cs3210.cc.gatech.edu/r/unix6.pdf
> the unix v6 serial driver used a clist of cblocks, like this:
>
>
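The clist diagram from Lyons did not survive in this archive; the V6 structures it depicted are roughly these (reconstructed from memory -- check tty.h before quoting):

/* Each cblock is a link plus 6 characters -- 8 bytes in all on the
   PDP-11 -- and a queue of characters is a chain of them, described
   by a clist. */
struct cblock {
    struct cblock *c_next;       /* next block in the chain */
    char info[6];                /* the characters themselves */
};

struct clist {
    int  c_cc;                   /* character count in the queue */
    char *c_cf;                  /* first character, inside some cblock */
    char *c_cl;                  /* last character */
};

Dave's change amounts to growing info[] so that the per-character interrupt path crosses a block boundary roughly 16 times less often.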
> The pseudo-DMA interrupt handler was a function made up of a few hand-coded 68k instructions, entered into C code as hex data. That code transferred one byte into or out of a cblock, and at the end of the cblock it grabbed the next cblock from a queue and rang the “doorbell” hardware interrupt, which caused a “software interrupt” at lower priority for further processing. Rob put the doorbell into the architecture with a couple of gates on the board because he was well aware of this software interrupt trick, which was already used in bsd. For some reason I didn’t look at the bsd code, probably because Rob’s explanation was lucid and sufficient.
>
> I once had occasion to mention this, and specifically the relaxing of the draconian 8 byte cblock size, to Dennis Ritchie. He said, sure, why not, the 8 byte cblock size was just a neglected holdover from early days.
>
> This approach was just an interrupt version of what I had proposed to Rick Kiessig as a first project at Fortune Systems: to get a 30x speed up when writing to the Fortune Systems memory-mapped character display hardware. I had done the same thing a few years earlier in Z80 in C code in a serial CRT terminal. It’s simple and obvious: make the inner loop do as little as possible. The most primitive operation needs to be a block operation, not a byte-at-a-time operation.
Doug McIlroy asks about the Rosetta Stone table relating TOPS-20
commands to Unix commands in my ``Unix for TOPS-20 Users'' document:
>> I was puzzled, though, by the Unix command "leave", which is
>> not in the manuals I edited, nor is it in Linux. What does
>> (or did) it do?
I reread that 1987 document this morning, and found a few small
mistakes, but on the whole, I still agree with what I wrote 34 years
ago, and I'm pleased that almost everything there about Unix still
applies today.
I confess that I had forgotten about the TOPS-20 ALERT command and its
Unix equivalent, leave. As Doug noted, leave is not in Linux systems,
but it still exists in the BSD world, in DragonFlyBSD, FreeBSD,
NetBSD, OpenBSD, and their many derivatives. From a bleeding-edge
FreeBSD 14 system, I find
% man leave
LEAVE(1) FreeBSD General Commands Manual LEAVE(1)
NAME
leave – remind you when you have to leave
SYNOPSIS
leave [[+]hhmm]
DESCRIPTION
The leave utility waits until the specified time, then reminds you that
you have to leave. You are reminded 5 minutes and 1 minute before the
actual time, at the time, and every minute thereafter. When you log off,
leave exits just before it would have printed the next message.
...
- Nelson H. F. Beebe
Clem Cole asks:
>> Did you know that before PCC the 'second' C compiler was a PDP-10
>> target Alan Snyder did for his MIT Thesis?
>> [https://github.com/PDP-10/Snyder-C-compiler]
I was unaware of that compiler until sometime in the 21st Century,
long after our PDP-10 was retired on 31-Oct-1990.
The site
https://github.com/PDP-10/Snyder-C-compiler/tree/master/tops20
supplies a list of some of Snyder's files, but they don't match
anything in our TOPS-20 archives of almost 180,000 files.
I then looked into our 1980s-vintage pcc source tree and compared
it with a snapshot of the current pcc source code taken three
weeks ago. The latter has support for these architectures
aarch64 hppa m16c mips64 pdp11 sparc64
amd64 i386 m68k nova pdp7 superh
arm i86 mips pdp10 powerpc vax
and the pdp10 directory contains these files:
CVS README code.c local.c local2.c macdefs.h order.c table.c
All 5 of those *.c files are present in our TOPS-20 archives. I then
grepped those archives for familiar strings:
% find . -name '*.[ch]' | sort | \
xargs egrep -n -i 'scj|feldman|johnson|snyder|bell|at[&]t|mit|m.i.t.'
./code.c:8: * Based on Steve Johnson's pdp-11 version
./code2.c:19: * Based on Steve Johnson's pdp-11 version
./cpp.c:1678: stsym("TOPS20"); /* for compatibility with Snyder */
./local.c:4: * Based on Steve Johnson's pdp-11 version
./local2.c:4: * Based on Steve Johnson's pdp-11 version
./local2.c:209: case 'A': /* emit a label */
./match.c:2: * match.c - based on Steve Johnson's pdp11 version
./optim.c:318: * Turn 'em into regular PCONV's
./order.c:5: * Based on Steve Johnson's pdp-11 version
./pftn.c:967: * fill out previous word, to permit pointer
./pftn.c:1458: register commflag = 0; /* flag for labelled common declarations */
./pftn2.c:1011: * fill out previous word, to permit pointer
./pftn2.c:1502: register commflag = 0; /* flag for labelled common declarations */
./reader.c:632: p2->op = NOASG p2->op; /* this was omitted in 11 & /6 !! */
./table.c:128: " movei A1,1\nZN", /* ZN = emit branch */
./xdefs.c:13: * symbol table maintainence
Thus, I'm confident that Jay's work was based on Steve Johnson's
compiler, rather than Alan Snyder's.
Norman Wilson asks:
>> ...
>> How did that C implementation handle ASCII text on the DEC-10?
>> Were it a from-scratch UNIX port it might make sense to store
>> four eight- or nine-bit bytes to a word, but if (as I sense it
>> was) it was C running on TOPS-10 or TOPS-20, it would have had
>> to work comfortably with DEC's convention of five 7-bit characters
>> (plus a spare bit used by some programs as a flag).
>> ...
Our pcc compiler treated char* as a pointer to 7-bit ASCII strings,
stored in the top 35 bits of a word, with the low-order bit normally
zero; a 1-bit there meant that the word contained a 5-digit line
number that some compilers and editors would report. Of course, that
low-order non-character bit meant that memset(), memcpy(), and
memmove() had somewhat dicey semantics, but I no longer recall their
specs.
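Concretely, that layout packs like this -- a sketch that models the 36-bit word in the low bits of a uint64_t (function names invented):

#include <stdio.h>
#include <stdint.h>

/* Model a PDP-10 word in the low 36 bits of a uint64_t: five 7-bit
   characters in the top 35 bits, low-order bit reserved for the
   line-number flag described above. */
static uint64_t pack5(const char *c, int lnflag)
{
    uint64_t w = 0;
    int i;

    for (i = 0; i < 5; i++)
        w = (w << 7) | (c[i] & 0x7f);    /* chars fill bits 35..1 */
    w = (w << 1) | (lnflag & 1);         /* bit 0: 5-digit line number? */
    return w & 0xFFFFFFFFFULL;           /* keep 36 bits */
}

int main(void)
{
    /* 36 bits print as exactly 12 octal digits, PDP-10 style */
    printf("%012llo\n", (unsigned long long)pack5("HELLO", 0));
    return 0;
}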
kcc later gave us access to the PDP-10's 1- to 36-bit byte
instructions.
For text processing, the 5 x 7b + 1b format matched the conventions for all
other programming languages on the PDP-10. When it came time to
implement NFS, and exchange files and data with 32-bit-word machines,
we needed the ability to handle files of 4 x 8b + 4b and 9 x 8b (in
two 36-bit words), and kcc provided that.
The one's-complement 36-bit Univac 1108 machines chose instead to
store text in a 4 x 9b format, because that architecture had
quarter-word load/store instructions, but not the general variable
byte instructions of the PDP-10. Our campus had an 1108 at the
University of Utah Computer Center, but I chose to avoid it, because
it was run in batch mode with punched cards, and never got networking.
By contrast, our TOPS-20, BSD, RSX-11, SunOS, and VMS systems all had
interactive serial-line terminals, and there was no punched card
support at all.
- Nelson H. F. Beebe
>> -r is weird because it enables backwards reading, but only as
>> limited by count. Better would be a program, say revfile, that simply
>> reads backwards by lines. Then tail p has an elegant implementation:
>> revfile p | head | revfile
> tail -n can be smarter in that it can simply read the last K bytes
> and see if there are n lines. If not, it can read back further.
> revfile would have to read the whole file, which could be a lot
> more than n lines! tail -n < /dev/tty may never terminate but it
> will use a small finite amount of memory.
Revfile would work the same way. When head has seen enough
and terminates, revfile will get SIGPIPE and stop. I agree that,
depending on scheduling and buffer management, revfile might
read more than tail -n, but it wouldn't read the whole of a
humongous file.
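For the curious, such a revfile is a small program on a seekable file -- a sketch, not a historical artifact: it reads blocks from the end, emits complete lines last-first, and simply dies of SIGPIPE when head downstream has had enough.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLK 8192

/* print the lines of s[0..n) in reverse order; n is a line boundary */
static void putrev(const char *s, long n)
{
    long end, i;

    for (end = n; end > 0; end = i) {
        for (i = end - 1; i > 0 && s[i-1] != '\n'; i--)
            ;
        fwrite(s + i, 1, end - i, stdout);
        if (s[end-1] != '\n')
            putchar('\n');       /* file's last line lacked a newline */
    }
}

int main(int argc, char *argv[])
{
    FILE *f;
    long pos, n, slen, j, carrylen = 0;
    char blk[BLK], *s, *carry = NULL;

    if (argc != 2 || (f = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: revfile file\n");
        return 1;
    }
    fseek(f, 0L, SEEK_END);
    pos = ftell(f);

    while (pos > 0) {
        n = pos < BLK ? pos : BLK;
        pos -= n;
        fseek(f, pos, SEEK_SET);
        if (fread(blk, 1, n, f) != (size_t)n)
            return 1;

        slen = n + carrylen;             /* s = this block + leftover */
        s = malloc(slen);
        memcpy(s, blk, n);
        if (carrylen)
            memcpy(s + n, carry, carrylen);
        free(carry);

        /* up to and including the first newline may be the tail of a
           line that starts still earlier in the file: carry it over */
        for (j = 0; j < slen && s[j] != '\n'; j++)
            ;
        if (j < slen)
            j++;
        if (pos == 0)
            j = 0;                       /* nothing earlier: emit it all */

        putrev(s + j, slen - j);         /* complete lines, last first */
        carrylen = j;
        carry = malloc(j ? j : 1);
        memcpy(carry, s, j);
        free(s);
    }
    return 0;
}

Memory use is bounded by the block size plus the longest line, not by the file.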
Doug
> Arguably ancient PDP-10 operating systems like ITS, WAITS, TENEX were
> somewhat "open" and "free", but it's not a clear cut case.
The open source movement was a revival of the old days of SHARE and other
user groups.
SAP, the SHARE assembly program for the IBM 704, was freely available--with
source code--to all members of the SHARE user group. I am not aware of any
restrictions on redistribution.
Other more specialized programs were also freely available through SHARE. In
particular, Fortran formatted IO was adopted directly from a SHARE program
written by Roy Nutt (who also wrote SAP and helped write Fortran I).
Bell Labs freely distributed the BESYS operating system for the IBM 704.
At the time (1958) no operating system was available from IBM.
IBM provided source code for the Fortran II compiler. In the
fashion of the time, I spent a memorable all-night session with
that code at hand, finding and fixing a bizarre bug (a computed GOTO
bombed if the number of branches was 74 mod 75) with a bizarre cause
(the code changed the index-register field in certain instructions on the
fly--inconsistently). And there was no operating system to help, because
BESYS swapped itself out to make room for the compiler.
Doug
This somewhat stale note was sent some time ago, but was ignored
because it was sent from an unregistered email address.
> And if the Unix patriarchs were perhaps mistaken about how useful
> "head" might be and whether or not it should have been considered
> verboten.
Point well taken.
I don't know which of head(1) and sed(1) came first. They appeared in
different places at more or less the same time. We in Research
declined to adopt head because we already knew the idiom "sed 10q".
However one shouldn't have to do related operations in unrelated ways.
We finally admitted head in v10.
Head was independently invented by Mike Lesk. It was Lesk's
program that was deemed superfluous.
Head might not have been written if tail didn't exist. But, unlike head,
tail strayed from the tao of "do one thing well". Tail -r and tail -f are
as cringeworthy as cat -v.
-f is a strange feature that effectively turns a regular file into a pipe
with memory by polling for new data. A clean general alternative
might be to provide an open(2) mode that makes reads at the current
file end block if some process has the file open for writing.
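The polling in question is essentially this loop -- a sketch of the classic sleep-and-read tail -f, not any particular historical implementation:

#include <stdio.h>
#include <unistd.h>

/* follow: the pre-select tail -f -- read to EOF, nap, try again */
static void follow(FILE *f)
{
    char buf[BUFSIZ];
    size_t n;

    for (;;) {
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            fwrite(buf, 1, n, stdout);
        fflush(stdout);
        clearerr(f);    /* forget the EOF so the next fread looks again */
        sleep(1);       /* the polling objected to above */
    }
}

int main(int argc, char *argv[])
{
    FILE *f = argc > 1 ? fopen(argv[1], "r") : stdin;

    if (f == NULL)
        return 1;
    fseek(f, 0L, SEEK_END);  /* the real tail prints the last lines first */
    follow(f);               /* never returns */
    return 0;
}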
-r is weird because it enables backwards reading, but only as
limited by count. Better would be a program, say revfile, that simply
reads backwards by lines. Then tail p has an elegant implementation:
revfile p | head | revfile
Doug
>> -f is a strange feature that effectively turns a regular file into a pipe
>> with memory by polling for new data. A clean general alternative
>> might be to provide an open(2) mode that makes reads at the current
>> file end block if some process has the file open for writing.
> OTOH, this would mean adding more functionality (read: complexity)
> into the kernel, and there has always been a general desire to avoid
> pushing <stuff> into the kernel when it can be done in userspace. Do
> you really think using a blocking read(2) is somehow more superior
> than using select(2) to wait for new data to be appended to the file?
I'm showing my age. tail -f antedated select(2) and was implemented
by alternately sleeping and reading. select(2) indeed overcomes that
clumsiness.
> I'll note, with amusement, that -r is one option which is *NOT* in the
> GNU version of tail. I see it in FreeBSD, but this looks like a
> BSD'ism.
-r came from Bell Labs. This reinforces the point that the ancients
had their imperfections.
Doug