Starting this in TUHS due to UNIX relevance, but with the heavy disclaimer this should quickly diverge to COFF if it drifts into foreign waters.
I've gotten to reading about 5ESS lately, and it seems there are many in use still today. I found myself wondering what the evolution has looked like as far as computing hardware and operating systems involved. DMERT ran on the 3B-20D supporting the 5ESS systems in the early 80s, at the same time that UNIX 4.x was making the rounds on the 3B-20S.
A 5ESS manual from 2001[1] mentions UNIX RTR (Real-Time Reliable) of the DMERT lineage. Wikipedia[2] suggests 5ESS is still very much in use and mentions more modern systems like VCDX; its "Sun Microsystems SPARC workstation runs the UNIX-based Solaris (operating system) that executes a 3B20/21D processor MERT OS emulation system." This sounds like the Lucent 3B20 emulator.
Is there still a line of UNIX RTR/DMERT being maintained to this day? Or are users left with other avenues to keep their hardware updated if necessary?
[1] - https://documentation.nokia.com/cgi-bin/dbaccessfilename.cgi/235600510_V1_5…
[2] - https://en.wikipedia.org/wiki/5ESS_Switching_System
- Matt G.
On SCO UNIX 3.2V4 you indeed have local virtual consoles, moving from one to
another using the function keys. It worked from F1 to F12, but how many you
could log in on depended on your license. Of course all still character
based, and dependent on your TERM setting.
--
The more I learn the better I understand I know nothing.
Hi!
I have an updated Unix 2.0v2 package running under the current VAX-780 sim.
It has been tuned up with missing packages and some new ones, like automatic
startup with date/time setting, running fsck at boot, and properly shutting
down ALL processes at shutdown, allowing for a "clean" shutdown and thereby
avoiding fsck issues at the next boot.
Also there is a virtual tape so you can do backups.
My question is: can THIS version access an IP address? And if so, how?
If there is any interest I could package it up into a tar file for others.
Thanks,
Ken
--
WWL 📚
Hello folks, posing a question here that will help with some timelining.
So System III, according to everything I've read, was commercially issued in 1982. However, PWB 3.0 was issued internally in 1980, two years prior. This isn't that surprising: give USL some time to work it up for commercial readiness and the gap makes sense.
Where I'm curious is whether there was a similar gap for the public release of PWB, given that it was earlier on and pre-support and such. Was there a particular "public release" date for PWB 1.0, or would it have just been whenever folks started getting tapes out of Bell? I know it shows up in a price sheet floating around from, say, 1983 or 1984, among the likes of V7, 32V, System III, and System V, also for sale. But would anything that early have had a formal "ship date" indicating a day they cut the master to copy tapes from, or was it more a case of "contact Bell, and someone will cut you a tape of whatever we've got right now"?
Also, was PWB held as something that would be "marketable" from the get-go, or was it more of a happy accident that it wound up in the right place at the right time to become the commercial line? One would think USG Generic would be the one they'd shoot for as the "base" to build on, but everything I'm finding in my study of System III lately is pointing to a much more PWB-ish lineage, with random borrowings from CB, PY, HO, IH, among others.
- Matt G.
There was such a tool, psroff (same name as, but no shared code with, the
Adobe version), that read either C/A/T or ditroff output and produced
PostScript. It was in volume 24 of comp.sources.unix. There were also
multiple patches to it.
It dates from 1991.
segaloco, you were looking for this? Contact me privately if you can't
find it.
Arnold
> From: "Ronald Natalie"
> Multilevel breaks are as bad as goto with regard to structure violation.
In a way, you are right. There isn't really much difference between:
for (mumble) {
    for (foobar) {
        do some stuff
        break-2;
    }
}

and:

for (mumble) {
    for (foobar) {
        do some stuff
        goto all_loops_done;
    }
}
all_loops_done:
The former is basically just 'syntactic sugar' for the latter.
I think the point is that goto's aren't necessarily _always_ bad, in and of
themselves; it's _how_, _where_ and _why_ one uses them. If one uses goto's
in a _structured_ way (oxymoronic as that sounds), to get around things that
are lacking in the language's flow-control, they're probably fine.
Then, of course, one gets into the usual shrubbery of 'but suppose someone
uses them in a way that's _not_ structured?' There's no fixing stupid, is my
response. Nested 'if/then/else' can be used to write completely
incomprehensible code (I have an amusing story about that) - but that's not
an argument against nested 'if/then/else'.
As I've said before, the best sculpting tools in the world won't make a great
sculptor out of a ham-handed bozo.
Noel
> From: Warner Losh
> for breaking out of multiple levels of while/for loops.
Yeah, multi-level 'breaks' were one of the things I really missed in C.
The other was BCPL's 'VALOF/RESULTIS'. Back before C compilers got good
enough to do inline substitutions of small procedures (making macros with
arguments less useful), it would have been nice to have, for macros that
wanted to return something. Instead, one had to stand on one's head and use a
'(cond ? ret1 : ret2 )' of some form.
Noel
> From: Ralph Corderoy
> if you say above that most people are unfamiliar with them due to their
> use of goto then that's probably wrong
I didn't say that. I was just astonished that in a long thread about handling
exceptional conditions, nobody had mentioned . . . exceptions. Clearly, either
unfamiliarity (perhaps because not many languages provide them - as you point
out, Go does not), or not top of mind.
Noel
I have successfully got System V running on a PDP11 sim.
I have been trying to add serial lines like on Version 7 but
have had no success.
What would be necessary under System V on a sim to do so?
I have already tried the SIMH group but got no working answers.
If direct telnet is a better way please let me know.
Thanks
Ken
--
WWL 📚
Ca. 1981, if memory serves, having even small numbers of TCP connections
was not common.
I was told at some point that Sun used UDP for NFS for that reason. It was
a reasonably big deal when we started to move to TCP for NFS ca 1990 (my
memory of the date -- I know I did it on my own for SunOS as an experiment
when I worked at the SRC -- it seemed to come into general use about that
time).
What kind of numbers for TCP connections would be reasonable in 1980, 1990,
2000, and 2010?
I sort of think I know, but I sort of think I'm probably wrong.
So I decided to keep the momentum and have just finished the first pass of a Fifth Edition manual restoration based on the same process I used for 3B20 4.1:
https://gitlab.com/segaloco/v5man
There were a few pages missing from the extant PDF scan, at least among pages that appear in both the V4 and V6 sources, so those are handled by seeing how the V5 source of the few programs in question compares to V6. I'll note which pages required this in a second pass.
I've set my sights on V1 and V2 next, using V3's extant roff sources as a starting point, so more to come.
- Matt G.
From reading a lot of papers on the origins of TCP I can confirm that people appear to have been thinking in terms of a dozen connections per machine, maybe half that on 16-bit hardware, around 1980. Maybe their expectations for PDP-10’s were higher; I have not looked into that much.
> From: Tom Lyon <pugs78(a)gmail.com>
> Sun chose UDP for NFS at a point when few if any people believed TCP could
> go fast.
> I remember (early 80s) being told that one couldn't use TCP/IP in LANs
> because they were WAN protocols. In the late 80s, WAN people were saying
> you couldn't use TCP/IP because they were LAN protocols.
I’m not disputing the above, but there was a lot of focus on making TCP go fast enough for LAN usage in 1981-1984. For example see this 1981 post by Fabry/Joy in the TCP-IP mailing list: https://www.rfc-editor.org/in-notes/museum/tcp-ip-digest/tcp-ip-digest.v1n6…
There are a few other similar messages to the list from around that time.
An early issue was checksumming, which initially took 25% of the CPU on a VAX-750 when TCP was heavily used. Ideas like "trailing headers" (so that the data was block aligned) were also driven by a search for LAN performance. Timeouts were reduced from 5s and 2s to 0.5s and 0.2s. Using a software interrupt instead of a kernel thread was another thing that made the stack more performant. It always seemed to me that the BBN-CSRG controversy over TCP code spurred both teams ahead, with BBN more focused on WAN use cases and CSRG more on LAN use cases. I would argue that no other contemporary network stack had this dual optimisation, with the possible exception of Datakit.
Guys,
Find attached an updated date.c with Y2K support for System V.
E.g.: date 0309182123
Also works:
# date +%D
03/09/23
# date +%y%m%d%H%M
2303091823
Interestingly, Version 7 wants: date 2303091821
Ken
--
WWL 📚
> From: Kenneth Goodwin
> The first frame buffers from Evans and Sutherland were at University of
> Utah, DOD SITES and NYIT CGL as I recall.
> Circa 1974 to 1978.
Were those on PDP-11's, or PDP-10's? (Really early E+S gear attached to
PDP-10's; '74-'78 sounds like an interim period.)
Noel
In PWB1, support for 'huge' files appears to have been removed. If one
compares bmap() in PWB1's subr.c with V6's, the "'huge' fetch of double
indirect block" code is gone. I guess PWB didn't need very large (> 8*256*512
= 1,048,576 bytes) files? I'm not sure what the _benefits_ of removing it
were, though - unless PWB was generating lots of files of between 7*256*512
and 8*256*512 bytes in length, and they wanted to avoid the overhead of the
double-indirect block? (The savings in code space are derisory - unlike in
LSX/MINI-UNIX.) Anyone know?
Noel
I am confused on the history of the frame buffer device.
On Linux, it seems that /dev/fbdev originated in 1999 from work done by Martin Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).
However, it would seem at first glance that early SunOS also had frame buffer devices (/dev/cgoneX, /dev/bwoneX, etc.) which were similar in nature (character devices that could be mmap’ed to give access to the hardware frame buffer, with ioctl’s to probe and configure the hardware). Is that correct, or were these entirely different in nature?
Paul
The wheel of reincarnation discussion got me to thinking:
What I'm seeing is reversing the rotation of the wheel of reincarnation.
Instead of pulling the task (e.g. graphics) from a special purpose device
back into the general purpose domain, the general purpose computing domain
is pushed into the special purpose device.
I first saw this almost 10 years ago with a WLAN modem chip that ran Linux
on its 4 core cpu, all of it in a tiny package. It was faster, better, and
cheaper than its traditional embedded predecessor -- because the software
stack was less dedicated and single-company-created. Take Linux, add some
stuff, voila! WLAN modem.
Now I'm seeing it in peripheral devices that have, not one, but several
independent SoCs, all running Linux, on one card. There's even been a
recent remote code exploit on, ... an LCD panel.
Any of these little devices, with the better part of a 1G flash and a large
part of 1G DRAM, dwarfs anything Unix ever ran on. And there are more and
more of them, all over the little PCB in a laptop.
The evolution of platforms like laptops to becoming full distributed
systems continues. The wheel of reincarnation spins counter clockwise -- or
sideways?
I'm no longer sure the whole idea of the wheel of reincarnation is even
applicable.
Rob Pike:
As observed by many others, there is far more grunt today in the graphics
card than the CPU, which in Sutherland's timeline would mean it was time to
push that power back to the CPU. But no.
====
Indeed. Instead we are evolving ways to use graphics cards to
do general-purpose computation, and assembling systems that have
many graphics cards not to do graphics but to crunch numbers.
My current responsibilities include running a small stable of
those, because certain computer-science courses consider it
important that students learn to use them.
I sometimes wonder when someone will think of adding secondary
storage and memory management and network interfaces to GPUs,
and push to run Windows on them.
Norman Wilson
Toronto ON
Recently, I stumbled upon a photo of the Lions Commentary that didn't have
a bell disclaimer, but a Wollongong Group disclaimer on it. Not Wollongong
University, but The Wollongong Group (a company I coincidentally used to
work for). I wish I'd saved the images, because now I can't find it. Has
anybody else seen this?
Warner
> I'll turn this into a 'Fixing damaged V5/V6 file systems' article on
> the CHWiki.
Here's a first crack at it:
https://gunkies.org/wiki/Repairing_early_UNIX_file_systems
Any suggestions for improvements/additions will be gratefully received!
I've also been amusing myself trying to figure out who wrote:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/fcheck.c
and how it got to MIT - which might give us a clue as to who wrote it. (It's
clearly a distant ancestor to 'fsck'.) The fact that we've lost Ted Kowalski
is really hindering, alas. Interestingly, Dale DeJager, head of the CB-UNIX
group, earlier remembered Hal Pierson working on a file system checker early
on:
"Hal also implemented the first file system check routine that was written
in C. It replaced an .. assembler version from research"
but it's not clear if the thing Hal wrote, mentioned there, has any
relationship with the 'check' of V5:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s1/check.c
Maybe one of the Labs old-timers here remembers where the V5 thing came from?
(I.e. did Ken or Dennis write it, or did it come from Columbus?) If you do, it
would be a big help!
Noel
Hi,
When executing ps alx on the interdata sim I get good output:
# ps alx
F S UID PID PPID CPU PRI NICE ADDR SZ WCHAN TTY TIME CMD
3 S 0 0 0 255 0 20 2235 2 4262 ? 36:48 swapper
1 S 0 1 0 0 30 20 2255 8 46060 ? 0:00 /etc/init
1 S 0 19 1 0 30 20 2745 11 46114 co 0:00 -sh
1 R 0 301 19 4 50 20 4056 20 co 0:00 ps alx
1 S 0 12 1 0 40 20 2545 5 140000 ? 0:00 /etc/update
1 S 1 18 1 0 40 20 2625 10 140000 ? 0:00 /etc/cron
#
When executing ps alx on the pdp11 sim I get bad output:
# ps alx
F S UID PID PPID CPU PRI NICE ADDR SZ WCHAN TTY TIME CMD
115
5120 0 0 1 26 1 55 1 3003 ? 120150:37 swapper
#
I tried copying the source from one machine to the other. No luck, same
issue.
I have attached the source from both machines.
Any help appreciated.
Ken
--
WWL 📚
> From: Douglas McIlroy
> Typo, in v3 through v6 ...
> 26^3 16-bit trigram counts didn't fit in the PDP-11 memory
Being mildly curious, I fed '26 3 ^p' into 'dc' to see just how big it was -
and got "17576", a 16-bit word array of which would fit into a PDP-11 64KB
address space.
I think the answer is in the first line - V3 didn't use the PDP-11 memory
management, so the kernel _and_ the application had to fit into 56KB. So
there may well have not been 36KB available to hold a 26^3 array of 16-bit
words.
The other possible explanation is that it was perfectly possible to run UNIXes
of that era (V4 on) on machines without enough main memory to hold the kernel
and a 'full-sized' process simultaneously. (Our original machine, an -11/40,
started out without a lot of memory; I don't recall exactly how much, though.
It had, I'm pretty sure, 3 banks of core; I was thinking it was 3 MM11-L core
units, which would be 3x16KB, or only 48KB, but my memory must be wrong;
that's not really enough.)
Noel
> From: Clem Cole wrote:
> It had more colorful name originally - fsck (pronounced as fisk BTW)
> was finished. I suspect the fcheck name was a USG idea.
I dunno. I don't think we at MIT would have gratuitously changed the name to
'fcheck'; I rather think that was its original name - and we pretty
definitely got it from CMU. 'fsck' was definitely descended from 'fcheck'
(below).
> From: Jonathan Gray
>> (are 'fsck' and 'fcheck' the same program?)
> https://www.tuhs.org/cgi-bin/utree.pl?file=V7addenda/fsck
Having looked at the source to both, it's quite clear that 'fcheck' is a
distant ancestor of 'fsck' (see below for thoughts on the connection(s)). The
latter has been _very_ extensively modified, but there are still some traces
of 'fcheck' left.
A lot of the changes are to increase the portability, and also to come into
compliance with the latest 'C' (e.g. function prototypes); others are just to
get rid of oddities in the original coding style. E.g.:
unsigned
dsize,
fmin,
fmax
;
Perfectly legal C, but nobody uses that style.
> From: Jonathan Gray
> fcheck is from Hal Pierson at Bell according to
> https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/readme.txt
Hmm. "the major features that were added to UNIX by CB/UNIX ... Hal Person
(or Pierson?) also rewrote the original check disk command into something
that was useful by someone other than researchers."
I poked around in CB/UNIX, and found 'check(1M)':
https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/cbunix_man1_01.pdf
(dated November 1979). Alas, the source isn't there, but it's clearly in the
fcheck/fsck family. (CB/UNIX also has chkold(1M), which looks to me like it's
'icheck'.)
So now we have a question about the ancestry of 'check' and 'fcheck' - is one
an ancestor of the other, and if so, which - or are they independent
creations? Without the source, it's hard to be definitive, but from the
messages (as given in the manual), they do seem related.
Clem's message of 3 Mar, 14:35 seems to indicate that the original was from
CMU, authored by Ted Kowalski; he also:
https://wiki.tuhs.org/doku.php?id=anecdotes:clem_cole_student
says "Ted Kowalski shows up for his OYOC year in the EE dept after his summer
at Bell Labs ... He also brought his cool (but unfinished) program he had
started to write originally at U Mich - fsck". So maybe the CB/UNIX 'check' is
descended from a version that Ted left behind at Bell Labs?
Is anyone in touch with Hal Pierson? He could surely clear up these questions.
Noel
> From: KenUnix
> So is it safe to say there is no fsck or similar for v7?
There was a version of 'fcheck' (are 'fsck' and 'fcheck' the same program?)
for V7, but I don't know if it's available. It would be really easy to
convert the 'fcheck.c' that I put up to a V7 version; the V6 and V7 file
systems are almost identical, except for the block size, I think.
> From: Dan Cross
> I believe you posted a link to end(3) here back in 2018
Yes, but that doesn't talk about '_end' not being defined if there
are missing externals, either! All it says is:
"Values are given to these symbols by the link editor 'ld' when, and only
when, they are referred to but not defined in the set of programs loaded."
Now that I think about it, I have this vague memory we had to look at the
source for 'ld.c' to verify what was going on!
> From: Jonathan Gray
> That is close, but slightly different to the PWB fcheck.c
Interesting. I wonder how 'fcheck' made it from CMU to Bell? Clem and I
discussed how it made it from CMU to MIT, and we think it was via Wayne
Gramlich, who'd been an undergrad at CMU, and then went to grad school at MIT.
I'm pretty sure the reason we liked it was not any auto-repair capabilities,
but ISTR it was somewhat faster than icheck/dcheck. (Interesting that they were
separate programs in V6; V5 seems to have only had check:
http://squoze.net/UNIX/v5man/man8/check
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s1/check.c
which contained the functionality of both. I wonder why they were split?
Space?)
> From: Rich Salz
> But the amazing point was it worked regardless of bit order.
I forgot to mention that, but yes, its input was the number in bit-serial
form. I suspect there's a connection between the property he mentioned, and
the fact that the grad student could design something which would work with
binary numbers fed in from either end, but I can't bring myself to devote the
brain cells to figure it out.
> From: John Cowan
> I didn't know that one was done at MIT.
Yes; see:
https://www.hactrn.net/sra/alice/alice.intro
There's a really funny story at the end of that about the real Ann Marie
Finn. In Rob's version, she took the role of KAREN in the earlier one. That
would be Karen Prendergast, Patrick Winston's admin; why we used her I don't
know, since I didn't really know her, but I guess she had a reputation as a bit of
a 'tough cookie'.
>> I think that the person fails their oral. I have no idea if it's a
>> true story.
> That's vicious.
Hey, this _is_ the school that used to tell incoming freshpeople, at the
welcoming picnic 'look at the person to your left, and to your right; at
graduation, one of you won't be here'. I don't remember if they said the same
thing at mine, or if the story had just been passed down from class to class.
Noel
In looking at the first AUUGN today, I noticed the following at the end of
a letter John Lions sent home when he spent a sabbatical at Bell Labs
[image: image.png]
I've seen the first patent, but not the second one... That's got to be a
joke or inside joke, right? Anybody know anything else about it?
There was a well known ftp site in the early 1990s called
simtel20.army.mil. It was mostly known as a repository for ms-dos
utilities, but it also had a collection of source code to various
user-contributed unix utilities. I just uploaded those to the Internet
Archive: https://archive.org/details/oak-unix-c--full-mirror-1999.12.14
So in working on an unrelated 6502 project, I got to wondering about UNIX on it and other 8-bits. Did some Googling, and while I was able to turn up some attempts at UNIX-likes on 6502 as well as Z80, the only one I found that might have some Bell connection is "uNIX" as documented here: https://bitsavers.org/pdf/uNIX/uNIX_Jan82.pdf
A forum post I read suggested those involved were some former Bell folks from NJ. In any case, this raises the question for me: were there ever any serious attempts at an 8-bit UNIX in the Labs or the Bell System at large? Certainly it would've presented quite the challenge without much return compared with the 16- and 32-bit efforts, but does anyone know if, say, an LSX/Mini-UNIX-ish attempt was ever made on the 6502, Z80, or other 8-bits? Thanks all!
- Matt G.
Hi,
I am trying to use the 'dump' program but it references rmt1.
My system only has rmt0. I have been unable to find how to
create this device. I have looked over the reference material
but it only references rmt0.
Is there any way to redirect a dump to use rmt0?
Any help is appreciated.
# ls -l
total 4
drwxr-xr-x 2 root 336 Mar 1 16:56 .
drwxr-xr-x 8 root 288 Feb 21 17:18 ..
crw--w--w- 1 root 0, 0 Mar 2 07:47 console
crw-r--r-- 1 bin 8, 1 Jan 10 1979 kmem
-rw-rw-r-- 1 bin 775 Jan 10 1979 makefile
crw-r--r-- 1 bin 8, 0 Jan 10 1979 mem
*brw-rw-rw- 1 root 3, 0 Mar 1 20:42 mt0*
crw-rw-rw- 1 root 12,128 Dec 31 1969 nrmt0
crw-rw-rw- 1 bin 8, 2 Dec 31 1969 null
*crw-rw-rw- 1 root 12, 0 Feb 23 15:55 rmt0*
brw-r--r-- 1 root 6, 0 Mar 2 07:47 rp0
brw-r--r-- 1 root 6, 15 Dec 31 1969 rp3
crw-r--r-- 1 root 14, 0 Dec 31 1969 rrp0
crw-r--r-- 1 root 14, 15 Dec 31 1969 rrp3
brw-r--r-- 1 root 6, 1 Dec 31 1969 swap
crw-rw-rw- 1 bin 17, 0 Mar 1 19:39 tty
crw--w--w- 1 root 3, 0 Mar 1 19:41 tty00
crw--w--w- 1 root 3, 1 Feb 23 16:47 tty01
crw--w--w- 1 root 3, 2 Feb 21 16:56 tty02
crw--w--w- 1 root 3, 3 Feb 21 16:56 tty03
Not only that, but when attempting to use dump it creates
a file and consumes all the space on rp0.
In dev it creates:
-rw-rw-r-- 1 root 174080 Mar 2 09:20 rmt1
Sample run:
# dump
date = Thu Mar 2 09:20:16 2023
dump date = the epoch
dumping /dev/rrp3 to */dev/rmt1*
I
II
estimated 24870 tape blocks on 0 tape(s)
III
IV
*no space on dev 6/0*
no space on dev 6/0
no space on dev 6/0
no space on dev 6/0
Thanks,
Ken
--
WWL 📚
> This one, perhaps:
> https://patents.google.com/patent/US3964059A/en
Yes, that's the Typo patent. Notice that it features "method and
apparatus". The bizarre idea of doing it in hardware was a figment of
the patent department's imagination. This was a dance to circumvent
the belief at the time that software could not be patented. Software
was smuggled in by stating that it was one way to realize the
apparatus in the patent disclosure.
The now obsolete belief was fallout from Gottschalk v. Benson, in
which the Supreme Court invalidated another Bell Labs patent, on a
trick to save a few cycles in converting integers between BCD and
binary. The grounds for rejection were roughly that software was math
(a "mental step") and therefore not patentable.
The Benson decision, written by William O. Douglas, makes ludicrous
reading: it argues, though the patent does not claim, that a patent on
this narrow method could be enforced against any program that converts
BCD to binary. Apparently Douglas thought that all black-box programs
for a given purpose were the same, although the patent office did not
so conflate different mechanical or electrical apparatuses that have a
common purpose.
Doug
> I am having a problem clearing a dup inode.
V6 had almost no tools for automagically fixing file system corruption.
To do it, you need to i) understand how the FS works (see:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man5/fs.5
but it's pretty simple); ii) understand what the few tools (dcheck; icheck;
clri) do; iii) dive in.
I recall I used to use 'adb' a lot, to manually patch things when there was a
problem, so you'll want to study up on the 'db' syntax (no 'adb' in vanilla
V6, but for this, they are basically equivalent):
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man1/db.1
You'll have to use the non-raw version of the device (the raw version can only
read/write complete blocks), and then judiciously use 'sync' to flush the
updated blocks out to the 'physical' disk. (There are some corner cases where
data is stored elsewhere, such as when one is patching the inode of an open
file, but I'm going to ignore them.)
> # icheck -s /dev/rp0
'icheck -s' only rebuilds the free list; it doesn't help with any other error
(e.g. a block being assigned to two different files).
> 4244 dup; inode=323
Which is probably what is happening here. 'icheck':
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man8/icheck.8
is not telling you what _else_ is using that block, because it has already
forgotten that by the time it discovers this second claimant (it only keeps a
bit array of 'used' blocks).
> # icheck -b 323 /dev/rp0
Err, you wanted to say 'icheck -b 4244' to find out who else was using block
4244.
I'm not sure if 'fsck' would fix these; I have a V6 one, if anyone wants it.
The 'easy' way to fix this is i) copy the second file to somewhere else, ii)
delete the original, iii) re-build the free list (because the duplicate block
will now be in both the first file, and the free list), iv) examine both files,
and see which one has the smashed contents.
I'll turn this into a 'Fixing damaged V5/V6 file systems' article on the
CHWiki.
Noel
I think discussion of early Linux is in scope for this list, after all that is 30 years ago. Warren, if that is a mis-assumption please slap my wrist.
Following on from the recent discussion of early workstations and windowing systems, I’m wondering about early windowing on Linux. I only discovered Linux in the later nineties (Red Hat 4.x I think), and by that time Linux already seemed to have settled on Xfree86. At that time svgalib was still around but already abandoned.
By 1993 even student class PC hardware already outperformed the workstations of the early/mid eighties, memory was much more abundant and pixels were no longer bits but bytes (making drawing easier). Also, early Linux was (I think) more local machine oriented, not LAN oriented. Maybe a different system than X would have made sense.
In short, I could imagine a frame buffer device and a compositor for top-level windows (a trail that had been pioneered by Oriel half a decade before), a declarative widget set inspired by the contemporary early browsers and the earlier NeWS, etc. Yet nothing like that happened as far as I know. I vaguely recall an OS from the late 90’s that mixed Linux with a partly in-kernel GUI called “Berlin” or something like that, but I cannot find any trace of that today, so maybe I misremember.
So here are a few things that I am interested in and folks on this list might remember:
- were there any window systems popular on early Linux other than X?
- was there any discussion of alternatives to X?
- was there any discussion of what kernel support for graphics was appropriate?
> From: KenUnix
> things are missing:
> Undefined:
> _setexit
> _reset
> _seek
> _alloc
> _end
> Yes, I am trying to compile it on Unix v7.
Well, there's your answer. They are all in the V6 library. Here's
the source for setexit/reset:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s5/reset.s
You do realize that if you got it compiled under V7 and ran it, it would
trash the disk, right? (The V6 and V7 filesystems are different; very
similar, but block numbers are 16 bits on V6, and 32 bits on V7.)
> Is there a makefile?
No. No 'make' in V6. Which is why you find those 'run' shell files:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s4/run
everywhere.
> From: John Cowan
> It was an update/rewrite of the MIT version.
Which one? There were two: "MIT's AI Lab", by CSTACY, Alan Wecsler, and me;
which Rob Austein re-wrote into "Alice's PDP-10". I thought the original was
centered around ITS, but my memory was poor (hey, it has been ~40 years :-),
it seems to sort of be about LISP Machines. Rob's version was about TWENEX
(yech). The original was written in 926, MOON's office; I can't believe he
put up with me hanging out there!
>> Although I like the old story about the person at their oral exam and
>> the Coke bottle in the window.
> Details?
So they're giving someone an oral exam. They can't make up their minds, or
something, and they ask the person to step out for a second. When the person
comes back in, they point to a Coke bottle sitting on a window-sill in the
sunlight, and ask them to examine it. The person notices that it's warm on
one side - the side facing the window. 'Why that side?', they ask. So the
person goes into a long explanation about how the curved glass must have
focused the light, yadda-yadda. WRONG! They turned it around while the
person was out of the room. I think that the person fails their oral. I
have no idea if it's a true story.
Steve Ward told another oral story which I'm pretty sure _is_ true, though.
They ask the candidate to design a state machine (or digital logic, I forget
which) which can tell if a number is divisible by three (I think I have the
details correct, but I'm not absolutely certain). So the candidate describes
one - and then points out that you can feed the number in from either end
(most or least significant end first) - and proves that it will work either
way! The
committee was blown away.
Noel
> I'm not sure if 'fsck' would fix these
Turns out it was called 'fcheck' when we had it.
> I have a V6 one
I'd already put it on my Web site, here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/fcheck.c
if anyone wants it.
> From: "Ron Natalie"
> You had adb?
Yeah, MIT had a lot of stuff that 'fell off the back of a truck' (including
things like the circuit design tools, etc). Well, having an undergrad who was
in the famous Boy Scout troop at Bell helped... :-)
> From: KenUnix <ken.unix.guy(a)gmail.com>
> What I finally did was restore the ".disk" files from a previous backup
You may sit with Arlo Guthrie on the 'Windows user' bench. :-)
> From: "Theodore Ts'o"
> some have argued that if someone doesn't do backups of their research
> data, maybe they don't *deserve* to get their Ph.D. :-)
'Think of it as evolution in action.'
Although I like the old story about the person at their oral exam and
the Coke bottle in the window.
Noel
>> _end
> They are all in the V6 library.
Oops, not _end. In the V6 linker, "_end" is not defined if there are still
undefined symbols at the end of the linking run.
I remember finding this in some obscure place in the V6 documents; it's not
in 'ld(I)'. Anyone remember where it's discussed?
Noel
I've been doing a lot of reading of systems admin books lately including:
Frisch, E. (1991). Essential System Administration (3rd edition is my
fattest book other than Unabridged Shakespeare)
Hunter, B. H., & Hunter, K. B. (1991). UNIX Systems Advanced
Administration and Management Handbook (Opinionated praxis)
Nemeth, E., Snyder, G., & Seebass, S. (1989). UNIX System Administration
Handbook (5th edition is another fatty)
Tons of other more recent drivel.
I have been working on my ancient and not so ancient Unix library for a
while now, and it's kind of funny. It seems like once I read a book, be
it new or old, I hardly need it anymore - most of them wind up back at
half-price books. The exceptions are those that I find myself going back
to over and over and over again and wow are those few and far between.
An example of one of the gems is S. R. Bourne's The UNIX System, another
is Kernighan and Pike's The UNIX Programming Environment, and a couple
of newcomers for me are Volumes 3 and 8 of O'Reilly's The Definitive
Guides to the X Window System. I've written in the margins so many times
with these that there are sections where I can't fit any more notes.
That's the kind of sys admin guide I'd like to hear about. So, my
question for y'all is, what did y'all think about sys admin texts as
they were coming out? Were they well received, were they water to a
dying horse, were they paperweights, what? If you are of the camp, "we
don't need no stinking admin guide", or "we did it all by muscle memory
and didn't use books", don't reply. I'm curious about the experience of
those of y'all who actually used them. Were there any early standouts
and why did they stand out?
Anything from 1970 on is fair game.
Later,
Will
P.S. Can you believe that 2000 is fast becoming 'history' worth
preserving? In 1997, we were rewriting our gas pump and credit card
transaction systems, which were written in C, to deal with upcoming Y2K
bugs. Oh, how the worm has turned :).
So now that I'm done futzing with the 4.1 manual for a while, I've decided to look around a few others to try and get a better feel for the continuity of different features as well as documentation practices between branches. On that note, the more I compare, the more continuity I find between PWB 1.0 and UNIX System III. Between that, various emails here, and some info from Clem, I feel fairly confident calling System III and V as well as the 4.1 in between PWB releases, at least as far as version continuity is concerned, so I may start using that nomenclature more here and there. On the flip side, the name UNIX/TS seems to bear less and less relevance outside of whatever efforts it did originate with in the late 70's. I eventually mean to aggregate all the references I've found together to develop a better picture of what that was, but just know that documentation consistency points to PWB 3, 4, and 5 as the true identity of the USG releases in the 80s, not UNIX/TS as I have referenced previously.
So now for an interesting bit of init history I've managed to piece together. The 4.1 manual still indicates the same init system as System III, based on a file called /etc/inittab but with a slightly different format and semantics from what we ultimately see in System V. That init, seen earliest thus far in PWB 3.0, seems to have been developed around that time. I'm struggling to find the email right now but I think someone on the list mentioned having written this one in either '79 or '80. As for the init in System V, it likely entered the PWB line from CB-UNIX based on the CB-UNIX 2.3 manual which can be found here: https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/cbunix_man5.pdf . I say likely as some recent perusal of the MERT Release 0 manual has now cast some doubt on that. So to start, CB-UNIX 2.3 appears to be somewhat contemporaneous with UNIX 5.0 in that the manual has pages labeled specifically UNIX 5.0. That said, the issue date on the front pages of the manuals is over a year apart, so the 5.0 additions could be just that, added pages after the formal 2.3 issue. Anyway, in common between them is an init system utilizing likewise a file named /etc/inittab, but with a slightly different format and some expanded functionality. The CB-UNIX manpage for this inittab(5) can be found on page 29 of the above document. I had never looked further in that one though, and a couple days ago, I was cataloging all of the pages present in that manual when I came across page 34. This is a page for lines(5) that simply says "No longer used. See inittab(5)." Curious, so perhaps this stream of init once called the file /etc/lines, then renamed it inittab to match PWB? The very next page answers part of that question. Page 35, also lines(5), contains a description of a file very similar to the inittab(5) entry except a line is 5 fields wide, including a "shellcm" field. 
Unfortunately, the remaining pages of this entry are not in the PDF, presumably this was a mistaken entry. I have another such mistaken entry in a moment. Anywho, what is there of the page indicates the id field actually had an impact on the /dev entry for the terminal line in that it would be named /dev/ln<id>, and goes on to say that if a line monitor other than /etc/getty is used this must be accounted for. So interesting little note there about using something besides getty for line monitoring. Otherwise the page reads pretty similar for what is there, indicating a likely continuity between this /etc/lines-based init and the CB variant of the /etc/inittab system. One interesting note is that /etc/lines supports C-style comments, /etc/inittab instead uses sh conventions. This lines file also indicates there is a run-level 7 which may have become the S run-level. The respawn and wait actions are there, and that's where the page cuts off, so no further evidence here of what was in /etc/lines in 1979 in CB-UNIX 2.1. By 1980 it is replaced with /etc/inittab.
So aside from the lines(5) stuff, this has been pretty well understood that System V init came from or at least strongly resembles CB-UNIX init as far as what is available documentation-wise. Well, as I mentioned above, the MERT Release 0 manual may shed some further light on this matter, albeit without fully illuminating it. So first, some MERT context. This document details the origins of MERT Release 0 pretty succinctly: https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Pgs%2037-… . Basically MERT 0 is the first "supported" release by the "Small Systems Planning and Development Department (8234)" at Murray Hill. The manual is based on the USG Program Generic Issue 3 version of the manual which was issued in early 1977. This manual, in turn, is from October 1977. This intro goes on to indicate that this product will eventually be released instead as "UNIX/RT" to accompany the in-progress "UNIX/TS" which is said here to be based on V7 with additions from USG Generic 3 and PWB. The MERT-specific portion of this manual is technically considered the second edition, presumably in that the first was the material maintained and distributed by Lycklama and Bayer. So in any case, the landscape of the time seems to be that V7 is starting to go around to the various streams, with UNIX/TS in the works to present it with USG/PWB stuff and then UNIX/RT being planned as the MERT-counterpart. This isn't quite my focus here, but does provide some UNIX/TS "stuff" that has been explained to varying degrees already. So where this all relates to the init though is this: https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Pgs%2001-…'s%20Manual%20for%20MERT.pdf On page 4 of this document is a list of pages that one must replace in a standard USG Program Generic 3 manual to produce a MERT Release 0 manual. In other words, this essentially lists which pages are changed for MERT. Of interest to this discussion is the V File Formats section on page 5. 
The instructions are to remove a page called "lines" and to add a page called "ttys". If you then go on to read through the manual, in section V, the ttys file suggests an init system akin to the research init system. This is also indicated by replacement of the init page itself with one describing a system closer to that of research than anything else: https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Unix%20Pr… . So based on this, it sounds like USG Generic 3 may have had the same /etc/lines init that we see hints of in CB-UNIX. This also then suggests that CB-UNIX 2.3 may have some basis in USG Generic 3 just like MERT. Based on yet another email I'm struggling to find, another user here had offered up some info regarding a USG Generic 2 manual he has from days past. That manual contained an init similar in flavor to the research init as well, indicating that this init system shows up as part of Generic 3.
So now for that other misprint that also bears some relevance. In the UNIX 4.1 manual I recently finished restoring, there was an errant gettydefs(4) page which was only the very last few lines of the manpage (basically the SEE ALSO) section. This file is not part of the pages one needs to discard to create a MERT manual and is not otherwise in the manual, so does not appear to be a part of Generic 3. However, this file does show up in CB-UNIX 2.3, and from there presumably gets sucked into System V, but the misprint suggests such a manpage existed somewhere that would've accidentally wound up in a UNIX 4.1 typesetting run. In any case, it is very well possible this init system may have started popping up in the PWB line before 5.0, but I can't confirm this.
In any case, to consolidate this information, here's a bit of a timeline for the init as I see it:
1975 May - V6 issued. Begins getting adopted as the standard base for various streams
1976 January - USG Generic 2 issued. From what has been discussed, this still has a research-style init based on /etc/rc and /etc/ttys
1977 March - USG Generic 3 issued. This allegedly features the addition of the /etc/lines-based init that eventually becomes System V init
1977 May - PWB 1.0 issued. Forks the documentation style quite a bit, starts trends that continue to System III and beyond, still research-style init
1977 October - MERT 0 issued. This uses Generic 3 as a base but reverts to a research-style init system
1979 January - V7 issued. Still using /etc/rc and /etc/ttys
1979 November - CB-UNIX 2.1 issued. This appears to be using a Generic 3-style /etc/lines init
1980 June - PWB 3.0 issued. This has the first known appearance of /etc/inittab but with a different implementation from USG Generic 3
1981 May - CB-UNIX 2.3 issued. By this point, /etc/lines has morphed into /etc/inittab and /etc/gettydefs has been added
1981 June - PWB 4.1 issued. The manual still features the PWB 3.0-style init but an errant page suggests at least possibly /etc/gettydefs support
1982 June - PWB 5.0 issued. The transformation is complete, PWB now uses Generic 3-descended init with CB-UNIX gettydefs
This all makes me wonder what PWB 2.0 and UNIX/TS (if it ever coalesced) would've used. Precedent would say PWB 2.0 would use research-style init and UNIX/TS would've likely used Generic 3 if the aim was continuity with the USG Generics, but who knows. Anywho, hope that info's helpful to anyone else researching init systems. Of course if you have info that contradicts or enhances any of this I'd love to hear it!
- Matt G.
With all your help I thought I would share a couple
of programs I had a hand in.
more.c is like 'more' for Linux. more abc.txt or cat abc.txt | more or ls
-l | more
pg.c is a 'pager' program. pg abc.txt def.txt
cls.c is a clear-screen/home-cursor program using VT100 codes. It works on
the console or telnet or putty.
All can be compiled and placed in /usr/bin
cc -o /usr/bin/cls cls.c
cc -o /usr/bin/pg pg.c
cc -o /usr/bin/more more.c
More to come. Enjoy.
Ken
--
WWL 📚
> From: Michael Huff
> All of the simh vax tutorials on Gunkies still give instructions to turn
> all of the tar files (plus things like miniroot) into a single tap file
> for installation.
If those are out-of-date, you/someone should get a CHWiki account and
update them! :-)
Documentation skew from code has always been a problem...
Noel
Morning,
I am using enblock to create tap files from tar files.
Was a program ever written to convert tap files to tar files, or is
there a Linux program that can read tap files?
I also see that writing to a tap file from Unix increases its size
when writing multiple files; however, when writing one file to the tap
file ("tar cv0 ...") the tap file still remains at the larger size from the
previous larger writes. Is this what is expected?
Thanks,
Ken
--
WWL 📚
On Monday, February 27, 2023, Dan Cross <crossd(a)gmail.com> wrote:
> On Mon, Feb 27, 2023 at 12:22 PM Paul Ruizendaal via TUHS <tuhs(a)tuhs.org>
> wrote:
> > Thanks all for the insights. Let me attempt a summary.
> >
> <SNIP>
>
> Oh, and lots of games; I had a nice Solitaire version that I can no
> longer recall the name of.
>
Coming from the 8-bit microcomputer world (Atari 8-bit, C64), and then
upgrading to 16-bit (Amiga and PC MS-DOS) I experienced a myriad of
unforgettable games on those platforms. They were mostly commercial but
they were very much different from modern games. It was the era of
innovation and pouring all your soul into the games you produce. I still
think that some of them have not been surpassed in quality and playability.
I still play them on period correct hardware as they are still extremely
fun and challenging.
This is my top 10 list (sorted by year):
* Prince of Persia
* The Secret of Monkey Island
* Civilization
* Dune II
* Master of Orion
* Reunion
* Warcraft
* Ascendancy
* Quake
* Half-Life
As I started to play with Linux in the mid 90s I remember a port of Doom
and then Quake, but not that many other games. Can you elaborate more on
what Unix aficionados played in the late 80s/early 90s?
--Andy
And, you know, let's say you have all the time and patience in the world
and you download the source and read it carefully and determine it's not
malicious...
I believe there might have been a lecture/paper about this once.
https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrust…
(I can just hear them damn kids standing on my lawn chanting "You can't
spell 'trust' without 'rust'!")
I keep trying to give VSCode a go. It seems really nifty. And somehow I
keep bouncing off and landing in Emacs, every time. Maybe when I finally
get around to writing, rather than cargo-culting TypeScript, or Unity/C#,
it'll be a better fit. But for my current life, which is mostly Python...I
appear to be sticking with Emacs.
Adam
Got a little present for folks today. Been working on this for a little while now, and while there'll probably be some edits here and there, I believe it to be quite accurate.
After the link is a manpage restoration of the UNIX 4.1 User's Manual (3B20S) that I bought a little while ago: https://gitlab.com/segaloco/pwb4u_man
The permuted index is the only significant piece that isn't done, but that shouldn't impact the informational value. Note this is just u_man, I haven't found a complementary a_man copy yet. I hope one will turn up one of these days, but I plan on at least analyzing the gap between System III and System V with regard to those pages as a future project.
My process involved diff'ing the available III and V manpage sources and reconciling differences between the two and 4.1 with some copy-paste here and some restoration there. Where differences couldn't be resolved, I simply removed content to match the physical pages. One minute detail that is also not filled in is the page count in M.folio. So I didn't count the pages. Maybe someday. In any case, I appreciate the opportunity this has given me to learn the manpage macros pretty well.
Anywho, in the second pass of verifying the changes I took some notes on noteworthy mentions. This list is not an exhaustive analysis but represents some of the areas where significant developments shine through in the text:
System III->4.1 (No claims are made as to what occurred at 4.0):
- The documentation is cleaned up quite a bit in general, in what seems like a push towards commercial-ready manuals. Many sections are edited to be more clear and descriptive. There is also a notable shift towards gender neutral language. The editors and acknowledgements info are removed, casting an anonymous shadow over the manual maintainers and their muses alike.
- The tty manpage is renamed termio, reflecting the shifting terminal interface landscape at this time.
- This release adds IPC with an interface familiar from System V. According to various accounts the IPC was under heavy development at this time, but while the underlying components may have been shifting and changing, the documentation changes suggest a relatively stable programmer API by this point. The only IPC-related piece System V adds is ipcrm(1).
- The LP print service is added here. The old lpr system is still there in the background and remains in System V, though relegated to DEC-only status.
- SGS and COFF development components show up with 4.1 3B-20. No telling what else they officially supported in the 4.x timeframe. The System V pages as described below indicate a number of supported platforms.
- The shell gets the $CDPATH and ulimit features
- Many system features show a trend towards portability (except the PDP-11, the system appears to be moving away from it)
- The Virtual Protocol Machine (VPM) seems to go from targeting KMC11 to UN53 and V.35. Haven't researched what these are yet, but VPM is on the move.
- As of 4.1, 3B-20 does *not* support: Fortran, BASIC, Honeywell/GCOS 6000 connectivity, lpr printing, SNOBOL, standalone C
- Added pages include cflow(1), cprs(1), cxref(1), dis(1), dump(1) (was a tape dump (1m), now a SGS tool), enable(1), hpio(1), ipcs(1), list(1), lp(1), lpstat(1), newform(1), sadp(1), trouble(1), x25pvc(1), msgctl(2), msgget(2), msgop(2), plock(2), semctl(2), semget(2), semop(2), shmctl(2), shmget(2), shmop(2), sys3b(2), drand48(3c), getcwd(3c), hsearch(3c), ld*(3x) (COFF library), setbuf(3s), stdipc(3c), strtol(3c), termio(4) (renamed from tty(4)), ldfcn(5), mosd(5), mptx(5), jotto(6)
- Removed pages include cref(1), dump(1m), fget.odemon(1c), odpd(1c), orjestat(1c), reform(1), tp(1), typo(1), xref(1), tp(4), tty(4) (renamed to termio(4))
- Some pages were skipped and show back up in System V with minimal changes, meaning they were probably in 4.x: adb(1), arcv(1), bs(1), dpd(1c), dpr(1c), efl(1), f77(1), factor(1), fget(1c), fget.demon(1c), fsend(1c), gcat(1c), gcosmail(1c), kas(b)/kun(b)(1), lpd(1c), lpr(1), ratfor(1), scc(1), sno(1), vpr(1)
4.1->System V (Likewise, there was at least a 4.2):
- Documentation is cleaned up and edited some more. Almost everywhere that the name "UNIX" occurs, it has been replaced with some variation on "The UNIX System" with a capital S. This is lower case in my 5.0 manual which I have not combed for differences with System V yet. Still, the "system" following is standard by 5.0 it seems. This is right around the time of dashing Bell associations too, so variations of this manual exist with and without the Bell logo on the front, and with varying degrees of modification to explain the legal landscape involved.
- Section 3 in particular sees a pretty significant rewrite effort. This coincides with MR 1055 here: https://archive.org/details/unix-system-release-description-system-v/I%20-%…
- A new portable archive format is introduced. By the sounds of it, this introduces a new header type into the ar(4) format.
- A new 1024-block filesystem is introduced, along with necessary support.
- A new synchronous terminal interface is added.
- VAX is supported by SGS/COFF now. Additional platforms as suggested by formatting marks in the pages include: Basic-16, Bellmac 32, and 8086, in addition to the already supported 3B-20. Unknown whether these platforms found any support with USG releases.
- ex(1) is added (along with vi(1) and edit(1)). There is also the se(1) editor which I don't know much about.
- CB-init is added, shaking up the /etc/inittab format and many login-related features. MAUS also steps in from CB for shared memory on PDP-11.
- Added pages include asa(1), convert(1), cpp(1), edit(1), ex(1), fsplit(1), ipcrm(1), machid(1), makekey(1), net(1c), nscstat(1c), nsctorje(1c), nusend(1c), scat(1), se(1), stlogin(1), ststat(1), vi(1), maus(2), clock(3c), dial(3c), erf(3m), getut(3c), matherr(3m), memory(3c), sputl(3x), ttyslot(3c), x25*(3c), filehdr(4), gettydefs(4), issue(4), linenum(4), reloc(4), scnhdr(4), syms(4)
- Removed pages include vpmc(1c), vpmsave(1c), vpmset(1c), x25pvc(1c), fptrap(3x)
That's all I've got. As time goes on I'll start documenting worthwhile tidbits in the Wiki. If there's any question of the contents of any of the pages, I'll happily consult the original and make corrections, and can scan any page to verify the contents if needed. I'll eventually be scanning the whole thing, just not right now. Feel free to open a pull request if you think something needs to change.
- Matt G.
On 2/25/23, Brian Walden <tuhs(a)cuzuco.com> wrote:
> It was originally 205. See A.OUT(V) (the first page) at
> https://www.bell-labs.com/usr/dmr/www/man51.pdf where it was documented as to
> why.
>
>
> The header always contains 6 words:
> 1 "br .+14" instruction (205(8))
> 2 The size of the program text
> 3 The size of the symbol table
> 4 The size of the relocation bits area
> 5 The size of a data area
> 6 A zero word (unused at present)
>
> I always found this so elegant in its simplicity. Just load and start
> execution at the start (simplifies exec(2) in the kernel). I always wondered
> if this had been done anywhere else before, or was invented first in Unix.
IBM's Basic Program Support (BPS) for System/360 was a set of
stand-alone utilities for developing and running stand-alone programs.
BPS/360 wasn't really an operating system because there wasn't any
resident kernel. You just IPLed (Initial Program Load; IBM-speak for
"boot") your application directly. So the executable format for BPS
had a bootstrap loader as the "program header". Not quite the same
thing as a.out's 205(8) magic number, but similar in concept.
I don't know of any other OS ABI that uses this trick to transfer
control to application programs.
Microsoft uses something similar in PECOFF. A PECOFF executable for
x86 or X86-64 starts with a bit of code in MS-DOS MZ executable format
that prints the message "This program cannot be run in DOS mode".
-Paul W.
It was originally 205. See A.OUT(V) (the first page) at https://www.bell-labs.com/usr/dmr/www/man51.pdf where it was documented as to why.
The header always contains 6 words:
1 "br .+14" instruction (205(8))
2 The size of the program text
3 The size of the symbol table
4 The size of the relocation bits area
5 The size of a data area
6 A zero word (unused at present)
I always found this so elegant in its simplicity. Just load and start
execution at the start (simplifies exec(2) in the kernel). I always wondered
if this had been done anywhere else before, or was invented first in Unix.
There was also a recent discussion of ar(1). That PDF also explains its magic
number a few pages later. It was simply chosen because it seemed unique.
A file produced by ar has a "magic number" at the start,
followed by the constituent files, each preceded by a file
header. The magic number is -147(10), or 177555(8) (it was
chosen to be unlikely to occur anywhere else).
-Brian
On Sat, 25 Feb 2023, Dave Horsfall wrote:
> On Thu, 23 Feb 2023, Paul Winalski wrote:
>
> > a.out was, as object file formats go, a throwback to the stone age from
> > the get-go. Even the most primitive of IBM's link editors for
> > System/360 supported arbitrary naming of object file sections and the
> > ability for the programmer to arrange them in whatever order they
> > wished. a.out's restriction to three sections (.text, .data, .bss) did
> > manage to get the job done, and even (with ZMAGIC) could support
> > demand-paged virtual memory, but only just.
>
> That may be so, but those guys didn't exactly have the resources of
> IBM behind them...
>
> And I wonder how many people here know the significance of the "407" magic
> number?
>
> -- Dave
Good day all, figured I'd start a thread on this matter as I'm starting to piece enough together to articulate the questions arising in my research.
So based on my analysis of the 3B20S UNIX 4.1 manual I've been working through, all evidence points to the formalized SGS package and COFF originating tightly coupled to the 3B-20 line, then growing legs to support VAX, but never quite absorbing PDP-11 in entirety. That said, there are bits and pieces of the manual pages for the object format libraries that suggest there was some provision for PDP-11 in the development of COFF as well.
Where this has landed though is a growing curiosity regarding:
- Whether SGS and COFF were tightly coupled to one another from the outset, with SGS being supported by the general library routines being developed for the COFF format
- Whether COFF was envisioned as a one-size-fits-all object format from its inception or started as an experiment in 3B-20 development that wound up being general enough for other platforms
- If, prior to this format, there were any other efforts to produce a unifying binary format and set of development tools, or if COFF was a happy accident from what were a myriad of different architectural toolset streams
One of the curious things is how VAX for a brief moment did have its own set of tools and a.out particulars before SGS/COFF. For instance, many of the VAX-targeted utilities in 3.0/System III bear little in common option/manual-wise with the general common SGS utilities in System V. The "not on PDP-11" pages for various SGS components in System V much more closely resemble the 3B-20 utilities in 4.1 than any of the non PDP-11/VAX-only bits in System III.
Some examples:
- The VAX assembler in System III contains a -dN option indicating the number of bytes to set aside for forward/external references for the linker to fill in.
- The VAX assembler in System V contains among others the -n and -m options from 4.1 which indicate to disable address optimization and use m4 respectively
- The System V assembler goes on to also include -R (remove input file after completion) -r (VAX only, add .data contents to .text instead) and options -b, -w, and -l to replace the -d1, -d2, and -d4 options indicated in the previous VAX assembler
- System V further adds a -V to all the SGS software indicating the version of the software. This is new circa 5.0, absent from the 4.1 manual like the R, r, b, w, and l options
- The 4.1 manual's singular ar(1) entry still agrees with the System III version. No arcv(1) is listed, implying the old ar format never made it to 3B-20
- The System V manual has both this ar(1) version as well as the new COFF-supporting version. Not sure if this implies the VAX ar format was expanded to support the COFF stuff for a little while until they decided on a new one or what.
- The System III ld (which is implied to support PDP and VAX) survives in System V, but is cut down to supporting PDP-11 only
- The COFF-ish ld shows up in 4.1, is then extended to VAX presumably in the same breath as the other COFF-supporting bits by Sys V, leading to two copies like many others, PDP-11-specific stuff and then COFF-specific stuff
The picture that starts to form in the context of all of this is, for a little while in the late 70s/early 80s, the software development environments for PDP-11, VAX-11, and 3B-20 were interplaying with each other in at times inconsistent ways. Taking a peek at the 32V manuals, the VAX tools in System III appear to originate with that project, which makes sense. If I'm understanding the timeline, COFF starts to emerge from the 3B-20 project and USG probably decides that's the way to go, a unified format, but with PDP-11 pretty much out the door support-wise already, there was little reason to apply that to PDP-11 as well; so the PDP-11 tools get their swan song in System V, the original VAX-11 tools from 32V are likely killed off in 4.x, and the stuff that started with the 3B-20 group goes on to dominate the object file format and development software stuff until ELF comes along some time later.
I guess other questions this raises are:
- Were the original VAX tools built with any attention to compatibility with the PDP-11 bits Ken and Dennis wrote many years prior (based on some option discrepancies, possibly not?)
- Do the VAX utilities derive from the Interdata 8/32 work or if there was actually another stream of tools as part of that project?
- Was there any interplay between the existing tool streams (original PDP-11, 32V's VAX utilities, possibly Interdata 8/32) and the eventual COFF/SGS stuff, or was the latter pretty well siloed in 3B-20 land until deployment with 4.1?
- Matt G.
I found this 1984 video about MIT's Project Athena. A few VAXstation
100 are visible running some demos in a windowing system. Nothing
exciting unfortunately, just a few seconds and no details visible.
Since X was started in 1984, it could be a very early version we see
here. I asked Jim Gettys, and he suggested it could also be W since it
was apparently used before X in Project Athena.
https://youtu.be/tG7i7HCD9g0?t=28
I checked the xdemo directory in X10R3 (earliest available so far) for a
match against the circle demo, but all samples are written in CLU...
> Date: Thu, 23 Feb 2023 18:38:25 +0000
> Subject: [TUHS] Re: Origins of the SGS (System Generation Software)
> and COFF (Common Object File Format)
>
> For the sake of timelines:
>
> June 1980 - Publication date on the front page of the 3.0 manual in which the utilities are still very much research for PDP-11 and 32V-ish for VAX where distinctions matter.
>
> June 1981 - Publication date on the front page of the 4.1 manual in which the man-pages very much refer to all of this as the "3B-20 object format"
>
> June 1982 - Publication date on the front page of the 5.0 manual by which point these same pages had been edited and extended to describe the "common object file format"
>
> Additions at the 1981 release include dump(1), list(1), and the ld-prefixed library routines for managing these object files. These likewise persist in 5.0, SysV, and beyond as COFF-related tools.
>
> So this puts the backstop of what would become COFF at at least '81.
>
> - Matt G.
The surviving source code for SysV R2 supports this timeline:
- The header files (start from https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/head/a.…) have dates of late ’82, early ’83.
- The source for exec() has a comment that refers to the 4xx magic formats as “pre 5.0 stuff”.
- The COFF format headers are #ifdef’ed for the 3B series.
Interestingly, the lowest magic numbers in the 5xx series are not for the 3B, but for the “Basic-16” and for the “x86”. That led me to this paper:
https://www.bell-labs.com/usr/dmr/www/otherports/newp.pdf
It seems that the roots of COFF go back to the initial portability effort for V7 and in particular the 8086 port (which was done in 1978 according to the paper).
> From: Clem Cole
> MIT had a modified a.out format for the NU machine ports - that might
> have been called b.out.
Yes. Here's the man page output:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/help/b.out.lpt
(I don't have the source for that, alas.) It's basically just a.out with
32-bit fields instead of 16-bit. I have a .h file for the format too, if
anyone has any interest in it. It's all part of the MIT 68K workbench that
used PCC (the source for all of which I do have).
Noel
Transferring to COFF, with TUHS Bcc'd so folks know where this thread went.
Between the two if you're not doing UNIX-specific things but just trying to resurrect/restore these, COFF will probably be the better place for further discussion. @OP if you're not a member of COFF already, you should be able to reach out to Warren Toomey regarding subscription.
If you're feeling particularly adventurous, NetBSD still supports VAX in some manner: http://wiki.netbsd.org/ports/vax/
YMMV, but I've had some success with NetBSD on some pretty oddball stuff. As the old saying goes, "Of course it runs NetBSD". You might be able to find some old VMS stuff for them as well, but I wouldn't know where to point you other than bitsavers. There's some other archival site out there with a bunch of old DEC stuff but I can never seem to find it when I search for it, only by accident. Best of luck!
- Matt G.
------- Original Message -------
On Wednesday, February 22nd, 2023 at 10:08 AM, jnc(a)mercury.lcs.mit.edu <jnc(a)mercury.lcs.mit.edu> wrote:
> > From: Maciej Jan Broniarz
>
>
> > Our local Hackroom acquired some VAX Station machines.
>
>
> Exactly what sort of VAXstations? There are several different kinds; one:
>
> http://gunkies.org/wiki/VAXstation_100
>
> doesn't even include a VAX; it's just a branding deal from DEC Marketing.
> Start with finding out exactly which kind(s) of VAXstation you have.
>
> Noel
Has anyone tried talking to anyone at Oracle about possibly getting
the SunOS code released under an open source license? There can't be
any commercial value left in it.
- Dan C.
> From: Maciej Jan Broniarz
> Our local Hackroom acquired some VAX Station machines.
Exactly what sort of VAXstations? There are several different kinds; one:
http://gunkies.org/wiki/VAXstation_100
doesn't even include a VAX; it's just a branding deal from DEC Marketing.
Start with finding out exactly which kind(s) of VAXstation you have.
Noel
Hello Everyone,
Our local Hackroom acquired some VAX Station machines. The problem is, we
have absolutely no docs or knowledge how to run the machine or how to test
if it is working properly. Any help would be appreciated
All best,
--
Maciej Jan Broniarz
Was an Ethernet TCP/IP upgrade for Unix V7 ever
done when running under SIMH?
Was an upgrade ever done for Unix V7, when running
under SIMH, to read and set the date/time? I have a
workaround, but it doesn't work: when running the
sim it inserts boot code, and once the 'run 2002'
is issued, further startup commands in the 'conf' file are
ignored.
System was built from unix_v7.tm dated 20 June 2006.
run.conf contents:
echo
echo Unix V7 startup 2-19-2023 KenUnix
echo
set cpu 11/45
set cpu 256k
set rp0 rp04
attach rp0 system.hp
d cpu 2000 042102
d cpu 2002 012706
d cpu 2004 002000
d cpu 2006 012700
d cpu 2010 000000
d cpu 2012 012701
d cpu 2014 176700
d cpu 2016 012761
d cpu 2020 000040
d cpu 2022 000010
d cpu 2024 010061
d cpu 2026 000010
d cpu 2030 012711
d cpu 2032 000021
d cpu 2034 012761
d cpu 2036 010000
d cpu 2040 000032
d cpu 2042 012761
d cpu 2044 177000
d cpu 2046 000002
d cpu 2050 005061
d cpu 2052 000004
d cpu 2054 005061
d cpu 2056 000006
d cpu 2060 005061
d cpu 2062 000034
d cpu 2064 012711
d cpu 2066 000071
d cpu 2070 105711
d cpu 2072 100376
d cpu 2074 005002
d cpu 2076 005003
d cpu 2100 012704
d cpu 2102 002020
d cpu 2104 005005
d cpu 2106 105011
d cpu 2110 005007
echo
echo To boot type boot enter then hp(0,0)unix enter after 'mem =' press ctrl-d
echo To cancel press ctrl-e then at sim> type exit enter
echo At login: type root enter at password type root enter
echo To shutdown sync;sync wait 5 then press ctrl-e then at sim> type exit enter
echo
echo Copy / paste date at #
echo DATE 2302191116
run 2002
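On the date/time part: commands after 'run' in a .conf file are indeed never reached, but newer SIMH builds offer an EXPECT facility that reacts to console output instead of relying on further script lines. This is a sketch only - I haven't tried it against this V7 image, the exact EXPECT/SEND syntax differs between SIMH versions, and the prompts and date string are just taken from the echo instructions above:

```
; Register console-output triggers before starting the simulation;
; when a pattern appears, SEND types the reply and 'continue' resumes.
expect "login:" send "root\r"; continue
expect "Password:" send "root\r"; continue
expect "# " send "date 2302191116\r"; continue
run 2002
```

The same mechanism could in principle also drive the boot and hp(0,0)unix prompts if a fully unattended startup is wanted.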
Thanks,
Ken/--
WWL 📚
Hello there,
I recently watched an old Unix promotion video by AT&T on YouTube (AT&T
Archives: The UNIX Operating System: https://youtu.be/tc4ROCJYbm0) and
they mention a design tool for integrated circuits (apparently named
L-Gen or lgen; timestamped link: https://youtu.be/tc4ROCJYbm0?t=1284)
Part of this software is a language implemented with YACC that appears
to describe the behavior of digital logic, like modern hardware
description languages, i.e. Verilog and VHDL.
Does anyone have information about this, in particular:
- Documentation
- Which projects were realized with this?
- Source code, if possible
I asked this question on retrocomputing.stackexchange.com (see
https://retrocomputing.stackexchange.com/q/26301/26615) but so far there
is no satisfying answer. A "Circuit Design Language" (CDL) is mentioned
and there is some good information about it but it has another syntax
(as shown in the video vs. the documentation about CDL) and apparently
another purpose (description of board wiring vs. logic behavior).
Best regards,
Christian
Here is a simplified 'more' command for Unix V7:
/*********************************************************************
* UNIX pager (v7 compatible) Chipmaster and KenUnix
*
* cc -o more more.c
*
* Usage examples:
* man wall | more
* more xyz
* more abc def xyz
*
* Started February 15th, 2023 YeOlPiShack.net
*
* This is the ultimately dumbest version of more I have experienced.
* Its main purpose is to illustrate the use of /dev/tty to interact
* with the user while in a filter role (stdin -> stdout). This also
* leaves stderr clear for actual errors.
*
*
* NOTES on Antiquity:
*
* - The early C syntax didn't allow for combining type information
* in the parenthesized argument list; only the names were listed.
* Then a "variable" list followed the () and preceded the { that
* declared the types for the argument list.
*
* - There is no "void", specifically there is no distinction
* between a function that returns an int or nothing at all.
*
* - Many of the modern day header files aren't there.
*
* - Apparently "/dev/tty" couldn't be opened for both reading and
* writing on the same FD... at least not in our VM.
*
* - Apparently \a wasn't defined yet either. So I use the raw code
* \007.
*
* - Modern compilers gripe if you do an assignment and comparison in
* the same statement without enclosing the assignment in (). The
* original compilers did not. So if it looks like there are too
* many ()s it's to appease the modern compiler gods.
*
* - I'm not sure where they hid errno if there was one. I'd think
* there had to be. Maybe Kernighan or Pike knows...
*
*********************************************************************/
#include <stdio.h>
/*** Let's make some assumptions about our terminal columns and lines. ***/
#define T_COLS 80
#define T_LINES 24
/*** Let's set up our global working environment ***/
FILE *cin; /* TTY (in) */
FILE *cout; /* | (out) */
int ct = 0;
/*** message to stderr and exit with failure code ***/
err(msg)
char *msg;
{
fputs(msg, stderr);
exit(1);
}
/*** A poor man's CLear Screen ***
*
* Yup! This is how they used to do it, so says THE Kernighan & Pike!
* termcap?!?! What's that?
*/
cls()
{
int x;
for(x=0; x<T_LINES; ++x) fputc('\n', cout);
ct = 0; /* reset global line count */
}
/*** The PAUSE prompt & wait ***/
pause()
{
char in[T_COLS+1]; /* TTY input buffer */
fflush(stdout); /*JIC*/
fputs("--- [ENTER] to continue --- Ctrl-d exits ", cout);
fflush(cout);
if(!fgets(in, sizeof(in), cin)) { /* sizeof(in) == T_COLS+1 */
/* ^D / EOF */
fputc('\n', cout); /* cleaner terminal */
exit(0);
}
}
/*** Read and page a "file" ***/
int pg(f)
FILE *f;
{
char buf[T_COLS+1]; /* input line: usual term width + \0 */
/*** read and page stdin ***/
while(fgets(buf, sizeof(buf), f)) {
/* page break at T_LINES */
if(++ct==T_LINES) {
pause();
ct = 1;
}
fputs(buf, stdout);
}
return 0;
}
/*** Let's do some paging!! ***/
int main(argc, argv)
int argc;
char *argv[];
{
FILE *in;
int x, er;
/*** Grab a direct line to the TTY ***/
if(!(cin=fopen("/dev/tty", "r")) || !(cout=fopen("/dev/tty", "w")))
err("\007Couldn't get controlling TTY\n");
/*** with CLI args ***/
if(argc>1) {
er = 0;
for(x=1; x<argc; ++x) {
if(argc>2) {
if(!er) cls();
er = 0;
/* remember all user interaction is on /dev/tty (cin/cout) */
fprintf(cout, ">>> %s <<<\n", argv[x]);
pause();
}
/* - is tradition for stdin */
if(strcmp("-", argv[x])==0) {
pg(stdin);
/* it must be a file! */
} else if((in=fopen(argv[x], "r"))) {
pg(in);
fclose(in);
} else {
/* errors go on stderr... JIC someone wants to log */
fprintf(stderr, "Could not open '%s'!\n", argv[x]);
fflush(stderr);
er = 1; /* this prevents cls() above. */
}
}
/*** no args - read and page stdin ***/
} else {
pg(stdin);
}
return 0;
}
End...
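A side note on the T_COLS/T_LINES assumptions above: on a modern system the same filter could ask the kernel for the real window size instead of assuming 80x24. The following is a sketch only, not part of the V7 program (V7 had no TIOCGWINSZ; get_term_size is a hypothetical helper):

```c
#include <sys/ioctl.h>
#include <unistd.h>

#define T_COLS  80   /* fallbacks: the same assumptions the V7 version hard-codes */
#define T_LINES 24

/* Hypothetical helper: ask the tty for its window size, falling back
 * to 80x24 when the ioctl fails (e.g. when output is a pipe, which is
 * exactly the filter case more handles). */
void get_term_size(int *cols, int *lines)
{
    struct winsize ws;

    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 &&
        ws.ws_col > 0 && ws.ws_row > 0) {
        *cols = ws.ws_col;
        *lines = ws.ws_row;
    } else {
        *cols = T_COLS;
        *lines = T_LINES;
    }
}
```

A modern pager would call this once at startup (and again on SIGWINCH) rather than trusting compile-time #defines.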
--
WWL 📚
In 'more.c' there is a typo
Line 154
replace
fprnntf(stderr, "Could not open '%s'!\n", argv[x]);
with
fprintf(stderr, "Could not open '%s'!\n", argv[x]);
--
WWL 📚
All,
If you think unix ends without x, just move along, nothing to see here.
Otherwise, I thought I would share the subject of my latest post and a
link with those of you interested in such things.
Recently, I've been tooling around trying to wrap my head around x
windows and wanted to give programming it a shot at the xlib level... on
my mac, if possible. So, I bought a copy of Adrian Nye's Xlib
Programming Manual for Version 11 R4/R5, aka Volume One of The
Definitive Guides to the X Window System, published, get this... 30+
years ago, in 1992 :) and started reading like a madman. As usual, this
was an example of great technical writing from the prior millennium,
something rarely found today.
Anyway, I hunted up the source code examples as published, unpacked
them, did a few environmental things to my mac, and built my first xlib
application from that source. A few tweaks to my XQuartz configuration
and I was running the application in twm on my mac, with a root window.
To read about it and see it in all of its glory, check it out here:
https://decuser.github.io/operating-systems/mojave/x-windows/2023/01/24/x-w…
The same sort of setup works with Linux, FreeBSD, or my latest
environment DragonFly BSD. It's not the environment that I find
interesting, but rather the X Window System itself, but this is my way
of entering into that world. If you are interested in running X Windows,
not as an integrated system on your mac (where x apps run in aqua
windows), but with a 'regular' window manager, and you haven't figured
out how, this is one way.
On the provocateur front - is X part of unix? I mean this in oh so many
nuanced ways, so read into it as you will. I would contend, torpedoes be
damned, that it is :).
Will
I found mention of the Cadmus CadMac Toolbox/PCSMac when reading more
about the RT PC. It is interesting that this would later be sold to
Apple - something not often mentioned in histories of PCS,
Cadmus Computer Systems, or Apple.
"Cadmus Computer Systems had created the CadMac Toolbox, a Macintosh
Toolbox implemented in C on a UNIX base), and under a special agreement
with Cadmus, we received a license to port the CadMac code from their
hardware base to the RT"
Norman Meyrowitz - Intermedia: The Architecture and Construction of an
Object-Oriented Hypermedia System and Applications Framework
https://dl.acm.org/doi/10.1145/28697.28716
A video presentation of Intermedia and CadMac running on
IBM ACIS' port of 4.2BSD for the RT PC, with kernel changes
https://vimeo.com/20578352
they later moved to A/UX on the Macintosh II
InfoWorld 13 Jun 1988
https://books.google.com/books?id=-T4EAAAAMBAJ&pg=PT11
'Meyrowitz: It was before Apple Unix. There was a workstation company
called Cadmus that was producing workstations largely for CAD and
Tom Stambaugh took the Macintosh APIs and reprogrammed them to run
on Unix workstations. We went up to Cadmus and convinced them to
license that to us for the university. We were using Mac APIs on
Unix systems because we wanted systems that supported the network
file system, that had virtual memory, that had bigger disks. So we
did that and it was a beautiful system. Eventually, convinced Apple
to create a A/UX [Apple Unix] which was a Unix system and we ran
on that. Cadmus went out of business, and we had the only license
to CadMac, and Apple wasn't that happy to have it around. We had
actually convinced them that they should release it to universities,
because having that API around, especially if they could sell it
to other companies to put it on their workstations, could actually
be lucrative. We almost convinced them to do that. We were actually
having a celebratory dinner at McArthur Park in Palo Alto to celebrate
this and then Jean-Louis Gassée walked in and said, "The deal is
off."'
Oral History of Norman Meyrowitz
http://archive.computerhistory.org/resources/access/text/2015/05/102658326-…
"we chose the Apple Macintosh front-end, or rather our local version
thereof (CadMac, now PCSMac) which was available on the
Cadmus 9000 computers."
https://quod.lib.umich.edu/cgi/p/pod/dod-idx/development-of-an-intelligent-…
"Apple Workstation Efforts Get Boost from Cadmus
Although Apple Computer Inc, won't say so, industry analysts believe the
company's acquisition of software technology from Unix workstation maker
Cadmus Computer Systems may speed up the introduction of high-end Unix
workstations from Apple.
...
At Apple's analysts' meeting April 23, chairman John Sculley said Apple
had acquired the rights to the technology from Cadmus of Lowell,
Massachusetts, to use in long-term advanced product development
efforts for the Macintosh. He said the Cadmus software would help the
company develop a workstation for the engineering community and federal
government markets, which use Unix-based systems, according to Apple's
transcripts.
Although Sculley has not detailed what Apple will do with the Cadmus
technology, Cadmus introduced last summer a graphics workstation
equipped with Cadmac, a graphics environment that is compatible with
Apple's Macintosh graphics routines."
InfoWorld 19 May 1986
https://books.google.com/books?id=SS8EAAAAMBAJ&pg=PA3
Sandy wrote the original CDL tool set. It had a graphical front end that provided for chips, pins,
wires, etc., and generated a connectivity list which was then consumed by wire-wrap and other back-end
tools. We had Tek 4014 terminals at the time (late 70s is what I recall), which made all this
reasonable.
I took over the code base at some point and among other things added macros so that repeated wire-
to-chip connection patterns and names could be generated without the labor-intensive one wire at a
time, e.g. wire[0-7] connects to pin[0-7]. The macro would be expanded so that wire0 connected to
pin0, etc. This later system was called UCDS - Unix Circuit Design System.
UCDS was used by Joe Condon for the chess machine that he and Ken built. Ken may remember more
about this. It was also one of the reasons that BTL got a $1B Navy contract.
Steve
Good day everyone, I'm emailing to start a thread on part of my larger UNIX/TS 4.x project that is coming to a conclusion. Lots of info here, so pardon the lack of brevity.
Over the course of the past month or so I've been diffing all of the manual pages between System III and System V to produce a content-accurate set of typesetter sources for the UNIX Release 4.1 (3B20S) manual I found a while back. I've completed my shallow pass of everything (all pages accounted for, generally complete, except the permuted index) and am now about halfway through my second pass (detailed, three-way diffs, documenting changes) and thought I'd share a few findings to kick off what will likely be more exposition of the 3.x->4.x->5.x->SysV time-period vis-a-vis available documentation. Most analysis here will center specifically around the contents of the 4.1 3B20S manual and later Documents For UNIX 4.0 as those are the only documents I've found for 4.x. I mention that as there are whole subsystems excluded from this manual that show back up in my 5.0 manual, so I suspect they were pieces that weren't ready for 3B-20 at the time but were in other installations. Fortran, SNOBOL, Honeywell 6000 communication, and the old lpr print system are absent, for instance. Anywho, as I wrap up what I can prove, I'll probably swoop through and compare these SysIII to SysV to at least document what may have happened in that timeframe, if anything. If I'm lucky there are no visible changes in the man pages, meaning they were nominally identical between the various versions. We'll see.
Also a disclaimer, this is entirely documentation based. I have not yet cross-referenced changes I see in manuals with changes observed between code revisions. A later phase of my project will be doing this sort of analysis to try and reconstruct some idea of what code changes were 3-to-4 and which ones were 4-to-5, but that's quite a ways away. All that to say, a manual page could entirely be updated much later than a change it describes, so if something in code contradicts anything in the manuals, the code is obviously what the system was actually doing at the time. The manual is just how well someone bothered to document it.
The sections I've finished thus far are 2 - System Calls, 5 - Miscellaneous Facilities, and 6 - Games, and the frontmatter/intro section. Here's a bit of a digest on what I've gone over with a fine-toothed comb thus far:
3.x->4.x:
---------
There is a general trend towards platform-independence that already started with the merging of PDP-11 and VAX support into a single-ish codebase a la 3.x. This trend continues with the indication that machine discrepancies (and obsolescence) will be noted in the masthead of pertinent manual entries. References to adb are dropped from this intro section. Additionally, a new section numbering is applied to the User's and Administrator's Manuals (which are split, by the way). This starts the numbering/split scheme we continue to see in 5.0 and SysV a year or so later, where sections 4, 5, and 7 are shuffled to stick device files in section 7 and in turn split off 1M, 7, and 8 into a separate Administrator's Manual (a_man vs u_man). Unfortunately, since this split did occur at 4.x, I don't have the 1M, 7, and 8 sections to compare with; the copy of the manual I nabbed was just the User's Manual. If someone has a UNIX Administrator's Manual Release 4.x that they'd be willing to offer up for scanning/analysis, that would definitely help complete the circle.
Other frontmatter changes imply a move more towards commercialize-able literature. The Editors are commented out as indicated by the SysV manual sources later on. Unfortunately this means my goal of documenting authorship remains unattainable at present, but in any case, somewhere along the way the responsibility was shifted from Lab 364 (3.x) to Lab 4542 (5.x). An acknowledgement from the Lab 364 folks in the 3.x manual is dropped entirely, not even commented out. This acknowledgement thanks the efforts of those who assembled the V6, V7, PWB/2.0, and UNIX/TS 1.1 manuals (what I wouldn't give for the latter two...). Another change regarding commercialized literature is that the reference to UNIX for Beginners in the intro section is replaced with a reference to the "UNIX User's Guide". This manual does show up by SysV, but I don't know if this implies they were running those sets this way by the time of 4.1. Arnold Robbins provided Documents for UNIX 4.0 last year, which is very much still the old /usr/doc *roff documents, so either they were pre-empting the material they would start to produce with 5.x, or there is yet another set of potential 4.x documents floating around out there.
In any case, there are a handful of changes to 2 - system calls:
- intro reflects that error.h has been renamed to errno.h
- The "SysV" IPC shows up here in 4.1. I forget who, but someone mentioned in the past that 4.0 and 4.1 had different IPC systems, so this isn't that illuminating unfortunately.
- A minimum of 1 character for filenames is noted. I suppose around this time someone won the "well it doesn't *say* you can't have a zero character file name" pedantic argument.
- Various areas where groups are referred to, text is updated to ensure it is understood the author means "effective" group
- brk, exec, exit, and fork pages all now have verbiage concerning how they interact with IPC
- brk clarifies that added space is initialized to zero
- exec adds verbiage about argc, argv, envp, and environ, and notes that ENOEXEC doesn't apply to execlp and execvp
- fork adds a thorough description of which attributes of the parent process are passed down
- kill elaborates that real or effective user (and group) can influence permissions
- ptrace updates adb references to sdb and adds 3B-20 verbiage
- setuid consolidates explanations of setuid and setgid, no noticed change to functionality
- signal now defers to exit(2) to describe termination actions and changes header references from signal.h to sys/signal.h
- sys3b is added for 3B-20-specific system calls
- utsname gains the machine field (for -m)
- wait now stashes the signal number causing a return in the upper byte of the status word
And then under 5 - misc:
- Many pages used a .so directive to directly populate a given header. By SysV this has been changed to include the text in the pages directly. Unsure exactly what 4.x did but I went with the latter
- eqnchar loses the scrL, less-than-or-equal-to, and greater-than-or-equal-to character replacements
- ldfcn is added, this is the general description page for what will become the COFF library, at this stage it is very 3B-20 oriented in description
- man adds the \*(Tm trademark indicator
- mosd and mptx macro pages are added
- mv's macro page is pretty much rewritten to include the actual macros, the version in 3.x simply referred to there being macros and a manual coming soon
- types in 3.0 has variable sizes for cnt_t and label_t depending on VAX or not. 4.x seems to remove this discrepancy and always present the VAX sizes
And finally under 6 - games:
- A note about using cron to restrict access has been removed, unknown if this is a stylistic choice or because cron is a 1M and therefore no longer in this manual
- chess and sky both have their FILES sections removed
- jotto is added
- ttt gains a note that the cubic variant does not work on VAX
- wump is no longer PDP-11 only as of 4.1
Other stuff not fully digested yet:
- It appears what would become COFF (Common Object File Format) had its beginnings as the 3B-20 object file format for UNIX/TS 4.x. The 3B-20 object-related stuff becomes the more general versions in 5.x.
- The LP print service has its start in 4.x.
- SysV IPC appears to be largely there by 4.1, with only ipcrm missing as far as I could tell.
- 4.x introduces the termio system.
- 4.1 may signal the start of distributing guidance material as "Guides" rather than "Documents For UNIX". There are a number of tech report citations that have been updated to reflect this.
4.x->5.x:
---------
As for the 4.x->5.x for these same items (I'll mention 4.x->5.x and 5.x->SysV both):
- Sys V adds a notice that the manual describes features of the 3B20S which is not out. No such notice is in the 5.0 manuals, so this was specifically for outside consumption
- The BTL version of the 5.0 manual features the return of the acknowledgements page as well as a preface describing the manual and DIV 452's involvement. If there was a BTL version of 4.x manuals, I suspect they may have also included this as it was in the 3.0 manual.
- Sys V begins the "The UNIX System" nomenclature in earnest.
- Otherwise the frontmatter seems pretty much unchanged from 4.x to 5.x, the only discrepancies arise in the BTL and SysV variants
In 2 - system calls:
- adds a few required headers
- shmop appears to have changed headers slightly, requiring ipc.h and shm.h instead of shmem.h
- signal states that apparently SIGCLD is now reset when caught
- sys3b adds syscalls 3, 9, and 10, for attaching to an address translation buffer, changing the default field test set utility-id, and changing FPU flag bits respectively
- times switches to the tms struct from the tbuffer struct. Same fields but slight change in names. Times also notes that times are given in 1/100th of a second for WECo processors (1/60th for DEC).
In 5 - misc:
- ldfcn is moved to section 4 and drops 3B-20 specificity, signaling the start of COFF in earnest
- term adds names for TELETYPE 40/4 and 4540 as well as IBM Model 3270 terminals
- types adds the uint (unsigned int) and key_t (long) types
And there are actually no noteworthy changes to section 6.
So general takeaways thus far on the 4->5 transition:
- COFF becomes formalized here rather than being the 3B-20 format that might get extended to things
- "The UNIX System" nomenclature begins with System V
- 3B20S starts showing up in external literature but isn't openly available yet
- 3B-20 support is still growing and being used as a model for "generic" components, especially COFF and SGS
- CB-UNIX init is moved over starting with 5.0
---------
There'll definitely be more to come as I do my second pass of sections 1, 3, and 4. As things begin to wrap up I also intend to upload the manual restoration somewhere, probably archive.org. I still intend to scan the physical copy sometime later this year, but this'll get info out so people can research things, and of course if any discrepancy ever arises I'll happily pull the original manual page and scan it as proof of anything odd.
My biggest takeaway from what I've covered thus far is that this clarifies the history of COFF a bit. Wikipedia states that COFF was a System V innovation, which, commercially, it very well was. However, this documentation demonstrates that it likely began life as a 3B-20-specific format that was then applicable to others. This now places COFF, LP, *and* IPC all as things credited to System V that really started with at most UNIX/TS 4.1.
The more I look at things, the more 5.0 appears to actually be a minor release compared to what all was going on in the 4.x era. From 4.1 to 5.0 the largest changes I see thus far are the addition of CB-UNIX init, generalization of COFF from a 3B-20 object format, and otherwise just clerical, marketing, and accuracy improvements to the literature. This statement will be qualified much better as I turn over more ground on this, but that's the general gist I've been gathering as I go through this: 3.x->4.x saw the introduction of a great number of soon-to-be-ubiquitous parts of UNIX and then 4.x->5.x and on to System V saw those components being tuned and the release being shepherded along into a viable commercial solution.
As with anything, this is, of course, all based on analysis of documents, so if there are any inaccuracies in anything, I apologize and welcome corrections/clarifications. Hopefully by the end of all of this there'll be enough content to draft up a proper Wikipedia article on the "System IV" that never was and correct what truly was a System V innovation vs what just finally popped out of USL with that version. If you made it all the way here, thanks for reading!
- Matt G.
The subject of Communication Files on DTSS came up recently, and Doug
linked to this wonderful note:
https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
Over on the Multicians list, I raised a question about the history of the
DTSS emulator on Multics in response to that, which sadly broke down into
antagonism, the details of which aren't terribly interesting. But Barry
Margolin suggested that the closest modern Unix analogue of Communication
Files were pseudo-TTYs, that had generated a dustup here. Doug's note
suggests that Plan 9's userspace filesystems, aided and abetted by mutable
namespaces and 9P as a common sharing mechanism, were a closer analogy.
But I wonder if multiplexed files were perhaps an earlier analogue; my
cursory examination of mpx(2) shows some similarities to the description of
the DTSS mechanism.
But I confess that I know very little about these, other than that they
seem to be an evolutionary dead end (they don't survive in any modern Unix
that I'm aware of, at any rate). I don't see much about them in my
archives; Paul Ruizendaal mentioned them tangentially in 2020 in relation
to non-blocking IO: they are, apparently, due to Chesson?
Does anyone have the story here?
- Dan C.
Hi all,
Today, as I was tooling around on stack overflow, I decided to ask a
question on meta. For those of you who don't know, stack overflow is
supposedly a q&a site. There are zillions of answers to quite a few "how
to do i do x" style questions. Folks upvote and downvote the answers and
the site is a goto for a lot of developers. I've used it since it came
online - back in the late 2000's. I have a love hate relationship with
the site. When there's a good answer to a question that I have, I love
it. When they downvote fringe cases that I care about to the point where
they effectively become gray literature that is near on impossible to
locate - I hate it. Meta is supposedly where you go to ask questions
about the stack.
Yesterday, I asked this question:
Do you know of any studies that have been done around downvoted
content, specifically on stack overflow or stack exchange?
By way of background - I find any questions or answers that are on
the border (+1, 0, -1) as dubiously helpful, but when the downvotes
pile up, much like upvotes, the answers become interesting to me
again as they give me insights I might miss otherwise.
After a slew of why would you think that was interesting, there's no
value with upvotes and downvotes, and your question is unclear responses
along with, as of now, 31 downvotes net, the question was closed for
lack of clarity. My answer, which was informed by some of the comments was:
There don't appear to be any papers on downvoting specific to Stack
Overflow. You can find a good list of known academic papers using
Stack Exchange data in the list hosted on Stack Exchange Meta
(link). It is an attempt to keep a current list of works up to date.
The Stack Exchange Data Explorer (link) is an open API for doing
data research, if you want to dig into the data yourself.
Which was quickly downvoted 9 times net.
To see the entire debacle:
https://meta.stackoverflow.com/questions/423080/are-there-any-serious-studi…
Anyhow, other than what I perceive to be a decidedly hostile environment
for asking questions, it is still actually a useful resource.
Wow, have times changed though on the hostility front.
So, it got me thinking...
What was it like in the very beginning of things (well, ok, maybe not
the very beginning, but around and after the advent of v6 and when it
was at or around 50 sites) for folks needing answers to questions
related to unix?
The questions... and for the love of Pete, don't downvote me anymore
today, I'm a fragile snowflake, and I might just cry...
What was the mechanism - phone, email, dropbox of questions, snail mail,
saint bernardnet, what?
What was the mood - did folks quickly tire of answering questions and
get snippy, or was it all roses?
When did those individual inquiries get too much and what change was
made to aggregate things?
I'm thinking there may have been overlap between unix users and
usenet... Also, I remember using fidonet for some of my early question
about linux, but that was 1991, many years after the rise of unix.
Thanks,
Will
There was recent discussion here about the Typesetter C compiler; I don't
have the energy to look through the tons of opinion posts about recent
programming styles, to find the posts about actual Unix history which related
to that compiler, but I seem to recall that there was interest in locating
the source for it? I had started to look, but then got distracted by some
other high-pri stuff; here are a few notes that I had accumulated to reply -
I hope they aren't too out-of-date by now.
I have a copy of it, from the dump of the CSR machine (I can't make the whole
dump public, sorry; it has personal material from a bunch of people mixed in).
I was pretty sure the C compiler from Mini-Unix, here:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=Mini-Unix/usr/source/c
was from the right timeframe to be the Typesetter C, but a quick check of
c0.h, shows that it's not; that one seems to be more like the V6 one. (Ditto
for LSX.)
The PWB1 one:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/c/c
seems, from a very quick look at c0.h (using that nice side-by-side compare
feature on the TUHS archive - thanks, Warren!), to be somewhat close to the
Typesetter C. It would be interesting to compare that one to the CSR one
(which definitely is) to be sure.
Also, the V7 C compiler (not pcc, but the PDP-11 one) seems to be a fairly
close relative, too.
Noel
Herewith some interesting (somewhat) contemporary papers on early windowing systems:
1. There was a conference in the UK early in 1985 discussing the state of window systems on Unix. Much interesting discussion and two talks by James Gosling, one about NeWS (then still called SunDew), and one about what seems to be SunWindows. It would seem then that these were developed almost in parallel.
http://www.chilton-computing.org.uk/inf/literature/books/wm/contents.htm
2. Then there is a 1986 paper by James Gettys, discussing the 18 month journey towards X10. In particular it focuses on the constraints that Unix set on the design of the X system.
https://www.tech-insider.org/unix/research/acrobat/860201-b.pdf
3. Next is the 1989 NeWS book that has a nice overview and history of windowing systems in its chapter 3:
http://bitsavers.trailing-edge.com/pdf/sun/NeWS/The_NeWS_Book_1989.pdf
Both the UK conference and the NeWS book mention a Unix kernel-based windowing system done at MIT in 1981 or 1982, “NU" or “NUnix”, by Jack Test. That one had not been mentioned before here and may have been the first graphical windowing work on Unix, preceding the Blit. Who remembers this one?
4. Finally, an undated paper by Stephen Uhler discussing the design of MGR is here:
https://sau.homeip.net/papers/arch.pdf
I’ve not included Rob Pike’s papers, as I assume they are well known on this list.
Some of the above papers may be worthy of stable archiving.
Hi all,
Firstly, thanks to Warren for adding me to the list!
The 6th Edition manual refers to 'cron', not 'crond' (even though
cron was indeed referred to as a 'daemon'). By 4.2BSD, however, we
have things like 'telnetd' and 'tftpd'.
Does anyone have any pointers as to when and where the '-d'
convention started to be used?
Thanks in anticipation,
Alexis.
All,
I just saw this over on dragonflydigest.com:
https://0j2zj3i75g.unbox.ifarchive.org/0j2zj3i75g/Article.html
It's an article from 2007 about the history and genesis of the Colossal
Cave Adventure game - replete with lots of pics. What I found
fascinating was that the game is based on the author's actual cave
explorations vis a vis the real Colossal Cave. Gives you a whole new
appreciation for the game.
My question is do y'all know of any interesting backstories about games
that were developed and or gained traction on unix? I like some of the
early stuff (wumpus, in particular), but know nothing of origins. Or,
was it all just mindless entertainment designed to wile away the time?
Spacewar, I know a bit about, but not the story, if there is one...
Maybe, somebody needed to develop a new program to simulate the use of
fuel in rockets against gravity and... so... lunar lander was born? I
dunno, as somebody who grew up playing text games, I'd like to think
there was more behind the fun than mindless entertainment... So, how
about it, was your officemate at bell labs tooling away nights writing a
game that had the whole office addicted to playing it, while little did
they know the characters were characterizations of his annoying neighbors?
If you don't mind, if you take the thread off into the distance and away
from unix game origins, please rename the thread quickly :).
Thanks,
Will
> In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes
I don't know any Unix examples, but DTSS (Dartmouth Time Sharing
System) "communication files" were used for the purpose. For a fuller
story see https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
> This is probably a bit more Plan 9-ish than UNIX-ish
So it was with communication files, which allowed IO system calls to
be handled in userland. Unfortunately, communication files were
complicated and turned out to be an evolutionary dead end. They had
no ancestral connection to successors like pipes and Plan 9.
Equally unfortunately, 9P, the very foundation of Plan 9, seems to
have met the same fate.
Doug
> segaloco wrote:
> In the annals of UNIX gaming, have there ever been notable games that
> have operated as multiple processes, perhaps using formal IPC or even
> just pipes or shared files for communication between separate processes
> (games with networking notwithstanding)?
The machine of the DSSR/RTS group at MIT-LCS, Steve Ward's group (an -11/70
running roughly PWB1) had an implementation of a form of Perquackey:
https://en.wikipedia.org/wiki/Perquackey
that was a multi-player game; I'm pretty sure there was a process per player,
and they communicated, I'm pretty sure, through pipes, not files - there was
certainly no IPC in that system.
IIRC, the way it worked was that there was a parent process, and it spawned
a child process for each terminal that was playing, and the children could
all communicate through pipes. (They had to communicate because in that
version, all the players shared a single set of dice, and once one person had
played a word, the other players couldn't play that word. So speed was
important in playing; people got really addicted to it.)
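The structure Noel describes can be sketched in modern Python (the original V6-era C isn't available, so this is only a reconstruction of the shape: a parent forks one child per playing terminal, and each child reports its plays back over a pipe):

```python
import os

# One parent, one pipe; the real game would have a child (and a pipe)
# per terminal, with the parent refereeing which words had been claimed.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                 # child: a "player" process
    os.close(r)
    os.write(w, b"QUACK\n")  # claim a word
    os._exit(0)
os.close(w)
word = os.read(r, 64)        # parent sees the claim
os.waitpid(pid, 0)
```

Since V6 had no other IPC, pipes set up before the fork were the only way the children could race to claim words from the shared dice.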
Alas, although their machine was very similar to CSR's (although ours was an
-11/45 with an Able ENABLE and a lot of memory, making it a lot more like a
/70), and we shared most code between the machines, and I have a full dump of
the CSR machine, we apparently didn't have any of the games on the CSR
machine, so I can't look at the source to confirm exactly how it worked.
Noel
I did not want to disrupt the FD 2 discussion, but I could not really let
this comment go unanswered:
"Smells like C++ to me. Rust in essence is a re-implementation of C++ not
C. It tries to pack as much features as it possibly can. "
It's hard for me to see how you can say this if you have done any work in
Rust and C++.
But, short form: Rust is not C++. Full stop.
I did not feel that comment should go unanswered, lest people believe it.
On Wed, Feb 01, 2023 at 01:50:05AM +0000, segaloco via COFF wrote:
> COFF'd
Thanks Matt. Yes please, C vs. Rust should mv to the COFF list.
Cheers, Warren
Hello!
I saw someone playing chess on their pdp-11 and thought it could be an
interesting project to run on my pdp-11. At this point the RK05s are not
yet running so booting unix v6 is not possible.
I then thought that if the source code could be found it might be possible
to get it to run standalone with some modifications.
After some googling I found the archive
https://www.tuhs.org/Archive/Distributions/UNSW/7/record0.tar.gz
which contained a chess.lib file. It appeared that this archive contained
source code for some kind of chess program. I have been told that it isn't
the chess written by Ken Thompson so the question is who wrote it? There
are not many comments in the code. Could be interesting to know more about
this chess implementation.
Just looking through the source files and the mk file shows that it is
missing a set of files. The mk file references a set of "b"-prefixed
assembly files, bgen.s, bmove.s, bheur.s and bplay.s which are present in
the archive. But it also references a set of files with "w"-prefix, wgen.s,
wmove.s, wheur.s and wplay.s which are missing.
I also recognise that there is an include file, "old.h" that is included
from all c-modules that most likely is present in the overload.lib which
seems to be an overlay loader.
Anyone that has an idea how this thing was built once upon a time?
/Mattis
A lot of this dates back from the old “tty” days (or at least crt
terminals that were still alphanumeric). Even the concept of putting
a job in the background with & is really a feature of trying to
multiplex multiple tasks on the same dumb terminal. This is indeed
less important with windowing things like the DMDs or modern windowing
workstations when you can just get another window. The Berkeley job
control was an interesting hack. For us at BRL the problem was I
absolutely detested the C shell syntax. The Korn shell hadn’t escaped
from AT&T yet, so, I spent time figuring out how that really worked in
the C shell (not really well documented), mostly by inspection, and then
reimplemented it in the Bourne Shell (we were using the System V source
code version for that). I still couldn’t get traction at BRL for
using the Bourne shell because by that time, tcsh had come out with
command line editing. So back to the shell sources I went. By this
time, 5R2 had come out so I grabbed the shell source from that. I was
very delighted to find that the macros that made C look sorta like Algol
had been unwound and the thing was back to straight C. I reworked
emacs-ish command line editing into the shell. Subsequently, I had a
nice conversation with David Korn at USENIX, being probably at that
point the two most familiar with Bourne shell job control internals. I
also sat down with the guys writing either bash or the pdksh (can’t
remember which) and explained how all this worked. As a result my name
ended up in the Linux manpages which was pretty much all I found for a
while when I googled myself.
Years later, I had left the BRL, spent three years as a Rutgers
administrator and was working for a small startup in Virginia. There
was a MIPS workstation there. I was slogging along using ed (my
employees always were amazed that if there was no emacs on the system, I
just used ed, having never learned vi). Not thinking about it, I
attempted to retrieve a backgrounded job by typing “fg.” To my
surprise the shell printed “Job control not enabled.” Hmm, I say.
That sounds like my error message. “set -J” I type. “Job control
enabled.” Hey! This is my shell. Turns out Doug Gwyn put my mods
into his “System V on BSD” distribution tape and it had made its way
into the Mach code base and so every Mach-derived system ended up with
it. Certainly, I found it convenient.
There have been other schemes other than job control to multiplex
terminals. IBM in the AIX that ran on the 370/PS2/i860 had a device
called the High Function Terminal that allowed you to swap screens on
the console. When we implemented the i860 (which was an add in card
for the micro channel), we called our console the Low Function Terminal.
Then after spending some time on MIT’s ITS and TOPS20, I got intrigued
by the fact on those systems you could have a “shell” that persisted
across logins and could be detached and reattached on another device at
another time. I set about making such an implementation. Not
particularly efficient, it essentially grabbed one of the BSD ptys and
spawned the shell there and then a small alternative login shell
forwarded the real tty to that. You could then detach it leaving the
shell running on the PTY and reattach it elsewhere. Much like ITS,
on login it reminded you that you had a detached shell running and
offered to reattach it rather than spawning a new one (complete with the
ITS-ish: space for yes, rubout for no). It never really caught on.
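A rough sketch of that mechanism using Python's pty module (an assumption-laden stand-in: the real thing was a BSD pty grabbed by a small alternative login shell, not Python):

```python
import os
import pty

# Run a "shell" on the slave side of a pty. The attaching front end
# holds the master side and shovels bytes between it and the real tty;
# detaching means the front end goes away while the shell keeps running
# on the pty, ready to be reattached from another terminal.
pid, master = pty.fork()
if pid == 0:                    # child: the persistent shell
    os.execvp("sh", ["sh", "-c", "echo attached"])
output = os.read(master, 1024)  # front end relays this to the real tty
os.waitpid(pid, 0)
```

This is essentially the arrangement later popularized by screen and tmux.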
Oh well, pardon my ramblings.
-Ron
All, w.r.t. FD 2, if your e-mail doesn't mention file
descriptors, can you please change the subject line?
And this, in general, for all TUHS discussions :-)
Thanks! Warren
Some folks here might be interested in this. I haven't read this (not gonna
subscribe), I know nothing about the author (Bradford Morgan White), but I
saw Steve Sinofsky tweet good things about it.
The Network is the Computer
The story of Sun Microsystems and the Java programming language
https://www.abortretry.fail/p/the-network-is-the-computer
> * What do I really mean by workstation? Ex.gr. If an installation had a
> PDP-11 with a single terminal and operator, is it not a workstation? Is it
> the integration of display into the system that differentiates?
Certainly integration is critical. The display should be integral to the
terminal, not simply an available device.
Without that stipulation Ken's original single-user PDP-7 system would
count (unless, perhaps, the system had not yet been christened "Unix").
Doug
Another of Ron’s historical diversions that came to mind.
Most of you probably know of various exploits that can happen now with
setuid programs, but this was pretty loose back in the early days. I
was a budding system programmer back in 1979 at Johns Hopkins. Back
then hacking the UNIX system was generally considered as sport by the
students. The few of us who were on the admin side spent a lot of time
figuring out how it had happened and running around fixing it.
The first one found was the fact that the “su” program decided that if
it couldn’t open /etc/passwd for some reason, things must be really bad
and the invoker should be given a root shell anyhow. The common
exploit would be to open all the available file descriptors (16 I think
back then) and thus there wasn’t one available. That was fixed before
my time at JHU (but I used it on other systems).
One day one of the guys who was shuffling stuff back and forth between
MiniUnix on a PDP-11/40 and our main 11/45 UNIX came to me with his RK05
file system corrupted. I found that the superblock was corrupted.
With some painstaking comparison to another RK05 superblock, I
reconstituted it enough to run icheck -s etc. and get the thing back.
What I had found was that the output of the “mount” command had been
written on the superblock. WTF? I said, how did this happen.
Interrogating the user yielded the fact that he decided he didn’t want
to see the mount output so he closed file descriptor one prior to
invoking mount. Still it seemed odd.
At JHU we had lots of people with removable packs, so someone had
modified mount to run setuid (with the provision of only allowing
certain devices to be mounted certain places). At that point we had
started with the idea of putting volume labels in the superblock to
identify the pack being mounted. Rather than put the stuff in the
kernel right away, Mike Muuss just hacked reading it from the super
block in the usermode mount program so that he could put the volume
label in /etc/mtab. Now you can probably see where this is headed.
It opens up the disk, seeks to the pack label in the superblock and
reads it (for some reason things were opened RW). Then the output goes
to file descriptor 1 which just happens to be further in the superblock.
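The mechanics here are just Unix's lowest-free-descriptor rule; a sketch in modern Python, with ordinary scratch files standing in for the terminal and the disk:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
disk = os.path.join(tmpdir, "disk")  # stands in for the RK05 device

a = os.open(os.path.join(tmpdir, "a"), os.O_RDWR | os.O_CREAT, 0o600)
b = os.open(os.path.join(tmpdir, "b"), os.O_RDWR | os.O_CREAT, 0o600)
os.close(a)   # the user "closed fd 1" before running mount...
c = os.open(disk, os.O_RDWR | os.O_CREAT, 0o600)
# ...so mount's open() of the disk lands on the freed descriptor, and
# everything "printed to stdout" is written over the superblock.
os.write(c, b"/dev/rk05 mounted on /mnt\n")
```

open() always returns the lowest unused descriptor, so the freed slot is exactly where the next open lands.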
I figured this out. Fixed it and told Mike about it. I told him
there were probably other setuid programs around that had the problem
and asked if it was OK if I hacked on things (at the time I yet was not
trusted with the root password). He told me to go ahead, knock
yourself out.
Well I spent the evening closing various combinations of file
descriptors and invoking setuid programs. I found a few more and noted
them. After a while I got tired and went home.
The next day I came in and looked through our paper logbook that we
filled out anytime the machine was shutdown (or crashed). There was a
note from two of the other system admins saying they had shut the system
down to rebuild the accounting file (this was essentially the shadow
password file and some additional per-user information not stored in
/etc/passwd). The first 8 bytes were corrupted. Oh, I say, I think
I might know how that happened. Yeah, we thought you might. Your
user name was what was written over the root entry in the file. The
passwd changing program was one of the ones I tested, but I hadn’t
noticed any ill-effects from it at the time.
Notably: 32v_usr.tar.gz and sys3.tar.gz. I’ve not unpacked the tar files. If someone would like more detail about the contents I’ll produce a TOC offline for them.
David
As a result of the recent discussion on this list I’m trying to understand the timeline of graphical computing on Unix, first of all in my preferred time slot ’75 -’85.
When it comes to Bell Labs I’m aware of the following:
- around 1975 the Labs worked on the Glance-G vector graphics terminal. This was TSS-516 based with no Unix overlap I think.
- around the same time the Labs seem to have used the 1973 Dec VT11 vector graphics terminal; at least the surviving LSX Unix source has a driver for it
- in 1976 there was the Terak 8510; this ran primarily UCSD Pascal, but it also ran LSX and/or MX (but maybe only much later)
- then it seems to jump 1981 and to the Blit.
- in 1984 there was MGR that was done at Bellcore
Outside of the labs (but on Unix), I have:
- I am not sure what graphics software ran on the SUN-1, but it must have been something
- Clem just mentioned the 1981 Tektronix Magnolia system
- Wikipedia says that X1 was 1984 and X11 was 1987; I’m not sure when it became Unix centered
- Sun’s NeWS arrived only in 1989, I think?
Outside of Unix, in the microcomputer world there was a lot of cheap(er) graphics hardware. Lots of stuff at 256 x 192 resolution, but up to 512 x 512 at the higher end. John Walker writes that the breakout product for Autodesk was Interact (the precursor to AutoCAD). Initially developed for S-100 bus systems it quickly moved to the PC. There was a lot of demand for CAD at a 5K price point that did not exist at a 50K price point.
> From: Lars Brinkhoff
> It's my understanding it was started by Bob Scheifler of the CLU group.
Yes, that's correct. (Bob's office was right around the corner from me -
although I had very little knowledge of what his group was up to; I was too
busy with other things.)
I have this vague memory that his version was actually written in CLU? Can
that be correct? It would make sense, since that group was so focused on CLU
- but maybe not, see below.
X must have been done after LCS got the 750 farm (on which we ran 4.1c, to
start with) - although I don't know what kind of terminals they were using to
run X on - we didn't have any bit-mapped displays on them, I'm pretty sure.
Although maybe it was later, once Micro-Vaxes appeared?
I have this vague memory that it was based (perhaps only in design, not code
re-use) on a window system done at Stanford {looks}; yes, W (hence 'X'):
https://en.wikipedia.org/wiki/W_Window_System
The X paper listed there:
https://dl.acm.org/doi/pdf/10.1145/22949.24053
doesn't say anything about the implementation, so maybe that vague
memory/assumption that I had that it was originally written in CLU is wrong.
Liskov's 'History of CLU' paper, which lists things done in CLU, doesn't
mention it, so I must have been confused?
Do any of the really early versions of X (and W) still exist?
Noel
Hi.
I've been using trn for decades to read a very few USENET groups. Until recently I've
been using aioe.org as my NNTP server but it seems to have gone dark. Before that
I used eternal-september.org, but when I try that I now get:
| $ NNTPSERVER=news.eternal-september.org trn
| Connecting to news.eternal-september.org...Done.
|
| Invalid (bogus) newsgroup found: comp.sys.3b1
|
| Invalid (bogus) newsgroup found: comp.sources.bugs
|
| Invalid (bogus) newsgroup found: comp.misc
|
| Invalid (bogus) newsgroup found: comp.compilers
| ....
And those all are (or were!) valid groups. If anyone has suggestions for a good
free NNTP server, please let me know. Privately is fine. I'm at a bit of
a loss otherwise.
Thanks,
Arnold
i worked for some years on video and film archive restoration.
baking old, badly stored magnetic tapes prior to reading them is a common practice.
my favourite was a story of a rock band (i think the stones) who wanted to play an old 24 track master tape but discovered it seemed to be stuck together.
there is a nasty affliction of mag tapes called sticky vinegar syndrome, so they did the right thing and sent a section of tape for analysis.
the results came back: the tape had suffered impregnation with “vodka and coke”.
some things never change.
-Steve
Hi All,
I just wanted to let y'all know that tesseract ocr has significantly
improved and is much easier to use than it used to be. I have been using
it with my workflow for a bit and it's crazy how much better it is than
it was back when I tried it last (admittedly 5-6 years ago). For those
of you doing your own scans, or those of you finding sad little pdfs
without ocr, the process is fairly simple.
Let's say you find "The Master Manual of Fortran.pdf" out there in the
wild (or scan it). Here's how to turn it into a glorious ocr'd version:
Export your pdf as a multi-image tiff - it'll be ginormous, but you can
delete it later (on Mac, this is just export from preview and select
tiff, but gs will do it too, if I remember correctly) and then:
tesseract The\ Master\ Manual\ of\ Fortran.tiff out -l eng PDF
et voila, a nice, if large, pdf called out.pdf or somesuch will appear
with ocr text that actually matches your scan (it seems to have caught
up to adobe's ocr, or is quite close in my view, ymmv).
I speak English, so I installed tesseract and tesseract-eng, but it
supports a bunch of other languages if you need them. Apparently
google's been supporting and developing it for while now and if my
results are any indicator, it's paying off (boy do I remember all the
gobbledegook it used to produce).
tesseract will import from different image types, multiple images, etc.
I just like the simplicity of tiff->pdf.
Anyhow, thought y'all might like to know as many of you live off the
scans :).
Will
If you don't want to play Space Travel on PDP-7 Unix, you can now do it
more easily running this C port. The controls are--so far--just as
quirky as the original.
https://github.com/mohd-akram/st
At 01:20 AM 1/26/2023, John Cowan wrote:
>WP says the Terak 8510/a was the first graphical workstation; it came out in 1976-77 and ran the UCSD p-System. I had never heard of it before.
I have a dozen or so Teraks (a PDP-11/03 based system) as well as
many floppies and other inherited items and notebooks from one
of the Terak founders. This may seem like a lot but there's another
guy who might still have a larger collection.
Mini-Unix is described here:
http://www.tavi.co.uk/unixhistory/mini-unix.html
Sixth edition, no MMU. The Bell memo there is dated January 1977.
There was a Mini-Unix for the Terak described here in May 1979 but
I don't think I have a copy. See page 14...
https://conservancy.umn.edu/bitstream/handle/11299/159028/UCC_Special%20_Is…
Terak floppies are described here:
http://www.60bits.net/msu/mycomp/terak/termedia.htm
A memo there says they got their copy in April 1980.
There's no indication that this Mini-Unix can *use* the Terak's mono
bitmapped display, short of writing your own routines. Pinning "first"
on computers is always a tricky process.
- John
>> * What do I really mean by workstation? Ex.gr. If an installation had a
>> PDP-11 with a single terminal and operator, is it not a workstation? Is
>> it the integration of display into the system that differentiates?
>
> I remember people calling something a workstation,
> if it has the four "M"
>
> at least 1 MByte memory
> at least 1 megapixel display
> at least 1 mbit/s network
> can't remember the fourth (was there a fourth?)
I remember it as:
at least 1 MByte memory
at least 1 megapixel display
at least 1 MIPS
cost at most 1 mega penny (10K, maybe 35K in today’s money)
That matches with Wikipedia, for whatever that is worth: https://en.wikipedia.org/wiki/3M_computer
but note that it talks about 3M not 4M.
With hindsight, not adding in networking speed looks strange -- but maybe the world had already settled on LAN speeds above 1Mb/s by 1980 (Ethernet, ARCNet)
[Bcc: to TUHS as it's not strictly Unix related, but relevant to the
pre-history]
This came from USENET, specifically, alt.os.multics. Since it's
unlikely anyone in a position to answer is going to see it there, I'm
reposting here:
From Acceptable Name <metta.crawler(a)gmail.com>:
>Did Bell Labs approach MIT or was it the other way around?
>Did participating in Project MAC come from researchers requesting
>management at Bell Labs/MIT or did management make the
>decision due to dealing with other managers in each of the two
>organizations? Did it grow out of an informal arrangement into
>a formal one?
These are interesting questions. Perhaps Doug may be in the know?
- Dan C.
Good morning all, currently trying to sort out one matter that still bewilders me with this documentation I'm working on scanning.
So I've got two copies of the "Release 5.0" User's Manual and one copy of the "System V" User's Manual. I haven't identified the exact differences, lots of pages...but they certainly are not identical, there are at least a few commands in one and not the other.
Given this, and past discussion, it's obvious Release 5.0 is the internal UNIX version that became System V, but what I'm curious about is if it was ever released publicly as "Release 5.0" before being branded as System V or if the name was System V from the moment the first commercial license was issued.
The reason I wonder this is some inconsistencies in the documentation I see out there. So both of my Release 5.0 User's Manuals have the Bell logo on the front and no mention of the court order to cease using it. Likewise, all but one of the System V related documents I received recently contain a Bell logo on the cover next to Western Electric save for the Operator's Guide which curiously doesn't exhibit the front page divestiture message that other documents missing the Bell logo include. Furthermore, the actual cover sheet says "Operator's Guide UNIX System Release 5.0" so technically not System V. In fact, only the User's Manual, Administrator's Manual, Error Message Manual, Transition Aids, and Release Description specifically say System V, all the rest don't have a version listed but some list Release 5.0 on their title page.
Furthering that discrepancy is this which I just purchased: https://www.ebay.com/itm/314135813726?_trkparms=amclksrc%3DITM%26aid%3D1110…
Link lives as of this sending, but contains a closed auction for an Error Message Manual from the "Release 5.0" documentation line but no Bell logo. Until the Operator's Guide and this auction link, I haven't seen any "Release 5.0" branded stuff without a Bell logo, and before I bought the System V gold set, I hadn't seen System V branded stuff *with* the Bell logo.
This shatters an assumption that I had made that at the same time the documentation branding shifted to System V was the same time the removal of the Bell logo happened, given that divestiture was what allowed them to aggressively market System V, but now this presents four distinct sets of System V gold documentation:
Release 5.0 w/ Bell logo
Release 5.0 w/o Bell logo
System V w/ Bell logo
System V w/o Bell logo
I'm curious if anyone would happen to know what the significance here is. The covers are all printed, I can't see any indication that a bunch of 5.0 manuals were retroactively painted over nor that any System V manuals got stamped with a Bell post-production. What this means is "Release 5.0" documentation was being shipped post-divestiture and "System V" was being shipped pre-divestiture. If Release 5.0 was publicly sold as System V, then what explains the post-divestiture 5.0 manuals floating around in the wild, and vice versa, if USG couldn't effectively market and support UNIX until the divestiture, how is it so many "Release 5.0" documents are floating around in well produced commercial-quality binding, both pre and post-divestiture by the time the name "System V" would've been king. Were they still maintaining an internal 5.x branch past System V that warranted its own distinct documentation set even into the commercial period? This period right around '82-'83 is incredibly fascinating and I feel very under-documented.
- Matt G.
Good day everyone, just emailing to notify of three more documents I've uploaded to archive.org since my last slew of them:
https://archive.org/details/unix-programming-starter-package - Up first is the UNIX Programming Starter Package. This is one of a pair of manuals that saw publication in the Bell system around the time of UNIX/TS 4.0. The documents here appear to be a subset of those which shipped with Documents for UNIX 4.0. Nothing particularly new here. There is a companion manual, UNIX Text Editing & Phototypesetting Starter Package, which I also have but haven't hit the scan bench with yet. Like this one, that is just a subset of papers from the Documents for UNIX collection. Based on the TOC, this second one also shipped with one of those PWB/MM multi-fold pamphlets, but didn't receive one when I got this. Luckily that was also scanned as part of the 4.0 collection. So nothing really new here, save that these are 1st generation scans vs the scans of photocopies for the 4.0 release. That said, I've seen this set with the same cover motif except with an AT&T death star logo in the upper right. Didn't look into it too much at the time, but I'd be curious if anyone might have those and if they have the programming one, if it still refers to Release 4.0 in the documentation roadmap.
https://archive.org/details/unix-system-users-guide-release-5-0 - This is the User's Guide that shipped with Release 5.0/System V. This covers the usual suspects as well as some notes on RJE and SCCS from a user's perspective.
https://archive.org/details/unix-system-administrators-guide-5-0 - And this is the Administrator's Guide likewise from SysV era. This one contains setup and maintenance notes for both DEC (PDP & VAX) and 3B20(S) machines, as well as papers on accounting, LP printing, RJE, filesystem checking, and the System Activity Package. Additionally, the guide includes the original Modification Request form.
- Matt G.
Found this tweetstream, here folded together, when looking for something
else (now lost) in my twitter archive:
==========
Things I miss from the v8 shell.
1) All shell output was valid shell input.
2) Typing dir/cmd would find the command $PATH/dir/cmd. Subdirectories of
your bin, in other words.
3) Functions were exportable. For one brief shining POSIX meeting, that was
true in POSIX too but then...
4) The implementation was lovely and easy to understand. (No, it wasn't
shalgol. Bourne fixed that for us.)
5) That I could learn things from it, like how to write a recursive descent
parser.*
6) It ran in cooked mode.
As expected, all that work making it a great shell is lost to history.
https://t.co/IzApAUSmzN is silent. Well, the code is released now.
==========
-rob
* elegantly.
> From: Larry McVoy <lm(a)mcvoy.com>
> At least 30 years ago I said "He's good programmer, a good architect,
> and a good manager. I've never seen that in one person before".
Corby? Although he was just down the hall from me, I never saw him operating
in any of those roles; maybe some of the old-time Unix people have some
insight. Saltzer is about off-scale in #2; probably good as a manager
(although I had a monumental blow-up with him in the hallway on the 5th
floor, but I was pretty close to unmanageable when I was young ;-); he took
over Athena when it was stumbling, and got it going. Dave Clark is high on
all three - he could manage me! :-)
Bob Taylor? PARC did some _incredibly_ important stuff in his time. Yes, I
know a lot of the credit goes to those under him (Butler Lampson, Alan Kay -
not sure if he was in Taylor's group, Boggs, Metcalfe, etc) but he had to
manage them all. Not sure what his technical role was, though.
Vint Cerf? Again, A1*** as a manager, but had some failings as an architect. I
think the biggest share of the blame for the decision to remove the variable
size addresses from TCP/IP3, and replace them with 32-bit addresses in
TCP/IPv4, goes to him. (Alas, I was down the hall, not in the room, that day;
I wasn't allowed in until the _next_ meeting. I like to think that if I'd been
there, I could/would have pointed out the 'obvious' superior alternative -
'only length 4 must be supported at this time'.)
Noel
PS: ISTR that about a month ago someone was asking for management papers
from that era (but I was too busy to reply); two good ones are:
- F. J. Corbató, C. T. Clingen, "A Managerial View of the Multics System Development"
https://multicians.org/managerial.html
- F. J. Corbató, C. T. Clingen, and J. H. Saltzer, "Multics -- the first seven years"
https://multicians.org/f7y.html
> My guess is that Ivan Sutherland probably qualified back when he still
> programmed ... I mean, after all, he invented the linked list in order to
> implement his thesis program (Sketchpad) in about 1960.
I don't know whether Sutherland invented the linked list, but if he
did, it had to be before he worked on sketchpad. I attended a lecture
about Lisp in 1959 in which McCarthy credited list-processing to
IPL-V, whose roots Newell places in 1954. Sketchpad ran on the TX-2,
which became operational in 1958.
My nomination for a triple-threat computer guy is Vic Vyssotsky. A
great programmer, he invented the first stream-processing language
(BLODI) and bitwise-parallel dataflow analysis. As an architect, he
invented the single underlying address space for multics. As a
manager, he oversaw the building of and later ran the lab that became
AT&T Research. Finally he founded the DEC Cambridge Lab. He was a
subtle diplomat, too, who more than once engineered reversals of
policy without ruffling feathers.
Relative to linked lists, I remember Vic perceptively touting the then
startling usage J=NEXT(J) in Fortran.
Doug
Accidentally sent this only to the person I was replying to.
> I am getting some grief on Twitter too for "omitting" FreeBSD. I
> didn't, but the BSDs don't fit either definition of "Unix". The
> pre-1993 one being "based on AT&T code" -- after all, BSD (4.4 Lite r2
> was it? Before my time!) -- went to a lot of effort to eliminate AT&T
> code.
From what I've seen it's very much a gradual transition; 4.3-Tahoe starts to
have the new code and UCB copyright notices with the predecessor of what we call
the "BSD License" appearing in some of the source files. Then with Reno, a
majority of the userland is open-sourced, and Net/2 is fairly complete. (Net/2
and 4.4BSD-Lite / Lite/2 were lacking a few things.) But even right up until
the end things were in a state of flux.
A few things weren't finished until much later by the FreeBSD, OpenBSD and
NetBSD people.
-uso.
Hello --
Regarding "appliance-ization" (locking down / dumbing down) of commercially-available computer systems, and returning to the history of Unix (in the context of our current era), I am reminded of Ken Thompson's (excellent and humorous) panel presentation at the ACM Turing 100 conference I attended in 2012, imagining Alan Turing being brought to our time and given a current-generation computer system, etc.
The webcast links for the "Systems Architecture" session, etc., on the main conference site, https://turing100.acm.org/, seem to be broken, however the video at this link works for me:
https://dl.acm.org/doi/10.1145/2322176.2322182
(Ken's part starts at ~0:09:28.)
Cheers,
***PSI***
<<<psi(a)valis.com>>>
tuhs-request(a)tuhs.org writes:
[...]
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 18 Jan 2023 17:08:00 +0000
> From: segaloco <segaloco(a)protonmail.com>
> Subject: [TUHS] Re: Maintenance mode on AIX
> To: Clem Cole <clemc(a)ccc.com>
> Cc: tuhs(a)tuhs.org
>
> Apple's unreasonable hardening has been the latest deterrent to my ever wanting to use macOS as a personal driver. I've got a Mac as my daily driver for work, and it can happily stay with work until I can decide how the filesystem is laid out and what folders I, as the root user, can and can't interact with from user land. I own my machine, not Apple.
>
> - Matt G.
> ------- Original Message -------
> On Wednesday, January 18th, 2023 at 8:59 AM, Clem Cole <clemc(a)ccc.com> wrote:
>
>> On Wed, Jan 18, 2023 at 11:39 AM Larry McVoy <lm(a)mcvoy.com> wrote:
>>
>>> Someone once told me that if they had physical access to a Unix box, they
>>> would get root. That has been true forever and it's even more true today,
>>> pull the root disk, mount it on Linux, drop your ssh keys in there or add
>>> a no password root or setuid a shell, whatever, if you can put your hands
>>> on it, you can get in.
>>
>> A reasonable point, but I suspect it really depends on the UNIX implementation. Current macOS is pretty well hardened against this, with their current enclaves and needing to boot home to Apple to get keys if things are not 100% right. Not saying you or I could not, but it basically means the same cracking tricks you need to use for iPhones. It's not as easy as you describe.
>>
>> The ubiquitous Internet/WiFi changed the rules - as you can start to keep some set of keys somewhere else and then encrypt the local volumes. In fact, one of the things macOS does if boot detects that root has been modified (it has a crypto index stored away when it was made read-only) is roll back to the last root snapshot -- since they are all read-only, that works. In fact, it is a PITA to update/fix things like traditional scripts (for instance the scripts in the /etc/periodic area). Basically, they make it really unnatural to change the root file system, make a new snapshot and index. I have yet to see it documented, although with much pain I previously created a procedure that is close -- i.e. it once worked on my pre-Ventura Mac, but currently fails -- so I need to do some more investigation when I can bring this back to the top of the importance/curiosity stack. (I have a less than satisfying end-around for now, so I'm ignoring doing it properly.)
>>
>> Clem
I just stumbled across an old letter, from a VP of Burroughs to me and
Steve Bartels, authorizing $30,000 for a port of Unix to the E-mode stack
machine. I had forgotten getting it.
Burroughs was famed for its stack machines. E-mode was a kind of last gasp
attempt to save the stack architecture, which failed as far as I know, see
this table:
http://jack.hoa.org/hoajaa/Burr126b.html
I worked as a hardware engineer on the A15. I also had been a Unix user for
7 years at that point and kept pointing out how awful the Burroughs CANDE
time-sharing system was, and how much better Unix was. At some point I
guess they asked me to put up or shut up. I got that money, and left
Burroughs a week later for grad school.
Funny note: the A15 was Motorola ECL (MECL), and ran at 16 MHz, considered
fast at that time. We used a technique called "stored logic" which was,
believe it or not, using MECL RAM to map logic inputs to outputs, i.e.
implement combinational logic with SRAM. Kind of nuts, but it worked at the
time. We also used a precursor of JTAG to scan it in. Those of you who know
JTAG have some idea of how fun this had to be.
One side effect of working with MECL is you realized just how well designed
the TI 7400 SSI/MSI parts were ... MECL always just felt like an awkward
family to design with.
Another funny story, pointing to what was about to happen to Burroughs. We
had an app that ran for hours on the stack machine. We quick ported it to a
VAX, started it up, and headed out to lunch -- "this will take a while,
let's go eat." We got to the front door and: "Oh, wait, let me hop back
into the office, I forgot my jacket." And noticed the program was done in
... about 3 minutes. Not 8 hours.
That's when we knew it was game over for Burroughs.
If a picture of this letter would be useful in some archive somewhere, let
me know, I can send it.
The security vulnerability in question could be briefly summarized as,
"Fortran divide-by-zero gives root." I think that was just a specific
manifestation of the underlying problem, though: a failure to sanitize
state after handling a SIGFPE (and possibly other signals as well?).
I have a distinct memory of this, but can no longer find any evidence
for it. Did I just make it up from whole cloth, or was this actually a
thing?
- Dan C.
London and Reiser report about porting the shell that “it required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable.” By the time of SysIII this had greatly improved, but even so, the shell was the most complex port of the SysIII user land so far.
There were three aspects that I found noteworthy:
1. London/Reiser apparently felt strongly about a property of casts. The code argues that casting an l-value should not convert it into a r-value:
<quote from "mode.h">
/* the following nonsense is required
* because casts turn an Lvalue
* into an Rvalue so two cheats
* are necessary, one for each context.
*/
union { int _cheat;};
#define Lcheat(a) ((a)._cheat)
#define Rcheat(a) ((int)(a))
<endquote>
However, Lcheat is only used in two places (in service.c), to set and to clear a flag in a pointer. Interestingly, the 32V code already replaces one of these instances with a regular r-value cast. So far, I’d never thought about this aspect of casts. I stumbled across it, because the Plan 9 compiler did not accept the Lcheat expansion as valid C.
2. On the history of dup2
The shell code includes the following:
<quote from “io.c”>
rename(f1,f2)
REG INT f1, f2;
{
#ifdef RES /* research has different sys calls from TS */
IF f1!=f2
THEN dup(f1|DUPFLG, f2);
close(f1);
IF f2==0 THEN ioset|=1 FI
FI
#else
INT fs;
IF f1!=f2
THEN fs = fcntl(f2,1,0);
close(f2);
fcntl(f1,0,f2);
close(f1);
IF fs==1 THEN fcntl(f2,2,1) FI
IF f2==0 THEN ioset|=1 FI
FI
#endif
}
<endquote>
I’ve checked the 8th edition source, and indeed it supports using DUPFLG to signal to dup() that it really is dup2(). I had earlier wondered why dup2() did not appear in research until 10th edition, but now that is clear. It would seem that the dup of 8th edition is a direct ancestor to dup() in Plan 9. I wonder why this way of doing things never caught on in the other Unices.
3. Halfway to demand paging
I stumbled across this one because I had a bug in my signal handling. From early days onwards, Unix supported dynamically growing the stack allocation, which arguably is a first step towards building the mechanisms for demand paging. It appears that the Bourne shell made another step, catching page faults and expanding the data/bss allocation dynamically:
<quote from “fault.c”>
VOID fault(sig)
REG INT sig;
{
signal(sig, fault);
IF sig==MEMF
THEN IF setbrk(brkincr) == -1
THEN error(nospace);
FI
ELIF ...
<endquote>
This was already present in 7th edition, so it is by no means new in 32V or SysIII -- it had just escaped my attention as a conceptual step in the development of Unix memory handling.
Here’s a stretch, but does anybody have a copy of the 1982-ish C With
Classes Reference Manual kicking around? I can take it in n/troff or a
more modern format if you have it.
> segaloco via TUHS writes:
>> I think that's a good point that scripting problems may be
>> a symptom of the nature of the tools being used in them.
> I think that you're hinting at something different.
> To the best of my recollection, scripting languages were originally
> intended and used for the automation of repetitive personal tasks;
> making it easier for users who found themselves typing the same
> stuff over and over again.
Indeed!
> Somewhere along the line people forgot
> how to use a compiler and began writing large systems in a variety
> of roughly equivalent but incompatible interpreted languages. Can
> one even boot linux without having several different incompatible
> versions of Python installed today? So I don't think that it's the
> nature of the tools; I think that it's people choosing the wrong
> tools for the problems that they're trying to solve.
> Jon
The forgotten compilers were typically used to supply glue
to paste major tools together. The nature of that glue---often
simple data reformatting--inspired tools like sed and awk.
Each use of a tool became a process that saved many minutes
of work that would in a scriptless world be guided by hand,
boringly and unreliably.
Yet glue processes typically did only microseconds of
"real" work. In the name of efficiency, the operations began
to be incorporated directly into the shell. The first
inklings of this can be seen in "echo" and various forms
of variable-substitution making their way into the v7
shell. The phenomenon proliferated into putting what were
typically canned sed one-liners (but not sed itself) into
the shell.
Lots of specializations crowded out universality. A side
effect was an explosion of knowledge required to write
or understand code. Such is the tragedy of "forgetting
compilers".
Doug
Someone dumped a bunch of Unix/Plan 9/FORTRAN/FOCAL documents on github:
https://github.com/kenmartin-unix/UnixDocs
I haven't looked at them closely to see what may be there, but this
may interest some TUHS readers.
- Dan C.
I'd love to get my hands on a 3B2 someday, this'll be cool if I can get it going but that'd be a much more robust machine.
I'm starting to suspect that if there isn't any sort of boot ROM that spits out commentary on the UART, and the port doesn't get flexed until UNIX is up, I may not be able to get very far. I referred to http://bitsavers.trailing-edge.com/pdf/att/3b1/999-809-010IS_UNIX_PC_Remote… for the serial settings and it appears:
9600 baud, 1 stop bit, no parity, 8 data bits
And the relevant pins
Pin 1 - GND
Pin 2 - RX
Pin 3 - TX
Pin 4 - RTS
Pin 5 - CTS
Pin 6 - DSR
Pin 7 - GND
Pin 8 - DCD
Pin 20 - DTR
So I've plugged my USB-TTY GND/RX/TX into the relevant pins and set up the necessary tty settings. The manual then suggests, for null modem operation, shorting pin 4 to pin 5 and tying pins 6, 8, and 20 together, presumably obviating any need for modem signalling from the remote machine for basic serial RX/TX. Unfortunately even with all of this bypassing I get nothing out of the RS-232 port. What I don't know is whether I could even expect something, or if this is unlikely to bear fruit whether the hardware works or not. In any case, if I do get this thing running I'll have a writeup for folks afterwards. If not, then hopefully I can figure out something useful to do with this thing rather than junking it.
- Matt G.
------- Original Message -------
On Tuesday, January 3rd, 2023 at 3:53 PM, rob(a)atvetsystems.com <rob(a)atvetsystems.com> wrote:
> Hello Matt,
>
> I’ve got one of these in my garage. I bought it about twenty years ago as a working system but when I got it home I noticed that the hard disk wasn’t connected but at some point I’d like to get it and my 3b2/300 working.
>
> Regards, Rob.
>
>> On 3 Jan 2023, at 23:27, segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>>
>> And here are some pictures of the guts.
>>
>> https://imgur.com/a/E1ioxZl
>>
>> Various bits inside date this to late 1985. The good news is it at least turns on, but that's about as far as I've gotten with it. The display never turns on, nor do I hear any sounds indicating it tries to start the CRT. The fans kick on and there it stays until I turn it off. I plugged in a USB-TTY to pins 2, 3, and 7 (RX/TX/GND) and listened at 9600 baud 8 bit 1 stop no parity and got nothing. Swapped the RX/TX, still nothing. Of course, that's all predicated on the assumption there's something there to even interact with. I have little faith that whatever UNIX install was on this is extant. Additionally, it didn't come with a keyboard, so if there was some futzing with key combos that would trigger some sort of UART over those lines, I can't do that. I wonder if there are some contacts inside I can just poll for activity with this serial connector, not sure how safe that is...
>>
>> Anywho, the CPU has a bit of corrosion on the surface, not sure how that bodes for the innards, but this is in kinda rough shape either way. I hope I can salvage it but if not, I'm going to at least do some study on the CRT particulars and see if I can extract and keep the monitor from it, been wanting a smaller CRT to have around for a while.
>>
>> - Matt G.
>> ------- Original Message -------
>> On Tuesday, January 3rd, 2023 at 12:20 PM, segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>>
>>> Good day everyone, just starting a thread for yet another project I'll be tinkering on over time. Picked up a (presumably broken/untested) 7300 off eBay to at the very least tear down and get some good pictures of and, with some luck, perhaps get working again.
>>>
>>> https://imgur.com/a/CExzebl
>>>
>>> Here are some pictures of the exterior for starters. I'll update this thread when I've got pictures of the guts and also with any info I can glean regarding whether this might be salvageable. The rust on the back is pretty nasty but I've seen older/worse start up just fine.
>>>
>>> - Matt G.
Good day everyone, just starting a thread for yet another project I'll be tinkering on over time. Picked up a (presumably broken/untested) 7300 off eBay to at the very least tear down and get some good pictures of and, with some luck, perhaps get working again.
https://imgur.com/a/CExzebl
Here are some pictures of the exterior for starters. I'll update this thread when I've got pictures of the guts and also with any info I can glean regarding whether this might be salvageable. The rust on the back is pretty nasty but I've seen older/worse start up just fine.
- Matt G.
Does anyone have the original troff of this document? It was written
by Bill Shannon at Sun, documenting the C style conventions for SunOS.
A PDF rendering is here:
https://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf
Thanks!
- Dan C.
The /bin/sh stuff made me think of an interview question I had for engineers,
one that surprisingly few could pass:
"Tell me about something you wrote that was entirely you, the docs, the
tests, the source, the installer, everything. It doesn't have to be a
big thing, but it has to have been successfully used by at least 10
people who had no contact with you (other than to say thanks)."
Most people fail this. I think the people who pass might look
positively on the v7 sh stuff. But who knows?
As mentioned in the first post on SysIII porting, I was surprised to see how much code was needed to initialise modern hardware and to load an OS. Of course, modern devices are much more capable than the ones of 40 years ago, so maybe my surprise is misplaced. It did raise an interest in the history of Unix system configuration though.
It would seem that 5th Edition already contained a configuration program that generated a few system tables and the ‘low.s’ file with interrupt vectors and alike. Although it steadily grew in sophistication, the approach appears still the same in SysIII. I suppose this is all in line with common practice of the era, with OS’s typically having a ’system generation kit' to combine the pre-linked OS kernel with device drivers and system tables.
SysIII also introduces the "var struct" and the “v” kernel variable that summarise some of the system configuration. I’m not sure whether it has roots in earlier Unix systems (it does not seem to originate from Research), nor what the point of this ‘v’ variable was. Does anybody remember?
One could argue that one of the drivers of the success of CP/M in the 1970’s was its clear separation between the boot ROM, BIOS and BDOS components. As far as I am aware, Unix prior to 1985 never attempted to separate the device drivers from the rest of the kernel code. I am not very familiar with early Xenix; it could be that Microsoft had both the skill and the interest to split Xenix into a standard binary (i.e. the BDOS part) and a device driver binary (i.e. the BIOS part). Maybe the differences in MMUs across the machines of the early 80’s were such that a standard binary could not be done anyway, and separating out the device drivers would have served no purpose. Once the PC became dominant, maybe the point became moot for MS.
It would seem that the next step for Unix in the area of boot, config and device drivers came with Sun’s OpenBoot in 1988 or so. This also appears to be the first appearance of device trees to describe the hardware to the bios and the kernel. Moreover, it would seem to me that OpenBoot is a spiritual ancestor of the modern Risc-V SBI specification. Maybe by 1988 the IO hardware had become sufficiently complex and/or diverse to warrant a break from tradition?
Was there any other notable Unix work on better organising the boot process and the device drivers prior to OpenBoot?
> "Originally the idea of adding command line editing to ksh was
> rejected in the hope that line editing would move into the terminal
> driver." [2]
> I have always wondered, what such a central terminal driver driven
> history/line-editing would have felt like.
You can get a feel for it in Rob's "sam" editor, which works that way.
Doug
at the risk of making a fool of myself - there are several people far better qualified here, however…
my memory is that the plan9 linker could be easily rebuilt to use malloc and free in the traditional style, reducing its memory footprint - though making it much slower.
-Steve
Adam Thornton wrote:
> I mean all I really want for Christmas is a 64-bit v7 with TCP/IP support, a screen editor, and SMP support.
>
> The third one is a solved problem. The second one would not be that hard to adapt, say, uIP 0.9, to v7. That first one would require some work with C type sizes, but getting larger is easier than the reverse. It's that last one.
>
> Having said that...maybe what I really want is 64-bit 4.3 BSD?
>
> I mean, just a Unix, without all the cruft of a modern Linux, but which can actually take advantage of the resources of a modern machine. I don't care about a desktop, or even a graphical environment, I don't care about all the strange syscalls that are there to support particular databases, I don't care much about being a virtualization host.
Luther Johnson wrote:
> I'm in the process of building a system like that for myself, but
> perhaps a little smaller - mine will be based on an embedded
> microprocessor I've developed (so much work still yet to do ! at least a
> year out).
Earlier this year I ported VAX System III to Risc-V, to a simple Allwinner D1 based SBC. This is RV64GC. So far, only the console terminal has been ported.
It turned out that porting Sys III to 64 bit was surprisingly easy; most of the kernel and user land appears to be 64-bit clean. It helps that I am using an LLP64 compiler, though. Apart from networking, Sys III also feels surprisingly modern (for an ancient Unix) - it should get more attention than it does. The hardest work was in porting the VAX memory code to Risc-V page tables (and to a lesser extent, updating libc for the different FP formats).
The code is currently in an ugly state (with debug stuff in commented-out blocks, a mix of ansi and K&R styles, an incoherent kludgy build system, etc.) and the shame stopped me from putting it out on gitlab until I found enough time to clean this up. As there seems to be some interest now, I’ll put it up anyway in the next week or so. There you go Adam, half your wish comes true.
The kernel is about 60KB and most binaries are quite close in size to the VAX equivalents.
My next goals for it are to re-implement the Reiser demand paging (I think I have a good enough view of how that worked, but the proof of the pudding will be in the eating), and to add TCP/IP networking, probably the BBN stack. Making it work on RV32 and exploring early SMP work is also on my interest list.
===
David Arnold wrote:
> I think xv6 does SMP? (param.h says NCPU = 8, at least).
>
> You’d need to add a network stack and a userland, but there are options for those …
For the above, making xv6 work on the D1 board was my first stepping stone, to prove the tool chain and to get the basics right (hardware init, low-level I/O, etc.).
As an educational tool, I am sure that xv6 hits all the right spots, and it certainly does SMP (the D1 is single hart, so I have not tried that myself). I like it a lot in that context. However, as a simple Unix it is useless: from a user-land view it is less capable than LSX. At minimum it needs fixes to make the file system less constrained.
In my view, for SMP Keith Kelleman’s work for Sys-V is probably a better place to start.
Having done the SysIII 64-bit port to a recent Risc-V chip, I realised that whilst it is an interesting exercise by itself -- and maybe even useful to students and educators around the world -- it is not ideal as a research tool for analysing Unix from the early 80’s. The address size difference adds some superfluous porting, and the 100x speed difference can hide critical algorithm constraints. Also the complex IO devices are out of character.
For a Risc-V 32 bit target I’ve settled on an FPGA implementation from the University of Tokyo. I’ve somewhat modified the system to work with the open source Yosys/NextPNR tool chain. It now implements a Linux-capable SoC with a full MMU, a 4-way cache and SD card driver in less than 4,000 lines of plain Verilog (compiling to about 14K LUTs). In a way, the code has a 6th edition feel to it: it covers a real and usable system and the code can be understood in a couple of days -- or a semester for a student who is new to the concepts involved.
https://gitlab.com/r1809/rvsoc/-/tree/main/doc
So far I have Linux and XV6 (https://gitlab.com/r1809/xv6-rv32) running, but have not started on SysIII yet.
Usefully for my use case, this system is not very fast, completing an instruction in 10 clocks on average. Still, when running at 40MHz it is about 2 or 3 times as fast as a VAX-11/780, which is similar to the systems of the mid-80’s. Even at this speed, a single user console Linux is surprisingly usable. By the way, funny to realise that ‘Unix/Linux capable’ has been a marketing slogan for system capability for 40 years now.
There is a short video clip with a demonstration (but running at 100MHz) here: https://youtu.be/Kt_iXVAjXcQ
Due to its simple design, the main CPU only uses some 30% of the cache memory bandwidth and it should not be all that hard to add a second CPU to the system (the CPU already supports the Risc-V atomic operations), and this could be a nice target for studying the early Unix multi-processor designs (e.g. VAX/BSD & 3B2/SVR3).
I find it an intriguing thought that the chip technology of the early 80’s (let’s say the technology used for the Bellmac-32 or M68K) would probably have sufficed to build a CPU similar to the one used in this FPGA design.
As the topic of this post is on a tangent from the focus of this list, I would recommend that any follow-ups not related to the history of Unix are sent off list.
Porting the SysIII kernel to a D1 board (https://www.aliexpress.us/item/3256803408560538.html) began with a port of XV6, in order to test the tool chain and to get comfortable with this target. Michael Engel had already ported XV6 to the D1 chip a few months before (https://www.uni-bamberg.de/fileadmin/sysnap/slides/xv6-riscv.pdf) giving a ready base to work with.
The main new effort was to add code to initialise the DRAM controller and the SD Card interface, and to have a simple boot loader. Such code is available from the manufacturer board support package (BSP), although in this case the DRAM controller code was only available as assembler compiler output and had to be reverse engineered back into C. In general I was surprised to see how big and unwieldy the BSP code is; maybe the code just looks sloppy because it has to deal with all kinds of edge cases - but I can also imagine that it accumulates cruft as it is ported from SoC to SoC by the manufacturer.
The resulting XV6 source tree is here: https://gitlab.com/pnru/xv6-d1
This version automatically boots from the SD Card on the board.
With that out of the way, the ancient Unix port was relatively easy. It would seem to me that the SysIII code base has a lot of clean-up work in it that still pays off today. The code compiles to a 64-bit target with minimal updates, which I think is a compliment to the engineers that worked on it. Probably using a LLP64 compiler also helped. In order to bring something up quickly, I modified the kernel to load ELF binaries, so that I could use proven material from the XV6 port (such as a minimalistic init and shell).
Initially, I just replaced VAX memory management with page table code taken from XV6 (i.e. no VM or swapping). Working with Risc-V page tables gives much simpler code, but I have a deeper appreciation of the VAX paging design now: for the type of code that was run in 1980, the VAX design enables very small page tables with just a few dozen entries. In contrast, for the 3-level page tables of 64-bit Risc-V I end up with 7 pages of page table of 4KB each, or 28KB -- that is larger than the memory image of many SysIII programs. If I move the ‘trampoline’ to just above the stack in virtual memory it could be 5 pages instead of 7, but the overall picture remains the same. The 68020 or ‘030 MMU could be configured to have various page sizes -- this looked byzantine to me when I first saw it, but it makes more sense now.
Next I replaced the VAX scatter paging / partial swapping code, keeping the same methodology. I noticed that there is still confusion over memory management in 32V and SysIII (and implicitly SVR1,R2). The original 32V as described in the London/Reiser paper used V7 style swapping. This code can be found as ‘slowsys’ in the surviving source (https://www.tuhs.org/cgi-bin/utree.pl?file=32V/usr/src/slowsys) It was quickly (Mar vs. Jan 1979) replaced by the scatter loading / partial swapping design already hinted at in the paper (source is in 32V/usr/src/sys). Unfortunately, the “32V uses V7 swapping” meme lives on.
In scatter paging, the pages are no longer allocated contiguously in physical memory; new pages are taken from a free list, and expansion swaps are not usually needed. Also, when a process is swapped out, it is not fully swapped out: just enough pages are written out to make room for the new process. When it is readied to run again, only that partial set needs to be reloaded. In the VAX context, scatter paging and partial swapping are quite effective, and I think competitive with demand paging for the 25-100KB processes that were in use at the time. As I mentioned in the post on the toolchain, the Plan 9 C compiler can easily use 1MB of memory, and in a 4MB core context this trashes the algorithm; it starts to behave much like traditional swapping. The reason for this is that the entire process must be in memory in order to run, and the algorithm cannot recognise that a much smaller working set would suffice. The implicit assumption of small processes can also be seen in the choice to limit partial swaps to 4KB per iteration (8 VAX pages).
For handling processes with a large memory footprint but a small working set a true demand paged VM approach is needed. The simplest such approach appears to be Richard Miller’s work for SVR1 (see June 1984 Usenix conference proceedings, "A Demand Paging Virtual Memory Manager for System V"). This is a very light touch implementation of demand paging and it seems that enough bits and pieces survive to recreate it.
The journey through the memory code made it clear again that in SysIII and before, the memory code is scattered over several locations and not so easy to fathom at first glance. It would seem that in SysV/68 an attempt was made to organise the code into separate files and with a more defined API. It does not seem to have carried through. Maybe this was because the MMU’s of the 1980-1985 era were all too different to be efficiently abstracted into a single framework.
Beyond SysV/68, were there any other attempts in the early 80’s to organise and abstract the kernel memory management code (outside BSD)?
After initially gearing up to use the Motorola 68020 or 68030 as a porting target for a study of Unix in the 1980-1985 era, I reconsidered and used Risc-V as a target instead. As the original RISC and MIPS projects were contemporaneous with early 32-bit Unix (32V, BSD, SysIII and SVr1,r2) it seems appropriate and there is currently considerable interest (hype?) around Risc-V.
From a programming perspective, the Risc-V ISA does not feel (at least to me) all that different from what was current in the early 80’s — the number of WTFs/min is low. The modularity is a pleasant surprise, as is the observation that the 32-bit and 64-bit instruction sets are almost identical and that compressed instructions mingle nicely with full size ones. The MMU design appears straightforward. Maybe this is because the ISA is still relatively new and has not acquired much historical baggage at this point in its lifespan, but it also seems to be a good synthesis of insights gained over the last 4 decades and applied with a sense of minimalism.
At first I was thinking to create a toolchain based on pcc or pcc2 for the SysIII porting effort, based on some preparation I had done when I was still thinking about 68030 as a target (the surviving Blit code includes a pcc-based 68000 compiler and the SysV/68 source archive contains a pcc2-based compiler). Before I got underway with that, I came across a presentation Richard Miller had done about his Risc-V compiler:
https://riscv.org/news/2020/10/a-plan-9-c-compiler-for-rv32gc-and-rv64gc/
Richard was kind enough to share the source code for his Risc-V back-end. The first complication was that the source code assumes that it will be running inside a Plan-9 environment, whereas I was looking for a Unix/Posix environment. Luckily somebody already had assembled the libraries needed for this:
https://github.com/aryx/fork-kencc
I’m not sure where it came from, but I would assume it has some roots in the "Plan-9 from user space" effort. From this work I extracted the minimum needed to get the C compiler working and to build from scratch. The libraries mostly just worked. The compiler was a bit harder: the source code assumes an LLP64 model in a few places, and compiling it with clang (which uses an LP64 model) introduces issues in a handful of places. Other than this initial hurdle, the compiler and tools have worked flawlessly, both for 64-bit code and for 32-bit code, and have been a joy to use. One particular nicety is that Plan 9 style "abstract assembler" source for 64-bit code is even closer to its 32-bit variant than is the case with the mainstream Risc-V assembler syntax. My repo for the tool chain is here:
https://gitlab.com/pnru/riscv-kencc
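The LLP64-vs-LP64 trouble mentioned above is a well-known class of bug; here is a generic illustration (my own example, not code from the kencc sources): a routine written for a 32-bit `long` silently changes behaviour when `long` becomes 64 bits, and the fix is to spell the intended width out with `<stdint.h>` types.

```c
#include <assert.h>
#include <stdint.h>

/* Written for a model where long is 32 bits: a rotate-left by one,
 * with the top bit expected to wrap around into bit 0.  Under LP64
 * the shifted-out bit instead lands at bit 32, so the result quietly
 * changes. */
static unsigned long rotl_long(unsigned long x)
{
    return (x << 1) | (x >> 31);
}

/* The portable fix: name the width explicitly. */
static uint32_t rotl32(uint32_t x)
{
    return (uint32_t)((x << 1) | (x >> 31));
}
```

On an LP64 system `rotl_long(0x80000000UL)` yields `0x100000001`, while `rotl32(0x80000000u)` gives the intended `1`.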
Initially, my expectation was that I could only use these compilers as cross-compilers and that I would need to do a pcc2 version for native compilation at some point. However, when I saw how small and fast the tools were, I experimented with using them on SysIII. Much to my surprise the effort required was absolutely minimal, all that was needed was adding a dozen simple standard functions in libc (see here: https://gitlab.com/pnru/SysIII_rv64/-/tree/master/libc/compat) and adding the ‘dup2' sys call. I had not expected that SysIII was so close to the Unix systems of the 1990’s in this regard. This result inspires ideas for future projects: as I plan to add an 8th edition style file system switch anyway, maybe it will not be all that hard to make the Plan-9 debugger work on this “SysIII+” as well.
Another observation has been that the code size of binaries compiled for Risc-V by this tool chain is almost the same as those compiled for the VAX by pcc (the Risc-V ones are maybe 10-20% larger). This is using the compressed instructions where possible. This is I think another indication that both the Risc-V ISA and the tool chain are quite well done.
The one less positive surprise has been the memory use of the compiler. Even on a relatively simple program file it will quickly use 1 megabyte or more of ram. I understood from Richard that this is because the compiler only malloc()’s and never free()’s by design. This has been a mixed blessing. Such large memory images don’t work all that well with the "scatter paging + partial swapping" memory management of SysIII when memory is constrained to say 4MB of core to mimic the systems of the era. On the other hand, parallel compiling the kernel on SysIII itself heavily exercises the partial swapping code and has been a great test case for debugging.
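The malloc()-and-never-free() style is essentially arena (or pool) allocation; a minimal sketch of the idea follows (my own illustration, not the compiler's actual allocator). It trades a monotonically growing footprint for very cheap allocation and zero cleanup work: the whole image is simply discarded at exit.

```c
#include <assert.h>
#include <stdlib.h>

#define ARENA_CHUNK (64 * 1024)

struct arena {
    char  *cur;     /* next free byte in the current chunk */
    size_t left;    /* bytes remaining in the current chunk */
    size_t total;   /* total bytes ever handed out */
};

/* Hand out memory by bumping a pointer; grab a fresh chunk from
 * malloc() when the current one runs dry.  Nothing is ever freed,
 * so peak memory use grows with every allocation. */
static void *arena_alloc(struct arena *a, size_t n)
{
    void *p;

    n = (n + 7) & ~(size_t)7;                   /* 8-byte alignment */
    if (n > a->left) {
        size_t chunk = n > ARENA_CHUNK ? n : ARENA_CHUNK;
        a->cur = malloc(chunk);
        if (a->cur == NULL)
            return NULL;
        a->left = chunk;
    }
    p = a->cur;
    a->cur  += n;
    a->left -= n;
    a->total += n;
    return p;
}
```

Each allocation is a pointer bump rather than a free-list search, which is part of why such compilers are fast; the cost is exactly the large resident image described above.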
Many thanks to Ken, Rob, Richard and all the others who created this fine tool chain!
> James Johnston:
>> Yeah, but Rob, where was Fred? I was there in Acoustics Research (not
>> 127!) then, using R70 for UCDS.
> Rob Pike:
> Not in (1)127 yet. He was transferred in some time after I arrived. Not
> sure quite when. Mid-80s maybe.
> =====
> Early-to-mid 1980s. ftg was already there when I interviewed in early 1984.
> Norman Wilson
In 1980 Fred was a stalwart of the computer center. There he exhibited
great creativity, including the invention of "quest" for sniffing out
security lapses throughout the BTL computer network. His findings
underpinned the headline claim of the Labs' first computer-security
task force (1982), "It is easy and not very risky to pilfer data from
Bell Labs computers".
Doug
James Johnston:
> Yeah, but Rob, where was Fred? I was there in Acoustics Research (not
> 127!) then, using R70 for UCDS.
Rob Pike:
Not in (1)127 yet. He was transferred in some time after I arrived. Not
sure quite when. Mid-80s maybe.
=====
Early-to-mid 1980s. ftg was already there when I interviewed in early 1984.
Norman Wilson
Toronto ON
From [1]:
The X11 Conservancy Project (X11CP) pulls together the disparate set of programs which were being written between the very late 80s, and early 90s -- usually for Unix and Linux.
…snip…
As the Internet expanded and Linux distributions became established, certain FTP sites were largely used to host some of the more established programs, as well as those found in the LSM.
…snip…
But the early dawn of free software, especially around applications written for X11, using Motif and XT and other widget libraries has now mostly been consigned to obscurity.
With X11 itself now under threat of no longer being developed in favour of Wayland, these applications are going to be harder to run and be discovered.
Hence, the X11CP is designed to be a central place for hosting the sources of these applications, and to showcase their unique history and properties. In keeping this software active, it will help keep an important historical point alive.
[1] https://x11cp.org/
— Michelangelo
> From: Phil Budne
> The cover page has:
> ...
> Upper right corner:
> PA-1C300-01
> Section 1
> Issue 1, January 1976
> AT&TCo SPCS
I have a very similar manual; I got it a long time ago, and no longer recall
how I came by it. Minor difference: mine is for PD-1C301-01, and at the
bottom of the page, it says "ISSUE 1 1/30/76", followed by a prominent trade
secret notice.
TUHS has a copy of this version, here:
https://www.tuhs.org/Archive/Distributions/USDL/unix_program_description_ja…
The README file in that directory:
https://www.tuhs.org/Archive/Distributions/USDL/README
speculates that "this is PWB/1.0" but admits "this has not yet been
confirmed". It's not PWB1, it's stock V6. If you look at the writeup of
sys1$exec(), on pg. 39 of the PDF, you'll see it describing how arguments are
copied into a disk buffer; that right there is the tip-off. In PWB1 (whose
source we do have):
https://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/sys/os/sys1.c
you'll see that PWB1 accumulates the arguments in a chunk of swap space.
V6 _does_ use a disk buffer for this:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/sys1.c
So this is for V6.
Noel
The October 1984 BSTJ article by Felton, Miller and Milner
https://www.bell-labs.com/usr/dmr/www/otherports/ibm.pdf
describes an AT&T port of UNIX to System/370 using TSS/370
underpinnings as the "Resident System Supervisor" and used as the 5ESS
switching system development environment.
I also found mention at http://www.columbia.edu/~rh120/ch106.x09
chapter 9 of http://www.columbia.edu/~rh120/ with footnote 96:
Ian Johnstone, who had been the tutor at University of New
South Wales working with Professor John Lions, was one of the
researchers invited to Bell Labs. He managed the completion at
AT&T Bell Labs of the port of Unix to the IBM 370 computer. See
"Unix on Big Iron" by Ian Johnstone and Steve Rosenthal, UNIX
Review, October, 1984, p. 26. Johnstone also led the group that did
the port to the AT&T 2B20A multiprocessor system.
I found
https://ia902801.us.archive.org/3/items/Unix_Review_1984_Oct.pdf/Unix_Revie…
"BIG UNIX: The Whys and Wherefores" (pdf p.24), which only offers rationale.
Also:
"IBM's own involvement in Unix can be dated to 1979, when it
assisted Bell Labs in doing its own Unix port to the 370 (to
be used as a build host for the 5ESS switch's software). In
the process, IBM made modifications to the TSS/370 hypervisor
to better support Unix.[12]"
at https://en.wikipedia.org/wiki/IBM_AIX#cite_ref-att-s370-unix_12-0
Is there any other surviving documentation about the system?
Any recall of what branch of AT&T UNIX it was based on?
Thanks!
Phil
> From: Bakul Shah
> There is a further para:
> Reducing external memory fragmentation to zero by utilizing the VAX-
> 11/780 memory mapping hardware for scatter loading is high on the list
> of things to do in the second implementation pass.
I'm curious as to exactly what is meant by "external memory"? They must mean
memory on the Synchronous Backplane Interconnect:
http://gunkies.org/wiki/Synchronous_Backplane_Interconnect
I.e. what most of us would call 'main memory'.
If this code didn't allocate main memory by pages, but rather in
process-sized blocks, it sounds much like 32V (or is it 32V that's
being discussed; I thought this thread had moved on to the Reiser demand
paging version - my apologies if I've gotten lost).
Also, this note:
http://gunkies.org/wiki/Talk:CB-UNIX
from Dale DeJager (which he kindly gave me permission to post) gives a fair
amount of detail on the relationship between the Research and CB/UNIX
versions, with a brief mention of USG - precisely the era, and relationships,
that are so poorly documented. Interestingly, he indicates that the early
versions of what later became CB/UNIX used something in the V1/V3 range (V4
was the first one in C), so it dates back earlier than most people apparently
assume.
If anyone else has any first-hand notes (i.e.from people who were there at the
time), about the relationship between all the early systems, for which the
author has given permission to post it, please send it to me and I will
add it to the appropriate article on the CHWiki.
Probably the most needed is more about the roots of USG; Dale has filled in
CB/UNIX, and the roots of PWB are covered fairly well in the BSTJ article
on it:
https://archive.org/details/bstj57-6-2177
at least, for PWB1. Anything that covers the later PWBs would likewise be
gratefully received.
I suppose I should also write up the relationships of the later UNIXen - 32V
and its descendants too - any material sent to me about them will be most
gratefully received. (If anyone wants a CHWiki account, to write it up
themselves, please let me know).
Noel
Good evening folks. I'm starting a new thread to pass along info as I scan materials from the 3B20S manual that I picked up. I figured it'd be easier to trickle out the bits folks ask me for first and then continue to scan the rest, that way anyone looking to sink their teeth into something specific can be sated first.
With that, the first scan (and frankly one of my favorite things about this manual) is the cover itself: https://commons.wikimedia.org/wiki/File:UNIX4.1UsersManualCover.png
Someone had mentioned the idea of making this into a poster and I gotta say, I'd gladly put one up. The image definitely would need some cleanup for that, I just scanned it like it came, haven't tried to clean up any of the wear of time yet. Sadly, the back cover isn't emblazoned with a big Bell logo like the 3.0 and 5.0 (Bell variant) manuals, so scanning that would be a boring white piece of cardstock.
Anywho, the next round which may come later this evening or sometime this weekend is going to be various *ROFF-related documents, so documents like troff(1), mm(5), etc.
- Matt G.
> From: Bakul Shah
> Is there a publicly available description of Reiser's VM system? I
> found "A Unix operating system for the DEC VAX 11/780 Computer" by
> London & Reiser which includes a long paragraph on VM (included below)
That para is basically all about the VAX paging hardware; it doesn't say
anything about how that (any :-) Unix actually uses it.
Noel
Good morning all. I've been doing some historical research on the UUCP cu utility this morning and have come across a little discrepancy between the various UNIX streams I was wondering if someone could illuminate.
So cu as of V7 supported the ~$ escape, a means of calling a local procedure and emitting stdout over the TTY line to the remote machine, all fine and good for packaging a character stream to emit. However, what I'm not finding in that age of documentation is any means of requesting std*in* from the TTY line as input to a local procedure (in essence running a text filter or handshake-driven protocols over cu). The context in which I'm researching this is integrating cu into my bare-metal SBC programming using XMODEM so I can rest a little easier my process is based on tools I'll probably find in most places.
So old fashioned Mike Lesk-era cu only seems to do stdout redirect, but no stdin. I did some further digging and it looks like different UUCP implementations cracked this nut with different escapes, with BSD eventually going with ~C and Taylor UUCP opting for ~+. Checking the current illumos manual pages (for a SVR4-ish example) doesn't turn up any command for this. This is indicative of there never being an agreed-upon mechanism for doing this, although I could see this being a very useful mechanism.
What I'm curious about is whether the lack of a bi-directional redirect in early cu reflects a lack of need for that sort of functionality at the time, or whether such matters were handled through a different mechanism. One thought I did have is that there wasn't file locking at the time (right?), so theoretically nothing would prevent one from using a cu session on one terminal to send interactive commands, and a second session, using fd redirects in the shell, to run filters/protocols separately of the interactive stream.
- Matt G.
https://bitsavers.org/pdf/usenix/USENIX_1986_Winter_Technical_Conference_Pr…
Here is the URL
(All I did was search for ‘“Unix on big iron” usenix proceedings’)
On Mon, Dec 19, 2022 at 4:59 PM James Frew <frew(a)ucsb.edu> wrote:
> Hello Marc,
>
> Where did you find the 1986 USENIX proceedings?
>
> Reason I'm asking is, I have a pile of pre-1991 USENIX proceedings that I
> haven't found online, and I'm planning to get them scanned. It would be
> great if I didn't have to go the trouble :-)
>
> Thanks,
> /James Frew
> On 2022-12-19 13:36, Marc Donner wrote:
>
>
> There was a track of USENIX 1986 called "UNIX on Big Iron." Peter Capek
> of IBM was the chair and Gene Miya and Jim Lipkis rounded out the program
> committee. The proceedings are available.
>
> --
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
> To what extent were the Unix folks at Bell Labs already familiar with DEC systems before the PDP-7?
Some awareness, but no hands-on experience,
> was any thought given to trying to get a 360 system?
Very serious thought. However, virtual memory was a non-negotiable
desideratum, to which Gene Amdahl was implacably opposed because
demand paging would devastate hardware performance. Soon after GE got
the nod, IBM revealed Gerrit Blaauw's skunk-works project, the 360/67,
but by then the die had been cast. Michigan bought one and built a
nice time-sharing system that was running well before Multics.
Doug
I think this cited quote from
https://www.joelonsoftware.com/2001/12/11/ is urban legend.
Why do C strings [have a terminating NUL]? It’s because the PDP-7
microprocessor, on which UNIX and the C programming language were
invented, had an ASCIZ string type. ASCIZ meant “ASCII with a Z (zero)
at the end.”
This assertion seems unlikely since neither C nor the library string
functions existed on the PDP-7. In fact the "terminating character" of
a string in the PDP-7 language B was the pair '*e'. A string was a
sequence of words, packed two characters per word. For odd-length
strings half of the final one-character word was effectively
NUL-padded as described below.
One might trace null termination to the original (1965) proposal for
ASCII, https://dl.acm.org/doi/10.1145/363831.363839. There the only
role specifically suggested for NUL is to "serve to accomplish time
fill or media fill." With character-addressable hardware (not the
PDP-7), it is only a small step from using NUL as terminal padding to
the convention of null termination in all cases.
Ken would probably know for sure whether there's any truth in the
attribution to ASCIZ.
Doug
> From: Bob Supnik
> The PDP11 had .ASCIZ, starting with Macro11 in 1972.
I was just about to report on my results, after a tiny bit of digging, which
included this. The important datum is that PAL-11 (in DEC-11-GGPB-D, "paper
tape software", April 1970, revised March 1971), which _preceded_ Macro-11,
_does not_ include .ASCIZ (although it has .ASCII). My oldest Macro-11 book
(DEC-11-OMACA-B-D, "BATCH-11/DOS-11 Assembler (MACRO-11)", April 1972, revised
March 1973) does have .ASCIZ. So in the DEC PDP-11 universe, it dates from
sometime between 1970 and 1972.
I'm not sure if Bell had any of the DEC paper tape software: "In early 1970 we
proposed acquisition of a PDP-11, which had just been introduced by
Digital. ... an order for a PDP-11 was placed in May. The processor arrived at
the end of the summer, but the PDP-11 was so new a product that no disk was
available until December. In the meantime, a rudimentary, core-only version of
Unix was written using a cross-assembler on the PDP-7." So the .ASCIZ in
Macro-11 wasn't until a couple of years later.
Noel
[Resend from my subscribed address, as the list is subscribers-only, it seems]
In C, most syscalls and libc functions use strings, that is, zero or more
non-NUL characters followed by a NUL.
However, there are a few cases where other incompatible character constructs are
used. A few examples:
- utmpx(5): Some of its fields use fixed-width char arrays which contain a
sequence of non-NUL characters, and padding of NULs to fill the rest (although
some systems only require a NUL to delimit the padding, which can then contain
garbage).
- Some programs use just a pointer and a length to determine sequences of
characters. No NULs involved.
- abstract sockets: On Linux, abstract Unix socket names are stored in a
fixed-width array, and all bytes are meaningful (up to the specified size), even
if they are NULs. The only special property is that the first byte is NUL.
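For concreteness, the constructs above can be contrasted in a few lines of C (the 8-byte field width is hypothetical, merely in the spirit of utmpx(5)):

```c
#include <assert.h>
#include <string.h>

#define NAMESZ 8

/* A C string: non-NUL bytes followed by exactly one terminating NUL. */
static const char cstr[] = "sh";

/* A utmpx-style fixed-width field: non-NUL bytes, NUL-padded to the
 * end of the array; a name of exactly NAMESZ bytes has no NUL at all. */
static char field[NAMESZ];

/* A (pointer, length) pair: no NUL needed, and NULs may be data. */
struct mem { const char *p; size_t len; };

static void field_set(char *dst, const char *src)
{
    size_t n = strnlen(src, NAMESZ);
    memset(dst, 0, NAMESZ);
    memcpy(dst, src, n);        /* silently truncates at NAMESZ */
}
```

The full-width case is exactly where naive `strcpy`/`printf("%s")` handling of such fields goes wrong, since there is no terminator to find.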
Since those are only rare cases, those constructs don't seem to have a name;
some programmers call them strings (quite confusingly).
Has there been any de-facto standard (or informal naming) to call those things,
and differentiate them?
Thanks,
Alex
--
<http://www.alejandro-colomar.es/>
All,
I recently migrated my blog - it's new and improved, of course:) over to
https://decuser.github.io. When I saw Warren was awarded Usenix's "The
Flame" last week, I thought it appropriate that one of my first new blog
posts celebrate Warren and his well deserved award.
Here's the post:
https://decuser.github.io/unix/2022/12/15/usenix-flame-award-2022.html
Thanks again to Warren, both for the initiative, and for the maintenance
of one of my all time favorite archives.
Thanks,
Will
Having recently emeritated, I'm clearing out my university office and
giving away hundreds of books. It occurs to me that some of them may be
of interest to some of the folks on this list. (Before you ask, no, you
can't have my original printed-on-Kleenex versions of the Lions notes...)
Most of the books are listed here:
https://www.librarything.com/catalog/james.frew . They're (alas) utterly
uncategorized, but include a fair amount of UNIX, C, and general CS stuff.
I also have some manuals and USENIX conference proceedings even
LibraryThing couldn't locate; they're listed in the attached Markdown
file. None of these proceedings are online at usenix.org, so I'd be
stoked if someone volunteered to scan them.
If you want any of them, let me know, and we'll figure out some way to
reimburse me for shipping them. (No charge for the "content".) Or if
you're close enough to Santa Barbara, come and get 'em.
Cheers,
/Frew <https://purl.org/frew>
I vaguely remember having read here about 'clever code' which took into
account the time a magnetic drum needed to rotate in order to optimise
access.
Similarly I can imagine that with resource constraints you sometimes need to
be clever in order to get your program to fit. Of course, any such
cleverness needs extra documentation.
I only ever programmed in user space but even then without lots of comment
in my code I may already start wondering what I did after only a few months
past.
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
Wow, this brings back memories. When I was a kid I remember visiting
a guy who had a barn full of computers in or around Princeton, N.J.
There was a Burroughs 500, a PB 250, and a PDP-8. The 500 was a vacuum
tube and nixie display machine. That sucker used a lot of neon, and I
seem to remember that it used about $100 worth of electricity in 1960s
dollars just to warm it up. I think that the PB 250 was one of the
first machines built using transistors. I assume that all of you know
what a PDP-8 is. I remember using the PDP-8 using SNAP (simple numeric
arithmetic processor) to crank out my math homework. Note that the PB
250 also had SNAP, but in that case it was their assembler.
Some of the first serious programming that I did was later at BTL on
516-TSS using FSNAP (floating-point SNAP) written by Heinz. Maybe he
can fill us in on whether it was derived from SNAP.
Anyway, I could only visit the place occasionally because it was far
from home. Does anyone else out there know anything about it? It's a
vague memory brought back by the mention of the 250.
Jon
> From: Stuff Received
> I had always thought of a delay line as a precursor to a register (or
> stack) for storing intermediate results. Is this not an accurate way of
> thinking about it?
No, not at all.
First: delay lines were a memory _technology_ (one that was inherently
serial, not random-access). They preceded all others.
Second: registers used to have two aspects - one now gone (and maybe the
second too). The first was that the _technology_ used to implement them
(latches built out of tubes, then transistors) was faster than main memory -
a distinction now mostly gone, especially since caches blur the speed
distinction between today's main memory and registers. The second was that
registers, being few in number, could be named with only a few bits, so
that naming one consumed a small share of the bits in an instruction. (This one
still remains, although instructions are now so long it's probably less
important.)
Some delay-line machines had two different delay line sizes (since size is
equivalent to average access time) - what one might consider 'registers' were
kept in the small ones, for fast access at all times, whereas main memory
used the longer ones.
Noel
> From: Bakul Shah
> one dealt with it by formatting the disk so that the logical blocks N &
> N+1 (from the OS PoV) were physically more than 1 sector apart. No
> clever coding needed!
An old hack. ('Nothing new', and all that.) DEC Rx01/02 floppies used the
same thing, circa 1976.
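For anyone who hasn't met the trick: under a 2:1 interleave, logically consecutive sectors are laid out two physical slots apart, giving the software a full sector-time to digest each transfer before the next wanted sector rotates under the head. A toy mapping (my own sketch; the real RX01/02 format also applied a per-track skew, omitted here):

```c
#include <assert.h>

#define SPT 26   /* sectors per track on an RX01-style floppy */

/* Map a logical sector 0..SPT-1 to a physical slot under a 2:1
 * interleave: the even slots are filled on the first pass around
 * the track, the odd slots on the second. */
static int interleave2(int logical)
{
    return (logical * 2) % SPT + (logical * 2) / SPT;
}
```

Logical sectors 0, 1, 2, ... land in physical slots 0, 2, 4, ..., so the OS reads "adjacent" blocks without waiting a full revolution between them.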
Noel
After my posting on Sat, 10 Dec 2022 17:42:14 -0700 about the recent
work on kermit 10.0, some readers asked why a serial line connection
and file transfer tool was still of interest, and a few others
responded with use cases.
Modern kermit has for several years supported ssh connections, and
Unicode, as well: here is a top-level command list:
% kermit
(~/) C-Kermit>? Command, one of the following:
add define hangup msleep resend telnet
answer delete HELP open return touch
apc dial if orientation rlogin trace
array directory increment output rmdir translate
ask disable input pause run transmit
askq do INTRO pdial screen type
assign echo kcd pipe script undeclare
associate edit learn print send undefine
back enable LICENSE pty server version
browse end lineout purge set void
bye evaluate log push shift wait
cd exit login pwd show where
change file logout quit space while
check finish lookup read ssh who
chmod for mail receive statistics write
clear ftp manual redial status xecho
close get message redirect stop xmessage
connect getc minput redo SUPPORT
convert getok mget reget suspend
copy goto mkdir remote switch
date grep mmove remove tail
decrement head msend rename take
or one of the tokens: ! # ( . ; : < @ ^ {
Here are the descriptions of connection and character set translations:
(~/) C-Kermit>help ssh
Syntax: SSH [ options ] <hostname> [ command ]
Makes an SSH connection using the external ssh program via the SET SSH
COMMAND string, which is "ssh -e none" by default. Options for the
external ssh program may be included. If the hostname is followed by a
command, the command is executed on the host instead of an interactive
shell.
(~/) C-Kermit>help connect
Syntax: CONNECT (or C, or CQ) [ switches ]
Connect to a remote computer via the serial communications device given in
the most recent SET LINE command, or to the network host named in the most
recent SET HOST command. Type the escape character followed by C to get
back to the C-Kermit prompt, or followed by ? for a list of CONNECT-mode
escape commands.
Include the /QUIETLY switch to suppress the informational message that
tells you how to escape back, etc. CQ is a synonym for CONNECT /QUIETLY.
Other switches include:
/TRIGGER:string
One or more strings to look for that will cause automatic return to
command mode. To specify one string, just put it right after the
colon, e.g. "/TRIGGER:Goodbye". If the string contains any spaces, you
must enclose it in braces, e.g. "/TRIGGER:{READY TO SEND...}". To
specify more than one trigger, use the following format:
/TRIGGER:{{string1}{string2}...{stringn}}
Upon return from CONNECT mode, the variable \v(trigger) is set to the
trigger string, if any, that was actually encountered. This value, like
all other CONNECT switches applies only to the CONNECT command with which
it is given, and overrides (temporarily) any global SET TERMINAL TRIGGER
string that might be in effect.
Your escape character is Ctrl-\ (ASCII 28, FS)
(~/) C-Kermit>help translate
Syntax: CONVERT file1 cs1 cs2 [ file2 ]
Synonym: TRANSLATE
Converts file1 from the character set cs1 into the character set cs2
and stores the result in file2. The character sets can be any of
C-Kermit's file character sets. If file2 is omitted, the translation
is displayed on the screen. An appropriate intermediate character-set
is chosen automatically, if necessary. Synonym: XLATE. Example:
CONVERT lasagna.txt latin1 utf8 lasagna-utf8.txt
Multiple files can be translated if file2 is a directory or device name,
rather than a filename, or if file2 is omitted.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Clem Cole mentions kermit in connection with the question raised about
the uses of the cu utility.
As an FYI, Kermit's author, Frank da Cruz, is preparing a final
release, version 10.0, and I've been working with him on testing
builds in numerous environments. There are frequent updates during
this work, and the latest snapshots can be found at
https://kermitproject.org/ftp/kermit/pretest/
The x-YYYYMMDD.* bundles do not contain a leading directory, so be
careful to unpack them in an empty directory. The build relies on a
lengthy makefile with platform-specific target names, like irix65,
linux, and solaris11: the leading comments in the makefile provide
further guidance.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Good day all, this may be COFF instead, but I'm not joined over there yet, might need some Warren help/approval.
In any case, received that 3B20S 4.1 manual in good shape, unpacked it, and out fell a little tri-fold titled "The Office Automation System (OAS) Editor-Formatter (ef) Reference Card" emblazoned with the usual Bell Laboratories, non-disclosure note about the Bell System, and a nice little picture of a terminal I can't identify as well as the full manual for this OAS leaning against it: "The Office Automation System (OAS)" with a nice big Bell logo at the bottom of the spine.
The latter is likely a manual I spotted in a video once and couldn't make out the name/title at the time, thought I was seeing another long-lost UNIX manual. I've never heard of this before, and Google isn't turning up much as Office Automation System appears to be a general industry term.
I seem to recall hearing about ef itself once or twice, some sort of pre-vi screen editor from Bell methinks? Not super familiar with it though, I just seem to recall reading about that before somewhere.
Anywho, dealing with a move in the near future that is hopefully into a home I own, so pretty distracted from that scanning I keep talking about, but hopefully when I'm settled in in a few months I can setup a proper scan bench in my new place and really go to town on things.
- Matt G.
Exciting development in the process of finding lost documentation, just sealed this one on eBay: https://www.ebay.com/itm/385266550881?mkcid=16&mkevt=1&mkrid=711-127632-235…
After the link is a (now closed) auction for a Western Electric 3B20S UNIX User's Manual Release 4.1, something I thought I'd never see and wasn't sure actually existed: print manuals for 4.x.
Once received I'll be curious to see what differences are obvious between this and the 3.0 manual, and this should be easy to scan given the comb binding. What a nice cover too! I always expected if a 4.x manual of some kind popped up it would feature the falling blocks motif of the two starter package sets of technical reports, but the picture of a 3B20S is nice. How auspicious given the recent discussion on the 3B series. I'm particularly curious to see what makes it specifically a 3B20S manual, if that's referring to it only having commands relevant to that one or omitting any commands/info specific to DEC machines.
Either way, exciting times, this is one of those things that I had originally set out to even verify existed when I first started really studying the history of UNIX documentation, so it's vindicating to have found something floating around out there in the wild. Between this and the 4.0 docs we now should have a much clearer picture of that gulf between III and V.
More to come once I receive it!
- Matt G.
I finally got myself a decent scanner, and have scanned my most prized
relic from my summer at Bell Labs - Draft 1 of Kernighan and Ritchie's "The
C Programming Language".
It's early enough that there are no tables of contents or index; of
particular note is that "chapter 8" is a "C Reference Manual" by Dennis
dated May 1, 1977.
This dates from approx July 1977; it has my name on the cover and various
scribbles pointing out typos throughout.
Enjoy!
https://drive.google.com/drive/folders/1OvgKikM8vpZGxNzCjt4BM1ggBX0dlr-y?us…
p.s. I used a Fujitsu FI-8170 scanner, VueScan on Ubuntu, and pdftk-java
to merge front and back pages.
(Recently I mentioned to Doug McIlroy that I had infiltrated IBM East
Fishkill, reputedly one of the largest semiconductor fabs in the world,
with UNIX back in the 1980s. He suggested that I write it up and share it
here, so here it is.)
In 1986 I was working at IBM Research in Yorktown Heights, New York. I had
rejoined in 1984 after completing my PhD in computer science at CMU.
One day I got a phone call from Rick Dill. Rick, a distinguished physicist
who had, among other things, invented a technique that was key to
economically fabricating semiconductor lasers, had been my first boss at
IBM Research. While I’d been in Pittsburgh he had taken an assignment at
IBM’s big semiconductor fab up in East Fishkill. He was working to make
production processes there more efficient. He was about to initiate a
major project, with a large capital cost, that involved deploying a bunch
of computers and he wanted a certified computer scientist at the project
review. He invited me to drive up to Fishkill, about half an hour north of
the research lab, to attend a meeting. I agreed.
At the meeting I learned several things. First of all, the chipmaking
process involved many steps - perhaps fifty or sixty for each wafer full of
chips. The processing steps individually were expensive, and the amount
spent on each wafer was substantial. Because processing was imperfect, it
was imperative to check the results every few steps to make sure everything
was OK. Each wafer included a number of test articles, landing points for
test probes, scattered around the surface. Measurements of these test
articles were carried out on a special piece of equipment, I think bought
from Fairchild Semiconductor. It would take in a boat of wafers (identical
wafers were grouped together on special ceramic holders called boats, for
automatic handling, and all processed identically) and feed each wafer to
the test station, and probe each test article in turn. The result was
about a megabyte of data covering all of the wafers in the boat.
At this point the data had to be analyzed. The analysis program comprised
an interpreter called TAHOE along with a test program, one for each
different wafer being fabricated. The results indicated whether the wafers
in the boat were good, needed some rework, or had to be discarded.
These were the days before local area networking at IBM, so getting the
data from the test machine to the mainframe for analysis involved numerous
manual steps and took about six hours. To improve quality control, each
boat of wafers was only worked during a single eight-hour shift, so getting
the test results generally meant a 24 hour pause in the processing of the
boat, even though the analysis only took a couple of seconds of time on the
mainframe.
IBM had recently released a physically small mainframe based on customized
CPU chips from Motorola. This machine, the size of a large suitcase and
priced at about a million dollars, was suitable to locate next to each test
machine, thus eliminating the six hour wait to see results.
Because there were something like 50 of the big test machines at the
Fishkill site, the project represented a major capital expenditure. Getting
funding of this size approved would take six to twelve months, and this
meeting was the first step in seeking this approval.
At the end of the meeting I asked for a copy of the manual for the TAHOE
test language. Someone gave me a copy and I took it home over the weekend
and read through it.
The following Monday I called Rick up and told him that I thought I could
implement an interpreter for the TAHOE language in about a month of work.
That was a tiny enough investment that Rick simply wrote a letter to Ralph
Gomory, then head of IBM Research, to requisition me for a month. I told
the Fishkill folks that I needed a UNIX machine to do this work and they
procured an RT PC running AIX 1. AIX 1 was based on System V. The
critical thing to me was that it had lex, yacc, vi, and make.
They set me up in an empty lab room with the machine and a work table.
Relatively quickly I built a lexical analyzer for the language in lex and
got an approximation to the grammar for the TAHOE language working in
yacc. The rest was implementing the functions for each of the TAHOE
primitives.
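For flavor, the lexing stage that lex generated can be sketched in Python. The token set below is purely illustrative — the real TAHOE grammar is not public, so these token kinds are my own assumption, not TAHOE's:

```python
import re

# Illustrative token set only -- the actual TAHOE tokens are an assumption.
TOKEN_RE = re.compile(r"""
      (?P<NUMBER>\d+(?:\.\d+)?)   # integer or decimal literal
    | (?P<IDENT>[A-Za-z_]\w*)     # identifier
    | (?P<OP>[-+*/=()])           # single-character operator
    | (?P<SKIP>\s+)               # whitespace, discarded
""", re.VERBOSE)

def tokenize(src):
    """Yield (kind, text) pairs -- the role lex played in the port."""
    pos = 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if m is None:
            raise SyntaxError("bad character at offset %d" % pos)
        pos = m.end()
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())
```

A yacc grammar would then consume this token stream; the per-primitive functions hang off the grammar's actions.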
I adopted rigorous test automation early, a practice people now call test
driven development. Each time I added a capability to the interpreter I
wrote a scrap of TAHOE code to test it along with a piece of reference
input. I created a test target in the testing Makefile that ran the
interpreter with the test program and the reference input. There were four
directories, one for test scripts, one for input data, one for expected
outputs, and one for actual outputs. There was a big make file that had a
target for each test. Running all of the tests was simply a matter of
typing ‘make test’ in the root of the testing tree. Only if all of the
tests succeeded would I consider a build acceptable.
As I developed the interpreter I learned to build tests also for bugs as I
encountered them. This was because I discovered that I would occasionally
reintroduce bugs, so these tests, with the same structure (test scrap,
input data, reference output, make target) were very useful at catching
backsliding before it got away from me.
After a while I had implemented the entire TAHOE language. I named the
interpreter MONO after looking at the maps of the area near Lake Tahoe and
seeing Mono Lake, a small lake nearby.
[Image: Lake Tahoe and Mono Lake, with walking routes between them. Source: Google Maps]
At this point I asked my handler at Fishkill for a set of real input data,
a real test program, and a real set of output data. He got me the files
and I set to work.
The only tricky bit at this stage was the difference in floating point
between the RT PC, which used the recently adopted IEEE 754
floating-point standard, and the idiosyncratic floating point implemented in
the System/370 mainframes at the time. The problem was that the LSB
rounding rules were different in the two machines, resulting in mismatches
in results. These mismatches were way below the resolution of the actual
data, but deciding how to handle this was tricky.
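A common resolution for such last-bit disagreements — my own sketch of the general technique, not necessarily what was done at the time — is to compare results against a tolerance well below the resolution of the measured data, rather than bit for bit:

```python
def close_enough(a, b, rel_tol=1e-6, abs_tol=1e-12):
    """True if a and b differ only by an amount far below the
    measurement resolution, masking LSB rounding differences
    between the two floating-point implementations."""
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```

The tolerances here are illustrative; in practice they would be set from the known precision of the test-probe measurements.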
At this point I had an interpreter, MONO, for the TAHOE language that took
one specific TAHOE program, some real data, and produced output that
matched the TAHOE output. Almost done.
I asked my handler, a lovely guy whose name I am ashamed I do not remember,
to get me the regression test suite for TAHOE. He took me over and
introduced me to the woman who managed the team that was developing and
maintaining the TAHOE interpreter. The TAHOE interpreter had been under
development, I gathered, for about 25 years and was a large amount of
assembler code. I asked her for the regression test suite for the TAHOE
interpreter. She did not recognize the term, but I was not dismayed - IBM
had their own names for everything (disk was DASD and a boot program was
IPL) and I figured it would be Polka Dot or something equally evocative. I
described what my regression test suite did and her face lit up. “What a
great idea!” she exclaimed.
Anyway, at that point I handed the interpreter code over to the Fishkill
organization. C compilers were available for the PC by that time, so they
were able to deploy it on PC-AT machines that they located at each testing
machine. Since a PC-AT could be had for about $5,000 in those days, the
savings from the original proposal were about $50 million and about a year
of elapsed time. The analysis of a boat’s worth of data on the PC-AT took
perhaps a minute or two, so quite a bit slower than on the mainframe, but
the elimination of the six-hour delay meant that a boat could progress
forward in its processing on the same day rather than a day later.
One of my final conversations with my Fishkill handler was about getting
them some UNIX training. In those days the only way to get UNIX training
was from AT&T. Doing business with AT&T at IBM in those days involved very
high-level approvals - I think it required either the CEO or a direct
report to the CEO. He showed me the form he needed to get approved in
order to take this course, priced at about $1,500 at the time. It required
twelve signatures. When I expressed horror he noted that I shouldn’t worry
because the first six were based in the building we were standing in.
That’s when I began to grasp how big IBM was in those days.
Anyway, about five years later I left IBM. Just before I resigned the
Fishkill folks invited me up to attend a celebratory dinner. Awards were
given to many people involved in the project, including me. I learned that
there was now a department of more than 30 people dedicated to maintaining
the program that had taken me a month to build. Rick Dill noted that one
of the side benefits of the approach that I had taken was the production of
a formal grammar for the TAHOE language.
At one point near the end of the project I had a long reflective
conversation with my Fishkill minder. He spun a metaphor about what I had
done with this project. Roughly speaking, he said, “We were a bunch of
guys cutting down trees by beating on them with stones. We heard that
there was this thing called an axe, and someone sent a guy we thought would
show us how to cut down trees with an axe. Imagine our surprise when he
whipped out a chainsaw.”
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
All, thank you all for all the congratulations! I was going to pen an e-mail
to the list last night but, after a few celebratory glasses of wine, I demurred.
It still feels weird that Usenix chose me for the Flame award, given such
greats as Doug, Margo, Radia and others have previously received the
same award. In reality, the award belongs to every TUHS member who has
contributed documents, source code, tape images, anecdotes, knowledge
and wisdom, and who has given their time and energy to help others
with problems. I've been a steward of a remarkable community over three
decades and I feel honoured and humbled to receive recognition for it.
Casey told me the names of the people who nominated me. Thank you for
putting my name forward. Getting the e-mail from Casey sure was a surprise :-)
https://www.tuhs.org/Images/flame.jpg
Many thanks for all your support over the years!
Warren
Hello all,
I'm giving a presentation on the AT&T 3B2 at a local makerspace next month, and while I've been preparing the talk I became curious about an aspect that I don't know has been discussed elsewhere.
I'm well aware that the 3B2 was something of a market failure with not much penetration into the wider commercial UNIX space, but I'm very curious to know more about what the reaction was at Bell Labs. When AT&T entered the computer hardware market after the 1984 breakup, I get the impression that there wasn't very much interest in any of it at Bell Labs, is that true?
Can anyone recall what the general mood was regarding the 3B2 (and the 7300 and the 6300, I suppose!)?
-Seth
--
Seth Morabito
Poulsbo, WA
web(a)loomcom.com
Around 1985 the computer division of Philips Electronics had a Motorola
68010-based server running MPX (Multi Processor Unix), based on System 5.3
with modifications. The 'Multi' part was related to the intelligent LAN and
WAN controllers each with their own 68010 processor and memory. A separate
system image would be downloaded at server boot-time. Truly Multi-Processor
:-)
Here an announcement of the latest (probably last) model, from 1988.
https://techmonitor.ai/technology/philips_ready_with_68030_models_for_its_p…
--
The more I learn the better I understand I know nothing.
> Has anyone roughly calculated “man years” spent developing Unix to 1973 or 1974?
> Under 25 "man-years”? (person years now)
I cannot find the message at the moment (TUHS mail archive search is not working anymore?), but I recall that Doug McIlroy mentioned on this list that 1973 was a miracle year, where Ken & Dennis wrote and debugged over 100,000 lines of code between them. In software, “man year” is an elastic yardstick...
There is also this anecdote by Andy Herzfeld:
===
Quickdraw, the amazing graphics package written entirely by Bill Atkinson, was at the heart of both Lisa and Macintosh. "How many man-years did it take to write QuickDraw?", the Byte magazine reporter asked Steve [Jobs].
Steve turned to look at Bill. "Bill, how long did you spend writing Quickdraw?"
"Well, I worked on it on and off for four years", Bill replied.
Steve paused for a beat and then turned back to the Byte reporter. "Twenty-four man-years. We invested twenty-four man-years in QuickDraw."
Obviously, Steve figured that one Atkinson year equaled six man years, which may have been a modest estimate.
===
There is also another anecdote involving Atkinson. At some point all Apple programmers had to file a weekly report with how many lines of code they wrote that week. After a productive week of refactoring and optimising, he filed a report saying “minus 2,000 lines”.
On DEC's TRU64 UNIX it was /mdec
Making a system image with mkisofs I'd follow with
disklabel -rw -f ${UTMP}/${NAME_ISO} /mdec/rzboot.cdfs /mdec/bootrz.cdfs
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
> From: Dave Horsfall
> MAINDEC was certainly on all of their standalone diagnostic media
Actually, it was the name for all their diagnostics (usually stand-alone),
dating back to the paper tape days - when that was the only form they were
distributed in. So it makes sense that it's a short form of 'MAINDEC'.
Noel
I'm curious about the origin of the directory name /usr/mdec.
(I am reminded of it because I've noticed that it lives on in
at least one of the BSDs.)
I had a vague notion that it meant `DEC maintenance' but that
seems a bit clumsy to describe a place holding boot blocks.
A random web board suggests it meant `magnetic DECtape.'
That's certainly not true by the time I came along, when it
contained the master copy of the disk boot block(s).
But I suppose it could have meant that early on and
the name just carried forward.
A quick skim of the V1-V7 manuals doesn't explain the name.
Anyone have any clearer memories than I do? Doug or Ken or
anyone who was there when it was coined, do you still recall?
Norman Wilson
Toronto ON
> Date: Sat, 12 Nov 2022 17:56:24 -0800
> From: Larry McVoy <lm(a)mcvoy.com>
> Subject: [TUHS] Re: DG UNIX History
>
> It sounds like they could have supported mmap() easily. I'd love to see
> this kernel, it sounds to me like it was SunOS with nicely done SMP
> support. The guy that said he'd never seen anything like it before or
> since, just makes me want to see it more.
> I know someone who was friends with one of the kernel guys, haven't talked
> to her in years but I'll see if I can find anything.
Following on from the exchange on TUHS about DG-UX, it would seem to me that the (Unix) unified cache was invented at least three times for Unix:
- John Reiser at AT&T
- At Sun
- At DG
As to the latter I could find two leads that might help you finding out more. It would seem that this unique Unix is specifically DG-UX version 4:
https://web.archive.org/web/20070930212358/http://www.accessmylibrary.com/c…
and
Michael H. Kelly and Andrew R. Huber, "Engineering a (Multiprocessor) Unix Kernel", Proceedings of the Autumn 1989 EUUG Conference, European Unix Systems User Group, Vienna, Austria, 1989, pp. 7- 19.
The unified cache isn’t mentioned, but it would seem that the multiprocessor redesign might have included it. Maybe the author names are helpful. I could not find the paper online, but there was a web page suggesting that a paper copy still exists in a (university?) library in Sweden.
=====
Publication: DG Review
Publication Date: 01-NOV-88
Author: Huber, Andrew R.
DG-UX 4.00: DG's redesigned kernel lays the foundation for future UNIX systems. (includes related article on DG-UX 4.00's file system and an excerpt from Judith S. Hurwitz's 'Data General's UNIX strategy: an evaluation' report)
COPYRIGHT 1988 New Media Publications
DG/UX 4.00
Revision 4.00 of Data General's native UNIX operating system significantly enhances the product and adds unique capabilities not found in other UNIX implementations. This article reviews the goals of DG/UX 4.00 and discusses some of its features.
When DG released DG/UX 1.00 in March, 1985, it was based on AT&T's System V Release 2 and incorporated the Berkeley UNIX file system and networking.
As DG/UX grew, it continued to incorporate functions of the major standard UNIX systems, as illustrated in the following timeline:
* DG/UX 1.00 March, 1985 Based on System V Release 2 and Berkeley 4.1.
Included Berkeley 4.2 file system and TCP/IP (LAN).
* DG/UX 2.00, September, 1985 Added Berkeley 4.2 system calls.
* DG/UX 3.00, April 1986 Added support for new DG hardware.
* DG/UX 3.10 March, 1987 Added Sun Microsystem's Network File System.sup.(R) Added X Windows.
* DG/UX 4.00, August, 1988 Re-designed and re-implemented kernel and file system.
I spotted this when glancing through a book catalogue; well, with a title
like that how could I miss it?
Subtitled "How 26 Lines of Code Changed the World", edited by Torie Bosch
and illustrated by Kelly Chudler (can't say that I've heard of them).
Summary:
``Programming is behind so much of life today, and this book draws together
a group of distinguished thinkers and technologists to reveal the
stories and people behind the computer coding that shapes our
world. From how university's [sic] databases were set up to
recognise only two genders to the first computer worm and the
first pop-up ad, the diverse topics reveal the consequences of
historical decisions and their long-lasting, profound implications.
Pb $34.99''
Lines of code, eh? :-)
Abbey's Bookshop: www.abbeys.com.au
Disclaimer: I have no connection with them, but I'll likely buy it.
-- Dave
Clem Cole:
Yep -- but not surprising. There were a bunch of folks at DG that had
worked on a single-level store system (Project Fountain-Head) that had
failed [some of that story is described in Kidder's book].
====
Are you sure? I thought Fountainhead was a Rand project.
Norman Wilson
Toronto ON
PS: if you don't get it, consider yourself fortunate.
>> To be honest, I've forgotten many (most) of the details. But that sounds
>> about right. As I remember it, it was like SunOS. The key point was that
>> the kernel only had one view of the memory system period, no FS
>> buffer cache etc...which was a departure from many of the traditional UNIX
>> implementations. IIRC they did not support BSD's mmap -- but check the
> It sounds like they could have supported mmap() easily. I'd love to see
> this kernel, it sounds to me like it was SunOS with nicely done SMP
> support. The guy that said he'd never seen anything like it before or
> since, just makes me want to see it more.
"One view of the memory, period." That describes Multics.
Doug
This is what Scott Lee, who ran the Eclipse at Georgia Tech
recalls. He has given permission for me to forward it, with
the caveat that it was long ago and that "memories are malleable".
Arnold
> Date: Mon, 14 Nov 2022 05:35:51 -0500
> Subject: Re: [TUHS] DG UNIX History
> From: scott(a)thelees.org
> To: arnold(a)skeeve.com
>
> > I'm pretty sure that DG never ported DG-UX to the Nova. There was
> > a native port to the Eclipse (32 bit). There was also a Eunice-style
> > Unix environment that sat on top of their native OS, whatever it was
> > called.
>
> Yeh, that was an MV-10000 that we received. As I remember it, we also got
> a copy of DG-UX, which was a port of SYS Vr2, not r3 as mentioned. I
> think that it may have also had a directory with UCB versions of a bunch
> of the utilities ported over so you could run either SysV tools or UCB
> tools.
>
> LeBlanc was going to use it to teach ADA, so I was building some tools to
> create/maintain user accounts, but I believe that I left just before they
> were actually getting around to that.
>
> I was also playing with it on the side, when no one else was using it, to
> build a small OS on it. I found that it followed a lot of the Nova
> behavior, so I figured out how to write code onto a tape and bootstrap it
> into the machine. Wrote a tape driver and a console driver and was
> working on a disk driver. Targeting putting a small OS on it.
>
> Wow... I had almost forgotten that it even existed until I saw this.
>
> Enjoy,
> Scott
This recent activity on the simh mailing list WRT to DG Nova and
Ecpilse got me wondering. At Locus in the 80s and 90s, we did a lot of
work with DG and DG-UX with their later MP-based ports using commercially
available microprocessors (which I have reported was a very nicely done
system, easy to work on, the locks tended to scale well, etc.).
But I am trying to remember if C or UNIX was on a Nova or an Eclipse. This
could be my failed memory, given that so many people ported V7 in the late
1970s (the infamous 'NUIX' bug from the Series/1 port probably being my
favorite tale). So to the hive mind, did anyone (DG themselves or a
University) ever build 16 or 32-bit tools for the DG architectures and do a
UNIX port, and if so, does anyone know what became of those efforts? Is
this something that needs to be in the TUHS archives also?
Clem
I’ve only recently stumbled across this paper.
It gives the answer to one question I’ve had:
Why did Linux become more popular than everything that came before it?
There were surprises.
The “Dot Boom” then “Dot Bust” along with Y2K.
Microsoft developed an architecture, Active Directory, designed to support Enterprise scale deployments.
Everything Good in A.D. is Old (LDAP, Kerberos, DNS)
everything badly done is New (replicated DB’s & ???).
Other surprises is the rise of “Internet Scale” datacentres, Social Media and Smartphones & Tablets.
All of which are dominated by Linux or Unix derived solutions.
And Virtual Machines on Intel.
IA-64 was in the far future :(
And ARM CPU’s made a big comeback.
==========
The Sourceware Operating System Proposal
9 November 1993
Revision: 1.8
<https://www.landley.net/history/mirror/unix/srcos.html>
==========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I remember being told back in the 1980s that vi would set the terminal
to "cooked mode" when vi was in "insert mode", so as to reduce expensive
context switching for each character typed. Only vi's "command mode"
would set the terminal to "raw mode" so as to provide immediate feedback
on each (command) character typed. This would be a clever system
performance optimization, and would also explain designing vi around
distinct insert and command modes.
However, I can't find such evidence even as far back as BSD 1. It seems
that in insert mode ESC was processed like any other character.
https://github.com/dspinellis/unix-history-repo/blob/BSD-1-Snapshot-Develop…
Cooked mode was only entered when scrolling in order to receive interrupts.
https://github.com/dspinellis/unix-history-repo/blob/BSD-1-Snapshot-Develop…
Also, for this scheme to work ESC would need to be mapped to an
interrupt key, so as to allow exiting the cooked mode through the
corresponding signal handler. Again, grepping for ESC, did not show me
any such code.
I also remember being told that this optimization was what allowed
twenty students to concurrently perform interactive editing on a VAX
11/780 (running 4.2BSD and then 4.3BSD), and that Emacs was not provided
to students because it was always operating in raw mode.
Was I misled? Was there perhaps a hacked version of vi that worked in
this way?
-Diomidis
> From: Diomidis Spinellis
> I remember being told back in the 1980s that vi would set the terminal
> to "cooked mode" when vi was in "insert mode", so as to reduce expensive
> context switching for each character typed.
> ...
> However, I can't find such evidence even as far back as BSD 1.
Maybe you're thinking of Multics Emacs, which had such a capability:
https://multicians.org/mepap.html
Noel
Hi all, I'll be attending the Usenix SREcon22 Asia/Pacific Conference
which is being held in Sydney, Australia on the 7-9 December. Is anybody
else attending? If so, it'd be nice to catch up with some other TUHSers :-)
https://www.usenix.org/conference/srecon22apac/program
Cheers, Warren
> Touch typists can spot an illtyperate programmer from a mile away.
> They don't even have to be in the same room.
I once thought of touch typing as employment of all fingers. Then I met
Fred Grampp. Using only four fingers, he typed as fast as most good
programmers. He knew where to hit, with a kinesthetic sense that had
progressed beyond dependence on "home keys". It was an athletic
performance, astonishing to watch.
Doug
Some of you may recall my friend Jim Joyce, who was an early proponent of Unix. IIRC, he taught the first course on Unix at UCB. Later on, he started and ran mail-order bookstores and seminars specializing in Unix-related topics, helped to found Unix Review, etc.
In any event, I have about a cubic foot of early Unix papers, saved from Jim's files after his death. It's quite likely that all of these papers are already available in collections, but I'd like to make sure that any exceptions don't get lost. Also, the printed copies may have some independent historical merit. Suggestions?
-r
Larry McVoy reports today:
>> People like Sunview's api enough that there was an Xview toolkit which
>> was Sunview ported to X10/X11.
The interface was nicely documented in three editions of a book (I
have no entry for the second edition):
@String{pub-ORA = "O'Reilly \& {Associates, Inc.}"}
@String{pub-ORA:adr = "981 Chestnut Street, Newton, MA 02164, USA"}
@Book{Heller:1990:XPM,
author = "Dan Heller",
title = "{XView} Programming Manual",
volume = "7",
publisher = pub-ORA,
address = pub-ORA:adr,
pages = "xxviii + 557",
year = "1990",
ISBN = "0-937175-38-2",
ISBN-13 = "978-0-937175-38-5",
LCCN = "QA76.76.W56 D44 v.7 1990",
bibdate = "Tue Dec 14 22:55:18 1993",
bibsource = "http://www.math.utah.edu/pub/tex/bib/master.bib",
acknowledgement = ack-nhfb,
}
@Book{Heller:1991:XPM,
author = "Dan Heller",
title = "{XView} Programming Manual",
volume = "7A",
publisher = pub-ORA,
address = pub-ORA:adr,
edition = "Third",
pages = "xxxvii + 729",
month = sep,
year = "1991",
ISBN = "0-937175-87-0",
ISBN-13 = "978-0-937175-87-3",
LCCN = "QA76.76.W56 H447 1990",
bibdate = "Mon Jan 3 17:55:53 1994",
bibsource = "http://www.math.utah.edu/pub/tex/bib/master.bib",
series = "The Definitive guides to the X Window System",
acknowledgement = ack-nhfb,
}
I have the first edition on a shelf near my campus office chair, and
continue to use olvwm as my window manager on multiple O/Ses, for 30+
years.
Every window manager designed since seems to fail to understand the
importance of user customizable, and pinnable, menus, which I exploit
to the hilt. The menu customization goes into a single, easy to edit,
text file, $HOME/.openwin-menu.
Compare that to the Gnome desktop, with hundreds of files, many of
them binary, stored in hidden directories under $HOME, and for which
any corruption breaks the window system, and prevents login (except
via a GUI console).
Also, olvwm does not litter a default desktop with icons for
applications that many of us would never use: just a simple blank
desktop, with menu popups bound to mouse buttons.
With olvwm, you can have any number of virtual desktops, not just the
2 or 4 offered by more modern window managers, and unlike some of
those, windows can be dragged between desktops.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
This may be a bit off-topic, so please forgive me. Lucent is central to
the book. I want to let you know I had a memoir published today, on the
25th anniversary of Lucent's historic policy. Here's the main part of
the press release.
> Before 1997, transgender workers were routinely fired when their
> employers found out they were changing their sex. That changed on Oct.
> 28, 1997, when Lucent Technologies became the first Fortune 500
> company to formally commit that it would not discriminate based on
> "gender identity, characteristics, or expression". Dr. Mary Ann
> Horton, who instigated the change, has written a memoir, Trailblazer:
> Lighting the Path for Transgender Inclusion in Corporate America.
> "When I led transgender-101 workshops, my personal story was people's
> favorite part. They wanted more, and Trailblazer is the result," said
> Horton. "It will be released on the 25th anniversary, Oct. 28."
>
> Horton was a software technology worker at Lucent in Columbus, Ohio,
> when Lucent added the language. It allowed Mary Ann, then known as
> Mark, to come out in the workplace without fear of reprisal. When she
> didn't need to spend energy hiding part of herself, her productivity
> soared, and she was promoted. Three years later, she persuaded Lucent
> to cover gender-confirming medical care in their health insurance. She
> blazed the trail for Apple, Avaya, Xerox, IBM, Chase, and other
> companies to follow.
Nokia blogged about it today.
https://www.nokia.com/about-us/careers/life-at-nokia/employee-blogs/25th-an…
You can find the book at
https://www.amazon.com/Trailblazer-Lighting-Transgender-Equality-Corporate-…
If you read it, please post a review to Amazon.
--
Thanks,
/Mary Ann Horton/ (she/her/ma'am)
maryannhorton.com <https://maryannhorton.com>
"This is a great book" - Monica Helms
"Brave and Important" - Laura L. Engel
Available on Amazon and bn.com!
<https://www.amazon.com/Trailblazer-Lighting-Transgender-Equality-Corporate-…>
Sorry if this is a repost. No idea of the legality, and therefore no idea
how long it will stay:
https://twitter.com/nixcraft/status/1586276475614818305 is the tweet and
https://github.com/Arquivotheca/SunOS-4.1.3 is the repository with this
README, below. Many other OS's there too.
README <https://github.com/Arquivotheca/SunOS-4.1.3#readme>
This is the SunOS 4.1.3 SUNSRC CD-ROM. It contains the source in 3 forms.
1. plain text source, as a ufs tree, rooted at the top level of
this filesystem. Symlinks to the SCCS hierarchy are in place.
2. SCCS hierarchy, rooted at SCCS_DIRECTORIES.
3. a tar image of the SCCS hierarchy, in a file named 4.1.3_SUNSRC.tar.
This is rooted at ./SCCS_DIRECTORIES.
Please see the SunOS 4.1.3 Source Installation Guide for further details.
Following up on my v6 udpate a couple of weeks ago, I've updated my v7
note to use OpenSIMH and bring it up to date. In addition, I've switched
the multi-session notes over to DZ-11 from DC-11 cuz it supports 9600
over telnet.
Here's the link:
http://decuser.blogspot.com/2022/10/installing-and-using-research-unix_29.h…
Changes since revision 2.1 (2/3/2022)
Revision 3.1 (10/29/2022) - minor revision:
Changed over to DZ-11 vs DC-11 for serial connections which allows
for 9600 baud connections.
Revision 3.0 (10/28/2022) - major revision:
Started using OpenSIMH
Restored the learn notes which went missing between 2.0 and 2.1
Updated host notes for macOS Monterey
Cleaned up a number of lingering issues
This note covers building a working v7 instance from tape files that
will run in the OpenSIMH emulator. First, the reader is led through the
restoration of a pristine v7 instance from tape to disk. Next, the
reader is led through adding a regular user and making the system
multi-user capable. Then, the reader is shown how to make the system
multi-session capable to allow multiple simultaneous sessions. Finally,
the system is put to use with hello world, DMR style, and the learn
system is enabled.
The note explains each step of the process in detail.
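For anyone following along, the DZ-11 change boils down to a few lines in the simulator config. A minimal sketch, assuming OpenSIMH's standard PDP-11 DZ device and a hypothetical telnet port of 1030 (the note itself has the exact settings):

```
; hypothetical OpenSIMH fragment for multi-session v7 over the DZ-11
set dz enabled        ; enable the DZ-11 multiplexer
set dz lines=8        ; eight serial lines
attach dz 1030        ; accept telnet connections on TCP port 1030
```

Telnetting to that port then gives additional login sessions alongside the console.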
I know branch and link was in the 360; was it earlier? And ... anybody know
who invented it?
This came up in a risc-v meeting just now :-) My claim is that if anybody
knows, they will be in this group.
> From: ron minnich
> I know branch and link was in the 360; was it earlier?
Well, as I understand it, branch and link (BAL and BALR) did a couple of
different things (if I have this wrong, I hope someone will correct me). It
was a subroutine call, but it also loaded a base register.
(Those were used to deal with the /360's bizarro memory management, which was
not 'base and bounds, with a user's virtual address space starting at zero',
like a lot of contemporary machines. Rather, a process saw its actual physical
memory location, so depending on where in memory a process was loaded, it
would be executing at different addresses visible to it; the base registers
were used to deal with that. This made swapping complicated, since it had to
be swapped back in to the same location.)
Which function of BALR are you enquiring about? The subroutine call part?
> From: Angelo Papenhoff
> The Whirlwind used the A register for this purpose. ...
> Might be earlier than this, I just happen to know the Whirlwind
> somewhat well. It's late 40s machine, so you probably won't find
> anything *much* older.
The only machines older than Whirlwind I know of are the ACE (design;
not implemented until later) and EDVAC.
I have ACE stuff, but i) the documentation is really weird, and hard to read,
and ii) it's really bizarre (it didn't have opcodes; different registers did
different things). There were subroutines written for it, but it's not clear
how they were called.
The EDVAC, the only thing I have on it is von Neumann's draft, and it's
even harder to read than Turing's ACE Report!
Noel
All,
I have revised my Research Unix Version 6 instructions for 2022, in part to
support the change to OpenSIMH and also to bring it along into the modern
era (I did it on my Monterey instance). I updated the links, cleaned up
some lingering issues, and confirmed it working.
Please take a look and let me know if you find any issues,
misrepresentations, missing pieces, etc. I haven't touched v6 in a while, but
it was fun to reminisce.
http://decuser.blogspot.com/2022/10/installing-and-using-research-unix.html
Regards,
Will
>> Looking at net_vdh.h, it seems to be a "VDH-11C"
>> ...
>> I wasn't able to find anything out about it at all. (I have some
>> hardcopy manuals for other ACC IMP interface products, and I was going
>> to look in them to see if any of them listed its manual in a 'see
>> also', but I can't find them.)
> From: Lars Brinkhoff
> Noel Chiappa wrote:
>> Did VDH PDP-11's have a special VDH interface
> Sorry, no idea.
That was a semi-rhetorical question; after I typed it, I did some looking,
and came up with the answer above, the ACC VDH-11.
I did eventually find the hardcopy manuals for other ACC IMP interface
products, but none of them mention the VDH11.
On a hunch, I looked to see if there was a VDH11 driver for ELF, and
sure enough, there was:
https://github.com/pdp11/elf-operating-system/blob/master/files/kdvdh.m11%5…
(If anyone wants to look at it, ktbl.sml holds the register definitions.)
With no manual for the device, and no museum catalog hits to show that
someone has one which hasn't been scanned in yet, that's probably a dead end,
although with the two drivers, one could probably mock up a rudimentary
programming manual.
I'm not sure there's any point, though; using an LH/DH interface is going to
work as well, and those devices are already supported.
> From: Paul Ruizendaal
> impio.c: available here:
Thanks for chasing those all down; I knew the BBN system was based on the NCP
Unix (called in this discussion the 'NOSC' system), so I figured it would
have the missing files, in some form.
Looking at a diff between the damaged impio.c in the NOSC tree, and the
impio.c in the BBN tree, there are some changes (in the section where we have
both versions) between the NOSC one and the BBN one, but it will probably be
possible to take the missing piece off the back end of the BBN one and tack
it on to the NOSC one.
Somewhere in the document scans available online from DOD, there used to
be a long thing from the UIUC people who did the original NCP Unix. I don't
know if it included source; it might have.
Noel
Berkeley Tague says he invited John to work with the USG in 1978
[ AUUG, below. Also by Ronda Hauben in multiple places ]
[ In two ‘recollections’ / history docs sourced from UNSW, ]
[ the first visit was misremembered as 1976 or 1977. ]
I was wondering when John's two other sabbaticals to Bell Labs were.
The Peter Ivanov interview with John in “Unix Review”, 1985, notes two sabbaticals by then.
After writing the celebrated Commentary on the UNIX Operating System in 1976,
Lions was asked to spend two sabbaticals at the Labs.
This comment in 2000 on “9fans” from Rob Pike
says John also visited in 1989,
but sadly his work by then had been affected.
<https://marc.info/?l=9fans&m=144372955601968&w=2>
Anyone know when the other visit was?
Presumably 1983 or 1984 if John took a semester off every 5 years.
========
In AUUGN Oct 1995, V16 #5, there’s a collection of emails, an ‘interview’ with John
PDF pages 17 & 24
<https://www.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V16.5.pdf>
Berkeley Tague says in 1978 he invited John to work with the USG.
then
He wanted to come to Murray Hill for his sabbatical so it was a win/win situation.
He spent two or three summers at Bell Labs over the years
and supplied us with many of his graduate students
for sabbaticals and permanent employment.
Later:
AUUGN:
What have been the professional highlights of your career?
JL:
For myself, three sabbaticals at Bell Laboratories have been highlights.
For my students, opportunities arose for employment at the Laboratories.
========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
> From: Michael Casadevall
> sys4.c is entirely corrupted, and part of impio.c is cut off
The copy on MIT-CSR (the origin of the copy at TUHS) has the same issues. I
doubt it's possible to recover them from that system; you'll have to find
some other way to recover them (perhaps through a dump of the BBN system), or
re-code them (as you did with sys4.c).
> I do need to do a readthrough for the VDH driver ... I think that might
> be for the radio links to Hawaii and the UK?
No. Read BBN 1822.
The LH and DH bit-serial physical interfaces only work up to about 1000 feet
or so. (Less for LH; DH is logically identical to LH, but uses differential
pairs - the LH is single-ended). VDH is, in the bottom layer, simply a
synchronous serial link, allowing the host to be up to hundreds of miles from
the IMP.
> From: Lars Brinkhoff
> Another is adding emulators for various IMP interfaces. I.e. you will
> not get anywhere without adding one of IMP11A, ACC, or VDH to SIMH.
Did VDH PDP-11's have a special VDH interface, or did they simply use an
off-the-rack DEC synchronous serial interface like a DU11? (More of them
here:
http://gunkies.org/wiki/Category:DEC_Synchronous_Serial_Interfaces
if anyone wants.) Looking at net_vdh.h, it seems to be a "VDH-11C"
Looking online, the VDH-11 seems to be an ACC product, but I wasn't able to
find anything out about it at all. (I have some hardcopy manuals for other
ACC IMP interface products, and I was going to look in them to see if any of
them listed its manual in a 'see also', but I can't find them.)
I'm not sure why people didn't just use an off-the-rack DEC synchronous serial
interface; maybe the VDH11 did a BBN-specific CRC, or something (in addition
to using DMA; most DEC sync interfaces didn't, IIRC)?
Anyway, you don't want to use VDH.
Noel
Tom Perrine wrote:
> A specific example of the VDH interface was the IMP at NOSC.
>
> IIRC it had 4 ports?
Normally yes (depending on the hardware configuration).
> One was a local connection to the machine at NOSC in the same room/building.
> One was a VDH to UCSD
> One was a VDH to LOGICON - the connection at the LOGICON end was a "56K
> wideband modem", which was a little larger than a 2-drawer file cabinet. I
> also seem to recall that the power supply had tubes.
> Not sure where the 4th port went - I only saw the UCSD and LOGICON ends in
> person.
Maybe this information for IMP 3 and 35 from 1979 can jog your memory.
HOST NOSC-SECURE2, 0/35,USER,TENEX,PDP10,[USC-ISIR1,ISIR1]
HOST LOGICON, 1/35,USER,UNIX,PDP11
HOST ACCAT-TIP, 2/35,USER,TIP,H316,[NELC-TIP]
HOST NOSC-SECURE3, 3/35,USER,UNIX,PDP11
HOST NOSC-CC, 0/3,SERVER,,UNIVAC-1110,[NUC-CC,NOSC-ELF]
HOST NOSC-SECURE1, 1/3,SERVER,UNIX,PDP11,[NUC-SECURE]
HOST NOSC-SDL, 2/3,SERVER,UNIX,PDP11,[NELC-ELF,NELC]
HOST NWC, 3/3,SERVER,EXEC-8,UNIVAC-1110
HOST NPRDC-11, 4/3,SERVER,UNIX,PDP11
>> The BBN with TCP stack is a bit mislabeled: it still appears to
>> support NCP, but none of the client apps are there, but it's directly
>> built off the NOSC stack.
>
> That's very good. I hope the NCP support there is in good shape.
>
> it's probably a fork from earlier in development. 79-80 timespan
>> would have been *very* early in TCP's life
>
> TCP had been underway since 1973. Experiments called "TCP bakeoffs"
> started around 1979.
That is what the “V6 with TCP” on TUHS is:
Following the success of NCP Unix, it became a base for various TCP experiments in the ’77-’79 era. The first was an implementation by Jack Haverty, that wrapped an existing TCPv2 stack that was written in PDP-11 assembler into a Unix application. It ran in user mode and depended on Rand ports and several extensions that Jack added to the kernel (await/capac and a user mode timing variable, where the clock routine incremented a variable in user space). He used a PDP-11 with little core and the pipes (ports) did not stay in the file buffers, but flushed onto disk. This killed performance: Jack recalls that a bad run would average a few characters per second.
Next Mike Wingfield wrote a TCPv4 stack in C, more or less using the architecture of the above. It was the “winner” of the December 1979 bake-off. I think it is the first TCPv4 implementation for Unix and maybe the oldest surviving source for TCPv4 overall. I wanted to see if it would still interoperate with modern TCP/IP, but I never got around to that. An actual printout survives in the SRI archives and I painstakingly retyped that source, just weeks before Noel found the right tapes :^). Later still, Craig Partridge found a full report and listing in the BBN archives (report no. 3724). This NCP Unix with the Wingfield library is the version that is labeled “BBN V6 with TCP” on TUHS.
Some of the code in the Wingfield stack is to test the protocol. Arpanet essentially offers circuit switching, and some of the code is there to simulate dropped packets, out-of-order packets, etc. It also tested security features that were under consideration, but subsequently dropped as interest shifted to end-to-end encryption.
Again, user mode TCP was not found to be practical, the 16-bit era was ending and that is when Rob Gurwitz was assigned to write a new stack for the VAX (1980). By that time Jack Haverty was his boss. Some parts of the BBN VAX-TCP design still echo the user space origins and experiences of the BBN team in the immediate years before. This stack I got working and it still interoperates with modern TCP/IP (at least it did some 5 years ago).
Jack Haverty can easily be reached via the internet-history mailing list.
I’ve summarised the history here: http://chiselapp.com/user/pnr/repository/TUHS_wiki/wiki?name=early_networki…
I should transfer it to the TUHS wiki or to Gunkies one of these days.
===
I am not sure I understood which files are missing or corrupted and in which NCP Unix trees. Noel retrieved the files from old (mouldy even) tapes so some corruption is quite possible. Pulling together material from various sources can hopefully lead to a working source tree. Glad to help.
Further note that NCP Unix was initially developed on 5th Edition, but soon migrated to 6th edition. I am sure that the various installations tracked new developments and installed extra bits and pieces. The surviving images are from 1979 and for sure would have picked up bits from newer releases and other sources (such as the Uni of Calgary buffer extensions).
Paul
Hey all,
I've been recently working on researching ARPANET and the like for use in
an upcoming video. As a starting point, I've been looking at the early UNIX
networking code in archives, specifically, the NOSC tarball, and I've spent
quite a few hours trying to get building on livestreams.
What I've found is that there's corruption issues in code; which is noted
in JOHNS-NOTE, although it's more severe than I realized. For example,
sys4.c is entirely corrupted, and part of impio.c is cut off. However, this
isn’t quite as bad as it sounds. For example, by kitbashing both the
original v6 source code, and the later BBN TCP code, I was able to create a
sys4.c that builds and links which should be close to the original.
Furthermore, it is possible to use the “vdh” target instead of the imp
target to at least try and get the code building. I did successfully get a
kernel to build, and it even prints out a mem message before deadlocking.
My guess is it's either deadlocked waiting for the IMP, or there’s
something wrong with the build. There’s some indication that it might need
a split kernel, although I’ve not had any success with that thus far. I
have uploaded my current build tree to git, as well as a tarball with simh
images if anyone wants to try and figure out what has gone wrong. I admit,
my knowledge of PDP-11 assembly and debugging platforms is a bit wanting :)
There’s some indication that this, and the later BBN TCP (which is from
around the same time period) code were built on top of the Programmer’s
Workbench vs. stock v6; especially because some code patches were needed to
get it to build. I did look at the TUHS PWB archives; I see a bunch of
binaries, but absolutely no idea how to install them. I’ve heard that none
of these archives are actually complete, but I'm hoping someone might have
some idea of how to go forward, since, if nothing else, I’d like to end
this with a success story, although I’m happy with as far as I got.
Furthermore, I do know that I can run some of the ARPA level utilities in
MIT ITS on CHAOSNET, which will be an upcoming project, although that is
going to be a dive in and of itself.
In short, I’m hoping someone might be able to provide some insight into
where things have gone wrong:
* Is the netunix kernel I built hanging because of corrupted code, or is
it waiting for non-existent hardware?
* NOTE: The DC-11 driver was not included, but I don’t think I need that
for a single console?
* Are there any versions of PWB that are “easily” installable, since it’s
very clear the later BBN code requires it (it refers to ncc explicitly)?
* I know IMPs have been emulated, and even have successfully routed
packets, so I’m also trying to figure out how much would still be necessary
to actually recreate a minimal ARPA network?
I did try to build some of the NCP applications regardless; it does appear
that some parts are simply missing. Mailer.cc seems to want a hnconv.o but
no source file exists. The FTP daemon on the other hand wants a library
simply called “j”.
My guess is that even if the NCP code was buildable, the applications might
not be. However, this did make me take a closer look at the BBN code, and
it does have an ifdef for NCP, suggesting that it was still
usable/supported? It makes sense, it seems to have been written before the
TCP/IP flag day. I’m just not sure where to approach compiling it.
I’ve uploaded the code with patches to build with the v6 compiler to github
here: https://github.com/NCommander/network-unix-v6/tree/attempted-repair
NOTE: v6's cc needs a separate patch to increase the symbol table size;
that's done in the disk image.
Files, with SIMH configuration available here:
https://drive.google.com/file/d/1QS0B3RU_mwXSGtl2En-d0WI3PJ1-udEs/view?usp=…
My livestreams (12 hours or so) are on my YouTube channel:
https://youtube.com/c/NCommander
Feel free to forward this to other lists that may have PDP-11 or ARPANET
experts!
~ NCommander
Has this been covered before? I’ve searched but not found an obvious answer, which would be a timeline, not a table.
Has anyone roughly calculated “man years” spent developing Unix to 1973 or 1974?
Under 25 “man-years”? (person-years now)
It compares favourably to the quoted “5,000 man-years over 4 years” invested by IBM for the 1st million lines of OS/360 - which by 1978 was estimated at 10M LOC.
In my reading on Multics, I’d not noticed a ‘man-years’ estimate, though the project ran from 1965 to 1985 when Honeywell ‘capped’ development and spun off a commercial entity [ Bell Labs leaving April 1969 ].
There were many releases of Multics beginning with "MR 1.0” in 1974 to “MR 11.0” in 1984, with some others later.
<https://www.multicians.org/chrono.html>
Corbató in 1977 wrote that by 1969, when it moved out of ‘development only’, 150 man-years were spent on software.
<https://rcs.uwaterloo.ca/~ali/cs854-f17/papers/multics.pdf>
The original Unix group working with the PDP-11 on the 6th floor should be well known, but I can’t recall seeing a list.
Would the first 1973 conference or 1974 CACM paper be the natural cut-off date for an ‘original’ group?
There’s a list of "Former members of 1127” maintained on-line.
As an outsider who doesn’t know the people & dates, I can’t extract what I was interested in.
<https://www.spinroot.com/gerard/1127_alumni.html>
There was a constraint on active users / concurrent logins in the “Attic”: the number of connected terminals & desks there.
Assuming it took time to get terminals installed elsewhere, then later dedicated phone lines and 300 baud modems outside.
The typesetter used by the patent dept isn’t mentioned.
I’ve a recollection it was fed by mag tape, but unsure of that source.
Infer at least the people named in the piece, which looks very sparse:
Ken & Dennis :)
Morris
Cherry
Kernighan
Plus Joe Ossanna working on roff / troff? [ do I have that wrong? ]
The 1971 “1st Edition” Manual Intro lists 4 names. No others appear in Sections 1, 3, 6, 7 [ commands & libraries ]
<https://www.tuhs.org/Archive/Distributions/Research/Dennis_v1/1stEdman.html>
ken K. Thompson
dmr D. M. Ritchie
jfo J. F. Ossanna
rhm R. Morris
In June 1974, Dennis wrote in the preface to the Version 5 manual, with him and Ken named as authors.
<https://www.tuhs.org/Archive/Distributions/Research/Dennis_v5/v5man.pdf>
The authors are grateful to
L. L. Cherry,
L. A. Dimino,
R. C. Haight,
S. C. Johnson,
B. W. Kernighan,
M. E. Lesk, and
E. N. Pinson
for their contributions to the system software,
and to
L. E. McMahon for software and for his contributions to this manual.
We are particularly appreciative of the invaluable technical, editorial, and administrative efforts of
J. F. Ossanna,
M. D. McIlroy, and
R. Morris
========
An Oral History of Unix
<https://www.tuhs.org/Archive/Documentation/OralHistory/>
Assembling the History of Unix [ Report of FRS121 course ]
<https://www.tuhs.org/Archive/Documentation/OralHistory/finalhis.htm#attic>
The center of Unix activity was a sixth-floor room at Murray Hill which contained the PDP-11 that ran Unix.
"Don't think of a fancy laboratory, but it was a room up in the attic," as Morris describes it.
In addition to the programmers, four secretaries from the patent department worked in the attic,
performing the text-processing tasks for which Unix was ostensibly developed.
Robert Morris:
We all worked in the same room.
We worked all up in an attic room on the sixth floor, in Murray Hill.
In space that maybe was one and a half times the size of this hotel room.
We were sitting at adjacent terminals, and adjacent, and we knew each other and we always in fact ate lunch together.
Shared a coffeepot.
So, it was a very close relationship and most of us were both users and contributors and there was a significant initiative for research contribution at all points.
========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Good afternoon folks, linked is a list of all of the call numbers of UNIX-relevant documentation that I've been able to catalogue lately: https://pastebin.com/DbDAhX3W
This isn't exhaustive, I skipped many documents under dept (assuming dept) 305, 306, and 308, focusing mainly on 700, 301, 307, and 320.
I was wondering if anyone that has some knowledge of the numbering system used for these documents in Bell might be able to comment on this in any way. What I've been able to make some determination on is:
700-prefixed call numbers appear to be general Western Electric stuff, most of these manuals being related to switching, power, hardware, etc. However, the UNIX 3.0 manual and 4.0 reference guide are both under this series too. I imagine this was simply because the computer systems group hadn't been formally spun off or otherwise received directive to manage UNIX documentation at this point? In any case, I'd be curious what all else may have gotten 700-series call numbers before the 300-series took over UNIX docs.
As for the 300 series, as far as I can tell 300 is the umbrella for AT&T Computer Systems, with several sub departments handling slightly different (although overlapping in circumstances) concerns. What I have managed to determine is that 301 series encompasses the original System V version documentation, a few "Level II COBOL" documents, as well as some M68000 and Z8000-specific versions of docs (I didn't know UNIX System V ever hit the Z8000, that's cool).
After System V gold, the wealth of UNIX documentation appears to come from code 307-X instead, I'm assuming 307 is whatever permutation of USG/USL happened to exist at the time. However, there are a few other codes that seem to sporadically be involved in UNIX docs as well as other computing docs:
302 - Just a smattering of Writers Workbench docs, very high call number suffixes (950-958).
303 - Bunch of 3B20D (Real-Time-Reliable) docs as well as other 3B20 stuff, mainly hardware manuals but a few SVR2.1-related docs as well for 3B20A, S, and D
304 - Another smattering of 3B20 docs, this time mostly A and S, mix of hardware and UNIX docs
305 - This one is hard to pin down, they've got the basic 3B2 docs, some other guidance docs for non-20 3B computers, and a mishmash of language tools like assemblers, a BASIC interpreter, compilers, and a few odd technical bulletins for products covered in other groups
306 - There wasn't much direct UNIX documentation here, just stuff about 3BNet (3B computer networking?) and the 5620 DOT Mapped terminal
308 - Documentation on a whole mess of software utilities with some odd Sys V manuals sprinkled in. You've got stuff like the "Office Telesystem", Instructional Workbench, more docs on BASIC, Pascal, and COBOL, some Fortran stuff as well, and a few other reference documents
310 - Seems to be entirely related to Documenter's and Writer's Workbenches. What's odd is there is also a pretty even split of DWB and WWB documents in the 302 and 307 groups, so hard to say why the split, maybe a secondary department producing supplementary literature? Very low call number suffixes, so possibly 302 transitioned into 310 for DWB/WWB support
311 - Might be a "trade book" publishing arm, seems to only contain a few books, including "The C Programming Language"
320 - Might be the "standard systems" trade books arm as opposed to the version/system specific documentation gotten from USL directly. This list contains books like the SVID, Bach's Design of the UNIX Operating System book, some programming guidance books, and the UNIX Programmer's Manual 5 volume series with the metallic alphabet blocks on the cover (echoing the V7 trade release). What's interesting is call number 320-X comes back around with SVR4 as the call code that a number of 386-specific manuals were published under.
341 - This one is very odd, a higher call number than any of the others, but the only docs I could find under this are the System V gold Document, Graphics, Programming, and Support Tools guides, which curiously weren't published under 301 like the rest of the documentation for that version.
Finally, some digestion from this research:
This gives some compelling version-support information in early SysV I wasn't aware of previously:
- System V Gold:
- PDP-11
- VAX-11
- 3B
- M68000
- Z8000
- System V R2:
- VAX-11
- 3B
- M68000
- NS32000
- iAPX 286
It appears Bell also opted to have different documentation sets for different processors in SVR2. We kinda see this later on with i386 variants of the SVR3 and SVR4 documents, but I don't think we ever quite see this wide of a spread of docs straight from AT&T after this.
Also, among the many documents (one I didn't add to the list yet) is one referring specifically to UNIX Release 5.3, not System V R3 or anything like that, but a Release 5.3. I know I've seen "Release 5.2" listed in a few places, which had me curious: is there a well-established record of what happened with internal (non-research) UNIX after System V was branched? Whether the development stream simply became System V development, or if there was still a totally separate UNIX 5.x branch for a while that, while borrowed into System V at necessary times, did still constitute a distinct branch of development after the initial System V release. I know there is at least evidence of aspects of System V being put into CB UNIX 2.3, meaning CB 2.3 was post-System V, which would make a compelling argument for there being some more development work between CB and USG folks before they put the final bow on the UNIX/TS project and formally routed all efforts to System V.
I'm sure there are other little nuggets of information hiding in there, but that's my digest from this thus far. If anyone knows of any other such efforts to produce a listing of all known UNIX documentation call numbers from AT&T, I'll happily contribute this to their efforts.
- Matt G.
P.S. SysV Gold scans are still inbound, just likely will be a winter project once the rains start and I can't go play outside.
Greetings,
I was looking at the various Usenix tapes we have in the TUHS archive,
trying to sort them all out.
In looking at ug091377.tar.gz in Applications/Usenix_77, I found this
paragraph at the end of its read_me
" Finally, if we have an executed Harvard License on file and
if there is room on your tape, the directory "h" contains the
newest (July 1977) release of the HRSTS system. We have also in-
cluded the old Toronto release in the directory "t" if it was re-
quested from a Toronto licensee."
This tape had the 'h' directory, so I'll be playing around with the HRSTS
system to see if I can get it booting in TUHS (I didn't know we had this
til now)... This tape did not have the 't' directory, however.
What is 'the old Toronto release'? I've not seen references to it so far
in the other histories of this early period I've encountered. And does
anybody have a copy of it?
Warner
I hate myself a little bit, but I posted an answer to the 'BSD license
origin' in this twitter thread
https://twitter.com/bsdimp/status/1572521676268802049
that people might find interesting.
Please note the caveats at the end of the thread: this is a bare outline
hitting the high points, taking only data from release files, with no
behind-the-scenes confirmation about why things changed, nor in-depth
exploration of variations that I know are present, nor did I go into
examples from various USENET postings from the time that stole the license
for people's own different uses.
Nonetheless, I hope it's useful...
Warner
Mike Haertel's quest for the 5620 tools got me thinking. Does anyone
know of an archive of the USL Toolchest at large? It would be cool if
someone had the whole thing on a single tape. But, I suspect many of us
have pieces of it. I'm not sure I know all the pieces that made it up. But
I would like to see the USL section of Warren's Archive include a
subdirectory Toolchest with the collected parts - from Korn shell, the final
version of Writer's Workbench, to DMD tools and the like. IIRC the final
edition of PCC2 was released as part of it.
thoughts?
Clem