> rules used ... to create British spelling from an American
> English database often leave a lot to be desired.
Among the BUGS listed for spell(1) in v7 was "British spelling was
done by an American".
Nevertheless, at least one British expat thanked me for spell -b.
He had been using the original "spell", and ignoring its reports
of British "misspellings". But, he said, long exposure to American
writing had infected his writing. Spell -b was a blessing, for it
revealed where his usage wobbled between traditions.
> I am curious if anyone on the list remembers much
> about the development of the first spell checkers in Unix?
Yes, intimately. They had no relationship to the PDP-10.
The first one was a fantastic tour de force by Bob Morris,
called "typo". Aside from the file "eign" of the very most common
English words, it had no vocabulary. Instead it evaluated the
likelihood that any particular word came from a source with the
same letter-trigram frequencies as the document as a whole. The
words were then printed in increasing order of likelihood. Typos
tended to come early in the list.
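The idea can be sketched like so; this is a loose illustration of ranking words by the document's own trigram statistics, not Morris's actual algorithm, and the smoothing and scoring details are my own choices:

```python
from collections import Counter

def rank_by_oddity(words):
    """Rank a document's words from least to most 'typical', judged
    by the document's own letter-trigram frequencies. A loose sketch
    of the idea behind typo(1), not Morris's actual algorithm."""
    counts = Counter()
    for w in words:
        padded = f" {w} "                  # mark word boundaries
        counts.update(padded[i:i + 3] for i in range(len(padded) - 2))
    total = sum(counts.values())

    def likelihood(w):
        padded = f" {w} "
        tris = [padded[i:i + 3] for i in range(len(padded) - 2)]
        prob = 1.0
        for t in tris:
            prob *= (counts[t] + 1) / (total + 1)   # smoothed probability
        return prob ** (1.0 / len(tris))            # geometric mean

    # least likely words (typo candidates) come first
    return sorted(set(words), key=likelihood)
```

A word whose trigrams are rare in the document floats to the top of the list, which matches the behaviour described above.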
Typo, introduced in v3, was very popular until Steve Johnson wrote
"spell", a remarkably short shell script that (efficiently) looks
up a document's words in the wordlist of Webster's Collegiate
Dictionary, which we had on line. The only "real" coding he did
was to write a simple affix-stripping program to make it possible
to look up plurals, past tenses, etc. If memory serves, Steve's
program is described in Kernighan and Pike. It appeared in v5.
Steve's program was good, but the dictionary isn't an ideal source
for real text, which abounds in proper names and terms of art.
It also has a lot of rare words that don't pull their weight in
a spell checker, and some attractive nuisances, especially obscure
short words from Scots, botany, etc, which are more likely to
arise in everyday text as typos than by intent. Given the basic
success of Steve's program, I undertook to make a more useful
spelling list, along with more vigorous affix stripping (and a
stop list to avert associated traps, e.g. "presenation" =
pre+senate+ion). That has been described in Bentley's "Programming
Pearls" and in http://www.cs.dartmouth.edu/~doug/spell.pdf.
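A toy illustration of affix stripping with a stop list, in the spirit of the approach described above; the word lists, affix rules, and the "e"-restoration hack are all my own inventions, not spell's:

```python
# Toy affix stripping with a stop list; every list and rule here is
# an invented example, not what spell actually used.
WORDS = {"senate", "nation", "present"}
STOP = {"pre+senate+ion"}        # plausible-looking derivations to reject
PREFIXES = ["pre", "re", "un"]
SUFFIXES = ["ion", "s", "ed", "ing"]

def derivations(word):
    """Yield (prefix, stem, suffix) decompositions of word."""
    for p in [""] + PREFIXES:
        for s in [""] + SUFFIXES:
            if word.startswith(p) and word.endswith(s):
                stem = word[len(p):len(word) - len(s)] if s else word[len(p):]
                if stem:
                    yield p, stem, s

def check(word):
    """Accept a word if it is listed, or derivable from a listed word
    by affix stripping -- unless the derivation is stop-listed."""
    if word in WORDS:
        return True
    for p, stem, s in derivations(word):
        # crude rule: "-ion" may have eaten a final "e" (senate + ion)
        candidates = {stem, stem + "e"} if s == "ion" else {stem}
        for cand in candidates:
            if cand in WORDS and f"{p}+{cand}+{s}".strip("+") not in STOP:
                return True
    return False
```

With this, "nations" and "presenting" pass via nation+s and present+ing, while "presenation" is caught: its only derivation, pre+senate+ion, is on the stop list.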
Morris's program and mine labored under space constraints, so
have some pretty ingenious coding tricks. In fact Morris has
a patent on the way he counted frequencies of the 26^3 trigrams
in 26^3 bytes, even though the counts could exceed 256. I did
some heroic (and probabilistic) encoding to squeeze a 30,000
word dictionary into a 64K data space.
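The classic trick for keeping a count that can exceed 255 in a single byte is probabilistic: store roughly the logarithm of the count. Whether the patent covers exactly this scheme is a guess on my part, but a sketch of such an approximate counter looks like:

```python
import random

class ApproximateCounter:
    """Keep an approximate event count in one small integer by storing
    roughly log2 of the count. A sketch of the well-known probabilistic
    counting trick; whether it matches Morris's patented scheme exactly
    is a guess on my part."""
    def __init__(self):
        self.c = 0                     # the single stored "byte"

    def increment(self):
        # bump the stored exponent with probability 2**-c
        if random.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        return 2 ** self.c - 1         # estimate of the true count
```

A fresh counter estimates 0, and the first increment always succeeds; after that the counter grows only occasionally, so even millions of events fit in one byte, at the cost of accuracy.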
Doug
Hi,
I found this paper by bwk referenced in the Unix manpages,
in v4 as: TROFF Made Trivial (unpublished),
in v5 as: TROFF Made Trivial (internal memorandom),
also in the v6 "Unix Reading List",
but not anymore in v7.
Anyone have a copy or a scan?
--
Leah Neukirchen <leah(a)vuxu.org> http://leah.zone
> From: Larry McVoy
> So tape I can see being more weird, but isn't raw disk just "don't put
> it in buffer cache"?
On machines/controllers which are capable of it, with raw devices DMA happens
directly into the buffers in the process (which obviously has to be resident
while the I/O is happening).
Noel
> From: Will Senn
> I don't quite know how to investigate this other than to pore through the
> pdp11/40 instruction manual.
One of these:
https://www.ebay.com/itm/Digital-pdp-Programming-Card-8-Pages/142565890514
is useful; it has a list of all the opcodes in numerical order; something none
of the CPU manuals have, to my recollection. Usually there are a flock of
these "pdp11 Programming Cards" on eBait, but I only see this one at the
moment.
If you do any amount of work with PDP-11 binary, you'll soon find yourself
recognizing the common instructions. E.g. MOV is 01msmr (octal), where 'm' is
a mode specifier, and s and r are source and destination register
numbers. (That's why PDP-11 people are big on octal; the instructions are easy
to read in octal.) More here:
http://gunkies.org/wiki/PDP-11_architecture#Operands
So 0127xx is a move of an immediate operand.
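The field layout described above can be checked mechanically; here is a sketch (the field names are mine, the layout is the standard PDP-11 double-operand format):

```python
def decode_double_operand(insn):
    """Split a PDP-11 double-operand instruction word into its fields:
    a 4-bit opcode, then 3-bit mode/register pairs for the source and
    destination. Field names are mine; the layout is standard."""
    return {
        "opcode":   (insn >> 12) & 0o17,   # 0o01 = MOV
        "src_mode": (insn >> 9) & 0o7,
        "src_reg":  (insn >> 6) & 0o7,
        "dst_mode": (insn >> 3) & 0o7,
        "dst_reg":  insn & 0o7,
    }

# 0o12700 is "mov $..., r0": source mode 2 (autoincrement) on
# register 7 (the PC) is how the PDP-11 encodes an immediate operand.
fields = decode_double_operand(0o12700)
assert fields["opcode"] == 0o01 and fields["src_mode"] == 0o2
assert fields["src_reg"] == 0o7 and fields["dst_reg"] == 0o0
```

Reading the word in octal makes the fields visible by eye, which is exactly why PDP-11 people are big on octal.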
>> You don't need to mount it on DECTape drive - it's just blocks. Mount
>> it as an RK05 image, or a magtape, or whatever.
> I thought disk (RK05) and tape (magtape) blocks were different...
Well, you need to differentiate between DECtape and magtape - very different
beasts.
DECtape on a PDP-11 _only_ supports 256 word (i.e. 512 byte) blocks, the same
as most disks. (Floppies are an exception when it comes to disks - sort
of. The hardware supports 128/256 byte sectors, but the usual driver - not in
V6 or V7 - invisibly makes them look like 512-byte blocks.)
Magtapes are complicated, and I don't remember all the details of how Unix
handles them, but the _hardware_ is prepared to write very long 'blocks', and
there are also separate 'file marks' which the hardware can write, and notice.
But a magtape written in 512-byte blocks, with no file marks, can be treated
like a disk; that's what the V6 distribution tapes look like:
http://gunkies.org/wiki/Installing_UNIX_Sixth_Edition#Installation_tape_con…
and IIRC 'tp' format magtape tapes are written the same way, hardware-wise (so
they look just like DECtapes).
Noel
> From: Will Senn
> (e) UNIX assembler uses the characters $ and "*" where the DEC
> assemblers use "#" and "@" respectively.
Amusing: the "UNIX Assembler Reference Manual" says:
The syntax of the address forms is identical to that in DEC assemblers,
except that "*" has been substituted for "@" and "$" for "#"; the
UNIX typing conventions make "@" and "#" rather inconvenient.
What's amusing is that in almost 40 years, it had never dawned on me that
_that_ was why they'd made the @->*, etc change! "Duhhhh" indeed!
Interesting side note: the UNIX erase/kill characters are described as being
the same as Multics', but since Bell pulled out of the Multics project fairly
early, I wonder if they'd used it long enough to get '@' and '#' hardwired
into their fingers. So I recently had the thought 'Multics was a follow-on to
CTSS, maybe CTSS used the same characters, and that's how they got burned in'.
So I looked in the "CTSS Programmer's Guide" (2nd edition), and no, according
to it (pg. AC.2.02), the erase and kill characters on CTSS were '"' and
'?'. So, so much for that theory!
> (l) The names "_edata" and "_end" are loader pseudo variables which
> define the size of the data segment, and the data segment plus the bss
> segment respectively.
That one threw me, too, when I first started looking at the kernel!
I don't recall if I found documentation about it, or just worked it out: it is
in the UPM, although not in ld(1) like one might expect (at least, not in the
V6 UPM; although in V7:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/man/man1/ld.1
it is there), but in end(3):
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man3/end.3
Noel
Why does the first of these incantations not present text, but the
second does (word is a file)? Neither errors out.
$ <word | sed 20q
$ <word sed 20q
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Clem Cole <clemc(a)ccc.com>
> IIRC Tom Lyons started a 370 port at Princeton and finished it at
> Amdahl. But I think that was using VM
Maybe this is my lack of knowledge of VM showing, but how did having VM help
you over running on the bare hardware?
Noel
https://en.wikipedia.org/wiki/Leonard_Kleinrock#ARPANET
``The first permanent ARPANET link was established on November 21, 1969,
between the IMP at UCLA and the IMP at the Stanford Research Institute.''
And thus from little acorns...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Will Senn
> he is addressing an aspect that was not addressed in either of the
> manual's entries and is very helpful for making the translation between
> PDP-11 Macro Assembler and unix as.
I'm curious - what aspect was that?
Noel
> From: Will Senn <will.senn(a)gmail.com>
> To bone up on assembly language, Lions's commentary is exceptionally
> helpful in explaining assembly as it is implemented in V6. The manual
> itself is really thin
Err, which manual are you referring to there? Not the "UNIX Assembler
Reference Manual":
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/doc/as/as
I would assume, but the 'as(I)' page in the UPM?
Noel
> From: Will Senn
> I'm off to refreshing my pdp-11 assembly language skills...
A couple of things that might help:
- assemble mboot.s and 'od' the result, so when you see something that matches
in the dump of the 0th block, you can look back at the assembler source, to see
what the source looks like
- read the boot block into a PDP-11 debugger ('db' or 'cdb' on V6, 'adb' on
V7; I _think_ 'adb' was available on V7, if not, there are some BSD's that
have it) and use that to disassemble the code
Noel
> The 0th block does seem to contain some PDP-11 binary - a bootstrap of
> some sort. I'll look in more detail in a bit.
OK, I had a quick look, and it seems to be a modified version of mboot.s:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/mdec/mboot.s
I had a look through the rest of the likely files in 'mdec', and I didn't find
a better match. I'm too lazy busy to do a complete dis-assembly, and work out
exactly how it's different, though..
A few observations:
000: 000407 000606 000000 000000 000000 000000 000000 000001
An a.out header, with the 0407 'magic' here performing its original intended
function - to branch past the header.
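For reference, the eight header words shown above decode like this (a sketch; the branch arithmetic is standard PDP-11, and my reading of the final word as a relocation-suppressed flag follows a.out(5)):

```python
# The eight words of block 0 shown above, read as a V6 a.out header.
words = [0o407, 0o606, 0, 0, 0, 0, 0, 1]
magic, text, data, bss, syms, entry, unused, relflag = words

# 0o407 doubles as a PDP-11 branch: 0o400 is "br", and the 7 in the
# low byte is a word offset, so it branches to . + 2 + 2*7 = . + 16,
# i.e. just past the 16-byte header -- its original intended function.
assert (magic & 0o177400) == 0o400
assert 2 + 2 * (magic & 0o377) == 16

# 0o606 bytes of text, no data or bss; the final 1 presumably flags
# that relocation information has been suppressed, per a.out(5).
assert text == 0o606 and data == 0 and bss == 0 and relflag == 1
```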
314: 105737 177560 002375
Some console I/O stuff - this two instruction loop waits for the input
ready bit to be set.
326: 042700 177600 020027 000101 103405 020027 000132 101002
More character processing - the first instruction clears the high bits of R0,
and the next two sets of two instructions compare the contents with two
characters (0101 and 0132), and branch.
444: 000207 005000 021027 000407 001004 016020
460: 000020 020006 103774 012746 137000 005007
This seems like the code that checks to see if the thing is an a.out file
(note the 'cmp *r0, $0407'), but the code is different from that code in
mboot.s; in that, the instruction before the 'clr r0' (at 0446 here) is a
'jsr', whereas in this it's an 'rts pc'. And the code after the 'cmp r0, sp'
and branch is different too. I love the '05007' - not very often you see
_that_ instruction!
502: 012700 177350 012701 177342 012711 000003 105711
Clearly the code at 'taper:' (TC11 version).
Noel
So, I came across this tape:
http://bitsavers.trailing-edge.com/bits/DEC/pdp11/dectape/TU_DECtapes/unix6…
I was curious what was on it, so I read the description at:
http://bitsavers.trailing-edge.com/bits/DEC/pdp11/dectape/TU_DECtapes.txt
UNIX1 PURDUE UNIX TAPES
UNIX2
UNIX4
UNIX6
HARBA1 HARVARD BASIC TAPE 1
HARBA2 HARVARD BASIC TAPE 2
MEGTEK MEGATEK UNIX DRIVER
RAMTEK RAMTEK UNIX DRIVER
Cool, sounds interesting, so I downloaded the unix6.dta file and fired
up simh - after some fiddling, I figured out that I could get a boot
prompt (is that actually from the tape?) if I:
set cpu 11/40
set en tc
att tc0 unix6.dta
boot tc0
=
At that point, I was stuck - the usual tmrk, htrk, and the logical
corollary tcrk didn't do anything except return me to the boot prompt.
I was thinking this was a sixth edition install tape of some sort, but
if it is, I'm not able to figure it out. I thought I would load the tape
into v7 and look at its content using tm or tp, but then I realized that
I didn't have a device set up for TU56 and even if I did, I didn't know
how to do a dir on a tape - yeah, I know, I will go read the manual(s)
in chagrin.
In the meantime, my question for y'all is similar to my other recent
questions, and it goes like this:
When you received an unmarked tape back in the day, how did you go about
figuring out what was on it? What was your process (open the box, know
by looking at it that it was an x rather than a y, load it into the tape
reader and read some bytes off it and know that it was a z, use unix to
read the tape using tm, tp, tar, dd, cpio or what, and so on)? What
advice would you give a future archivist to help them quickly classify
bit copies of tapes :).
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I don't think we had the Fourth Research Edition Unix Programmer's
Manual available in typeset form. I played a bit with the troff manual
pages on TUHS and managed to typeset it into PDF. You can find the PDF
document at https://dspinellis.github.io/unix-v4man/v4man.pdf.
I modernized the old shell scripts and corrected some minor markup
glitches through commits that are recorded on a GitHub repository:
https://github.com/dspinellis/unix-v4man. The process was surprisingly
smooth. The scripts for generating the table of contents and the
permuted index are based on the original ones. The few problems I
encountered in the troff source had to do with missing spaces after
requests, the ^F hyphenation character causing groff to complain, a
failure of groff to honor .li requests followed by a line starting with
a ., and two uses of a lowercase letter for specifying a font. I wrote
from scratch a script to typeset everything into one volume. I could
not find a shell script for typesetting the whole manual in any of the
Research Editions. I assume the process of running the typesetter was
so cumbersome, error prone, and time-consuming that it was manually
performed on a page-by-page basis. Correct me if I'm wrong here.
Diomidis Spinellis
It can be hard to visualise what is on a tape when you have no idea
what is on there.
Attached is a simple tool I wrote "back then", shamelessly copying an
idea by Paul Scorer at Leeds Poly (My video systems lecturer).
It is called tm (tape mark).
-Steve
> From: Arthur Krewat
> For anyone reading old tapes, I implore you to attempt to read data past
> the soft EOT ;)
The guy who read my tape does in fact do that; you'll notice my program has an
option for looking for data after the soft EOT.
Noel
> From: Will Senn
> I think I understand- the bytes that we have on hand are not device
> faithful representations, but rather are faithful representations of
> what is presented to the OS. That is, back in the day, a tape would be
> stored in various formats as would disks, but unix would show these
> devices as streams of bytes, and those streams of bytes are what
> have been preserved.
Yes and no.
To start with, one needs to differentiate three different levels; i) what's
actually on the medium; ii) what the device controller presented to the CPU;
and iii) what the OS (Unix in this case) presented to the users.
With the exception of magtapes (which had some semantics available through
Unix for larger records, and file marks, the details of which escape me - but
try looking at the man page for 'dd' in V6 for a flavour of it), you're correct
about what Unix presented to the users.
As to what is preserved; for disks and DECtapes, I think you are broadly
correct. For magtapes, it depends.
E.g. SIMH apparently can consume files which _represent_ magtape contents (i,
above), and which include 'in band' (i.e. part of the byte stream in the file)
meta-data for things like file marks, etc. At least one of the people who
reads old media for a living, when asked to read an old tape, gives you back
one of these files with meta-data in it. Here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tools/rdsmt.c
is a program which reads one of those files and converts the contents to a file
containing just the data bytes. (I had a tape with a 'dd' save of a
file-system on it, and wanted just the file-system image, on which I deployed
a tool I wrote to grok 4.2 filesystems.)
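A minimal sketch of such a conversion, assuming the common SIMH-style .tap layout (a 4-byte little-endian byte count before and after each record, data padded to even length, a zero count for a tape mark, 0xFFFFFFFF for end of medium) -- roughly what rdsmt.c is described as doing:

```python
import struct

def strip_tape_metadata(f, out):
    """Copy only the data bytes out of a tape image with in-band
    metadata. Assumes the common SIMH-style layout: each record is a
    4-byte little-endian byte count, the data (padded to even length),
    and the count repeated; count 0 is a tape mark, 0xFFFFFFFF is end
    of medium. A sketch, not a transcription of rdsmt.c."""
    while True:
        hdr = f.read(4)
        if len(hdr) < 4:
            break                        # ran off the end of the image
        (count,) = struct.unpack("<I", hdr)
        if count == 0:
            continue                     # tape mark: no data to copy
        if count == 0xFFFFFFFF:
            break                        # end of medium
        data = f.read(count + (count & 1))   # records padded to even length
        out.write(data[:count])
        f.read(4)                        # skip trailing repeat of the count
```

Feeding it a one-record image gives back just that record's bytes, with the counts and marks stripped.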
Also, for disks, it should be remembered that i) and ii) were usually quite
different, as what was actually on the disk included things like preambles,
headers, CRCs, etc, none of which the CPU usually could even see. (See here:
http://gunkies.org/wiki/RX0x_floppy_drive#Low-level_format
for an example. Each physical drive type would have its own specific low-level
hardware format.) So what's preserved is just an image of what the CPU saw,
which is, for disks and DECtapes, generally the same as what was presented to
the user - i.e. a pile of bytes.
Noel
> From: Will Senn
> So, I came across this tape:
> ...
> I was curious what was on it
'od' is your friend!
If you look here:
http://mercury.lcs.mit.edu/~jnc/tech/V6Unix.html#dumpf
there's a thing which is basically 'od' and 'dd' rolled in together, which
allows you to dump any block you want in a variety of formats (ASCII, 16-bit
words in octal [very useful for PDP-11 binary], etc). I wrote it under CygWin,
for Windows, but it only uses the StdIO library, and similar programs (e.g. my
disassembler) written that way work fine under Losenux.
Try downloading it and compiling it - if it doesn't work, please let me know;
it'd be worth fixing it so it does work on Linux.
> after some fiddling, I figured out that I could get a boot prompt (is
> that actually from the tape?)
The 0th block does seem to contain some PDP-11 binary - a bootstrap of some
sort. I'll look in more detail in a bit.
> I was thinking this was a sixth edition install tape of some sort, but
> if it is, I'm not able to figure it out.
From what I can see, it's probably a tp-format tape: the 1st block contains
some filenames which I can see in an ASCII dump of it:
speakez/sbrk.s
dcheck.c
df.c
intel/as80.c
intel/optab.8080
> v7 and look at its content using tm or tp, but then I realized that I
> didn't have a device set up for TU56
You don't need to mount it on DECTape drive - it's just blocks. Mount it as
an RK05 image, or a magtape, or whatever.
> When you received an unmarked tape back in the day, how did you go about
> figuring out what was on it?
Generally there would have been some prior communication, and the person
sending it would have told you what it was (e.g. '800 bpi tar', or whatever).
> What advice would you give a future archivist to help them quickly
> classify bit copies of tapes :).
Like I said: "'od' is your friend!"!! :-)
Noel
Random memories, possibly wrong.
In 1977/78 I was at udel and had done a fair amount of work on unix but as
a lowly undergrad did not get to go to the Columbia Usenix meeting. Ed
Szurkowski of udel went. Ed was the grad student who did hardware design
for 11s for Autotote (another story) but also stood up a lot of the early
unix 11s at udel starting in 1976, with an 11/70. Mike Muuss used
to come up and visit us at udel and Mike and Ed would try to ask questions
the other could not answer. Mike always had a funny story or two.
Ed later went to Bell Labs and I lost track of him.
The directions for the MTA were fairly clear: it listed a stop that you
under no circumstances should get off at, and if you did get off at, you
should not go up to the street, lest you never return. This was no joke.
Some places in NY were pretty hazardous in those days.
I *think* this was the meeting where Ken showed up with a bunch of
magtapes, and Ed claimed that, in Ken's word, they were "... found in the
street."
This part I remember well: Ed returning with two magtapes and our desire to
upgrade. We at udel, like many places, had done lots of our own mods to the
kernel, which we wanted to keep. So we ran a diff between trees, and I
wrote a merge with TECO and ed which got it all put together. I later
realized this was a very early form of 'patch', as it used patterns, not
line numbers, to figure out how to paste things back together. I really got
to love regex in those years.
Except for one file: the tools just would not merge them. Ed later realized
there was one key difference that we had not noticed, a missing comment,
namely, the Western Electric copyright notice ...
I'm kinda sorry that our "udel Unix" is lost to the great /dev/null, it
would be interesting to see it now.
ron
> From: Clem Cole
> stp is from the Harvard distribution.
The MIT PWB1 system I have has the source; the header says:
M. Ferentz
Brooklyn College of CUNY
September 1976
If it can't be found on TUHS, I can upload it.
No man page, though. :-(
Noel
Ralph Corderoy:
ed(1) pre-dates pipes. When pipes came along, stderr was needed, and
lots of new idioms were found to make use of them. Why didn't ed gain a
`filter' command to accompany `r !foo' and `w !bar'?
===
I sometimes wonder that too.
When I use `ed,' it is usually really qed, an extended ed
written by the late-1970s UNIX crowd here at U of T. (Rob
Pike, Tom Duff, David Tilbrook, and Hugh Redelmeier, I think.)
qed is something of a kitchen sink, full of clumsy programmability
features that I never use. The things that keep me using it are:
-- Multiple buffers, each possibly associated with a different
file or just anonymous
-- The ability to copy or move text (the traditional t and m
commands) between buffers as well as within one
-- The ability to send part or all of a buffer to a shell command,
to read data in from a shell command, or to send data out and
replace it with that from the shell command:
>mail user ...
<ps -ef
|tr a-z A-Z
I use the last quite often; it makes qed sort of a workbench for
manipulating and mining text. One can do the same with the shell
and temporary files, but using an editor buffer is much handier.
sam has similar abilities (but without all the needless programmability).
Were sam less clumsy to use in its non-graphical mode, I'd probably
have abandoned qed for sam.
Norman Wilson
Toronto ON (for real now)
> From: Ralph Corderoy
> Then the real definition, ending in an execution of the empty `q'.
> qq/4$^Ma2^[@qq
Gah. That reminds me of nothing so much as TECO (may it long Rest in Peace).
Noel
> Speaking of which, am I the only one annoyed by Penguin/OS' silly
> coloured "ls" output?
Syntax coloring, of which IDE's seem to be enamored, always
looks to me like a ransom note. For folks who like colorized
text, Writers Workbench had a tool that can be harnessed to
do a bang-up job of syntax colorizing for English: "parts"
did a remarkable job of inferring parts of speech in running
text.
Doug
I started with V6 Unix at UC Santa Barbara in 1977. I remember
that when V7 came out, I learned about the 'make' program and
started using it with great success to help me build a large
Fortran package for signal processing.
For its size, there was a lot going on in Santa Barbara at that
time. It was one of the first 4 Arpanet nodes, and there were
a bunch of companies making networking products and doing speech
research as a result.
I was a student at UC Santa Barbara but I started toying with
the idea of finding a real job, mostly to make more money.
I found several possibilities and went to interview at one.
This place had an a need for somebody to, in essence, be a
human 'make' program. The computer they used, some kind of
Data General, was so slow that they couldn't do a build more than
once or twice a day. So, in an attempt to speed up the build,
they wanted to hire somebody who would, by hand, keep track
of the last modification date of all the components in the
package they sold, and do a build that only performed
the necessary steps to generate the package - in other
words a human make program. Apparently they figured that
this would save enough time to justify the $24K salary they
were willing to pay. $24K in 1978 wasn't a bad salary at all.
I didn't take the job, but I've often thought that what I should
have done would have been to take the job under the condition
that I could mostly work remotely. Then, I could have used the
'make' program on our V7 Unix system to generate the optimal
script to build the package, and then taken the script back
to the company to run on the Data General computer. I figure
this would have taken maybe an hour a day. The rest of the time
I could have spent on the beach thinking about ways to spend that
$24K.
Jon Forrest
The infamous Morris Worm was released in 1988; making use of known
vulnerabilities in Sendmail/finger/RSH (and weak passwords), it took out a
metric shitload of SUN-3s and 4BSD Vaxen (the author claimed that it was
accidental, but the idiot hadn't tested it on an isolated network first).
A temporary "condom" was discovered by Rich Kulawiec with "mkdir /tmp/sh".
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Hi All.
In 1983, while a grad student at Ga Tech, I did some contract programming
at Southern Bell. The system was a PDP 11/70 running USG Unix 4.0 (yes,
it existed! That's another story.)
Beside ed, the system had a screen editor named 'se' (not related to the
Ga Tech 'se' screen editor). It apparently was written within AT&T.
ISTR that it was written mainly for Vaxen but had been squeezed and made to
run on the PDP 11.
Did anyone else ever use this? Know anything about it? I never saw it
or heard it about it again; it didn't ship with System V.
Thanks,
Arnold
I am somewhat embarrassed to admit that this just occurred to me. Is the
reason that SIGKILL has the numeric value 9 because cats are reported to
have nine lives? Clearly the connection between 'cat' and 'kill -9' would
make for an irreverent but harmless inside joke if so....
- Dan C.
> I especially liked the bit in which Tom's virus infected a multi-level secured UNIX system that Doug McIlroy and Jim Reeds were developing which they didn't spot until they turned on all their protections ... and programs started crashing all over the place.
That's not quite right. The system was running nicely with a
lattice-based protection system (read from below, write to above)
working fine. Processes typically begin at lattice bottom, but
move to higher levels depending on what data they see (including,
of course, any exec'ed file). All the standard utilities, being
usable by anyone, are at lattice bottom.
Cool, until you realize that highly trusted system programs
such as sudo are at lattice bottom and are protected only by
the old rwx bits, not by the read-write rules. So we followed an
idea of Biba's: integrity rules are the opposite of secrecy rules.
You want to forbid writing to high-integrity places, and reading
from low-integrity places.
This was done by setting the default security level away from
the lattice bottom. High-integrity stuff was below this floor;
high-secrecy above.
The Duff story is about the day we moved the floor off bottom.
An integrity violation during the boot sequence stopped the
system cold. Clearly we'd misconfigured something. But no, after
a couple of days of fruitless search, Jim Reeds lit up, "We
caught a virus!" We were unaware of Duff's experiment. He had
been chagrined when it escaped from one machine, but was able
to decontaminate all the machines in the center. Except ours,
which was not on the automatic software distribution list, since
it was running a different system.
> From: Andy Kosela
> That is why MIT and Bell Labs represented two very different cultures.
Oi! Not _everyone_ at MIT follows the "so complicated that there are no
obvious deficiencies" approach (to quote Hoare's wonderful aphorism from his
'Emperor's Old Clothes' Turing Award Lecture).
My personal design mantra (it's been at the top of my home page for decades)
is something I found as a footnote in Corbato and Saltzer, 'Multics: The First
Seven Years': "In anything at all, perfection has been attained, not when
there is nothing left to add, but when there is nothing left to take away..."
No doubt some people would be bemused that this should be in a Multics paper,
given the impression people have of Multics as incredibly - overly -
complicated. I'll avoid that discussion for the moment...
I've often tried to understand why some people create these incredibly
complicated systems. (Looking at the voluminous LISP Machine manual set from
Symbolics particularly caused this train of thought...) I think it's because
they are too smart - they can remember all that stuff.
Maybe my brain isn't like that (or perhaps I use large parts of it for other
stuff, like Japanese woodblock prints :-), but I much prefer simpler things.
Or maybe I'm just basically lazy, and like simpler things because they are
easier...
Noel
Hi,
ed(1) pre-dates pipes. When pipes came along, stderr was needed, and
lots of new idioms were found to make use of them. Why didn't ed gain a
`filter' command to accompany `r !foo' and `w !bar'?
To sort this paragraph, I
;/^$/w !sort >t
;/^$/d
-r t
I'd have thought that filtering was common enough to suggest a `^'
command with an implied `!'? (Not `|' since that was uncommon then.)
ex(1) has `!' that filters if applied to a range of lines, and this
carries through to vi's `!' that's often heavily used, especially when
the "file" is just a scratch buffer of commands, input, and output.
--
Cheers, Ralph.
https://plus.google.com/+RalphCorderoy
There's a story I heard once in supercomputing circles from the 80s, that
Ken visited CRI in Minneapolis, sat down at the console of a machine
running the then-new port of Unix to one of the Crays, typed a command, and
said something like "ah, that bug is still there."
Anybody know what the bug was?
It's time to assert my editorial control and say: no more 80 cols please!
Anybody who mentions 80 cols will be forced to use either a Hazeltine or
an ADM3 (not 3a) for a month.
Thanks, Warren
Jim "wkt" Moriarty:
> Anybody who mentions 80 cols will be forced to use either a Hazeltine or
> an ADM3 (not 3a) for a month.
=====
So who has a modern emulator for either of those terminals?
Norman Wilson
Toronto ON
(Still not really in Toronto, but no longer in Texas)
Does anyone know if the image
http://www.tuhs.org/Archive/Distributions/Research/Dennis_v6/v6root.gz
is somehow bootable as-is?
I wasn't able to figure out how to get it to boot, so I went on a quest
to make it bootable. Here's what I did - let me know if this was
overkill or misguided.
Basically, I downloaded the known bootable v6 distribution tape from
Wellsch directory in TUHS. I then extracted 101 blocks from the image
(tmrk, a bootblock, and who knows what else, but seriously what else is
on those first 100 blocks?), converted it to a simh compatible tape
format, and booted a simh generic pdp11/40 with my new little boot tape
and Dennis's root disk attached. I used tmrk to copy the bootstrap from
my little tape to Dennis's root disk (am I clobbering anything
important?). Then voila - it was bootable :)! I could have done it
straight off Ken's tape (after converting it to a simh tape format), but
I wanted to keep the little tape image around for use in other contexts.
Details for the curious are here:
https://decuser.github.io/bootable-tape-v6.txt
I thought the Ken Wellsch tape was basically the same as the Dennis
Ritchie disks, but now I'm not so sure - on Ken's tape, it boots to:
@rkunix
mem = 1035
RESTRICTED RIGHTS
Use, duplication or disclosure is subject to
restrictions stated in Contract with Western
Electric Company, Inc.
#
on Dennis' it boots to:
@rkunix
mem = 1036
#
Makes me curious to see what else is different. Maybe Dennis's was prior
to preparing an official distro where the rights were added to the kernel?
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Nemo:
And for that reason, I have never used Python. (I have a mental block
about that.)
====
I used to feel the same way. A few years ago I held my nose
and jumped in. I'm glad I did; Python is a nifty little
language that, now I know it, hits a sweet spot twixt low-level
C and high-level shell and awk scripts.
Denoting blocks solely by indentation isn't at all bad once
you do it; no worse than adapting from do ... end to C's {}.
What still bugs me about Python:
-- It is unreasonably messy to give someone else a copy of
a program composed of many internal modules. Apparently
you are expected to give her a handful of files, to be
installed in some directory whose name must be added to
the search path in every Python source file that imports
them. I have come up with my own hacky workaround but it
would be nice if the language provided a graceful way to,
e.g., catenate multiple modules into a single source file
for distribution.
-- I miss C's style of for loop, though not often. (Not
quite everything can be turned into a list or an iterator.)
-- I miss one particular case of assignment having a value:
that of
while ((val = function()) != STOP)
do something with val
Again, there are clunky ways to work around this.
As for 80 columns, I'm firmly in the camp that says that
if you need a lot of indentation you're doing it wrong.
Usually it means you should be pulling the details out
into separate functions. Functions that run on for many,
many lines (once upon a time it was meaningful to say
for many pages, but no longer) are just as bad, for the
same reason: it's hard to read and understand the code,
because you have to keep so many details in your head at
once.
Likewise for excessive use of global variables, for that
matter, a flaw that is still quite common in C code.
Having to break an expression or a function call over
multiple lines is more problematic. It's clearer not
to have to do that. It helps not to make function or
variable names longer than necessary, but one can carry
that too far too.
Style and clarity are hard, but they are what distinguishes
a crap hack programmer from a good one.
Norman Wilson
Toronto ON
(Sitting on the lower level of a train in Texas,
not on a pedestal)
So, 80 column folks, would you find this
a(b,
c,
d)
more readable than
a(b,c,d)
(this is a real example, with slightly shortened names)
would you have code review software that automatically bounces out lines
that are 82 columns wide? How far does this go?
I do recall 80 column monitors, but I started on 132 column decwriter IIs
and hence have never had sympathy for 80 columns. It's weird that so many
punched-card standards are required in our code bases now (see: Linux).
moving away from serious ... (look for Presotto's "I feel so liberated" ...)
http://comp.os.plan9.narkive.com/4W8iThHW/9fans-acme-fonts
Hi,
Everyone on the list is well aware that running V7 in a modern simulator
like SIMH is not a period-realistic environment, and some of the
"problems" facing the novice enthusiast are considerably different from
those of the era (my terminal is orders of magnitude faster and my
"tape" is a file on a disk). However, many of the challenges facing
someone in 1980, remain for the enthusiast, such as how to run various
commands successfully and how devices interoperate with unix. Of course,
we do have resources and some overlapping experience to draw on -
duckduckgo (googleish), tuhs member experience, and exposure to modern
derivatives like linux, macos, bsd, etc. We also have documentation of
the system in the form of the Programmer's Guide - as pdfs and to some
degree as man pages on the system (haven't found volume 2 documentation
on the instance).
My question for you citizens of that long-ago era :), is this - what was
it like to sit down and learn unix V7 on a PDP? Not from a hardware or
ergonomics perspective, but from a human information processing
perspective. What resources did you consult in your early days, and
what did the workflow look like in practical terms?
As an example - today, when I want to know how to accomplish a task in
modern unix, I:
1. Search my own experience and knowledge. If I know the answer, duh, I
know it.
2. Decide if I have enough information about the task to guess at the
requisite commands. If I do, then man command is my friend. If not,
I try man -k task or apropos task where task is a one word summary
of what I'm trying to accomplish.
3. If that fails, then I search for the task online and try what other
folks have done in similar circumstances.
4. If that fails, then I look for an OS specific help list
(linux-mint-help, freebsd forums, etc), do another search there, and
post a question.
5. If that fails, or takes a while, and I know someone with enough
knowledge to help, I ask them.
6. I find and scan relevant literature or books on the subject for
something related.
Repeat as needed.
Programming requires some additional steps:
1. look at source files including headers and code.
2. look at library dependencies
3. ask on dev lists
but otherwise, is similar.
In V7, it's trickier because apropos (or its functional equivalent,
man -k) doesn't exist, and books are hard to find (most deal with
System V or BSD). I do find the command 'find /usr/man -name "*" -a
-print | grep task' to be useful in finding man pages, but it's not as
general as apropos.
So, what was the process of learning unix like in the V7 days? What were
your goto resources? More than just man and the sources? Any particular
notes, articles, posts, or books that were really helpful (I found the
article, not the book, "The Unix Programming Environment" by Kernighan
and Mashey, to be enlightening
https://www.computer.org/csdl/mags/co/1981/04/01667315.pdf)?
Regards,
Will
Reading AUUGN vol. 1, number 4, p. 15, in a letter dated April 5,
1979, from Alistair Kilgour, Glasgow writing to Ian Johnstone, New South
Wales about a Unix meeting in the U.K. at University of Kent at
Canterbury (150 attended the meeting) with Ken Thompson and Brian
Kernighan...
Two paragraphs that I found interesting and fun:
Most U.K. users were astonished to hear that one thing which has
_not_ changed in Version 7 is the default for "delete character" and
"delete line" in the teletype handler - we thought we'd seen the last of
# and @! What was very clear was that version 7 is a "snapshot" of a
still developing system, and indeed neither speaker seemed quite sure of
when the snapshot was taken or exactly what it contained. The general
feeling among users at the meeting was that the new tools provided with
version 7 were too good to resist (though many had doubts about the new
Shell). We were however relieved by the assurance that there would
_never_ be a version 8!
...
Finally a quotation, attributed to Steve Johnstone, with which
Brian Kernighan introduced his excellent sales campaign for Unix on the
first day of the conference: " Using TSO is like kicking a dead whale
along the beach". Unix rules.
...
I knew it, it's not just me - those pesky # and @ characters were and
still are really annoying! Oh, and never say never. Unix does rule :).
Will
I’m trying to understand the origins of void pointers in C. I think they first appeared formally in the C89 spec, but may have existed in earlier compilers.
Of course, as late as V7 there wasn’t much enforcement of types and hence no need for the void* concept and automatic casting. I suppose ‘lint’ would have flagged it though.
In the 4BSD era there was caddr_t, which I think was used for pretty much the same purposes. Did ‘lint’ in the 4BSD era complain about omitted casts to and fro’ caddr_t?
Background to my question is research into the evolution of the socket API in 4.1x BSD and the persistence of ‘struct sockaddr *’ in actual code, even though the design papers show an intent for ‘caddr_t’ (presumably with ‘void*’ semantics, but I’m not sure).
Paul
> From: Will Senn
> what was it like to sit down and learn unix V7 on a PDP? ... What
> resources did you consult in your early days
Well, I started by reading through the UPM (the 8-section thing, with commands
in I, system calls in II, etc). I also read a lot of Unix documentation which
came as larger documents (e.g the Unix Intro, C Tutorial and spec, etc).
I should point out that back then, this was a feasible task. Most man pages
were really _a_ page, and often a short one. By the end of my time on the PWB1
system, there were about 300 commands in /bin (which includes sections II, VI
and VIII), but a good chunk (I'd say probably 50 or more) were ones we'd
written. So there were not that many to start with (section II was maybe 3/4"
of paper), and you could read the UPM in a couple of hours. (I read through it
more than once; you'd get more retained, mentally, on each pass.)
There were no Unix people at all in the group at MIT which I joined, so I
couldn't ask around; there were a bunch in another group on the floor below,
although I didn't use them much - mostly it was RTFM.
Mailing lists? Books? Fuhgeddaboutit!
My next step in learning the kernel was to start reading the sources. (I
didn't have access to Lyons.) I did a 'cref' of the entire system, and
transferred the results to a large piece of paper, so I could see who was
calling who in the kernel.
> What were your goto resources? More than just man and the sources?
That's all there was!
I should point out that reading the sources to command 'x' taught you more
than just how 'x' worked - you saw how people interacted with the kernel, what
it could do, etc, etc.
Noel
> I'd been moving in this direction for a while
Now that I think about it, I may have subconsciously absorbed this from Ken's
and Dennis' code in the V6 kernel. Go take a look at it: more than one level
of indent is quite rare in anything there (including tty.c, which has some
pretty complex stuff in it).
I don't know if this was subconscious but deliberate, or conscious, or just
something they did for other reasons (e.g. typing long lines took too long on
their TTY's :-), but it's very noticeable, and consistent.
It's interesting that both seem to have had the same style; tty.c is in dmr/, so
presumably Dennis', but the stuff in ken/ is the same way.
Oh, one other trick for simplifying code structure (which I noticed looking
through the V6 code a bit - this was something they _didn't_ do in one place,
which I would have done): if you have
    if (<condition>) {
            <some complex piece of code>
    }
    <implicit return>
}

invert it:

    if (!<condition>)
            return;
    <some complex piece of code>
}
That gets that whole complex piece of code down one level of nesting.
Noel
I sometimes put the following in shell scripts at the beginning
> /tmp/foo
2>/tmp/foo_err
Drives some folks up the wall because they don’t get it.
David
> On Nov 8, 2017, at 3:21 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Dave Horsfall <dave(a)horsfall.org>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] pre-more pager?
>
> On Wed, 8 Nov 2017, Arthur Krewat wrote:
>
>> head -20 < filename
>
> Or if you really want to confuse the newbies:
>
> < filename head -20
I am trying to find a paper. It was written at Bell Labs,
I thought by Bill Cheswick (though I cannot find it in his name),
entitled something like:
"A hacker caught, and examined"
A description of how a hacker got into Bell Labs, and was quarantined on
a single workstation whilst the staff watched what they did.
Does this ring any bells? Anyone have a link?
I know about The Cuckoo's Egg, but this was a paper, in troff and -ms macros
as I remember, not a book.
Thanks,
-Steve
On 10 November 2017 at 10:50, Nemo <cym224(a)gmail.com> wrote:
> On 10 November 2017 at 04:56, Alec Muffett <alec.muffett(a)gmail.com> wrote:
>> http://www.cheswick.com/ches/papers/berferd.pdf ?
>
> Wonderful! I first read this as an appendix in his book and now I
> know a second edition of the book is out.
>
> N.
Egg on my face (and keyboard): (1) I failed to send this to the list;
and (2) I already have both editions.
Apologies, all, especially to Alec.
N.
I happened to come across a 1975 report from the University of Warwick
which includes a section on the state of computer networking.
(https://stacks.stanford.edu/file/druid:rq704hx4375/rq704hx4375.pdf)
It contains a section that appears to be a summary of a chat with Sandy
Fraser about Spider (pp. 66-69). It has some information on Spider network
software and Unix that is new to me, and I find amazing. I had not expected
some of this stuff to exist in 1975.
Below some of the noteworthy paragraphs:
[quote]
Spider is an experimental packet switched data communications system that
provides 64 full-duplex asynchronous channels to each connected terminal
(= host computer). The maximum data-rate per channel is 500K bits/sec. Error
control by packet retransmission is automatic and transparent to users.
Terminals are connected to a central switching computer by carrier transmission
loops operating at 1.544 Mb/s, each of which serves several terminals. The
interface between the transmission loop and a terminal contains a stored program
microcomputer. Cooperation between microcomputers and the central switching
computer effects traffic control, error control, and various other functions
related to network operation and maintenance.
The current system contains just one loop with the switching computer (TEMPO I),
four PDP-11/45 computers, two Honeywell 516 computers, two DDP 224 computers,
and one each of Honeywell 6070, PDP-8 and PDP-11/20. In fact many of these are
connected in turn to other items of digital equipment.
Spider has been running since 1972 and recent work has shifted away from the
design and construction of the network itself to developing user services to
support other research activities at Bell Labs. A major example of this has
been to construct a free-standing file store (extracted in fact from UNIX [88])
and connect it to the network. This is available as a service to any user of
the network.
[...]
The ring is used in different ways by the various computers connected to it.
The filestore has already been mentioned. Two computers use this for conventional
back-up, and access it on a file-by-file basis.
Two other machines - dedicated to laboratory experiments - access it on a
block-within-file basis. To help with program development for these dedicated
machines, the UNIX system (available on yet more computers) is used during
"unavailable" periods for editing and program preparation. The user then leaves
his programs in the filestore ready to load when he next gains access to the
dedicated machine.
Two other "dedicated" machines provide the user interface of UNIX, but lack
peripherals and a UNIX kernel! In place of both is a small amount of software
that transmits all calls on the UNIX system to a full UNIX system elsewhere!
The ring system with its filestore also provides a convenient buffering service.
Finally Fraser tells of the time where one of the PDP-11 machines was delivered
sans discs. A small alteration to a UNIX system diverted all disc transfer
requests to the filestore, where a suitable amount of disc was made available.
The system ran a full swapping version of UNIX at about a quarter of the
normal speed.
[/quote]
> From: Jon Forrest
> In the early days of Unix I was told that it wasn't practical to write a
> pager because such a thing would have to run in raw mode in order to
> process single letter commands, such as the space character for going on
> to the next page. Since raw mode introduced a significant amount of
> overhead on already overtaxed machines, it was considered an anti-social
> thing to do.
Something sounds odd here.
One could have written a pager which used 'Return' for each new page, and
run it in cooked mode, and it would not have used any fewer cycles (in
fact more, IIRC the cooked/raw differences in handling in the TTY driver).
But that's all second-order effects anyway. If one were using a serial line
hooked up to a DZ (and those were common - DH's were _much_ more expensive, so
poor places like my lab at MIT used DZ's), then _every character printed_
caused an interrupt. So the overhead from printing each screen-ful of text was
orders of magnitude greater than the overhead of the user's input to get the
next screen.
> IIRC later versions of Unix added the ability to respond to a specific
> list of single characters without going into raw mode. Of course, that
> didn't help when full-screen editors like vi and the Rand editor came
> out.
Overhead was definitely an issue with EMACS on Multics, where waking up a
process on each character of input was significant. I think Bernie's Multics
EMACS document discusses this. I'm pretty sure they used the Telnet RCTE
option to try and minimize the overhead.
Noel
Hi,
In looking around the system v7 environment, I don't see a more command anywhere. I downloaded, converted, and attached 1bsd, 2bsd, and finally 3bsd and it was there that I found source for more... 3bsd looks like it's for VAX, not PDP-11, and it doesn't want to compile (looking for some externs that I gather are part of the distro's clib).
I may jump ship on V7 and head over to 2.9BSD, which, as I understand it, is a V7 with fixes and these kinds of additional tools...
In the meantime, how did folks page through text like man sh and such before more? I know how to view sections of text using sed, and ed is OK for paging file text (painful, but workable). I just can't seem to locate the idiomatic way of keeping everything from constantly scrolling out of view! Obviously, this isn't a problem on my mac as terminal works fine, but I like to try to stay in character as a 1970s time traveling unix user :).
Thanks,
Will
Sent from my iPhone
> I do recall 80 column monitors, but I started on 132 column decwriter
> IIs and hence have never had sympathy for 80 columns. It's weird that so
Interesting. I wonder if that's where the 132 column (alternative)
standard came from.
No. IBM's printers were 132 columns even before stored-program
computers.
> From: Steve Simon
> At the risk of stirring up another hornets nest...
I usually try not to join in to non-history threads (this list has too much
random flamage on current topics, IMNSHO), but I'll make an exception here...
> this means my code is usually fairly narrow.
I have what I think are somewhat unusual thoughts on nesting depth, etc, which
are: keep the code structure as simple as possible. That means no complex
nested if/then/else structures, etc (more below).
I'd been moving in this direction for a while, but something that happened a
long time ago at MIT crystalized this for me: a younger colleague brought me a
routine that had a complex nested if/etc structure in it, which had a bug; I
don't recall if he just wanted advice, or what, but I recall I fixed his
problem by..... throwing away half his code. (Literally.)
That really emphasized to me the downside of complexity like that; it makes it
harder to understand, harder to debug, and harder for someone else (or you
yourself, once you've been away from it for a while) to modify it. I've been
getting more formal about that ever since.
So, how did I throw away half his code? I have a number of techniques. I try
not to use 'else' unless it's the very best way to do something. So instead
of:
    if (some-conditional) {
            <some code>;
    }
    else {
            <some other code>;
    }

(iterated, recursively) do:

    if (some-conditional) {
            <some code>;
            xxx;
    }
    <some other code>;
where xxx is a 'return', or a 'break', or a 'continue'. That way, you can
empty your brain once you hit that, and start with a clean slate.
This is also an example of another thing I do, which is use lots of simple
flow-control primitives; not go-tos (which I abhor), but loop exits, restarts,
etc. Again, it just makes things simpler; once you hit it, you can throw away
all that context/state in your mind.
Keep the control flow structure as simple as possible. If your code is several
levels of indent deep, you have much bigger problems than fitting it on the
screen.
I wrote up a complete programming style guide for people at Proteon working on
the CGW, which codified a lot of this; if there's interest, I'll see if I can
find a copy (I know I have a hardcopy _somewhere_).
Noel
Random832:
... and "p" (which is very minimalistic, despite using a few
V8-specific library features, but V8 isn't in the web-accessible source
archive) from Version 8 research unix.
====
p actually originated outside Bell Labs. I think it was
written in the late 1970s at the University of Toronto.
I first saw it at Caltech, on the UNIX systems in the
High Energy Physics group that I ran for four years.
The first few of those systems were set up by Rob Pike,
who was at Caltech working on his masters degree (in
astrophysics, I think); p was there because Rob brought
it.
For those who don't know it, p has quite an elegant
design. Its default page size is 22 lines, which
nicely fit the world of its time: allowed a couple
of lines of context between pages on a 24-line CRT;
evenly divided the 66-line pages (still!) output
by nroff and pr. It uttered no prompt and required
neither raw nor cbreak nor even no-echo mode: it
printed the final line of each page sans newline;
to continue, you typed return, which was echoed,
completing that line before the next was printed.
It buffered text internally to allow backing up a few
pages, so it was possible to back up even when input
didn't come from a file (something I'm not sure the
more of the time could do).
Internally the buffering was done with a standalone
library that could have been used in other programs,
though I don't know whether it ever was.
p also led me to an enlightening programming story.
One day I was looking over the code, trying to understand
just how the buffering worked; part of it made no sense
to me. To figure it out, I decided to write my own
simple, straightforward version with the same interface;
test and debug it carefully; then lay printouts of the
two implementations side-by-side, and walk through some
test cases. This revealed that the clever code I couldn't
make out in the original was actually buggy: it scrambled
data (I forget the details) when read returned less than
a full buffer.
p (my version) is one of the several programs I bring along
to every new UNIX-derivative environment. I use it daily.
I have also recently noticed a new bug: on OpenBSD, it
sometimes scrambles the last few lines of a file. I have
figured out that that has something to do with whether
fclose (my version uses stdio) is called explicitly or
by exit(3), but I don't know yet whether it's the library's
fault or my own.
Even the simplest programs have things to teach us.
Norman Wilson
Toronto ON
> From: Don Hopkins
> https://stackoverflow.com/questions/268132/invert-if-statement-to-reduce-ne…
Thanks for that - interesting read. (Among other things, it explains where the
'only have one return, at the end' comes from - which I never could understand.)
> Nested if statements have more 'join' points where control flow merges
> back together at the end instead of bailing out early
Exactly. In high-level terms, it's much like having a ton of goto's. Yes, the
_syntax_ is different, but the semantics - and understanding how it works - is
the same - i.e. hard.
Noel
From a mailing list I belong to, a back-and-forth is going on that I am
not involved in. The following sums it up nicely:
> It's really the implied equality that's the problem. For example:
>
> if (flags & DLADM_OPT_PERSIST) {
>
> would be better written as:
>
> if ((flags & DLADM_OPT_PERSIST) == 0) {
Seriously? What do (or would) "original C" programmers think of this? To me, anything non-zero is "true" so the first is perfectly acceptable.
The original assertion in the discussion was that the following is not "right" because of the mixing of bitwise and boolean.
> if ((flags & DLADM_OPT_PERSIST) && (link_flags & DLMGMT_PERSIST)) {
art k.
At the risk of stirring up another hornets nest...
I never accepted the Microsoft
“write functions for maximal indentation, with only one return”
and stick to
“get out of the function asap, try to have success at the bottom of the function”
style.
this means my code is usually fairly narrow.
-Steve
I found head on 3BSD and thought it might be as simple to compile as
cr3; unfortunately, it isn't. I did:
$ cc head.c
head.o?
$ cc -c head.c
head.o?
$ pcc head.c
head.c
head.o?
I thought the assembler, as, was cryptic; at least there you get a
one-character error response. What is cc trying to say? Obviously
head.o won't exist unless cc succeeds...
Thanks,
Will
On 6 November 2017 at 19:36, Ron Natalie <ron(a)ronnatalie.com> wrote:
> It’s worse than that. “char” is defined as neither signed nor unsigned.
> The signedness is implementation defined. This was why we have the inane
> “signed” keyword.
What was that story about porting an early UNIX to a machine with
different char polarity? I dimly recall only a few problems.
N.
> In the meantime, how did folks page through text like man sh
Chuckle. "Text like man sh" wasn't very long back then.
.,.20p was quite an effective pager. It could go backward,
and it didn't wipe out the screen (which can destroy the
record of the problem that caused you to consult a reference).
I still do it from time to time.
Doug
>
> Regarding the links and old bsd's. The binary cr3 on 1bsd works in v7.
> Also, the book I'm reading has a c program that does paging. But, I'm
> always off down the rabbit hole... I tried to compile the cr3.c source
> and I get this error:
>
> # cc cr3.c
> Undefined:
> _fout
> _flush
> _getc
> _end
>
> My understanding is that cc includes libc by default, so these must not
> be in libc. But getc is standard lib, so what am I missing?
That source is for V6 not V7. V6 did not have the stdio lib yet, but a precursor to that.
The binary you are using has the older io routines statically linked in.
Paul
From: Ron Natalie
> We actually still had some real DEC DH's on our system.
> ...
> At least the DZ doesn't loop on the ready bit like the kernel printf
This reminds me of something I recall reading about John McNamara (designer of
the DH11) admitting that he'd screwed up a bit in the DH design; IIRC it was
that if you set the input silo alarm (interrupt) level to something greater
than 1 character, and someone types one character, and then nothing
else... you never get an input interrupt!
(Which is why some Unix DH driver which sets the silo alarm level > 1 - to get
more efficient processing by reducing the number of interrupts _if possible_ -
has to call a 'input characters ready from the DH' routine in the clock
interrupt code.)
IIRC McNamara said he should have included a timeout, so that if the silo
count was non-zero, and stayed that way for a while, it should have caused
a timeout, and an interrupt.
I was just looking for this, but couldn't find it. I thought it was here:
http://woffordwitch.com/McNamaraDH11.asp
but it doesn't seem to be. Does anyone recall seeing this anywhere, and if so,
where? Web search engines didn't turn anything up, alas...
Noel
> From: Clem Cole
> many (most) Unix sites used Able DHDM's which were cheaper than DEC DZ's
Oh, our DZ's weren't DEC, but some off-brand (I forget what). We were too poor
to afford DEC gear! :-) (Our machines, first a /40, and later a /45, were
hand-me-down's.)
Noel
> From: Will Senn
> how did folks page through text like man sh and such before more?
We wrote our own versions of more. Here is one from the roughly-PWB1 systems
at MIT:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s2/p.c
but on looking at it, I'm not 100% sure it's the one I used there (which is
documented in the MIT UPM).
Here's one I wrote for myself for use on V6:
http://mercury.lcs.mit.edu/~jnc/tech/V6Unix.html#UCmds
before I retrieved all the MIT sources (above), if you want something to
actually use on V6/V7.
Noel
I wrote a snippet from my K&R C studies to convert tabs and backspaces
to \t \b to display them, the code looks like this:
/* ex 1-8 */
#include <stdio.h>

main()
{
        int c;

        while ((c = getchar()) != EOF) {
                if (c == '\t')
                        printf("\\t");
                else if (c == '\b')
                        printf("\\b");
                else
                        putchar(c);
        }
}
I'm not looking for code review, but the code is intended to replace the
tabs and backspaces with \t and \b respectively, but I haven't been able
to test it because I can't seem to make a backspace character appear in
input. In later unices, ^V followed by the backspace would work, but
that's not part of v7. Backspace itself is my erase character, so
anytime I just type it, it backspaces :).
Thanks,
Will
Arthur Krewat <krewat(a)kilonet.net> writes on Mon, 6 Nov 2017 19:34:34 -0500
>> char (at least these days) is signed. So really, it's 7-bit ASCII.
I decided last night to investigate that statement, and updated my
C/C++ features tool to test the sign and range of char and wchar_t.
I ran it in our test lab with physical and virtual machines
representing many different GNU/Hurd, GNU/Linux, *BSD, macOS, Minix,
Solaris, and other Unix family members, on ARM, MIPS, PowerPC, SPARC,
x86, and x86-64 CPU architectures. Here is a summary:
% cat *.log | grep '^ char type is' | sort | uniq -c
157 char type is signed
3 char type is unsigned
The sole outliers are
* Arch Linux ARM on armv7l
* IBM CentOS Linux release 7.4.1708 on PowerPC-8
* SGI IRIX 6.5 on MIPS R10000-SC
for which I found these log data:
Character range and sign...
CHAR_MIN = +0
CHAR_MAX = +255
SCHAR_MIN = -128
SCHAR_MAX = +127
UCHAR_MAX = +255
char type is unsigned
signed char type is signed
unsigned char type is unsigned
The last two lines are expected, but my program checked for an
incorrect result, and would have produced the string "WRONG!" in the
output; no system had that result.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Apologies in advance if this is found too far off list or offensive, but
somehow I think many on this list might find it amusing. One of my
friends who stayed academic sent this to me…. His comment was that this
surfaced when students were asking for better music to code to:
https://www.youtube.com/watch?v=0rG74rG_ubs
Warning: the language is not PG, but it is ‘rapper cursing’ and might even
be allowed over the airwaves without ‘beeping’ by some stations. That said,
I suggest headphones so as not to offend anyone with the language.
Many thanks for all that background to the origins of void pointers.
Now for applying that to the early sockets API.
In the first (1981) and second (4.1a, April 1982) revisions of that API, socket addresses were passed as a fixed 16-byte structure, essentially the same as the current struct sockaddr. By the time of the third revision (4.1c/4.2, 1983) it had become a variable-sized opaque object, passed around as a pointer and a size.
The 1983 4.2BSD system manual (http://www.cilinder.be/docs/bsd/4.2BSD_Unix_system_manual.pdf) describes it that way, e.g. defining connect() as:
connect(s, name, namelen);
int s; caddr_t name; int namelen;
Note the use of caddr_t in the user level signature. Yet, the actual source code for 4.1c/4.2 BSD only uses caddr_t in the kernel (as pointed out on this list), but continues to use struct sockaddr * in the user land interface. It would seem to me today that void * would have been a more logical choice and with void * having been around for about 3 years in 1983, it might have seemed logical back then as well -- after all, this use case is similar to the malloc() use case. It would have avoided the need for a longish type cast in socket api calls (without further loss of type safety, as that was already lost with the required cast from e.g. struct sockaddr_un* to struct sockaddr* anyway).
Related to this, from the "4.2bsd IPC Primer” (1983, https://www2.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-145.pdf , page 5), it would appear that the format of socket addresses was perhaps unfinished business:
- Apparently at some point in time simple strings were considered as unix domain addresses, i.e. not a sockaddr_un structure. Maybe it was limping on this thought that caused the prototype soackaddr structure not to have a size field — having had that would have simplified the signature of many socket API functions (interestingly, it would seem that such a size field was added in 4.3BSD some 6, 7 years later).
- The examples show no type casts. This may have been for didactic clarity, but perhaps it also suggests a signature with void* semantics.
I’d be happy for any background on this.
Also, about halfway down this page http://tech-insider.org/vms/research/1982/0111.html there is mention of CSRG technical report #4, "CSRG TR/4 (Proposals for the Enhancement of Unix on the Vax)”. I think this may be the initial discussion document from the summer of 1981 that evolved into the 4.2 system manual. Would anybody have a copy of that document?
Paul
Ok... so I got vi to work full screen in a telnet session to the DZ port in V8. BTW TERM=vt132 seems to be the best option given the existing termcap. Yay. Now I'm a happy camper with my v8 instance and I'm reading Rochkind's book and learning lots more about everything from unix error codes to system programming etc. V8 is much more familiar ground to me than earlier versions (mostly vi) at this point.
Anyway, my first question of the day is this - is vt132 the best that I can do terminalwise?
I'm not totally up to speed on terminals in ancient (or modern) unices, but I've been reading, and it seems that if I can match a termcap entry to my emulated terminal with a majority of available capabilities, that I would reach screen nirvana in my instance. Now, it also seems like my mac will emulate different terminals and defaults to something like xterm-256. I don't expect color to be supported, but I don't really know. This leads to a second question, if I take an xterm termcap entry and put it in my termcap file, will it work better than the vt entries?
Is my logic correct, or am I thinking incorrectly about this?
Sidenote: now that I'm in v8 and having used v6 and v7 - McIlroy's reader actually is much, much more interesting and handy! Thanks, Doug!
Sent from my iPhone
What should I set TERM to on V8 to get the best results on my Mac
Terminal. If I set it to vt52, vt100, or vt132, only 8 lines appear at
the bottom of the terminal window (of about 24 lines):
---
root::0:4:m0130,m322:/:
daemon:x:1:1:m0000,m000:/:
sys:sorry:2:1:m0130,m322:/usr/sys:no-login
bin:sorry:3:4:m0130,m322:/bin:
ken:sorry:6:1:m0130,m322:/usr/ken:
dmr:sorry:7:4:mh1092,m069:/usr/dmr:
nuucp::238:1:mh2019,m285,uucp:/usr/spool/uucppublic:/usr/lib/uucp/uucico
uucp::48:1:mh2019,m285,nowitz:/usr/lib/uucp:
"passwd" 20 lines, 770 characters
----
The 8 line window works about like I'd expect - the arrow keys move up
and down until the screen needs to scroll, then B's and A's show up. I'm
used to that on BSD. Using the j and k keys work better and when I
scroll down enough lines, the lines move up to fill the whole terminal
window and the file can be edited in the full window. Is there a better
TERM setting that will get 24 lines to show up on file open?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
As has been explained, void came from Algol 68 via Steve Bourne.
As no object could be declared void, void* was a useless
construct. A kind of variable that could point to any object
was required to state the type of useful functions like qsort.
To avoid proliferation of keywords, the useless void* was
pressed into service. I do not remember who suggested that.
Doug
> From: Clem Cole
> typing hard started to become more important in the kernel.
I can imagine! The V6 kernel had all sorts of, ah, 'unusual' typing - as I
learned to my cost when I did that hack version of 'splice()' (to allow a
process in a pipline to drop out, and join the two pipes together directly),
which I did in V6 (my familiar kernel haunt).
Since a lot of code does pointer math to generate wait 'channel' numbers,
e.g.:
sleep(ip+2, PPIPE);
when I naively (out of habit) tried to declare my pointers to be the correct
type, the math didn't work any more! ('ip', in this particular case, was
declared to be an 'int *'.)
No doubt part of this was inherited from older versions (of the system, and
C); the code was working, and there was no call to tweak it. The lack of
casts/coercion in the V6 C compiler may have been an issue, too - I had to do
some equally odd things to make my splice() code work!
Noel
This caught my attention. Did early C really have min and max? Were they used for anything? In those days I was a BCPL user, which IIRC, did not have such things.
-Larry
> Begin forwarded message:
>
> From: Leo Broukhis <leob(a)mailcom.com>
> Subject: [Simh] An abandoned piece of K&R C
> Date: 2017, November 3 at 1:14:42 AM EDT
> To: "simh(a)trailing-edge.com" <simh(a)trailing-edge.com>
>
> https://retrocomputing.stackexchange.com/q/4965/4025 <https://retrocomputing.stackexchange.com/q/4965/4025>
>
> In the UNIX V7 version of the C language, there were the /\ (min) and the \/ (max) operators. In the source of the scanner part of the compiler,
>
> case BSLASH:
> if (subseq('/', 0, 1))
> return(MAX);
> goto unkn;
>
> case DIVIDE:
> if (subseq('\\', 0, 1))
> return(MIN);
> ...
>
> However, attempting to use them reveals that the corresponding part in the code generator is missing. Trying to compile
>
> foo(a, b) { return a \/ b; }
>
> results in
>
> 1: No code table for op: \/
>
> The scanner piece survived in the copies of the compiler for various systems for several years. I tried to look for copies of the code generator table which would contain an implementation, but failed. Has anyone ever seen a working MIN/MAX operator in K&R C?
>
> Thanks, Leo
>
> _______________________________________________
> Simh mailing list
> Simh(a)trailing-edge.com
> http://mailman.trailing-edge.com/mailman/listinfo/simh
Am I the only one having trouble? I mirror the site, and I'm now seeing:
aneurin# tuhs
+ rsync -avz minnie.tuhs.org::UA_Root .
rsync: failed to connect to minnie.tuhs.org (45.79.103.53): Operation timed out (60)
rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]
+ rsync -avz minnie.tuhs.org::UA_Applications Applications
rsync: failed to connect to minnie.tuhs.org (45.79.103.53): Operation timed out (60)
rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]
Etc.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
UNIX was half a billion (500000000) seconds old on Tue Nov 5 00:53:20
1985 GMT (measuring since the time(2) epoch) -- Andy Tannenbaum.
(Yeah, an American billion, not the old British one.)
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Date: Tue, 10 Oct 89 13:37 CDT
From: Chris Garrigues <7thSon(a)slcs.slb.com>
Subject: A quote from Interop '89
To: unix-haters(a)ai.ai.mit.edu
Cc: 7thSon(a)slcs.slb.com
"We all know that the Internet is Unix. We proved that last
November."
- David Mills, October 2, 1989
> From: Arnold Skeeve
> I suspect that he was also still young and fired up about things. :-)
> ...
> (In other words, he too probably deserves to be cut some slack.)
Much as RTM was cut some slack?
The thing is there's a key difference. RTM didn't _intend_ to melt down the
network, whereas Gene presumably - hopefully - thought about it for a while
before he made his call to inflict severe punishment.
Did RTM do something wrong? Absolutely. Did he deserve some punishment?
Definitely. But years in jail? Yes, it caused a lot of disruption - but to any
one person, not an overwhelming amount.
Luckily, the judge was wise enough, and brave enough, to put the sentencing
guidelines (and the DoJ recommendation, IIRC) to one side.
However, that too was not without a cost; it was one more stone added to what
is admittedly already a mountain of precedent that judges can ignore the
legislature's recommendations - and once one does it, another will feel more
free to do so. And so we pass from a government of laws to a government of
men.
But I don't give Gene the lion's share of the blame: that has to go to Rasch,
and his superiors at the DoJ, who were apparently (as best I can understand
their motives) willing to crush a young man under a bus to make a point. The
power to prosecute and punish is an awesome one, and should be wielded
carefully and with judgement, and it was their failure to do so that really
was the root cause.
Noel
I think "classlessness" is intended as an antonym of "classy".
Spafford with high dudgeon called early for punishment. He had tempered
it somewhat by the time he wrote his CACM article, published in June
1989. But still some animus shows through, in "even-handedly"
speculating about whether the worm was intended as a lark or as
something nefarious. He evidently had mellowed a lot by the
time of the last quotation below.
In the CACM article Spaff quoted someone else as suggesting that
Morris did it to impress Jodie Foster, and he called Allman's
back door in Sendmail a debugging feature that people could
optionally turn off. As far as I know it was not disclosed that
DEBUG allowed remote control of Sendmail. In fact Sendmail was
so opaque that Dave Presotto declined to install it and wrote
his own (upas) for Research.
I don't recall the cited "contest". And Dennis's reaction to
the CACM article seems somewhat harsh. But the context is that
Spafford's overheated initial reaction did not win friends in
research.
>
> Can anyone remember or decipher what this was about???
>
> Date: 24 Mar 90 06:52:43 GMT
> From: dmr(a)alice.att.com
> Subject: Re: Contest announcement
> To: misc-security(a)uunet.uu.net
>
> My own contest is "Most appalling display of classlessness in dealing with
> a serious subject." The nominees are:
>
> 1) National Center for Computer Crime Data, Security Magazine, and
> Gene Spafford, for their "How High Shall We Hang Robert Morris?"
> contest.
>
> 2) Gene Spafford, for the most tasteless article ever to appear in CACM
> (special credits for the Jodie Foster joke).
>
> Dennis Ritchie
>
> Some context maybe?
>>
>> “He has not tried to make any money or work in this area,” Purdue
>> University computer science professor Eugene Spafford said of Morris
>> in an interview with The Washington Post. “His behavior has been
>> consistent in supporting his defense: that it was an accident and he
>> felt badly about it. I think it’s very much to his credit that that has
>> been his behavior ever since.”
Arnold:
> OK, that I can understand. It's ages since I played with
> readline, but I think you can preload the buffer it works on
> (bash does that, no?) so ed + readline could be made to work
> that way.
====
Or, if you have moved beyond the era of simulated glass
teletypes on graphics screens, you could do the editing
in the terminal (program).
It's a real shame the mux/9term way of doing things never
caught on. I suppose it is because so many people are
wedded to programs that require cursor addressing; I'm
glad I never succumbed to that.
I use ed (or its cousin qed a la Toronto) for simple stuff.
Mostly I'll use the traditional commands, but sometimes
I will, in mux/9term style, print a line with p, type
c, edit the line on the screen, pick it up and send it,
type . return.
And of course I can do that sort of thing with any program,
whether or not it is compiled with some magic library.
All this is something of a matter of taste, but I have
sometimes amazed (in a good way) my colleagues with it.
Norman Wilson
Toronto ON
Robert T Morris (the son who committed the famous worm) was an
intern at Bell Labs for a couple of summers while I was there.
He certainly wasn't an idiot; he was a smart guy.
Like many smart guys (and not-so-smart guys for that matter),
however, he was a sloppy coder, and tended not to test enough.
One of the jokes in the UNIX Room was that, had it been Bob
Morris (the father) who did it,
a. He wouldn't have done it, because he would have seen that
it wasn't worth the potential big mess; but
b. Had he done it, no one would ever have caught him, and
probably no one would even have noticed the worm as it crept
around.
Norman Wilson
Toronto ON
> From: Doug McIlroy
> A little known fact is that the judge leaned on the prosecutor to reduce
> the charge to a misdemeanor and accepted the felony only when the
> prosecutor secured specific backing from higher echelons at DOJ.
I had a tangential role in the legal aftermath, and am interested to hear
this.
I hadn't had much to do with the actual outbreak, so I was not particularly
watching the whole saga. However, on the evening news one day, I happened to
catch video of him coming out of the court-house after his conviction: from
the look on his face (he looked like his dog had died, and then someone had
kicked him in the stomach) it was pretty clear that incarceration (which is
what the sentencing guidelines called for, for that offense) was totally
inappropriate.
So I decided to weigh in. I got advice from the Washington branch of
then-Hale&Dorr (my legal people at the time), who were well connected inside
the DoJ (they had people who'd been there, and also ex-H+D people were
serving, etc). IIRC, they agreed with me that this was over-charging, given
the specifics of the offender, etc. (I forget exactly what they told me of
what they made of the prosecutor and his actions, but it was highly not
positive.)
So we organized the IESG to submit a filing in the case on the sentencing, and
got everyone to sign on; apparently in the legal system when there is a
professional organization in a field, its opinions weigh heavily, and the
IESG, representing as it did the IETF, was the closest thing to it here. I
don't know how big an effect our filing had, but the judge did depart very
considerably from the sentencing guidelines (which called, IIRC, for several
years of jail-time) and gave him probation/community-service.
Not everyone was happy about our actions (particularly some who'd had to work
on the cleanup), but I think in retrospect it was the right call - yeah, he
effed up, but several years in jail was not the right punishment, for him,
and for this particular case (no data damaged/deleted/stolen/etc). YMMV.
Noel
> the idiot hadn't tested it on an isolated network first
That would have "proved" that the worm worked safely, for
once every host was infected, all would go quiet.
Only half in jest, I have always held that Cornell was right
to expel Morris, but their reason should have been his lack
of appreciation of exponentials.
(Full disclosure: I was a character witness at his trial. A
little known fact is that the judge leaned on the prosecutor
to reduce the charge to a misdemeanor and accepted the felony
only when the prosecutor secured specific backing from
higher echelons at DOJ.)
Doug McIlroy
I too remember TECO. In my TOPS-10 days I was quite a whiz at it.
Then I encountered UNIX and ed, and never looked back. Cryptic
programmability is fun, but a simple but well-chosen set of
commands including the g/v pair made me more efficient in the end.
It could just be that ed is a better fit for the shape of my brain.
C struck me similarly.
Norman Wilson
Toronto ON
(Actually in the Bay Area for a few days for LISA, in case any
UNIXtorians want to meet up.)
> From: Dave Horsfall
> I'm glad that I'm not the only one who remembers TECO
Urp. I wish I _didn't_ remember TECO!
"TECO Madness: A moment of convenience, a lifetime of regret." - Dave Moon
(For those who didn't catch the reference, here:
https://www.gammalyte.com/tag/reefer-madness/
you go.)
Noel
On Mon, Oct 16, 2017 at 12:39 PM, Jon Steinhart <jon(a)fourwinds.com> wrote:
>
> I have a similar and maybe even more extreme position. When I was a
> manager
> I placed restrictions on the tools and customizations for members of my
> team.
> My goal was to make sure that any team member could go over to any other
> team
> member's desk and get stuff done.
And I think this loops back to what started some of this thread. The idea
of a programmer with 'good taste.'
Rob (and Brian) railed on BSD in cat -v considered harmful
<http://harmful.cat-v.org/cat-v/> and ‘*Program Design in the UNIX
Environment*’ (pdf version
<http://harmful.cat-v.org/cat-v/unix_prog_design.pdf>, ps version
<http://harmful.cat-v.org/cat-v/unix_prog_design.ps>) but the points in it
were fresh then and remain fresh now: what is it that you need to get the job
done - to me, that is Doug's "Universal Unix" concept.
When I answer questions on Quora about learning Linux and other UNIX
derivatives, I still point them at their book: *The Unix Programming
Environment
<http://www.amazon.com/gp/product/013937681X?ie=UTF8&tag=catv-20&linkCode=as…>*
I would say, if they can log into the system and complete the exercises in
UPE without having to make changes, you are pretty close to Doug's
"Universal UNIX" environment. And if you can use the tools, without having
to think about them and they pretty much are what you rely upon everyday,
you are getting close to my ideal of 'good taste.'
Clem
Of interest to the old farts here...
At 22:30 (but which timezone?) on this day in 1969 the first packet got as
far as "LO" ("LOGIN"?) then crashed. More details, anyone?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Chris Torek:
You're not perpendicular to your own surface? :-)
===
I'm not as limber as I used to be.
Besides, I'm left-handed, so what use would I have for
right angles?
Norman Wilson
Toronto ON
(I don't wish to know that)
> From: Steve Nickolas
> I personally believe a lot of code in modern operating systems is larger
> than the task requires.
The "operating" is superfluous.
Noel
George Michaelson:
wish I hadn't read "Norman Wilson" as "Norman Wisdom" (british
prat-fall comedian in the style of Jerry Lewis)
===
It's much better than the more-common typo in which
people call me normal. Neither accurate nor an
aspiration.
Norman Wilson
Toronto ON
I've always enjoyed this paper; recently I found occasion to thumb
through it again. I thought I'd pass it on; I'm curious what some on
the list think about this given their first-hand knowledge of relevant
history (Larry, I'm looking at you; especially with respect his
comments on the VM system).
- Dan C.
http://www.terzarima.net/doc/taste.pdf
As an admirer of minimalism, who has given talks that extol
Norman Wilson's streamlining of research Unix, I naturally
like Forsythe's thesis.
I noticed unintended irony in one more or less throw-away remark:
"It is dangerous to place too much hope in any improvement coming from just
following new fashions, if we lack insight into what really went wrong
before. Without that insight, I suspect that rewriting UNIX in C++,
for example, could easily become an excuse for increasing complexity
(because by using C++ `we can handle more complexity')."
Bjarne Stroustrup's avowed reason for building cfront, which
evolved into C++, was to have a tool for building an operating
system in object-oriented style. The tool took on a life of
its own, and arguably became more complex than the old-fashioned
Unix he aspired to improve on.
Doug
On Oct 22, 2017 1:39 AM, "Will Senn" <will.senn(a)gmail.com> wrote:
[...]
What is the last bootable and installable media, officially distributed by
Berkeley?
Is that image currently publicly accessible?
What is the closest version, that is currently available, that would match
the os described in "The Design and Implementation of the 4.4 BSD Operating
System"?
Probably one of the best ways to get questions about installation media
answered is to simply email Kirk McKusick. He's a really nice guy and will
probably give you an answer pretty quickly.
That said, of the three distributions you mentioned, bootable/installable
media only existed for 4.4BSD (also called the "encumbered" distribution).
-Lite and -Lite2 were "reference distributions." It didn't take *too* much
work to get -Lite working, but it wasn't something that ran out of the box
(or more properly, off of the tape). The original idea was to release
4.4BSD-encumbered to Unix source licensees, and at the same time publish
4.4BSD-Lite sans the redacted bits as an open source distribution. These
were to be the final BSD releases from UCB, but the CSRG found they had
some coin left in the coffers a few months later, so they did -Lite2 as
something of a final hurrah snapshotting some ongoing maintenance work (and
possibly some research?) before officially shutting down.
At one point, I had a copy of a bootable exabyte tape with 4.4-encumbered
installation and source images for SPARC, specifically sun4c machines, that
I had liberated from somewhere. My understanding was that the reference
hardware at Berkeley was 68030- and 68040-based HP 9000 machines, and the
SPARC bits were a contribution from Chris Torek. I got -Lite running on an
older SPARCstation 1, but it wasn't particularly reliable (the compiler
would segfault, and it panic'ed once a day or so), so we put SunOS back on
it pretty quickly.
Hope that helps.
- Dan C.
I'm wondering, with 80s and 90s era Unix being discussed, if there are
any copies of the 80s and 90s era CAD software extant in some form or
other? (Preferably free to good archive?)
IIRC it was a major driver of graphics capabilities in Unix
workstations around that time.
Wesley Parish
> macOS requires you to have a data section aligned to 4K, even if you
> don't use it. The resulting binary is a little over 8K; again, mostly
> zeros.
Not quite. The classic empty executable file for /bin/true works
on OS X. That is not just a clever trick; it's a natural consequence
of Kernighan's ancient precept: do nothing gracefully. Conceivably
the 4K data section is, too--if the page has no physical presence
until it is accessed.
Doug
> From: Dan Cross <crossd(a)gmail.com>
> Hope that helps.
I don't have anything to add to this discussion, but may I point out that this
is _exactly_ the kind of thing we'd like to make available at the Computer
History Wiki:
http://gunkies.org/wiki/Main_Page
I'm too busy with other tasks to add it all myself, but I hope you all will be
able to add your pearls there, where it will be available in an organized way,
rather than having to hope Google/Bing/etc can find it in the list archives
among the megatons of other dross on the Internet.
If anyone would like an account there (due to spam issues, anon editing has
been disabled), please let me know, and I'll get you set up right away - just
send me the account name you like (a lot of us use our old time-sharing system
account names :-), and the email address you'd like associated with it.
Noel
All,
I'm not 100% sure how best to ask this, but here goes...
I own a copy of the CSRG Archives CD Set that Kirk McKusick maintained.
I bought them ages and ages ago (BTW, they are now all available on
Archive.org) I dusted them off today because I had the brilliant idea
that with my significant growth in understanding related to all things
unix and ancient unix, that I might find them interesting and useful.
They are interesting, jury's out on useful beyond being a browsable
historical archive of individual files. One of the CD's contains a 4.4
and 4.4BSD-Lite2 folder and is labeled releases (disk 3). I opened the
4.4 folder and it appears to be a set of folders and files I would
expect to find on a release tape, but unlike a tape, which one could
mount and boot from, I have no idea if this would be usable as install
media (if you do, please let me know how).
I googled about the two releases and although the same text appears all
over the place about how Berkeley released one version, then removed
some components, then re-released, and eventually wound up at
4.4BSD-Lite2, I could not figure out whether the word release meant
sourcecode, installable media, or what. I gather a lot of this made
sense back in the early 1990's but it's all a bit muddy to me in 2017.
In trying to figure it all out, I came across a webpage talking about
2.11BSD (maintained into this decade) and another about 4.3BSD
Quasijarus (also maintained in this decade?). Both descriptions
contained the text, "It is the release of 4.4BSD-Lite, and requires the
original UNIX license" (see http://damnsmallbsd.org/pub/BSD-UNIX) My
sense of things after reading and browsing and such is that with regards
to 4.4, 4.4BSD-Lite, and 4.4BSD-Lite2, they are either not released
(4.4), encumbered and retracted (4.4BSD-Lite), or not installable
(4.4BSD-Lite2)...
Dang, so confusing...
My interest is pretty much based on a strong desire to boot up a 4.4
system that as closely as possible maps to the one described in "The
Design and Implementation of the 4.4 BSD Operating System" that I can
experiment with as I'm going through the text. I think I understand the
version history as it is described in various places, but I just can't
figure how the last handful of versions relate to real media that is
available to enthusiasts.
Questions begging answers:
What is the last bootable and installable media, officially distributed
by Berkeley?
Is that image currently publicly accessible?
What is the closest version, that is currently available, that would
match the os described in "The Design and Implementation of the 4.4 BSD
Operating System"?
Many thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Dave Horsfall reported failures for the TUHS mirror at his site.
I've just looked at our TUHS mirror in Salt Lake City, Utah, USA, and
found that
% rsync rsync://rsync.math.utah.edu
produces the expected list.
I also checked the mirror cron job logs, and found that they all look
similar for every day this year, with no indication of connection
errors.
I then checked the TUHS filesystem tree, and found only two files
created in the last month (timestamps in UTC):
-rw-rw-r-- 1 mirror mirror 99565 Oct 20 17:27 UA_Documentation/TUHS/Mail_list/2017-October.txt.gz
-rw-rw-r-- 1 mirror mirror 400419 Sep 30 17:27 UA_Documentation/TUHS/Mail_list/2017-September.txt.gz
The first of those arrived here late last night (Oct 20 23:15 MDT, Oct
21 05:15 UTC).
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> sed *n l pathname
>
> The latter also has the advantage that its output is
> unambiguous, whereas the output of historical cat *etv is not.
>
> But mind you, in preparation of this email i found a bug in
> Busybox sed(1) which simply echoes nothing for the above.
I assume that * is a typo for - . If so, sed did just what
-n tells it to--no printing except as called for by p or P.
And speaking of sed anticipating other tools, the inclusion
of "head" in v7 as a complement to "tail" was a close call
because head is subsumed by sed q.
Doug
All, behind the scenes we have had Grant Taylor and Tom Ivar Helbekkmo
helping us find a solution to the TUHS list DKIM issues. We have been
running two separate lists (unmangled and mangled TUHS headers) for a
few weeks. It looks like we can now merge them all back together and
use the settings on one to avoid (most of) the DKIM problems.
So that's what I've done: merged back to a single TUHS mailing list.
I've restored the [TUHS] on the Subject line as well.
I'll monitor the logs for further bounces. Fingers crossed there won't
be any further unsubscriptions from the list due to bounce processing.
If there are, I'll manually put you back in.
Cheers all & thanks to Grant and Tom.
Warren
[I tried to send this earlier, but was thwarted by list shenanigans.
Apologies if it's a dup.]
On Thu, Oct 19, 2017 at 10:52 AM, Ron Natalie <ron(a)ronnatalie.com> wrote:
> My favorite reduction to absurdity was /bin/true. Someone decided we
> needed shell commands for true and false. Easy enough to add a script that
> said "exit 0" or exit 1" as its only line.
> Then someone realized that the "exit 0" in /bin true was superfluous, the
> default return was 0. /bin/true turned into an empty, yet executable, file.
>
> Then the lawyers got involved. We got a version of a packaged UNIX (I
> think it was Interactive Systems). Every shell script got twelve lines of
> copyright/license boilerplate. Including /bin true.
> The file had nothing but useless comment in it.
Gerard Holzmann has something on this that I think is great:
http://spinroot.com/gerard/pdf/Code_Inflation.pdf
- Dan C.
PS: A couple of thoughts.
The shell script hack on 7th Edition doesn't work if one tries to
'execl("/bin/true", "true", NULL);'. This is because the behavior of
re-interpreting an execution failure as a request to run a script is
done by the shell, not exec in the kernel. This implies that one could
not directly exec a shell script, but rather must exec the shell and
give the path to the script as the first argument. I vaguely recall we
had a discussion about the origin of the '#!' syntax and how this was
addressed about a year or so ago.
I tried to write a teeny-tiny '/bin/true' on my Mac. Dynamically
linked, the obvious "int main() { return 0; }" is still a little over
4KB. Most of that is zeros; padding for section alignment and the
like. I managed to create a 'statically' linked `true` binary by
writing the program in assembler:
% cat true.s
# /bin/true in x86_64 assembler for Mac OS X
.text
.globl start
start:
mov $0x2000001, %rax # BSD system call #1
mov $0, %rdi # Exit status: 0 = 'true'
syscall
# OS X requires a non-empty data segment.
.data
zero: .word 0
%
As I recall, macOS requires you to have a data section aligned to 4K, even if you
don't use it. The resulting binary is a little over 8K; again, mostly
zeros.
There are parlor tricks people play to get binary sizes down to
incredibly small values, but I found the results interesting. Building
the obvious C program on a PDP-11 running 7th Edition yields a 136
byte executable, stripped. Still infinitely greater than /bin/true in
the limit, but still svelte by modern standards.
> How realistic would the experience be to actually running the system
> described in the Unix Programming Environment [v8] if it's actually
> running BSD 4.1... Thanks for any insights y'all might have on this.
This question bears on a recent thread about favorite flavors of Unix. My
favorite flavor is Universal Unix, namely the stuff that just works
everywhere. That's essentially what K&P is about.
That's also what allowed me to use a giant Cray with no instruction
whatsoever. And to do everyday "programmering" on the previously
inscrutable Macintosh, thanks to OS X.
The advent of non-typewriter input put a damper on Universal Unix. One has
to learn something to get started with a novel device. I am impressed,
though, by the breadth of Universal Unix that survives behind those
disparate facades.
> From: Larry McVoy
>>> I was told, by someone that I don't remember, that uwisc was the 11th
>>> node on the net. ... If anyone can confirm or deny that I'd love to know.
> I dunno.
I don't have any axe to grind here. I don't care if they were the first, or
the last. You asked "If anyone can confirm or deny that I'd love to know",
and all I'm trying to do is _accurately_ answer that.
> That 1985 map has uwisc in there
I have a large collection of ARPANET maps here:
http://www.chiappa.net/~jnc/tech/arpanet.html
and the first one on which UWisc shows up is the October, 1981 geographical
map - over ten years since the ARPANet went up (December, 1969 is the earliest
map I have there).
> I do know that prior to the net there was uucp
Which "net" are we talking about here? ARPANET? CSNET? Internet? The UUCP network
long post-dated the ARPANET - I think it was started in the late 70's, no?
The earliest Internet map I have is from 1982, here:
https://upload.wikimedia.org/wikipedia/commons/6/60/Internet_map_in_Februar…
and again UWisc is not on it. (Yes, I know it's on Wikipedia, but I'm the one
who uploaded it, so I can verify it.)
CSNET I don't know much about, that may have been what the comment referred
to.
Wikipedia (for what little we can trust it) says "By 1981, three sites were
connected: University of Delaware, Princeton University, and Purdue
University"; since Lawrence Landweber at UWis was the main driver of CSNET, I
doubt it would have been far behind.
Noel
> From: Grant Taylor
> Does anyone know of a good place to discuss networking history, routing,
> email, dns, etc. I'd like to avoid getting too far off topic for TUHS.
You could try the "Internet History mailing list":
http://www.postel.org/internet-history/
which covers all of networking, including pre-Internet stuff.
Noel
> From: Larry McVoy
> I was told, by someone that I don't remember, that uwisc was the 11th
> node on the net. ... If anyone can confirm or deny that I'd love to know.
There's a copy of the July '77 revision of the HOSTS.TXT file as an appendix
here:
http://www.walden-family.com/dave/archive/bbn-tip-man.txt
The IMPs are numbered in order of deployment; so UCLA is #1, SRI is #2, Utah
is #4, BBN is #5, etc.
I don't see Wisconsin in the list at all. Maybe the person meant CSNET?
Noel
Does anyone know why stty won't accept '^?' in v7? It will accept '^h',
but then the shell expects ^h to "backspace". I am trying to get the
delete key on my mac to do the backing up and it's '^?'. # isn't my
favorite since it's used in C programs, but pressing CTRL-h to back up is
a pain too. If you've read this far, I have three more questions:
1. How do you escape # in order to write a C program if # is the erase
character in the terminal?
2. How do you enter a literal character in the v7 shell (I am used to
CTRL-v CTRL-DEL to enter the delete character on other unices)?
3. Is there a way to echo the ascii value of a keypress in v7?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Dear TUHS members,
An international conference on the history of Unix will be held in
Paris, Oct. 19th, at the Conservatoire National des Arts & Métiers. Here
is the link to the (bilingual) program:
http://technique-societe.cnam.fr/colloque-international-unix-en-france-et-a…
<http://technique-societe.cnam.fr/medias/fichier/programme-colloque-unix-bil…>
There will be audio recordings of the symposium available afterwards -
check the program page to know where and when.
It will be followed, the next day, by a kick-off meeting of the research
project “What is a computer program?”:
http://technique-societe.cnam.fr/table-ronde-qu-est-ce-qu-un-programme-info…
Please note that an active member of this list, Clem Cole, will be
giving a much awaited talk!
Best,
Camille Paloque-Bergès, for the organizing committee (a TUHS lurker!).
--
Institutional email address : camille.paloque_berges(a)cnam.fr
*Laboratory for the History of Techno-Sciences (HT2S), Conservatoire
national des arts et métiers, 2 rue Conté, 75003 Paris, France
*Associate researcher at the Digital Paths cluster of CNRS' Institute
for Communication Sciences (ISCC)
I remember a thread on the mailing list a while back where Warren
announced the availability of the V8-V10 source code and being intrigued
at the possibility of running it. Then I recently came across a note by
dmr referring to V8 and further tweaking my interest:
http://minnie.tuhs.org/pipermail/tuhs/2003-June/002195.html
Here's what he said:
As for the system aspects: K&R 1 (1978) was done on
what would soon be 7th edition Unix, on 11/70;
K&R 2 (1988) using 9th edition on VAX 8550.
Kernighan and Pike's Unix Programming
Environment (1984) used 8th edition
on VAX 11/750.
About the releases (or pseudo releases) that
Norman mentions: actually 8th edition was
somewhat real, in that a consistent tape
was captured; it probably corresponds fairly
well with its manual, and was educationally
licensed for real, though not in large quantity.
9th and 10th were indeed more conceptual in that
we sent stuff to people (e.g. Norman) who asked,
but they weren't collected in complete and
coherent form.
This combined with my tinkering with V7 and working through K&R (1978)
got me hankering to go through K&P (1984) on a Vax running V8. Then, I
came across this:
https://virtuallyfun.com/2017/03/30/research-unix-v8/
and decided to jump in and start running V8. Then it hit me - is it even
possible to run a V8 instance (similarly to V5/V6/V7, from tape) or is
it as this note says, necessary to run the bits on a 4.1 BSD base? How
realistic would the experience be to actually running the system
described in the Unix Programming Environment if it's actually running
BSD 4.1... Thanks for any insights y'all might have on this.
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Will Senn
> All that cooked and raw stuff is gobbledegook that I'll have to read up
> on.
The raw/cooked stuff isn't the source of the worst hair in the TTY driver;
that would be delays (on the output side), and delimiter processing (on the
input side).
The delays are for mechanical terminals, because they need delays after a
motion command (e.g. NL, CR, etc) before the next printing character is sent;
differing for different motion control commands, further complexified by the
current print head position - a Carriage Return from column 70 taking a lot
longer than one from column 10. The driver keeps track of the current column,
so it can calculate this! It does the delays by putting in the output queue a
character with the high bit set, and the delay in the low bits; the output start
routine looks for these, and does the delay.
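The idea can be sketched roughly like this (an illustration only, not the actual v7 tty.c code; the helper names and the one-tick-per-16-columns rate are made up for the example):

```c
/* Sketch of the v7-style output-delay scheme.  A delay "character"
 * queued after a motion character has the high (0200) bit set, with
 * the delay, in clock ticks, in the low bits.  For a carriage return,
 * the delay grows with the column the carriage has to travel from. */
#define DELAY_FLAG 0200

/* Hypothetical helper: ticks of delay for a CR issued at 'col'. */
int cr_delay_ticks(int col)
{
    /* e.g. one tick per 16 columns of travel, minimum of one tick */
    return col / 16 + 1;
}

/* Encode the delay as an in-band queue byte for the output start
 * routine to find and honor. */
int cr_delay_char(int col)
{
    return DELAY_FLAG | (cr_delay_ticks(col) & 0177);
}
```

So a CR from column 70 gets a noticeably longer pause than one from column 10, which is exactly why the driver has to track the current column.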
On the input side, every time it sees a delimiter (NL, EOF), it inserts a 0xFF
byte in the input queue, and increments a counter to keep track of how many it
has inserted. I _think_ this is so that any given read call on a 'cooked'
terminal will return at most one line of input (although I don't know why they
don't just parse the buffer contents at read time - although I guess they need
the delimiter count so the read call will wait if there is not a complete line
there yet).
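A toy model of that delimiter bookkeeping might look like this (illustrative only; the names and details are invented, and the real logic lives in canon() and ttyinput() in tty.c, where a read would sleep rather than return 0):

```c
#include <string.h>

#define QSIZE 256
#define DELIM 0377   /* in-band marker for end-of-line, v7-style */

struct toyq {
    unsigned char buf[QSIZE];
    int len;
    int delct;       /* how many complete lines are queued */
};

/* Accept one input character; on NL, stamp a delimiter and count it. */
void toyq_putc(struct toyq *q, int c)
{
    if (q->len >= QSIZE - 2)
        return;                     /* just drop when full, as a toy */
    q->buf[q->len++] = c;
    if (c == '\n') {
        q->buf[q->len++] = DELIM;
        q->delct++;
    }
}

/* Return at most one complete line; 0 means no full line is queued
 * yet (where the real driver would put the reader to sleep). */
int toyq_readline(struct toyq *q, char *out, int outlen)
{
    int i, n = 0;
    if (q->delct == 0)
        return 0;
    for (i = 0; i < q->len && q->buf[i] != DELIM; i++)
        if (n < outlen - 1)
            out[n++] = q->buf[i];
    out[n] = '\0';
    i++;                            /* skip the delimiter itself */
    memmove(q->buf, q->buf + i, q->len - i);
    q->len -= i;
    q->delct--;
    return n;
}
```

The counter is what lets a cooked read both block until a full line exists and stop after exactly one line, without re-parsing the queue.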
I should look and see how the MIT TTY driver (which also supported 8-bit input
and output) dealt with these...
Noel
> From: Will Senn
> I didn't know that the delete key served a purpose, interrupt
At MIT, the (effectively PWB1) system that was standard at Tech Sq had had its
teletype driver completely rewritten by the time I started using it, and that
behavior was changed, so I never saw this IRL.
Recently, I needed a Unix to run under Ersatz-11, to talk to physical QBUS
-11's and download them over their console line, so I went with V6 (since I
had not at that point managed to recover the MIT system). Wow. Talk about
a rude awakening!
That was one of the things that was, ah, problematic - and in V6, there's no
way to change the interrupt character. (And no, I didn't feel like switching
to a later version!)
An even bigger problem was that in vanilla V6, there's _no way_ to do 8-bit
input _and_ output. Sheesh. I managed to fix that too, after a certain amount
of pain. (I missed a code path, or something like that, and it took me quite a
while to figure out why my fixes didn't work.)
Noel
Many thanks for on and off list replies to my query.
I happened to stumble across a paper by Kirk McKusick (right here in the TUHS archives) that has some more background:
http://www.tuhs.org/Archive/Documentation/Unix_Review/Berkeley_Unix_History…
and in particular on page 38 of the magazine (page 6 of the PDF).
It says (with some reformatting for clarity):
"The contract called for major work to be done on the system so the
DARPA research community could better do its work. Based on the needs
of the DARPA community, goals were set and work began to define the
modifications to the system.
In particular, the new system:
- was expected to include a faster file system that would raise
throughput to the speed of available disk technology,
- would support processes with multi-gigabyte address space requirements,
- would provide flexible interprocess communication facilities that
would allow researchers to do work in distributed systems,
- would integrate networking support so that machines running the new
system could easily participate in the ARPAnet.”
So, IPC facilities to support distributed systems were apparently an explicit goal, and that helps explain the composition of the committee.
It continues:
"To assist in defining the new system, Duane Adams, Berkeley's contract
monitor at DARPA, formed a group known as the "steering committee” to
help guide the design work and ensure that the research community's needs
were addressed.
This committee met twice a year between April, 1981 and June, 1983, and
included [name list as before]. Beginning in 1984, these meetings were
supplanted by workshops that were expanded to include many more people.”
This shift in membership after 4.2BSD shipped had already been noted. The committee seems to have had a productive start:
"An initial document proposing facilities to be included in the new system
was circulated to the steering committee and other people outside Berkeley
in July, 1981, sparking many lengthy debates.”
I would assume that those initial discussions included debates on what would become the socket API. I’ve asked Kirk McKusick if he still remembered that initial discussion document. The reply was:
"The document to which you refer became known as the "BSD System
Manual". The earliest version that I could find was the one
distributed with 4.2BSD in July 1983 which I have attached.”
If anyone knows of earlier versions of that document (prior to 1982), I’d be highly interested.
The paper also notes:
"During the summer, Joy concentrated on implementing a prototype version
of the interprocess communication facilities.”
I’ll scan the early (partial) SCCS logs for remnants of that (a long shot, but worth a try).
Paul
> From: Will Senn
> 1. How do you escape # in order to write a C program if # is the erase
> character in the terminal?
"Use the source, Luke!" V7 is simple enough that it's pretty quick to find
the answers to things like this. E.g.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/dev/tty.c
will answer this question (in "canon()").
> 3. Is there a way to echo the ascii value of a keypress in v7?
A quick look through tty.c suggests this doesn't exist in V7 - without running
a user program that puts the TTY in 'raw' mode and prints out what it
sees. Not sure if there is one off the rack, or if you'd have to whip up a
20-line program to do it.
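Such a program might look roughly like this. Note this sketch uses modern termios so it will build today; on v7 itself you would instead set the RAW bit in struct sgttyb via stty(2)/ioctl(2). The function names here are made up:

```c
#include <stdio.h>
#include <unistd.h>
#include <termios.h>

/* Format one input byte as e.g. "0177 (127)"; split out from the
 * I/O loop so the formatting is easy to check on its own. */
int fmt_byte(int c, char *out, int outlen)
{
    return snprintf(out, outlen, "%04o (%d)", c & 0377, c & 0377);
}

/* Put fd in raw mode, print the value of each keypress, quit on 'q'. */
void echo_keys(int fd)
{
    struct termios old, raw;
    unsigned char c;
    char line[32];

    tcgetattr(fd, &old);
    raw = old;
    cfmakeraw(&raw);
    tcsetattr(fd, TCSANOW, &raw);
    while (read(fd, &c, 1) == 1 && c != 'q') {
        fmt_byte(c, line, sizeof line);
        printf("%s\r\n", line);
    }
    tcsetattr(fd, TCSANOW, &old);   /* restore cooked mode */
}
```

Run against fd 0 on a terminal, pressing DEL should show 0177 (127) and CTRL-h should show 0010 (8).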
Noel
After installing a fresh simh V7 instance with 2 RP06's and a TU10, I
tried building the kernel and running it. I got a panic. I didn't mess
with the defaults, so I'm at a loss as to how the stock kernel is
different from the one I built. I tried building as root, then sys, same
effect. Here's what I did:
nboot.ini contents:
set cpu 11/70
set cpu 2M
set cpu idle
set rp0 rp06
att rp0 rp06-0.disk
set rp1 rp06
att rp1 rp06-1.disk
boot rp0
pdp11 nboot.ini
boot
hp(0,0)unix (actually renamed hptmunix)
mem = 2020544
CTRL-D
login: root
cd /usr/sys/conf
make allsystems
... build stuff, no errors or warnings
mv hptmunix /
sync
sync
CTRL-E
quit the sim
pdp11 nboot.ini
boot
hp(0,0)hptmunix
mem = 2021696
err on dev 0/0
bn=1 er=100000,4507
err on dev 0/0
bn=1 er=100000,4521
err on dev 0/0
... etc.
Am I doing something wrong, or missing an important configuration step? I
am just trying to rebuild the stock kernel before I try any
reconfigurations.
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Hi all,
I just finished creating an updated PDF version of a blog post I did a
couple of years back, describing how to install and use Unix v7 in SimH.
It's updated for 2017 and macOS High Sierra 10.13. I started the update
because I wanted to do some research in v7 and thought it would be good
to have a current set of instructions, but really because I was
interested in learning a bit about LaTeX and creating prettier, more
useful documents. The notes still work fine as originally written, but I
organized things a little differently and tweaked some of the language.
I thought somebody else might like having a PDF version around so I
uploaded the result, call it revision 1.1, and made it publicly
accessible (the blog still needs updating, somebody oughta do something
about link impermanence, but that's all for another day). Feel free to
comment or complain. I added a section in honor of dmr at one
commenter's suggestion. Here's the link:
https://drive.google.com/open?id=0B1_Jn6Hlzym-Zmx1TjR3TENDQTA
Later,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
When I was working at UniPress in New Jersey, we had an SGI Iris named pink on which we developed the 4Sight versions of NeWS Emacs (NeMACS).
Speaking of SGI leaks:
Those things are fucking heavy!
It was raining torrentially outside and the UniPress office started to flood, so we had to keep taking shelves down off the wall and wedging them underneath the Iris to jack it up above the water, as it kept getting deeper and deeper.
Ron will remember the emergency bailing technique MG and I developed of repeatedly filling the shop vac with water then flushing it down the toilet.
The Indigos were another story entirely: They couldn't touch the raw graphics performance of an Iris, since the rendering was all in software, but you could actually stuff one of them in the overhead compartment on an airplane!
And then there was the SGI Indy... They made up for being small on the outside by being HUGE and BLOATED on the inside:
"Indy: an Indigo without the 'go'". -- Mark Hughes (?)
This legendary leaked SGI memo has become required reading for operating system and programming language design courses:
http://www.cs.virginia.edu/~cs415/reading/irix-bloat.txt
-Don
> On 12 Oct 2017, at 15:16, Don Hopkins <SimHacker(a)gmail.com> wrote:
>
> https://www.youtube.com/watch?v=hLDnPiXyME0
>
>> On 12 Oct 2017, at 15:04, Michael-John Turner <mj(a)mjturner.net> wrote:
>>
>> Hi,
>>
>> I came across this on Lobsters[1] today and thought it may be of interest to the list: http://www.art.net/~hopkins/Don/unix-haters/tirix/embarrassing-memo.html
>>
>> It appears to be an internal SGI memo that's rather critical of IRIX 5.1. Does anyone know if it's true?
>>
>> [1] https://lobste.rs/
>>
>> Cheers, MJ --
>> Michael-John Turner * mj(a)mjturner.net * http://mjturner.net/
>
We lost co-inventor of Unix and sheer genius Dennis Ritchie on this day in
2011; there's not really much more that I can say...
Sic transit gloria mundi.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
According to the Unix Tree web pages, the development of 4.2BSD was, at the request of DARPA, guided by a steering committee consisting of:
Bob Fabry, Bill Joy and Sam Leffler from UCB
Alan Nemeth and Rob Gurwitz from BBN
Dennis Ritchie from Bell Labs
Keith Lantz from Stanford
Rick Rashid from Carnegie-Mellon
Bert Halstead from MIT
Dan Lynch from ISI
Gerald J. Popek of UCLA
Although I can place most people on the list, for some names I’m in the dark:
* Alan Nemeth - apparently the designer of the BBN C-series minis (I think the C30 was designed to replace the 316/516 as IMP). It is hard to find any info on the C-series, but I understand it to be a mini with 10-bit bytes, 20-bit words and a 20-bit address space, more or less modeled after the PDP11, with an instruction set optimised to be an easy target for the C compilers of the day. Any other links to Unix?
* Keith Lantz - apparently specialised in distributed computing. No clear links to Unix that I can find.
* Rick Rashid - driving force behind message passing micro-kernels and the Accent operating systems. Evolved into Mach. Link to Unix seems to be that Accent was an influential design around 81/82
* Bert Halstead - seems to have built a shared memory multiprocessor around that time, “Concert”.
* Dan Lynch - ISI program manager for TCP/IP and the switch-over from NCP on Arpanet.
* Gerald Popek - worked on a secure version of (Arpanet enabled) Unix and on distributed systems (LOCUS) at the time.
Next to networking, the link between these people seems to have been distributed computing — yet I don’t think 4.2BSD had a goal of being multiprocessor ready.
All recollections about the steering committee, its goals and its members welcome.
Paul
> From: Paul Ruizendaal
> * Alan Nemeth - apparently the designer of the BBN C-series mini's
ISTR him from some other context at BBN; don't recall off the top of my
head, though.
> (I think the C30 was designed to replace the 316/516 as IMP).
They _did_ replace the Honeywells. At MIT, they eventually came and took away
the 516 (I offered it to the MIT Museum, but they didn't want it, as the work
on it hadn't been done by MIT - idiots!), and replaced it with a
C/30. (Actually, we had a couple of C/30 IMPs - the start was adding a C/30,
to which the MIT Internet IP gateway was connected - the other two IMPs were
full, and the only way to get another port for the gateway was to get another
IMP - something which caused a very long delay in getting MIT connected to the
Internet, to my intense frustration. I seem to recall DARPA/DCA had stopped
buying Honeywell machines, and the C/30 was late, or something like that.)
> It is hard to find any info on the C-series, but I understand it to be a
> mini with 10 bit bytes, 20 bit words and 20 bit address space, more or
> less modeled after the PDP11 and an instruction set optimised to be an
> easy target for the C compilers of the day.
Yes and no. It was a general microprogrammed machine, but supported a
daughter-board on the CPU to help with instruction decoding, etc; so the C/30
and C/70 had different daughter-boards, specific to their function.
There's a paper on the C/70, I don't recall if I have a copy - let me look.
> Any other links to Unix?
I think the C/70 was intended to run Unix, as a general-purpose timesharing
resource at BBN (and did).
> * Bert Halstead - seems to have built a shared memory multiprocessor
> around that time
He was, as a grad student, a member of Steve Ward's group at MIT, the ones who
did the Nu machine Unix 68K port. (He wrote the Unix V6/PWB1 driver for the
Diva controller for the CalChomps they had on their -11/70, the former of
which I eventually inherited.) After he got his PhD (I forget the topic; I
know he did a language called 'D', the origin of the name should be obvious),
he became a faculty member at MIT.
> * Dan Lynch - ISI program manager for TCP/IP and the switch-over from
> NCP on Arpanet.
He was actually their facilities manager (or some title to that effect; he was
in charge of all their TENEX/TWENEX machines, too). He was part of the early
Internet crowd - I vividly remember him at a bar with Phill Gross and I in the
DC suburbs, at a _very_ early IETF meeting, discussing how this Internet thing
was really going to take off, and how the IETF had to get itself organized to
be ready for that.
> Next to networking, the link between these people seems to have been
> distributed computing
That wasn't really the tie - the tie was they were all part of the
DARPA-funded circle. Now, as to why whomever at DARPA picked them - I think
they probably looked for people with recognized competence, who had a need for
a good VAX Unix for their research/organization.
Noel
I've just visited Slashdot and found this little gem at the bottom of the page:
Unix is a Registered Bell of AT&T Trademark Laboratories. -- Donn Seeley
Unix seems to have garnered witticisms: Salus throws in a few on the front cover
of his book. Has anyone made a collection of them?
Wesley Parish
"I have supposed that he who buys a Method means to learn it." - Ferdinand Sor,
Method for Guitar
"A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn
Hi all. First up, it's TUHS so please no DKIM/email chatter here. Only a
few of you are involved and it's not relevant to Unix history.
Grant Taylor and Tom Ivar Helbekkmo have been working behind the scenes to
find a good solution. We are hoping to a) merge the two lists back together,
b) reinstate [TUHS] and non-mangled From: lines, and c) keep most MTAs
happy in the process.
With some luck, all of this will be resolved. So, let's get back to the
discussion of old Unix systems :)
Thanks, Warren
All, there are now two variants of the TUHS mailing list. E-mail sent
to tuhs(a)tuhs.org will propagate to both of them.
The main TUHS list now:
- doesn't strip incoming DKIM headers
- doesn't alter the From: line
- doesn't alter the Subject: line
and hopefully will keep most mail systems happy.
The alternative, "mangled", TUHS list:
- strips incoming DKIM headers
- alters the From: line
- alters the Subject: line to say [TUHS]
- puts in DKIM headers once this is done
and hopefully will keep most mail systems happy but in a different way.
You can choose to belong to either list, just send me an e-mail if
you want to be switched to the other one. But be patient to start with
as there will probably be quite a few wanting to change.
Cheers, Warren
And now, we bring the RMS/Gnu thread to a close :-)
To kick a more relevant thread off, what was the "weirdest" Unix system you used & why? Could be an emulation like Eunice, could be the hardware, e.g. NULL was not zero, NUXI byte ordering, etc.
Cheers, Warren
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
NUMA is something that's been on my mind a lot lately. Partially in
seeding beastie ideas into Larry McVoy's brain.
I asked Paul McKenney for some history on what went down at Sequent
since that's before my time. He sent me this, which I think the group
will enjoy: http://www2.rdrop.com/users/paulmck/techreports/stingcacm3.1999.08.04a.pdf
It looks pretty nice. Not sure anyone's come as close as Irix to
solving and productizing "easy" NUMA but that's the one I have the
most hands on experience with. They can affine, place, migrate, and
even replicate many types of resources including vnodes. I'm actually
surprised all that code seems to have been spiked; it doesn't seem
like either Sequent (later absorbed into IBM) or SGI brought forward any of their
architecture to Linux. Paul did RCU, which is a tour de force, but the
Linux topology and MM code looks like the product of sustaining
engineers instead of architectural decree. Maybe the SCO lawsuit
snubbed all of that?
HP has an out of date competitive analysis that's worth a look
http://h20566.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=5060289&docLoc….
I don't have enough seat time with Tru64 but maybe they had some good
ideas.
As open source, I do like Illumos' locality groups. I can't make much
sense of Linux on this, too much seems to be in arch/ vs a first class
concept like locality groups.
Regards,
Kevin
> From: Don Hopkins
> Solaris: so bad I left the company.
Why was Solaris so much worse than SunOS?
I guess the Sun management didn't understand that was the case? Or were they
so hot for the AT+T linkup that they were willing to live with it?
Noel
As I've said elsewhere, Sun was out of money. AT&T bought $200m of Sun
stock at 35% over market, but Sun had to dump SunOS and go to SVR4.
I don't know if Scooter knew what he was dumping or not, I suspect not
but all those late nights when he came over to egg us kernel geeks on,
maybe he did know. I don't think he had a choice.
On Sun, Oct 01, 2017 at 12:59:42PM -0400, Arthur Krewat via TUHSmangle wrote:
> From Sun's point of view, what was the REAL reason to move from SunOS to
> Solaris?
>
> I don't think I've read anything anywhere as to a real technical reason. Was
> it just some stuffed-shirt's "great idea"?
>
> Or was it really a standards-based or other reality-based reason?
>
> As of SunOS 4.1.4, it seemed ready to go whole-hog into SMP, so that wasn't
> the sole reason.
>
> thanks!
> art k.
>
>
--
---
Larry McVoy                  lm at mcvoy.com           http://www.mcvoy.com/lm
I’m an HPC guy. The only good OS is one that is not executing any instructions except mine. No daemons, no interrupts, nothing. Load my code, give me all physical memory, give me direct access to the interconnect and then get out of the way. If I want anything, I will let you know, but don’t wait up.
When I put on an educator’s hat I still have a soft spot for V6 and V7. Those were my first exposure to Unix and the Unix Way. One could actually learn style by reading code and writing device drivers. These days kernels (Linux at least) are too complicated and too cluttered up with ifdefs to learn much. The real recent innovations like RCU and queuing locks and NUMA affinity are buried pretty deep, and actual reliable file systems like ZFS and BTRFS are just too complicated for mortals.
As a user, what I really want are reliability, the commands and utilities, and stable APIs. I don’t like a lot of things about Posix, but it is at least a little stable and a little portable. For myself, I use MacOS and Debian Linux, and open, close, read, write.
-Larry
On Sun, 1 Oct 2017, arnold(a)skeeve.com wrote:
> Date: Sun, 01 Oct 2017 09:13:28 -0600
> Michael Parson <mparson(a)bl.org> wrote:
>
>> On 2017-09-30 12:53, Ian Zimmerman wrote:
>>> On 2017-09-30 10:40, Michael Parson wrote:
>>>
>>>> I've recently found instructions for installing SunOS 4.1.3 under
>>>> qemu-sparc that I want to try as well.
>>>
>>> Can you share a pointer to those with us?
>>
>> Sure:
>>
>> https://en.wikibooks.org/wiki/QEMU/SunOS_4.1.4
>>
>> Oops, 4.1.4, not .3. :)
>
> So then the next question is where can one find install media (or
> image thereof...)
I bought a boxed copy of 'Solaris 1.1.2' off e-bay many moons ago,
though I've been told it can be found with the google search term of
'winworldpc'.
--
Michael Parson
Pflugerville, TX
KF5LGQ
On Sep 28, 2017 11:02 PM, "Kevin Bowling" <kevin.bowling(a)kev009.com> wrote:
What is your favorite UNIX. Three possible categories, choose one or more:
1) Free
2) Forced to use a commercial platform. I guess that could include
macOS and z/OS with some vivid imagination, maybe even NT.
3) Historical
1. FreeBSD. It's super stable and tends to be logical. The documentation is great once you get over the learning curve. Debian is a close second for the same reasons. Mint with KDE Plasma 5 is beautiful and user friendly.
2. I used Sun OS with a CDE-like interface back in the day and that was ok. Mac OS X 10.5-10.12 are great.
3. I enjoy the research versions of unix and other OSes that are available for the SimH PDP 11 emulator.
Will
On Sun, Sep 3, 2017 at 11:08 AM, Warner Losh <imp(a)bsdimp.com> wrote:
>
>
> On Sat, Sep 2, 2017 at 8:54 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
>
>> On Sat, 2 Sep 2017, Nemo wrote:
>>
>> Hhhmmm... This begs the historical question: When did LF replace CR/LF
>>> in UNIX?
>>>
>>
>> Unix has always used NL as the terminator :-)
>
>
> <CR><LF> was the line terminator in DEC operating systems that grew up
> around the same time as Unix. CP/M and MS-DOS inherited that from them
> since those systems were developed, in part, using cross compilers running
> on DEC gear with DEC OSes. Unix came from the Multics world where LF was
> used as the line terminator... Thankfully, neither CP/M nor MS-DOS picked
> up DEC's RMS...
>
> Warner
>
The fun story on that, Warner, is that after years of dogged defense of RMS, when
C was written for VMS, Cutler had to add Stream I/O. The moment it was
released, much (?most?) of the customer base (including a lot of internal folks
like the compiler runtime and DB folks) switched to using it. It was so
much easier. I never heard Dave back down, but I used to smile when I
saw the statistics.
What is your favorite UNIX. Three possible categories, choose one or more:
1) Free
2) Forced to use a commercial platform. I guess that could include
macOS and z/OS with some vivid imagination, maybe even NT.
3) Historical
Me:
1) FreeBSD - I find it to generally be the least annoying desktop and
laptop experience with admittedly careful selection of hardware to
ensure compatibility. It's ideal to me for commercial appliances and
servers due to the license, tight coupling of kernel and base, and
features like ZFS, jails, and pluggable TCP stacks. Linux distros
lost their luster for me once systemd was integrated into Debian, and
that kind of culture seems to be prevailing up and down the stack in a
way that I'd prefer to be an outside observer of Linux and not
dependent on it for now.
2) AIX - I often see people disparage AIX but I like it. I learned a
lot in my teens about C, build systems, compilers, and lots of
libraries trying to port random software to it for auto-didactic
reasons. It definitely doesn't feel like any other UNIX. It probably
supports high core count and NUMA better than any other system except
Linux, it had advanced virtualization with LPARs and containers with
WPARs before most and hot patchable kernel, fully pagable kernel, lots
of rigorous kernel engineering there that didn't get a lot of fanfare.
SMIT is kind of cool as a TUI and spits out commands that you can
learn through repetition and use at the CLI or scripting. I think it
probably peaked in the early 2000s, but the memory management, volume
management, and file systems all seemed pretty forward thinking up
until then. I don't think SMP performance was a strong suit until it
was pretty much a relegated niche though.
3) IRIX - it just screams '90s cool like an acrylic sweater. Soft
real time, immense graphics support, pro audio and video features,
lots of interesting commercial software, NUMA, supercomputers. I
enjoy tinkering on this still, but a lot of that is due to the neat
hardware.
Regards,
Kevin
Warner Losh:
It's an abundance of caution thing. This code had security problems in the
past, we're not 100% sure that we've killed all the issues, though we
believe we have.
====
And if there isn't anyone who's actively interested in the
code, willing to dig in to clean it up and make security
issues less likely, deal with multiprocessing matters, and
so on, that's a perfectly reasonable stance.
I think it's an unfortunate result, and I wonder how much
of it comes from a cultural view that sysctl >> /proc.
(Recall how Ken and Dennis originally resisted Doug's push
for pipelines and filters, because--as Dennis once put it
in a talk--it just wasn't the way programs worked?)
But as someone who is sometimes credited with removing
more code than he wrote while working on the latter-day
Research kernel, it's hard for me to argue with the principle.
A lot of the code I tossed out was complicated stuff that
was barely used if used at all, and that nobody was willing
to step up to volunteer to maintain.
Norman Wilson
Toronto ON
What's your UNIX of choice to do normal "real" things these days?
Home file server (NAS), business stuff, develop code, whatever.
Mine is Solaris 11.3 at this point. Oracle has provided almost all the
"normal" utilities that are used by Linux folk, and it runs on Intel
hardware rather well. My main storage is a raidz2 of 24TB and I get
1.2GB/sec to a bunch of 3TB 512-byte-sector SAS drives.
It serves my vmware farm with iSCSI at 10gbe using COMSTAR, which also
houses a bunch of Solaris 11 guests that perform various chores. It also
houses some Linux and Windows guests for prototyping/testing. It's also
my Samba server, servicing a few Windows workstations.
This is all in my home office where I do all my personal/professional work.
What do you all use for day-to-day development and general playing
around with new stuff?
AAK
>> The Fedora system I have access to lacks /usr/share/doc/groff
> Fedora defaults to loading only the package
> "groff-base" so that man pages can be displayed. To actually use
> groff for any other purpose, the packages "groff", "groff-doc",
> "groff-perl", and "groff-X11" have to be installed. Annoying, but
> there it is.
That explains all. Thanks.
doug
On Thu, Sep 28, 2017 at 06:44:20PM +0100, Derek Fawcus wrote:
> On Thu, Sep 28, 2017 at 08:34:28AM -0400, Chet Ramey wrote:
> > Yes, that changed in 2007 based on bug reports you filed while working at Cisco.
>
> So fd 255 is my fault? :-)
Or not - given that macOS, using an older bash, already used 255:
$ set|fgrep VERSION
BASH_VERSION='3.2.57(1)-release'
$ lsof -p $$|fgrep CHR
bash 6843 derek 0u CHR 16,10 0t554677 701 /dev/ttys010
bash 6843 derek 1u CHR 16,10 0t554677 701 /dev/ttys010
bash 6843 derek 2u CHR 16,10 0t554677 701 /dev/ttys010
bash 6843 derek 255u CHR 16,10 0t554677 701 /dev/ttys010
DF
>
> It's important to note, when talking about NFS, that there was Sun's NFS
> and everyone else's NFS. Sun ran their entire company on NFS. /usr/dist
> was where all the software that was not part of SunOS lived, it was an
> NFS mounted volume (that was replicated to each subnet). It was heavily
> used, as were a lot of other things. The automounter at Sun just worked:
> wanted to see your buddy's stuff? You just cd'ed to it and it worked.
>
> Much like mmap, NFS did not export well to other companies. When I went
> to SGI I actually had a principal engineer (like Sun's distinguished
> engineer) tell me "nobody trusts NFS, use rcp if you care about your
> data". What. The. Fuck. At Sun, NFS just worked. All the time.
> The idea that it would not work was unthinkable and if it ever did
> not work it got fixed right away.
>
> Other companies, it was a checkbox thing, it sorta worked. That was
> an eye opener for me. mmap was the same way, Sun got it right and
> other companies sort of did.
>
I remember the days of NFS Connect-a-thons where all the different
vendors would get together and see if they all interoperated. It was
interesting to see who worked and who didn’t. And all the hacking to
fix your implementation to talk to vendor X while not breaking it working
with vendor Y.
Good times indeed.
David
> From: Theodore Ts'o
> when a file was truncated and then rewritten, and "truncate this file"
> packet got reordered and got received after the "here's the new 4k of
> contents of the file", Hilarity Ensued.
This sounds _exactly_ like a bad bug found in the RVD protocol (Remote Virtual
Disk - a simple block device emulator). Disks kept suffering bit rot (damage
to the inodes, IIRC). After much suffering, and pain trying to debug it (lots
of disk writes, how do you figure out the one that's the problem), it was
finally found (IIRC, it wasn't someone thinking it through; they actually
caught it). Turned out (I'm pretty sure my memory of the bug is correct), if
you had two writes of the same block in quick succession, and the second was
lost, if the first one's ack was delayed Just Enough...
They had an unused 'hint' (I think) field in the protocol, and so they
recycled that to be a sequence number, so they could tell acks apart.
Noel
Larry McVoy:
> +1 on what Ron said. I don't get the rationale for going back to ptrace.
> Anyone know what it is? Is there a perf issue?
Kurt H Maier:
The usual rationale presented is that someone can exhaust the fd table
and then you can't get anything done. Instead of fixing that problem,
the popular approach is to add more syscalls, like with getrandom(2).
====
Funny that that rationale isn't extended to its logical
conclusion: get rid of open and creat. Then nobody needs
to worry about running out of file descriptors ever!
I too am saddened to see such a retrograde step, but perhaps
I'm biased. When I arrived in 1127, the kernel had /proc but
still had ptrace as well. Why? Because no one was brave enough
to wade into sdb and adb.
After a couple of years, I felt brave enough, so I did it.
Once the revised sdb and adb had propagated to all our systems,
I removed the syscall. I celebrated by physically removing
ptrace(2) from the Eighth Edition manual in the UNIX room: the
manual entry comprised two facing pages, which I glued together.
I can sympathize with the FreeBSD excuse someone cited elsewhere,
that nobody used the code so it should go--I'm always in favour
of improving programs by chopping stuff out--but I think the
decision here was backwards. The proper answer would have been
to teach ps et al to use /proc, not to invent a new complex of
system calls.
I dislike how Linux has tossed information about processes and
other system-related data into the same namespace (and now that
there is /sys as well as /proc, I wonder whether it's time to
divorce them, or even to move /proc into /sys/proc), but the
general direction of moving things into the file system makes
sense. I have some qualms about adding more and more code to
the kernel that just does string processing (making the kernel
bigger and more complicated, and widening the attack surface
for bad guys), though; maybe most of that stuff belongs not in
the OS proper but in a user-mode program that reads /dev/mem
and presents as a file system.
Norman Wilson
Toronto ON
On Wed, 27 Sep 2017, Roland Turner wrote:
> * Simply attaching the entire message as a message/rfc822 part is an
> appealing approach, but very few mail clients would do anything
> intelligent with it.
My MUA of choice, Alpine, certainly does, and it's hardly uncommon. I
tried Elm once, and immediately went straight back to Pine (as it was
known then); never bothered with Mutt.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
All, I've set up a mirror TUHS list which munges the From: line to keep
mail systems like Gmail happy. I've migrated some of you over to this
new list: those whose subscriptions were disabled due to excessive bounces.
I'm hoping that this will a) fix most of the bounce problems and b) keep
those who don't want address munging happy.
If I've moved you and you want to be put back on the non-munged list,
let me know.
Fingers crossed that this is a good solution,
Warren
All, overnight the mail list blocked about 60 people because of excessive
bouncing. It was probably because the list has been busy, and the bounce
threshold for the (mostly gmail) addresses was exceeded. I've manually
re-enabled them all.
I have installed the script that strips DKIM and ARC header lines before
the list software processes the inbound e-mails. We will see if that helps.
Apologies, Warren
All, I just had a whole bunch of gmail addresses disabled on the TUHS
list because of DKIM. I had an idea. I'll create a second list which
munges the From: line. E-mail to tuhs(a)tuhs.org will go to both lists.
I'll move the gmail people over to the munging list.
This is just a test e-mail to see if the munging list works. I'm the only
one on it. If it seems to work, I'll move the gmail folk over.
Cheers, Warren
Clem Cole:
It was never designed for it. dmr designed Streams to replace the
tty handler. I never understood why the Summit guys insisted on
forcing networking into it.
======
You're mistaken. The point of the stream I/O setup with
stackable line disciplines, rather than the old single
line-discipline switch, was specifically to support networking
as well as tty processing.
Serial-device drivers in V7 called into a single line
discipline, used variously for canonical-tty handling and for
network protocols. The standard system as used outside
the labs had only one line discipline configured, with
standard tty handling (see usr/sys/conf/c.c). There were
driver source files for what I think were internal-use-only
networks (dev/pk[12].c, perhaps), but I don't think they
were used outside AT&T.
The problem Dennis wanted to solve was that tty handling
and network protocol handling interfered with one another;
you couldn't ask the kernel to do both, because there was
only one line discipline at a time. Hence the stackable
modules. It was possible to duplicate tty handling (probably
by placing calls to the regular tty line discipline's innards)
within the network-protocol code, but that was messy. It also
ran into trouble when people wanted to use the C shell, which
expected its own special `new tty' line discipline, so the
network code would have to know which tty driver to call.
It made more sense to stack the modules instead, so the tty
code was there only if it was needed, and different tty
drivers could exist without the network code knowing or caring.
When I arrived at the Labs in 1984, the streams code was in
use daily by most of us in 1127. The terminals on our desks
were plugged into serial ports on Datakit (like what we call
a terminal server now). I would turn on my terminal in the
morning, tell the prompt which system I wanted to connect to,
and so far as I could tell I had a direct serial connection.
But in the remote host, my shell talked to an instance of the
tty line module, which exchanged data with a Datakit protocol
module, which exchanged data with the low-level Datakit driver.
If I switched to the C shell (I didn't but some did), csh would
pop off the tty module and push on the newtty module, and the
network code was none the wiser.
Later there was a TCP/IP that used the stream mechanism. The
first version was shoehorned in by Robert T Morris, who worked
as a summer intern for us; it was later cleaned up considerably
by Paul Glick. It's more complicated because of all the
multiplexers involved (Ethernet packets split up by protocol
number; IP packets divided by their own protocol number;
TCP packets into sessions), but it worked. I still use it at
home. Its major flaw is that details of the original stream
implementation make it messy to handle windows of more than
4096 bytes; there are also some quirks involving data left in
the pipe when a connection closes, something Dennis's code
doesn't handle well.
The much-messier STREAMS that came out of the official System
V people had fixes for some of that, but at the cost of quite
a bit more complexity; it could probably be done rather better.
At one point I wanted to have a go at it, but I've never had
the time, and now I doubt I ever will.
One demonstration of virtue, though: although Datakit was the
workhorse network in Research when I was there (and despite
the common bias against virtual circuits it worked pretty well;
the major drawback was that although the underlying Datakit
fabric could run at multiple megabits per second, we never had
a host interface that could reliably run at even a single megabit),
we did once arrange to run TCP/IP over a Datakit connection.
It was very simple in concept: make a Datakit connection (so the
Datakit protocol module is present); push an IP instance onto
that stream; and off you go.
I did something similar in my home V10 world when quickly writing
my own implementation of PPP from the specs many years ago.
The core of that code is still in use in my home-written PPPoE code.
PPP and PPPoE are all outside the kernel; the user-mode program
reads and writes the serial device (PPP) or an Ethernet instance
that returns just the desired protocol types (PPPoE), does the
PPP processing, and reads and writes IP packets to a (full-duplex
stream) pipe on the other end of which is pushed the IP module.
All this is very different from the socket(2) way of thinking,
and it has its vices, but it also has its virtues.
Norman Wilson
Toronto ON
>> "Bah. That's just some goof-ball research toy."
> I feel like the same thing was said about Unix at some point very early
in its history.
Amusingly the IT department of AT&T felt that way and commissioned a
Harvard prof, no less, to write a report about why VMS was the way to
go on Vaxen. The hired gun (so much for academic integrity) addressed
the subject almost entirely with meta arguments:
(1) VMS was written by OS professionals; Unix was a lab experiment.
(2) One could count on support from DEC, not from Research. (So
much for USG; as far as I know the author never asked anyone in
Bell Labs about anything.)
(3) And the real killer: VMS was clearly far advanced, witness
its shelf of manuals vs the thin Unix volumes that fit in one's
briefcase. Lee McMahon had particular fun with this one in a
rebuttal that unleashed the full power of his Jesuit training
in analytic debate.
Doug
OK, here's another one that's good for chest thumping...
I am not a fan of texinfo. It doesn't provide any benefits (to me) over man.
I suppose that it was trailblazing in that it broke manual pages up into
sections that couldn't easily be viewed concurrently long before the www and
web pages that broke things up into multiple pages to make room for more ads.
Any benefits that texinfo might have are completely lost by the introduction
of multiple non-intersecting ways to find documentation.
This is a systemic problem. I have a section in my book-in-progress where I
talk about being a "good programming citizen". One of the things that I say
is:
Often there is a tool that does most of what you need but is lacking
some feature or other. Add that feature to the existing tool;
don't just write a new one. The problem with writing a new one
is that, as a tool user, you end up having to learn a lot of tools
that perform essentially the same function. It's a waste of time
and energy. A good example is the make utility (invented by Stuart
Feldman at Bell Labs in 1976) that is used to build large software
packages. As time went on, new features were needed. Some were
added to make, but many other incompatible utilities were created that
performed similar functions. Don't create burdens for others.
Improve existing tools if possible.
A funny example of this is when I was consulting for Xilinx in the late 80s
on a project that had to run on both Suns and PCs. Naturally, I did the
development on a Sun and ported to the PC later. When it came time to do
the port, a couple of the employees proudly came to me and told me about
this wonderful program that they wrote that was absolutely necessary for
doing the PC build. I was completely puzzled and told them that I already
had the PC build complete. They told me that that couldn't be possible
since I didn't use their wonderful utility. Turns out that their utility
wrote out all of the make dependencies for the PC. I, of course, wrote a
.c.obj rule which was all that it took. They were excessively angry at me
for inadvertently making them look like the fools that they were.
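A suffix rule like that really is only a few lines of make; the fragment below is an illustrative reconstruction (the compiler macro and output flag vary by PC compiler), not the actual Xilinx makefile:

```make
# Teach make the .obj suffix and how to build one from the matching
# .c file; make then works out the per-file dependencies itself.
.SUFFIXES: .obj .c

.c.obj:
	$(CC) $(CFLAGS) -c $< -o $@
```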
Another example is a more recent web-based project on which I was advising.
I'm a big fan of jQuery; it gets the job done. Someone said "Why are you
using that instead of angular?" I did a bit of research before answering.
Turns out that one of the main reasons given for angular over jQuery was
that "it's fresh". That was a new one for me. Still unclear why freshness
is an attribute that would trump stability.
So, I'm sure that many of you have stories about unnecessary tools and
packages that were created by people unable to RTFM. Would be amused
to hear 'em.
Jon
I started using Unix in ~1977 at UC Santa Barbara. At some point
around then we decided to host a Unix users meeting in the U-Cen
student union building. We asked the facilities people to prepare
a sign pointing to the meeting room.
Imagine my reaction when I walked into the building and saw
the following sign:
"Eunuchs Group Meeting - Room 125"
I don't know if any eunuchs actually showed up.
Jon Forrest
Warner Losh <imp(a)bsdimp.com> kindly corrected my statement that kcc
compiler on the PDP-10 was done by Ken Harrenstien, pointing out that
it was actually begun by Kok Chen (whence the name kcc).
I've just dug into the source tree for the compiler, and found this
leading paragraph in kcc5.vmshelp (filesystem date of 3-Sep-1988) that
provides proper credits:
>> ...
>> KCC is a compiler for the C language on the PDP-10. It was
>> originally begun by Kok Chen of Stanford University around 1981 (hence
>> the name "KCC"), improved by a number of people at Stanford and Columbia
>> (primarily David Eppstein, KRONJ), and then adopted by Ken Harrenstien
>> and Ian Macky of SRI International as the starting point for what is now
>> a complete and supported implementation of C. KCC implements C as
>> described by the following references:
>>
>> H&S: Harbison and Steele, "C: A Reference Manual",
>> HS1: (1st edition) Prentice-Hall, 1984, ISBN 0-13-110008-4
>> HS2: (2nd edition) Prentice-Hall, 1987, ISBN 0-13-109802-0
>> K&R: Kernighan and Ritchie, "The C Programming Language",
>> Prentice-Hall, 1978, ISBN 0-13-110163-3
>>
>> Currently KCC is only supported for TOPS-20, although there is
>> no reason it cannot be used for other PDP-10 systems or processors.
>> The remaining discussion assumes you are on a TOPS-20 system.
>> ...
I met Ken only once, in his office at SRI, but back in our TOPS-20
days, we had several e-mail contacts.
----------------------------------------
P.S. In these days of multi-million line compilers, it is interesting
to inspect the kcc source code line count:
% find . -name '*.[ch]' | xargs cat | wc -l
80298
A similar check on a 10-Oct-2016 snapshot of the actively-maintained
pcc compiler for Unix systems found 155896 lines.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Gentlemen,
Below some additional thoughts on the various observations posted
about this. Note that I was not a contemporary of these developments,
and I may stand corrected on some views.
> I'm pretty sure the two main System V based TCP/IP stacks were STREAMS
> based: the Lachman one (which I ported to the ETA-10 and to SCO Unix)
> and the Mentat one that was done for Sun. The socket API was sort of
> bolted on top of the STREAMS stuff, you could get to the STREAMS stuff
> directly (I think, it's been a long time).
Yes, that is my understanding too. I think it goes back to the two
roots of networking on Unix: the 1974 Spider network at Murray Hill and
the 1975 Arpanet implementation of the UoI.
It would seem that Spider chose to expose the network as a device, whereas
UoI chose to expose it as a kind of pipe. This seems to have continued in
derivative work (Datakit/streams/STREAMS and BBN/BSD sockets respectively).
When these systems were developed networking was mostly over serial lines,
and to use serial drivers was not illogical (i.e. streams->STREAMS). By 1980
fast local area networks were spreading, and the idea to see the network as
a serial device started to suck.
Much of the initial modification work that Joy did on the BBN code was to
make it perform on early ethernet -- it had been designed for 50 kbps arpanet
links. Some of his speed hacks (such as trailing headers) were later
discarded.
Interestingly, Spider was conceived as a fast network (1.5 Mbps); the local
network at Murray Hill operated at that speed, and things were designed to
work over long distance T1 connections as well. This integrated fast LAN/WAN
idea seems to have been abandoned in Datakit. I have a question out to Sandy
Fraser to ask about the origins of this, but have not yet received a reply.
> The sockets stuff was something Joy created to compete with the CMU Accent
> networking system. [...] CMU was developing Accent on the Triple Drip
> PascAlto (aka the Perq) and had a formal networking model that was very clean
> and sexy. There were a lot of people interested in workstations, the Andrew
> project (MIT is about to start Athena etc). So Bill creates the sockets
> interface, and to show that UNIX could be just as modern as Accent.
I've always thought that the Joy/Leffler API was a gradual development of
the UoI/BBN API. The main conceptual change seem to have been support for
multiple network systems (selectable network stack, expansion
of the address space to 14 bytes).
I don't quite see the link to Accent and Wikipedia offers little help here
https://en.wikipedia.org/wiki/Accent_kernel
Could you elaborate on how Accent networking influenced Joy's sockets?
> * There's no reason for
> a separate listen() call (it takes a "backlog" argument but
> in practice everyone defaults it and the kernel does strange
> manipulations on it.)
Perhaps there is. The UoI/BBN API did not have a listen() call;
instead the open() call - if it was for a listening connection - blocked until
a connection occurred. The server process would then fork off a worker process
and re-issue the listening open() call for the next connection. This left a
time gap where the server would not be 'listening'.
The listen() call would create up to 'backlog' connection blocks in the
network code, so that this many clients could connect simultaneously
without user space intervention. Each accept() would hand over a (now
connected) connection block and add a fresh unconnected one to the backlog
list. I think this idea came from Sam Leffler, but perhaps he was inspired
by something else (Accent?, Chaos?)
Of course, this can be done with fewer system calls. The UoI/BBN system
used the open() call, with a pointer to a parameter data block as the 2nd
argument. Perhaps Joy/Leffler were of the opinion that parameter data
blocks were not very Unix-y, and hence spread the parameters out over
socket()/connect()/bind()/listen() instead.
The UoI choice to overload the open() call and not create a new call
(analogous to the pipe() call) was entirely pragmatic: they felt this
was easier for keeping up with the updates coming out of Murray Hill
all the time.
> In particular, I have often thought that it would have been a better
> and more consistent with the philosophy to have it implemented as
> open("/dev/tcp") and so on.
I think this is perhaps an orthogonal topic: how does one map network names
to network addresses. The most ambitious was perhaps the "portal()" system
call contemplated by Joy, but soon abandoned. It may have been implemented
in the early 90's in BSD, but I'm not sure this was fully the same idea.
That said, making the name mapping a user concern rather than a kernel
concern is indeed a missed opportunity.
Last and least, when feeling argumentative I would claim that connection
strings like "/dev/tcp/host:port" are simply parameter data blocks encoded
in a string :^)
> This also knocks out the need for
> SO_REUSEADDR, because the kernel can tell at the time of
> the call that you are asking to be a server. Either someone
> else already is (error) or you win (success).
Under TCP/IP I'm not sure you can. The protocol specifies that you must
wait for a certain period of time (120 sec, if memory serves me right)
before reusing an address/port combo, so that all in-flight packets have
disappeared from the network. Only if one is sure that this is not an
issue can one use SO_REUSEADDR.
> Also, the profusion of system calls (send, recv, sendmsg, recvmsg,
> recvfrom) is quite unnecessary: at most, one needs the equivalent
> of sendmsg/recvmsg.
Today that would indeed seem to make sense. Back in 1980 there seems
to have been a lot of confusion over message boundaries, even in
stream connections. My understanding is that originally send() and
recv() were intended to communicate a borderless stream, whereas
sendmsg() and recvmsg() were intended to communicate distinct
messages, even if transmitted over a stream protocol.
Paul
> On Sep 23, 2017, at 3:06 PM, Nelson H. F. Beebe <beebe(a)math.utah.edu> wrote:
>
> Not that version, but I have the 4.4BSD-Lite source tree online with
> these files in the path 4.4BSD-Lite/usr/src/usr.bin/uucp:
Thanks, but I have the 44BSD CDs.
> If they look close enough to what you need, I can put
> a bundle online for you.
I'm looking for the seismo/uunet version that Rick hacked on for so many years. It started off as the 4.3BSD version, but grew to embrace the volume of traffic uunet handled in its heyday. It wasn't your daddy's uucico ;-)
--lyndon
Dario Niedermann <dario(a)darioniedermann.it> wrote on Sat, 23 Sep 2017
11:17:04 +0200:
>> I just can't forgive FreeBSD for abandoning the proc filesystem ...
It can be there, if you wish.
Here are two snippets from a recent log of a recent "pkg update -f ;
pkg upgrade" run on a one of my many *BSD family systems, this one
running FreeBSD 11.1-RELEASE-p1:
Message from openjdk8-8.131.11:
======================================================================
This OpenJDK implementation requires fdescfs(5) mounted on
/dev/fd and procfs(5) mounted on /proc.
If you have not done it yet, please do the following:
mount -t fdescfs fdesc /dev/fd
mount -t procfs proc /proc
To make it permanent, you need the following lines in
/etc/fstab:
fdesc /dev/fd fdescfs rw 0 0
proc /proc procfs rw 0 0
======================================================================
Message from rust-1.18.0:
======================================================================
Printing Rust backtraces requires procfs(5) mounted on /proc .
If you have not already done so, please do the following:
mount -t procfs proc /proc
To make it permanent, you need the following lines in /etc/fstab:
proc /proc procfs rw 0 0
======================================================================
I've seen such messages in many package installations in the *BSD
family, and I generally add the suggested lines to /etc/fstab.
Perhaps others more familiar with BSD internals might comment on
whether it is mainly non-BSD software, like the Java Development Kit
and Mozilla's Rust language, that wants /proc support, or
whether there are plenty of native-BSD packages that expect it too.
The second edition of
Marshall Kirk McKusick, George V. Neville-Neil, and Robert N. M. Watson
The Design and Implementation of the FreeBSD Operating System
ISBN 0-201-70245-2 (hardcover), 0-321-96897-2 (hardcover)
has 5 pages with mention of the /proc filesystem, and nothing that
suggests that it is in any way deprecated.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Sadly no longer with us (he exited in 2011), he was forked in 1941. Just
think, if it wasn't for him and Ken, we'd all be running Windoze, and
thinking it's wonderful.
A Unix bigot through and through, I remain,
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Tom Ivar Helbekkmo:
Why should anyone need to? Of all the mailing lists I'm on, this one is
the only one that has this problem.
=====
Beware tunnel vision. Another mailing list I'm on has exactly
the same problem, made worse because it's being run by a central
Big Company Mailing List Provider so the rules keep changing under
foot and it's up to the poor-sod list maintainer (who is not a
programmer) to cope.
To bring the focus back to this mailing list, not every program
runs on a little-endian computer with arbitrary word alignment
and pointers that fit in an int.
Norman Wilson
Toronto ON
Does anyone have a copy of Rick's uunet version of the 4.3BSD UUCP source? The disk I had it on seized up, and I can't figure out a fine-grained-enough set of search keywords to find it through a web search :-(
--lyndon
Lyndon Nerenberg:
I really like mk. 8ed was where it first rolled out? I remember
reading about it in the 10ed books. It's a joy to use in Plan 9.
======
Later than that. I was around when Andrew wrote mk, so it
definitely post-dated the 8/e manual.
mk does a number of things better, but to my mind not quite
enough to justify carrying it around. Just as I decided long
ago (once I'd come out of the ivory hothouse of Murray Hill)
that I was best off writing programs that hewed to the ISO C
and POSIX interfaces (and later put some work into bringing
my private copy of post-V10 nearer to the standards), because
that way I didn't have to think much about porting; so I
decided eventually that it is better just to use make.
As with any other language, of course, it's best to use it
in as simple a way as possible. So I don't care much whether
it's gmake or pmake or qmake as long as it implements more
or less the 7/e core subset without breaking anything.
Larry McVoy:
I do wish that some simple make had stuffed a scripting language in there.
Anything, tcl, lua, even (horrors, can't believe I'm saying this) a little
lisp. Or ideally a built in shell compat language. All those backslashes
to make shell scripts work get old.
======
This is something mk got right, and it's actually very simple to do:
every recipe is a shell script. Not a collection of lines handed
one by one to the shell, but a block of text. No backslashes (or
extra semicolons) required for tests. Each script is run with sh -e,
so by default one failed command will abort the rest, which is
usually what one wants; but if you don't want that you just insert
set +e
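The effect of -e is easy to demonstrate from the shell: handed the whole recipe as one script, the first failing command aborts the rest.

```shell
# A two-step "recipe" run the mk way: one script, under sh -e.
# The failing middle step stops the script before "link" runs.
out=$(sh -e -c 'echo compile; false; echo link' 2>/dev/null) || true
echo "with -e:    $out"

# The same text without -e: the failure is silently ignored
# and the later steps run anyway.
out=$(sh -c 'echo compile; false; echo link')
echo "without -e: $out"
```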
(So it's not that I dislike mk. Were it available as an easy
add-on package on all modern systems, rather than something I'd
have to carry around and compile, I'd be happy to use it.)
Norman Wilson
Toronto ON
I tried running my own server on mcvoy.com but eventually gave up; the
spam filtering was a never-ending task.
If someone has a plug and chug setup for MX I'd love to try it.
Thanks,
--lm
This question is motivated by the posters for whom FreeBSD is not Unix
enough :-)
Probably the best known contribution of the Berkeley branch of Unix is
the sockets API for IP networking. But today, if for no other reason
than the X/Open group of standards, sockets are the preferred networking
API everywhere, even on true AT&T derived UNIX variants. So they must
have been merged back at some point, or reimplemented. My question is,
when and how did that happen?
And if there isn't a simple answer because it happened at different
times and in different ways for each variant, all the better :-)
--
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
Do obvious transformation on domain to reply privately _only_ on Usenet.
I run my own mail server, on systems in my basement.
It is a setup that no one in their right mind would
replicate, but the details may actually be proper for
this list.
A firewall/gateway system runs a custom SMTP server,
which can do simple filtering based on the SMTP envelope,
SMTP commands, calling IP address and hostname. It is
also able to call external commands to pass judgement on
a caller or a particular message.
If mail is accepted, it is passed through a simple
MTA and a stupidly-simple queueing setup (the latter
made of shell scripts) to be sent via SMTP to a
different internal system, which uses the same SMTP
server and MTA to deliver to local mailboxes.
Outbound mail is more or less the obvious inverse.
I have put off naming names for dramatic effect. The
two systems in question are MicroVAX IIIs running
my somewhat-hacked-up version of post-10/e Research
UNIX. The MTA is early-1990s-vintage upas. The SMTP
server, SMTP sender, and queuing stuff are my own.
I wrote the SMTP server originally not long after I left
Bell Labs; I was now in a world where sendmail was the
least-troublesome MTA, but in those days every month
brought news of a new sendmail vulnerability, so I wrote
my own simple server to act as a condom. Over time it
grew a bit, as I became interested in problems like
what sorts of breakin attempts are there in real life
(back then one received occasional DEBUG or WIZ commands,
but I haven't seen any since the turn of the century);
what sorts of simple filtering at the SMTP level will
get rid of most junk mail. The code is more complicated
than it used to be, but is still small enough that I am
reasonably confident that it is safe to expose to the
network.
The SMTP sender and the queueing scripts came later,
when I decided to host my own mail. Both were designed
in too much of a hurry.
There is no official spam filtering (no bogofilter or
the like). A few simple rules that really just enforce
aspects of the SMTP standard seem to catch most junk
callers: HELO argument must contain at least one . (standard
says it must be your FQDN) and must not be *.* (I see dozens
of those every day!); sender must not speak until my server
has issued a complete greeting (I follow Wietse Venema in
this: send a line with a continuation marker first, then
sleep five seconds or so, then send a finish). I also
have a very simple, naive greylisting implementation that
wouldn't work well for a site with lots of users, but is
fine for my personal traffic. The greylisting is implemented
with a pair of external shell scripts.
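The HELO rules described above are simple string checks. A minimal sketch in C of just the HELO part (the function name `helo_ok` is invented, and treating `*.*` as the literal string junk callers send is an assumption, not Norman's actual code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the two HELO checks described above:
 * the argument must contain at least one '.' (the standard says
 * it should be an FQDN), and must not be the literal "*.*"
 * pattern seen from junk callers. */
static bool helo_ok(const char *arg)
{
    if (arg == NULL || strchr(arg, '.') == NULL)
        return false;           /* no dot: not an FQDN */
    if (strcmp(arg, "*.*") == 0)
        return false;           /* literal junk pattern */
    return true;
}
```

The greeting-delay trick and the greylisting scripts are separate mechanisms and are not sketched here.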
I have had it in mind for a long time to consult the Spamhaus
XBL too. It would be easy enough to do with another plug-in
shell script. There are stupid reasons having to do with my
current DNS setup that make that impractical for now.
The mail setup works, but is showing its age, as is the
use of Research UNIX and such old, slow hardware as a network
gateway. One of these years, when I have the time, I'd like
first to redo the mail setup so that mailboxes are stored
on my central file server (a Sun X2200 running Solaris 10,
or perhaps something illumos-based by the time I actually
do all this); then set up a new gateway, probably based on
OpenBSD. Perhaps I should calculate how much hardware I
could buy from the power savings of turning off just one of
the two MicroVAXes for a year.
I have yet to see an MTA that is spare enough for my taste,
but the old upas code just doesn't quite do what I want any
more, and is too messy to port around. (Pursuant to the
conversation earlier here about autoconf: these days I try
to need no configuration magic at all, which works as long
as I stick to ISO C and POSIX and am careful about networking.
upas was written in messier days.) At the moment I'm leaning
toward qmail, just because for other reasons I'm familiar with
it, though for my personal use I will want to make a few changes
here and there. But I'll want to keep my SMTP server because
I am still interested in what goes on there.
Norman Wilson
Toronto ON
> When you say MIT you think about ITS and Lisp. That is why emacs IMHO
> was against UNIX ideals. RMS was thinking in different terms than Bell
> Labs hackers.
Very different. Once, when visiting the Lisp machine, I saw astonishingly
irrelevant things being done as first class emacs commands, and asked
how many commands there were. The instant answer was to have emacs
print the list. Nice, but it scrolled way beyond one screenful. I
persisted: could the machine count them? It took several minutes of
head-scratching and false starts to do a task that was second nature
to Unix hands.
With hindsight, I realize that the thousand emacs commands were but a
foretaste of open-source exuberance--witness this snippet from Linux:
!ls /usr/share/man/man2|wc
468 468 6766
Even a "kernel" is as efflorescent as a tropical rainforest.
On Tue, Sep 19, 2017, at 10:42, Larry McVoy wrote:
> slib.c:1653 (bk-7.3): open failed: permission denied
>
> which is way way way more useful than just permission denied.
Random832 replied:
Well. It's less useful in one way - it doesn't say what file it was
trying to open. You could pass the filename *instead* of "open failed",
but that still omits the issue I had pointed out: what were you trying
to open the file for (at the very least, were you trying to read, write,
or exec it). Ideally the function would have a format and arguments.
====
Exactly.
The string interpretation of errno is just another
item of data that goes in an error message. There is
no fixed place it belongs, and it doesn't always
belong there, because all that is error does not
fail from a syscall (or library routine).
I do often insert a function of the form
void errmsg(char *, ...)
in my C programs. It takes printf-like arguments.
Normally they just get passed to vfprintf(stderr, ...),
though sometimes there is something more esoteric,
and often fprintf(stderr, "%s: ", progname) ends up
in front.
But errmsg never knows anything about errno. Why
should it? It's supposed to send complaints to
a standard place; it's not supposed to invent the
complaints for itself! If an errno is involved,
I write something like
errmsg("%s: cannot open: %s", filename, strerror(errno));
(Oh, yes, errmsg appends a newline too. The idea
is to avoid cluttering code with minutiae of how
errors are reported.)
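A minimal sketch of the errmsg just described, assuming a global progname; any detail beyond what is stated above (variable names, the exact prefix logic) is a guess:

```c
#include <stdarg.h>
#include <stdio.h>

char *progname;         /* set by main() in real programs */

/* One plausible shape for errmsg as described above: printf-like
 * arguments passed to vfprintf(stderr, ...), an optional
 * "progname: " prefix in front, and a newline appended, so
 * callers never worry about either. */
void errmsg(char *fmt, ...)
{
    va_list ap;

    if (progname != NULL)
        fprintf(stderr, "%s: ", progname);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
}
```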
I don't print the source code filename or line number
except for `this shouldn't have happened' errors.
For routine events like the user gave the wrong
filename or it had the wrong permissions or his
data are malformed, pointers to the source code are
just unhelpful clutter, like the complicated
%JARGON-OBSCURE-ABBREVIATION prefixes that accompanied
every official error message in VMS.
Of course, if the user's data are malformed, he
should be told which file has the problem and
where in the file. But that's different from
telling him that line 193 of some file he doesn't
have and will probably never see contains the system
call that reported that he typed the wrong filename.
Norman Wilson
Toronto ON
I received a private request for info on my Postfix config. I'm happy to
post to list.
This is the interesting bit:
https://pastebin.com/tNceD6zM
Running under Debian 8, soon to be upgraded to Debian 9.
Postgrey is listening on TCP/10023.
As an aside I just saw this in my mail queue:
# mailq
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
2182087EA 1618 Thu Sep 21 10:41:07 robert(a)timetraveller.org
(host aneurin.horsfall.org[110.141.193.233] said: 550 5.7.1
<dave(a)horsfall.org>... No reporting address for linode.com; see RFC 2142
(in reply to RCPT TO command))
dave(a)horsfall.org
That is aggressive standards compliance ;)
Rob
All, sorry for the test post. Grant Taylor has been helping me resolve
the mail bounces, which we think are due to the mailing list preserving the
existing DKIM information when forwarding the e-mail.
This e-mail is going to a test address which should strip the inbound
DKIM stuff before passing to the TUHS list. Hopefully we can observe
the result and check the logs.
Warren
And ... we now bring the threads on current Unix-like systems and current
mail configuration to a close, and remind the group that the mailing list
is about _old_ things :-)
Mind you, if the list lasts another 25 years, these two threads will
meet that criterion.
Thanks, Warren
I use Exchange 5.5 & MacOS + Outlook... I know very un-unixy but it's more
so a test bed for a highly modified version of Basilisk II (more so to test
appletalk of all things)
I route it through Office 365, since I use that for my company, and they
have a 'connector' to route a domain through their spam filters and then
drop it to my legacy Exchange server. I gave up on the SPAM fight, it
really was far too much of a waste of my time. That and this email address
is in far far too many databases... :|
I'm on the fence if it's really worth the effort though. I wanted to setup
some kind of UUCP / Exchange relay, and maybe go full crazy with X.25 but at
some point I need to maybe let some of this old stuff just die... It's the
same reason I don't run ATM at home.
> ----------
> From: Larry McVoy
> Sent: Thursday, September 21, 2017 12:25 AM
> To: TUHS main list
> Subject: [TUHS] Who is running their own mail server and what do you
> run?
>
> I tried running my own server on mcvoy.com but eventually gave up, the
> spam filtering was a non-ending task.
>
> If someone has a plug and chug setup for MX I'd love to try it.
>
> Thanks,
>
> --lm
>
Maybe I'm the odd one out here, but at home I've only got a Windows/10
notebook :-)
Mind you, at work I play with
. aDEC 400xP, DECpc MTE, Prioris XL server running SCO UNIX 3.2V4.2
. AlphaServer DS10 running Digital Unix 4.0g
. AlphaServer DS15 running Tru64 Unix 5.1B
. HP(E) rx-servers rx1620, rx2620, rx2660 running HP-UX 11.23
. HP(E) rx-servers rx2800 i2/i4 running HP-UX 11.31
. DOS 6.22, Windows/Xp, Windows/7 clients
Maintaining applications which were conceived late 80s is fun :-)
I worked on, and co-managed, TOPS-20 on DECsystem 20/40 and 20/60
systems with the PDP-10 KL-10 CPU from September 1978 to 31 October
1990, when our 20/60 was retired. (A second 20/60 on our campus in
the Department of Computer Science had been retired a year or two
earlier).
There were two C compilers on the system, Ken Harrenstien's kcc, and
Steve Johnson's pcc, the latter ported to TOPS-20 by my late friend
Jay Lepreau (1952--2008).
pcc was a straightforward port intended to make C programming, and
porting of C software, fairly easy on the PDP-10, but without
addressing many of the architectural features of that CPU.
kcc was written by Ken Harrenstien from scratch, and designed
explicitly for support of the PDP-10 architecture. In particular, it
included an O/S system call interface (the JSYS instruction), and
support for pointers to all byte sizes from 1 to 36. Normal
addressing on the PDP-10 is by word, with an 18-bit address space.
Thus, two 18-bit fields fit in a 36-bit word, ideally suited for
Lisp's CAR and CDR (contents of address/decrement register, used for
first and rest addressing of lists). However, PDP-10 byte pointers
encode the byte size and offset in the second half of a word.
Pointer words could contain an indirect bit, which caused the CPU to
automatically load a memory word at that address, and repeat if that
word was found to be an indirect pointer. That processing was handled
by the LOAD instructions, so it worked for all programming languages.
Characters on the ten-or-so different PDP-10 operating systems were
normally 7-bit ASCII, stored left to right in a word, with the
right-most low-order bit set to 0, UNLESS the word was intended to be
a 5-decimal-digit line number, in which case, that bit was set to 1.
Compilers and some other tools ignored line-number words.
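The packing just described can be illustrated with ordinary host arithmetic. This sketch uses a 64-bit integer to stand in for the 36-bit word and is illustrative only, not PDP-10 code:

```c
#include <stdint.h>

/* Pack five 7-bit ASCII characters left to right into a
 * (simulated) 36-bit word, leaving the low-order bit 0 as
 * described above.  Setting that bit instead would mark the
 * word as a 5-decimal-digit line number. */
static uint64_t pack5(const char s[5])
{
    uint64_t w = 0;
    for (int i = 0; i < 5; i++)
        w = (w << 7) | (uint64_t)(s[i] & 0x7f);
    return w << 1;              /* low bit 0: ordinary text word */
}
```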
As the need to communicate with other systems with 8-, 16-, and 32-bit
words grew, we had to accommodate files with 8-bit characters, which
could be stored as four left-adjusted characters with 4 rightmost zero
bits, or handled as 9 consecutive 8-bit characters in two adjacent
36-bit words. That was convenient for binary file transfer, but I
don't recall ever seeing 9-bit characters used for text files.
By contrast, on the contemporary 36-bit Univac 11xx systems running
EXEC-8, the O/S was extended from six 6-bit Fieldata characters per
word to 9-bit extended ASCII (and ISO 8859-n Latin-n) characters: the
reason was that the Univac CPU had quarterword access instructions,
but not arbitrary byte-size instructions like the PDP-10. I don't
think that there ever was a C compiler on those Univac systems.
On the PDP-10, memory locations 0--15 are mapped to machine registers
of those numbers: short loops could be copied into those locations and
would then run about 3x faster, if there weren't too many memory
references. Register 0 was not hardwired to a zero value, so
dereferencing a NULL pointer could return any address, and could even
be legitimate in some code. The kcc documentation reports:
>> ...
>> The "NULL" pointer is represented internally as a zero word,
>> i.e. the same representation as the integer value 0, regardless of
>> the type of the pointer. The PDP-10 address 0 (AC 0) is zeroed and
>> never used by KCC, in order to help catch any use of NULL pointers.
>> ...
In kcc, the C fopen() call second argument was extended with extra
flag letters:
>> ...
>> The user can override either the bytesize or the conversion
>> by adding explicit specification characters, which should come after
>> any regular specification characters:
>> "C" Force LF-conversion.
>> "C-" Force NO LF-conversion.
>> "7" Force 7-bit bytesize.
>> "8" Force 8-bit bytesize.
>> "9" Force 9-bit bytesize.
>> "T" Open for thawed access (TOPS-10/TENEX only)
>>
>> These are KCC-specific however, and are not portable to other
>> systems. Note that the actual LF conversion is done by the USYS (Unix
>> simulation) level calls (read() and write()) rather than STDIO.
>> ...
As the PDP-10 evolved, addressing was extended from 18 bits to 22
bits, and kcc had support for such extended addresses.
Inside the kcc compiler,
>> ...
>> Chars are aligned on 9-bit byte boundaries, shorts on halfword
>> boundaries, and all other data types on word boundaries (with the
>> exception of bitfields and the _KCCtype_charN types). Converting any
>> pointer to a (char *) and back is always possible, as a char is the
>> smallest possible object. If the original object was larger than a
>> char, the char pointer will point to the first byte of the object; this
>> is the leftmost 9-bit byte in a word (if word-aligned) or in the halfword
>> (if a short).
>> ...
That design choice meant that the common assumption that a 32-bit word
holds 4 characters remained true on the PDP-10. The _KCCtype_charN
types could have N from 1 to 36. The case N = 6 was special: it
handled the SIXBIT character representation used by compilers,
linkers, and the O/S to encode external function names mapped to a
6-bit character set unique to the PDP-10, allowing 6-character unique
names for symbols.
I didn't readily find documentation of kcc features on the Web, so for
those who would like to learn more about support of C and Unix code on
the PDP-10, I created this FTP/Web site today:
http://www.math.utah.edu/pub/kcc
ftp://ftp.math.utah.edu/pub/kcc
It supplies several *.doc files; the user.doc file is likely the one
of most interest for this discussion.
Getting C onto TOPS-20 was hugely important for us, because it gave us
access to many Unix tools (I was the first to port Brian Kernighan's
awk language to the PDP-10, and also to the VAX VMS native C
compiler), and eased the transition from TOPS-20 to Unix that began
for our users about 1984, and continued until our complete move in
summer 1991, when we retired our last VAX VMS systems.
Finally, here is a pointer to a document that I wrote about that
transition:
http://www.math.utah.edu/~beebe/reports/1987/t20unix.pdf
P.S. I'll be happy to entertain further questions about these two C
compilers on the PDP-10, offline if you prefer, or on this list.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
All, I just had this question popped into my inbox.
Cheers, Warren
----- Forwarded message from Evan Koblentz <evan(a)snarc.net> -----
Hi Warren. Evan K. here from Vintage Computer Festival, etc.
I'm trying to find out who invented the Chroot command in Version 7 Unix.
Could you help, possibly including a post to TUHS email list on my behalf?
I posted to our local (northeast US) list and also emailed Brian Kernighan and
Bill Cheswick.
Hoping this leads to an answer. I'm looking for a name, not just generalities.
Thanks very much.
----- End forwarded message -----
Random832:
Just out of curiosity, where does perror(filename), quite possibly the
*most* common error message on Unix as a whole, fall on your scale? It
says which of the file location or permissions (or whatever else) it is,
but not whether it was attempting to open it for reading or writing.
=====
I never liked perror much. It's a little too primitive:
you get exactly one non-formatted string; you get only
stderr, so if you're sending messages separately to a log
or writing them to a network connection or the like, you're
out of luck.
strerror(errno) hits the sweet spot for me. Until it
appeared in the standard library (and until said standard
spread enough that one could reasonably expect to find it
anywhere) I kept writing more or less that function into
program after program.
I guess the problem with perror is that it isn't sufficiently
UNIX-like: it bundles three jobs that are really separate
(convert errno to string message, format an error message,
output the message) into one function, inseparably and
inflexibly.
Norman Wilson
Toronto ON
Does anyone know of the existence of source code written in B (C's predecessor?)
I have encountered a few snippets here and there, but nothing
substantial. The only "real" program that I know of is AberMUD-1. It
looks like there exists a physical print-out of it:
https://dropsafe.crypticide.com/article/12714
Could it be that this is the only artifact left of this most important
"missing link" of C and Unix History? How can this (and any other B
source) be gathered and preserved?
> What a pain, almost like Unix, and not quite. It was a clone of Unix for the 68k.
Funny, I was just poking through some ccpm68k source/tools, which happened to contain
the Alcyon C compiler source, and one of the files is:
$ cat v103/doc/files/Sectioname
.fo 'REGULUS Reference Manual'- % -'FILES'
$
The same system?
DF
Was there ever a UNIX or even the thought of porting one to a PDP-10?
36-bit machine, 18-bit addresses (more on KL10 and KS10), and:
*0 would return register 0 instead of a SIGSEGV ;)
8-bit bytes would have been a wasteful exercise, but you never know.
(losing 4 bits of every 36-bit word)
thanks!
art k.
> That makes sense if it's '73. That would be the Ritchie front end and
> v5/v6 syntax as I remember
Here:
http://publications.csail.mit.edu/lcs/specpub.php?id=717
is the TR describing it (well, this report covers one by him for the Honeywell
6000 series, but IIRC it's the same compiler). I didn't read the whole thing
slowly, but glancing quickly at it, it sounds like it's possible a 'from
scratch' thing?
Noel
> From: Alec Muffett
> "threaded code" in the old sense could be smaller than the equivalent
> CISC binary on the same machine
One can think of 'threaded code' as code for a new virtual machine, one
specialized to the task at hand.
> https://en.m.wikipedia.org/wiki/Threaded_code
For those who really want to delve in some depth, see the chapter "Turning
Cousins into Sisters" (Chapter 15, pg. 365) in "Computer Engineering: A DEC
View of Hardware Systems Design", by Bell, Mudge and McNamara.
Interesting factoid: The PDP-11 initially used a threaded FORTRAN
implementation. In line with the observation above (about a new virtual
machine), DEC actually looked into writing microcode for the -11/60 (which had
a writeable control store) to implement the FORTRAN virtual machine.
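The "new virtual machine" view can be sketched in a few lines of C. This is subroutine-threaded (a table of routine addresses stepped through by a tiny inner interpreter) for clarity; classic direct-threaded code dispatched with an indirect jump instead, and all names here are invented for illustration:

```c
#include <stdio.h>

/* The "program" is just a table of addresses of primitive
 * routines; the inner interpreter below steps through it --
 * in effect a new virtual machine specialized to the task. */
typedef void (*op)(void);

static int stack[16], sp;       /* tiny operand stack */

static void push2(void) { stack[sp++] = 2; }
static void push3(void) { stack[sp++] = 3; }
static void add(void)   { sp--; stack[sp-1] += stack[sp]; }

/* Threaded "code" computing 2 + 3, terminated by NULL. */
static const op program[] = { push2, push3, add, NULL };

static int run(const op *p)
{
    while (*p != NULL)          /* the inner interpreter */
        (*p++)();
    return stack[--sp];
}
```

The compactness comes from each "instruction" being one address rather than a full call sequence, which is why a threaded FORTRAN could fit where compiled code would not.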
Noel
On 2017-09-17 18:33, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Was there ever a UNIX or even the thought of porting one to a PDP-10?
Definitely a thought. An attempt was started on NetBSD for the PDP-10,
and it sort of got halfway to getting into single-user, but I'm not sure
if the person who worked on it just got distracted, or if he hit
problems that were really hard to solve. I certainly know the person,
and can find out more if people really are interested.
> 36-bit machine, 18-bit addresses (more on KL10 and KS10), and:
>
> *0 would return register 0 instead of a SIGSEGV ;)
Yes. Not the first machine that would be true for. You don't have
address 0 unmapped on a PDP-11 either.
> 8-bit bytes would have been a wasteful exercise, but you never know.
> (losing 4 bits of every 36-bit word)
Uh... Why 8 bit bytes? That way lies madness. There exists a really good
C compiler for TOPS-20 - KCC. It uses 9 bits per byte. Works like a
charm, except when some people write portable code that is not so
portable. ;-)
KCC was written by KLH, unless I remember wrong. Same guy who also wrote
the KLH-10 emulator.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
What a pain, almost like Unix, and not quite. It was a clone of Unix for the 68k. The APIs were ever so slightly different because the authors were concerned about copyright infringement. libc calls had different argument orders or types and in general it was just off enough that you wanted to claw at the screen every time something went wrong.
To top it off, the system we were hosting it on was so slow that a full rebuild of our meager (10k lines) software took overnight.
I eventually ported all the software to a SparcStation-2 cross compiling to the 68k target we were embedded on.
> To kick a more relevant thread off, what was the "weirdest" Unix system you used & why? Could be an emulation like Eunice, could be the hardware e.g NULL was not zero, NUXI byte ordering etc.
>
> Cheers, Warren
On Thu, Sep 14, 2017 at 4:09 PM, Jon Steinhart <jon(a)fourwinds.com> wrote:
>
> Well, I'd suggest that a lot of this has to do with people who have vision
> and people who don't. When you look at UNIX, you see something created by
> a bunch of very talented people who had a reasonably shared vision of what
> they were trying to achieve.
>
Jon - I mostly agree, but would twist it a little differently (hey, we've
been arguing since the 1970s, so why stop now).
I think you are actually touching on an idea that has been around humanity
for a long time that is independent of the computer field. We call it
"good taste." Part of acquiring good taste is learning what is in bad
taste, a heavy dose of experience and frankly the ability to evaluate your
own flaws. What I always love about Dennis, Ken, Doug, Steve and the rest
of the team is their willingness to accept the shortcomings and compromises
that were made as they developed UNIX as a system. I never saw them trying
to claim perfection or completeness, much less that an end state had been
reached. Always looking for something better, more interesting. And
always, standing on the shoulders of others...
What I really dislike about much of the crowd that has been discussed is
that they often seem more content to kick people in the shins than to
stand on their shoulders.
I used to say, when we were hiring people for some of my start-ups, what we
wanted was experienced folks that had demonstrated good taste. Those are
people you can trust; and will get you pretty close to where you want to be.
In fact, to pick on one of my previous employers, I have always said, what
DEC got wrong, was it was always striving to be perfect. And lots of
things never shipped, or when they did (like Alpha) it was wonderful, but
it did not matter. The opportunity window had passed.
Part of "good taste" is getting the job done and on time. Being "good
enough" and moving on to the next thing. Sun (certainly at the beginning)
was pretty good at this idea. The UNIX team clearly got a lot of it right.
It is easy to throw stones at others. It is hard to repeatedly get so much
right for so long and UNIX has and did.
Clem
On Sep 15, 2017, at 1:32 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: "Steve Johnson" <scj(a)yaccman.com>
> To: "Dan Cross" <crossd(a)gmail.com>, "Bakul Shah" <bakul(a)bitblocks.com>
> Cc: "TUHS main list" <tuhs(a)minnie.tuhs.org>
> Subject: Re: [TUHS] really Pottering vs UNIX
> Message-ID:
> <d92047c5a36c6e72bd694322acb4ff33e3835f9f(a)webmail.yaccman.com>
> Content-Type: text/plain; charset="utf-8"
>
>
>
> More to do with a sense for quality. Often developed through
> experience
> (but not just that). I think what we need is a guild system for
> programmers/engineers. Being an apprentice of a master craftsman is
> essential for learning this "good taste" as you call it.
>
> Back when I was writing FORTRAN, I was
> working for a guy with very high standards who read my code and got me
> to comment or, more usually, rewrite all the obscure things I did.
> He made the point that a good program never dies, and many people
> will read it and modify it and try to understand it, and it's almost a
> professional duty to make sure that you make this as easy as possible
> for them.
>
When I taught at UCSD I always made it a point to inform the students
that the person who will be maintaining their programs in the future will
all be reformed axe murderers. These nice folks learned C (at the time)
on MS-DOS 3.1 and weren’t as homicidal as they used to be. They would
however be given your home address and phone number in case they
had questions about your code.
It was always good for a laugh and I went on to explain how code outlives
the author and so you should take care to make it easy for someone else
to work on your code.
The other thing I did was to have students give their programs half
way through the project to a randomly chosen (by me) other student.
They were not allowed to assist the recipient and grades were based
on how well the final program met the requirements given at the beginning
of the project. Code quality went way up on the second project compared
to the first.
David
I had almost wiped DG/UX from my memory. Now I’m
quite sure I must resume therapy for it.
I wrote device drivers for that . . . thing to drive graphics cards for
Megatek and its custom version of X11 that buried about 1/2 of the
server in the hardware.
David
> On Sep 17, 2017, at 12:01 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Chet Ramey <chet.ramey(a)case.edu>
> To: arnold(a)skeeve.com, wkt(a)tuhs.org, tuhs(a)tuhs.org
> Subject: Re: [TUHS] And now ... Weirdnix?
> Message-ID: <58b4bb3e-1b94-0e3d-312d-9151e8a057a6(a)case.edu>
> Content-Type: text/plain; charset=utf-8
>
> On 9/17/17 3:28 AM, arnold(a)skeeve.com wrote:
>
>> Whatever Data General called their Unix layer on top of their native
>> OS for the Eclipse or whatever it was (32 bit system).
>
> I think they called it DG/UX -- just like they called their wretched
> System V port.
arnold(a)skeeve.com:
> This not true of ALL the GNU project maintainers. Don't tar everyone
> with RMS's brush.
Jon Steinhart:
What are we supposed to to then? cpio?
===
I guess we're supposed to tp his house.
Norman Wilson
Toronto ON
On Fri, Sep 15, 2017 at 11:21 AM, Noel Chiappa <jnc(a)mercury.lcs.mit.edu>
wrote:
> Why not just write a Unix-native one? They aren't that much work - I
> created a uassembler overnight (literally!) for the QSIC project Dave
> Bridgham and I have been working on.
Agreed. Terry Hayes, tjt and I hacked an assembler and loader for Masscomp
together years ago pretty fast. We actually made all those tools look a
lot like the DEC ones, because many of the same HW people were writing the
uCode for the Masscomp FP/APs as had written much of the 11 and Vax
code.
[Fun story: a few other tools that had been written for UNIX, ports of
older RSX/VMS support tools, were quietly traded to DEC WRL for
some HW libraries. We both were using the same brand CAD system and our
HW guys wanted some design rule stuff WRL had done for Jupiter, and they
wanted UNIX-based tools to run on Ultrix.]
As for Tektronix earlier, we did not know much about the WCS unit, and
basically the CSAV/CRET stuff was supposed to be a one-shot thing. We
just wanted to use the tool that came with it, because we did not think we
were going to do much with it. In hindsight, and knowing what I would
learn 3-5 years later, writing our own would have made more sense; but I'm
not sure it was very well documented.
Clem
> From: Dave Horsfall
> Did anyone actually use the WCS?
Well, the uassembler was a product for a while, so they must have..
> I had visions of implementing CSAV and CRET on our -60, but never did
> get around to it.
I recently had the idea of programming them into an M792 ROM card (100nsec
access time); user programs couldn't have used it without burning a segment
(to map in the appropriate part of the I/O space), but it might have sped up
the kernel some (and it would have been trivial to add, once the card was
programmed - with a soldering iron - BTDT, BITD :-).
Haven't gotten to it yet - still looking for an M792 (I don't want to trash
any of my pre-programmed M792-xx's :-).
> From: Clem Cole <clemc(a)ccc.com>
> A big issue, again IIRC, was the microcode compiler/tools for the WCS
> ran on RSX so it meant UNIX was not running, which was not popular.
Why not just write a Unix-native one? They aren't that much work - I created a
uassembler overnight (literally!) for the QSIC project Dave Bridgham and I
have been working on.
It's been improved a lot since the first version (e.g. the entire uengine
description is now read in from a config file, instead of being compiled in),
but that first version did work fine...
Or was the output format not documented?
Noel
I've been watching and thinking a bit about this exchange, particularly since
I had a paper accepted at the "Unix in Europe: between innovation, diffusion
and heritage" symposium which touches on this topic. I think it
really gets to the core of the problem that UNIX was caught in, and that I
certainly did not understand at the time.
The issue here is that we are all technologists, and as such we think in terms
of the technology and often forget that it's the economics that is the long
pole in the tent. *Simply, computers are purchased as a tool to help
solve problems for people.* And the question remains who controls the
money being spent.
*UNIX was written originally by a group of people for themselves.* The
problem that they were solving, *was how to build better programs and
systems for those programs*. Vendors, particularly hardware centric
vendors, really only care that you buy their (HW) product which is where
they make their money. As it turns out applications software vendors,
don't care about operating systems - as we used to say Larry Ellison never
cared for VMS, UNIX, Solaris, or MVS, he cared how many copies of Oracle's
DB he sold.
So UNIX comes on the scene and a strange thing happens. First, AT&T is
required by the 1956 consent decree to make the IP available; they have
processes and procedures to do so, so they do. So in the 70s certainly,
when the HW delivery platform for UNIX is a DEC, the people that want it
are the same type of people that it was originally written for -- programmers.
UNIX is a hit.... by the 80s and 90s, now we have two different groups of
people working with UNIX. As Steve and Larry point out, those same type
of developers that originally had used UNIX in the first place [which was
cool... I'm one of them too]. The problem was the economics started
getting driven by a different group, the people that did not care - it was
purely a means to get a job done (we the programmers lost control of the
tiger in return for a lot of money to start to develop the systems).
As Larry pointed out, most of the caretakers of the second class of UNIX
customer did not give a hoot about the programmers or many of the 'norms'
from the previous world. Sun, Masscomp and eventually DEC did make SunOS,
RTU, and Ultrix sources available to university customers (IBM may have
also I'm not sure), but the hoops to get it could be painful; because they
did not really understand that customer base as Steve pointed out (which
turns out to have been an undoing).
But that was the issue. Sun was able to see that trying to help the
programmer was a good thing, but in the end they could not sustain it
either: they got sucked into the same economics of cutting a
deal with AT&T to create Solaris/SVR4 etc. OSF ("Oppose Sun Forever") might
have made it if they had actually made OSF 'Open Source', but they were too
caught up in fighting with each other. Imagine if the Mach-based OSF1/386,
which was running before Linux with a window manager, had been released, and
you had IBM, DEC, HP, et al working to make it 'FOSS' - how different the
world might have been.
But that gets back to my point... they made their money selling an
application delivery vehicle. They would have liked to control it, because
it was easier to keep the customer if they did, but in the end, they
really did not care.
Now they have ceded the field to Linux, left the OS to the SW developers,
and fallen back to what they really care about: a place to run
applications.
Clem
> I really kind of liked that toolkit, it was all key/value like so:
>
> panel = xv_create(
> frame, PANEL,
> XV_WIDTH, WIDTH,
> XV_HEIGHT, 30,
> PANEL_LAYOUT, PANEL_HORIZONTAL,
> XV_SHOW, FALSE,
> NULL);
>
> So the order of the args didn't really matter, I think the first one
> maybe did because that was the parent but not sure, the rest could
> be any order you wanted. Pretty simple.
The first two were fixed; the prototype was
Xv_object xv_create (Xv_opaque owner, Xv_pkg *pkg, ...);
The keywords (XV_WIDTH etc) contained a bitfield encoding the type and
cardinality of the value argument(s) so that the argument list could
be traversed without knowing every package's keywords.
Using NULL as the terminating argument looks unportable.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> Check out: ybc: a compiler for B <https://github.com/Leushenko/ybc>
From a historical standpoint, a plain B compiler lacks a very important
attribute of B in Unix. Yes, B presaged some C syntax. But its shining
property was that it produced threaded code, for which both compact
and expanded runtime support was available. The latter had software
paging. Thus B transcended the limited physical memory of the early
PDP-11s.
If you can't compile something, you can't run it. A prime example was B
itself. Without software paging it would not have been able to recompile
itself, and Unix would not have been self-supporting.
Doug
> The rest of your story is great, just one small correction. SunView started
> as something Sun specific but it pretty quickly became a library on top of
> X11. I'm not sure if it ever worked on X10, I think it did but I'm not sure.
As I recall it, SunTools (the original Sun window system) was renamed
SunView, and the API was ported to X11 under the name XView.
> Source: I've hacked up GUI interfaces for the SCM I did at Sun in Sunview.
> This would have been around 1990, is that still X10 or X11?
X11 came out in 1987. The first version I remember using is X11R3,
which came out in 1988.
See https://www.x.org/wiki/X11R1 (and .../X11R2 etc)
-- Richard
Here is a BibTeX entry for a book on the NeWS system:
@String{pub-SV = "Spring{\-}er-Ver{\-}lag"}
@String{pub-SV:adr = "Berlin, Germany~/ Heidelberg,
Germany~/ London, UK~/ etc."}
@Book{Gosling:1989:NBI,
author = "James Gosling and David S. H. Rosenthal and Michelle
Arden",
title = "The {NeWS} Book: an introduction to the {Network\slash
extensible Window System}",
publisher = pub-SV,
address = pub-SV:adr,
pages = "vi + 235",
year = "1989",
ISBN = "0-387-96915-2",
ISBN-13 = "978-0-387-96915-2",
LCCN = "QA76.76.W56 A731 1989",
bibdate = "Tue May 25 07:20:00 1999",
bibsource = "http://www.math.utah.edu/pub/tex/bib/unix.bib",
keywords = "NeWS (computer file); Windows (computer programs)",
}
It is the only book that I have recorded on NeWS (I have it on my
office bookshelf). If there is interest, I can post links to 10 or
so journal articles and conference proceedings about NeWS from
1986 to 1990.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
A one-solution-fits-all approach was always Microsoft's way.
Back over a decade ago, when I was big into MFC development, it was amazing
how much of the Windows bloat was present in WinCE (or Windows Mobile, as
they later styled it). It's almost impossible to differentiate between
desktop and mobile on the Windows side. They tout it as an advantage.
Oddly, Android gives an indication of how to do it right. Sure, you can
take the essentials out of the common system, but you don't have to move
inappropriate bloat over.
> On the programming side, there wasn't either the memory capacity or
> processing power to implement a modern disk file system. One of the
> first computers I worked with was a System/360 model 25 running
> DOS/360. The machine had 48K of core memory, 12K of which was for the
> OS, leaving 36K for programs. No virtual memory.
Unix was a counterexample. Recall that v1 worked on a 24K
machine, 16K of which was OS and 8K user. And it had a modern
file system. Programming was so much easier that it lured
people (e.g. me) away from big mainframe computers.
Doug
> From: Dave Horsfall <dave(a)horsfall.org>
> Just think, if it wasn't for him and Ken, we'd all be running Windoze,
> and thinking it's wonderful.
It's actually worse than that.
We'd be running a Windows even worse than current Windows (which has managed
to pick up a few decent ideas from places like Unix).
Noel
Evening,
Okay - now that I've completed moving and settling in, I am slowly
bringing some stuff back up. UCBVAX should come back in the next few
weeks (now much closer to Berkeley...).
One advantage of the new location: cheaper power so my PBX runs all the
time now. ;)
Once I acquire a second modem I will accept UUCP at 115200 and 2400 baud
via telephone :)
--
Cory Smelosky
b4(a)gewt.net
> does anyone know anything about the 1961 DoD 8-bit
> character set standard it refers to?
>
> This does not appear to say anything about LF vs "Newline" (as either a
> name or a function), though the 1986 version of ASCII deprecates it, so
> was most likely acknowledged in versions between these in response to
> practices on OSes such as Multics. ECMA-6:1973 acknowledges it
I wouldn't say the "practices" of Multics influenced the recognition
of NL in the ASCII standard, for Multics didn't go into use until
1970, while NL was specified by 1965 (see below) with direct
reference to the properties of equipment, not operating systems.
Just what equipment, I don't know. IBM Selectric perhaps?
I recall Multics discussions that specifically cited the standard,
in particular Joe Ossanna's liaison between Multics and the TTY 37
design team at Western Electric, circa 1967. Thus it is my
understanding that Multics was an early adopter of ASCII's NL
convention, not an influencer of it.
Quotation from "Proposed revised American standard for information
interchange", CACM 8 (April 1965) 207-214:
The controls CR and LF are intended for printer equipment
which requires separate combinations to return the carriage
and feed a line.
As an alternative, for equipment which uses a single combination
for a combined carriage-return and line-feed operation
(called New-Line), NL will be coded at FE 2 [LF]. Then FE 5
[CR] will be regarded as Backspace BS.
If the latter type of equipment has to interwork with the
former, it may be necessary to take steps to introduce the
CR character.
One might read the preceding paragraph as advice not only to
writers of driver software but also to a future standards
committee to undo the curious notion of regarding CR
as BS. Unix effectively took it both ways, and kept the
original meaning of CR.
doug
> troff has a substantial history. Significant
> changes in troff could invalidate most of the old documents leaving troff
> with no usage base, and a poor tool at rendering all of the troff documents
> out there.
As a living example, I have troff files from as far back as 1975 that
still work, and perhaps some even older that have lost their dates due
to careless copying. The only incompatibility with groff is easy to
fix: inserting a space before the arguments of a troff request. The
few other incompatibilities I've encountered have been graciously
corrected by groff maintainers. You get no such help for old Word files.
doug