> According to the Wikipedia article on FORTRAN, CAS was usually not the
> most efficient way to implement arithmetic IF anyway. CAS takes up
> four words of memory and takes three cycles to execute, whereas you
> can do it in two words and two cycles with the transfer instructions.
Though the article's conclusion was right, its numbers were wrong.
As a correct discussion of possible implementations of arithmetic-IF
would be long and essentially off topic, I have chopped most of it
out of the article.
doug
>> the 709 added CAS
> It was my understanding that the 704 had this instruction too
Yes. It's in the manual.
(I searched in vain for an online manual. A quick trip to the
attic turned up the real thing in less time.)
Doug
On Sun, 13 Aug 2017, Dave Horsfall wrote:
> On Sat, 12 Aug 2017, Steve Johnson wrote:
>
>> A little Googling shows that the IF I mentioned was called the
>> "arithmetic IF".
>
> Ah yes. It was in FORTRAN II, as I recall.
It turns out the original FORTRAN (manual published in October 1956; code first shipped around April 1957) included the arithmetic IF as well as the assigned and computed GOTO statements — see chapter 4 of this manual:
J.W. Backus, R.J. Beeber, S. Best, R. Goldberg, H.L. Herrick, R.A. Hughes, L.B. Mitchell, R.A. Nelson, R. Nutt, D. Sayre, P.B. Sheridan, H. Stern, I. Ziller. The FORTRAN Automatic Coding System for the IBM 704 EDPM : Programmer's Reference Manual. Applied Science Division and Programming Research Department, International Business Machines Corporation, October 15, 1956, 51 pages.
http://www.bitsavers.org/pdf/ibm/704/704_FortranProgRefMan_Oct56.pdf
(For more on the original FORTRAN compiler, see http://www.softwarepreservation.org/projects/FORTRAN/.)
On Tue, 15 Aug 2017, Dave Horsfall <dave(a)horsfall.org> wrote:
>> On Mon, 14 Aug 2017, Paul Winalski wrote:
>>
>> [ Ye olde 704 ]
>>
>>> TMP (transfer on plus)
>>
> That's rather an odd mnemonic…
Actually, the 704 manual of operation gives the mnemonic as TPL:
http://www.bitsavers.org/pdf/ibm/704/24-6661-2_704_Manual_1955.pdf
Jon Steinhart <jon(a)fourwinds.com> asked this question (not on tuhs).
It highlights some less well known contributors to research Unix.
> I'm trying to find out who came up with strcmp and the idea of
> returning -1,0,1 for a string comparison. I can see that it's not in
> my V6 manual but is in V7. Don't see anything in Algol, PL/I, BCPL, or B
The -1,0,1 return from comparisons stems from the interface of qsort,
which was written by Lee McMahon. As far as I know, the interface for
the comparison-function parameter originated with him, but conceivably
he borrowed it from some other sort utility. The negative-zero-positive
convention for the return value encouraged (and perhaps was motivated by)
this trivial comparison function for integers:
	int compar(a, b) { return(a-b); }
This screws up on overflow, so cautious folks would write it with
comparisons. And -1,0,1 were the easiest conventional values to return:
int compar(a, b) {
	if (a < b) return(-1);
	if (a > b) return(1);
	return(0);
}
qsort was in v2. In v3 a string-comparison routine called "compar"
appeared, with a man page titled "string comparison for sort". So the
convention was established early on.
Compar provided the model for strcmp, one of a package of basic string
operations that came much later, in v7, under the banner of string.h
and ctype.h.
These packages were introduced at the urging of Nils-Peter Nelson, a
good friend of the Unix lab, who was in the Bell Labs comp center.
Here's the story in his own words.
I wrote a memo to dmr with some suggestions for additions to C. I asked
for the str... because the mainframes had single instructions to implement
them. I know for sure I had a blindingly fast implementation of isupper,
ispunct, etc. I had a table of length 128 integers for the ascii character
set; I assigned bits for upper, lower, numeric, punct, control, etc. So
ispunct(c) became
	#define PUNCT 0400
	return(qtable[c]&PUNCT)
instead of
	if(c==':' || c==';' || ...
[or
	switch(c) {
	default:
		return 0;
	case ':':
	case ';':
		...
		return 1;
	}
MDM]
dmr argued people could easily write their own but when I showed
him my qtable was 20 times faster he gave in. I also asked for type
logical which dmr implemented as unsigned, which was especially useful
when bitfields were implemented (a 2 bit int would have values -2, -1,
0, 1 instead of 0, 1, 2, 3). I requested a way to interject assembler,
which became asm() (yes, a bad idea).
TL;DR - I learned procedural programming in Fortran, wrote essentially the same queueing network solution algorithms and simulator in Fortran, then PL/I, then Pascal. For those purposes, PL/I seemed best, but Fortran was OK. (With Fortran and PL/I) the critical issue was to avoid undesirable constructs of the language.
When I got started with procedural programming at U.T. Austin C.S. in 1971, the dominant machine and language was CDC 6600 Fortran. For my dissertation I needed to write a queueing network simulator to attempt to validate approximate numerical methods I developed. Though by then Pascal was an option, Fortran seemed expedient.
In 1975 I joined IBM Yorktown and was asked to dramatically enhance my simulator. It became the core of the “Research Queueing Package” (http://web.archive.org/web/20130627040507/http://www.research.ibm.com/comps…) For the first year or so, I continued to use Fortran. From my perspective, the biggest problems weren’t the bad features, which I could avoid, but the lack of more natural control structures, pointers, and something like structs. My managers lamented that I wasn’t using PL/I. I spent a couple of weeks crafting a SNOBOL program to successfully translate the Fortran to PL/I. For the next decade plus, RESQ development was in PL/I (even after I left Yorktown and subsequently left IBM).
While writing Computer Systems Performance Modeling (https://www.amazon.com/exec/obidos/ISBN=0131651757/0596-2808191-282542) I wanted to illustrate the analysis, algorithms, and simulation concepts developed with RESQ, but be careful not to take anything directly from RESQ. So I wrote everything for the book in PASCAL.
For a variety of reasons, I remember much preferring PL/I over Fortran and Pascal. The Pascal development environments I used weren’t as productive as the PL/I environments. Fortran was missing very useful constructs. But Fortran was OK, in my experience.
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 web: http://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer
On 2017-08-13 19:24, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Sat, 12 Aug 2017, Steve Johnson wrote:
>> A little Googling shows that the IF I mentioned was called the
>> "arithmetic IF".
> Ah yes. It was in FORTRAN II, as I recall.
Still there in FORTRAN 77.
>> There was also a Computed GOTO that branched to one of N labels
>> depending on the value of the expression.
> I think that was still in FORTRAN IV?
Still there in FORTRAN 77.
>> And an Assigned GOTO whose main use, as I remember, was to allow for
>> error recovery when a subroutine failed...
> A real ugly statement; you assigned a statement number to a variable, then
> did a sort of indirect GOTO (or did the compiler recognise "GOTO I")?
The compiler recognizes "GOTO I". And I has to be assigned a
statement number (label) first. It has to be an integer variable, and
once you assign a label to it, you cannot do any arithmetic with it
anymore. And you assign it with a special ASSIGN statement. Thus, it
can be used to store what label to jump to, but you cannot use
arithmetic to compute what it should jump to.
> How those poor devils ever debugged their code with such monstrous
> constructions I'll never know.
It's actually not that hard. All this stuff is fairly simple to deal
with. The real horror in FORTRAN is EQUIVALENCE, which can give C a fair
fight for real horror stories.
But of course, bad programmers can mess things up beyond belief in any
language.
(And I never went beyond FORTRAN 77, so I don't know what current
versions look like. I stayed with PDP-11s (well, still do), and nothing
newer than FORTRAN 77 exists there. :-) )
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=401792
I think this qualifies as history :-) At least for me personally, the
last 12 years feel like a lifetime.
It is significant that so far the report has not been ported forward to
a newer emacs version (as it had been previously from 21 to 24). I have
no newer version installed so I cannot check but I bet the bug is still
present. Maybe someone else on the list can check?
--
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
Do obvious transformation on domain to reply privately _only_ on Usenet.
Hi all, sorry for this off-topic posting.
I just came across this series of videos where a guy built a CPU out
of 7400 series chips on a breadboard: https://eater.net/8bit/
Anyway, I thought it would be fun to do something similar but with
a bigger address space and as few chips as I could get away with.
I've got a design that works in Logisim, but I've never actually
built anything before with real chips. So, if there's someone on the
list who could quickly look at what I've done and point out problems
or gotchas (or indeed, what ROM/RAM chips to use), that would be great!
The design so far: https://github.com/DoctorWkt/eeprom_cpu
Thanks in advance, Warren
Hi all,
Sorry if this is the wrong place to ask...
I started my home UNIX hobby in the mid-1980's with Microport SVR2 on Intel
286. With a couple of modems I started a public access UNIX system in
Hampton Roads, VA.
That system graduated to SVR3.0, 3.1, 3.2 on 386, and finally 4.2 with 4
modems running on an AST 4-port card on 486. That was about 1992 when I
started an ISP using SVR4.2. That ISP, also in Hampton Roads, grew quite
large. We were in 100 cities and partnered with newspapers and managed
their content on the brand new web. By then we had graduated to large Alpha
systems and Sun Enterprise.
Now I'm all grown up and experiencing a 2nd childhood. I run SVR4.2MP
here on a real dual processor Pentium system, but I'd like to get back to
SVR3.2 on period hardware, and later to r2.
My problem is I don't have a copy and don't know where to find one. Do any
of you happen to have a diskette (or disk image) set you can part with?
Ideally I'd like a development system and networking. But I've always been
an optimist :)
I'm happy to pay reasonable fees.
Tom
Thanks for the replies!
I figured that like other lists I frequent, most here would be BSDish folk.
Glad to know there are others with commercial AT&T experience.
I know r2 has no networking and used UUCP extensively back when. I'll be
using it for local transfer along with Kermit.
My particular sickness requires me to run these operating systems on mostly
period hardware. My SVR4 runs on all period stuff except I use SCSI2SD for
the disk. Old disks are becoming hard to find and expensive, and I really
don't want to be playing with MFM or RLL anymore. I'll likely try to find
some kind of substitute. I know they are out there.
I think r3.2 supported SCSI, so I should be ok there.
Tom
I have no actual information about the lantern character, but
a tapered "storm lantern" would be far down my list of guesses.
The tapered chimney would much more likely be called a "lamp",
for it's a standard shape for the oil (kerosene) lamps
that everyone had before electricity.
My top guess would be a carriage lantern with a Japanese
garden ornament as a distant second. The carriage lantern
would be an unfilled circle superimposed on a vertical
rectangle, filled or unfilled. The rectangle might be
simplified to two (interrupted) vertical sides.
An alternate form of lantern would be a side view of
a carriage (or picture-projection) lantern, schematized
as a box, with a flaring projection to the right--an
icon for shining light on a subject, also interpretable
as a movie camera.
A Japanese lantern would be tripartite: cap, body, and
feet.
Do any of these possibilities ring a bell?
Doug
> Message: 1
> Date: Thu, 27 Jul 2017 11:58:38 -0400
> From: Random832 <random832(a)fastmail.com>
> To: tuhs(a)minnie.tuhs.org
> Subject: [TUHS] Anyone know what a LANTERN is?
> Message-ID:
> <1501171118.69633.1054588920.11864815(a)webmail.messagingengine.com>
> Content-Type: text/plain; charset="utf-8"
>
> There is a character in the terminfo/curses alternate character set,
> ACS_LANTERN, which is mapped to "i" in the VT100 alternate graphical
> character set. This character is, in fact, on a real VT100/VT220 (and
> therefore in most modern terminal emulators that support the full ACS),
> "VT" (in 'control character picture' format, along with HT FF CR LF NL).
> The ASCII mapping uses "#", and some CP437/etc mappings map it to the
> double box drawing intersection character.
>
> Was there ever a real 'lantern' character? The manpage mentions "some
> characters from the AT&T 4410v1 added". What did it look like?
There are two references in the termcap manpages:
http://invisible-island.net/ncurses/man/terminfo.5.html
and
http://invisible-island.net/ncurses/man/curs_add_wch.3x.html
The second link mentions that the AT&T 4410 terminal added this glyph in the location of the VT100 VT glyph. Apparently what it looked like is lost, unless someone finds a detailed 4410 manual (or has a working one in the attic).
That is such a great shot!
Since we are on the topic of photos…
I’ve been shooting portraits of some of these same people as part of a larger photo project called Faces of Open Source.
If anyone is interested in taking a look, here they are: http://facesofopensource.com
-P-
—
Peter Adams Photography | web: http://www.peteradamsphoto.com | Instagram/twitter: @peteradamsphoto @facesopensource
> On Jul 19, 2017, at 7:00 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> Send TUHS mailing list submissions to
> tuhs(a)minnie.tuhs.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/tuhs
> or, via email, send a message with subject or body 'help' to
> tuhs-request(a)minnie.tuhs.org
>
> You can reach the person managing the list at
> tuhs-owner(a)minnie.tuhs.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of TUHS digest..."
> Today's Topics:
>
> 1. Photo of some Unix greats (Dave Horsfall)
> 2. Re: Photo of some Unix greats (Larry McVoy)
> 3. Re: Photo of some Unix greats (Dan Cross)
>
> From: Dave Horsfall <dave(a)horsfall.org>
> Subject: [TUHS] Photo of some Unix greats
> Date: July 19, 2017 at 3:48:48 PM PDT
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>
>
> https://en.wikipedia.org/wiki/Steven_M._Bellovin#/media/File:Usenix84_1.jpg
>
> Dennis Ritchie, Steve Bellovin, Eric Allman, Andrew Hume (I know him), Don Seeley, Mike Karels, Clem Cole...
>
> --
> Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>
>
>
>
> From: Larry McVoy <lm(a)mcvoy.com>
> Subject: Re: [TUHS] Photo of some Unix greats
> Date: July 19, 2017 at 4:35:05 PM PDT
> To: Dave Horsfall <dave(a)horsfall.org>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>
>
> It's a cool picture, thanks. Brings back lots of memories and makes me
> hate being younger than that crowd, would have loved to have been there.
> I was just far enough along at that point to have a job sys admining some
> of Clem's work products, 3 Masscomps. Think I was a junior in college.
>
> Great picture, good people.
>
> On Thu, Jul 20, 2017 at 08:48:48AM +1000, Dave Horsfall wrote:
>> https://en.wikipedia.org/wiki/Steven_M._Bellovin#/media/File:Usenix84_1.jpg
>>
>> Dennis Ritchie, Steve Bellovin, Eric Allman, Andrew Hume (I know him), Don
>> Seeley, Mike Karels, Clem Cole...
>>
>> --
>> Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>
> --
> ---
> Larry McVoy                lm at mcvoy.com                http://www.mcvoy.com/lm
>
>
>
>
> From: Dan Cross <crossd(a)gmail.com>
> Subject: Re: [TUHS] Photo of some Unix greats
> Date: July 19, 2017 at 6:22:47 PM PDT
> To: Dave Horsfall <dave(a)horsfall.org>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>
>
> I like how Andrew Hume is defying the weather. Like a boss.
>
> On Wed, Jul 19, 2017 at 6:48 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> https://en.wikipedia.org/wiki/Steven_M._Bellovin#/media/File:Usenix84_1.jpg
>
> Dennis Ritchie, Steve Bellovin, Eric Allman, Andrew Hume (I know him), Don Seeley, Mike Karels, Clem Cole...
>
> --
> Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>
>
>
> _______________________________________________
> TUHS mailing list
> TUHS(a)minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/tuhs
On 2017-07-09 23:44, ron minnich <rminnich(a)gmail.com> wrote:
> On Sun, Jul 9, 2017 at 2:29 PM Dave Horsfall <dave(a)horsfall.org> wrote:
>>
>> I vaguely remember something like "PIP *.TXT *.OLD" to rename files (the
>> "*" was interpreted by the command itself, not the interpreter).
Well, that would not rename files, but copy them while changing their
names. But you could also do renaming in a similar way,
but usually it would require a switch to PIP telling it that you wanted
the files renamed, and not copied.
Also, the syntax of PIP and the order of arguments are a bit different.
At least in the versions I can remember right now, it would be:
PIP *.OLD=*.TXT
to copy, and
PIP *.OLD/RE=*.TXT
to rename.
And yes, it is the program that processes the wildcard expansions, not
the command interpreter, which is why commands like the ones above
worked. This is one of the classical examples that comes up when
comparing Unix with DEC OSes: who does the wildcarding, and what
effects the different approaches have on the result.
(In Unix, you can't do such a mass copy and rename in the same way.)
> All the DEC-10 and 11 operating systems I used had that wildcard, as well
> as IIRC even the PDP-8, maybe someone can confirm the -8.
Yes. It's the same on the OSes I've used on PDP-8s as well.
I would say that the globbing in Unix has much less to do with regular
expressions and much more to do with trying to mimic what DEC was doing
in their OSes.
> It would have been nice had RE's been the standard way to glob files, but,
> that said, when I mention .*\.c to people instead of *.c they don't much
> like it.
In a way, it would have made more sense to just use standard REs for
globbing, but that didn't happen. And like I said, I suspect it was
because DEC OSes did it this way, and Unix just mimicked them. Same, I
guess, with the convention of '.' to separate filename from type, even
though it's less pervasive in Unix than on DEC systems.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Glob was an accident. When Ken and Dennis wanted to put wildcards
(an anachronistic word--it wasn't used in the Unix lab at the time)
into the shell, there wasn't room, so they came up with the clever hack
of calling another process to do the work.
I have always understood that glob meant global because commands like
rm *
would be applied to every file in a directory. A relationship to ed's
g command was clear, but not primary in my mind.
One curious fact is that from day one the word has been pronounced glob,
not globe. (By contrast, creat has been variously pronounced cree-at
and create.) It is also interesting to speculate on whether there would
be a glob library routine in Linux had glob only been an identifier in
sh.c rather than an entry in /bin.
I believe the simple * was borrowed from somewhere else. If the g command
had been the driving model, glob would probably have had ? and ?*, not
? and *. (It couldn't use ed's . because . was ubiquitous in file names.)
My etymology is somewhat different from Steve's. But I never asked the
originator(s). Steve, did you?
Doug
On 7/9/17, ron minnich <rminnich(a)gmail.com> wrote:
>>
> All the DEC-10 and 11 operating systems I used had that wildcard, as well
> as IIRC even the PDP-8, maybe someone can confirm the -8.
>
> It would have been nice had RE's been the standard way to glob files, but,
> that said, when I mention .*\.c to people instead of *.c they don't much
> like it.
So when were REs first designed and implemented? I would imagine that
they came about as a way to extend the old '*' and '?' wildcard
syntax, but that is only a guess.
-Paul W.
> From: Paul Winalski
> So when were REs first designed and implemented? I would imagine that
> they came about as a way to extend the old '*' and '?' wildcard syntax,
> but that is only a guess.
I would suspect in the context of editors, not command file-naming. Don't
have time to research it, though. Try checking CTSS, early Multics, etc.
Noel
Doug McIlroy:
One curious fact is that from day one the word has been pronounced glob,
not globe. (By contrast, creat has been variously pronounced cree-at
and create.)
=====
On the other hand, the UNIX Room pronunciation of `cron' rhymed with
bone, not with spawn.
Norman Wilson
Toronto ON
> From: Ron Minnich
> Why was it called glob? I always wondered.
Something about global expressions.
I recall reading about this somewhere; I tried looking in the man page:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/man/man7/glob.7
but it didn't go into any detail. I don't know where I could have seen it,
alas...
Noel
Probably no one here wants it, but I have a DR11-W UNIBUS board:
http://bitsavers.trailing-edge.com/pdf/dec/unibus/DR11W_UsersMan.pdf
It's basically a 16-bit DMA interface that can actually do 500Kw per
second (woo hoo! 1MB/sec!) to another DR11-W
Anyone need it? Want it?
Figured I'd try here first, in case we had some historic UNIX people
that were still running a UNIBUS PDP-11 (or VAX).
No takers in a few weeks, I'll try the museums next.
thanks
art k.
> From: Toby Thain
> Are we to infer that neither Noel nor Clem are "good homes"?
Well, I said something like 'I don't have an immediate need for it, but I'd
be happy to take it', so I guess the question is 'does someone have an
actual, immediate use for it' (which I don't)?
Noel
Does anyone here remember the Adventure Shell?
Doug wrote it back in 83, and I just stumbled across a copy in an RCS directory.
Invoked as ‘ash’, it was pretty clever. I’ve lost the instructions and help files; however, I’ve got the main script.
Back when people did weird things because it was fun.
David
> Browsing the source for "cc" in v6 and v7: if invoked with -2, it would
> replace crt0.o with crt2.o. If the -2 were followed by another character
> (probably intended to be -20), it would use crt20.o and use -l2 instead
> of -lc.
>
> These options seem to be undocumented, and I can't find any source code
> of these libraries or indication as to what the purpose was.
The "scc" man page for System V may be enlightening, as it mentions
similarly-named files:
NAME
scc - C compiler for stand-alone programs
SYNOPSIS
scc [ +[ lib ] ] [ option ] ... [ file ] ...
DESCRIPTION
Scc prepares the named files for stand-alone execution.
[...]
FILES
/lib/crt2.o execution start-off
/usr/lib/lib2.a stand-alone library
/usr/lib/lib2A.a +A configuration library
/usr/lib/lib2B.a +B configuration library
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
Hi,
two remarks on the issues around FPSIM and tcsh:
I of course wondered why a line like
mov $4..,r0
is accepted by 'as'; I naively expected this to cause an error.
I didn't locate the 211bsd 'as' manual, so checked 7th Edition manuals,
which can be found under
https://wolfram.schneider.org/bsd/7thEdManVol2/
The assembler manual, see
https://wolfram.schneider.org/bsd/7thEdManVol2/assembler/assembler.pdf
states
6.1 Expression operators
The operators are:
(blank) when there is no operator between operands,
the effect is exactly the same as if a
‘+’ had appeared.
So the lexer sees two tokens
$4. --> number
. --> symbol for location counter
and, because the default operator is '+', interprets this as
mov $4. + . , r0
which ends up being a number in the 160000 to 177777 range.
So 'as' is not to blame, works as designed.
Noel Chiappa wrote:
> I'm fairly amazed that apparently nobody has run across one of these 4 before!
> (Or, at least, not bothered to report it.)
> I wonder how long that bug has been in the code?
The answer is: this bug was in 211bsd all the time.
Steven Schultz told me that they simply didn't have a way to
test FPSIM because all machines had FPP, and the only way of testing
would have been to physically remove the FP11 from a 11/70.
With best regards, Walter
I was hoping someone on the list who remembers 'mhit.c' from
the Blit library code could shed some light on some members
of the 'NMitem' structure. I believe that 'mhit' (sometimes
'hmenunit') was written by Andrew Hume.
The structure in question is this:
typedef struct NMitem
{
	char	*text;
	char	*help;
	struct NMenu	*next;
	void	(*dfn)(), (*bfn)(), (*hfn)();
	long	data;		/* user only */
} NMitem;
The three functions are called at different times when a menu
is being traversed, but 'dfn' and 'bfn' are only called before
a submenu is entered, and after a submenu is exited, respectively.
'hfn' is called whenever an item has been selected.
I have never seen 'dfn' and 'bfn' used in any Bell Labs code so
I was wondering what the rationale for their existence was.
Noel Hunt
Dear TUHS list,
I am a recent lurker on this list, also a history and communication
scholar looking at pieces of Unix history for my research. I hope it is
okay if I share with you the news of a historical conference on Unix
that we are organizing in France next Fall with fellow humanities
scholars and computer scientists and engineers involved in the history
and heritage of computing and computers.
You are of course welcome, even encouraged, to submit a proposal. It
will be a pleasure to meet you at this conference in any case.
Best,
Camille Paloque-Berges
***
Please find enclosed the CFP for the *international conference "Unix in
Europe: between innovation, diffusion and heritage"* that will take
place at Cnam (Paris, France), October 19th, 2017.
A one-page abstract (maximum 500 words) with a short biography is
expected for June 30th 2017.
The CFP is also available at:
<http://technique-societe.cnam.fr/international-symposium-unix-in-europe-bet…>.
Best regards,
The organizing committee: Isabelle Astic, Raphaël Fournier-S'niehotta,
Pierre-Eric Mounier-Kuhn, Camille Paloque-Berges, Loïc Petitgirard
------------------------------------------------------------
*Call for contributions*
*International symposium *
*Unix in Europe: between innovation, diffusion and heritage*
*Conservatoire National des Arts et Métiers, Paris, France –
October 19 2017*
Communications and discussions will be held in French or English.
*Rationale*
The Unix system was born in the 1970s at the crossroads between
two interacting worlds: industry (the Bell Labs at AT&T) and
academia (the University of Berkeley computer science network). Its
fast adoption throughout computer research and engineering networks
across the world signaled the future success of the new system,
fostering software experiments within its open, multi-user and
multi-tasking system running on mini-computers – and later
compatible with a larger part of computer hardware. In the European
context, how was this American innovation propagated, adopted and
adapted? Why was Unix of so much interest in this context, then and
now? A solid culture of Unix users might also explain this success,
as well as subsequent processes of appropriation and inheritance,
due to the long and complex history of Unix versioning. The memory
of Unix users is vivid indeed, fed by early accounts within the
computer world (Salus, 1994) as well as preservation initiatives
(Toomey, 2010). Moreover, the Unix system is a crucial reference in
the history of computing, in particular in the field of free and
open source software (Kelty, 2008), computer networks
(Paloque-Berges, 2017), as well as in programming language
philosophy (Mélès, 2013).
In order to explore the variety of these interrogations, this
symposium encourages contributions from historians as well as
philosophers, social science researchers, and heritage professionals
interested in the history of computer open systems and software with
a focus on Unix or who have a wider perspective. It will also
welcome protagonists and witnesses of Unix culture and carriers of
its memory. We wish to discuss and shed light on several aspects of
the development of Unix in Europe (including in comparison or
relation with the rest of the world) along three main lines:
historical and sociological, philosophical and epistemological, and
heritage- and preservation-oriented.
*1/ Historical and sociological perspectives*
Historically, the Unix system is linked to the promotion and
development in research on open systems and computer networks. How
does this fit in the context of industrial, scientific and
technological policies defined at the national and European level?
The history of Unix thus reaches at least three levels of
interrogations: 1/ the forms, places and practices of innovation
around Unix in R&D labs and computing centers in companies, schools
and universities; 2/ planning, promoting and negotiating open
systems (norms and standards) from the perspective of science and/or
politics; 3/ international relations, whether economic,
geopolitical or even geostrategic (for example
between Unix users, with users of other computer equipment or other
hardware and software companies, the role of embargos in the
shipping of mini-computers, of code, and military uses of Unix).
In parallel, how has the world of computer research welcomed,
encouraged, negotiated and propagated uses and innovations related
to Unix systems? This begs the question of how Unix-related research
and development was legitimized - or played a part in the
legitimization of computer science experimentalism in the scientific
field and beyond. We would also like to highlight practices of
resistance, the failure to acknowledge, ignorance of or even the
limits of the Unix system, its software tools and hardware
environment (beginning with the famous PDP and Vax machines from
Digital Equipment where the first Unix versions were implemented).
With a focus on occupational computer uses, we call for analysis
which aims to explore and clarify:
- the role of developers, users, and user associations – from the
point of view of pioneers as well as helpers, maintainers and other
witnesses of the implementation of Unix;
- the context, process, and people who determined its propagation,
appropriation, and development over time;
- the meaning of concepts of Unix philosophy and ethics such as
“openness” and “autonomy”, from a social, political or economic
point of view.
*2. Philosophical and epistemological perspectives*
We will foster research and reflection at the crossroads of the
theoretical foundations of computer systems and engineering
pragmatism, between the philosophy of computer systems and Unixian
practices.
Protagonists in the conception and diffusion of Unix often claim to
have a ‘Unix philosophy’. But beyond statements of principle, what
was the real influence of this idea on the technical choices
underlying the system’s developments? What are the ethical, moral,
and philosophical motivations – alongside the social, political or
economic dimensions discussed earlier – underpinning the adoption of
Unix or claims to extend it (for instance in relation to the
notions of sharing, modularity or freedom)? How is the idea of
‘openness’ attached to Unix practices and heritage (free software,
open source) conceived? What are the theoretical developments to be
drawn from it (for instance with the idea of open software)?
The logical and mathematical foundations of Unix should also be
addressed. Do the fundamental concepts of Unix have an ontological
or metaphysical significance beyond the sole research aim of
technical efficiency? What role do aesthetics play in the
formulation of general principles and technical choices? How can we
analyze programming languages such as C and its successors, scripts,
software, and generally speaking, the proliferating source codes of
Unix? How do we consider the system, the software environment, as
well as the hardware in which Unix is implemented and executed?
Such philosophical questions also cover the modalities of the
transmission of Unix, extending to the investigation of the
respective roles of theory and practice in the teaching of the
system and of the knowledge and tools underlying or supporting it.
*3. Unix heritage and ‘heritagization’*
France is now home to multiple initiatives to build
and preserve a material and immaterial heritage of computer science
and technology – such as ‘Software Heritage’ at INRIA, a global
software archive in progress. The Museum of Arts et Métiers gave
impetus to the MINF initiative (‘Pour un Musée de l’informatique et
du numérique’) and coordinates the ‘Patstec Mission’ dealing with
contemporary scientific and technological heritage preservation,
including computer science. At an international scale and with a
grassroots perspective carried by the community of Unix users, the
TUHS (The Unix Heritage Society) demonstrates the current interest
in the specific heritage linked to Unix. We encourage reflections on
this heritage and its specific features:
- What is the place of Unix in the construction of computer science
heritage? Is it possible to map Unix systems and their heritage,
from the standpoint of machines, languages and software? What has
already been collected? What corpora, databases, and/or platforms
with a patrimonial mission are concerned with Unix and to what purpose?
- How are the questions of training, constitution and diffusion of a
Unix culture incorporated in the effort to collect heritage? How do
we evaluate and put forward the importance of immaterial heritage
attached to Unix, considering the effects of community and memory in
its history and for the writing of its history?
- What are the practices and modalities advocated by the unixian
heritage itself? What has been its influence on the field of
computer engineering and research as well as diverse fields such as:
popularization of science and technology, ‘hacker’ movements and
many ‘maker’ practices today (Lallement, 2016)?
*Schedules*
Please send a one-page abstract (maximum 500 words) with a short
biography by June 30, 2017 to: camille.paloque-berges(a)cnam.fr
<mailto:camille.paloque-berges@lecnam.net>and
loic.petitgirard(a)cnam.fr <mailto:loic.petitgirard@cnam.fr>. Accepted
contributions and speakers will be notified by July 15, 2017.
*Organizing committee*
Isabelle Astic (Musée des arts et métiers)
Raphaël Fournier-S’niehotta (Cédric, Cnam)
Pierre-Eric Mounier-Kuhn (CRM, Paris 1)
Camille Paloque-Berges (HT2S, Cnam)
Loïc Petitgirard (HT2S, Cnam)
*Scientific committee *
François Anceau (UPMC-LIP6)
Pierre Cubaud (Cédric, Cnam)
Liesbeth de Mol (STL, Lille 3)
Claudine Fontanon (CAK, EHESS)
Gérald Kembellec (DICEN, Cnam)
Baptiste Mélès (Archives Henri Poincaré, CNRS)
Pierre Paradinas (Cédric, Cnam, SIF)
Giuseppe Primiero (Middlesex University)
Lionel Tabourier (LIP6, Paris 6)
*Institutional partners and support: *
- Project « Hist.Pat.info.Cnam », HT2S, Cnam – Research program
supported by the Excellence laboratory History and Anthropology of
Knowledge, Technics and Beliefs (HASTEC), and in partnership with
the laboratories CEDRIC (Cnam), DICEN (Cnam), and the Center
Alexandre Koyré (EHESS).
- « Histoire de l’informatique » (« History of computing ») seminar
(Musée des arts et métiers, CRM, Paris 1, UPMC-LIP6)
- « Source code » seminar - (CNRS, Cnam, Université Paris 6).
With support from the DHST/DLMPST for the History and Philosophy of
Computing (HAPOC)
*Bibliography *
Kelty, Christopher M. 2008. /Two Bits: The Cultural Significance of
Free Software/. Durham: Duke University Press Books.
Lallement, Michel. 2016. /L’âge du faire, /Seuil.
Mélès, Baptiste. 2013. « Unix selon l’ordre des raisons : la
philosophie de la pratique informatique ». /Philosophia Scientiæ/ 17
(3): 181‑98.
Salus, Peter H. 1994. /A Quarter Century of UNIX/. Reading:
Addison-Wesley.
Toomey, Warren. 2010. « First Edition Unix: Its Creation and
Restoration ». /IEEE Annals of the History of Computing/ 32 (3): 74‑82.
> From: "Walter F.J. Mueller"
> the kernel panic after tcsh here documents is understood.
Very nice detective work!
> The kernel panic is due to a coding error in mch_fpsim.s. ... After
> fixing the "$SIGILL." ... and three similar cases
I'm fairly amazed that apparently nobody has run across one of these 4 before!
(Or, at least, not bothered to report it.)
I wonder how long that bug has been in the code?
Noel
On 2017-06-11 04:00, "Walter F.J. Mueller" <w.f.j.mueller(a)retro11.de> wrote:
> Hi,
>
> the kernel panic after tcsh here documents is understood.
> And fixed, at least on my system.
Nice work. Looking forward to patch #250. And to respond to Noel's remark
about this being around for a long time without reports: since this is
in FPSIM, and I believe the notes for 2.11BSD even say that this is an
untested piece of code for which it's not even known whether it works,
it's not something that has been used for ages. I'm in a way surprised it
even worked at all. I think I've seen somewhere that it was last tested
around 2.9BSD, and it has not been officially tested since.
> The essential hint was Johnny's observation that on his system he gets
> an "Illegal instruction - core dumped" and no kernel panic.
Well, had I had FPP simulated, I would maybe not have gotten a kernel
panic anyway. It would all depend on where the address ended up. With my
current build, the kernel would have been able to read the address,
since it pointed into the boot diagnostics rom. So it's a dicey error at
best.
But it was very good that you also figured out the tcsh error. And I
guess it means we now have a known working FPSIM. :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Who started this? Why was the change made?
Arrays in Fortran and Algol were indexed from 1 by default, but
Algol (IIRC) generalized that to allow first:last declarations.
NPL used first,last for the SUBSTR operation. But first,last
begets off-by-one errors. The successor slice begins at last+1.
The formula for the position of 1-indexed a[i,j] is a mess.
First,length is much cleaner: the successor begins at
first+length. I convinced the committee of that, so when
NPL became PL/I, first,length was the convention. Zero-
indexing is also a clean first,length notation. BCPL,
where a[i] was synonymous with rv(a+i), had it. Dennis, who
knew a good thing when he saw it, took it over. Dijkstra,
too, often inveighed against 1-indexing, and opined that
zero was the true computer-science way.
1-indexing certainly came into programming languages from
the math tradition for matrix notation. Of course in
math series are often indexed from zero, so one may
pick and choose. Off hand I can only think of one CS
context where 1 comes in handy: heapsort.
Doug
Hi,
the kernel panic after tcsh here documents is understood.
And fixed, at least on my system.
The essential hint was Johnny's observation that on his system he gets
an "Illegal instruction - core dumped" and no kernel panic.
I'm using a self-built PDP-11/70 on an FPGA, see
https://github.com/wfjm/w11/
https://wfjm.github.io/home/w11/
which doesn't have a floating point unit. Therefore the kernel is built
with floating point emulation, thus with
FPSIM YES # floating point simulator
In a kernel with FPSIM activated the trap handler trap(), see
http://www.retro11.de/ouxr/211bsd/usr/src/sys/pdp/trap.c.html
calls fpsim() for each user-mode illegal instruction trap. In case
it was a floating point instruction, fpsim() emulates it, returns 0,
and trap() simply returns. If not, fpsim() returns the abort signal
type, and trap() calls psignal() with this signal type, which in
general will terminate the offending process.
The kernel panic is due to a coding error in mch_fpsim.s. Look in
http://www.retro11.de/ouxr/211bsd/usr/src/sys/pdp/mch_fpsim.s.html
the code after label badins:
badins: / Illegal Instruction
mov $SIGILL.,r0
br 2b
The constant SIGILL is defined in assym.h as
#define SIGILL 4.
Thus after substitution the mov instruction is
mov $4..,r0
with *two dots* !!! The 'as' assembler generates from this
mov #160750,r0
So r0 will contain an invalid signal number, which is returned by fpsim() to
trap(). This signal number is passed to psignal(), which starts with
mask = sigmask(sig);
prop = sigprop[sig];
The access to sigprop[sig] hits an address in I/O space, causing a
UNIBUS timeout, and in consequence the kernel panic.
After fixing the "$SIGILL." to "$SIGILL" (removing the extraneous '.') and
three similar cases, the kernel doesn't panic anymore; tcsh instead
crashes with an illegal instruction trap.
There remains the question of why tcsh runs into an illegal instruction.
With a tcsh core dump now available, adb gives the answer:
adb tcsh tcsh.core
$c
0172774: _rscan(0176024,0174434) from ~heredoc+0246
0176040: _heredoc(067676) from ~execute+0234
0176126: _execute(067040,01512,0,0) from ~execute+03410
0176222: _execute(066754,01512,0,0) from ~process+01224
0176274: _process(01) from ~main+06030
0177414: _main() from start+0104
heredoc(), which is located in OV1, calls rscan(), which is in OV6 with
rscan(Dv, Dtestq);
where Dtestq is a function pointer to Dtestq(), which is as heredoc() in OV1.
rscan(), which has the signature
rscan(t, f)
register Char **t;
void (*f) ();
uses 'f' in the statement
(*f) (*p++);
The problem is that
- heredoc() and Dtestq() are in OV1
- that's why in the end ~Dtestq is used as a function pointer, like
for all overlay internal function invocations
- rscan() is in OV6, when it's called, overlay is switched OV1 -> OV6
- this invalidates the function pointer, which points to some random
code location, which happens to hold '000045', causing a trap.
It is clear that in this context _Dtestq, the forwarder in the base, must
be used and not ~Dtestq, the entry point in the overlay. The generated
code for 'rscan(Dv, Dtestq)' is
~heredoc+0230: mov $0174434,(sp) # arg Dtestq: uses ~Dtestq
~heredoc+0234: mov r5,-(sp)
~heredoc+0236: add $0177764,(sp) # arg Dv
~heredoc+0242: jsr pc,*$_rscan
Since rscan() is very small and only used by heredoc() I simply moved the
code of rscan() from sh.glob.c (OV6) to sh.dol.c, where heredoc() and
Dtestq() are also defined.
After that tcsh works fine with here documents
./tcsh
cat >x.x <<EOF
1
$TERM
$PWD
EOF
cat x.x
1
vt100-long
/usr/src/bin/tcsh
Bottom line
- fpsim was broken all the time
- tcsh was broken all the time
I'll convert this into proper patches and send them to Steven, but this will
take some time because I have to tidy up my system to again be in a
position to provide proper and clean patch sets.
With best regards, Walter
P.S.: debugging the kernel issue was quite easy because the w11a CPU has
three essential 'built into the CPU' debug tools:
- a 'cpu monitor', which records 144 bits of processor state for the last 256
instructions or vector fetches, see
https://github.com/wfjm/w11/blob/master/rtl/w11a/pdp11_dmcmon.vhd
- a 'breakpoint unit' which allows setting instruction or data breakpoints
- an 'ibus monitor' which records the last 512 ibus transactions
After setting a breakpoint on the trap 004/010 handler an inspection of the
instruction trace gave the essential information. Below is a very
condensed and annotated excerpt:
nc ....pc cprptnzvc ..dsrc ..ddst ..dres vmaddr vmdata
#
# the "(*f) (*p++)" in tcsh, running onto an illegal instruction
#
15 145210 uu00-.... 000105 173052 000105 w d 173052 000105 mov r0,(sp)
25 145212 uu00-.... 173050 174434 174434 w d 173050 145216 jsr pc,@n(r5)
19 174434 uu00-.... 000010 173064 000010 r i 174434 000045 ?000045?
1 174434 uu00-.... 000012 173064 000012 r d 000010 000045 !VFETCH 010 RIT
#
# the "mov $SIGILL.,r0" in fpsim(), load 160750 instead of 000004
#
17 160744 ku00-n..c 160750 000045 160750 r i 160746 160750 mov #n,r0
14 160750 ku00-n..c 160752 160750 160732 r i 160750 000770 br .-14
#
# the "sigprop[sig]" access in psignal(), which accesses 174036
# which leads to a external bus (or UNIBUS) time out and IIT trap
#
23 161314 ku00-.z.. 000000 147500 000000 w d 147500 000000 mov r1,n(r5)
9 161320 ku00-.z.. 174036 000000 000000 Ebto 174036 013066 movb n(r3),r0
3 161320 ku00-.z.. 000006 000000 000006 r d 000004 013066 !VFETCH 004 IIT
Arnold gets it right on the Pascal indexing.
In UCSD Pascal you could specify any array bounds you would like and
the compiler would 0 base them for you by always doing a subtraction,
or addition if your min was negative, of your min array index. So a little
run time cost for non-zero based arrays.
I’m not sure how other Pascal compilers did this.
I find it interesting that there are now a slew of testing programs
(Valgrind, Address Sanitizer, Purify, etc) that will add the ‘missing’
array bounds checking for C.
David
> On Jun 7, 2017, at 10:01 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> Date: Wed, 07 Jun 2017 07:20:43 -0600
> From: arnold(a)skeeve.com
> To: tuhs(a)tuhs.org, ag4ve.us(a)gmail.com
> Subject: Re: [TUHS] Array index history
> Message-ID: <201706071320.v57DKhmJ026303(a)freefriends.org>
> Content-Type: text/plain; charset=us-ascii
>
> Pascal (IIRC) allowed you to specify upper and lower bounds, something
> like
>
> foo : array[5..10] of integer;
>
> with runtime bounds checking on array accesses. (I could be wrong ---
> it's been a LLLLOOONNNGGG time.)
>
> HTH,
>
> Arnold
On 2017-06-07 19:01, "Ron Natalie"<ron(a)ronnatalie.com> wrote:
> The original FORTRAN and BASIC arrays started indexing at one because everybody other than computer scientists start counting at 1.
FORTRAN, yes. BASIC (which dialect might we be talking about?) normally
actually starts at 0. However, BASIC is weird, in that the DIM
statement is actually specifying the highest usable index, and not the
size of the array.
Thus:
DIM X(10)
means you get an array with 11 elements. So, people who wanted to use
array starting at 1 would still be happy, and if you wanted to start at
0, that also worked. You might unintentionally have a bit of wasted
memory, though.
> These languages were for scientists and the beginner, so you wanted to make things compatible with their normal concepts.
True.
> PASCAL on the other hand required you to give the minimum and maximum index for the array.
In a way, PASCAL makes the most sense. You state what range you want,
and you get that. Anything works, and it's up to you.
That said, PASCAL could get a bit ugly when passing arrays as arguments
to functions because of this.
> Of course, C’s half-assed implementation of arrays kind of depends on zero-indexing to work.
:-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2017-06-07 22:14, "Walter F.J. Mueller"<w.f.j.mueller(a)retro11.de> wrote:
> Hi,
>
> a few remarks on the feedback on the kernel panic after a 'here document' in tcsh.
>
> To Michael Kjörling question:
> > I'm curious whether the same thing happens if you try that in some
> > other shell? (Not sure how widely here documents were supported back
> > then, but I'm asking anyway.)
> And Johnny Billquist remark
> > Not sure if any of the other shells have this.
>
> 'here documents' are available and work fine in sh and csh.
> And are in fact used, examples
Ah. Thanks. Too lazy to check.
> To Michael Kjörling remark
> > The PC value in the panic report ("pc 161324") strikes me as high
> and Johnny Billquist remark
> > This is in kernel mode, and that is in the I/O page.
>
> 211bsd uses split I/D space and uses all 64 kB I space for code.
D'oh! Color me stupid. I should have thought of that.
> The top 8 kB are in fact the overlay area, and the crash happened
> in overlay 4 (as indicated by ov 4). With a simple
>
> nm /unix | sort | grep " 4"
>
> one gets
>
> 161254 t ~psignal 4
> 162302 t ~issignal 4
>
> so the crash is just 050 bytes after the entry point of psignal. So the
> PC address is fine and not the problem. For psignal look at
>
> http://www.retro11.de/ouxr/211bsd/usr/src/sys/sys/kern_sig.c.html#s:_psignal
>
> the crash must be one of the first lines. psignal is an internal kernel
> function, called from
>
> http://www.retro11.de/ouxr/211bsd/usr/src/sys/sys/kern_sig.c.html#xref:s:_p…
>
> and has nothing to do with the libc function psignal
>
> http://www.retro11.de/ouxr/211bsd/usr/man/cat3/psignal.0.html
> http://www.retro11.de/ouxr/211bsd/usr/src/lib/libc/gen/psignal.c.html
The libc function would be in user mode, so that one was pretty clear.
Ok. Digging through this a little for real then.
psignal gets called with a signal from the trap handler. The actual
signal is weird. It would appear to be 0160750, which would be -7704 if
I'm counting right. That does not make sense as a signal.
The psignal code pulls a value based on the signal number, which is the
line:
prop = sigprop[sig];
which uses the signal number as an index. With a random, weird signal
number, this accesses wherever that might end up, which is when you get
the crash.
On my system, sigprop is at address 0012172, which, with a signal of
-7704 ends up at address 0173142, which by (un)luck happens to be in the
middle of the diagnostics bootstrap rom space. So I don't get a Unibus
timeout error, while you do. Probably because sigprop is at a slightly
different address in your kernel.
So, the real question is how trap can be calling psignal with such a
broken signal number.
I might dig further down that question another day. But unless you
already got this far, I might have saved you a few minutes of digging. I
did start looking into the trap code, which is in pdp/trap.c, but this
is not entirely straightforward. It goes through a bunch of things
trying to decide what signal to send, before actually calling psignal.
> To Johnny Billquist remark
> > Could you (Walter) try the latest version of 2.11BSD and see if you
> > still get that crash?
>
> very interesting that you see a core dump of tcsh rather a kernel panic.
Indeed.
> Whatever tcsh does, it should not lead to a kernel panic, and if it does,
> it is primarily a bug of the kernel. It looks like there are two issues,
> one in tcsh, and one in the kernel. I've a hunch were this might come from,
> but that will take a weekend or two to check on.
Agree that the kernel should not crash on this.
Also, tcsh should not really crash either, but it's a separate issue,
even though one might have triggered the other here.
But yes, there are two bugs in here.
If you can recreate the kernel crash on the latest version, that would
be good.
But it smells like trap.c has some path where it does not even set what
signal to deliver, and then calls psignal with whatever the variable i
got at the function start. Which would be some random stuff on the stack.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2017-06-08 22:17, Dave Horsfall<dave(a)horsfall.org> wrote:
>
> Just to diverge from this thread a little, it probably isn't all that
> remarkable that programming languages tend to reflect the hardware for
> which they were designed.
>
> Thus, for example, we have the C construct:
>
> do { ... } while (--i);
>
> which translated right into the PDP-11's "SOB" instruction (and
> reminiscent of FORTRAN's insistence that DO loops are run at least once
> (there was a CACM article about that once; anyone have a pointer to it?)).
>
> And of course the afore-mentioned FORTRAN, which really reflects the
> underlying IBM 70x architecture (shudder).
FORTRAN stopped running loops at least once with FORTRAN 77.
The last one to insist on running loops at least once was FORTRAN IV.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I learned the other day that array indexes in some languages start at 1
instead of 0. This seems to be an old trend that changed around the 70s?
Who started this? Why was the change made?
It seems to have come about around the same time as C, but interestingly
enough Lua is kinda in between (you can start an array at 0 or 1).
Smalltalk can probably have a 0-based index just by its nature, but I
wonder whether that would work in a 40 year old interpreter.
> Basically, until C came along, the standard practice was for indices
> to start at 1. Certainly Fortran and Pascal did it that way.
Mercury Autocode used 0.
http://www.homepages.ed.ac.uk/jwp/history/mercury/manual/autocode/4.jpg
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
Hi,
a few remarks on the feedback on the kernel panic after a 'here document' in tcsh.
To Michael Kjörling question:
> I'm curious whether the same thing happens if you try that in some
> other shell? (Not sure how widely here documents were supported back
> then, but I'm asking anyway.)
And Johnny Billquist remark
> Not sure if any of the other shells have this.
'here documents' are available and work fine in sh and csh.
And are in fact used, examples
/usr/adm/daily (a /bin/sh script)
su uucp << EOF
/etc/uucp/clean.daily
EOF
/usr/crash/why (a /bin/csh script)
adb -k {unix,core}.$1 << 'EOF'
version/sn"Backtrace:"n
$c
'EOF'
To Michael Kjörling remark
> The PC value in the panic report ("pc 161324") strikes me as high
and Johnny Billquist remark
> This is in kernel mode, and that is in the I/O page.
211bsd uses split I/D space and uses all 64 kB I space for code.
The top 8 kB are in fact the overlay area, and the crash happened
in overlay 4 (as indicated by ov 4). With a simple
nm /unix | sort | grep " 4"
one gets
161254 t ~psignal 4
162302 t ~issignal 4
so the crash is just 050 bytes after the entry point of psignal. So the
PC address is fine and not the problem. For psignal look at
http://www.retro11.de/ouxr/211bsd/usr/src/sys/sys/kern_sig.c.html#s:_psignal
the crash must be one of the first lines. psignal is an internal kernel
function, called from
http://www.retro11.de/ouxr/211bsd/usr/src/sys/sys/kern_sig.c.html#xref:s:_p…
and has nothing to do with the libc function psignal
http://www.retro11.de/ouxr/211bsd/usr/man/cat3/psignal.0.html
http://www.retro11.de/ouxr/211bsd/usr/src/lib/libc/gen/psignal.c.html
To Johnny Billquist remark
> Could you (Walter) try the latest version of 2.11BSD and see if you
> still get that crash?
very interesting that you see a core dump of tcsh rather a kernel panic.
Whatever tcsh does, it should not lead to a kernel panic, and if it does,
it is primarily a bug of the kernel. It looks like there are two issues,
one in tcsh, and one in the kernel. I've a hunch were this might come from,
but that will take a weekend or two to check on.
With best regards, Walter
On 2017-06-06 04:00, Michael Kjörling <michael(a)kjorling.se> wrote:
>
> On 5 Jun 2017 16:12 +0200, from w.f.j.mueller(a)retro11.de (Walter F.J. Mueller):
>> I'm using 211bsd (Version 447) and found that a 'here document' in tcsh
>> leads to a kernel panic. It's absolutely reproducible on my system, both
>> when run it on my FPGA PDP-11 or in simh. Just doing
>>
>> tcsh
>> cat << EOF
> I'm curious whether the same thing happens if you try that in some
> other shell? (Not sure how widely here documents were supported back
> then, but I'm asking anyway.)
Not sure if any of the other shells have this. We're basically talking
csh, sh and ksh unless I remember wrong.
But it's a good question. If no one else has tried it by tomorrow, I
could check.
>> is enough, and I get
>>
>> ka6 31333 aps 147472
>> pc 161324 ps 30004
>> ov 4
>> cpuerr 20
>> trap type 0
>> panic: trap
>> syncing disks... done
>>
>> looking at the crash dump gives
>>
>> cd /etc/crash
>> ./why 4
>> Backtrace:
>> 0147372: _boot(05000,0100) from ~panic+072
>> 0147414: _etext(011350) from ~trap+0350
>> 0147450: ~trap() from call+040
>> 0147516: _psignal(0101520,0160750) from ~trap+0364
>> 0147554: ~trap() from call+040
>>
>> so the crash is in psignal, which is afaik the kernel internal
>> mechanism to dispatch signals.
> The PC value in the panic report ("pc 161324") strikes me as high, but
> 161324 octal is 58068 decimal, so it's not excessively so, and perhaps
> in line with what one might expect to see with a kernel pinned near
> top of memory. Are the offsets in the backtrace constant, i.e. does it
> always crash on the same code?
161324 is way high. This is in kernel mode, and that is in the I/O page.
Basically no code lives in the I/O page (some boot roms and hardware
diagnostics excepted). This smells like corrupted memory (pointer or
stack), or something else very funny.
> Not knowing what cpuerr 20 is specifically doesn't help, and at least
> http://www.retro11.de/ouxr/29bsd/usr/src/sys/sys/trap.c.html#n:112
> (which doesn't seem to be too far from what you are running) isn't
> terribly enlightening; CPUERR is simply a pointer into a memory-mapped
> register of some kind, as seen at
> http://www.retro11.de/ouxr/29bsd/usr/include/sys/iopage.h.html#m:CPUERR,
> and at least pdp11_cpumod.c from the simh source code at
> http://simh.trailing-edge.com/interim/pdp11_cpumod.c wasn't terribly
> enlightening, though of course I could be looking in entirely the
> wrong place.
Like others said - the cpu error register is documented in the processor
handbook.
020 means Unibus Timeout, which is consistent with trying to access
something in the I/O page, where there is no device configured to
respond to that address.
I just tried the same thing on a simh system here, and I do not get a
crash. This on 2.11BSD at patch level 449, running on an emulated 11/94.
I do however get tcsh to crash.
simh:/home/bqt> su -
Password:
erase, kill ^U, intr ^C
# tcsh
simh:/# cat << EOF
Illegal instruction - core dumped
#
Suspended (tty input)
simh:/home/bqt>
simh:/home/bqt> cat /VERSION
Current Patch Level: 448
Date: January 5, 2010
Yes, it says patch level 448, but it really is 449. This was the system
where I worked together with Steven when doing the 449 patch set, but I
never got around to actually updating the VERSION file itself.
Also, this was while running on the console.
Could you (Walter) try the latest version of 2.11BSD and see if you
still get that crash?
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Hi,
I'm using 211bsd (Version 447) and found that a 'here document' in tcsh
leads to a kernel panic. It's absolutely reproducible on my system, both
when I run it on my FPGA PDP-11 and in simh. Just doing
tcsh
cat << EOF
is enough, and I get
ka6 31333 aps 147472
pc 161324 ps 30004
ov 4
cpuerr 20
trap type 0
panic: trap
syncing disks... done
looking at the crash dump gives
cd /etc/crash
./why 4
Backtrace:
0147372: _boot(05000,0100) from ~panic+072
0147414: _etext(011350) from ~trap+0350
0147450: ~trap() from call+040
0147516: _psignal(0101520,0160750) from ~trap+0364
0147554: ~trap() from call+040
so the crash is in psignal, which is afaik the kernel internal
mechanism to dispatch signals.
Questions:
1. has anybody seen this before ?
2. any idea what the reason could be ?
With best regards, Walter
> From: Jacob Ritorto
> Where might one find the list of trap_types
Look in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/sys/pdp/scb.s
which maps from trap vector locations (built into the hardware; consult a
PDP-11 CPU manual for details) to trap type numbers, which are defined here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/sys/pdp/trap.h
and handled here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=2.11BSD/sys/pdp/trap.c
> and cpuerrs?
That just prints the contents of the CPU Error Register; see an appropriate
PDP-11 CPU manual - 11/70, /44, /73, /83 or /84 for what all the bits mean.
Also the "KDJ11-A CPU Module User's Guide", which also documents it.
In theory, there's also a KDJ11-B UG, but it's not online. If anyone has one,
can we please get it scanned? Thanks!
Noel
> The people working on TCP/IP did know of the Spider work (like they knew of
> the Cambridge ring work), but it didn't really have any impact; it was a
> totally different direction than the one we were going in.
I'm aware of that, and I think it was the same the other way around. My
interest is tracing how the networking API of Unix developed in the very
early days, and that's where there is a link.
When I asked a few months back why Bell Labs did not jump onto the work
done at UoI, Doug observed that the lab's focus was on Datakit and that
triggered my interest.
>>>> it turns out that the TIU driver was in Warren's repo all along:
>
> V4?! Wow. I'd have never guessed it went that far back.
My current understanding is that Spider development began in 1969 and
that it was first operational in 1972. By '73/'74 it connected a dozen
computers at Murray Hill and Unix had gained basic network programs.
From Sandy Fraser's "Origins of ATM" video lecture I understand that the
lessons of Spider included that using a mini to simulate a switch/router
was too slow and too costly, and that doing flow control inside the network
induced avoidable complexity (I guess Fraser/Cerf/Pouzin all learned that
lesson around the same time). The follow-on, custom designed Datakit switch
was to correct these issues.
Work started in 1974 and I guess that prototypes may have been available
around 1978 (when Spider was apparently switched off at Murray Hill).
By 1981 a multi-site Datakit network connected various Bell labs and by
1983 Datakit was introduced as a commercial service.
As to the Spider network API, it currently seems that it was relatively
simple: it exposed the switch as a group of character mode devices, with
the user program responsible for doing all protocol work. Interestingly,
Spider used a high speed DMA based I/O board (DR11-B), whereas the
Datakit switch was apparently connected to a low speed polled I/O board
(DR11-C).
I did not find the Datakit device driver(s) in the V7 source tree (only a
few references in tty.h), so it is hard to be sure of anything. However,
it seems that in V7 the Datakit switch was used as "a fancy modem" so to
speak, supporting the uucp software stack.
There is source for a Datakit driver in the V8 tree, but I currently
have no time to study that (and perhaps it is beyond my scope anyway).
All input and corrections much appreciated.
> From: Paul Ruizendaal
>>> The report I have is: "SPIDER-a data communication experiment"
>>> ...
>>> I think it can be public now, but doing some checks.
OK, that would be great to have online. I _think_ the hardcopy I have
(somewhere! :-) is that report, but my memory should not be trusted.
The people working on TCP/IP did know of the Spider work (like they knew of
the Cambridge ring work), but it didn't really have any impact; it was a
totally different direction than the one we were going in.
>>> it turns out that the TIU driver was in Warren's repo all along:
V4?! Wow. I'd have never guessed it went that far back.
>>> The code calls snstat()
>> The object code for snstat() is in libc.a in the dmr's V5 image.
>> Reconstructed, the source code is here:
>> ...
>> In short, snstat() is a modified stty call
Yes, I looked and found the original source, appended below.
>>> Could that be the tiu sys call (#45) in the sysent.c table for V4-V6?
I wonder if we'll ever be able to find a copy of the kernel code for that
tiu() system call. And I wonder what it did?
> [1] Oldest alarm() code I can find is in PWB1
> ...
> Either alarm existed in V5 and V6 .. or it was added after V6 was
> released, perhaps soon after. In the latter case the 'nfs' code that we
> have must be later than 1974
Remember, that source came from the MIT system, which is a modified PWB1.
So it's not surprising it's using PWB1 system calls.
Noel
--------
/ C interface to spider status call
.globl _snstat
.globl cerror
_snstat:
mov r5,-(sp)
mov sp,r5
mov 4(r5),r0
mov 6(r5),0f
mov 8(r5),0f+2
sys stty; 0f
bec 1f
jmp cerror
1:
clr r0
mov (sp)+,r5
rts pc
.data
0: .=.+6
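Read as C, the wrapper above does roughly the following. This is a sketch with assumed names: V6's stty(2) took a file descriptor plus a pointer to a three-word argument block, which is stubbed out here so the packing can be inspected; snstat's real argument meanings are not documented in what survives.

```c
#include <string.h>

/* Stub standing in for the V6 stty(2) system call trap; it just
 * records what it was handed so the packing can be inspected. */
int   stub_fd;
short stub_arg[3];

int stty(int fd, short *argp) {
    stub_fd = fd;
    memcpy(stub_arg, argp, sizeof stub_arg);
    return 0;
}

/* C paraphrase of the assembly: the two caller words are copied into
 * a static three-word block (third word left zero, matching the
 * zero-filled .=.+6 in .data) and handed to stty. */
int snstat(int fd, short a, short b) {
    static short blk[3];
    blk[0] = a;
    blk[1] = b;
    return stty(fd, blk);
}
```

The interesting point is that snstat is just stty with a repurposed argument block, which is why it reads as a step on the road to ioctl().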
Below what I've been able to find about alarm():
[1] Oldest alarm() code I can find is in PWB1, dated July 1977:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/sys/os/sys4.c
http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/sys/os/clock.c
Either alarm existed in V5 and V6, and was removed from distributions
(which seems unlikely to me), or it was added after V6 was released,
perhaps soon after. In the latter case the 'nfs' code that we have must
be later than 1974 (even though the man page is dated that way).
It could be from the 2nd half of 1975.
[2] Interestingly, the idea to implement sleep() in terms of alarm()
seems to originate in UoI network unix:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/ken/sys2.c -- sslep()
This occurs in Oct 1977. In V7 this idea is taken to user space and
sleep() is no longer a system call.
[3] The UoI code has an instance of alarm() being used to break out of
a potentially stalled network call, so that usage seems to have
established itself early on.
Progressed a little further:
[1] The 'ufs' command was a variation on the 'nfs' command. The man page
that Noel provided for nfs includes the paragraph:
"There is a command /usr/usg/tom/ufs which transfers files to
the USG Unix systems. The option letter 7 for the 11/70 or
4 for the 11/45 should be used. Otherwise 'ufs' is similar to
'nfs'."
This means there must have been a Unix based File Store (server).
Does anybody have a suggestion who 'tom' at USG might have been?
[2] The V5 man pages in the archive have a man page for 'npr',
in section VI. It says:
NAME
npr - print file on Spider line-printer
SYNOPSIS
npr file …
DESCRIPTION
Npr prints files on the line printer in the Spider room,
sending them over the Spider loop. If there are no arguments,
the standard input is read and submitted. Thus npr may be used
as a filter.
FILES
/dev/tiu/d2 tiu to loop
It suggests that the printer was hooked up to the Spider switch and that
channel 2 was hardcoded to it.
[3] Upon closer inspection, the tiu.c driver is a character mode device,
the use of disk buffers and a strategy() routine had me confused.
It is just a reflection of the fact that it uses DMA hardware.
The code for tiu.c in NSYS/V4 is rather different from the code in
the SRI-NOSC tree: thinking on how to select channels seems to have
changed in between these two versions.
[4] Also I found the below post that mentions the snstat() call:
http://minnie.tuhs.org/pipermail/tuhs/2015-December/006286.html
The object code for snstat() is in libc.a in the dmr's V5 image.
Reconstructed, the source code is here:
http://chiselapp.com/user/pnr/repository/Spider/artifact/a93175746bd9f94f
In short, snstat() is a modified stty call, an evolution in the direction of
the later ioctl() system call.
No progress as yet on the early history of 'alarm()'.
Paul
>> It is a paper copy, but I can make a scan for you.
>
> That makes it sounds like it might not be possible to put it online?
> What's the exact title, so I can look and see if it's already online?
> I'm pretty sure I've got a hardcopy of some Spider thing, but it would
> probably take me a while to find it... ;-)
The report I have is:
"SPIDER-a data communication experiment", Tech Report 23, Bell Labs, 1974.
I did not find it online, but it may be out there somewhere.
I think it can be public now, but doing some checks.
> OK, the only one I have is 'nfs'. Here's the source, and man page:
>
> http://ana-3.lcs.mit.edu/~jnc/tech/unix/s2/nfs.a
> http://ana-3.lcs.mit.edu/~jnc/tech/unix/man6/nfs.6
Many thanks! There is some puzzling stuff in there that I'd like to
figure out, but that is easier to discuss once the report is online.
Also, it turns out that the TIU driver was in Warren's repo all along:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys/dmr/tdir/tiu.c
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/man/man4/tiu.4
It's fun to read that the V4 man page says "The precise nature of the
UNIX interface has not been defined yet." and Noel's version says:
"The precise nature of the UNIX interface is defined elsewhere." (yet
the dates are the same!).
Some things are surprising (to me, at least):
First of all, opening a connection to the File Store is a single open on
data channel 1:
http://chiselapp.com/user/pnr/repository/Spider/artifact/854a591c0e7a3a54?l…
I would have expected the code to first have sent a connection request to the
switch on control channel 1. Perhaps the File Store was an integral part
of the switch/router (a Tempo minicomputer
ftp://bitsavers.informatik.uni-stuttgart.de/pdf/tempoComputers/TEMPO-1_ad_Nov69.pdf)
with channel 1 functionality hardwired.
Next, the code has a hackish form of non-blocking I/O:
http://chiselapp.com/user/pnr/repository/Spider/artifact/55ee75831bd98d6c?l…
I'm puzzled about the alarm() sys call. That did not exist in 1973 -- or did it
only exist in Bell Labs private builds?
The code calls snstat(), for instance here:
http://chiselapp.com/user/pnr/repository/Spider/artifact/55ee75831bd98d6c?l…
That seems to be a sys call to here:
http://chiselapp.com/user/pnr/repository/Spider/artifact/2c7d65073a7cb0a5?l…
Could that be the tiu sys call (#45) in the sysent.c table for V4-V6?
Ok, I just did an experiment with the rm command and the results surprised me.
On Unix v5 logged in as root I created a small test file then did
chmod 444 on it. Unfortunately it appears that mere users can still rm
the file and also directories are not safe from the rmdir command
(even directories set to mode 444).
This seems to be the case for v6 and v7 as well.
To be fair rm will prompt the user with: test1: 0100444 mode
but the user only has to type y and hit enter and the file is toast.
Is there no way to completely protect files from being deleted?
Mark
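The behaviour Mark describes is by design, and not specific to the old versions: unlink() checks write permission on the containing directory, not on the file itself, so a mode-444 file in a writable directory can always be removed. A small modern-C demonstration (the names and paths are mine):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Demonstration: a mode-444 file can still be unlinked, because
 * unlink() checks write permission on the *directory*, not the file.
 * Returns 0 when the read-only file was removed successfully. */
int unlink_readonly_demo(void) {
    char dir[] = "/tmp/rmdemoXXXXXX";     /* scratch directory */
    if (mkdtemp(dir) == NULL)
        return -1;
    char path[64];
    snprintf(path, sizeof path, "%s/test1", dir);
    int fd = creat(path, 0444);           /* create a read-only file */
    if (fd < 0)
        return -1;
    close(fd);
    int r = unlink(path);                 /* succeeds despite mode 444 */
    rmdir(dir);
    return r;
}
```

So the way to keep entries from being deleted is to write-protect the directory they live in (and even that never stops root).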
> From: Paul Ruizendaal
>>> the 1974 report on Spider
>> Is that online? If not, any chances you can make it so?
> It is a paper copy, but I can make a scan for you.
That makes it sounds like it might not be possible to put it online?
What's the exact title, so I can look and see if it's already online?
I'm pretty sure I've got a hardcopy of some Spider thing, but it would
probably take me a while to find it... ;-)
> I think that in the lifespan of Spider (1972-1978) there were 3 main
> network programs (basing myself on McIlroy's Unix Reader):
> - 'nfs' an FTP-like program ...
> - 'ufs' not sure what it was, but I think a telnet-like facility
> - 'npr' a network printing program
OK, the only one I have is 'nfs'. Here's the source, and man page:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s2/nfs.a
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man6/nfs.6
Noel
>
>> I'm looking into the history of Spider and early Datakit. Sandy Fraser
>> was kind enough to send me the 1974 report on Spider
>
> Is that online? If not, any chances you can make it so?
It is a paper copy, but I can make a scan for you.
> which contains the drivers tiu.c, mpx.c - I'm not sure what other files there
> are part of it?
I think tiu.c might be all. The TIU ("Terminal Interface Unit") was the network card,
so to speak (actually some 5 boards in a rack) and did a lot of the heavy lifting.
From the tiu.c file I understand that a DR11-B parallel I/O card was used on
the PDP side to connect to the TIU, and that access was structured as a block
device driver.
> I'm not at all clear how this stuff got there - someone at Bell must have just
> dumped the contents of the 'dmr' directory, and sent it all off?
Looks like it.
> The PWB1-based MIT systems also have a lot of the Spider software (although it
> was never used). It's a slightly different version than the one above: 'diff'
> shows that 'tiu.c' is almost identical, but mpx.c has more significant
> differences.
>
> It also contains man pages, and sources for some (?) user programs; I have the
> source and manpage for 'nfs'. What other names should I be looking for? (The
> man page for 'nfs' doesn't list any other commands.) I'll put them up
> momentarily.
I think that in the lifespan of Spider (1972-1978) there were 3 main network
programs (basing myself on McIlroy's Unix Reader):
- 'nfs' an FTP-like program to copy files to/from a central File Store.
I'm not sure whether the File Store was a Unix machine or something else.
- 'ufs' not sure what it was, but I think a telnet-like facility
- 'npr' a network printing program
A little surprising, but no reference to a Spider mail program in that document.
> In the meantime, I'll append the 'tiu' man page.
Thanks! It is from October 1973, which sounds right for Spider. I guess this
code is the first networking on Unix, predating the UoI work by about 18 months.
> From: Paul Ruizendaal
> I'm looking into the history of Spider and early Datakit. Sandy Fraser
> was kind enough to send me the 1974 report on Spider
Is that online? If not, any chances you can make it so?
> Does anybody know of surviving v5/v6/v7 code for Spider networking (e.g.
> the 'tiu' device driver, the 'nfs' file transfer package, etc.)?
You're in luck.
To start with, check out:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC/dmr/oldstuff
which contains the drivers tiu.c, mpx.c - I'm not sure what other files there
are part of it?
I'm not at all clear how this stuff got there - someone at Bell must have just
dumped the contents of the 'dmr' directory, and sent it all off?
The PWB1-based MIT systems also have a lot of the Spider software (although it
was never used). It's a slightly different version than the one above: 'diff'
shows that 'tiu.c' is almost identical, but mpx.c has more significant
differences.
It also contains man pages, and sources for some (?) user programs; I have the
source and manpage for 'nfs'. What other names should I be looking for? (The
man page for 'nfs' doesn't list any other commands.) I'll put them up
momentarily.
In the meantime, I'll append the 'tiu' man page. There isn't one for mpx,
alas.
Noel
--------
.th TIU IV 10/28/73
.sh NAME
tiu \*- Spider interface
.sh DESCRIPTION
Spider
is a fast digital switching network.
.it Tiu
is a directory which contains
files each referring to a Spider control
or data channel.
The file /dev/tiu/d\fIn\fR refers to data channel \fIn;\fR
likewise /dev/tiu/c\fIn\fR refers to control channel \fIn\fR.
.s3
The precise nature of the UNIX interface
is specified elsewhere.
.sh FILES
/dev/tiu/d?, /dev/tiu/c?
.sh BUGS
>> There are two other routes to TCP/IP on a PDP11 without split I/D:
>> ...
>> DCEC's adaptation of the Wingfield TCP/IP library, designed to work
>> with V6. It is mostly a user space daemon, but requires some kernel
>> enhancements.
>
> I wonder what the performance would be like, since the TCP is in a user
> process (a different one from the application), i.e. there's a process switch
> every time the application goes to send or receive data. This wouldn't have
> been such an issue when the code was written, since ARPANet-type networks
> were not very fast, but with a better network, it would have been limiting.
IEN98 (http://www.rfc-editor.org/rfc/ien/ien98.txt, page 2) has the answer: about 10kb/s.
The DCEC version used shared memory instead of rand ports and was claimed to be
a bit more performant, but I have no number. I'd be surprised if it was twice as fast,
so perhaps 15kb/s.
Paul
> From: Clem Cole
> So some other mechanism (also discussed here) needed to be created to
> avoid blocking in the application.
> ...
> Rand, UNET & Chaos had something else that gave the same async function,
> whose name I've forgotten at the moment
I don't think the RAND code had the non-blocking stuff; AFAICR, all it had was
named pipes (effectively). Jack Haverty at BBN defined and implemented two new
calls (IIRC, 'capac()' and 'await()') to do non-blocking I/O. The
documentation for that is in the 'BBN' branch at TUHS:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=BBN-V6/doc/ipc/await
http://minnie.tuhs.org/cgi-bin/utree.pl?file=BBN-V6/doc/ipc/ipc
My memory might be incorrect, but I don't think it was asynchronous (i.e. a
process issued a read() or write(), and that returned right away, before the
I/O was actually done, and the system notified the process later when the I/O
actually completed).
I actually did implement asyn I/O for an early LAN device driver - and just to
make it fun, the device was a DMA device, and we didn't want the overhead of a
copy, so the DMA was direct to buffers in the process - i.e. 'raw' I/O. So
that required some major system tweaks, to keep the process from being swapped
out - or moved around - while the I/O was pending.
> I believe Noel posted the code for same in the last year from one of the
> MIT kernels
I found it on the dump of an MIT machine, but it was never run on any machine
at MIT - we just had the source in case we had any use for it.
Noel
> From: Paul Ruizendaal
> There are two other routes to TCP/IP on a PDP11 without split I/D:
> ...
> DCEC's adaptation of the Wingfield TCP/IP library, designed to work
> with V6. It is mostly a user space daemon, but requires some kernel
> enhancements.
I wonder what the performance would be like, since the TCP is in a user
process (a different one from the application), i.e. there's a process switch
every time the application goes to send or receive data. This wouldn't have
been such an issue when the code was written, since ARPANet-type networks
were not very fast, but with a better network, it would have been limiting.
> From: Steve Simon
> do you have pointers to any documentation on the rand/MIT network API?
There was no 'MIT' network API. He was talking about the CHAOSNet API. The
TCP/IP done in the CSR group at MIT used a totally different API.
The various Unix systems at MIT were pretty well out of touch with each other,
and did not exchange code. The only exceptions were the DSSR (later RTS) and
CSR groups in Tech Sq, who used pretty much the same system.
Noel
I had somehow convinced myself that Ultrix-11 needed split I/D, but indeed it does not:
# file unix
unix: (0450) pure overlay executable not stripped
# size unix
14784+(8192,8000,8064,8000,8064,8128,8000,7808,7936,7936,7680,7360,1344)+3524+13500 = 31808b = 076100b (111296 total text)
With only 16KB of permanent kernel there will be a lot of overlay switching. I'm not entirely sure why bss could not be 1KB smaller, enabling 8KB more of permanent kernel. Did the performance cost of two fewer disk buffers really outweigh the reduction in overlay switching?
If I understand correctly, the network code continuously switches around segment 5 to access the right mbuf.
According to the notes in the TUHS archive (http://www.tuhs.org/Archive/Distributions/DEC/Ultrix-11/Fred-Ultrix3/setup-…) running Ultrix-11 with networking on a 11/40 class machine is borderline workable:
"I have personally tested it on a 23+, 53 and 83. I know it runs
fine on the 73. The smaller machines (34, 40 etc) should work
akin to the 23, meaning using overlays and be very tight on RAM
for the drivers. TCP/IP is a biiiiig load for those systems!"
There are two other routes to TCP/IP on a PDP11 without split I/D:
- 3COM's TCP/IP package (initially an overlay over V7, soon after also over 2BSD); I believe the source to this is lost.
- DCEC's adaptation of the Wingfield TCP/IP library, designed to work with V6. It is mostly a user space daemon, but requires some kernel enhancements. The Wingfield code is in the TUHS archive, but that version has a modified V6 kernel that also supports NCP networking and requires split I/D. If used with a minimally enhanced V6 kernel, it would easily fit in 64KB, without overlays.
Note that these last two options have very different API's and would not be so easy to work with.
Paul
-----Original Message-----
From: pechter(a)gmail.com
To: arnold(a)skeeve.com
Sent: Sat, 20 May 2017 16:41
Subject: Re: [TUHS] Unix with TCP/IP for small PDP-11s
Missed the reply all on the phone. Phil Karn had KA9Q in the 80s... It is mentioned on Wikipedia... Don't know much more. PPP might be better than slip.
Bill
-----Original Message-----
From: arnold(a)skeeve.com
To: pechter(a)gmail.com
Sent: Sat, 20 May 2017 16:12
Subject: Re: [TUHS] Unix with TCP/IP for small PDP-11s
Yes! Do you want to follow up to the list please?
Thanks,
Arnold
William Pechter <pechter(a)gmail.com> wrote:
> KA9Q sound right?
>
> -----Original Message-----
> From: arnold(a)skeeve.com
> To: imp(a)bsdimp.com, bqt(a)update.uu.se
> Cc: tuhs(a)minnie.tuhs.org
> Sent: Sat, 20 May 2017 15:06
> Subject: Re: [TUHS] Unix with TCP/IP for small PDP-11s
>
> Warner Losh <imp(a)bsdimp.com> wrote:
>
> > I read the sources to see the TCP/IP support was there (that's the bit
> > about adding Berkeley Sockets). I see nowhere that it's excluded for the
> > non I/D machines, but haven't tried it first hand. I got interested not
> > because of the PDP-11, but because I have an old Rainbow that recently
> > started running Venix (v7-based version) and was trolling around for some
> > way to do TCP/IP to it (though w/o readily available ethernet cards, I'm
> > not sure it is a viable project).
>
> Boy is the memory going. What was the TCP/IP implementation people
> ran on DOS to do connections over serial lines? Could that be found
> and revived for such a system?
>
> Thanks,
>
> Arnold
On 2017-05-21 23:27, Clem Cole <clemc(a)ccc.com> wrote:
> Actually, the 11/60's main claim to fame was it supposed to be a
> 'commercial' PDP-11 and was built for the small business market. The WCS
> was a side effect.
As you mentioned in another mail, its main claim to fame was probably
the short time it actually existed in the market. DEC seriously did some
things wrong on it, such as only 18-bit address, and no split I/D space,
at a time when pretty much any other PDP-11 was going there.
But it was also one of a couple of -11 models where you had the ability
to write your own microcode. But I've never heard of anyone who had the
WCS option. (But at one point, I was playing with four 11/60 machines in
a computer club.)
I don't remember if it was you or someone else who said that there were
several microcode bugs in the 11/60. It ran the DEC OSes, but there were
some issues with Unix. I seem to also remember seeing some special code
in the RSX kernel for some 11/60 oddity, but I would have to search
through the code to remember exactly what that was about.
> It was built to run RSTS and RSX and had a commercial instruction set
> extension, *etc.* Somebody had written a 'dentist office' package for it and a
> 'car dealership package' IIRC. And was physically packaged a tad
> differently than the other 11's as it was trying to be marketing to places
> that might want to show it off instead of hiding it in a computer room.
Uh? I have not seen any plans ever mentioning CIS for the 11/60, but I
guess it's possible it was considered at some point. But I can assure
you no CIS ever existed for the 11/60.
The only machines that ever had the CIS option were the 11/44, 11/23,
and 11/24.
But I have no idea how many machines ever actually had the option installed.
One nice detail of the 11/60 is that it had the FPP built in. But there
was also an optional hardware FPP accelerator available.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I'm trying to find out if there is any existing Unix for the PDP-11
which supports TCP/IP on /40-/34-23 class machines (i.e. non-I+D
machines)?
Does 2.9 BSD with TCP/IP (assuming such a thing exists) fit on those
machines? (I know 2.9 does run on them, but I don't know about the TCP/IP
part.)
The reason I ask is that MIT did a TCP/IP for V6 which would run on them
(only incoming packet de-mux is in the kernel - the TCP is in with the
application, in the user process), which has recently been recovered from a
backup tape.
I'm trying to figure out if there is any use for it, as it would take some
work to make it usable (I'd have to write device drivers for available
Ethernet cards, and adapt an ARP implementation for it).
Noel
On 2017-05-20 04:00, Warner Losh <imp(a)bsdimp.com> wrote:
> https://ia601901.us.archive.org/10/items/bitsavers_decpdp11ulLTRIX112.0SPDS…
>
> Looks like it requires MMU, but not split I/D space as it lists the
> following as compatible: M11, 11/23+, 11/24, 11/34, 11/40 and 11/60. It
> does require 256kb of memory. See table 2, page 6 for details.
Uh...? Where do you see that there is any TCP/IP support in Ultrix-11?
If any was done by someone else, there is no saying that it would be
usable on a machine without split I/D. To be honest, I've never seen any
mention of TCP/IP on any machine without split I/D space. I guess it
could be done, but it would be a rather big headache...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
At Celerity we were porting Unix to a new NCR chipset for our washing machine sized Workstation.
We had a VAX 750 as the development box and we cross compiled to the NCR box. We contracted
out the 750 maintenance to a 3rd party and had no problems for a couple of years. Then one day I
came in to work to find the VAX happily consuming power and doing nothing. Unix wasn’t running and
nothing I could do would bring it back. After about 2 hours I got my boss and we contacted the maintenance
company. The guy they sent did much what I’d done and then went around the back. He pushed on the
backplane of the machine and Lo, it started working. He then removed the pressure and it failed quite
immediately. Turns out the backplane had a broken trace in it. We had done no board swaps in many
months and the room had had no A/C faults of any kind.
The company got a new backplane and had it installed in 2 days. Being 3rd party we couldn’t get it
replaced any quicker. After that it worked like a champ.
Celerity eventually became part of Sun as Sun Supercomputer.
David
OK, I'll kick it off.
A beauty in V6 (and possibly V7) was discovered by the kiddies in Elec
Eng; by sending a signal with an appropriately-crafted negative value (as
determined from inspecting <user.h>) you could overwrite u.u_uid with
zero... Needless to say I scrambled to fix that one on my 11/40 network!
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
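The mechanics of that exploit: u_uid lives shortly before the u_signal array in the user struct, so an unchecked negative signal number indexes backwards into it. The layout trick can be shown with a mock struct (illustrative only; in the real V6 user.h u_uid is a char with other fields in between, so the working offset was different):

```c
#include <stddef.h>

#define NSIG 20

/* Mock of the relevant V6 user-struct layout (not the real one):
 * u_uid sits before the u_signal array, so u_signal[-k] for the
 * right negative k lands exactly on u_uid. */
struct mock_user {
    int u_uid;
    int u_signal[NSIG];
};

/* Distance, in array slots, from u_signal[0] back to u_uid; a signal
 * number of minus this value would overwrite the uid in this layout. */
ptrdiff_t slots_back_to_uid(void) {
    return (ptrdiff_t)((offsetof(struct mock_user, u_signal)
                      - offsetof(struct mock_user, u_uid)) / sizeof(int));
}
```

The fix, of course, is simply to reject signal numbers outside 1..NSIG before using them as an index.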
> From: Dave Horsfall
> Err, isn't that the sticky bit, not the setuid bit?
Oh, right you are. I just looked in the code for ptrace(), and assumed that
was it.
The fix is _actually_ in sys1$exec() (in V6) and sys1$getxfile() (in PWB1 and
the MIT system):
/*
* set SUID/SGID protections, if no tracing
*/
if ((u.u_procp->p_flag&STRC)==0) {
if(ip->i_mode&ISUID)
if(u.u_uid != 0) {
u.u_uid = ip->i_uid;
u.u_procp->p_uid = ip->i_uid;
}
The thing is, this code is identical in V6, PWB1, and MIT system!?
So now I'm wondering - was this really the bug? Or was there some
bug in ptrace I don't see, which was the actual bug that's being
discussed here.
Because it sure looks like this would prevent the exploitation that I
described (start an SUID program under the debugger, then patch the code).
Or perhaps somehow this fix was broken by some other feature, and that
introduced the exploit?
Noel
> From: "Steve Johnson"
> a DEC repairperson showed up to do "preventive maintenance" and managed
> to clobber the nascent file system.
> Turns out DEC didn't have any permanent file systems on machines that
> small...
A related story (possibly a different version of this one) which I read (can't
remember where, now) was that he trashed the contents of the RS04 fixed-head
hard disk, because on DEC OS's, those were only used for swapping.
Noel
Some interesting comments:
"You all are missing the point as to what the cost of passing
arrays by value or what other languages do"
I don't think so. To me the issues is that the model of what it
means to compute has changed since the punch-card days. When you
submitted a card deck in the early days, you had to include both the
function definition and the data--the function was compiled, the data
was read, and, for the most part there were no significant side
effects (just a printout, and maybe some stuff on mag tape).
This was a model that had served mathematics well for centuries, and
it was very easy to understand. Functional programming people still
like it a lot...
However, with the introduction of permanent file systems, a new
paradigm came into being. Now, interactions with the computer looked
more like database transactions: Load your program, change a few
lines, put it back, and then call 'make'. Trying to describe this
with a purely functional model leads to absurdities like:
file_system = edit( file_system, file_selector,
editing_commands );
In fact, the editing commands can change files, create new ones, and
even delete files. There is no reasonable way to handle any
realistic file systems with this model (let alone the Internet!)
In C's early days, we were just getting into the new world. Call by
value for arrays would have been expensive or impossible on the
machine with just a few kilobytes of memory for program + data. So
we didn't do it.
Structures were initially handled like arrays, but the compiler chose
to make a local copy when passed a structure pointer. This copy was,
at one time, in static memory, which caused some problems. Later, it
went on the stack. It wasn't much used...
This changed when the Blit terminal project was in place. It was
just too attractive on a 68000 to write
struct pt { int x; int y; };	/* when int was 16 bits */
and I made PCC pass small structures like this in registers, like
other arguments. I seem to remember a dramatic speedup (2X or so)
from doing this...
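To make that concrete: a two-field point like the one above fits in a register pair, so a function taking and returning it by value need never touch memory for its arguments. This is a hypothetical reconstruction, not Blit source (16-bit fields stand in for the 16-bit int of the day):

```c
/* A point small enough to travel in registers once the compiler
 * learned to pass small structures by value that way. */
struct pt { short x, y; };

/* Midpoint of two points, everything by value: with register passing
 * there is no pointer chasing and no temporary in memory. */
struct pt midpoint(struct pt a, struct pt b) {
    struct pt m;
    m.x = (short)((a.x + b.x) / 2);
    m.y = (short)((a.y + b.y) / 2);
    return m;
}
```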
"(did) Dennis / Brian/ Ken regret this design choice?
Not that I recall. Of course, we all had bugs in this area. But I
think the lack of subscript range checking was a more serious problem
than using pointers in the first place. And, indeed, for a few of
the pioneers, BCPL had done exactly the same thing.
Steve
Bjarne agrees with you. He put the * (and the &) with the type name to emphasize it is part of the type.
This works fine as long as you only use one declaration per statement.
The problem with that is that * doesn't really bind to the type name. It binds to the variable.
char* cp1, cp2; // cp1 is pointer to char, cp2 is just a char.
I always found it confusing that the * is used to indicate a pointer here, whereas when you want to turn an lvalue into a pointer, you use &.
But if we're going to gripe about the evolution of C. My biggest gripe is when they fixed structs to be real types, they didn't also do so for arrays.
Arrays and their degeneration to pointers is one of the biggest annoyances in C.
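The gripe is concrete: an array parameter silently becomes a pointer, so the size information is gone at the call boundary. The standard illustration:

```c
#include <stddef.h>

/* Despite the [10], the parameter is really just an int *; the array
 * has "degenerated" to a pointer at the call boundary, so sizeof
 * yields the size of a pointer, not of ten ints. */
size_t param_size(int a[10]) {
    return sizeof a;              /* == sizeof(int *) */
}

/* In the defining scope the array is still an array. */
size_t array_size(void) {
    int a[10];
    return sizeof a;              /* == 10 * sizeof(int) */
}
```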
> Am I the only one here who thinks that e.g. a char pointer should be
> "char* cp1, cp2" instead of "char *cp1, *cp2"? I.e. the fundamental type is "char*", not "char", and to this day I still write:
> Fortran, for the record, passes nearly everything by reference
Sort of. The Fortran 77 standard imposes restrictions that appear to
be intended to allow the implementation to pass by value-and-result
(i.e. values are copied in, and copied back at return). In particular
it disallows aliasing that would allow you to distinguish between
the two methods:
If a subprogram reference causes a dummy argument in the referenced
subprogram to become associated with another dummy argument in the
referenced subprogram, neither dummy argument may become defined
during execution of that subprogram.
http://www.fortran.com/F77_std/rjcnf-15.html#sh-15.9.3.6
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
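The reason the standard bothers to forbid that aliasing: pass-by-reference and copy-in/copy-out disagree exactly when two dummy arguments name the same storage, as a small C model shows (function names are mine):

```c
/* Model of the two Fortran argument-passing schemes. They agree
 * unless the dummy arguments alias, which is what F77 forbids. */

/* Pass by reference: if aliased, both dummies touch the same storage. */
void bump_by_ref(int *a, int *b) {
    *a += 1;
    *b += 1;
}

/* Value-and-result: copy in, work on the copies, copy back on return.
 * If aliased, b's copy-back simply overwrites a's. */
void bump_val_result(int *a, int *b) {
    int ca = *a, cb = *b;
    ca += 1;
    cb += 1;
    *a = ca;
    *b = cb;
}
```

Called with the same variable for both arguments, the first gives x+2 and the second x+1, so a program that does this could tell the implementations apart - hence the restriction.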
> From: Random832
> Ah. There's the other piece. You start the SUID program under the
> debugger, and ... it simply starts it non-suid. *However*, in the
> presence of shared text ... you can make changes to the text image
> ... which will be reused the *next* time it is started *without* the
> debugger.
So I actually tried to do this (on a V6 system running on an emulator), after
whipping up a tiny test program (which prints "1", and the real and current
UIDs): the plan was to patch it to print a different number.
However, after a variety of stubbed toes and hiccups (gory details below, if
anyone cares, including a semi-interesting issue with the debugger and pure
texts), I'm punting: when trying to set a breakpoint in a pure text, I get the
error message "Can't set breakpoint", which sort of correlates with the
comment in the V6 sig$ptrace(): "write user I (for now, always an error)".
So it's not at all clear that the technique we thought would work would, in
fact, work - unless people weren't using a stock V6 system, but rather one
that had been tweaked to e.g. allow use of debuggers on pure-text programs
(including split I+D).
It's interesting to speculate on what the 'right' fix would be, if somehow the
technique above did work. The 'simple' fix, on systems with a PWB1-like XWRIT
flag, would be to ignore SETUID bits when doing an exec() of a pure text that
had been modified. But probably 'the' right fix would be to give someone
debugging a pure-text program their own private copy of the text. (This would
also prevent people who try to run the program from hitting breakpoints while
it's being debugged. :-)
But anyway, it's clear that back when, when I thought I'd found the bug, I
clearly hadn't - which is why when I looked into the source, it looked like it
had been 'already' been fixed. (And why Jim G hemmed and hawed...)
But I'm kind of curious about that mod in PWB1 that writes a modified pure
text back to the swap area when the last process using it exits. What was the
thinking behind that? What's the value to allowing someone to patch the
in-core pure text, and then save those patches? And there's also the 'other
people who try and run a program being debugged are going to hit breakpoints'
issue, if you do allow writing into pure texts...
Noel
--------
For the gory details: to start with, attempting to run a pure-text program
(whether SUID or not) under the debugger produced a "Can't execute
{program-name} Process terminated." error message.
'cdb' is printing this error message just after the call to exec() (if that
fails, and returns). I modified it to print the error number when that
happens, and it's ETXTBSY. I had a quick look at the V6 source, to see if I
could see what the problem is, and it seems to be (in sys1$exec()):
    if(u.u_arg[1]!=0 && (ip->i_flag&ITEXT)==0 && ip->i_count!=1) {
            u.u_error = ETXTBSY;
            goto bad;
    }
What that code does is a little obscure; I'm not sure I understand it. The
first term checks to see if the size of the text segment is non-zero (which it
is not, in both 0407 and 0410 files). The second is, I think, looking to see
if the inode is marked as being in use for a pure text (which it isn't, until
later in exec()). The third checks to make sure nobody else is using the file.
So I guess this prevents exec() of a file which is already open, and not for a
pure text. (Why this is the Right Thing is not instantly clear to me...)
Anyway, the reason this fails under 'cdb' is that the debugger already has it
open (to be able to read the code). So I munged the debugger to close it
before doing the exec(), and then the error went away.
Then I ran into a long series of issues, the details of which are not at all
interesting, connected with the fact that the version of 'cdb' I was using
(one I got off a Tim Shoppa modified V6 disk) doesn't correspond to either of
the sources I have for 'cdb'.
When I switched to the latest source (so I could fix the issue above), it had
some bug where it wouldn't work unless there was a 'core' file. But eventually
I kludged it enough to get the 'can't set breakpoints' message, at which point
I threw in the towel.
> From: Clem Cole
> it was originally written for the 6th edition FS (which I
> hope I still have the sources in my files) ...
> I believe Noel recovered a copy in his files recently.
Well, I have _something_. It's called 'fcheck', not 'fsck', but it looks like
what we're talking about - maybe it was originally named, or renamed, to be in
the same series as {d,i,n}check? But it does have the upper-case error
messages... :-) Anyway, here it is:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/fcheck.c
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man8/fcheck.8
Interestingly, the man page for it makes reference to a 'check' command, which
I didn't recall at all; here it is:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/check.c
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man8/check.8
for those who are interested.
> Noel has pointed out that MIT had it in the late 1970s also, probably
> brought back from BTL by one of their summer students.
I think most of the Unix stuff we got from Bell (e.g. the OS, which is clearly
PWB1, not V6) came from someone who was in a Scout unit there in high school,
of all bizarre connections! ISTR this came the same way, but maybe I'm wrong.
It definitely arrived later than the OS - we'd be using icheck/dcheck for
quite a while before it arrived - so maybe it was another channel?
The only thing that, for sure (that I recall), didn't come this way was
Emacs. Since the author had been a grad student in our group at MIT, I think
you all can guess how we got that!
Noel
> Are there languages that copy arrays in function calls defaultly?
> Pascal is an example.
Pascal's var convention, where the distinction between value
and reference is made once and for all for each argument of
each function, is sound. The flexibility of PL/I, where the
distinction is made at every call (parenthesize the name to
pass an array by value) is finicky, though utterly general.
> Where is all that [memory] going to come from if you pass a
> large array on a memory-constrained system of specs common back in the
> days when C was designed
Amusingly, under the customary linkage method in the even earlier
days when Fortran was designed, pass-by-reference entailed a big
overhead that could easily dominate pass-by-value for small arrays.
[In the beginning, when CPUs had only one register, subroutine
preambles plugged the reference into every mention of that variable
throughout the body of the subroutine. This convention persisted
in Fortran, which was designed for a machine with three index
registers. Since reference variables were sometimes necessary
(think of swap(a,b) for example) they were made standard.]
Doug
> From: Random832
> It seems to me that this check is central to being able to (or not)
> modify the in-core image of any process at all other than the one being
> traced (say, by attaching to a SUID program that has already dropped
> privileges, and making changes that will affect the next time it is
> run).
Right, good catch: if you have a program that was _both_ sticky and SUID, when
the system is idle (so the text copy in the swap area won't get recycled),
call up a copy under the debugger, patch it, exit (leaving the patched copy),
and then re-run it without the debugger.
I'd have to check the handling of patched sticky pure texts - to see if they
are retained or not.
{Checks code.}
Well, the code to do with pure texts is _very_ different between V6 and
PWB1.
The exact approach above might not work in V6, because the modified (in-core)
copy of a pure text is simply deleted when the last user exits it. But it
might be possible for a slight variant to work; leave the copy under the
debugger (which will prevent the in-core copy from being discarded), and then
run it again without the debugger. That might do it.
Under PWB1, I'm not sure if any variant would work (very complicated, and I'm
fading). There's an extra flag bit, XWRIT, which is set when a pure text is
written into; when the last user stops using the in-core pure text, the
modified text is written to swap. (It looks like the in-core copy is always
discarded when the last user stops using it.) But the check for sticky would
probably stop a sticky pure-text being modified? But the approach that
seems like it would work under V6 (leave the patched, debugged copy running,
and start a new instance) looks like it should work here too.
So maybe the sticky thing is irrelevant? On both V6 and PWB1, it just needs a
pure text which is SETUID: start under the debugger, patch, leave running, and
start a _new_ copy, which will run the patched version as the SUID user.
Noel
> From: Clem Cole
> I said -- profil - I intended to say ptrace(2)
Is that the one where running an SUID program under the debugger allowed one
to patch the in-core image of said program?
If so, I have a story, and a puzzle, about that.
A couple of us, including Jim Gettys (later of X-windows fame) were on our way
out to dinner one evening (I don't recall when, alas, but I didn't meet him
until '80 or so), and he mentioned this horrible Unix security bug that had
just been found. All he would tell me about it (IIRC) was that it involved
ptrace.
So, over dinner (without the source) I figured out what it had to be:
patching SUID programs. So I asked him if that was what it was, and I don't
recall his exact answer, but I vaguely recall he hemmed and hawed in a way
that let me know I'd worked it out.
So when we got back from dinner, I looked at the source to our system to see
if I was right, and.... it had already been fixed! Here's the code:
    if (xp->x_count!=1 || xp->x_iptr->i_mode&ISVTX)
            goto error;
Now, we'd been running that system since '77 (when I joined CSR), without any
changes to that part of the OS, so I'm pretty sure this fix pre-dates your
story?
So when I saw your email about this, I wondered 'did that bug get fixed at
MIT when some undergrad used it to break in' (I _think_ ca. '77 is when they
switched the -11/45 used for the undergrad CS programming course from an OS
called Delphi to Unix), or did it come with PWB1? (Like I said, that system
was mostly PWB1.)
So I just looked in the PWB1 sources, and... there it is, the _exact_ same
fix. So we must have got it from PWB1.
So now the question is: did the PWB guys find and fix this, and forget to
tell the research guys? Or did they tell them, and the research guys blew
them off? Or what?
Noel
> We all took the code back and promised to get patches out ASAP and not tell any one about it.
Fascinating. Changes were installed frequently in the Unix lab, mostly
at night without fanfare. But an actual zero-day should have been big
enough news for me to have heard about. I'm pretty sure I didn't; Dennis
evidently kept his counsel.
Doug
> From: "Ron Natalie"
> Ordered writes go back to the original BSD fast file system, no? I seem
> to recall that when we switched from our V6/V7 disks, the filesystem got
> a lot more stable in crashes.
I had a vague memory of reading about that, so I looked in the canonical FFS
paper (McKusick et al, "A Fast File System for UNIX" [1984]) but found no
mention of it.
I did find a paper about 'fsck' (McKusick, Kowalski, "Fsck: The UNIX File
System Check Program") which talks (in Section 2.5. "Updates to the file
system") about how "problem[s] with asynchronous inode updates can be avoided
by doing all inode deallocations synchronously", but it's not clear if they're
talking about something that was actually done, or just saying
(hypothetically) that that's how one would fix it.
Is it possible that the changes to the file system (e.g. the way free blocks
were kept) made it more crash-proof?
Noel
> The problem with that is that * doesn't really bind to the type name.
> It binds to the variable.
>
> char* cp1, cp2; // cp1 is pointer to char, cp2 is just a char.
>
> I always found it confusing that the * is used to indicate a pointer
> here, where as when you want to change an lvalue to a pointer, you use
> &.
The way to read it is that you are declaring *cp1 as a char.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> From: Warner Losh
> There's a dcheck.c in the TUHS v7 sources. How's that related?
That was one of the earlier tools - not sure how far back it goes, but it's
in V6, though not in V5. It consistency-checks the directory tree. Another
tool, 'icheck', consistency-checks file blocks and the free list.
Noel
I've made available on GitHub a series of tables showing the evolution
of Unix facilities (as documented in the man pages) over the system's
lifetime [1] and two diagrams where I attempted to draw the
corresponding architecture [2]. I've also documented the process in a
short blog post [3]. I'd welcome any suggestions for corrections and
improvements you may have, particularly for the architecture diagrams.
[1] https://dspinellis.github.io/unix-history-man/
[2] https://dspinellis.github.io/unix-architecture/
[3] https://www.spinellis.gr/blog/20170510/
Cheers,
Diomidis
On 6 May 2017 at 11:23, ron minnich <rminnich(a)gmail.com> wrote (in part):
[...]
> Lest you think things are better now, Linux uses self modifying code to
> optimize certain critical operations, and at one talk I heard the speaker
> say that he'd like to put more self modifying code into Linux, "because it's
> fun". Oh boy.
Fun, indeed! Even self-modifying chips are being touted -- Yikes!
N.
> tr -cs A-Za-z '\n' |
> tr A-Z a-z |
> sort |
> uniq -c |
> sort -rn |
> sed ${1}q
>
> This is real genius.
Not genius. Experience. In the Bentley/Knuth/McIlroy paper I said,
"[Old] Unix hands know instinctively how to solve this one in a jiffy."
While that is certainly true, the script was informed by my having
written "spell", which itself was an elaboration of a model
pioneered by Steve Johnson. By 1986, when BKM was published,
the lore was baked in: word-processing scripts in a similar
vein were stock in trade.
A very early exercise of this sort was Dennis Ritchie's
enumeration of anagrams in the unabridged Merriam-Webster.
Since the word list barely fit on the tiny disk of the time,
the job entailed unimaginable marshalling of resources. I
was mightily impressed then, and still am.
Doug
On Thu, May 4, 2017 at 7:14 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> some of those Berkeley flags (not specifically for cat, but almost
> certainly including those for cat) were really quite useful.
Amen!!! I think that this is a key point. What is in good taste or good
style? Doug's disdain for the results of:
less --help | wc
is grounded in bad design, a poor user interface, little forethought, *etc.*
Most of what is there has been added on top of Eric Shienbrood's
original more(1).
I do not believe most of them are used that often, but some of them are, of
course, and you can not tell who uses what!! So how do you decide what to
get rid of? How do you learn what people really need -- IMO: much of that is
*experience* and this thing we call 'good taste.' As opposed to what I
think happened with less(1) and many other similar programs: programmers
peeing on the code because they could, and the source was available -- "I
can add this feature to it and I think it is cool" -- as opposed to asking
what people really need, and not having a 'master builder' arbitrating or
vetting things.
The problem we have is that we don't yet have a way of defining good taste.
One might suggest that it takes years of civilization and also that
tastes do change over time. Pike's minimalist view (which I think is taken
to the extreme in the joke about the automobile dashboard on Brian
Kernighan's car) sets the bar at one end, and is probably one of the reasons
why UNIX had a reputation, certainly among non-computer scientists, as being
difficult when it first appeared. Another extreme is systems that put you in
a box and never let you do anything but what they tell you to do, which I
find just as frightening, and frustration builds when I use them. Clearly,
systems that are so noisy that you can not find what you really want or
need are another dimension of the same bad design. So what to do? [more in
a minute...]
Larry is right. Many of the 'features' added to UNIX (and Linux) over time
have been and *are useful*. Small and simple as it was (and I really
admire Ken, Dennis and the Team for its creation), but in 2017 I really
don't want to run the Sixth Edition for my day to day work anymore - which
I happily did in 1977. But the problem is, as we got 'unlimited address
space' of 32 and 64 bits, and more room for more 'useful' things, we also
got a great deal of rubbish and waste. My reading of Doug's point
about less --help | wc is that there are so many thorns and weeds, it is
hard to see the flowers in the garden.
I'd like to observe that most college courses in CS I have seen talk
about how to construct programs, algorithms, structures - i.e. the
mechanics of some operation. But this discussion is about the human
element: what we feel is good or bad, and how that relates to how we use
the program.
I think about my friends that have degrees in literature, art and
architecture. In all cases, they spend a lot of time examining past
examples, of good and bad - and thinking and discussing what makes them so.
I'm actually happy to see that Professor McIlroy is one of the folks
taking a stand against the current craziness. I think this is only going to
get better with a new crop of students that have been trained in 'good
taste.' So, I wonder: do any of the schools, like Dartmouth and the like,
teach courses that study 'style' and taste in CS? Yes, it is a young
field, but we have been around long enough that we do have a body of work,
good and bad, to consider.
I think there is a book or two and a few lectures in there somewhere.
Thoughts?
Clem
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Discuss of style and design of computer programs
> from a user stand point
> Message-ID: <20170506091857.GE12539(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I would actually take that one step further: When you are writing
> code, you are _first and foremost_ communicating with whatever human
> will need to read or modify the code later. That human might be you, a
> colleague, or the violent psychopath who knows both where you live and
> where your little kids go to school (might as well be you). You should
> strive to write the code accordingly, _even if_ the odds of the threat
> ever materializing are slim at most. Style matters a lot, there.
>
Interesting, I was going to say about the same thing about the violent psychopath
who has to maintain your code after you leave. When I lectured at UCSD or was
giving talks on style for ViaSat I always said the same thing:
Whatever you write, the fellow who is going to wind up maintaining it is a known
axe killer, now released from prison, completely reformed. He learned computer
programming on MS/DOS 3.1 and a slightly broken version of Pascal. He will be
given your home phone number and address so if he has any questions about the
code you wrote he can get in contact with you.
This always got a few chuckles. I then pointed out that whenever anyone gets code
that someone else wrote, the recipient always thinks that they can ‘clean up’ what
is there because the original author clearly doesn’t understand what proper code
looks like.
Over time, I've learned that everyone has a style when writing code, just like
handwriting, and given enough time I can spot the author of a block of code just
from the indenting, the placement of ( and ) around a statement, and other small traits.
What makes good code is the ability to convey the meaning of the algorithm
from the original author to all those who come after. Sometimes even the most
unusual code can be quite clear, while the most cleanly formatted and commented
code can be opaque to all.
David
On 3 May 2017 at 09:09, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Not to mention, you can cat multiple files - as in concatenate :)
Along these lines, who said "Cat went to Berkeley, came back waving flags"?
N.
> I believe I was the last person to modify the Linux man page macros.
> Their current maintainer is not the kind of groff expert to whom it
> would occur to modify them; it would work as well to ask me questions
Question #1. Which tmac file do they use? If it's not in the groff
package, where can it be found?
Doug
OK, I recall a note dmr wrote probably in the late 70s/early 80s when folks
at UCB had (iirc) extended the symbol name size in C programs to
essentially unlimited. This followed on (iirc) file names going beyond 14
characters.
The rough outline was that dmr was calling out the revisions for being too
general, and the phrase "BSD sins" sticks in my head (sins as a verb).
I'm reminded of this by something that happened with some interns recently,
as they wanted to make something immensely complex to cover a case that
basically never happened. I was trying to point out that you can go
overboard on that sort of thing, and it would have been nice to have such a
quote handy -- anyone else remember it?
ron
There you go:
http://harmful.cat-v.org/cat-v/
Em 2 de mai de 2017 17:29, "Diomidis Spinellis" <dds(a)aueb.gr> escreveu:
On 02/05/2017 19:11, Steve Johnson wrote:
> I recall a paper Dennis wrote (maybe more like a note) that was titled
> echo -c considered harmful
> (I think it was -c). It decried the tendency, now completely out of
> control, for everybody and their dog to piddle on perfectly good code
> just because it's "open".
>
There's definitely Rob Pike's talk "UNIX Style, or cat -v Considered
Harmful", which he delivered at the 1983 Usenix Association Conference and
Software Tools Users Group Summer Conference. Unfortunately, I can't find
it online. It's interesting that the talk's date is now closer to the
birth of Unix than to the present.
Diomidis
I'm this close to figuring out how to get netbsd to work on fs-uae with
no prior Amiga experience. Searching around the English Amiga Users'
board for clues, I found a guide on downloading and installing Amix,
complete with Amix download links. Haven't tried it myself - I'm still
working on my bsd tangent. But for anyone interested:
http://eab.abime.net/showthread.php?t=86480
> From: Josh Good
> Would the command "cd /tmp ; rm -rf .*" be able to kill a V6 ... system?
Looking at the vanilla 'rm' source for V6, it cannot/does not delete
directories; one has to use the special 'rmdir' command for that. But,
somewhat to my surprise, it does support both the '-r' and '-f' flags, which I
thought were later. (Although not as 'stacked' flags, so you'd have to say
'rm -r -f'.)
So, assuming one did that, _and_ (important caveat!) _performed that command
as root_, it probably would empty out the entire directory tree. (I checked,
and "cd /tmp ; echo .*" evaluates to ". .." on V6.
Noel
The JHU version of the V6 kernel and the mount program were modified (or
should I say buggered) so that unprivileged users could mount user packs.
There were certain restrictions added as well: no setuid on mounted
volumes etc.
The problem came up that people would mount them using relative paths and
the mtab wouldn't really show who was using the disk as a result. I
suggested we just further bugger it by making the program chdir to '/dev'
first. That way you wouldn't have to put /dev/ on the drive device and
you'd have to give an absolute path for the mount point (or at least one
relative to /dev). I pointed out to my coworker that there was nothing in
/dev/ to mount on. He started trying it. Well the kernel issued errors
for trying to use a special file as a mount point. He then tried "."
Due to a combination of bugs that worked!
The only problem, is how do you unmount it? The /dev nodes had been
replaced by the root of directory of my user pack. Oh well, go halt and
reboot.
There were supposed to be protections against this. Mind you I did not
have root access at this point (just a lowly student operator), so we
decided to see where else we could mount. Sure enough cd /etc/ and mount
on "." there. We made up our own password file. It had one account with
uid 0 and the name "Game Player" in the gcos field. About this one of the
system managers calls and tells us to halt the machine as it'd had been
hacked. I told him we were responsible and we'd undo what we did.
I think by this time Mike Muuss came out and gave me the "mount" source and
told me to fix it.
Tim Newsham:
I'm not sure what fd 3 is intended to be, but it's the telnet socket in p9p.
====
By the 10/e days, file descriptor 3 was /dev/tty. There was
no more magic driver for /dev/tty; the special file still
existed, but it was a link to /dev/fd/3.
Similarly /dev/stdin stdout stderr were links to /dev/fd/0 1 2.
(I mean real links, not mere symbolic ones.)
I have a vague recollection that early on /dev/tty was fd/127
instead, but that changed somewhere in the middle 8/e era.
None of which says what Plan 9 did with that file descriptor,
though I suppose it could possibly have copied the /dev/tty
use.
And none of that excuses the hard-coded magic number file
descriptor, but hackers will be hackers.
Norman Wilson
Toronto ON
Here are my notes to run 8th Edition Research Unix on SIMH.
http://9legacy.org/9legacy/doc/simh/v8
These notes are quite raw and unpolished, but should be
sufficient to get Unix running on SIMH.
Feel free to use, improve and share.
--
David du Colombier
Many years ago I was at Burroughs and they wanted to do Unix (4.1c) on a new machine. Fine. We all started on the project porting from a Vax. So far so good. Then a new PM came in and said that intel was the future and we needed to use their machines for the host of the port. And an intel rep brought in their little x86 box running some version of Unix (Xenix?, I didn’t go anywhere near the thing). My boss, who was running the Unix port project did the following:
Every Friday evening he would log into the intel box as root and run “/bin/rm -rf /“ from the console. Then turn off the console and walk away.
Monday morning found the box dead and the intel rep would be called to come and ‘fix’ his box.
This went on for about 4 weeks, and finally my boss asked the intel rep what was wrong with his machine.
The rep replied that this was ‘normal’ for the hardware/software and we would just have to “get used to it”.
The PM removed the intel box a couple of days later.
David
> On Apr 25, 2017, at 7:19 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Larry McVoy <lm(a)mcvoy.com>
> To: Clem Cole <clemc(a)ccc.com>
> Cc: Larry McVoy <lm(a)mcvoy.com>, TUHS main list <tuhs(a)minnie.tuhs.org>
> Subject: Re: [TUHS] was turmoil, moving to rm -rf /
> Message-ID: <20170425140853.GD24499(a)mcvoy.com>
> Content-Type: text/plain; charset=us-ascii
>
> Whoever was the genuis that put mknod in /etc has my gratitude.
> We had other working Masscomp boxen but after I screwed up that
> badly nobody would let me near them until I fixed mine :)
>
> And you have to share who it was, I admitted I did it, I think
> it's just a thing many people do..... Once :)
I don't know if this is of any interest to anyone here, but 1999 is 18 years
ago, so maybe it counts as old?
Over on nextcomputers.org various users had found a backup of next68k.org
which included a wget of the old source
http://nextftp.onionmixer.net/next.68k.org/otto/html/pub/Darwin/PublicSource/Darwin/index.html
So I found a copy of Rhapsody DR-2, the last binary version of this Mach
2.5+4.4BSD and after a day got a kernel to build. Another day and I had it
interfacing to the driverkit to load drivers.
After a post on reddit someone gave me a link to some kdx p2p network, where
they had a Darwin 0.3 toast image.
Using what I learned with Darwin 0.1, I got the 0.3 to build as well.
I uploaded a bunch of stuff here:
https://sourceforge.net/projects/aapl-darwin/
although it seems to not let me upload the toast images themselves.
I did slam together a minimal Darwin 0.3 qemu image that can sort-of boot to
single user mode. It's not even slightly useful, but it does show that it
works.
https://sourceforge.net/projects/aapl-darwin/files/qemu-images/Darwin03_qemu090_24_4_2017.7z/download
> From: Kurt H Maier
> /etc/glob, which appears to report no-match if the first character is .
I couldn't be bothered to work out exactly how 'glob' worked, so I just did
an experiment: I created a hacked version of 'rm' whose directory-handling
call to glob invoked 'echo' instead of 'rm'; it also printed 'tried' instead
of the actual unlink call, and printed 'cd' when it changed directories.
I then set up two subsidiary directories, foo and .bar, one containing
'.foo0 foo1' and the other '.bar0 bar1'.
Saying 'xrm -r -f .*' produced this:
cd: .
-r -f foo xrm xrm.c
cd: ..
-r -f foo xrm xrm.c
cd: .bar
-r -f bar1
(This system has /tmp on a mounted file system, which is why the 'cd ..' was a
NOP. And a very good thing, too; at one point the phone rang, and it
distracted me, and I automatically typed 'rm', not 'xrm'... see below for what
happened. No biggie, there were only my test files there. The output lines
are "-r -f foo xrm xrm.c" because that's what 'glob' passed to 'echo'.)
Saying 'xrm -r -f *' produced this:
cd: foo
-r -f foo1
xrm: tried
xrm.c: tried
So apparently 'glob', when presented with '*' , ignores entries starting with
'.', but '.*' does not.
When I stupidly typed 'rm -r -f .*', it did more or less what I originally
thought it would: deleted all the files in all the directories (but only on
the /tmp device, because .. linked to itself in /tmp, so it didn't escape
from that volume); leaving all the directories, but empty, except for the
files .foo0 and .bar0. So files and inferior directories with names starting
with '.' would have escaped, but nothing else.
Noel
> From: "Ron Natalie"
> Actually, it's the shell that calls glob.
Yes, in the initial expansion of the command line, but V6 'rm' also uses
'glob' internally; if the '-r' flag is given, and the current name in the
command argument list is a directory, viz.:
    if ((buf->mode & 060000) == 040000) {
            if (rflg) {
                    ...
                    execl("/etc/glob", "glob", "rm", "-r",
                        fflg? "-f": "*", fflg? "*": p, 0);
                    printf("%s: no glob\n", arg);
                    exit();
            }
(where 'p' is 0 - no idea why the writer didn't just say '"*": 0, 0').
So for "rm -r foo*", where the current directory contains files 'foo0 foo1 foo2'
and directory 'foobar', and directory 'foobar' contains 'bar0 bar1 bar2', the
first instance of 'glob' (run by the shell) expands the 'foo0 foo1 foo2 foobar'
and the second instance (run by 'rm') expands the 'bar0 bar1 bar2'.
> Glob then invokes the command (in this case rm).
I don't totally grok 'glob', but it is prepared to exec() either the command
name, /bin/{command} or /usr/bin/{command}; but if that file is not executable
it tries feeding it to the shell, on the assumption it must be a shell command
list:
    execv(file, arg);
    if (errno==ENOEXEC) {
            arg[0] = file;
            *--arg = "/bin/sh";
            execv(*arg, arg);
    }
I guess (too lazy to look) that the execv() must return a different error
number if the file doesn't exist at all, so it only tries the shell if the
file existed, but wasn't executable.
Noel
There was an incident at Pixar that a runaway rm ate most of the Toy Story 2
movie. The only thing that saved them was an employee had their own copy
on a machine at home.
We never lost the whole disk through one of these, but we did have a guy
wipe out /etc/passwd one day. Our password fields had an rfc-822ish user
name in the gcos field, so it looked
something like:
ron::51:50:Ronald Natalie <ron>:/sys1/ron:
Well, one of our users decided to grep for a user (alas while root) with the
command
grep <howard> /etc/passwd
Hello all.
It's off-topic for this list, but there is turmoil in Linux-land. A bug
was discovered in systemd, whereby systemd re-implemented "rm"
functionality without following POSIX "rm" behaviour. This could kill a
system, as explained here: https://github.com/systemd/systemd/issues/5644
The reference POSIX "rm" behaviour is that "rm -rf .*" should NOT delete
the current and parent directories, as stated here:
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html#tag_20_11…
So, to get on-topic, I have a question for UNIX historians: when was it
first defined in the UNIX realm that "rm -r .*" should NOT delete the
current and parent directories? Would the command "cd /tmp ; rm -rf .*"
be able to kill a V6 or V7 UNIX system?
Regards,
--
Josh Good
I have a memory of having seen a Zilog Z-80 (not Z8002 like the Onyx) based Unix, possibly v6, at a vendor show or conference - perhaps the West Coast Computer Faire (WCCF) in the late 1970s or early 1980s.
I recall asking the people in the booth how they managed without an MMU, and don't recall their answer. I do remember thinking that since Unix had "grown up" with MMUs to stomp on obvious pointer mistakes, the software ought to be relatively well-behaved ... you know: not trying to play "core war" with itself?
I searched the TUHS archives cursorily with Google to see if this has been previously mentioned, but pretty much all Z80 CPU references have for its use in "smart" I/O devices back in the day.
Does anyone else remember this Z80 Unix and who did it? Or maybe that it was a clone of some kind ... ?
looking for a little history,
Erik Fair
I was trying to configure C news on 2.9BSD today and I found that its
Bourne shell doesn't grok # comments. The Bourne shell in 2.11BSD does.
So I thought: when did the Bourne (and other) shells first grok # as
indicating a comment? Was this in response to #! being added to the
kernel, or was it the other way around? And was the choice of #!
arbitrary, or was it borrowed from somewhere else?
Datum point: 2.9BSD's kernel can recognise #!, but the sh can't recognise #.
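For concreteness, the two mechanisms the datum point distinguishes can be seen in a small experiment on any modern Unix (whether a given old sh accepts the comment line is precisely the historical question):

```shell
# Line 1 is consumed by the kernel's #! handling to pick the interpreter;
# line 2 is a '#' comment, ignored by any sh that groks comments (a
# pre-comment-aware sh, like 2.9BSD's, would trip over it).
cd "$(mktemp -d)"
cat > hello <<'EOF'
#!/bin/sh
# this line is a comment to a comment-aware sh
echo hello from sh
EOF
chmod +x hello
./hello        # prints: hello from sh
```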
Cheers, Warren
"Shebang". Nice coinage (which I somehow hadn't heard before).
Very much in tune with Bell Labs, where Vic Vyssotsky had instilled
"sharp" as the name of # -- not "number", not "pound", and definitely
not "hash" -- so shell scripts began with sharp-bang.
Doug
Hi,
I've imaged (with ImageDisk) some floppies I've got with my "new" 8560
system.
You can find them at
ftp://ftp.groessler.org/pub/chris/tektronix/8560/diskimages .
Among other things there are cross-assemblers for 68000, 6809, and 6800.
From the TNIX installation disk set, one disk is missing (disk 5 of 5).
I'm looking for the Z8000 cross-assembler for TNIX. Does anyone have it?
regards,
chris
Oldest actual use in a post I can find is 1997.
I did find something I had completely forgotten about: the csh used to (probably still does?) differentiate between Bourne shell scripts and csh scripts by looking for #.
> From: "Erik E. Fair"
> I have a memory of having seen a Zilog Z-80 .... based Unix, possibly v6
> ... I recall asking the people in the booth how they managed without an
> MMU, and don't recall their answer. I do remember thinking that since
> Unix had "grown up" with MMUs to stomp on obvious pointer mistakes, the
> software ought to be relatively well-behaved
I don't know about the Z80 part, but for the MMU aspect, recall that the first
couple of versions of PDP-11 Unix ran on a model (the -11/20) which didn't
have an MMU (although, as mentioned before here, it apparently did later use a
thing called a KS11, the specifications for which seem to be mostly lost).
Although recall the mention of calling out "a.out!", as to the hazards of
doing so...
And of course there was the 'Unix for an LSI-11' (LSX), although I gather that
was somewhat lobotomized, as the OS and application had to fit into 56KB
total.
So it was possible to 'sorta kind-of' do Unix without an MMU.
Noel
> Is this a reason why "#" was chosen as the root prompt, by the way?
No. # was adopted as superuser prompt before the shell had a
comment convention.
Doug
Looks like INN isn't quite as trusting as CNews, so it doesn't
automatically create a group when it receives a control message.
Anyone know how to make it do that, or at least have it notify me of the
new group by email?
john
All,
Does anyone have SunOS 4 documentation?
I am trying to properly configure UUCP and I am unable to find proper
documentation.
--
Cory Smelosky
b4(a)gewt.net
>From M.Douglas.McIlroy(a)dartmouth.edu Mon Apr 17 10:43:01 2017
From: Rory Gawler <rgawler(a)gmail.com>
Date: Mon, 17 Apr 2017 10:42:34 -0400
Subject: Re: Indian Ridge
To: "M. Douglas McIlroy" <M.Douglas.McIlroy(a)dartmouth.edu>
Cc: "trailsbill(a)gmail.com" <trailsbill(a)gmail.com>,
"doug(a)cs.dartmouth.edu" <doug(a)cs.dartmouth.edu>
Fabulous.
Tim McNamara said we should make sure with Vicki Smith that it's not in the
conservation easement area and if so, if that's okay?
On 14 April 2017 at 17:37, M. Douglas McIlroy <
M.Douglas.McIlroy(a)dartmouth.edu> wrote:
> Rory,
>
> A possible tree-felling opportunity exists at the lookout just
> north of the summit. The former view toward Baker Tower has
> been totally obscured by new growth. If Dartmouth is amenable,
> the taller trees might be felled in that patch. There may be
> a dozen trees something like 10" in diameter.
>
> I saw three other trees at scattered intervals along the
> trail, which could be removed, though none is a pressing
> hazard: a dead pine >18" and two leaners ~8".
>
Just finished it. He isn’t the greatest presenter, but it is an interesting overview of how the Bourne shell got written, including some of the quirks it has to this day.
David
> On Apr 15, 2017, at 7:00 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> Subject: [TUHS] Steven Bourne - early days of unix talk
> Message-ID: <201704151756.v3FHuUI4019407(a)freefriends.org>
> Content-Type: text/plain; charset=us-ascii
>
>> From the 9fans list:
>
> https://www.youtube.com/watch?v=2kEJoWfobpA
May I recommend you use the ftp-proxy setup on OpenBSD? It is well-documented here:
https://www.openbsd.org/faq/pf/ftp.html
So far it has solved all passive FTP issues behind NAT for me.
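For the archive's benefit, and from memory rather than from the page itself (so treat the exact syntax as an assumption and check the FAQ and ftp-proxy(8)), the recipe amounts to enabling the ftpproxy daemon and anchoring it into pf:

```
# sketch of /etc/pf.conf additions (verify against the OpenBSD FAQ)
anchor "ftp-proxy/*"
pass in quick inet proto tcp to port ftp divert-to 127.0.0.1 port 8021
```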
Arrigo
Comparing documents produced by Heirloom troff and modern versions of
LaTeX, I just can't see a huge difference. The main thing TeX has going for
it is LyX, which makes composing documents a whole lot more comfortable for
folks raised on WYSIWYG. If a tool like that were available for troff...
Mike
On Apr 14, 2017 6:24 PM, "Toby Thain" <toby(a)telegraphics.com.au> wrote:
On 2017-04-14 9:56 AM, Michael Kerpan wrote:
> Of course, these days, there's a version of troff that borrows TeX's
> layout rules, while also adding vastly improved font handling, support
> for the most useful/widely used groff extensions, and more. Why Heirloom
> troff isn't more widely used is a puzzle for the ages.
>
No matter how far you tart up the former, Troff and TeX just aren't playing
the same ballgame.
--T
> Mike
>
Another one I would be interested to know more of.
Whilst at college I used an Interdata 3210 running Edition 7, which was Version 7 with bits of 2.1BSD (very much from memory).
There was an editor on that machine I have never seen or heard of since: le. It was a visual editor, and I think it supported multiple windows, in the termcap style.
Anyone know more?
-Steve
While we are on the topic of old Unix editors,
I once made Caltech qed build again:
https://github.com/chneukirchen/qed-caltech
Also, I've been trying to contact David Tilbrook, who maintains(-ed?)
his own version of qed, without success. I got an evaluation copy of
his QEF build system, which contains a bit of documentation about it,
but no binary.
Perhaps someone here can help out, or knows more?
--
Christian Neukirchen <chneukirchen(a)gmail.com> http://chneukirchen.org
In the autumn of 1984, as an undergrad at Durham University (UK), I
remember using a Pascal compiler (pc) on a 4.1BSD system (bumped after
several months to 4.1c, and running, I would guess, on a small VAX) and
using a strange line editor (probably because the terminal had crude
screen-handling capabilities).
I can't remember much about it other than it seemed to resemble ex. I
think I was told it was written in the UK and doing some Googling
suggests it may have been "em" (Editor for Mortals) from Queen Mary
College.
However, the time frame for that editor was late 70s and it would have
been quite old by 1984. So my current theory is that it was a fork
(maybe with a different name) or later version?
Anyone use this editor or anything similar around 1984?
--
4096R/EA75174B Steve Mynott <steve.mynott(a)gmail.com>
Apologies if this is already on the list somewhere.
What's the best way to transfer files in and out of the simh 4.3BSD Wisc
version? I can do it with tape files, but it seems like FTP or ssh or
NFS ought to be possible, and none is behaving at first blush.
Also, what's the recommended way to shut down the system? I shutdown
now to single user, then sync a few times, then ^E, but when I boot
again I get fsck errors serious enough to require a manual fsck (which
generally works fine).
Thanks,
Mary Ann
All, in the 25 years of running this list, generally things have gone
well and I've not had to make many unilateral decisions. But today I
have chosen to unsubscribe Joerg Schilling from the list.
I'm sending this e-mail in so that there is a level of transparency here.
I've sent Joerg an e-mail outlining my reasons.
Cheers, Warren
The Internet (spelled with a capital "I", please, as it is a proper noun)
was born in 1969, when RFC-1 got published; it described the IMP and
ARPAnet.
As I said at a club lecture once, there are many internets, but only one
Internet.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Mary Ann Horton
> What's the best way to transfer files in and out of the simh 4.3BSD Wisc
> version? I can do it with tape files, but it seems like FTP or ssh or
> NFS ought to be possible, and none is behaving at first blush.
Someone should add the equivalent of Ersatz-11's 'DOS' device to SIMH; it's a
pseudo-device that can read files on the host filesystem. (Other stuff too,
but that's the relevant one here.) A short device driver in the emulated OS,
and a program to talk to it, and voila, getting a file into the emulated
system is a short one line command, none of this hassle with putting the bits
on a virtual tape, etc, etc.
I found editing files with 'ed' on my simulated V6 system painful (although I
still have the mental microcode to do it), so I did my editing under Windows
(Epsilon), and then read the file down to the Unix to compile it. Initially I
was doing it by putting the file on a raw virtual pack, and doing something
similar to that tape kludge. Then I got smart, and whipped up a driver for the
DOS device in Ersatz-11, and a program that used it, to allow me to easily
read a file from the Windows filesystem down to the Unix. Going around the
compile-debug-edit loop is totally painless now.
Noel
In the 1980s an important resource for Sun users was the sun-spots
mailing list. I can't find an archive of it, though some digests were
posted to comp.sys.sun and are accessible (with some difficulty)
through Google Groups.
Does anyone know of a complete archive?
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> That's interesting that that sort of thing dates back (at least) to the Labs.
Research couldn't hold a candle to Development on making smooth transitions.
You don't take a telephone switch offline to change a file format or the like.
The development cycle used to be about three years: one year for design,
one for implementation, and one to build a hybrid to bridge the transition.
At 2AM on Sunday, you'd install the hybrid on one of the dual cross-checked
processors at a time, so the switch was never interrupted. Later you'd
dispense with the hybrid the same way.
Doug
> While going through papers recently we found what I am reasonably sure
> was the quote for the first Sun sold in Scotland, which might be of some
> interest (inevitably I now don't know where it is, although we did not
> throw it away). We're not sure whether it is for that machine, but we
> are sure that my wife (who isn't on the list) ran it (a 2/120 we think).
> It started out with SunOS 1 (or perhaps before).
In 1984 the Programming Systems Group in Edinburgh's AI department
was contracted by SERC to evaluate workstations for its "Common Base"
program. We had a Sun 2/120, a Whitechapel MG-1, an Apollo Domain
system, and I think we already had at least one PNX Perq.
We recommended Suns. I can't find the report we wrote anywhere
online, but I'm fairly sure I've seen it in the last couple of years.
Presumably we gave the evaluation 2/120 back to Sun and bought the one
mentioned by Tim (it was called "islay" unless I have become confused)
a bit later, in 1985.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> On 7 Apr 2017, at 03:00, tuhs-request(a)minnie.tuhs.org wrote:
>
> That's a good point Josh. I've been trying to find copies of UKUUG and
> EUUG newsletters to add to the archive, along with the AUUG newsletters.
>
> So if you're on this list and outside of the US, now is the time to
> speak up with anecdotes etc. Oh, and if you have anything worth adding
> to the Unix Archive, please let me know
Sheesh! Where to begin....
When I lived in Aus my wife and I were very close friends with John and Marrion. When he passed away, Marrion asked me to clean up his office at UNSW and collect anything of importance. Suffice to say I collected an awful lot of extremely important Unix memorabilia, including copies of his books and his first original copy with hand-written editing, signed by both Ken and Dennis. There's also the original Unix licenses signed off by BWK. There is so much stuff I can't list it all here, but it's boxes (emphasising plural). When I left Aus I brought all this stuff back to the UK for safe keeping. That was 1996. Some time ago, I think at least 15 years past, I was in contact with someone from AUUG (grog may recall) hoping that they would send someone to collect it all, but nothing happened. I also spoke to Armando about all this stuff; he suggested a few things, but even the USENIX group wasn't interested. So here I am with all this important stuff... I would dearly love to hand it off. However, I want some sort of guarantee that it would be housed somewhere safe for posterity and not eventually end up on eBay... if you know what I mean.
As to AUUGN... well, one of the boxes contains just about every copy of the newsletter published from issue 1 through to the 1996 editions.
Warren please email me if you want to discuss further.
Cheers
Berny
Sent from my iPhone
On Wed, Apr 5, 2017 at 9:18 PM, Wesley Parish <wes.parish(a)paradise.net.nz>
wrote:
> The mention of UNOS a realtime "clone" of Unix in a recent thread raises a
> question for me. How many
> Unix clones are there?
>
An interesting question.... I'll take a shot at this in a second; note
there is a Wikipedia page:
https://en.wikipedia.org/wiki/Category:Unix_variants that I don't fully
agree with.
The problem with this question is that the answer really depends on where
you place the boundary on the following continuum:

- non-Unix (e.g. VMS)
- adds Unix ideas (e.g. Domain)
- trying to be Unix (e.g. UNOS)
- might as well be Unix (e.g. Sys V, BSD/386 & Linux)
- the research Unix stream (e.g. Vx & BSD VAX)
Different people value different things. So here is my take on the
"cloned" systems I used or was basically aware of...
Idris was a V6 clone for the PDP-11, which I saw 1978ish. I can say I was
able to recompile code from V6 and it "just worked", so from a user's
standpoint it might as well have been. But the compilers and assemblers
were different, and I never tried anything "hard".
The first attempt to "clone" V7 that I knew about was in France, and
written in Pascal; I think at Ecole Tech in Paris? The name of the
project escapes me, but they presented the work at the 1979/80 winter
USENIX (Blackhole) conference in Denver. There were no proceedings in
those days. I believe it also ran on the PDP-11, but I never ran it, so
I have no idea how easy it was to move things from Seventh Edition. But I
also don't think they were working on binary compatibility, so I think it
landed more toward the center.
The Cruds (Charles River Data Systems) folks (Goldberg) wrote UNOS shortly
thereafter (early 80s). It was definitely not UNIX, although it tried
mostly to be. We had a CRDS box at Masscomp, and before I arrived the plan
had been to use it to get code working before RTU was running. But the
truth was it failed because it was not UNIX. The 68000-vs-Vax issues were
far, far less of an issue than UNOS != UNIX. To Goldberg's credit, he did
have a couple of cool things in it. I believe the only commercial systems
that used Kanodia & Reed's sequencers and eventcounts were UNOS, Apollo
Domain, and Stellar's Stellix (I'm not sure about DG; they might have also
at one point). But these were hidden in the kernel. Also, the driver model
he had was different, so there was no gain writing drivers there.
Mike Malcolm & Dave Cheriton at Waterloo developed Thoth (Thoth - Thucks),
which was written in B, IIRC. It ran on the PDP-11 and was very fast and
light. It was the first "ukernel" UNIX-like/clone system. Moving code
from V7 was pretty simple, and there was an attempt to make it good enough
to make moving things easy, but it was not trying to be UNIX, so it was
somewhere in the middle.
The Tunis folks seem to have been next. This was more toward the non-UNIX
side of the continuum than the UNIX side. I think they did make it run on
the PDP-11, but I'm not so sure how easy it was to move code. If you used
their Concurrent Pascal, I suspect that code moved. But I'm not sure how
easy it was to move raw K&R "White Book" C code.
CMU's Accent (which was a redo of Rochester's RIG) came around the same
time. Like Tunis, the system language was an extended Pascal, and in fact
the target was the triple-drip Perq (aka the Pascalto). The C compiler for
it was late and moving code was difficult, but the UNIX influence was clear.
Apollo's Aegis/Domain really came next, about 82/83-ish. Like Accent it
was written in a hacked-up Pascal, and the commands were in Ratfor/Fortran
(from the SW Tools Users Group). C showed up reasonably early, but the
focus at the start was not on trying to be UNIX. In fact, they were very
successful and were getting ISVs to abandon VMS for them at a very good
clip. UNIX clearly influenced the system, but it was not trying to be
UNIX, although moving code from BSD or V7 could be done fairly easily.
Tanenbaum then did MINIX. Other than 8086-vs-PDP-11-isms, it was a pretty
darned good clone. You could recompile and most things pretty much "just
worked." He did not support ptrace and a few other calls, but as a basic V7
system running on a plain PC, it was remarkably clean. It also had a
large number of languages, and it was a great teaching system, which is
what Andy created it to be. A problem was that UNIX had moved on by the
time Andy released it. So BSD & V8 were now pretty much the definition of
"UNIX"; large address spaces were needed, as were the BSD tools and
extensions, such as vi and csh. Also, UUCP was now very much the thing,
and while MINIX was a pure V7 clone, it was the lack of "tools" that made
it not a good system to "use", and its deficiencies outweighed its value.
Plus, as discussed elsewhere, BSD/386 would appear.
Steve Ward's crew at MIT created TRIX, which was UNIX-like, although
instead of everything being a file, everything was a process. This was
supposed to be the system that rms was originally going to use for GNU,
but I never knew what happened; Noel might. I thought it was a cool
system, although it was a mono-kernel, and around this time most OS
research had gone ukernel-happy.
Coherent was announced, and its provenance was questioned, although as
discussed it was eventually cleared by the AT&T official inquiry, and you
can look at it yourself. It was clearly a V7 clone for the PC and was
more complete than Minix. I also think they supported the 386 fairly
quickly, which may have made it more interesting from a commercial
standpoint. It also had more of the BSD tools available than Minix did
when it was first released.
CMU rewrites Accent to create Mach, but this time splices the BSD kernel
inside of it so that the 4.1BSD binaries "just work." So it's a bit UNIX
and a new system all in one. So which is it? This system would beget
OSF/1 and eventually become Apple's Mac OS. I think it's UNIX, but one
can claim it's not, either....
By this point in time the explosion occurs. You have Lions's book, and
Andy's and Maury Bach's books, on the street. The genie is clearly out of
the bottle, there is a ton of code out there, and the DNA is getting all
mixed up. Doug Comer does Xinu, Cheriton does the V kernel, Thoth is
rewritten to become QNX, and a host of others I have not mentioned. BSD's
CSRG group would break up, BSDi would be created, and their 386 code would
come out. It was clearly "might as well be" if it was not. Soon, Linus
would start with Minix, and the rest is history on the generic line.
Clem
Ah, so someone rattled our cages… I see Alec has already chimed in so I guess it is time for the others to admit culpability.
I’m Italian, born & raised in Milan and first touched Unix in 1978 on my father’s TTY via an acoustic coupler into the “Unix machine” at the University of Milan. It was actually a completely “illegal” venture because the acoustic coupler was not the official one from the Italian monopoly telco, SIP (now Telecom Italia), but one which my dad had imported from the US as he used to work for Honeywell.
Not only that: the Unix machine was another amazing story, because it belonged to the “Cybernetics” group of the Physics Department, as a proper Computer Science department did not yet exist (it would later be born as an offspring of the Physics department as “Scienze dell’Informazione”, Information Sciences, aka dsi.unimi.it, when “the Internet” arrived) and was run out of God knows whose funds (Italian academic funding is particular in that you get handed pots of money under generic titles and then what you do with them is your problem). I distinctly remember being asked, aged 8, to change a disk pack, and causing quite a kerfuffle when I switched the wrong pack. I *think* it was a Vax but cannot remember (age…). At some point I was handed my first Unix book, an Italian translation of a McGraw-Hill book which contained a series of exercises. I ended up following them faithfully, including e-mailing root, who at the time was a lady called Anna whose password was “favola” (fable) and, er, it actually worked when I tested it by copying straight out of the book. I assume that they thought the readership would not have access to the exact machine the book was written on!
At home we eventually landed a string of fancy kit plus access to my father’s GECOS & other Honeywell kit including a Western Digital Pascal uEngine (gorgeous), Apple ][e, etc., but my first “home Unix” was an Onyx C8002, a Z8000-based system with a 40Mb disk and “my” lovely ADM 3a serial terminal on which I learned C on Unix Version 7. That was 1980.
After the Onyx we “upgraded” to a series of disastrous Xenix systems but eventually *the* machine came into our house: a gorgeous Data General Aviion pizza box followed by an even more powerful “radiator” later model. Cue: learn X11 :)
In parallel by then I was also managing a set of Sun workstations (both Sun3 & Sun4) and a Silicon Graphics at the University of Milan for a professor researching “eidomatics”. I had a memorable joust, my first security gig, with the guy running “idea” (name censored as he turned out to be exactly who he was predicted to be in his youth).
Shifted myself to the UK for uni and landed at Imperial College where within a few months I was root on the RS/6000 cluster which had just been purchased and, as they say, the rest is history including running SunOS, installing the first three DEC Alpha workstations in the UK (tera, the server, giga and mega, the “clients”) along with a slew of Ultrix MIPS DECstations which were then upgraded to Alpha via the, then available, “upgrade kit”. Ended up running Alphas & Suns a bit everywhere in Imperial plus a few HPs for the Aeronautical Engineering bunch. I hate HP/UX, for reference.
Following Imperial, I ended up at the now defunct London Parallel Applications Centre where I had an Alpha farm, several Alpha workstations and “my” MasPar plus a rarity, an AMT DAP. Cue: HPF, MPI, etc. etc. plus Tandem K10000. Then Mathematics again, where it was Linux & Sun Solaris. At some point I ended up on IPv6 & 6BONE with my very own 3FFE:: prefix.
Startup time, because it was the dot.com thing, and K2 Defender was born (http://www.k2defender.com/) with me as a co-founder: a gigantic distributed NIDS based around a Tandem S-series (cue: more TACL) and then ported “down” to a simple Unix database.
Then death of the startup because the product was far too early for the world and the only customers would not readily buy from an Italian & South-African/German combo.
Since then independent security consultant with an eternal adoration for old Unix systems, in particular Motorola 88K-based.
Machines owned in various ways:
* VAX w/ Unix
* Onyx C8002
* Data General Aviion with System V
* tons of PCs running whatever Unix I could lay my hands on
* Sun3
* Sun4 until my Ultra & SS10 died
* SGI Irix on different machines
* Apollo DN10000
* A/UX 1.0 (yes, that too…)
* RS/6000 w/ AIX
* RIOS with RISCOS (I think…)
* Linux since 0.12 booting off a floppy in the Mathematics undergrad PC room :)
* OpenBSD since 2.2
* FreeBSD since forever
* NetBSD only occasionally
* HP/UX
* OSF/1 since T1.0 until the bitter end (I still have a gorgeous PWS 600au with the Evans & Sutherland 3D graphics card)
* Ultrix
* WD9000 w/ Pascal
* Xenix
* Tandem Unix layer
* Convex
* other absurd Unix variants I have forgotten…
Arrigo
Hello all!
Thanks Warren for accepting / inviting me!
Old timer of computing... I did and contributed the port of PGP 2.6.3i
to MIPS RC/3230 in 1993-1994 (because I needed that to order a
Munitions T-Shirt from Adam Back).
Yeah... I was running crypto on my corporate server, in France, while
it was illegal. But I wanted the T-Shirt...
In effect, I can say "been there... done that... got the t-shirt" :)
Gilles
All, I'm trying to build a split I/D kernel for V7M. I've installed
the system following my notes at
http://www.tuhs.org/Archive/Distributions/DEC/Jean_Huens_v7m/simh_notes.txt
Reading the setup.txt document in the same place, I should be able to do:
cd /sys/conf
make all44 (build the kern & dev components split I/D)
mkconf < hptmconf (set up for hp and tm devices)
make unix44 (link the kernel)
cp unix_id /nunix (copy it to the root)
However, when I try to boot the kernel:
PDP-11 simulator V4.0-0 Beta git commit id: 24f1c06d
#nunix
Trap stack push abort, PC: 071752 (BIS #1,(R3))
Thanks in advance for any help.
Warren
The mention of UNOS, a realtime "clone" of Unix, in a recent thread raises a question for me. How many
Unix clones are there?
(My interest in Unix was the result of a local computer magazine, Bits'n'Bytes, in the late 80s and early 90s
discussing two clones, Minix and Coherent, in its Unix column. Then came Linux ...)
We've got a timeline (in several forms, in the 4.3BSD and 4.4BSD books and The Magic Garden, on Groklaw,
and elsewhere) for Unix and its developments; has anyone done one for the clones?
Thanks
Wesley Parish
"I have supposed that he who buys a Method means to learn it." - Ferdinand Sor,
Method for Guitar
"A verbal contract isn't worth the paper it's written on." -- Samuel Goldwyn
I suppose that it would make sense that all of AT&T's leading edge projects would use research Unix. I've always heard of the original C++ to C translator but this is the first time I've actually seen it.
It doesn't look like it had the wide-scale following that C or Fortran had at this point.
Sadly my experience with C++ was mostly tied to Borland on the micro in the early '90s, which makes it look mature compared to these early versions.
It's great finding stuff like this in the tree hiding in plain sight, if only you know what to look for. (http://unix.superglobalmegacorp.com/cgi-bin/cvsweb.cgi/researchv9/cmd/cfron…)
Or that emacs was in the v9 tree; in the religious wars I always imagined NJ as being more vi.
Thanks again for making this release happen!
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Thu, 30 Mar 2017, Warren Toomey via Uucp wrote:
> On 03/29/2017 11:09 PM, Dave Horsfall via Uucp wrote:
> > Let the cancel/rmgroup/flame wars begin :-)
>
> :-P
And I still bear the scars from the aus.bizarre war... And I'll bet that
not many people remember that little episode :-)
> > (Been too busy to set up "utzoo" yet, so if anyone is desperate for it
> > then they can have it instead; my long-term goal is to run SimH on a
> > RasbPi but first I have to afford one...)
>
> What's your address? I've got an unused Raspberry Pi that I'll send you
> (or anyone else). ;-) First come, first serve.
To be sent privately...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Wow, that's perfect, thanks!
> ----------
> From: Paul McJones
> Sent: Thursday, April 6, 2017 10:42 AM
> To: tuhs(a)minnie.tuhs.org
> Cc: Jason Stevens
> Subject: Re: [TUHS] I just noticed all the cfont aka C++ in research
>
> I suppose that it would make sense that all of AT&T's leading edge
> projects would use research Unix. I've always heard of the original C++
> to C translator but this is the first time I've actually seen it.
>
>
>
> In case you're interested, Bjarne Stroustrup has been helping me collect
> early versions of cfront here:
>
>
> http://www.softwarepreservation.org/projects/c_plus_plus/#cfront
>
>
> Previously we had located a listing of Release E (which we scanned),
> source for Release 1.0 of 10/10/85, and source for Release 3.0.3. From
> these 9th and 10th edition snapshots, cfront 1.2.2 6/10/87, AT&T C++
> Translator 2.00 06/30/89, AT&T C++ Translator 2.1.0+ 04/01/90, and AT&T
> C++ Translator 2.1++ 08/24/90 join the list.
>
> I suppose that it would make sense that all of AT&T's leading edge projects would use research Unix. I've always heard of the original C++ to C translator but this is the first time I've actually seen it.
In case you’re interested, Bjarne Stroustrup has been helping me collect early versions of cfront here:
http://www.softwarepreservation.org/projects/c_plus_plus/#cfront
Previously we had located a listing of Release E (which we scanned), source for Release 1.0 of 10/10/85, and source for Release 3.0.3. From these 9th and 10th edition snapshots, cfront 1.2.2 6/10/87, AT&T C++ Translator 2.00 06/30/89, AT&T C++ Translator 2.1.0+ 04/01/90, and AT&T C++ Translator 2.1++ 08/24/90 join the list.
Joerg Schilling:
BTW: UNOS has been sold to real customers from its beginning. Was UNIX V8
available outside AT&T?
=====
I'm not sure what that has to do with anything. Which
of your body parts is so small as to make you insecure,
and which UNIX distributions are your body parts drawn
from?
To answer the question seriously, though: as I think I've
already explained here, Eighth Edition UNIX was available
under special per-site licensing (a letter agreement) to
educational institutions. I'm not sure what the official
criterion was: I helped make the tape, but wasn't involved
in the paperwork. I believe the total was about a dozen
places. A few of them did interesting work with the
system that was published e.g. at USENIX conferences
(Princeton comes to mind), but most I think never even
booted the system up. By then there were other members
of the UNIX family that were more comfortable for general
use, and people were more interested in the ideas than
in the code.
And of course we were a research group. We weren't making
things for customers. We were sharing our work, to the
extent the lawyers and our own limited resources allowed.
That was the last time the Computing Science Research
Center attempted anything like a formal distribution.
Any `distributions' after that are just snapshots of
a constantly-evolving system.
Norman Wilson
Toronto ON
(Body parts not available on github. Sorry.)
> > Can you characterize what the 3rd-party material might be?
>
> Me personally, no. But there are others on the list who can help do this.
> Hopefully they will chime in!
Here's a list, gathered from the manuals, of stuff that Bell Labs
may not have the right to redistribute, with some (uncertain)
attributions of origin. I did not check to see which of them appear in
the TUHS archives; I doubt that /usr/src fully covered /bin and /usr/bin.
This list is at best a first draft. Please weigh in with corrections.
Doug
Kernel internet code. BSD
Imported commands
esterel INRIA
lisp, liszt, lxref MIT
icont, iconc Arizona
macsyma MIT
maple Maplesoft
Mail BSD
matlab Mathworks
more BSD (From the manpage: "More, a paginator that lives up to its name, has
too many features to describe." Its prodigality has been eclipsed by "less".)
netnews Duke
ops5 CMU
pascal, pc BSD
pxp BSD
readnews, checknews, postnews Duke
sdb BSD
smp Wolfram
spitbol IIT
telnet BSD
tex Stanford
tset BSD
vi, ex, edit BSD
Commands I'm not sure about, could be from Bell Labs
cyntax
news
ropy
strings
Library functions
termcap BSD
Imported games
adventure, zork, aardvark, rogue
atc
doctor MIT
mars
trek, ogre, sol, warp, sail
Games I'm not sure about
back
boggle, hangman
cribbage, fish
ching
gebam
imp
mille
pacman
pengo
swar
tso
Joerg Schilling:
Interesting that they created a name clash:
"p" was the name of a pager on UNOS, the first realtime
UNIX lookalike from former AT&T employees.
=====
p was something Rob Pike brought when he arrived in
1980. I believe he wrote its first version several
years earlier, when he was at the University of Toronto.
Since UNOS dates from 1981 (says Wikipedia), I think
Rob's p gets precedence.
Not that it matters. There never was, nor should there
ever have been, some global register of UNIX command
names during its formative years. UNIX was a research
platform and a living work-in-progress until it became
productized in the latter part of the 1980s.
And, of course, UNOS was a lookalike written from scratch.
It wasn't UNIX. If it wanted to be, it should have
adopted Rob's p!
Norman Wilson
Toronto ON
(Not in a particularly serious mood)
> Does someone have a more complete distribution of the Ninth Edition
From the README and file dates, it is clear that what tuhs has had
evolved considerably from the Research system described in the
v9 manual. It had been ported to Sun and outfitted with X11. Some
lacunae are attributable to the absence of /bin shell scripts;
many things were apparently pruned as being of no interest to
the installation at hand.
It should be borne in mind that there never was such a thing
as a "distribution" of v8, v9, or v10. The manuals described
the Research computing environment, not a package prepared
for shipment. Responsibility for the latter had been taken
over by the Unix Support Group.
It would be interesting to have a precis of the provenance
of the system on view.
Doug
For the 99% of us that don’t have a SUN-3, it’ll run on TME. I followed the instructions on abiyo.net on installing SunOS 4.1.1 onto TME ( http://www.abiyo.net/retrocomputing/installingsunos411tosun3emulatedintme08… ), and then added in the v9.tar file, and walked through the instructions on bootstrapping v9 from SunOS.
I just cheated by copying the SunOS disk into a new scsi disk so I didn't have to go through the fun of labeling it. TME emulates a SUN3-150, although the BIOS appears to be for a SUN3-160(?), so the kernel unix.v75 will work when altering the proto0a file.
TME can be a little (lot!) touchy to get working, but with enough persistence you can get it booting the ‘funinthe’ v9 image.
Sent from Mail for Windows 10
I'll have to have a look at what's actually in the archive,
but it's important to understand that there was barely a
`distribution' of 8/e and never really any of subsequent
systems.
There was a distinct V8 tape, assembled mainly by Dennis
with some help from me, in the fall of 1984. It did not
exactly match the manuals that were printed, I think only
for internal use, a few months before. It was sent out
to about a dozen universities (and perhaps other sorts of
non-commercial places), with an individual letter agreement
with each destination to cover licensing.
Anything after that from the original Computing Science
Research Centre was effectively just random snapshots.
There was a Ninth Edition manual printed up, but it still
didn't really match the state of the system, partly
because nobody felt up to doing all that work and
partly because various parts of the system were still
changing rapidly.
I remember personally making a few such snapshots at
various times, e.g. one for a certain university (again
under a one-off letter agreement), another for the
official UNIX System Labs folks in Summit (I took the
tape over there personally and helped get the system
running). I have no idea whether any of those is in
Warren's archives, and I don't remember whether anyone
else made any such snapshots, though my role in the group
by then was such that I'd probably have been involved.
The `V9 for Sun' distribution was done by someone from
one of our sister groups. He took a snapshot of our
system at some point and worked from that. There wasn't
any organized way to keep his stuff in sync with ours, and
I don't think his stuff got a lot of use in the long run
so there was little motivation to fix that.
All of that at least partly explains the skew between
system and manual pages; it was really like that. (Remember,
we were a research group, not a production computing
centre or a development shop.) Snapshots may have been
made hastily enough that some things were missed, too.
The 10/e manual came out in early 1990. It happened
because enough of us wanted to have a current manual
again, Doug was willing to take on the big task of
overall editing for Volume 1, and Andrew Hume was
energized to make Volume 2 happen--the first Volume 2
since the Seventh Edition. There was a lot of rewriting,
cleanup, merging of related entries, and discarding of
stuff we no longer used or no longer considered an
official part of the system. I remember that the first
printed copies arrived just in time for me to get one
before I left the wretched suburbs forever in June 1990.
Since I'd spent a lot of time working over the power-of-two
sections (2 4 8), I was pleased about that.
One thing that helped energize others about that manual,
by the way, was that I felt the parts I was responsible
for were way, way out of date, and that it was no longer
accurate for the system to call itself Ninth Edition
when it booted. But Edition always meant the manual,
so Tenth Edition would be wrong too. I made the boot
message say 9Vr2. I figured that would annoy people
enough to help convince them to help get a new manual
out. I have no data as to how big a help that was.
I don't know how many `V10 distributions' Warren has
at this point, but one of them is derived from a
snapshot I made during a visit to Bell Labs in 1994
or 1995. I had rescued some MicroVAXes before they
disappeared into dumpsters, and decided it would be
fun to set up a system or two running Research UNIX
for my private enjoyment. (I was working at a
university that had a letter agreement for 9/e--one
of the tapes I'd made, in fact--and a certain
department head at Bell Labs decided that as long
as I didn't spread the code around, that was probably
enough to keep lawyers happy.) I made rather a raw
snapshot of the root, /usr, and the whole master
source-code area, but with /etc/passwd trimmed of
any real passwords. Some years later (and with the
help of the resulting running systems) I made a
few tar images for Warren to keep in his secret
box pending the license issue (which we were discussing
even back then). I removed some stuff that didn't
belong to Bell Labs and wasn't really part of the
system (e.g. some big mathematical packages, a huge
bolus of X11 code that had never compiled and never
would), and segregated in a separate tar image some
stuff that was arguably part of the system but that
might technically belong to others (e.g. our workhorse
C compiler was based on pcc2, work scj had done over
at USG/USL after he'd left the Research world).
None of that was really curated either, and there had
certainly been further changes to the system since
the final 1990 manual was printed, not all of which
had been properly reflected in /usr/man.
So don't call those systems distributions, because
they're not. More important, don't expect them to be
fully coherent, because they aren't: they're snapshots,
not formal releases.
Norman Wilson
Toronto ON
First, many thanks to all people who made it possible to release v8 to
v10 and especially to Warren for bringing them together.
I went through the files in the Ninth Edition available at
http://www.tuhs.org/Archive/Distributions/Research/Norman_v9/ and I fear
that the distribution may be incomplete. The manual pages for most
sections are missing. Also, many v8 /usr/src/cmd commands are not
available in v9 /cmd. This is the list of the difference between the
two sets.
2621 300 300s 4014 450 512restor ac accton Admin apply arcv arff as asa
ascii asd at awk bad144 basename bcd bundle byteyears c2 cal calendar
cb cbt cc ccom cflow cflow checkeq chgrp clri col comm compact compat
config cref crypt csh ct ctags cvcrypt cyntax daemon dcheck deroff des
descrypt diction diff3 dircmp dired dmesg dskcpy dump dumpcatch dumpdir
efl ether ex expand f77 factor false fcopy finddev flcopy fold fsplit
fstat getopt getuid graph group gsi head hideblock hist hoc hp icheck
ideal idiff inet install iostat kasb labmake last lcomp ld learn lfactor
lint load log logdir look m4 mail Mail makekey man map mesg mips mkbitfs
mkstr monk morse ncheck neqn netfsbug newer news nm number numdate oops
pack paste pcc1 PDP11 plot primes prof pstat pti ptx punct qed quot
random rarepl ratfor rcp readslow refer reloc renice reset restor rev
rp07dump rp07rest sa savecore sdb sdiff seq server settod showq snocone
spell spline split struct style sum swapon tabs tape tcat tk tp tpr tr
trace track trim tsort ul und unexpand uniq units upas uucp uudecode
uuencode v8 value view2d vis where wwb wwv xref xstr yacc yes
Does anyone know what is going on? Does someone have a more complete
distribution of the Ninth Edition that Warren can put online?
Diomidis
I was pleased to see the announcement from David du Colombier
<0intro(a)gmail.com> of a SIMH VAX image for the newly-released 8th
Edition Research Unix, and I had it downloaded and running in short
order.
I compiled and ran the traditional hello-world test in C,
successfully, but the C++ and Fortran companions would not link:
# cat hello.cpp
#include <stream>
int
main(void)
{
cout << "Hello, world...this is C++" << endl;
return (0);
}
# CC hello.cpp
cc hello.cpp -lC
ld:hello.cpp: bad magic number
I also tried extensions .C, .cxx, and .cc, with the same error.
With Fortran, I get
# cat hello.f
print *,'hello ... This is a Fortran 77 program!'
end
# f77 hello.f
hello.f:
MAIN:
Undefined:
__bufendtab
_setvbuf
The missing symbols do not appear to be defined anywhere in /lib or
/usr/lib, suggesting that some V8 libraries have been lost.
David reports privately that he sees the same issues that I see.
Can anyone on this list identify the problem? Normally, Unix
compilers should supply the necessary libraries for
standard-conforming code, and f77 would silently add -lI77 and -lF77.
The C++ compiler issue seems to be quite different: adding the -v
(trace) option shows what it does:
# CC -v hello.cpp
cc -v hello.cpp -lC
+ ld -X /lib/crt0.o hello.cpp -lC -lc
ld:hello.cpp: bad magic number
The source code, rather than the object code, is passed to the loader.
This may have to do with the choice of extension: what did early C++
compilers expect for filenames?
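A plausible answer to Nelson's question: like today's cc drivers, the cfront-era CC script almost certainly dispatched on the file-name suffix and passed anything it did not recognize straight through to ld, which is exactly the trace shown above. A minimal sketch of that dispatch in C (the single accepted suffix, .c, is an assumption, not documented V8 behavior):

```c
#include <string.h>

/* Sketch of the suffix dispatch a cfront-era CC driver performed
   (in reality it was a shell script).  Return 1 if the driver would
   run the file through cfront and cc, 0 if it would pass the name
   straight through to ld unchanged. */
int
wants_compile(const char *name)
{
    const char *dot = strrchr(name, '.');

    return dot != 0 && strcmp(dot, ".c") == 0;
}
```

If this guess is right, renaming hello.cpp to hello.c should at least get the file to cfront rather than to ld.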
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
All, I've just had an e-mail from someone to say that the two 10th Edition
versions we now have in the Unix Archive are missing /usr/include:
----- Forwarded message from someone -----
/usr/include is missing from both Dan's and Norman's archive. Here it
is (with a README). The timestamp on these files is:
-rw-r--r-- 1 root staff 307 Sep 3 1997 README
-rw-r--r-- 1 root staff 829440 Sep 3 1997 r70include.tar
----- End forwarded message -----
I've put these files into
http://www.tuhs.org/Archive/Distributions/Research/Norman_v10/
Now that we have 9th Edition up and running, it would be good to see 10th
Edition also up and running. I know there are people out there who can
help with this, so this is a call out for help and for volunteers.
Cheers, Warren
P.S The UUCP project is now up to 23 sites:
https://github.com/DoctorWkt/4bsd-uucp/blob/4.3BSD/uucp.png
All,
I have created a TUHS-related github org to store:
1). UUCP-related patched software
2). patched/maintained c-news
3). forks et al of newsreaders
Largely just to give a central location.
Usual disclaimers apply...if you want me to delete the org/change
name/transfer ownership/etc just shoot me an email.
https://github.com/TUHS
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
> If anyone has any information about 'samuel' or 'cin',
> I would be delighted to hear from him or her.
cin appears in the v9 and v10 manuals. Thus if the
program is found in the wild, there should be no
compunction about adding it to the tuhs archive.
Doug
Due to having an actual NSA VAX...I think I should get NSAVAX ;)
Sent from my iPhone
> On Mar 30, 2017, at 23:13, Dave Horsfall via Uucp <uucp(a)minnie.tuhs.org> wrote:
>
>> On Thu, 30 Mar 2017, Kurt H Maier via Uucp wrote:
>>
>> kremvax and moskvax are online. What's the policy on uucp.map entries
>> for hosts that never really existed in the first place?
>
> Hmmm... I wouldn't mind having "kgbvax" in that case (I've relinquished
> "utzoo" to Henry Spencer himself). Or possibly "ciavax" or "fbivax"...
>
> --
> Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I was just the other day apprised of the fact that
Warren Toomey had finally got permission to make public
sources from Eight, Ninth and Tenth Edition Unix. It
was Paul McJones from the Software Preservation Group
who made me aware of this because some months ago I was
in touch with him in regard to the status of source for
'pads/pi' which is on the SPG website. That was donated
by Bjarne Stroustrup in 2004, I believe, as an example of
a large C++ project, along with sources for 'cfront'.
I myself have been working on pads/pi in a desultory
fashion since 1990 when I left the Basser Department of
Computer Science at the University of Sydney; we had an
Eighth Edition licence, hence source code, and all of
the blit/jerq utilities.
I ported 'pads' to the Plan9 graphics model many years
ago, and finally started to port 'pi' to Solaris 11 a
couple of years ago. This works in a rudimentary
fashion, but needs a Dwarf interface.
Given that Bjarne Stroustrup released this code long ago
(a much later version than what I had), I daresay there
are no problems with its publication.
I have, however, also worked on 'samuel' and ported its
functionality to the Plan9 'sam'. 'samuel' does not
appear in any of the archives that Warren Toomey has
made available, so I am wondering what its status may
be. It was written by John Puttress at the labs in the
late '80s, when he was apparently working under Ted
Kowalski. Ted Kowalski is also a 'person of interest'
because he wrote 'cin', the C interpreter, an interface
to which exists in 'samuel'. Ted Kowalski
unfortunately passed away some years ago and no-one
seems to know where the source is. It, again, isn't in
any of the archives that Warren Toomey has been able to
make available.
If anyone has any information about 'samuel' or 'cin',
I would be delighted to hear from him or her.
Bruce Ellis, who wrote 'cyntax', told me that he worked
on 'cin' a lot (in the '90s I think, when he worked at
Bell Labs) but it was succeeded by something named
'vice', about which I can find no information. Again, I
would be grateful for any information about this.
Regards,
Noel Hunt
Joerg Schilling:
This is done on a UNIX implementation that uses STREAMS.
SVr4 is such a UNIX.
======
I know that SVr4 has STREAMS (somewhat more elaborate than
the original stream I/O system, but the same principles),
and knew (though I'd forgotten) that Ethernet devices are
stream-capable. I did an implementation of the late
lamented Coraid's AoE protocol that took advantage of that.
Somewhat like the Research IP implementation, in fact:
there was an AoE line discipline to be pushed onto an
Ethernet device, coupled to devices in /dev/dsk and /dev/rdsk.
But is IP done that way in SVr4 (or at least in Solaris, its
most-visible descendant)? I had the impression that the
IP stack was more like the BSD one, with everything coupled
together within the kernel and a fundamentally socket interface.
I've never actually looked at the code, though.
Norman Wilson
Toronto ON
Arnold:
> Well, I'll take credit for pushing Norman yet again. :-)
ARGGH. That should have been
Well, I'll take credit for pushing Warren ...
^^^^^^
=====
I confess I didn't notice the fumble the first time.
But in fact Warren has had the dubious pleasure of
pushing me for something more than once. So just
think of it as another level of indirection.
Norman Wilson
Toronto ON
Ron Natalie:
It's a violation of the network layering concept to require or even allow
the user to bind the application data streams to a physical device.
=====
Oh, come on.
If you mean an applications programmer shouldn't have to wallow
in low-level network details, I agree. That's one of the reasons
I think the socket interface now standard in nearly every UNIX
is an abomination. It reminds me of the binary data structures
you had to assemble just to open a file in TOPS-10, only
ten times worse.
But if you mean it's a violation of layering for the kernel to
expose the pieces and let user-mode code do the work, I strongly
disagree. By that argument, the very idea of inetd is an abomination.
Possibly even the ifconfig command. And don't even get me
started on microkernels. User-mode code for device
drivers or file systems? Outrageous violation of layering! Send
in the New Jersey Inquisition!!
Or perhaps you misunderstand how it all works. Device files
for Ethernet devices, /dev/ip*, and so on are like those for
disk devices: you could allow anyone to read and write them,
but in practice you probably wouldn't: you'd restrict access
to the super-user and perhaps a special group to permit some
sort of privilege reduction (like the group allowed to read
disks on some systems, or to read /dev/kmem).
That parts of the stack are assembled in user mode is a feature,
not a bug.
The one glaring flaw, as I said, is that no permissions are
checked when pushing a stream line discipline onto a file.
I think that happened because when Dennis first wrote the
code, he was thinking about modules to implement canonical-tty
semantics, or to invoke the very-different Datakit networking
model. It's a fundamental flaw, though. I have had thoughts
about fixing it, but never enough time nor enough motivation.
(My technical mind is pretty much filled up by what I am
paid to do these days; I haven't done much hobby computing
in years.)
Norman Wilson
Toronto ON
Josh Good:
Which brings up a question I have: why didn't UNIX implement ethernet
network interfaces as file names in the filesystem? Was that "novelty" a
BSD development straying away from AT&T UNIX?
======
Remember that UNIX has long been a family of systems;
it's risky to make blanket statements!
The following is from memory; I haven't looked at this
stuff for a while and am a few kilometers from my
manuals as I type this. I can dig out the complete
story later if anyone wants it; for now, this is just
the flavour and the existence proof.
Research UNIX, once it supported Ethernet at all, did
so using devices in /dev; e.g. /dev/qe0[0-7] were the
conventional names for the first DEQNA/DELQA device on a
MicroVAX. There were eight subdevices per physical
device, each a channel that might receive datagrams
of a particular 16-bit type, programmed by an ioctl.
To set up what we now call an IP interface, one did
the following:
a. Open an unused channel for the proper Ethernet
physical device. (I think the devices were exclusive-
open to make this easier: open /dev/qe00, then qe01,
and so on until one succeeds.)
b. Issue the ioctl to set the desired datagram type,
usually 0x800.
c. Push the IP stream line discipline onto the open file.
d. Issue an ioctl to inform that IP instance of its
address and netmask.
Now datagrams of the specified type arriving on that
device are fed to the IP subsystem, and the IP
subsystem uses the IP address and mask and possibly
other information to decide which datagrams to route
to that IP instance, which sends them out that physical
device.
I forget how ARP and Ethernet encapsulation fit in.
I know that they were somewhat more naive early on,
and that in the 10/e systems I can now admit I have
running at home I made things a bit smarter and less
brittle. But that's the basic architecture.
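The layering in steps a-d can be modelled in miniature. This is a toy Python sketch of the architecture only; every name here (EtherChannel, IPDiscipline, the ioctl methods) is my invention for illustration, not the real Research Unix interfaces:

```python
# Toy model of the stream stacking described above: an exclusive-open
# Ethernet channel, an ioctl to select a datagram type, and a line
# discipline "pushed" on top.  Writes flow down through the discipline
# to the device.
class EtherChannel:
    """Stands in for one /dev/qe0N subdevice."""
    def __init__(self):
        self.dgram_type = None   # set by ioctl, e.g. 0x800 for IP
        self.wire = []           # frames "sent" out the device

    def ioctl_set_type(self, t):
        self.dgram_type = t

    def write(self, frame):
        self.wire.append(frame)

class IPDiscipline:
    """Stands in for the IP stream line discipline."""
    def __init__(self, below):   # "pushing" the discipline onto a file
        self.below = below
        self.addr = self.mask = None

    def ioctl_set_addr(self, addr, mask):
        self.addr, self.mask = addr, mask

    def write(self, payload):
        # wrap the payload in a (toy) datagram and pass it downstream
        self.below.write({"type": self.below.dgram_type,
                          "src": self.addr, "data": payload})

# Steps a-d from the message above:
qe = EtherChannel()                              # a. open a free channel
qe.ioctl_set_type(0x800)                         # b. set datagram type
ip = IPDiscipline(qe)                            # c. push IP discipline
ip.ioctl_set_addr("192.0.2.1", "255.255.255.0")  # d. address and netmask
ip.write(b"a datagram")
```

The point the model captures is that each layer only knows about the open file below it; nothing in the "kernel" ties IP to Ethernet until user code assembles the plumbing.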
So how does one actually make, say, a TCP connection?
Another layer of the same sort:
There are devices /dev/ipX, served by an IP device
driver that is part of the IP subsystem. Originally
minor device X was hard-connected to IP protocol X;
I later changed that to be ioctl-configured as well.
To make TCP usable:
a. Open /dev/ip6 (old school), or find an unused
/dev/ipX (again they are exclusive-open) and configure
it to accept protocol 6.
b. Push the TCP stream line discipline onto the
open file.
c. There are probably things one could then configure,
but I don't remember them offhand.
To make a TCP call, open an unused /dev/tcpNN device;
write something to it (or maybe it was an ioctl) with
the desired destination address; wait for an error or
success. On success, writes to the file descriptor send
to the network, encapsulated as a TCP stream; reads
receive.
To receive a TCP call, open an unused /dev/tcpNN device,
and write something (or ioctl) to say `I want to listen
on this local port.' Then read the file. When a call
arrives, you will read a message saying who's calling,
and what /dev/tcpXX device you should open to accept the
call.
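The dial convention (the first write is a control message naming the destination; later writes carry data) can be sketched as a toy state machine. The class and its behaviour are invented for illustration, not the real /dev/tcpNN driver:

```python
# Toy model of the /dev/tcpNN calling convention described above:
# the first write establishes the call, subsequent writes carry
# stream data.  Purely illustrative.
class TcpDev:
    def __init__(self):
        self.state = "idle"
        self.dest = None
        self.sent = []

    def write(self, data):
        if self.state == "idle":
            self.dest = data          # control write: destination address
            self.state = "connected"  # pretend the call succeeded
        else:
            self.sent.append(data)    # data phase: send on the network

dev = TcpDev()                        # open an unused /dev/tcpNN
dev.write("192.0.2.7:smtp")           # say where to call
dev.write(b"HELO example.org\r\n")    # now writes carry the stream
```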
Notice the general scheme: for both TCP and IP (and there
was a primitive UDP at one point, but it has fallen out
of use on my systems), the protocol implementation
comprises:
1. A line discipline: push it onto devices that will
transport your data.
2. A device driver: use those devices to send and
receive calls.
The two are inextricably coupled inside the operating
system, of course.
There are all sorts of unfortunate details surrounding
communications; e.g. the TCP code knows it is talking
to IP, and constructs datagrams with partly-filled-in
IP headers. (It is not clear one can do better than
that in practice, because the protocols themselves really
are linked, but I still think it's unfortunate.)
On the other hand, that the junctions between plumbing
are accessible makes some things very simple. I wrote
a PPP implementation in user mode, with no kernel
changes: to plug itself into IP, it just pushed the
IP line discipline onto one end of a pipe, and read
and wrote datagrams on the other. I later extended it
to PPPoE by having it open an Ethernet device set to
the proper datagram types (there is one for data and
another for connection setup).
On the other other hand, there are no permissions on
stream line disciplines, so an untrustworthy person
(if I allowed such on my systems) could push the IP
line discipline onto his own pipe and send whatever
datagrams he liked. This is decidedly a flaw.
Those familiar with the original stream-system
implementation have already spotted a lesser flaw:
the file descriptor with IP pushed on (or TCP, or
whatever) must remain open; when it is closed,
everything shuts down. In practice it is usually
useful to have a daemon listening to that file anyway;
that's a good way for the system to report errors or
confusion. In practice, TCP incalls and outcalls
all went through a special daemon anyway, so that
programs didn't have to be full of TCP-specific crap;
that's what Dave Presotto's `connection server' is
all about.
Norman Wilson
Toronto ON
Thanks for the hints, and whatnot... I got v8 running!
> ----------
> From: Tim Newsham
> Sent: Wednesday, March 29, 2017 7:50 AM
> To: David du Colombier
> Cc: The Eunuchs Hysterical Society
> Subject: Re: [TUHS] 8th Edition Research Unix on SIMH
>
> Would be great if someone scripted it up to make it dog-simple.
> Here's how I did it for v6: http://www.thenewsh.com/~newsham/myv6/README
> (I should do this, but I'm not sure I'll have time in the near future).
>
> On Tue, Mar 28, 2017 at 8:58 AM, David du Colombier < 0intro(a)gmail.com>
> wrote:
>
>
> Here are my notes to run 8th Edition Research Unix on SIMH.
>
> http://9legacy.org/9legacy/doc/simh/v8
>
> These notes are quite raw and unpolished, but should be
> sufficient to get Unix running on SIMH.
>
> Feel free to use, improve and share.
>
> --
> David du Colombier
>
> --
>
> Tim Newsham | www.thenewsh.com/~newsham | @newshtwit |
> thenewsh.blogspot.com
>
I am maintaining a BSD Mail derivative and one goal of the project
is to have the complete history of Unix mail and BSD Mail in the
repository.
There already exists a [timeline] branch, which has been fed with
data from TUHS (thank you!) and CSRG; but it is coarse, and since
its history is linear the new V8 and V10 material cannot be
inserted, so I will need to add new [unix-mail] and [bsd-Mail]
branches (and maybe [bsd-csrg] with all commits preserved, as
Spinellis did in the Unix-history repository).
Thanks to TUHS the former can be almost completed, except that
there is no trace of 9th Edition mail. It would be fantastic if
this project could finally provide the complete history of Unix
mail. I would be grateful for information on where to get a copy
of 9th Edition mail. Thank you.
P.S.: apologies for hijacking the other thread with such
a coarse message.
--steffen
All, today after some heroic efforts by numerous people over many years,
Nokia-Alcatel has issued this statement at
https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%…
Statement Regarding Research Unix Editions 8, 9, and 10
Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia
Bell Laboratories agrees, to the extent of its ability to do so, that it
will not assert its copyright rights with respect to any non-commercial
copying, distribution, performance, display or creation of derivative
works of Research Unix®1 Editions 8, 9, and 10. The foregoing does not
(i) transfer ownership of, or relinquish any, intellectual property rights
(including patent rights) of Nokia Corporation, ALU-USA or any of their
affiliates, (ii) grant a license to any patent, patent application,
or trademark of Nokia Corporation, ALU-USA. or any of their affiliates,
(iii) grant any third-party rights or licenses, or (iv) grant any rights
for commercial purposes. Neither ALU-USA. nor Nokia Bell Laboratories will
furnish or provided support for Research Unix Editions 8, 9, and 10, and
make no warranties or representations hereunder, including but not limited
to any warranty or representation that Research Unix Editions 8, 9, and
10 does not infringe any third party intellectual property rights or that
Research Unix Editions 8, 9, and 10 is fit for any particular purpose.
There are some issues around the copyright of third party material in
8th, 9th and 10th Editions Unix, but I'm going to bite the bullet and
make them available in the Unix Archive. I'll post details later today.
Cheers, Warren
> From: Tim Newsham
> Would be great if someone scripted it up to make it dog-simple.
But if people just have to press a button (basically), they won't learn
anything. I guess I'm not understanding the point of the exercise? To say they
have V6 running? So what? All they did was press a button. If it's to
experience a retro-computing environment, well, a person who's never used one
of these older systems is going to be kind of lost - what are they going to
do, type 'ls -ls' and look at the output? Not very illuminating. (On V6,
without learning 'ed', they can't even type in a small C program, and compile
and run it.) Sorry, I don't mean to be cranky, but I'm not understanding the
point.
Noel
> From: Grant Taylor
> However, I've had to teach enough people to know that they need a way to
> boot strap themselves into an environment to start learning.
Right, but wouldn't they learn more from a clear and concise hand-holding
which explains what they are doing and why - 'do this which does that to get
this'?
There is no more a royal road to knowing a system, than there is to
mathematics.
> I do consider what (I believe) Warren put together for the UUCP project
> to be a very good start. Simple how to style directions that are easy
> to follow that yield a functional system.
Exactly....
Noel
On 2017-03-28 09:37, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
>
> > From: Johnny Billquist
>
> > the PDP-11 have the means of doing this as well.... If anyone ever
> > wondered about the strangeness of the JSR instruction of the PDP-11, it
> > is precisely because of this.
> > ...
> > I doubt Unix ever used this, but maybe someone know of some obscure
> > inner kernel code that do. :-)
>
> Actually Unix does use JSR with a non-PC register to hold the return address
> very extensively; but it also uses the 'saved PC points to the argument'
> technique; although only in a limited way. (Well, there may have been some
> user-mode commands that were not in C that used it, I don't know about that.)
>
> First, the 'PC points to arguments': the device interrupts use that. All
> device interrupt vectors point to code that looks like:
>
> jsr r0, _call
> _iservice
>
> where iservice() is the interrupt service routine. call: is a common
> assembler-language routine that calls iservice(); the return from there goes
> to code further down in call:, which does the return from interrupt.
Ah. Thanks for that. I hadn't dug into those parts, but that's the kind
of place where I might have suspected it might have been, if anywhere.
> Use of a non-PC return address register is used in every C routine; to save
> space, there is only one copy of the linkage code that sets up the stack
> frame; PDP-11 C, by convention, uses R5 for the frame pointer. So that common
> code (csv) is called with a:
>
> jsr r5, csv
>
> which saves the old FP on the stack; CSV does the rest of the work, and jumps
> back to the calling routine, at the address in R5 when csv: is entered. (There's
> a similar routine, cret:, to discard the frame, but it's 'called' with a plain
> jmp.)
Hah! Thinking about it, I actually knew that calling style, but didn't
reflect on it, as you're not passing any arguments in the instruction
stream in that situation.
But it's indeed not using the PC as the register in the call, so I guess
it should count in some way. :-)
Johnny
In some sense the "command subcommand" syntax dates from ar in v1,
though option flags were catenated with the mandatory subcommand.
The revolutionary notion that flags/subcommands might be denoted
by more than one letter originated at PWB (in "find", IIRC).
Doug
> From: Johnny Billquist
> the PDP-11 have the means of doing this as well.... If anyone ever
> wondered about the strangeness of the JSR instruction of the PDP-11, it
> is precisely because of this.
> ...
> I doubt Unix ever used this, but maybe someone know of some obscure
> inner kernel code that do. :-)
Actually Unix does use JSR with a non-PC register to hold the return address
very extensively; but it also uses the 'saved PC points to the argument'
technique; although only in a limited way. (Well, there may have been some
user-mode commands that were not in C that used it, I don't know about that.)
First, the 'PC points to arguments': the device interrupts use that. All
device interrupt vectors point to code that looks like:
jsr r0, _call
_iservice
where iservice() is the interrupt service routine. call: is a common
assembler-language routine that calls iservice(); the return from there goes
to code further down in call:, which does the return from interrupt.
Use of a non-PC return address register is used in every C routine; to save
space, there is only one copy of the linkage code that sets up the stack
frame; PDP-11 C, by convention, uses R5 for the frame pointer. So that common
code (csv) is called with a:
jsr r5, csv
which saves the old FP on the stack; CSV does the rest of the work, and jumps
back to the calling routine, at the address in R5 when csv: is entered. (There's
a similar routine, cret:, to discard the frame, but it's 'called' with a plain
jmp.)
Noel
Lots of tools now seem to use this strategy: there's some kind of wrapper which has its own set of commands (which in turn might have further subcommands). So for instance
git remote add ...
is a two layer thing.
Without getting into an argument about whether that's a reasonable or ideologically-correct approach, I was wondering what the early examples of this kind of wrapper-command approach were. I think the first time I noticed it was CVS, which made you say `cvs co ...` where RCS & SCCS had a bunch of individual commands (actually: did SCCS?). But I think it's possible to argue that ifconfig was an earlier example of the same thing. I was thinking about dd as well, but I don't think that's the same: they're really options not commands I think.
Relatedly, does this style originate on some other OS?
--tim
(I realise that in the case of many of these things, particularly git, the wrapper is just dispatching to other tools that do the work: it's the command style I'm interested in, not how it's implemented.)
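For what it's worth, the wrapper style is easy to sketch: a single front end that looks up (command, subcommand) pairs in a table and dispatches. Everything here ('mytool', the handlers) is made up for illustration:

```python
import sys

def remote_add(args):
    return "added remote %s" % args[0]

def remote_list(args):
    return "origin"

# two-layer dispatch table, `git remote add` style
COMMANDS = {
    ("remote", "add"): remote_add,
    ("remote", "list"): remote_list,
}

def dispatch(argv):
    """Look up the first two words; hand the rest to the handler."""
    handler = COMMANDS.get(tuple(argv[:2]))
    if handler is None:
        return "usage: mytool remote {add|list} ..."
    return handler(argv[2:])

if __name__ == "__main__":
    print(dispatch(sys.argv[1:]))
```

Implementations differ (git execs separate git-<cmd> programs, cvs dispatches internally), but the user-visible command style is the same table lookup.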
On 2017-03-27 04:00, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Monday, 27 March 2017 at 6:49:30 +1100, Dave Horsfall wrote:
>> And as for subroutine calls on the -8, let's not go there... As I dimly
>> recall, it planted the return address into the first word of the called
>> routine and jumped to the second instruction; to return, you did an
>> indirect jump to the first word. Recursion? What was that?
> This was fairly typical of the day. I've used other machines (UNIVAC,
> Control Data) that did the same. Later models added a second call
> method that stored the return address in a register instead, only
> marginally easier for recursion.
>
> At Uni I was given a relatively simple task to do in PDP-8 assembler:
> a triple precision routine (36 bits!) to clip a value to ensure it
> stayed between two limits. Simple, eh? Not on the PDP-8. Three
> parameters, each three words long. Only one register, no index
> registers. I didn't finish it. Revisiting now, I still don't know
> how to do it elegantly. How *did* the PDP-8 pass parameters?
This is probably extremely off-topic, so I'll keep it short.
This is actually very simple and straightforward on a PDP-8, but it
might seem strange to people used to today's computers.
Essentially, you pass parameters in memory, as a part of the code
stream. Also, the PDP-8 certainly does have index registers.
The first thing one must do is stop thinking of the AC as a register.
The accumulator is the accumulator. Memory is registers.
Some memory locations autoincrement when used indirectly, they are
called index registers.
That said, then. A simple example of a routine passing two parameters
(well, three):
First the calling:
CLA
TAD (42 / Setup AC with the value 42.
JMS COUNT
BUFPTR
BUFSIZ
. / Next instruction executed, with AC holding
number of matching words in buffer.
.
Now, this routine is expected to count the number of occurrences of a
specific word in a memory buffer with a specific size.
At calling, AC will contain the word to search for, while the address
following the JMS holds the address, and the following address holds the
size.
The routine:
COUNT, 0
CIA
DCA CHR / Save the negative of the word to search for.
CMA
TAD I COUNT
DCA PTR / Setup pointer to the address before the buffer.
ISZ COUNT / Point to next argument.
TAD I COUNT
CIA
DCA CNT / Save negative value of size.
DCA RESULT / Clear out result counter.
LOOP, TAD I PTR / Get next word in buffer.
TAD CHR / Compare to searched for word.
SNA / Skip if they are not equal.
ISZ RESULT / Equal. Increment result counter.
ISZ CNT / Increment loop counter.
JMP LOOP / Repeat unless end of buffer.
CLA / All done. Get result.
TAD RESULT
ISZ COUNT / Step past the second argument.
JMP I COUNT / Done.
PTR=10
CNT=20
CHR=21
RESULT=22
Addresses 10-17 are the index registers, so the TAD I PTR instruction
will autoincrement the pointer every time, and the increment happens
before the defer, which is why the initial value should be one less than
the buffer pointer.
Hopefully this gives enough of an idea, but unless you know the PDP-8
well, you might be a little confused by the mnemonics.
As you can see, the return address at the start is used for more than
just doing a return. It's also your argument pointer.
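For readers who don't know the PDP-8, here is a toy Python model of just the calling convention: memory as an array, JMS planting the return address in the routine's first word, and the routine stepping that word past its in-line arguments before the final indirect jump. It models the mechanism, not the actual instruction set:

```python
# Toy model of PDP-8 JMS argument passing: the planted return
# address doubles as the argument pointer.
MEMSIZE = 4096
mem = [0] * MEMSIZE

def count_routine(mem, entry, ac):
    """Model of COUNT: mem[entry] holds the return address stored by
    JMS; the two words at that address are the in-line arguments."""
    ret = mem[entry]
    bufptr, bufsiz = mem[ret], mem[ret + 1]
    mem[entry] = ret + 2          # step the return word past both arguments
    matches = sum(1 for i in range(bufsiz) if mem[bufptr + i] == ac)
    return mem[entry], matches    # JMP I COUNT: resume after the args

# a five-word buffer at address 200 containing three 42s
mem[200:205] = [42, 7, 42, 42, 9]
CALL = 100                        # address of the word after `JMS COUNT`
mem[CALL], mem[CALL + 1] = 200, 5 # in-line arguments: BUFPTR, BUFSIZ
mem[300] = CALL                   # what `JMS 300` would have planted
resume_at, matches = count_routine(mem, 300, 42)
```

Execution resumes at CALL+2, just past the two argument words, exactly as the `.` comment in the assembly listing indicates.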
Johnny
> From: Doug McIlroy
> As Erdos would say of a particularly elegant proof, "It comes from the
> Book," i.e. had divine inspiration.
Just to clarify, Erdos felt that a deity (whom he referred to as the 'Supreme
Fascist') was unlikely to exist; his use of such concepts was just a figure of
speech. 'The Book' was sort of a Platonic Ideal. Nice concept, though!
Noel
> From: Dave Horsfall
> And as for subroutine calls on the -8, let's not go there... As I dimly
> recall, it planted the return address into the first word of the called
> routine and jumped to the second instruction; to return, you did an
> indirect jump to the first word.
That do be correct.
That style of subroutine call goes back a _long_ way. IIRC, Whirlwind used
that kind of linkage (alas, I've misplaced my copy of the Whirlwind
instruction manual, sigh - a real treasure).
ISTVR there was something about the way Whirlwind did it that made it clear
how it came to be the way it was - IIRC, the last instruction in the
subroutine was normally a 'jump to literal' (i.e. a constant, in the
instruction), and the Whirlwind 'jump to subroutine' stored the return address
in a register; there was a special instruction (normally the first one in any
subroutine) that stored the low-order N bits of that register in the literal
field of the last instruction: i.e. self-modifying code.
The PDP-6 (of which the PDP-10 was a clone) was on the border of that period;
it had both types of subroutine linkage (store the return in the destination,
and jump to dest+1; and also push the return on the stack).
Noel
> From: Arthur Krewat
>> The PDP-6 (of which the PDP-10 was a clone)
> Um. More like a natural progression.
> Like 8086->80186->80286->80386->80486->...
No, the PDP-6 and PDP-10 have identical instruction sets, and in general, a
program that will run on one, will run on the other. See the "decsystem10 System
Reference Manual" (DEC-10-XSRMA-A=D), pg. 2-72, which provides a 7-instruction
code fragment which allows a program to work out if it's running on a PDP-6, a
KA10, or a KI10.
The KA10 is a re-implementation (using mostly B-series Flip Chips) of the
PDP-6 (which was built out of System Modules - the predecessor to Flip Chips).
Noel
Nudging the thread back toward Unix history:
> like how blindingly obvious a lot of the original Unix code was
> ... sort of what you would imagine it to be before you saw it.
That's a facet of Thompson's genius--code so clean and right that,
having seen it, one cannot imagine it otherwise. Odds are, though,
that the same program from another hand would not have the same
aura of inevitability. As Erdos would say of a particularly elegant
proof, "It comes from the Book," i.e. had divine inspiration.
Doug
OT, but of interest to a few people here :-)
The venerable PDP-8 was introduced in 1965 today (or tomorrow if you're on
the wrong side of the date line). It was the first computer I ever
used, back around 1970 (I think I'd just left school and was checking out
the local University's computer department, with a view to majoring in
Computer Science (which I did)).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I think Noel put it very well, when I saw the read()/write() vs
recvfrom()/sendto() stuff mentioned earlier I was going to say that part of
the contract of read()/write() is that they are stream oriented thus things
like short reads and writes should behave in a predictable way and have the
expected recovery semantics. So having it be boundary preserving or having
a header on the data would in my view make it not read()/write() even if
the new call can be overlaid on read()/write() at the ABI level. I think
the contract of a syscall is an important part of its semantics even if it
may be an "unwritten rule". However, Noel said it better: If it's not
file-like then a file-like interface may not be appropriate.
Having said that, Kurt raises an equally valid point which is that the
"every file is on a fast locally attached harddisk" traditional unix
approach has serious problems. Not that I mind the simplicity: contemporary
systems had seriously arcane file abstractions that made file I/O useless
to all but the most experienced developer. I come from a microcomputer
background and I am thinking of Apple II DOS 3.3, CP/M 2.2 and its FCB
based interface and MSDOS 1.x (same). When MSDOS 2.x came along with its
Xenix-subset file API it was seriously a revelation to me and others.
Microcomputers aside, my understanding is IBM 360/370 and contemporary DEC
OS's were also complicated to use, with record based I/O etc.
So it's hard to criticize unix's model, but on the other hand the lack of
any nonblocking file I/O and the contract against short reads/writes (but
only for actual files) and the lack of proper error reporting or recovery
due to the ASSUMPTION of a write-back cache, whether or not one is actually
used in practice... makes the system seriously unscalable, in particular
as Kurt points out, the system is forced to try to hide the network
socket-like characteristics of files that are either on slow DMA or
PIO/interrupt based devices (think about a harddisk attached by serial
port, something that I actually encountered on a certain cash register
model and had to develop for back in the day), or an NFS based file, or
maybe a file on a SAN accessed by iSCSI, etc.
Luckily I think there is an easy fix, have the OS export a more socket-like
interface to files and provide a userspace compatibility library to do
things like detecting short reads/writes and retrying them, and/or blocking
while a slow read or write executes. It would be slightly tricky getting
the EINTR semantics correct if the second or subsequent call of a multipart
read or write was interrupted, but I think possible.
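A user-space shim of the kind suggested is straightforward to sketch. (Note that modern systems and language runtimes often retry EINTR for you already; this just shows the pattern, and the function names are my own.)

```python
import errno
import os

def full_write(fd, data):
    """Write all of data, looping over short writes and EINTR."""
    view = memoryview(data)
    done = 0
    while done < len(data):
        try:
            n = os.write(fd, view[done:])
        except InterruptedError:      # EINTR mid-transfer: just try again
            continue
        if n == 0:
            raise OSError(errno.EIO, "descriptor stopped accepting data")
        done += n
    return done

def full_read(fd, want):
    """Read exactly want bytes, unless EOF intervenes first."""
    chunks, got = [], 0
    while got < want:
        chunk = os.read(fd, want - got)
        if not chunk:                 # EOF: return the short result
            break
        chunks.append(chunk)
        got += len(chunk)
    return b"".join(chunks)
```

Callers get the traditional "no short transfers" contract on top of a socket-like descriptor, without the kernel having to pretend every file is a fast local disk.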
On the other hand I would not want to change the sockets API one bit, it is
perfect. (Controversial I know, I discussed this in detail in a recent
post).
Nick
On Mar 26, 2017 12:46 PM, "Kurt H Maier" <khm(a)sciops.net> wrote:
On Sat, Mar 25, 2017 at 09:35:20PM -0400, Noel Chiappa wrote:
>
> For instance, files, as far as I know, generally don't have timeout
> semantics. Can the average application that deals with a file, deal
> reasonably with the fact that sometimes one gets half-way through the
> 'file' - and things stop working? And that's a _simple_ one.
The unwillingness to even attempt to solve problems like these in a
generalized and consistent manner is a source of constant annoyance to
me. Of course it's easier to pretend that never happens on a "real"
file, since disks never, ever break.
Of course, there are parts of the world that don't make these
assumptions, and that's where I've put my career, but the wider IT
industry still likes to pretend that storage and networking are
unrelated concepts. I've never understood why.
khm
Hi all, I don't mind a bit of topic drift but getting into generic OS
design is a bit too far off-topic for a list based on Unix Heritage.
So, back on topic please!
Cheers, Warren
> From: Ron Minnich
> There was no shortage of people at the time who were struggling to find
> a way to make the Unix model work for networking ... It didn't quite
> work out
For good reason.
It's only useful to have a file-name for a thing if... the thing acts
like a file - i.e. you can plug that file-name into other places you might use
a file-name (e.g. '> {foo}' or 'ed <foo>', etc, etc).
There is very little in the world of networking that acts like a file. Yes,
you can go all hammer-nail, and use read() and write() to get data back and
forth, and think that makes it a file - but it's not.
For instance, files, as far as I know, generally don't have timeout
semantics. Can the average application that deals with a file, deal reasonably
with the fact that sometimes one gets half-way through the 'file' - and things
stop working? And that's a _simple_ one. How does a file abstraction match
to a multi-cast lossy (i.e. packets may be lost) datagram group?
For another major point (and the list goes on, I just can't be bothered to go
through it all), there's usually all sorts of other higher-level protocol in
there, so only specialized applications can make use of it anyway. Look at
HTTP: there's specialized syntax one has to spit out to say what file you
want, and the HTML files you get back from that can generally only be usefully
used in a browser.
Etc, etc.
Noel
Maybe of interest, here is something spacewarish..
|I was at Berkeley until July 1981.
I had to face a radio program which purported that..
Alles, was die digitale Kultur dominiert, haben wir den Hippies
zu verdanken.
Anything which dominates the digital culture is owed to the
Hippies
This "We owe it all to the Hippies" as well as "The real legacy of
the 60s generation is the Computer Revolution" actually in English
on [1] (talking about the beginning of the actual broadcast), the
rest in German. But it also lead(s) (me) to an article of the
Rolling Stone, December 1972, on "Fanatic Life and Symbolic Death
Among the Computer Bums"[2].
[1] http://www.swr.de/swr2/programm/sendungen/essay/swr2-essay-flowerpowerdaten…
[2] http://www.wheels.org/spacewar/stone/rolling_stone.html
That makes me quite jealous of your long hair, i see manes
streaming in the warm wind of a golden Californian sunset.
I don't think the assessment is right, though; I rather think it
is a continuous progress of science and knowledge, then maybe also
pushed by on-all-fronts efforts like "bringing a man to the moon
by the end of the decade", and, sigh, SDI, massive engineer and
science power concentration, etc. And crumbs thereof approaching
the general public, because of increased knowledge in the
industry. (E.g., the director of the new experimental German
fusion reactor that finally sprang into existence claimed
something like "next time it will not be that expensive due to
what everybody has learned".) And hunger for money, of course,
already in the 70s we had game consoles en masse in Italy, for
example, with Pacman, Donkey Kong and later then with Dragons Lair
or whatever its name was ("beautiful cool graphics!" I recall, though
on the street there were Italian peacocks, and meaning the
birds!).
--steffen
> From: Random832
> Does readlink need to exist as a system call? Maybe it should be
> possible to open and read a symbolic link - using a flag passed to open
What difference does it make? The semantics are the same, only the details of the
syntax are different.
Noel
On 3/23/17, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> I only know a tiny amount about Plan 9, never dove into deeply. QNX,
> on the other hand, I knew quite well. I was pretty good friends with
> one of the 3 or 4 people who were allowed to work on the microkernel, Dan
> Hildebrandt. He died in 1998, but before then he and I used to call each
> other pretty often to talk about kernel stuff, how stuff ought to be done.
> The calls weren't jerk off sessions, they were pretty deep conversations,
> we challenged each other. I was very skeptical about microkernels,
> I'd seen Mach and found it lacking, seen Minix and found it lacking
> (though that's a little unfair to Minix compared to Mach). Dan brought
> me around to believing in the microkernel but only if the kernel was
> actually a kernel. Their kernel didn't use up a 4K instruction cache,
> it left room for the apps and the processes that did the rest. That's
> why only a few people were allowed to work on the kernel, they counted
> every single instruction cache footprint.
>
> So tell me more about your OS, I'm interested.
Where do I start? I've got much of the design planned out. I've
thought about the design for many years now and changed my plans on
some things quite a few times. Currently the only code I have is a
bootloader that I've been working on somewhat intermittently for a
while now, as well as a completely untested patch for seL4 to add
support for QNX-ish single image boot. I am planning to borrow Linux
and BSD code where it makes sense, so I have less work to do.
It will be called UX/RT (for Universally eXtensible Real Time
operating system, although it's also a kind of a nod to QNX originally
standing for Quick uNiX), and it will be primarily intended as a
workstation and embedded OS (although it would be also be good for
servers where security is more important than I/O throughput, and also
for HPC). It will be a microkernel-based multi-server OS.
Like I said before, it will superficially resemble QNX in its general
architecture (for example, much like QNX, the lowest-level user-mode
component will be a single root server called "proc", which will
handle process management and filesystem namespace management),
although the specifics of the IPC model will be somewhat different. I
will be using seL4 and/or Rux as the microkernel (so unlike that of
QNX, UX/RT's process server will be a completely separate program).
proc and most other first-party components will be mostly written in
Rust rather than C, although there will still be a lot of third-party
C code. The network stack, disk filesystems, and graphics drivers will
be based on the NetBSD rump kernel and/or LKL (which is a
"librarified" version of the Linux kernel; I'll probably provide both
Linux and NetBSD implementations and allow switching between them and
possibly combining them) with Rust glue layers on top. For
performance, the disk drivers and disk filesystems will run within the
same server (although there will be one disk server per host adapter),
and the network stack will also be a single server, much like in QNX
(like QNX, UX/RT will mostly avoid intermediary servers and will
usually follow a process-per-subsystem architecture rather than a
process-per-component one like a lot of other multi-server OSes, since
intermediary servers can hurt performance).
As I said before, UX/RT will take the file-oriented architecture even
further than Plan 9 does. fork() and some thread-related APIs will be
pretty much the only APIs implemented as non-file-based primitives.
Pretty much the entire POSIX/Linux API will be implemented although
most non-file-based system calls will have file-based implementations
underneath (for example, getpid() will do a readlink() of /proc/self
to get the PID). Even process memory like the process heap and stack
will be implemented as files in a per-process memory filesystem (a bit
like in Multics) rather than being anonymous like on most other OSes.
ioctl() will be a pure library function for compatibility purposes,
and will be implemented in terms of read() and write() on a secondary
file.
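The getpid()-as-readlink() idea can already be demonstrated on Linux, where /proc/self is a symlink whose target is the caller's PID. Here's a minimal Python sketch of the described mechanism (UX/RT itself doesn't exist yet, so this runs against Linux's existing /proc):

```python
import os

def getpid_via_proc():
    # /proc/self is a symbolic link whose target is the PID of the
    # calling process, e.g. "12345"; resolving it yields the PID
    # without a dedicated getpid() system call.
    return int(os.readlink("/proc/self"))

pid = getpid_via_proc()
print(pid == os.getpid())  # the two agree
```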
Unlike other microkernel-based OSes, UX/RT won't provide any way to
use message passing outside of the file-based API. read() and write()
will use kernel calls to communicate directly with the process on the
other end (unlike some microkernel Unices in which they go through an
intermediary server). There will be APIs that expose the underlying
transport (message registers for short messages and a shared per-FD
buffer for long ones), although they will still operate on file
descriptors, and read()/write() and the low-level messaging APIs will
all use the same header format so that processes don't have to care
which API the process on the other
end uses (unlike on QNX where there are a few different incompatible
messaging APIs). There will be a new "message special" file type that
will preserve message boundaries, similar to SEQPACKET Unix-domain
sockets or SCTP (these will be ideal for RPC-type APIs).
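The boundary-preserving behavior described for "message special" files can be seen today with SOCK_SEQPACKET Unix-domain sockets, which this Python sketch uses as a stand-in:

```python
import socket

# A SEQPACKET pair preserves message boundaries: each recv() returns
# exactly one sent message, never a partial or coalesced one.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)
a.send(b"first message")
a.send(b"second")
m1 = b.recv(4096)   # returns b"first message", not merged data
m2 = b.recv(4096)   # returns b"second"
print(m1, m2)
a.close()
b.close()
```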
File descriptors will be implemented as sets of kernel capabilities,
meaning that servers won't have to check permissions like they do on
QNX. The API for servers will be somewhat socket-like. Each server
will listen on a "port" file in a special filesystem internal to the
process server, sort of like /mnt on Plan 9 (although it will be
possible for servers to export ports within their own filesystems as
well). Reads from the port will produce control messages which may
transfer file descriptors. Each client file descriptor will have a
corresponding file descriptor on the server side, and servers will use
a superset of the regular file API to transfer data. Device numbers as
such will not exist (the device number field in the stat structure
will be a port ID that isn't split into major and minor numbers), and
device files will normally be exported directly by their server,
rather than residing on a filesystem exported by one driver but being
serviced by another as in conventional Unix. However, there will be a
sort of similar mechanism, allowing a server to export "firm links"
that are like cross-filesystem hard links (much as in QNX).
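The control messages that transfer file descriptors are analogous to SCM_RIGHTS ancillary data on Unix-domain sockets, the closest existing mechanism. A Python sketch of that existing mechanism (not UX/RT's actual API):

```python
import array
import os
import socket

# A connected pair standing in for the client/server channel.
srv, cli = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The "file" being handed over: the read end of a pipe with data in it.
r, w = os.pipe()
os.write(w, b"handed-over descriptor")
os.close(w)

# Server side: attach the descriptor as SCM_RIGHTS ancillary data.
fds = array.array("i", [r])
srv.sendmsg([b"open"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

# Client side: receive the descriptor and use it like any other fd.
msg, ancdata, flags, addr = cli.recvmsg(16, socket.CMSG_LEN(fds.itemsize))
received = array.array("i")
received.frombytes(ancdata[0][2][:received.itemsize])
data = os.read(received[0], 64)
print(msg, data)
```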
Per-process namespaces like in Plan 9 will be supported. Unlike in
Plan 9, it will be possible for processes with sufficient privileges
to mount filesystems in the namespaces of other processes (to allow
more flexible scoping of mount points). Multiple mounts on one
directory will produce a union like in both QNX and Plan 9. Binding
directories as in Plan 9 will also be supported. In addition to the
per-process root on / there will also be a global root directory on //
into which filesystems are mounted. The per-process name spaces will
be constructed by binding directories from // into the / of the
process (neither direct mounts under / nor bindings in // will be
supported, but bindings between parts of / will of course be
supported).
The security model will be based on a per-process default-deny ACL
(which will be purely in memory; persisting process ACLs will be
implemented with an external daemon). It will be possible for ACL
entries to explicitly specify permissions, or to use the permissions
from the filesystem with (basically) normal Unix semantics. It will
also be possible for an entry to be a wildcard allowing access to all
files in a directory. Unlike in conventional Unix, there will be no
root-only system calls (anything security-sensitive will have a
separate device file to which access can be controlled through process
ACLs), and running as root will not automatically grant a process full
privileges. The suid and sgid bits will have no effect on executables
(the ACL management daemon will handle privilege escalation instead).
The native API will be mostly compatible with that of Linux, and a
purely library-based Linux compatibility layer will be available. The
only major thing the Linux compatibility layer specifically won't
support will be stuff dealing with logging users in (since it won't be
possible to revert to traditional Unix security, and utmp/wtmp won't
exist). The package manager will be based on dpkg and apt with hooks
to make them work in a functional way somewhat like Nix but using
per-process bindings, allowing for multiple environments or universes
consisting of different sets of packages (environments will be able to
be either native UX/RT or Linux compatibility environments, and it
will be possible to create Linux environments that aren't managed by
the UX/RT package manager to allow compatibility environments for
non-dpkg Linux distributions).
The init system will have some features in common with SMF and
systemd, but unlike those two, it will be modular, flexible, and
lightweight. System initialization such as checking/mounting
filesystems and bringing up network interfaces will be script-driven
like in traditional init systems, whereas starting daemons will be
done with declarative unit files that will be able to call (mostly
package-independent) hook scripts to set up the environment for the
daemon.
Initially I will use X11 as the window system, but I will replace it
with a lightweight compositing window server that will export
directories providing a low-level DRI-like interface per window.
Unlike a lot of other compositing window systems, UX/RT's window
system will use X11-style central client-side window management. I was
thinking of using a default desktop environment based on GNUstep
originally but I'm not completely sure if I'll still do that (a long
time ago I had wanted to put together a Mac OS X-like or NeXTStep-like
Linux distribution using a GNUstep-based desktop, a DPS-based window
server, and something like a fork of mkLinux with a stable module ABI
for a kernel, but soon decided I wanted to write a QNX-like OS
instead).
>
> Sockets are awesome but I have to agree with you, they don't "fit".
> Seems like they could have been plumbed into the file system somehow.
>
Yeah, the almost-but-not-quite-file-based nature of the socket API is
my biggest complaint about it. UX/RT will support the socket API but
it will be implemented on top of the normal file system.
> Can't speak to your plan 9 comments other than to say that the fact
> that you are looking for provided value gives me hope that you'll get
> somewhere. No disrespect to plan 9 intended, but I never saw why it
> was important that I moved to plan 9. If you do something and there
> is a reason to move there, you could be worse but still go farther.
I'd say a major problem with Plan 9 is the way it changes things in
incompatible ways that provide little advantage over the traditional
Unix way of doing things (for example, instead of errno, libc
functions set a pointer to an error string, which I don't
think provides enough of a benefit to break compatibility). Another
problem is that some facilities Plan 9 provides aren't general enough
(e.g. the heavy focus on SSI clustering, which never really was widely
adopted, or the rather 80s every-window-is-a-terminal window system).
UX/RT will try to be compatible with conventional Unices and
especially Linux wherever it is reasonable, since lack of applications
would significantly hold it back. It will also try to be as general as
possible without overcomplicating things.
All, I'm setting up a uucp site 'tektronix'. When I send e-mail, I'm seeing
this error:
ASSERT ERROR (uux) pid: 235 (3/24-00:09) CAN'T OPEN D.tektronX00D0 (0)
Something seems to be trimming the hostname to seven chars. If I do:
# hostname
tektronix
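For what it's worth, the seven-character limit matches uucp's spool-file naming, where the remote system name is clamped when building D.* (data) file names. A hypothetical Python reconstruction (the function and constant names are illustrative, not the actual uucico source):

```python
# Historic uucp implementations clamp the system name to 7 characters
# when composing spool-file names; "tektronix" thus becomes "tektron"
# and the data file comes out as D.tektronX00D0.
MAXBASENAME = 7  # illustrative name for the 7-char limit

def data_file_name(sysname, grade, seq):
    return "D.%s%s%s" % (sysname[:MAXBASENAME], grade, seq)

name = data_file_name("tektronix", "X", "00D0")
print(name)  # → D.tektronX00D0
```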
Thanks, Warren
On Fri, Mar 24, 2017 at 4:08 PM, Andy Kosela <andy.kosela(a)gmail.com> wrote:
>
> [snip]
> Dan, that was an excellent post.
>
Thanks! I thought it was rather too long/wordy, but I lacked the time to
make it shorter.
> I always admired the elegance and simplicity of Plan 9 model which indeed
> seem to be more UNIX like than todays BSDs and Linux world.
>
> The question though remains -- why it has not been more successfull? The
> adoption of Plan 9 in the real world is practically zero nowadays and even
> its creators like Rob Pike moved on to concentrate on other things, like
> the golang.
>
I think two big reasons and one little one.
1. It wasn't backwards compatible with the rest of the world and forced you
to jump headlong into embracing a new toolset. That is, there was no
particularly elegant way to move gradually to Plan 9: you had to adopt it
all from day one or not at all. That was a bridge too far for most. (Yeah,
there were some shims, but it was all rather hacky.)
2. Relatedly, it wasn't enough of an improvement over its predecessor to
pull people into its orbit. Are acme or sam really all that much better
than vi or emacs? Wait, don't answer that...but the reality is that people
didn't care enough whether they were or not. The "everything is a
file(system)" idea is pretty cool, but we've already had tools that worked
then and work now. Ultimately, few people care how elegant the abstractions
are or how well the kernel is implemented.
And the minor issue: The implementation. Plan 9 was amazing for when it was
written, but now? Not so much.
I work on two related kernels: one that is directly descended from Plan 9
(Harvey, for those interested) and one that's borrowed a lot of the code
(Akaros) and in both we've found major, sometimes decades old bugs. There
are no tests, and there are subtle race conditions or rarely tickled bugs
lurking in odd places. Since the system is used so little, these don't
really get shaken out the way they do in Linux (or to a lesser extent the
BSDs or commercial Unixes). In short, some code is better than other code
and while I'd argue that the median quality of the implementation is
probably higher than that of Linux or *BSD in terms of elegance and
understandability, it's not much higher and it's certainly buggier.
And the big implementation issue is lack of hardware support. I stood up
two plan 9 networks at work for Akaros development and we ran into major
driver issues with the ethernets that took a week or two to sort out. On
the other hand, Linux just worked.
Eventually, one of those networks got replaced with Linux and the other is
probably headed that way. In fairness, this has to do with the fact that no
one besides Ron and me was interested in using them or learning how they
work: people *want* Linux and the idea that there's this neat system out
there for them to explore and learn about the cool concepts it introduced
just isn't a draw. I gave a presentation on Plan 9 concepts to the Akaros
team a year and a half or so ago and a well-known figure in the Linux
community who was working with us at the time had only this to say: "the user
interface looks like it's from 1991." None of the rest interested him
at all: the CPU command usually kind of blows people's minds, but after I
went through the innards of it the response was, "why not just use SSH?"
I've had engineers ask me why Plan 9 used venti and didn't "just use git"
(git hadn't been invented yet). It's somewhat lamentable, but it's also
reality.
- Dan C.
> From: Random832
> "a stream consisting of a serialized sequence of all of whatever
> information would have been supplied to/by the calls to the special
> function" seems like a universal solution at the high level.
Yes, and when the only tool you have is a hammer, everything looks like
a nail.
Noel
> From: Nick Downing
> Programming is actually an addiction.
_Can be_ an addiction. A lot of people are immune... :-)
> What makes it addictive to a certain type of personality is that little
> rush of satisfaction when you try your code and it *works*... ... It was
> not just the convenience and productivity improvements but that the
> 'hit' was coming harder and faster.
Joe Weizenbaum wrote about the addiction of programming in his famous book
"Computer Power and Human Reason" (Chapter 4, "Science and the Compulsive
Programmer"). He attributes it to the sense of power one gets, working in a
'world' where things do exactly what you tell them. There might be something
to that, but I suspect your supposition is more likely.
> This theory is well known to those who design slot machines and other
> forms of gambling
Oddly enough, he also analogizes to gamblers!
Noel
> From: "Ron Natalie"
> I was thinking about Star Wars this morning and various parodies of it
> (like Ernie Foss's Hardware Wars)
The best one ever, I thought, was Mark Crispin's "Software Wars". (I have an
actual original HAKMEM!)
> I rememberd the old DEC WARS.
I seem to vaguely recall a multi-page samizdat comic book of this name? Or am
I mis-remembering its name? Does this ring any bells for anyone?
Noel
I realized after writing that I was being slightly unfair since one valid
use case that DOES work correctly is something like:
ssh -X <some host> <command that uses X>
This is occasionally handy, although the best use case I can think of is
running a browser on some internet-facing machine so as to temporarily
change your IP address, and this use case isn't exactly bulletproof since
at least Google Chrome will look for a running instance and hand over to it
(despite that instance having a different DISPLAY= setting). Nevertheless
my point stands which is that IMO a programmatic API (either through .so or
.dll linkage, or through ioctls or dedicated syscalls) should be the first
resort and anything else fancy such as remoting, domain specific languages,
/proc or fuse type interfaces, whatever, should be done through extra
layers as appropriate. You shouldn't HAVE to use them.
cheers, Nick
On Mar 15, 2017 9:15 PM, "Tim Bradshaw" <tfb(a)tfeb.org> wrote:
On 15 Mar 2017, at 01:13, Nick Downing <downing.nick(a)gmail.com> wrote:
>
> But the difficulty with X Windows is that the remoting layer is always
there, even though it is almost completely redundant today.
It's redundant if you don't ever use machines which you aren't physically
sitting next to and want to run any kind of graphical tool run on them. I
do that all the time.
--tim
> From: Tim Bradshaw
> I don't know about other people, but I think the whole dope thing is why
> computer people tend *not* to be hippies in the 'dope smoking' sense. I
> need to be *really awake* to write reasonably good code ... our drugs
> of choice are stimulants not depressants.
Speak for yourself! :-)
(Then again, I have weird neuro-chemistry - I have modes where I have a large
over-supply of natural stimulant... :-)
My group (which included Prof. Jerry Saltzer, who's about as straight an arrow
as they make) was remarkably tolerant of my, ah, quirks... I recall at one
point having a giant tank of nitrous under the desk in my office - which they
boggled at, but didn't say anything about! ;-)
Noel
"Two Bacco, here, my Bookie."
Awesome.
David
> Date: Wed, 22 Mar 2017 21:26:16 -0400
> From: "Ron Natalie" <ron(a)ronnatalie.com>
> To: <tuhs(a)minnie.tuhs.org>
> Subject: [TUHS] DEC Wars
> Message-ID: <001d01d2a374$77e02dc0$67a08940$(a)ronnatalie.com>
> Content-Type: text/plain; charset="utf-8"
>
> I was thinking about Star Wars this morning and various parodies of it (like
> Ernie Foss's Hardware Wars) and I rememberd the old DEC WARS. Alas when I
> tried to post it, it was too big for the listserv. So here's a link for
> your nostalgic purposes. I had to find one that was still in its
> fixed-pitch glory complete with the ASCII-art title.
>
>
>
> http://www.inwap.com/pdp10/decwars.txt
>
> From: Steffen Nurpmeso
> This "We owe it all to the Hippies"
Well, yes and no. Read "Hackers". There wasn't a tremendous overlap between
the set of 'nerds' (specifically, computer nerds) and 'hippies', especially in
the early days. Not that the two groups were ideologically opposed, or
incompatible, or anything like that. Just totally different.
Later on, of course, there were quite a few hackers who were also 'hippies',
to some greater or lesser degree - more from hackers taking on the hippie
vibe, than the other way around, I reckon. (I think that to be a true computer
nerd, you have to start down that road pretty early on, and with a pretty
severe commitment - so I don't think a _lot_ of hippies turned into hackers.
Although I guess the same thing, about starting early, is true of really
serious musicians.)
> "The real legacy of the 60s generation is the Computer Revolution"
Well, there is something to that (and I think others have made this
observation). The hippie mentality had a lot of influence on everyone in that
generation - including the computer nerds/hackers. Now, the hackers may have
had a larger impact, long-term, than the hippies did - but in some sense a
lot of hippie ideals are reflected in the stuff a lot of hackers built:
today's computer revolution can be seen as hippie idealism filtered through
computer nerds...
But remember things like this, from the dust-jacket of the biography of
Prof. Licklider:
"More than a decade will pass before personal computers emerge from the
garages of Silicon Valley, and a full thirty years before the Internet
explosion of the 1990s. The word computer still has an ominous tone,
conjuring up the image of a huge, intimidating device hidden away in an
over-lit, air-conditioned basement, relentlessly processing punch cards for
some large institution: _them_. Yet, sitting in a nondescript office in
McNamara's Pentagon, a quiet ... civilian is already planning the revolution
that will change forever the way computers are perceived. Somehow, the
occupant of that office ... has seen a future in which computers will empower
individuals, instead of forcing them into rigid conformity. He is almost
alone in his conviction that computers can become not just super-fast
calculating machines, but joyful machines: tools that will serve as new media
of expression, inspirations to creativity, and gateways to a vast world of
online information."
Now, technically Lick wasn't a hippie (he was, after all, 40 years old in
1965), and he sure didn't have a lot of hippie-like attributes - but he was,
in some ways, an ideological close relative of some hippies.
Noel
Some pointers. Warren, worth grabbing these IMHO.
I will ask him if he's willing to donate whatever troff
he has.
Arnold
> Date: Wed, 22 Mar 2017 16:47:42 -0400 (EDT)
> From: Brian Kernighan <bwk(a)CS.Princeton.EDU>
> To: arnold(a)skeeve.com
> Subject: Re: CSTRs?
>
> There are a few things here:
> http://www.netlib.org/cgi-bin/search.pl
> but it seems to be mostly the numerical analysis ones.
>
> But Google reveals this one:
> http://www.theobi.com/Bell.Labs/cstr/
> which seems to be all postscript.
>
> I have some odds and ends, like the troff manual and tutorial,
> but otherwise only PDF.
>
> Sorry -- not much help.
>
> Brian
>
> On Wed, 22 Mar 2017, arnold(a)skeeve.com wrote:
>
> > Hi.
> >
> > Do you by chance happen to have copies of the CSTRs that used to be
> > available at the Bell Labs web site?
> >
> > And/or troff source for any? The TUHS people would like to archive
> > at least the Unix-related ones...
> >
> > Thanks,
> >
> > Arnold
I was thinking about Star Wars this morning and various parodies of it (like
Ernie Foss's Hardware Wars) and I remembered the old DEC WARS. Alas, when I
tried to post it, it was too big for the listserv. So here's a link for
your nostalgic purposes. I had to find one that was still in its
fixed-pitch glory complete with the ASCII-art title.
http://www.inwap.com/pdp10/decwars.txt
Early on when I was consulting for what would become my company, I got stuck
on a weekend to fix something with the coffee pot and a box of Entenmann's
chocolate donuts. These have a coating that's kind of like wax you have to
soften up in the hot coffee to be digestible. As a result of that weekend
any crunch time was referred to as waxy chocolate donut time. Another
crunch weekend I was working on the firmware for an esoteric digital data
tape player. I would test it. Find the fault. Go to one machine
running Xenix on a 286 which had the editor and the assembler. I'd then
floppy it over to a DOS machine that had the EPROM burner. I then would
take the eprom and stick it into the controller. The president of the
company had two jobs. He was to follow behind me and refill my coffee cup
and scarf up the used EPROMS and dump them into the eraser so we wouldn't
run out of ones to program.
For years, we were a six person company of which only me and the president
drank coffee. When the one pot we made in the morning was gone, that was
it for coffee. As the company got larger and there were more coffee
drinkers, people would just make a new pot. This coincided with me having
my office moved adjacent to the coffee maker. Every time I had a long
compile or something I'd look down and see my cup was empty and I'd pop
outside and get a new cup. Not surprisingly, I started to get heart
palpitations. The doctor asks how much coffee I drank, and I tell her
something like thirty cups a day. She tells me I may want to cut back on
that.
My best job was working for a friend whose company operates out of his home.
He'd make espresso for me and we'd drink that (and eat his wife's excellent
leftover food) until about six and then, he being another wine judge, we'd
switch to wine.
-----Original Message-----
From: Tim Bradshaw [mailto:tfb@tfeb.org]
Sent: Wednesday, March 22, 2017 8:51 AM
To: Ron Natalie
Cc: Dave Horsfall; The Eunuchs Hysterical Society
Subject: Re: [TUHS] Were all of you.. Hippies?
I don't know about other people, but I think the whole dope thing is why
computer people tend *not* to be hippies in the 'dope smoking' sense. I
need to be *really awake* to write reasonably good code (if ever I do write
reasonably good code) in the same way I need to be really awake to do maths
or physics. So I live on a diet of coffee and sugar and walk around
twitching as a result (this is an exaggeration, but you get the idea). I
have the strength of will to not use stronger stimulants (coffee is mostly
self-limiting, speed not so much).
I was searching today to find where the Unix pipeline spell checking
method "tr | sort | uniq | comm" was first published. I found it in a
document by Brian Kernighan titled "UNIX for Beginners".
"The pipe mechanism lets you fabricate quite complicated operations out
of spare parts already built. For example, the first draft of the spell
program was (roughly) [...]"
http://www.psue.uni-hannover.de/wise2014_2015/material/Unix-Beginners.pdf#p…
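The pipeline's stages map directly onto set operations, which a short Python sketch can mirror stage by stage (the word list here is an inline stand-in for /usr/dict/words):

```python
import re

def spell(text, dictionary):
    # tr -cs A-Za-z '\n'  -> split the text into alphabetic words
    words = re.findall(r"[A-Za-z]+", text)
    # sort | uniq         -> a sorted list of unique lowercased words
    unique = sorted({w.lower() for w in words})
    # comm -23            -> words in the text but not in the dictionary
    known = {d.lower() for d in dictionary}
    return [w for w in unique if w not in known]

misspelled = spell("The pipe mechansim lets you fabricate operations",
                   ["the", "pipe", "mechanism", "lets", "you",
                    "fabricate", "operations"])
print(misspelled)  # → ['mechansim']
```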
Then my problem became properly citing the document. Searching on
Google, Google Scholar, and IEEE Xplore didn't help me. In the end I
found the reference in a 1993 refer file of all Bell Labs Computer
Science Technical Reports I had saved from my student days.
%cstr 75
%report Comp. Sci. Tech. Rep. No. 75
%keyword CSTR OBS
%author B. W. Kernighan
%title UNIX for Beginners
%date February 1979
%journal UNIX Programmer's Manual
%volume 2
%other Section 3
%date January 1979
%type techreport obsolete
I couldn't find the refer file online, so I'll send a copy to Warren for
archiving.
However, I'm wondering whether we should/could do something to also
archive the actual pages of all the Bell Labs Computer Science Technical
Reports. I think some are the only authoritative primary source for
many Unix-related gems and a lack of an electronic archive means they
will slowly fade into non-existence. I remember we had many of those at
the library of Imperial College London. Any suggestions on what we can
do to archive this material?
Diomidis
On Wed, 15 Mar 2017, Warren Toomey wrote:
> On Wed, Mar 15, 2017 at 10:24:22AM +1000, Warren Toomey wrote:
> > Then, perhaps a better news reader. Any preferences :-)
>
> So far I've though of (and found)
[...]
After going through several readers, I ended up with "trn". Also, Alpine
has a passable reader.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Brings back memories...
Back in early 1981 I worked for a shipping line in Cranford, NJ, in their IT
department. The company had just ordered four new super-wide cargo ships that
just fit the Panama Canal, and the Chief Marine Architect came to the IT
department to ask for assistance in programming a PDP-8: he needed a load
distribution check program so that the ship would not keel over, or break in
the middle, when being loaded with containers stacked twelve high. It had to
take stress and strain into account - mathematical algorithms. My boss called
me in to talk to him, and he asked if I knew how to determine the area under
a curve. I knew my engineering math - Simpson's rule - and also FORTRAN IV,
and was immediately drafted. What was also needed was a graphical way of
entering the data and of displaying the results, optionally graphically, on
the screen (tty?). My friend Wayne Rawls knew BASIC - he wrote the front end
and passed me the input on a large floppy; my FORTRAN IV program ran and did
the stress/strain analysis for the ship, and I passed the output back to him
on the floppy, which he then displayed on-screen.
A lot of grinding of the floppy drives for the FORTRAN compiler - no spinning
hard disks, as the PDP-8 would be installed on board ship, and in those days
hard disks lived in air-conditioned computer rooms, so one would have been a
non-starter.
It all worked well - Wayne took the PDP-8 on a ship to Norfolk to get it
checked out, and the company used it for many years!
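The area-under-a-curve method Atindra names, composite Simpson's rule, is compact enough to show; here in Python rather than the original FORTRAN IV:

```python
def simpson(f, a, b, n):
    # Composite Simpson's rule over [a, b] with n (even) subintervals:
    # endpoints weighted 1, odd interior points 4, even interior points 2.
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Simpson's rule is exact for cubics: the integral of x**3 on [0, 2] is 4.
print(simpson(lambda x: x**3, 0.0, 2.0, 4))  # → 4.0
```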
Atindra.
-----Original Message-----
>From: Dave Horsfall <dave(a)horsfall.org>
>Sent: Mar 21, 2017 5:34 PM
>To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: [TUHS] Happy birthday, PDP-8!
>
>OT, but of interest to a few people here :-)
>
>The venerable PDP-8 was introduced in 1965 today (or tomorrow if you're on
>the wrong side of the date line). It was the first computer I ever
>used, back around 1970 (I think I'd just left school and was checking out
>the local University's computer department, with a view to majoring in
>Computer Science (which I did)).
>
>--
>Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."