On Tue, May 29, 2018 at 9:10 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Wed, 30 May 2018, Johnny Billquist wrote:
>
>> Uh? Say what? What does XON/XOFF flow control have to do with 8-bit data?
>
> Err, how do you send data that happen to be XON/XOFF? By futzing around
> with DLE (which I've never seen used)?
Right - you have to escape things and it is a real mess. I have seen some
of the SI/SO/DLE stuff in mechanical systems like ticker tape. I never saw
a real 8-bit interface try to do it and succeed -- it's messy, and I suspect
that when data overruns occur all hell breaks loose. By the time of 8-bit
data at speeds like 9.6K, protocols like UUCP or IP over serial did not even
bother. I suspect that in the old 5-bit Baudot code days it was more popular
as a way to get a larger character set, but the speeds were much slower (and
the UART had not yet been invented by Gordon Bell).
By the time of the PDP-11, hardware flow control was the norm in many (most)
interfaces. I never understood why DEC got ^S/^Q happy. You can really
see the issues with it on my PiDP-8 running OS/8, because the flow control is
done by the OS and it's much too late to be able to react. So you get screens
full of data before the ^S is processed.
Once Gordon created the UART, Western Digital licensed it and made it into
a chip which most folks used. DEC started to use the WD chipset in PDP-8
serial interfaces - it always struck me as strange that HW flow control was
not used more. The KL11/DL11 supported the wires, although the SW had to do
the work. As I said, McNamara supported it in the DH11; if I recall (I
don't have the prints any more), he put an AND gate in the interrupt
bit, so the OS does not get a transfer-complete interrupt unless the "I'm
ready" signal is asserted from the other side.
When I lectured data com to folks years ago, I liked to express the ECMA
serial interface in this manner:
There are 6 signal wires and one reference (signal) ground:
1. Data XMT (output)
2. Data RCV (input)
3. I'm Alive (output)
4. He's Alive (input)
5. I'm Ready (output)
6. He's Ready (input)
Everything else is either extra protocol to solve some other problem, or is
for signal quality (i.e. shielding, line balancing, etc.). Which signal
names map to which actual pins on the ECMA interface can be a
little different depending on the manufacturer of the Data Communications
Equipment (DCE - a.k.a. the modem) and the Data Terminal Equipment
(DTE - a.k.a. the host).
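For reference, here is that six-wire model written down as a C struct,
annotated with the conventional RS-232 signal names; pairing "alive" with
DTR/DSR and "ready" with RTS/CTS is my assumption about the mapping, not
something stated above:

    struct serial_iface {
        int data_xmt;   /* 1. Data XMT   (output) - TXD */
        int data_rcv;   /* 2. Data RCV   (input)  - RXD */
        int im_alive;   /* 3. I'm Alive  (output) - DTR */
        int hes_alive;  /* 4. He's Alive (input)  - DSR */
        int im_ready;   /* 5. I'm Ready  (output) - RTS */
        int hes_ready;  /* 6. He's Ready (input)  - CTS */
    };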
BTW: ECMA and WE did specify both signal names and the connectors to
use, and I think the European telcos did also (but I only ever saw a German
spec, and it was in German, which I do not read). DCEs are supposed to be
sockets (female, and originally DB25s) and DTEs are supposed to be plugs
(male). If you actually follow the spec properly, you never have issues.
The problem was that a number of terminal manufacturers used the wrong sex
connector, as a lot of them never read the specs [the most famous being the
Lear Siegler ADM3, which was socketed like a DCE but pinned as a DTE -
probably because it was cheap, and thus became very popular].
Also, to confuse things, where it got all weird was how the DCEs handled
answering the phone. And it was because of the answering that we got all
the messy stuff like Ring, Data Set/Terminal Ready/Carrier and the like.
The DCE manufacturers in different countries had answer protocols different
from the original Bell System's. IIRC the original WE103 needed
help from the host; support for auto-answer was a later feature of the WE212.
The different protocols are all laid out well in another late-60s/early-70s
data com book by a UMass prof whose name I now forget (his book is
white with black letters, called 'Data Communications', as opposed to
McNamara's dark blue DEC Press book 'Principles of Data Communications').
So ... coming back to Unix for this list... AT&T owned Western Electric
(WE), which was the largest US manufacturer of DCEs [see the 1949 law
suit/the consent decree et al]. In fact, at Bell Labs a lot of people using
UNIX did not have 'hardwired terminals' - they had a modem connection. So
Unix as a system tends to have support for DTE/DCE in the manner WE intended,
as a result. It's not surprising that we see it in the solutions
that are there.
Back in 1980 or 1981, when I first started hacking
on UNIX but still had some TOPS-10 DNA lingering in
my blood, I put in a really simple control-T
implementation. Control-T became a new signal-
generating character in the tty driver; it sent
signal 16. Unlike interrupt and quit, it did not
flush input or output buffers. Unlike any other
signal, SIG_DFL caused the signal to be silently
ignored. (I don't remember why I didn't just teach
/etc/init and login to set that signal to SIG_IGN
by default; maybe I looked and found too many other
programs that monkeyed with every signal, maybe I
just didn't think of it.)
I then wrote a little program meant to be run in the
background from .profile, that dug around in /dev/kmem,
figured out what was likely nearest-to-foreground process
associated with the same terminal, and printed a little
status info for that process.
It didn't take long for the remaining TOPS-10 DNA to
leach away, and besides it is much easier to run some
program in another window now that that is almost always
possible, so I don't miss it. But I like that idea
better than, in effect, hacking a mini-ps into the kernel,
even though the kernel doesn't have to do as much work
to get the data.
I also thought it made more sense to have a general
mechanism that could be used for other things. That
even happened once. The systems I ran were used, among
other things, for developing SMP, the symbolic-manipulation
interpreter worked on by Stephen Wolfram, Geoffrey Fox,
Chris Cole, and a host of graduate and undergraduate students.
(My memory of who deserves credit differs somewhat from
that of at least one person named.) SMP, by its nature,
sometimes had to spend a substantial time sitting and
computing. Someone (probably Wolfram, says my aging
memory) heard about the control-T stuff, asked me how
to use it, and added code to SMP so that during a long
computation control-T would tell you something about
what it was doing and how it was progressing.
Since the signal was, like interrupt and kill, sent
to the whole process group, there was no conflict if
you also had my little control-T monitor running in
the background.
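The idea survives today as SIGINFO on the BSDs (and macOS), where
control-T signals the foreground process group. A minimal sketch of a
program that catches it to report progress, much as SMP did (a sketch,
not my original code):

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t info_wanted;

    static void on_info(int sig)
    {
        (void)sig;
        info_wanted = 1;    /* just set a flag; report from the main loop */
    }

    int main(void)
    {
    #ifdef SIGINFO
        signal(SIGINFO, on_info);
    #endif
        for (long i = 0; i < 1000000000L; i++) {
            if (info_wanted) {
                info_wanted = 0;
                fprintf(stderr, "still computing: step %ld\n", i);
            }
            /* ... one step of the long computation ... */
        }
        return 0;
    }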
I never tried to send my hacked-up UNIX to anyone else,
so if anyone else did the same sort of control-T hack,
they likely invented it independently.
Norman Wilson
Toronto ON
> From: Clem Cole
> Tops-20 or TENEX (aka Twin-Ex).
ISTR the nickname we used was 'TWENEX'?
BTW, the first 20 at MIT (MIT-XX) had a 'Dos Equis' label prominently stuck to
it... :-)
Noel
> From: Dave Horsfall
> I have a clear recollection that UNSW's driver (or was it Basser?) did
> not use interrupts .. but used the clock interrupt to empty the silos
> every so often. I'd check the source in the Unix Archive, but I don't
> remember which disk image it's in ... Can anyone confirm or deny this?
I found this one:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=AUSAM/sys/dmr/dz.c
which seems to be the one you're thinking of, or close to it.
It actually does use interrupts, on both sides - sort of. On the input side,
it uses the 'silo alarm', which interrupts when the input buffer has 16
characters in it. This has the same issue as the silo on the DH11 - if there
are fewer characters than that waiting, the host never gets an interrupt. Which
may be why it does the timer-based input check also?
The output side is entirely interrupt driven; it _does_ reduce the number of
interrupts by checking _every_ output line (on every DZ11 in the machine) to
see if that line's ready for a character when it gets any output interrupt,
which will definitely seriously reduce the number of output interrupts - but
even then, if _one_ line is going flat out, that's still 1000 interrupts per
second.
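In outline, the transmit-interrupt routine amounts to this (a sketch with
invented names, not the actual AUSAM dz.c):

    #define NDZ   2                 /* DZ11 boards in the machine */
    #define NLINE 8                 /* lines per DZ11 */

    struct dzline {
        int tx_ready;               /* transmitter can take a character */
        int nqueued;                /* characters waiting for this line */
    };

    extern struct dzline dzline[NDZ][NLINE];
    extern int  dz_nextc(int board, int line);        /* dequeue next char */
    extern void dz_load(int board, int line, int c);  /* write TX register */

    /* On any output interrupt, scan every line of every board and feed
     * each one that is ready, so one interrupt can service many lines. */
    void dzxint(void)
    {
        int b, l;

        for (b = 0; b < NDZ; b++)
            for (l = 0; l < NLINE; l++)
                if (dzline[b][l].tx_ready && dzline[b][l].nqueued > 0)
                    dz_load(b, l, dz_nextc(b, l));
    }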
Noel
Arthur Krewat:
On 5/29/2018 4:11 PM, Dan Cross wrote:
> "I don't always use computers, but when I do, I prefer PDP-10s."
>
>     - Dan C.
Write-in for President in 2020.
===
Only if he's Not Insane.
Norman Wilson
Toronto ON
Long ago, at Caltech, I ran a VAX which had a mix of DZ11s and
Able DH/DMs. The latter were indeed much more pleasant, both
because of their DMA output and their fuller modem control.
For the DZ11s we used a scheme that originated somewhere in
the USG-UNIX world: output was handled through a KMC11.
Output interrupts were disabled on the DZ; a program
running in the KMC fetched output data with DMA, then
spoon-fed it into the DZ, polling the status register
to see when it was OK to send more, then sending an
interrupt to the host when the entire data block had
been sent.
The KMC, for those fortunate enough not to have programmed
it, has a very simple processor, sort of on the level of
microcode. It has a few specialized registers and a
simple instruction set which I don't think had all four
of the usual arithmetic ops. I had to debug the KMC
program so I had to learn about it.
When I moved to Bell Labs a few years later, we didn't
need the KMC to drive serial lines--Dennis's stream I/O
code was able to do that smoothly enough, even with DZ11s
on a VAX-11/750 (and without any assembly-language help
either!). But my experience with the KMC was useful
anyway: we used them as protocol-offload processors for
Datakit, and that code needed occasional debugging and
tweaking too. Bill Marshall had looked after that stuff
in the past, but was happy to have someone else who
wasn't scared of it.
Norman Wilson
Toronto ON
> From: Paul Winalski
> DZ11s ... the controller had no buffer
Huh? The DZ11 did have an input buffer. (See the 'terminals and communications
handbook', 1978-79 edition, page 2-238: "As each character is received ...
the data bits are placed ... in a .. 64-word deep first-in/first-out hardware
buffer, called a 'silo'.")
Or did you mean output:
> if you were doing timesharing it could bring the CPU to its knees in
> short order
The thing that killed an OS was the fact that output was programmed I/O, a
character at a time; using interrupt-driven operation, it took an interrupt
per character. So for a 9600 baud line, 9 bits/character (1 start + 7 data + 1
stop - depending on the line configuration), that's about 1000 characters per
second -> 1000 interrupts per second.
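Spelled out as code, under that assumed framing:

    #include <stdio.h>

    int main(void)
    {
        int baud = 9600, bits_per_char = 9;   /* 1 start + 7 data + 1 stop */
        printf("%d interrupts/sec\n", baud / bits_per_char);   /* ~1066 */
        return 0;
    }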
The DH11 used DMA for output, and was much easier on the machine.
Noel
Lars Brinkhoff <lars(a)nocrew.org> reports on Mon, 28 May 2018 10:31:56 +0000:
>> But apparently the inspiration came from VMS:
>> http://web.archive.org/web/20170527120123/http://www.unixtop.org:80/about.s…
That link contains the statement
>> The first version of top was completed in the early part of 1984.
However, on TOPS-20, which was developed several years before VMS, but
still from the same corporation, we had the sysdpy utility which
produced a similar display as top does.
From my source archives, I find in score/4-utilities/sysdpy.mac the
ending comments:
;462 - DON'T DO A RLJFN AFTER A CLOSF IN NEWDPY
;<4.UTILITIES>SYSDPY.MAC.58, 2-Jun-79 14:15:54, EDIT BY DBELL
;461 - START USING STANDARD TOPS-20 EDIT HISTORY CONVENTIONS, AND
; REMOVE OLD EDIT HISTORY.
...
;COPYRIGHT (C) 1976,1977,1978,1979 BY DIGITAL EQUIPMENT CORPORATION, MAYNARD, MASS.
I therefore expect that there was a 460-entry list of log messages that
predated 2-Jun-1979, and likely went back a few years. Two other
versions of sysdpy.mac in my archives have also dropped log messages
before 461.
Even before TOPS-20, on the CDC 6400 SCOPE operating system, there was
a similar tool (whose name I no longer recall) that gave a
continuously updated display of system-wide process activity. That was
available in at least late 1973.
I suspect that top-like displays were added to most other interactive
operating systems, as soon as screen terminals made updates convenient
without wasting console paper. One of the first questions likely to
be asked by interactive users is "what is my job doing?".
In a TOPS-20 terminal window, you could type Ctl-T to get a one-line
status report for the job that was currently running from your
terminal. For many users, that was preferable to sysdpy, and it was
heavily used.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Lars Brinkhoff
> I'm surprised it appeared that late. Were there any other versions or
> similar Unix programs before that?
The MIT ~PWB1 system had a thing called 'dpy', I think written at MIT based on
'ps' (and no doubt inspired by ITS' PEEK), which had similar functionality.
Seems like it never escaped, though. Man page and source here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man1/dpy.1
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/dpy.c
The top of my hard-copy man page says 'November 1977', but I suspect it dates
back further than that.
Noel
On this day in 1936, notable mathematician Alan Turing submitted his paper
"On Computable Numbers", thereby laying the foundations for today's
computers.
Sigh; if only he hadn't eaten that apple... And we'll never know whether
it was murder or suicide.
-- Dave
> I still find pic really useful ...
> I don't know of any other tool that lets you do drawings like that
Unix had "ideal", a remarkable language by Chris Van Wyk, based on complex
numbers and capable of some constraint solving. Its code seemed to be
lost but can now be found in one of the online v10 repositories. I've
been meaning to try to resurrect it. If anyone has already done so,
I'd love to hear about it.
I, too, have some pic macros, though no big coherent packages, to do
things like polar coordinates and solving for the intersection of lines
and circles. I have even in extremis made filled triangles with scripts
that massage PostScript by deleting corners of filled rectangles. Then
from triangles you can, with patience, make polygons.
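As an illustration of the sort of polar-coordinate helper meant here (a
made-up example, not the actual macros), a few lines of C can do the
trigonometry and emit pic "line to" commands for a regular polygon:

    #include <math.h>
    #include <stdio.h>

    /* Emit pic source for a regular n-gon of radius r (inches). */
    int main(void)
    {
        int i, n = 6;
        double r = 1.0;

        printf("move to %g,%g\n", r, 0.0);
        for (i = 1; i <= n; i++) {
            double a = 2 * M_PI * i / n;
            printf("line to %g,%g\n", r * cos(a), r * sin(a));
        }
        return 0;
    }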
Doug
A worthy question.
---------- Forwarded message ----------
From: Richard Haight <dickhaight(a)gmail.com>
Date: Thu, May 24, 2018 at 2:53 PM
Subject: unix
To: jpl.jpl(a)gmail.com
Recently I was asked if I still had a spare deck of the Unix-25 cards.
Hadn’t thought of them in years. But it made me realize that 2019 will be
the 50th. Is anyone working on something to mark the anniversary?
Hello,
I'm curious about the history of "top". As far as I can see, the first
version was written by William LeFebvre and posted to net.sources in
1984. I'm surprised it appeared that late. Were there any other
versions or similar Unix programs before that?
Best regards,
Lars Brinkhoff
All, I've just received the following e-mail. I am not able to physically
get these documents, but if you are interested in them, feel free to contact
Mel yourself.
Cheers, Warren
----- Forwarded message from meljmel-unix(a)yahoo.com -----
Date: Wed, 23 May 2018 13:30:09 +1000 (AEST)
From: meljmel-unix(a)yahoo.com
To: Warren T <wkt(a)tuhs.org>
Subject: Old Unix manuals, TMs, etc
Hi,
I started working at Bell Labs in 1971 and although
not in the computing science research department, I
was in another department down the hall. As a result
I have many old Unix manuals, TM's and other papers
that I wish to dispose of. I found you when I did a
search to see if there was anyone who might want them.
Appended below is a list of what I have. If you are
interested in any of it or know who else might be, please
let me know. If I can't find anyone to take them I guess
I'll just throw them out.
Mel
meljmel-unix(a)yahoo.com
==========
These are the old Unix Manuals I have:
UNIX PROGRAMMER'S MANUAL
Program Generic PG-1C300 Issue 2
Published by the UNIX Support Group
January, 1976
UNIX PROGRAMMER'S MANUAL
Program Generic PG-1C300 Issue 3
Published by the UNIX Support Group
March, 1977
UNIX User's Manual
Release 3.0
T.A. Dolotta
S. B. Olsson
A.G. Petrucceli
Editors
June 1980
Laboratory 364
Bell Telephone Laboratories, Incorporated
Murray Hill, NJ 07974
The C Programmer's Handbook
AT&T Bell Laboratories
February 1984
M. I. Bolsky
P. G. Matthews
System Training Center
Copyright 1984
Piscataway, New Jersey 08854
Unix System V Quick Reference Guide
Copyright 1985 AT&T Technologies, Inc
307-130
UNIX TIME-SHARING SYSTEM
PROGRAMMER'S MANUAL
Research Version
Ninth Edition, Volume 1
September, 1986
AT&T Bell Laboratories
Murray Hill, New Jersey
The Vi User's Handbook
by Morris I. Bolsky
Systems Training Center
Copyright 1984 AT&T Bell Laboratories Incorporated
Copyright 1985 AT&T Technologies, Inc
Unix Research System Programmer's Manual
Tenth Edition, Volume I
Computing Science Research Center
Murray Hill, New Jersey
1990, American Telephone and Telegraph Company
Bell Laboratories Division
ISBN 0-03-047532-5
A. G. Hume
M. D. McIlroy
October, 1989
Unix Research System Papers
Tenth Edition, Volume II
Computing Science Research Center
AT&T Bell Laboratories
Murray Hill, New Jersey
1990, American Telephone and Telegraph Company
Bell Laboratories Division
ISBN 0-03-047529-5
A. G. Hume
M. D. McIlroy
January, 1990
----------
These are old Unix Technical Memorandum and Papers I have:
The C Reference Manual
January 15, 1974
TM: 74-1273-1
D. M. Ritchie
Programming in LIL: A Tutorial
June 17, 1974
TM: 74-1352-6
LIL Reference Manual
June 19, 1974
TM: 74-1352-8
A Description of the UNIX File System
September 16, 1975
Author: J. F. Maranzano
The Portable C Library
May 16, 1975
TM: 75-1274-11
Author: M. E. Lesk
Lex - A Lexical Analyzer Generator
July 21, 1975
TM: 75-1274-15
Author: M. E. Lesk
Introduction to Scheduling and Switching under UNIX
October 20, 1975
TM: 75-8234-7
Author: T. M. Raleigh
Make - A Program for Maintaining Computer Programs
December 5, 1975
TM: 75-1274-26
Author: S. I. Feldman
UNIX Programming
Brian W. Kernighan
Dennis M. Ritchie
? 1975 ?
Bell Laboratories, Murray Hill, New Jersey 07974
"This paper is an introduction to programming on Unix.
The emphasis is on how to write programs that interface
to the operating system."
The C Language Calling Sequence
September 26, 1977
TMs: 77-1273-15, 77-1274-13
Authors: A.C. Johnson, D.M. Ritchie, M.E. Lesk
Lint, a C Program Checker
September 16, 1977
TM: 77-1273-14
Author: S. C. Johnson
The M4 Macro Processor
April 1, 1977
TM: 77-1273-6
Authors: Brian W. Kernighan, Dennis M. Ritchie
C Reference Manual
Dennis M. Ritchie
May 1, 1977
Murray Hill, New Jersey 07974
C Language Portability
September 22, 1977
Author: B. A. Tague
Variable Length Argument Lists in C
June 12, 1978
Author: Andrew Koenig
An Introduction to the UNIX Shell
July 21, 1978
TM: 78-1274-4
Author: S.R. Bourne
SED - A Non-Interactive Text Editor
August 15, 1978
TM: 78-1270-1
Author: Lee E. McMahon
UNIX Shell Tutorial
July 14, 1981
TM: 81-59322-5
Author: J. R. Mashey
Awk - A Pattern Scanning and Processing Language
Programmer's Manual
June 19, 1985
Authors: Alfred V. Aho, Brian W. Kernighan, Peter J. Weinberger
TMs: 11272-850619-06TM, 11276-850619-09TM, 11273-850619-03TM
Yacc: Yet Another Compiler-Compiler
July 31, 1978
TM: 78-1273-4
Author: Stephen C. Johnson
RATFOR - A Preprocessor for a Rational Fortran
October 22, 1976
TM: 76-1273-10
Author Brian W. Kernighan
Miscellaneous undated (but old) papers:
On the Security of UNIX
Dennis M. Ritchie
A New Input-Output Package
D. M. Ritchie
The Unix I/O System
Dennis M. Ritchie
Programming in C - A tutorial
Brian W. Kernighan
? Date ?
==========
----- End forwarded message -----
Who'll be the first to run our favourite OS with one of these?
http://obsolescence.wixsite.com/obsolescence/pidp-11-technical-details
``From a hardware perspective, the PiDP is just a frontpanel for a
Raspberry PI. In the hardware section below, the technical details of the
front panel are explained. In fact, the front panel could just as easily
be driven by any microcontroller (or FPGA), it only lights the leds and
scans the switch positions.''
-- Dave
I had an e-mail from someone who said:
PDP-11 Sys V is apparently derived from Unix CB 3.0, not from the
normal route... Or so says the great interweb :)
I found a family tree that suggests this. Know anything about this?
I hadn't heard of this before, can anybody substantiate or negate this
assertion, or shed more light on the genealogy of PDP-11 System V?
Thanks, Warren
I have read that one of the first groups in AT&T to use early Unix was
the legal dep't, specifically to use *roff to write patent
applications. Can anyone elaborate on this or supply references?
(This would be in great contrast to today, where most applications are
written with certain products despite the USPTO, EPO, and others only
accepting PDF versions.) It would also be interesting to learn how
the writers were taught *roff, what editors were used, and what they
thought. (I recall that the secretaries, as they were then called, in
the math dep't used vi to compose plain TeX documents and xdvi to
proofread them.)
N.
>Date: Wed, 16 May 2018 10:05:24 -0400
>From: Doug McIlroy <doug(a)cs.dartmouth.edu>
>To: tuhs(a)minnie.tuhs.org
>Cc: lorinda.cherry(a)gmail.com
>Subject: Re: [TUHS] PWB - what is the history?
>Message-ID: <201805161405.w4GE5OeJ012025(a)coolidge.cs.Dartmouth.EDU>
>Content-Type: text/plain; charset=us-ascii
>
<snip>
>They were in WWB (writers workbench) not PWB (programmers workbench).
>WWB was a suite of Unix programs, organized by Nina MacDonald of USG.
>It appeared in various Unix versions, including research v8-v10.
>
>Lorinda Cherry in research wrote most of the basic tools in WWB,
...
I see Ms. Cherry also has a wiki page
https://en.wikipedia.org/wiki/Lorinda_Cherry which has "Cherry raced
rally cars as a hobby".
and the page contains a link to an interesting document which brings
us back to the PWB
"A Research UNIX Reader:
Annotated Excerpts from the Programmer’s Manual,
1971-1986
M. Douglas McIlroy"
- uncle rubl
> I think you mean 'style' and 'diction'. I thought those came from research?
> I remember seeing papers about them in a manual; maybe 7th Ed or 4.2/4.3BSD?
They were in WWB (writers workbench) not PWB (programmers workbench).
WWB was a suite of Unix programs, organized by Nina MacDonald of USG.
It appeared in various Unix versions, including research v8-v10.
Lorinda Cherry in research wrote most of the basic tools in WWB,
most notably style, diction, and the really cool "parts" that
underlay style. William Vesterman at Rutgers suggested style and
diction. Having parts up her sleeve, Lorinda was able to turn them out
almost overnight. Most anyone else would scarcely have known how to
begin to make style.
Just yesterday Lorinda received a Pioneer in Tech award from the National
Center for Women in IT. Parts and eqn, both initiated by her, certainly
justify that honor.
[Parts did a remarkable job of tagging text with parts of speech, without
getting bogged down in the swamp of parsing English. It was largely
implemented in sed--certainly one of the grander programs written in that
language. Style reported statistics like length of words, frequency of
adjectives, and variety of sentence structure. Diction flagged cliches
and other common infelicities. WWB offered advice based on the findings
of these and other text-analysis programs.]
Doug
> Wouldn't the -man macros have predated -ms?
Indeed. My error.
The original -man package was quite weak. It got a major face
lift for v7 and once more at v9 or so. And further man-page
packages are still duking it out today. -ms has lots of rivals,
too, but its continued popularity attests to Mike Lesk's fine
sense of design.
Doug
> From: Nemo
> I have read that one of the first groups in AT&T to use early Unix was
> the legal dep't, specifically to use *roff to write patent applications.
> Can anyone elaborate on this or supply references?
Are you familiar with the description in Dennis M. Ritchie, "The Evolution of
the Unix Time-sharing System":
https://www.bell-labs.com/usr/dmr/www/hist.htm
(in the section "The first PDP-11 system")? Not a great deal of detail, but...
> It would also be interesting to learn how the writers were taught *roff,
> what editors were used
I'm pretty sure 'ed' was the only editor available at that point.
Noel
> From: Clem Cole
> Programmer's Workbench - aka PWB was John Mashey and team in Whippany.
> They took a V6 system and make some changes
I was surprised to find, reading the article on it in the Unix BSTJ issue, that
the system changes were less than I'd thought. Some of the stuff in the PWB1
release that we have (see previous message) is _not_ described in that article
(which is fairly detailed), which further compounds the lack of clarity over
who/what/when between V6 and V7.
> Noel may know how it made it to MIT
That I _do_ know! There was some sort of Boy Scouts group at Bell (not sure
exactly where) and one of the members went to MIT. I think he was doing
undergraduate research work in the first group at MIT to have Unix (Steve
Ward's), but anyway he had some connection there; and I think also had a
summer job at Bell. He was the Bell->MIT conduit.
> PWB 2.0 was released a few years later and was based on the UNIX/TS
> kernel and some other changes and it was around this time that the UNIX
> Support Group was formed
??? If PWB1 was in July '77, and PWB2 was some years later, USG couldn't have
been formed 'around [that] time' because there's that USG document from
January '76?
Noel
> From: Jon Forrest <nobozo(a)gmail.com>
> John Mashey had a lot to do with PWB so maybe he can say a few words
> about it if he's on here.
It would be great to have some inside info about the relationship among the
Research, USG and PWB systems. Clearly there was communication, and things got
passed around, but we know so little about what was happening during the
period between V6 and V7 when a lot happened (e.g. the changes to C, just
mentioned).
E.g. check out the PWB1 version of exec():
https://minnie.tuhs.org//cgi-bin/utree.pl?file=PWB1/sys/sys/os/sys1.c
It's been changed from V6 to copy the arguments into swap space, _not_ buffers
allocated from the system buffer pool (which is how V6 does it). So, who did
this originally - did the PWB people do it, or was it something the research
people did, that PWB picked up?
I used to think it was USG, but there's a 'Unix Program Description' document
prepared by USG, dated January 1976, and it's still clearly using the V6
approach. The PWB1 release was allegedly July, 1977:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=PWB1
(Which is, AFAIK, the _only_ set of sources we have for after V6 and before V7
- other than the MIT system, which seems to be basically PWB1.)
So who did the exec() changes, originally?
And I could list a bunch more like this...
Noel
I never really learned VI. I can stumble through it in ex mode if I have
to. If there's no EMACS on the UNIX system I'm using, I use ed.
You get real good at regular expressions. Some of my employees were
pretty amazed at how fast I could make code changes with just ed.
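For flavor, a global substitute is the kind of one-liner that makes that
fast; a hypothetical session (file name, identifiers, and the byte counts
ed prints are all made up):

    $ ed prog.c
    4085
    g/oldname(/s//newname(/g
    w
    4102
    q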
Here's part of the story.
> From: "Doug McIlroy" <doug(a)cs.dartmouth.edu>
> To: <tuhs(a)minnie.tuhs.org>
> Sent: Fri, 16 Dec 2016 21:09:16 -0500
> Subject: [TUHS] How Unix made it to the top
>
> It has often been told how the Bell Labs law department became the
> first non-research department to use Unix, displacing a newly acquired
> stand-alone word-processing system that fell short of the department's
> hopes because it couldn't number the lines on patent applications,
> as USPTO required. When Joe Ossanna heard of this, he told them about
> nroff and promised to give it line-numbering capability the next day.
> They tried it and were hooked. Patent secretaries became remote
> members of the fellowship of the Unix lab. In due time the law
> department got its own machine.
Come to think of it, they must already have had a machine, probably
leased from the commercial word-processing company, for they had DEC
tapes they needed to convert to Unix format. Several of us in the Unix
lab spent a memorable afternoon decoding the proprietary format. It was
finally broken when we computed a bitwise autocorrelation function. It
had a big peak at seven. The tapes were pure ASCII rather than bytewise
ASCII--a lot of work for very little data compression.
As for training, the secretaries had to learn nroff and ed plus the
usual lot of ls, mkdir, mv, cp, rm. The patent department had to invest
in modems and order phone lines to plug them into. I don't know what
terminals they used.
From this distant point in time it seems that it all happened in a couple
of weeks. Joe Ossanna did most of the teaching, and no doubt supplied
samples to copy. As far as I know the only other instructional materials
would have been man pages and the nroff manual (forbiddingly terse,
though thorough). He may have made a patent-macro package, but I doubt
it; I think honor for the first real macro package goes to Lesk's -ms.
Doug
Larry’s question about PWB made me think it might be useful to this list
for some of this to be written down.
When you write the story of UNIX, licensing is a huge part of it (both good
and bad). As I have tried to explain before the 1956 consent decree and
the later 1980 Judge Green ruling, as well as how the AT&T legal department
set up the licenses really cast huge shadows that almost seem trite today;
but seem to have been forgotten.
In fact later licensing would become so important, one of the more infamous
UNIX wars was based on it (if you go back to the original OSF Founding
Principles – two of them are ‘Fair and Stable Licensing Terms’). As we
all know, because of the original 1956 decree, AT&T was not allowed to be
in the computer business and so when people came calling both to use it
(Academically and Commercially) and to relicense it; history has shown that
AT&T’s management killed the golden goose. I’d love to hear the views of
Doug, Steve, Ken and other who were inside looking out.
FWIW: These are my thoughts from an Academic and Commercial user back in
the day. AT&T’s management was clearly concerned about the consent decree
and the original licenses show it. UNIX was licensed for free to academic
institutions for research use (sans a small tape copying fee) and the bits
were ‘abandoned on your doorstep.’ This style of license, along with
the publishing of the ideas behind it, really did get the ideas out, and the
academic community loved it. We used it and we were able to share
everything.
The academic license was fine until people want to start to use in a
commercial setting (Rand Corp). Again, AT&T legal is worried about being
perceived in the computer business, so the original commercial use license
shows it. AT&T licensing basically uses the academic license but adds the
ability to actually use it for commercial purposes. Then the first
universities start to want to use UNIX more like a commercial system [Dan
Klein and I would go on strike and force CMU to purchase the first commercial
use license for an academic setting, followed by Case Western].
As AT&T management realized that the UNIX IP did seem to have some value,
just like the transistor had earlier, it seems they wanted to find a
way to keep it under their control. I remember having a dinner
conversation with Dennis at a USENIX about this topic. Steve has expressed
they told many folks to treat it as a ‘trade secret’ (which is strange to
me since the cat was already out of the bag by then and the ideas (IP)
behind UNIX had already been extensively published; we even had USENIX
formed to discuss ideas).
By the time Judge Green allows AT&T to be in the computer business I think
AT&T management completely misunderstood the value of what they had. The
AT&T legal team had changed the commercial rules: with every new UNIX release
a new license was created, and thus firms like DEC, HP, IBM et al. were
getting annoyed, because they had begun to invest in the technology
themselves and the feeling inside those firms was that AT&T management
was changing the ground rules after the game started.
IMO a funny thing happened (bad karma): it seems like the tighter AT&T
management tried to control things in the UNIX community, the less
control the community gave them. Clearly, the new features of the
technology started to be driven by BSD. But the license was the one place
they could control, and they tried. In fact, by the time of SVR4 it
all came to a head and OSF was formed, because most firms were unwilling to
give AT&T the kind of control they were asking for in that license [as
Larry has previously expressed, Sun made a Faustian deal WRT SVR4]. In
the end, the others were shipping from an SVR3 license or had bought it
out.
> From: Clem cole
> Thinking about this typesetter C may have been later with ditroff.
Not so sure about that; we had that C at MIT, but only regular troff (which
had been hacked to drive a Varian).
> From: Arnold Skeeve
> It seems to be shortly after the '78 release of V7.
No, typesetter C definitely pre-dated V7. The 'PWB1' system at MIT had the new
C.
Looking at the documentation files for the extension (e.g. addition of
'long's), none of them have dates in them (alas), but my hard-copy printout of
one is dated "May 8 1978", and it was several years old at that point.
(Also, several sources give '79 for V7 - Salus says 'June 1979').
Noel
> From: Clem Cole
> There is an open question about the need to support self-modifying code
> too. I personally don't think of that as important as the need for
> conditional instructions which I do think need to be there before you
> can really call it a computer.
Here's one way to look at it: with conditional branching, one can always
write a program to _emulate_ a machine with self-modifying code (if that's
what floats your boat, computing-wise) - because that's exactly what older,
simple microcoded machines (which don't, of course, have self-modifying code
- their programs are in ROM) do.
Noel
Way back on this day in 1941, Konrad Zuse unveiled the Z3; it was the
first programmable automatic computer as we know it (Colossus 1 was not
general-purpose). The last news I heard about the Z3 was that she was
destroyed in an air-raid...
This pretty much started computing, as we know it.
-- Dave
All, in case you haven't seen it:
https://www.ioccc.org/2018/mills/
This is a PDP-7 emulator in C, enough to run PDP-7 Unix. But the author
has written a PDP-11 emulator in PDP-7 assembly, and uses this to run
2.9BSD on the emulated PDP-7 :-)
Cheers, Warren
>From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
>To: tuhs(a)tuhs.org
>Cc: jnc(a)mercury.lcs.mit.edu
>Subject: Re: [TUHS] Who used *ROFF?
>Message-ID: <20180512110127.0B81418C08E(a)mercury.lcs.mit.edu>
>
<snip>
>Are you familiar with the description in Dennis M. Ritchie, "The Evolution of
>the Unix Time-sharing System":
>
> https://www.bell-labs.com/usr/dmr/www/hist.htm
>
<snip>
Please note the URL should end with ".html", not ".htm"
I wasted 5 minutes (insert big grin) wondering why I got an 404 like
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: hist.htm
RequestId: 454E36190753F99C
HostId: 6EJTsEdvnbnAr4VO7+mxSWH+dcX8X6AGRLZxwOLha/9q5G2CAxsVbEw6aMF+NHIPbhrAQ+/t/8o=
Hardly ever use notepad, hardly ever used notepad. Especially since I
discovered notepad++ many years ago ( https://notepad-plus-plus.org )
Of course, I use what is handy for what I'm doing. 'vim' I use when I
want to do some 'manipulation' :-)
Does anyone know why UUCP "bag" files are called "bag"?
Seeing as this relates to unix-to-unix-copy, I figured that someone on
TUHS might have an idea.
Thanks in advance.
--
Grant. . . .
unix || die
Tomorrow, May 12, in 1941 the Z3 computer was presented by Konrad Zuse:
https://en.wikipedia.org/wiki/Z3_(computer)
I enjoyed reading the specs at the bottom of the Wikipedia page. I
never heard of this project until today, coming across it in an article.
Mike Markowski
Born on this day in 1930, he gave us ALGOL, the basis of all decent
programming languages today, and the entire concept of structured
programming. Ah, well I remember the ALGOLW compiler on the 360...
There's a beaut article here:
https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF
And remember, testing can show the presence of errors, but not their
absence...
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
I'm curious, as UNIX folks, if someone can enlighten me. I sometimes
answer things on Quora, and a few years ago the following question was
posted:
What does "Experience working in Unix/Linux environments" mean when given
as a required skill in company interviews? What do they want from us?
<https://www.quora.com/What-does-Experience-working-in-Unix-Linux-environmen…>
Why would this be considered a spam violation - which I was notified today
that it is?
It all depends on the job and the specific experience the hiring managers want
to see. The #1 thing I believe they will be looking for is that you do not
need a GUI to be useful. If you are a simple user, it means you are
comfortable in a text-based shell environment and are at least familiar
with, if not proficient with, UNIX tools such as ed, vi or emacs, grep,
tail, head, sed, awk, cut, join, tr, etc. You should be able to use one or
more of the Bourne Shell/Korn Shell/Bash family, or the C shell. You should be
familiar with the UNIX file tree, its basic layout, and its protection
scheme. It helps if you understand the differences between the BSD, SysV, Mac
OS X, and Linux layouts; but for many jobs in the UNIX community that may not
be required. You should also understand how to use the Unix man command to
get information on the tools you are using - i.e. you should have read,
if not own a copy of, Kernighan and Pike's The Unix Programming Environment
(Prentice-Hall Software Series): Brian W. Kernighan, Rob Pike:
9780139376818: Amazon.com: Books
<https://www.amazon.com/Unix-Programming-Environment-Prentice-Hall-Software/…>
and be proficient in the first four chapters. If the job requires you to write
scripts, you should be able to write shell scripts (i.e. program the shell)
using the Bourne Shell syntax, i.e. Chapter 5 (Shell Programming).
If you are a programmer, then you need to be used to using the UNIX/Linux
toolchains and probably not require an IDE - again, as a programmer,
knowledge of, if not proficiency with, at least one source code control
system (SCCS, RCS, CVS, SVN, Mercurial, git, or the like) is needed. Kernighan
and Pike's Chapters 6-8 should be common knowledge. But to be honest, you also
should be familiar with the contents of, if not own and keep a copy of,
Rich Stevens' Advanced Programming in the UNIX
Environment, 3rd Edition: W. Richard Stevens, Stephen A. Rago:
9780321637734: Amazon.com: Books
<https://www.amazon.com/Advanced-Programming-UNIX-Environment-3rd/dp/0321637…>
(a.k.a. APUE) on your desk.
If you are a system administrator, then the required set of tools gets much
larger, and besides, knowing the different ways to “generate” (build) a
system is a good idea, though fewer tools are needed for user maintenance.
In this case you should be familiar with, again if not have a copy on your
desk of, Evi Nemeth's Amazon.com:
UNIX and Linux System Administration Handbook, 4th Edition (8580001058917):
Evi Nemeth, Garth Snyder, Trent R. Hein, Ben Whaley: Books
<https://www.amazon.com/UNIX-Linux-System-Administration-Handbook/dp/0131480…>
- which is and has been one of, if not the, best UNIX admin works for many,
many years.
Updated 05/07/18: to explain that I am not shilling for anyone. I am trying to
honestly answer the question and make helpful recommendations on how to
learn what the person asked about, to help them be better equipped to be
employed in the Unix world. I used Amazon's URLs because they are global and
easy to use as a reference. But I am not suggesting you purchase from them. In
fact, if you can borrow a copy from your library to start, that might be a
good idea.
Grant Taylor:
(Maybe the 3' pipe wrench has something to do with it.)
=======
Real pipes don't need wrenches. Maybe those in Windows do.
Norman Wilson
Toronto ON
Hi all,
Forgive this cross-post from cctalk, if you're seeing this message twice. TUHS seems like a very appropriate list for this question.
I'm experimenting with setting up UUCP and Usenet on a cluster of 3B2/400s, and I've quickly discovered that while it's trivial to find old source code for Usenet (B News and C News), it's virtually impossible to find source code for old news *readers*.
I'm looking especially for nn, which was my go-to at the time. The oldest version I've found so far is nn 6.4, which is too big to compile on a 3B2/400. If I could get my hands on 6.1 or earlier, I think I'd have a good chance.
I also found that trn 3.6 from 1994 works well enough, though it is fairly bloated. Earlier versions of that might be better.
Does anyone have better Google-fu than I do? Or perhaps you've got earlier sources squirreled away?
As an aside: If you were active on Usenet in 1989, what software were you using?
-Seth
--
Seth Morabito
web(a)loomcom.com
Hi,
There's a unix "awesome list". It mentions TUHS's wiki, as well as this quote:
"This is the Unix philosophy: Write programs that do one thing and do
it well. Write programs to work together. Write programs to handle
text streams, because that is a universal interface." - Douglas
McIlroy, former head of Bell Labs Computing Sciences Research Center
https://github.com/sirredbeard/Awesome-UNIX
On 08.05.18 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > My point being that ... pages are invisible to the process segments are
> > very visible. And here we talk from a hardware point of view.
>
> So you're saying 'segmentation means instructions explicitly include segment
> numbers, and the address space is a two-dimensional array', or 'segmentation
> means pointers explicitly include segment numbers', or something like that?
Not really. I'm trying to understand your argument.
You said:
"BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user
(except
for setting the total size of the process), whereas segmentation is
explicitly
visible to the user."
And then you used MERT as an example of this.
My point then is, how is MERT any different from mmap() under Unix?
Would you then say that the paging is visible under Unix, meaning that
this is then segmentation?
In my view, you are talking about a software concept. And as such, it
has no bearing on whether a machine have pages or segments, as that is a
hardware thing and distinction, while anything done as a service by the
OS is a completely different, and independent question.
> I'm more interested in the semantics that are provided, not bits in
> instructions.
Well, if we talk semantics instead of the hardware, then you can just
say that any machine is segmented, and you can say that any machine is
have pages. Because I can certainly make it appear both ways from the
software point of view for applications running under an OS.
And I can definitely do that on a PDP-11. The OS can force pages to
always be 8K in size, and the OS can (as done by lots of OSes) provide a
mechanism that gives you something you call segments.
> It's true that with a large address space, one can sort of simulate
> segmentation. To me, machines which explicitly have segment numbers in
> instructions/pointers are one end of a spectrum of 'segmented machines', but
> that's not a strict requirement. I'm more concerned about how they are used,
> what the system/user gets.
So, again. Where does mmap() put you then?
And, just to point out the obvious, any machine with pages has a page
table, and the page table entry is selected based on the high bits of
the virtual address. Exactly the same as on the PDP-11. The only
difference is the number of pages, and the fact that a page on the
PDP-11 does not have a fixed length, but can be terminated earlier if wanted.
So, pages are explicitly numbered in pointers on any machine with pages.
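In code, for illustration (the 8 APRs and 8 KB pages are PDP-11 facts; the
fragment itself is just for show):

    #include <stdio.h>

    int main(void)
    {
        unsigned va     = 0157746;          /* some 16-bit virtual address */
        unsigned apr    = (va >> 13) & 07;  /* high 3 bits pick 1 of 8 APRs */
        unsigned offset = va & 017777;      /* 13-bit offset within the page */

        printf("APR %o, offset %o\n", apr, offset);
        return 0;
    }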
> Similarly for paging - fixed sizes (or a small number of sizes) are part of
> the definition, but I'm more interested in how it's used - for demand loading,
> and to simplify main memory allocation purposes, etc.
I don't get it. So, in which way are you still saying that a PDP-11
doesn't have pages?
> >> the semantics available for - and_visible_ to - the user are
> >> constrained by the mechanisms of the underlying hardware.
>
> > That is not the same thing as being visible.
>
> It doesn't meet the definition above ('segment numbers in
> instructions/pointers'), no. But I don't accept that definition.
I'm trying to find out what your definition is. :-)
And if it is consistent and makes sense... :-)
> > All of this is so similar to mmap() that we could in fact be having this
> > exact discussion based on mmap() instead .. I don't see you claiming
> > that every machine use a segmented model
>
> mmap() (and similar file->address space mapping mechanisms, which a bunch of
> OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
orthogonal - although it clearly needs support from memory management hardware.
Can you explain how mmap() is any different from the service provided by
MERT?
> And one can add 'sharing memory between two processes' here, too; very similar
> _mechanisms_ to mmap(), but different goals. (Although I suppose two processes
> could map the same area of a file, and that would give them IPC mapping.)
That's how a single copy of a shared library is shared under Unix.
Exactly what happens if you modify the memory depends on what flags you
give to mmap().
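As a minimal sketch of the distinction ("/tmp/shared" is a placeholder
path): with MAP_SHARED every process mapping the same file region sees the
same bytes, while MAP_PRIVATE gives each its own copy-on-write copy:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/shared", O_RDWR);
        if (fd < 0)
            return 1;

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);  /* or MAP_PRIVATE for a copy */
        if (p == MAP_FAILED)
            return 1;

        strcpy(p, "visible to every process sharing this mapping");
        munmap(p, 4096);
        close(fd);
        return 0;
    }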
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I started with roff (the simplest, but utterly frozen) and moved up to nroff. It was a few years later that I was involved with a project to make a CAT phototypesetter emulator for the Versatec printer-plotter (similar to the BSD vcat, which we had not seen yet). My friend George Toth went to the Naval Research Laboratory and printed out the entire typeface on their CAT on transparent film. Then he set out to figure out a way to digitize it.
Well, next door to the EE building (where the UNIX work took place at JHU) was the biophysics department. They had a Scanning Transmission Electron Microscope there, quite an impressive machine. The front end of the thing was a PDP-11/20 with some digital-to-analog converters and vice versa, and a frame buffer. The software would control the positioning of the beam and read back how much came through the material and was detected. Essentially, you were making a raster picture of the sample in the microscope.
George comes up with this great idea. He takes a regular oscilloscope. He takes the deflection wires from the 11/20 off the microscope and puts them into the X and Y amplifiers of the scope. He then puts a photomultiplier tube in the shell of an old scope camera. He'd cut out a single character, tape it to the front of the scope, and hang the camera on it. He'd fire up the microscope software and tell it to scan the sample. It would then put the image in the frame buffer. We'd pull the microscope RK05 pack out, boot miniunix, and read the data from the frame buffer (why we didn't just write software to drive the A2D from miniunix I do not recall).
Eventually, George gets everything scanned in and cleaned up. It worked somewhat adequately.
Another amusing feature was that Michael John Muuss (my mentor) wrote a macro package, tmac.jm. Some people were somewhat peeved that we now had an "nroff -mjm" option.
Years later, after ditroff was in vogue, my boss was always after me to switch to some modern document prep (FrameMaker or the like). On one rough job I told him I'd do it, but I didn't have time to learn FrameMaker.
I'd write one page of this proposal, print it, and then go on. My boss would proof it, and then my coworker would come behind me and make the corrections. I ended up rewriting a million-dollar (a lot of money back in 1989 or so) proposal in two days, complete with 25 pages of narrative and maybe 50 pages of TBL-based tables showing compliance with the RFP. We won that contract and got several follow-ons.
Years later I was reading a published book. I noted little telltale bumps on the top of some of the tables. I wrote the author: "Did you use tbl and pic to typeset this book?" Sure enough, he had. But it was way after I thought anybody was still using such technology. Of course, I was happy when Springer-Verlag suddenly learned how to typeset books. I had a number of their texts in college that didn't even look like they put a new ribbon in the typewriter when setting the book.
> From: Clem Cole
> I agree with Tannenbaum that uKernel's make more sense to me in the long
> run - even if they do cost something in speed
There's a certain irony in people complaining that ukernels have more
overhead - while at the same time mindlessly, and almost universally,
propagating such pinheaded computing hogs as '_mandating_ https for everything
under the sun' (even things as utterly pointless to protect as looking at
Wikipedia articles on mathematics), while simultaneously letting
Amazon/Facebook/Google do the 'all your data are belong to us' number; the
piling of Pelion upon Ossa in all the active content (page after page of
JavaScript, etc.) in many (most?) modern Web sites that does nothing more than
'eye candy'; etc., etc.
Noel
> https://en.wikipedia.org/wiki/TeX#Pronunciation_and_spelling
Yes, TeX is supposed to be pronounced as Germans do Bach. And
Knuth further recommends that the name be typeset as a logo with
one letter off the base line. Damned if an awful lot of people,
especially LaTeX users, don't follow his advice. I've known
and admired Knuth for over 50 years, but part ways with him
on this. If you use the ready-made LaTeX logo in running text,
so should you also use flourished cursive for Coca-Cola and
Ford; and back in the day, discordantly slanted letters for
Holiday Inn. It's mad and it's a pox on the page.
Doug
> From: Johnny Billquist
> My point being that ... pages are invisible to the process segments are
> very visible. And here we talk from a hardware point of view.
So you're saying 'segmentation means instructions explicitly include segment
numbers, and the address space is a two-dimensional array', or 'segmentation
means pointers explicitly include segment numbers', or something like that?
I seem to recall machines where that wasn't so, but I don't have the time to
look for them. Maybe the IBM System 38? The two 'spaces' in the KA10/KI10,
although a degenerate case (fewer even than the PDP-11) are one example.
I'm more interested in the semantics that are provided, not bits in
instructions.
It's true that with a large address space, one can sort of simulate
segmentation. To me, machines which explicitly have segment numbers in
instructions/pointers are one end of a spectrum of 'segmented machines', but
that's not a strict requirement. I'm more concerned about how they are used,
what the system/user gets.
Similarly for paging - fixed sizes (or a small number of sizes) are part of
the definition, but I'm more interested in how it's used - for demand loading,
and to simplify main memory allocation purposes, etc.
>> the semantics available for - and _visible_ to - the user are
>> constrained by the mechanisms of the underlying hardware.
> That is not the same thing as being visible.
It doesn't meet the definition above ('segment numbers in
instructions/pointers'), no. But I don't accept that definition.
> All of this is so similar to mmap() that we could in fact be having this
> exact discussion based on mmap() instead .. I don't see you claiming
> that every machine use a segmented model
mmap() (and similar file->address space mapping mechanisms, which a bunch of
OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
orthogonal - although it clearly needs support from memory management hardware.
And one can add 'sharing memory between two processes' here, too; very similar
_mechanisms_ to mmap(), but different goals. (Although I suppose two processes
could map the same area of a file, and that would give them IPC mapping.)
Noel
> From: Johnny Billquist
>> "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
>> words long [which can grow in increments of 32 words]"
> But then it is not actually giving programs direct access and
> manipulation of the hardware. It is a software construct and service
> offered by the OS, and the OS might fiddly around with various hardware
> to give this service.
I don't understand how this is significant: most time-sharing OS's don't give
the users access to the memory management control hardware?
> So the hardware is totally invisible after all.
Not quite - the semantics available for - and _visible_ to - the user are
constrained by the mechanisms of the underlying hardware.
Consider a machine with a KT11-B - it could not provide support for very small
segments, or be able to adjust the segment size with such small quanta. On the
other side, the KT11-B could support starting a 'software segment' at any
512-byte boundary in the virtual address space, unlike the KT11-C which only
supports 8KB boundaries.
Noel
> From: Johnny Billquist
>> in MERT 'segments' (called that) were a basic system primitive, which
>> users had access to.
> the OS gives you some construct which can easily be mapped on to the
> hardware.
Right. "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
words long ... Associated with each segment are an internal segment
identifier and an optional global name." So it's clear how that maps onto the
PDP-11 memory management hardware - and a MERT 'segment' might use more than
one 'chunk'.
>> I understand your definitions, and like breaking things up into
>> 'virtual addressing' (which I prefer as the term, see below),
>> 'non-residence' or 'demand loaded', and 'paging' (breaking into
>> smallish, equal-sized chunks), but the problem with using "virtual
>> memory" as a term for the first is that to most people, that term
>> already has a meaning - the combination of all three.
Actually, after some research, it turns out to be only the first two. But I
digress...
> It's actually not my definition. Demand paging is a term that have been
> used for this for the last 40 years, and is not something there is much
> contention about.
I wasn't talking about "demand paging", but rather your use of the term
"virtual memory":
>>> Virtual memory is just *virtual* memory. It's not "real" or physical
>>> in the sense that it has a dedicated location in physical memory
>>> ... Instead, each process has its own memory, which might be mapped
>>> somewhere in physical memory, but it might also not be. And one
>>> processes address 0 is not the same as another processes address
>>> 0. They both have the illusion that they have the full memory address
>>> range to them selves, unaware of the fact that there are many
>>> processes who also have that same illusion.
I _like_ having an explicit term for the _concept_ you're describing there; I
just had a problem with the use of the _term_ "virtual memory" for it - since
that term already has a different meaning to many people.
Try Googling "virtual memory" and you turn up things like this: "compensate
for physical memory shortages by temporarily transferring data from RAM to
disk". Which is why I proposed calling it "virtual addressing" instead.
> I must admit that I'm rather surprised if the term really is unknown to
> you.
No, of course I am familiar with "demand paging".
Anyway, this conversation has been very helpful in clarifying my thinking
about virtual memory/paging. I have updated the CHWiki article based on it:
http://gunkies.org/wiki/Virtual_memory
including the breakdown into three separate (but related) concepts: i) virtual
addressing, ii) demand loading, and iii) paging. I'd be interested in any
comments people have.
> Which also begs the question - was there also a RK11-A?
One assumes there must have been RK11-As and -Bs, otherwise they wouldn't
have gotten to the RK11-C... :-)
I have no idea if both existed in physical form (one might have been just a
design exercise). However, the photo of the non-RK11-C indicator panel
confirms that at least one of them was actually implemented.
> And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something
> similar is also totally invisible.
Right, but not under MERT - although there, clearly, a single 'software'
segment might use more than one physical 'chunk'.
Actually, Unix is _somewhat_ similar, in that processes always have separate
stack and text/data 'areas' (they don't call them 'segments', as far as I
could see) - and separate text and data 'areas' too, when pure code is in
use; and any area might use more than one 'chunk'.
The difference is that Unix doesn't support 'segments' as an OS primitive, the
way MERT does.
Noel
> That would be a pretty ugly way to look at the world.
'Beauty is in the eye of the beholder', and all that! :-)
> Not to mention that one segment silently slides over into the next one
> if it's more than 8K.
Again, precedent; IIRC, on the GE-645 Multics, segments were limited to 2^N-1 pages,
precisely because otherwise incrementing an inter-segment pointer could march off
the end of one, and into the next! (The -645 was implemented as a 'bag on the side'
of the non-segmented -635, so things like this were somewhat inevitable.)
> wouldn't you say that the "chunks" on a PDP-11 are invisible to the
> user? Unless you are the kernel of course. Or run without protection.
No, in MERT 'segments' (called that) were a basic system primitive, which
users had access to. (Very cool system! I really need to get moving on trying
to recover that...)
> *Demand* paging is definitely a separate concept from virtual memory.
Hmmm. I understand your definitions, and like breaking things up into 'virtual
addressing' (which I prefer as the term, see below), 'non-residence' or
'demand loaded', and 'paging' (breaking into smallish, equal-sized chunks),
but the problem with using "virtual memory" as a term for the first is that to
most people, that term already has a meaning - the combination of all three.
(I have painful memories of this sort of thing - the term 'locator' was
invented after we gave up trying to convince people one could have a network
architecture in which not all packets contained addresses. That caused a major
'does not compute' fault in most people's brains! And 'locator' has since been
perverted from its original definition. But I digress.)
> There is no real connection between virtual memory and memory
> protection. One can exist with or without the other.
Virtual addressing and memory protection; yes, no connection. (Although the
former will often give you the latter - if process A can't see, or name,
process B's memory, it can't damage it.)
> Might have been just some internal early attempt that never got out of DEC?
Could be; something similar seems to have happened to the 'RK11-B':
http://gunkies.org/wiki/RK11_disk_controller
>> I don't have any problem with several different page sizes, _if it makes
>> engineering sense to support them_.
> So, would you then say that such machines do not have pages, but have
> segments?
> Or where do you draw the line? Is it some function of how many different
> sized pages there can be before you would call it segments? ;-)
No, the number doesn't make a difference (to me). I'm trying to work out what
the key difference is; in part, it's that segments are first-class objects
which are visible to the user; paging is almost always hidden under the
sheets.
But not always; some OS's allow processes to share pages, or to map file pages
into address spaces, etc. Which does make it complex to separate the two..
Noel
> From: Johnny Billquist
> This is where I disagree. The problem is that the chunks in the PDP-11
> do not describe things from a zero offset, while a segment does. Only
> chunk 0 is describing addresses from a 0 offset. And exactly which chunk
> is selected is based on the virtual address, and nothing else.
Well, you have something of a point, but... it depends on how you look at it.
If you think of a PDP-11 address as holding two concatenated fields (3 bits of
'segment' and 13 bits of 'offset within segment'), not so much.
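To make that concrete, here is a quick C illustration - my own, not from any
PDP-11 source, and the example address is made up:

	#include <stdio.h>

	/* The two-field view of a 16-bit PDP-11 virtual address:
	 * the top 3 bits select one of the 8 APRs (the 'segment'),
	 * the low 13 bits are the offset within that 8KB unit. */
	int main(void)
	{
		unsigned va = 0157746;            /* example address */
		unsigned seg = (va >> 13) & 07;   /* APR number, 0-7 */
		unsigned off = va & 017777;       /* offset within 8KB */
		printf("segment %o, offset %06o\n", seg, off);
		return 0;
	}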
IIRC there are other segmented machines that do things this way - I can't
recall the details of any off the top of my head. (Well, there's the KA10/KI10,
with their writeable/write-protected 'chunks', but that's a bit of a
degenerate case. I'm sure there is some segmented machine that works that way,
but I can't recall it.)
BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user (except
for setting the total size of the process), whereas segmentation is explicitly
visible to the user.
I think there is at least one PDP-11 OS which makes the 'chunks' visible to
the user - MERT (and speaking of which, I need to get back on my project of
trying to track down source/documentation for it).
> Demand paging really is a separate thing from virtual memory. It's a
> very bad thing to try and conflate the two.
Really? I always worked on the basis that the two terms were synonyms - but
I'm very open to the possibility that there is a use to having them have
distinct meanings.
I see a definition of 'virtual memory' below, but what would you use for
'paging'?
Now that I think about it, there are actually _three_ concepts: 'virtual
memory', as you define it; what I will call 'non-residence' - i.e. a process
can run without _all_ of its virtual memory being present in physical memory;
and 'paging' - which I would define as 'use fixed-size blocks'. (The third is
more of an engineering thing, rather than high-level architecture, since it
means you never have to 'shuffle' core, as systems that used variable-sized
things seem to.)
'Non-residence' is actually orthogonal to 'paging'; I can imagine a paging
system which didn't support non-residence, and vice versa (either swapping
the entire virtual address space, or doing it a segment at a time if the
system has segments).
> There is nothing about virtual memory that says that you do not have to
> have all of your virtual memory mapped to physical memory when the
> process is running.
True.
> Virtual memory is just *virtual* memory. It's not "real" or physical in
> the sense that it has a dedicated location in physical memory, which
> would be the same for all processes talking about that memory
> address. Instead, each process has its own memory, which might be mapped
> somewhere in physical memory, but it might also not be.
OK so far.
> each process would have to be aware of all the other processes that use
> memory, and make sure that no two processes try to use the same memory,
> or chaos ensues.
There's also the System 360 approach, where processes share a single address
space (physical memory - no virtual memory on them!), but it uses protection
keys on memory 'chunks' (not sure of the correct IBM term) to ensure that one
process can't tromp on another's memory.
>> a memory management device for the PDP-11 which provided 'real' paging,
>> the KT11-B?
> have never read any technical details. Interesting read.
Yes, we were lucky to be able to retrieve detailed info on it! A PDP-11/20
sold on eBay with a fairly complete set of KT11-B documentation, and allegedly
a "KT11-B" as well, but alas, it turned out to 'only' be an RK11-C. Not that
RK11-C's aren't cool, but on the 'cool scale' they are like 3, whereas a
KT11-B would have been, like, 17! :-) Still, we managed to get the KT11-B
'manual' (such as it is) and prints online.
I'd love to find out equivalent detail for the KT11-A, but I've never seen
anything on it. (And I've appealed before for the KS11, which an early PDP-11
Unix apparently used, but no joy.)
> But how do you then view modern architectures which have different sized
> pages? Are they no longer pages then?
Actually, there is precedent for that. The original Multics hardware, the
GE-645, supported two page sizes. That was dropped in later machines (the
Honeywell 6000's) since it was decided that the extra complexity wasn't worth
it.
I don't have any problem with several different page sizes, _if it makes
engineering sense to support them_. (I assume that the rationale for their
re-introduction is that in the age of 64-bit machines, page tables for very
large 'chunks' can be very large if pages of ~1K or so are used, or something
like that.)
It does make real memory allocation (one of the advantages of paging) more
difficult, since there would now be small and large page frames. Although I
suppose it wouldn't be hard to coalesce them, if there are only two sizes, and
one's a small power-of-2 multiple of the other - like 'fragments' in the
Berkeley Fast File System for BSD4.2.
I have a query, though - how does a system with two page sizes know which to
use? On Multics (and probably on the x86), it's a per-segment attribute. But
on a system with a large, flat address space, how does the system know which
parts of it are using small pages, and which large?
Noel
> From: Johnny Billquist
> Gah. If I were to try and collect every copy made, it would be quite a
> collection.
Well, just the 'processor handbooks' (the little paperback things), I have
about 30. (If you add devices, that probably doubles it.) I think my
collection is complete.
> So there was a total change in terminology early in the 11/45 life, it
> would appear. I wonder why. ... I probably would not blame some market
> droids.
I was joking, but also serious. I really do think it was most likely
marketing-driven. (See below for why I don't think it was engineering-driven,
which leaves....)
I wonder if there's anything in the DEC archives (a big chunk of which are now
at the CHM) which would shed any light? Some of the archives are online there,
e.g.:
http://www.bitsavers.org/pdf/dec/pdp11/memos/
but it seems to be mostly engineering (although there's some that would be
characterized as marketing).
> one of the most important differences between segmentation and pages are
> that with segmentation you only have one contiguous range of memory,
> described by a base and a length register. This will be a contiguous
> range of memory both in virtual memory, and in physical memory.
I agree completely (although I extend it to multiple segments, each of which
has the characteristics you describe).
Which is why I think the original DEC nomenclature for the PDP-11's memory
management was more appropriate - the description above is _exactly_ the
functionality provided for each of the 8 'chunks' (to temporarily use a
neutral term) of PDP-11 address space, which don't quack like most other
'pages' (to use the 'if it quacks like a duck' standard).
One query I have comes from the usual goal of 'virtual memory' (which is the
concept most tightly associated with 'pages'), which is to allow a process to
run without all of its pages in physical memory.
I don't know much about PDP-11 DEC OS's, but do any of them do this? (I.e.
allow partial residency.) If not, that would be ironic (in view of the later
name) - and, I think, evidence that the PDP-11 'chunks' aren't really pages.
BTW, did you know that prior to the -11/45, there was a memory management
device for the PDP-11 which provided 'real' paging, the KT11-B? More here:
http://gunkies.org/wiki/KT11-B_Paging_Option
I seem to recall some memos in the memo archive that discussed it; I _think_
it mentioned why they decided not to go that way in doing memory management
for the /45, but I forget the details. (Maybe the performance hit of keeping
the page tables in main memory was significant?)
> With segmentation you cannot have your virtual memory split up and
> spread out over physical memory.
Err, Multics did that; the process' address space was split up into many
segments (a true 2-dimensional naming system, with 18 bits of segment number),
which were then split up into pages, for both virtual memory ('not all
resident'), and for physical memory allocation.
Although I suppose one could view that as two separate, sequential steps -
i.e. i) the division into segments, and ii) the division of segments into
pages. In fact, I take this approach in describing the Multics memory system,
since it's easier to understand as two independent things.
> You can also have "holes" in your memory, with pages that are invalid,
> yet have pages higher up in your memory .. Something that is impossible
> with segmentation, since you only have one set of registers for each
> memory type (at most) in a segmented memory implementation.
You seem to be thinking of segmentation a la Intel 8086, which is a hack they
added to allow use of more memory (although I suspect that PDP-11 chunks were
a hack of a similar flavour).
At the time we are speaking of, the Intel 8086 did not exist (it came along
quite a few years later). The systems which supported segmentation, such as
Multics, the Burroughs 5000 and successors, etc., had 'real' segmentation, with
a full two-dimensional naming system for memory. (Burroughs 5000 segment
numbers were 10 bits wide.)
> I mean, when people talk about segmented memory, what most everyone
> today thinks of is the x86 model, where all of this certainly is true.
It's also (IMNSHO) irrelevant to this. Intel's brain-damage is not the
entirety of computer science (or shouldn't be).
(BTW, later Intel xx86 machines did allow you to have 'holes' in segments, via
the per-segment page tables.)
> it would be very wrong to call what the PDP-11 have segmentation
The problem is that what PDP-11 memory management does isn't really like
_either_ segmentation, or paging, as practised in other machines. With only 8
chunks, it's not like Multics etc, which have very large address spaces split
up into many segments. (And maybe _that_'s why the name was changed - when
people heard 'segments' they thought 'lots of them'.)
However, it's not like paging on effectively all other systems with paging,
because in them paging's used to provide virtual memory (in the sense of 'the
process runs with pages missing from real memory'), and to make memory
allocation simple by use of fixed-size page frames.
So any name given PDP-11 'chunks' is going to have _some_ problems. I just
think 'segmentation' (as you defined it at the top) is a better fit than the
alternative...
Noel
Depending on the system, ps may or may not need to be setuid to work for non-root users.
Ping needs to be setuid because it uses raw sockets, which are restricted (much like opening listeners on low-numbered ports) on many systems.
> From: William Corcoran
> I think it's a bit more interesting to uncover why rm does not remove
> directories by default thereby obviating the need for rmdir
On early PDP-11 Unixes, 'rm' is an ordinary program, and 'rmdir' is
setuid-root, since it has to do special magic (writing into directory files,
etc). Given that, it made sense to have 'rm' run with the least amount of
privilege needed to do its job.
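For illustration, the core of such an early rmdir amounted to something like
this - a hand-written sketch, not the actual V6 source; the checks that the
target is an empty directory are omitted:

	#include <string.h>
	#include <unistd.h>

	/* Removing a directory meant unlinking its "." and ".." entries
	 * and then the directory itself; unlink(2) refuses directories
	 * for anyone but the super-user (see the sys4.c check quoted
	 * elsewhere in this thread), hence the setuid-root bit. */
	int rmdir_sketch(char *dir)
	{
		char path[512];

		strcpy(path, dir);
		strcat(path, "/..");
		if (unlink(path) < 0)            /* the ".." entry */
			return -1;
		path[strlen(path) - 1] = '\0';   /* now "dir/." */
		if (unlink(path) < 0)            /* the "." entry */
			return -1;
		return unlink(dir);              /* the directory itself */
	}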
Noel
> From: Johnny Billquist
> For 1972 I only found the 11/40 handbook.
I have a spare copy of the '72 /45 handbook; send me your address, and I'll
send it along. (Every PDP-11 fan should have a copy of every edition of every
model's handbooks... :-)
In the meantime, I'm too lazy to scan the whole thing, but here's the first
page of Chapter 6 from the '72:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/jpg/tmp/PDP11145ProcHbook72pg6-1.j…
> went though the 1972 Maintenance Reference Manual for the 11/45. That
> one also says "page". :-)
There are a few remnant relics of the 'segment' phase, e.g. here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
which has this comment:
/ turn on segmentation
Also, if you look at the end, you'll see SSR0, SSR1 etc (as per the '72
handbook), instead of the later SR0, SR1, etc.
Noel
> From: Paul Winalski <paul.winalski(a)gmail.com>
> Regarding the Winchester code name, I've argued about this with Clem
> before. Clem claims that the code name refers to various advances in
> disk technology first released in the 3330's disk packs. Wikipedia and
> my own memory agree with you that Winchester referred to the 3340.
And you believe anything in Wikipedia? If so, I have a bridge to sell you! :-)
But, in this case, it's correct. According to "IBM's 360 and Early 370
Computers" (Pugh, Johnson and Palmer - a very good book, BTW), pg. 507, the
first Winchester was the 3340. The confusion comes from the fact that it had
two spindles, each of 30MB capacity, making it a so-called "30-30" system -
that being the name of Winchester's rifle.
Noel
> From: Johnny Billquist
>> Well, the 1972 edition of the -11/45 processor handbook
^^
> It would be nice if you actually could point out where this is the
> case. I just went through that 1973 PDP-11/45 handbook
^^
Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
(red/white cover) that says 'segments'.
Noel
Blimey, but I nearly missed this one (I was sick in bed).
On this day in 1981, some little company called Xerox PARC introduced
something called a "mouse" (mostly because it has a tail), but I'm
struggling to find more information about it; wasn't there a photo of a
big boxy device?
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
On 2018-04-26 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > if you hadn't had the ability for them to be less than 8K, you wouldn't
> > even try that argument.
>
> Well, the 1972 edition of the -11/45 processor handbook called them segments..:-)
I think we had this argument before as well. It would be nice if you
actually could point out where this is the case. I just went through
that 1973 PDP-11/45 handbook, and all it says is "page" everywhere I look.
I also checked the 1972 PDP-11/40 handbook, and except for one mention
of "segment" in the introduction part of the handbook, which is not even
clear if it actually specifically refers to the MMU capabilities, that
handbook also uses the word "page" everywhere.
I also checked the PDP-11/20 handbook, but that one does not even cover
any MMU, so no mention of either "page" or "segment" can be found.
> I figure some marketing droid found out that 'paging' was the new buzzword, and
> changed the name...:-) :-)
Somehow I doubt it, but looking forward to your references... :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I am sure I remember a machine which had this (which would have been running a BSD 4.2 port). Is my memory right, and what was it for (something related to swap?)?
It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).
--tim
> From: Dave Horsfall <dave(a)horsfall.org>
> I am constantly bemused by the number of "setuid root" commands, when a
> simple "setgid whatever" will achieve the same task.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/sys4.c
/*
 * Unlink system call.
 */
unlink()
{
	...
	if((ip->i_mode&IFMT)==IFDIR && !suser())
		goto out;
For many things, yes. Not in this particular case.
Noel
Hello all,
I recently wrote a 3B2/400 simulator on the SIMH platform. It emulates the core system board and peripherals quite well, but I am now turning my attention to emulating the 3B2 I/O expansion boards. The first board I've emulated is the PORTS 4-port serial card, which came together fairly easily because I have the full source code for the SVR3 driver.
Other cards, though, are more challenging because I do not have source code for them. I would like to emulate the following two cards:
* The CTC cartridge tape controller
* The NI 10base5 Ethernet controller
Of these two, I have partial source code for the CTC driver (ct.c, ct.h, ct_lla.h, ct_deps.h), but I am missing a core file (ct_lla.c) that would greatly help explain what's going on. And I have NO source code at all for the NI driver.
There was a source code package for the NI driver called "nisrc", probably distributed on tape or floppy, but I have never seen it.
If you or anyone you know happens to have these source packages and a way to get at them, could you please let me know? I would be grateful.
-Seth
--
Seth Morabito
web(a)loomcom.com
> Google didn't seem to turn up much on TML
Perhaps because there was no TML. I suspect you mean TMG,
which I implemented from scratch, based on Bob McClure's
original, on both PDP-7 and PDP-11 Unix. Bob Morris and
I used it to make EPL, the "early PL/I" compiler for
Multics.
Off topic, but TMG on the GE 635, used to build Multics,
got there via quite an odyssey. Bob McClure created it
for the CDC 1604. He transliterated it by hand from 1604
assembly language to IBM 7090 and sent the green coding
sheets to me. Debugging it was an unusual exercise: I
knew the logic was right; all I had to do was ferret
out mistranslations. The most prevalent problem was
confusion between CLA (signed load) and CAL (unsigned).
When we decided to do EPL, Clem Pease mechanically
reproduced a 7090 inside a GE 635, by defining 7090
instructions as macros--sometimes quite hairy, but
they worked.
Doug
On 2018-04-25 16:39, Tom Ivar Helbekkmo<tih(a)hamartun.priv.no> wrote:
>
> Ron Natalie<ron(a)ronnatalie.com> writes:
>
>> RK05’s were 4872 blocks. Don’t know why that number has stuck with
>> me, too many invocations of mkfs I guess. Oddly, DEC software for
>> some reason never used the last 72 blocks.
> I guess that's because they implemented DEC Standard 144, known as
> bad144 in BSD Unix. It's a system of remapping bad sectors to spares at
> the end of the disk. I had fun implementing that for the wd (ST506)
> disk driver in NetBSD, once upon a time...
Uh... DEC STD 144 does not have anything to do with remapping bad blocks
to replacement good blocks. DEC STD 144 describes how a medium records its
known bad blocks, so that file system initialization can then
take whatever action is needed to ensure these blocks are not used by
anything. In RSX (and VMS), they will be included in a file called
BADBLK.SYS, and thus be perceived as "used".
bad144 in NetBSD will keep a table in memory for such disks, and
"skip" blocks listed as bad, meaning all subsequent blocks get
shifted (a sketch of that mapping follows below). So it sort of does a
remapping to good blocks by extending the run of used blocks, and does not
allocate anything at the end of the disk per
se. However, that is a Unix-specific solution. OS/8 had a similar
solution for RL01 and RL02 drives, but not RK05 (as RK05 disks don't
follow DEC STD 144.)
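To illustrate the "skip" mapping (my own sketch, not NetBSD's actual bad144
code): with a sorted table of known bad sectors, every logical block past a
bad one simply shifts up by the number of bad sectors that precede it:

	#include <stddef.h>

	/* Map a logical block to a physical one by skipping bad sectors.
	 * badtab[] holds the sorted physical numbers of known bad sectors. */
	long skip_remap(long lblk, const long *badtab, size_t nbad)
	{
		long pblk = lblk;
		size_t i;

		/* each bad sector at or before our current physical
		 * position pushes us one sector further along the disk */
		for (i = 0; i < nbad && badtab[i] <= pblk; i++)
			pblk++;
		return pblk;
	}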
I don't know exactly why DEC left the last three tracks unused. Might
have been for diagnostic tools to have a scratch area to play with.
Might have been that those tracks were found to be less reliable. Or
maybe something completely different. But it was not for bad block
replacement, as DEC didn't even do that on the RK05 (or, more or less, at
all before MSCP. MSCP, by the way, does not use DEC STD 144.)
Something Unix and other implementations usually miss with DEC STD 144
is that there are actually two tables with bad blocks defined by the
standard. There are the manufacturer-defined bad blocks, which all
software seems to know and deal with, and there are the user-defined bad
blocks, where all bad blocks that develop after
manufacture are supposed to be recorded. Lots of software does not deal
with this second list. In addition, you also have the pack serial number
and some stuff defined by DEC STD 144, which is also recorded on the
last track, where the bad block lists also are stored.
Note that this information is supposed to be on the last track, meaning
you cannot use the scheme Unix uses to remap bad blocks, unless you keep
some blocks before the last track unallocated.
The ultimate irony was when I discovered that bad144 under NetBSD was
only built for x86 machines, and not for the VAX, which is the
only supported architecture that actually has real disks following
DEC STD 144. But that was corrected about 20 years ago now (time flies).
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Johnny Billquist
> if you hadn't had the ability for them to be less than 8K, you wouldn't
> even try that argument.
Well, the 1972 edition of the -11/45 processor handbook called them segments.. :-)
I figure some marketing droid found out that 'paging' was the new buzzword, and
changed the name... :-) :-)
Noel
On 2018-04-26 00:55, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > PDP-11 have 8K pages.
>
> Segments.:-) (This is an old argument between Johnny and me, I'm not trying
> to re-open it, just yanking his chain...:-)
:-)
And if you hadn't had the ability for them to be less than 8K, you
wouldn't even try that argument. But just because the hardware gives you
some extra capabilities, you suddenly want to associate them with a
technology that really gives you much less capability.
Either way, the next page always start at the next 8K boundary.
> > On a PDP-11, all your virtual memory was always there when the process
> > was on the CPU
>
> In theory, at least (I don't know of an OS that made use of this), didn't the
> memory management hardware allow the possibility to do demand-paging? I note
> that Access Control Field value 0 is "non-resident".
Oh yes. You definitely could do demand paging based on the hardware
capabilities.
> Unix kinda-sorta used this stuff, to automatically extend the stack when the
> user ran off the end of it (causing a trap).
Ah. Good point. The same is also true for brk, even though that is an
explicit request to grow your memory space at the other side.
DEC OSes had the brk part as well, but stack was not automatically
extended if needed. DEC liked to have the stack at the low end of
address space, and have hardware that trapped if the stack grew below
400 (octal).
> > you normally did not have demand paging, since that was not really
> > gaining you much on a PDP-11
>
> Especially on the later machines, with more than 256KB of hardware main
> memory. Maybe it might have been useful on the earlier ones (e.g. the -11/45).
Yeah, it would actually probably have been more useful on an 11/45.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Johnny Billquist
> PDP-11 have 8K pages.
Segments. :-) (This is an old argument between Johnny and me, I'm not trying
to re-open it, just yanking his chain... :-)
> On a PDP-11, all your virtual memory was always there when the process
> was on the CPU
In theory, at least (I don't know of an OS that made use of this), didn't the
memory management hardware allow the possibility to do demand-paging? I note
that Access Control Field value 0 is "non-resident".
Unix kinda-sorta used this stuff, to automatically extend the stack when the
user ran off the end of it (causing a trap).
> you normally did not have demand paging, since that was not really
> gaining you much on a PDP-11
Especially on the later machines, with more than 256KB of hardware main
memory. Maybe it might have been useful on the earlier ones (e.g. the -11/45).
Noel
On 2018-04-25 23:14, Paul Winalski <paul.winalski(a)gmail.com> wrote:
> On 4/25/18, Ronald Natalie
> <ron(a)ronnatalie.com> wrote:
>> The fun argument is what is Virtual Memory. Typically, people align that
>> with paging, but you can stretch the definition to cover swapping.
>> This was a point of contention in the early VAX Unix days as the ATT (System
>> III, even V?) didn’t support paging on the VAX whereas BSD did.
> In my book, virtual memory is any addressing scheme where the
> addresses that the program uses are different from the physical memory
> addresses. Nowadays most OSes use a scheme where each process has its
> own virtual address space, and virtual memory is demand-paged from
> backing store on disk. But there have been other schemes.
Yeah...
> Some PDP-11 models had a virtual addressing feature called PLAS
> (Program Logical Address Space). The PDP-11 had 16-bit addressing,
> allowing for at most 64K per process. To take advantage of physical
> memory larger than 64K, PLAS allowed multiple 64K virtual address
> spaces to be mapped to the larger physical memory. Sort of the
> reverse of the usual virtual addressing scheme, where there is more
> virtual memory than physical memory.
Note that PLAS is not a PDP-11 hardware thing. PLAS was the name for the
mechanism provided by the OS for applications to be able to access more
than 64K of memory while still being limited by the 64K virtual address
space limit.
PLAS is in one way very similar to mmap, except that it's not backed by
a file. But you create a memory region through the OS (giving it a name
and a size, which can be more than 64K), and then you can map to it,
specifying the offset into it and window size, as well as where to map
to in your virtual address space.
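To make the analogy concrete, here is a rough sketch of the idea in C-like
pseudocode; the names plas_create_region() and plas_map_window() are purely
illustrative stand-ins for the real RSX executive directives, not an actual
API:

	typedef int region_t;

	/* create a named region; its size may far exceed the
	 * 64K virtual address space of the task */
	region_t plas_create_region(const char *name, long size);

	/* map a window of the region into our own address space */
	int plas_map_window(region_t r,
	                    long offset,       /* offset into the region */
	                    unsigned vaddr,    /* where in our 64K space */
	                    unsigned winsize); /* window size, e.g. 8K */

	void example(void)
	{
		/* a 256KB region - four times the whole address space */
		region_t r = plas_create_region("BIGBUF", 256L * 1024);

		/* view bytes 128K..136K of it at virtual address 0120000;
		 * underneath, the OS just points one 8K APR at the right
		 * piece of physical memory */
		plas_map_window(r, 128L * 1024, 0120000, 8 * 1024);
	}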
This is realized by just using the pages of the PDP-11 MMU to map to
different parts of the region.
Any OS that had the PLAS capability by necessity had to have an MMU,
which was the hardware part that allowed this to be implemented.
So, all PDP-11s with an MMU could allow the OS running on it to provide
the PLAS capabilities.
A PDP-11 in general is "reversed" in that the physical address space is
much larger than the virtual one. Although, the same was also true on
the VAX in the last iteration where the NVAX implementation allowed for
a 34 bit physical address, while the virtual address space was still
only 32 bits.
But that doesn't make the virtual memory any less virtual.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Johnny Billquist
> I don't know exactly why DEC left the last three tracks unused. Might
> have been for diagnostic tools to have a scratch area to play with.
> Might have been that those tracks were found to be less reliable. Or
> maybe something completely different. But it was not for bad block
> replacement, as DEC didn't even do that on RK05
The "pdp11 peripherals handbook" (1975 edition at least, I can't be bothered
to check them all) says, for the RK11:
"Tracks/surface: 200+3 spare"
and for the RP11:
"Tracks/surface: 400 (plus 6 spares)"
which sounds like it could be for bad block replacement, but the RP11-C
Maintenance Manual says (pg. 3-10) "the inner-most cylinders 400-405 are only
used for maintenance".
Unix blithely ignored all that, and used every block available on both the
RK11 and RP11.
Noel
On 2018-04-25 16:39, arnold(a)skeeve.com wrote:
> Tim Bradshaw<tfb(a)tfeb.org> wrote:
>
>> Do systems with huge pages page in the move-them-to-disk sense I wonder?
>> I assume they don't in practice because it would be insane but I wonder
>> if the VM system is in theory even willing to try.
> Why not? If there's enough backing store available?
>
> Note that many systems demand page-in the code section straight out of the
> executable, so if some of those pages aren't needed, they can just
> be released. And said pages can be shared among all processes running
> the same executable, for further savings.
Right.
>> Something I never completely understood in the paging vs swapping
>> thing was that I think that systems which could page (well, 4.xBSD in
>> particular) would*also* swap if pushed. I think the reason for that was
>> that, if you were really short of memory, swapping freed up the process
>> structure and also the page tables &c for the process, which would still
>> be needed even if all its pages had been evicted. Is that right?
> It depends upon the system. Some had pageable page tables, which is
> pretty hairy. Others didn't. I don't remember what 4BSD did on the
> Vax, but I suspect that the page tables and enough info to find everything
> on swap stayed in kernel memory. (Where's Chris Torek when you need
> him?:-)
The page tables describing the user's memory space are themselves
located in virtual memory on the VAX, so they can be paged out without
problem. If you refer to an entry in the user page table, and that page
itself is paged out, you'll get a page fault for the system page table,
so you'll need to page in that page of the system.
But I seem to remember that 4BSD (as well as NetBSD) keeps all of the kernel
in physical memory all the time, and doesn't page the kernel parts,
including process page tables.
> But yes, swapping was generally used to free up large amounts of memory
> if under heavy load.
Paging would free up the same amount of memory, if we talk about the
memory used by the process itself. However, there is various metadata
in the kernel itself that is needed for a process, which will remain in
memory even if no pages are in memory. Swapping will also move
non-essential kernel structures out to disk for the process, in addition
to the pages. Thus, there is a difference between swapping and paging.
The whole process context for example. Which includes both the page
tables as well as the kernel mode stack for the process, processor
registers, and possibly also open file contexts, and probably some other
things I'm forgetting now.
Very little needs to be kept in memory for a process if you are not
interested in resuming it on short notice.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I came across this yesterday:
> Fun fact: according to unsubstantiated UNIX lore, "rm" is NOT short-hand
> for "remove" but rather, it stands for the initials of the developer that wrote
> the original implementation, Robert Morris.
>
> https://news.ycombinator.com/item?id=16916565
I was curious if there's any truth to it. I found
http://minnie.tuhs.org/cgi-bin/utree.pl and was poking around but
couldn't determine when the rm command came about.
Thoughts?
--
Eric Blood
winkywooster(a)gmail.com
On 2018-04-25 16:39, Ronald Natalie<ron(a)ronnatalie.com> wrote:
>
>> On Apr 24, 2018, at 9:27 PM, Dan Stromberg<drsalists(a)gmail.com> wrote:
>>
>> On Sun, Apr 22, 2018 at 2:51 PM, Dave Horsfall<dave(a)horsfall.org> wrote:
>>> Now, how many youngsters know the difference between paging and swapping?
>> I'm a mere 52, but I believe paging is preferred over swapping.
>>
>> Swapping is an entire process at a time.
>>
>> Paging is just a page of memory at a time - like 4K or something thereabout.
> Early pages were 1K.
What machines are we talking about then?
PDP-11 have 8K pages. VAX have 512 byte pages, if we talk about hardware.
(And yes, I know pages on PDP-11s are not fixed in size, but if you want
the page to go right up to the next page, it's 8K.)
> The fun argument is what is Virtual Memory. Typically, people align that with paging, but you can stretch the definition to cover swapping.
> This was a point of contention in the early VAX Unix days as the ATT (System III, even V?) didn’t support paging on the VAX whereas BSD did.
> Our comment was that “It ain’t VIRTUAL memory if it isn’t all there” as opposed to virtual addressing.
Weird comment. What does that mean? On a PDP-11, all your virtual memory
was always there when the process was on the CPU, but it might not be
there at other times. Just as not all processes' memory would be in
physical memory all the time, since that often would require more
physical memory than you had.
But you normally did not have demand paging, since that was not really
gaining you much on a PDP-11. On the other hand, overlays do the same
thing for you, but in userspace.
So you would claim that ATT Unix did not have virtual memory because it
didn't do demand paging?
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Fake news knocks on the doors of Unixland: there's absolutely
no truth to the claim that the rm command was written by
Robert Morris. Rm was there from the beginning, when only
two people wrote Unix code--Thompson and Ritchie. In fact,
it would have been on PDP-7 Unix, which Morris never used.
Doug
On 4/22/18, Clem cole <clemc(a)ccc.com> wrote:
>
> BTW if you want to be correct about dates - the DEC released the Vax in 76
> not 78 ( I personally used to program Vax serial #1 at CMU under VMS 1.0
> before I was at UCB which is what Dan had asked).
As I remember it, DEC announced the VAX in 1976 or 1977, and first
official customer ship didn't happen until 1978. Holy Cross had one
of the hardware beta machines in 1977. It ran a beta version of VMS
(version X0.5 initially). I ported a bunch of programs to the VAX,
including the PDP-10 version of Adventure.
-Paul W.
On 2018-04-24 01:30, Grant Taylor <gtaylor(a)tnetconsulting.net> wrote:
> On 04/23/2018 04:15 PM, Warner Losh wrote:
>> It's weird. These days lower LBAs perform better on spinning drives.
>> We're seeing about 1.5x better performance on the first 30% of a drive
>> than on the last 30%, at least for read speeds for video streaming....
> I think manufacturers have switched things around on us. I'm used to
> higher LBA numbers being on the outside of the disk. But I've seen
> anecdotal indicators that the opposite is now true.
That must have been somewhere in the middle of history in that case. Old
(proper) drives had/have track 0 at the outer edge. The disk loaded the
heads after spin up, and that was at the outer edge, and then you just
locked on to track 0, which should be nearby.
Heads had to be retracted for the disk pack to be replaced.
But this whole optimization for swap based on transfer speeds makes no
sense to me. The dominating factor in spinning rust is seek times, and
not transfer speed. If you place the swap at one end of the disk, it
won't matter much that transfers will be faster, as seek times will on
average be much longer, and that will eat up any transfer gain ten times
over before you even notice. (Unless all your disk ever does is swapping,
at which time the heads can stay around the swapping area all the time.)
Which is also why the file system for RSX (ODS-1) placed the index file
(equivalent of the inode table) at the middle of the disk by default.
Not sure if Unix did that optimization, but I would hope so. (Never dug
into that part of the code.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Computer pioneer Niklaus Wirth was born on this day in 1934; he basically
designed ALGOL, one of the most influential languages ever, with just
about every programming language in use today tracing its roots to it.
His name is pronounced "vurt" but he would accept "worth", and he joked
that you could call him by name or by value (you need to know ALGOL to
understand).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>Date: Mon, 23 Apr 2018 13:51:07 -0400
>From: Clem Cole <clemc(a)ccc.com>
>To: Ron Natalie <ron(a)ronnatalie.com>
>Cc: Tim Bradshaw <tfb(a)tfeb.org>, TUHS main list <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] /dev/drum
... some stuff removed ...
>Exactly... For instance an RK04 was less than 5K blocks (4620 or some
>such - I've forgotten the actual amount). The disk was mkfs'ed to the
>first 4K and the leftover was given to the swap system. By the time of
>4.X, the RP06 was 'partitioned' into 'rings' (some overlapping). The 'a'
>partition was root, the 'b' was swap and one of the others was the rest.
>Later the 'c' was a short form for copying the entire disk.
Wondered why, but I guess now I know that's the reason Digital UNIX on
alpha used the same disk layout. From an AlphaServer DS10 running
DU 4.0g, the output of "disklabel -r rz16a":
# /dev/rrz16a:
type: SCSI
disk: BB009222
label:
flags:
bytes/sector: 512
sectors/track: 168
tracks/cylinder: 20
sectors/cylinder: 3360
cylinders: 5273
sectors/unit: 17773524
rpm: 7200
interleave: 1
trackskew: 66
cylinderskew: 83
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0
8 partitions:
#        size    offset    fstype  [fsize bsize cpg]  # NOTE: values not exact
a: 524288 0 AdvFS # (Cyl. 0 - 156*)
b: 1572864 524288 swap # (Cyl. 156*- 624*)
c: 17773524 0 unused 0 0 # (Cyl. 0 - 5289*)
d: 0 0 unused 0 0 # (Cyl. 0 - -1)
e: 0 0 unused 0 0 # (Cyl. 0 - -1)
f: 0 0 unused 0 0 # (Cyl. 0 - -1)
g: 4194304 2097152 AdvFS # (Cyl. 624*- 1872*)
h: 11482068 6291456 AdvFS # (Cyl. 1872*- 5289*)
> From: "Ron Natalie"
> I'm pretty sure that swapping in V6 took place to a major/minor number
> configured at kernel build time.
Yup, in c.c, along with the block/character device switches (which converted
major device numbers to routines).
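For flavor, the relevant knobs in a V6-style c.c look roughly like this
(values illustrative, and modern '=' initializer syntax used where V6 wrote
e.g. "int swplo 4000;"):

	/* swap device fixed at kernel build time as a major/minor pair;
	 * no /dev node is consulted for the actual swapping */
	int	rootdev	= (0 << 8) | 0;	/* major 0, minor 0 */
	int	swapdev	= (0 << 8) | 0;	/* e.g. an RF11 fixed-head disk */
	int	swplo	= 4000;		/* first block used for swap */
	int	nswap	= 872;		/* number of swap blocks */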
> You could create a dev node for the swap device, but it wasn't used for
> the actual swapping.
Yes.
> We actually dedicated a full 1024 block RF11 fixed head to the system in
> the early days
Speaking of fixed-head disks, one of the Bell systems used (IIRC) an RS04
fixed-head disk for the root. DEC apparently only used that disk for swapping
in their OS's... So the DEC diagnostics felt free to scribble on the disk.
So, Field Circus comes in to work on the machine... Ooops!
Noel
> From: Clem Cole
> To be honest, I really don't remember - but I know we used letters for
> the different partitions on the 11/70 before BSD showed up.
In V6 (and probably before that, too), it was numbers:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man4/rp.4
So on my machine which had 2 x 50MB CalChomps, with a Diva controller, which
we had to split up into two partitions each (below), they were dv00, dv01, dv10
and dv11. Letters for the partitions made it easier...
> The reason for the partition originally was (and it must have been 6th
> edition when I first saw it), DEC finally made a disk large enough that
> number of blocks overflowed a 16 bit integer. So splitting the disk
> into smaller partitions allowed the original seek(2) to work without
> overflow.
No, in V6 filesystems, block numbers (in inodes, etc - also the file system
size in the superblock) were only 16 bits, so a 50MB disk (100K blocks) had to
be split up into partitions to use it all. True of the RP03/04 in V6 too (see
the man page above).
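The arithmetic, for anyone who wants to check it (my own example, assuming
512-byte blocks):

	#include <stdio.h>

	/* 16-bit block numbers => at most 2^16 blocks per filesystem */
	int main(void)
	{
		long maxfs = (1L << 16) * 512;   /* 32MB per filesystem */
		long drive = 100000L * 512;      /* ~50MB, ~100K blocks */
		printf("max fs: %ld MB\n", maxfs >> 20);
		printf("partitions needed: %ld\n",
		       (drive + maxfs - 1) / maxfs);   /* prints 2 */
		return 0;
	}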
Noel
Ingo wrote:
> i have been working hard to reduce the number of options of low usefulness
Ah, soothing classical Unix Musik, so rare in the cacophonous Linux era.
Doug
Ray Tomlinson, computer pioneer, was born on this day in 1941. He is
credited with inventing this weird thing called "email" on the ARPAnet, in
particular the "@" sign to designate a remote host (although some jerk --
his name is not important -- is claiming that he was first).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Clem Cole:
On the other hand, we still 'dump core' and use the core files for
debugging. So, while the term 'drum' lost its meaning, 'core file' - might
be considered 'quaint' by todays hacker, it still has meaning.
====
Just as we still speak of dialling and hanging up the phone,
electing Presidents, and other actions long since made obsolete
by changes of technology and culture.
Norman Wilson
Toronto ON
> From: Warner Losh
> Drum memory stopped being a new thing in the early 70's.
Mid 60's. Fixed-head disks replaced them - same basic concept, same amount of
bits, less physical volume. Those lasted until the late 70's - early PDP-11
Unixes have drivers for the RF11 and RS0x fixed-head disks.
The 'fire-hose' drum on the GE 645 Multics was the last one I've heard
of. Amusing story about it here:
http://www.multicians.org/low-bottle-pressure.html
Although reading it, it may not have been (physically) a drum.
> There never was a drum device, at least a commercial, non-lab
> experiment, for the VAXen. They all swapped to spinning disks by then.
s/spinning/non-fixed-head/.
Noel
Does anyone know if UToronto's MRS database system (from about 1979) has
survived? It was described in:
Robert Hudyma, John Kornatowski, Ivor Ladd. MRS: A microcomputer
database management system. Proceedings of the 1981 ACM SIGSMALL
symposium on Small systems and SIGMOD workshop on Small database
systems, pp 174-180.
Apparently it was distributed to over 50 unix sites. This is the
software which became the MISTRESS and later EMPRESS products.
De
The recent Empress and earlier PC[67]300 conversations have churned my
failing memory to catch up on the CTIX versions I ran throughout the
1980s.
I (sort of) remember 5.x and 6.x as being the releases we faced. The 5.x
ones were derived from SVR1 IIRC. When 6.x arrived, SVR2+ was the order
of the day, but I don't recall much or anything of SVR3 creeping in.
Certainly no RFS or the like. And Convergent wasn't shy about letting
bits of Berkeley code sneak in when that made sense.
I think the UUCP code got a significant update between 5 and 6. Didn't
the 5.x uucico have the "window > 3 == core dump" bug? By 6.x I recall it
grew 'G' protocol (at least).
Any ex-Convergent hacks on the list who can fill in the blanks?
--lyndon
We lost software engineer Dick Hustvedt on this day in 2008, following
severe injuries in a vehicle accident. He contributed much to RSX-11 and
VMS, including the infamous "microfortnight" and the SD730 Fixed Head
Solar Horologue. An obituary of him can be found at
http://software.intel.com/en-us/blogs/2008/04/23/dick-hustvedt-the-consumma…
(and it's worth reading).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
We lost Robert Taylor, computer scientist and Internet pioneer, on this
day in 2017. Amongst other things, he helped invent the mouse, pioneered
computer communications leading up to ARPAnet, developed the computer
science lab at Xerox...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Today I reached a minor milestone with my 'Venix restoration project' that
I talked about months ago. I ran a Venix 86 binary (sync) to successful
completion on my FreeBSD/amd64 box (though none of the code should be too
FreeBSD specific).
So, I hacked https://github.com/tkchia/reenigne.git to remove the DOS
loader and emulator and to add a Venix system call loader and emulator, or
at least the start of one. I've also added FP instruction parsing, but it's
100% wrong (it just parses the instructions and does nothing else to decode
or implement them). With this, I'm able to load OMAGIC binaries from the
extant venix 86 distributions and run them. The only one that runs
successfully is sync() since I've not yet implemented argument passing or
any of the other 58 system calls :). NMAGIC should be pretty quick after
this.
This is but a step on the road to getting the Venix compiler running so I
can see how much of the system I can recreate from the v7 and other sources
that are on TUHS.
Not sure who, if anybody, cares about this stuff. I thought people here
might be interested. I've pushed the results to
https://github.com/bsdimp/venix if you care. This program is in the
tools/86sim directory. There's also a doc directory where I document the
Venix 86 ABI, as well as doing a very deep-dive into a disassembled
/bin/sync to discover what I can from it (turns out, it's quite a lot).
So, I thought I'd share this here. Don't know if anybody else is
interested, but you never know until you tell people about stuff...
Warner
I was sure that I'd read a paper on the legal history of Unix. So I did a
Google search for it, and found a link to the PDF. The linked PDF was on
the TUHS website :-)
http://wiki.tuhs.org/lib/exe/fetch.php?media=publications:theses:gmp_thesis…
I'd better do a backup of my brain, as I've got a few flakey DRAM chips.
Cheers, Warren
> From: Clem Cole
> first of Jan 83 was the day the Arpanet was supposed to be turned off
Err, NCP, not the ARPANet. The latter kept running for quite some time,
serving as the Internet's wide-area backbone, and was only slowly turned off
(IMP by IMP) in the late 80's, with the very last remnants finally being
turned off in 1990.
> The truth is, it did not happen, there were a few exceptions granted for
> some sites that were not quite ready (I've forgotten now).
A few, yes, but NCP was indeed turned off for most hosts on January 1, 1983.
> From: "Erik E. Fair"
> as of the advent of TCP/IP, all those Ethernet and Chaosnet connected
> workstations became first class hosts on the Internet, which they
> could not be before.
Huh? As I just pointed out, TCP/IP (and the Internet) was a going concern well
_before_ January 1, 1983 - and one can confidently say that even had NCP _not_
been turned off, history would have proceeded much as it actually did, since
all the machines not on the ARPANET would have wanted to be connected to the
Internet.
(Also, to be technical, I'm not sure if TCP/IP ever really ran on CHAOSNet
hardware - I know I did a spec for it, and the C Gateway implemented it, and
there was a Unix machine at EECS that tried to use it, but it was not a great
success. Workstations connected to the CHAOSNet as of that date - AFAIK, just
LISP Machines - could only get access to the Internet via service gateways,
since at that point they all only implemented the CHAOS protocols; Symbolics
did TCP/IP somewhat later, IIRC, although I don't know the exact date.)
Noel
> I rewrote the article on the Software Tools project
An excellent job, Deborah.
> the Software Tools movement established one of the earliest traditions of open source
Would you be open to saying "reestablished"? Open source (not so called,
and in no way portable) was very much a tradition of SHARE in the late
1950s. Portability, as exemplified in ACM's collected algorithms, came
in at the same time that industry moved to a model of trade secrets and
intellectual property. Open source went into eclipse.
Doug
I rewrote the article on the Software Tools project and, thanks to Bruce
Borden's efforts to upload, they accepted it within 1 day. You can see
it here: https://en.wikipedia.org/wiki/Software_tools_users_group
The Usenix article in Wiki is pretty thin, in case anyone would like to
spiffy it up.
Deborah
> From: Steve Nickolas
> I thought the epoch of the Internet was January 1, 1983.
Turning off NCP was a significant step, but not that big a deal in terms of
its actual effects, really.
For those of us already on the Internet before that date (and there were quite
a few of us: since the number of ARPANet ports was severely limited, and the
non-ARPANet-connected machines were almost all time-sharing systems - so lots
of actual users - there was a lot of value in an Internet connection), it
didn't produce any significant change - the universe of
machines we could talk to didn't change (since we could only talk to
ARPANet-connected machines with TCP), etc.
And for ARPANET-connected machines, there too, things didn't change much - the
services available (remote login, email, etc) remained the same - it was just
carried over TCP, not NCP.
I guess in some sense it marked 'coming of age' for TCP/IP, but I'd analogize
it to that, rather than a 'birth' date.
Noel
> From: Clem Cole
> Katie Hafner's: Where Wizards Stay Up Late: The Origins Of The Internet
> ...
> It's a great read
Yes, she did a great deal of careful research, and it's quite accurate.
It _is_ pointed toward a general readership, not a technical one, so it's not
necessarily the best _technical_ history (which she had the material at hand
to produce, had she wanted to - but did not). Still, very worthwhile.
Noel
A nerdy group on an Aussie list are discussing old Unix cracks, and the
infamous "SPL 0" brick-that-box came up. I first saw it in ";login:" (I
think), and, err, tried it (as did others)...
Can anyone reproduce the code? It went something like:
> [ SPL 0 ]
>
> I only did that once (and you should've heard what he said to me)...
> I'm still trying to find the source for it (it was published in a
> ";login:" journal) to see if SIMH is vulnerable.
The concept was simple enough - fill your entire memory space with an uninterruptible instruction. It would have gone something like:
	opc = 000230	; 000230 is the opcode for SPL 0
	sys brk, -1	; or whatever value got you all 64k of address space
	mov #place, sp
	jmp place
	. = opc - 2	; the -2 is to allow for the PC increment on an
			; instruction fetch, which I believe happens
			; before any execution
place:
	jsr pc, -(pc)
Ring any bells, anyone?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Dave Horsfall
> The Internet ... was born on this day in 1969, when RFC-1 got published
I have this vague memory that the Internet-History list decided that the
appropriate day was actually the day the format of the v4 headers was set,
i.e. 16 June, 1978. (See IEN-68, pg. 12, top.)
Picking the date of RFC-1 seems a little odd. Why not the day the first packet
was send over a deployed IMP, or the day the RFP was sent out, or the contract
let? And the ARPANet was just one predecessor; one might equally have picked a
CYCLADES date...
> (spelled with a capital "I", please, as it is a proper noun) ... As I
> said at a club lecture once, there are many internets, but only one
> Internet.
I myself prefer the formulation 'there are many white houses, but only one
White House'! :-)
Noel
J. Presper Eckert was born on this day in 1919; along with John Mauchly,
he was a co-designer of ENIAC, one of the world's first programmable
electronic computers. Yes, there is a long-running dispute over whether
ENIAC or Colossus was first; being a Pommie, I'm siding with Colossus :-)
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Steve Johnson:
But in this case, part of the requirement was to pass some standard
simulation tests (in FORTRAN, of course). He was complaining that
these programs had bugs and didn't give the right answer.
====
This reminds me of an episode during my time at Bell Labs.
The System V folks wanted to make pipes that were streams;
our experience in Research showed that that was useful. We'd
done it just by making pipe(2) create a stream. This caused
some subtle differences in semantics (pipes became full-duplex;
writing to a pipe put a delimiter in the stream, so that a
corresponding read on the other end would stop at the delimiter;
write(pipefd, "", 0) therefore generated something that would
make read(pipeotherfd, buf, len) return 0). We'd been running
all our systems that way for a while, and had uncovered no
serious problems.
But the System V folks were very nervous about it anyway, and
wrote a planning document in which they proposed to create a
new, different system call to make stream pipes. pipe(2) would
make an old-fashioned pipe; spipe(2) (or whatever it was called,
I forget the name) had to be called to get a stream. The document
didn't really explain the justification for this. To us in
Research it just sounded crazy.
Someone else was going to attend a meeting with the developers,
but at the last minute he had a conflict, so he drafted me to
go. Although I can be pretty blunt in writing, I try not to be
so much so in person when dealing with people I don't know; so
rather than asking how they could be so crazy as to add a new
kind of pipe, I asked why they really thought it necessary.
It took a little probing, but the answer turned out to be that
their management insisted that everything pass an official
verification suite to prove compliance with the System V,
Consider It Standard; and said verification suite didn't just
check that the first file descriptor returned by pipe(2) could
be read and the second written, it insisted that the first could
not be written and the second not read. Full-duplex pipes didn't
meet the standard, it was claimed.
I asked what exactly is the standard? The SVID, I was told.
What does the SVID really say, I wondered? We got a copy and
looked up pipe(2). According to the official standard, the
first file descriptor must be readable and the second writeable,
but there was no statement that it couldn't work the other way too.
Full-duplex pipes did in fact meet the standard; it was the
verification suite that, in an excess of zeal, didn't conform.
The developers were absolutely delighted with this. They too
thought it was stupid to have two different kinds of pipes,
particularly given our experience that full-duplex delimited
pipes didn't break anything. They were very happy to have
Research not just yell at them for doing things differently
from us, but help them figure out how to justify doing things
right.
I don't know just how they took this further with management,
but as it came out in SVr4, pipe(2) returned a full-duplex
stream. This is still true even unto Solaris 10, where I just
tested it.
I made friends that day. That developer group kept in touch
with me as they did further work on pipes, the terminal driver,
pseudo-ttys, and other things. I didn't agree with everything
they did, but we were able to discuss it all cordially.
Sometimes the verification program just needs to be fixed.
And sometimes the developers that seem set on doing the wrong
thing really want help in subverting whatever is forcing that
on them, because they really do know what the right thing is.
Norman Wilson
Toronto ON
Just had a look at RFC-1, my first look ever. First thing I noticed is
the enormous amount of abbreviations one is assumed to be able to
instantly place :-)
So looking up IMP for instance the wiki page gives me this funny titbit
"When Massachusetts Senator Edward Kennedy learned of BBN's
accomplishment in signing this million-dollar agreement, he sent a
telegram congratulating the company for being contracted to build the
"Interfaith Message Processor"."
https://en.wikipedia.org/wiki/Interface_Message_Processor
> Shortly after I arrived, the comp center announced a
brand-new feature -- permanent disc storage! (actually, I think it
was a drum...)
Irrelevant to the story (or Unix), but it was indeed a disc drive--much
more storage per unit volume than drums, which date to the 1940s, if
not before. Exact opposite of current technology: super heavy and
rigid combs banged in and out of the disk stack. The washing-machine
sized machine could be driven to walk across the floor. It would not
be nice to be caught in its path. (Fortunately ordinary work loads
did not have such an effect.) Vic Vyssotsky calculated that with only
10 times its 10MB capacity, we could have kept the entire printed
output since the advent of computers at the Labs on line.
Doug
The Internet (spelled with a capital "I", please, as it is a proper noun)
was born on this day in 1969, when RFC-1 got published; it described the
IMP and ARPAnet.
As I said at a club lecture once, there are many internets, but only one
Internet.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> Date: Fri, 30 Mar 2018 00:28:13 -0400
> From: Clem cole <clemc(a)ccc.com>
>
> Also, joy / BSD 4.2 was heavily influenced by Accent (and RIG )and the Mach memory system would eventually go back into BSD (4.3 IIRC) - which we have talked about before wrt to sockets and Accent/Mach’s port concept.
From an "outsider looking in” perspective I’m not sure I recognise such heavy influence in the sockets code base. Of course, suitability for distributed systems was an important part of the 4.2BSD design brief and Rick Rashid was one of the steering committee members, that is all agreed.
However, in the code evolution for sockets I only see two influences that do not seem to be direct continuations from earlier Arpanet Unices and that have possible Accent origins:
- Addition of sendto()/recvfrom() to the API. Earlier code had poor support for UDP and was forced through the TCP-focused APIs, with fixed endpoint addresses. It could be Accent inspired; it could also be a natural solution for sending datagrams. For example, Jack Haverty had hacked a “raw datagram” facility into UoI Arpanet Unix around ’79 (it’s in the Unix Tree).
- Addition of a facility to pass file descriptors using sendmsg()/recvmsg() in the local domain (a sketch of this facility in its modern form follows below). This facility was only added at the very last moment (it was not in 4.1c, only in 4.2). I’m told that the CSRG team procrastinated on this because they did not see much use for it — and indeed it was mostly ignored in the field for the next two decades. Joy had left by that time, so perhaps the dynamics had changed.
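(A minimal sketch of the sending side of that facility, in its modern POSIX
form -- sendmsg() with SCM_RIGHTS over an AF_UNIX socket; 4.2BSD itself
carried the descriptors in the msghdr's msg_accrights field rather than in a
control message.)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int
send_fd(int sock, int fd)
{
	struct msghdr msg;
	struct iovec iov;
	struct cmsghdr *cmsg;
	char dummy = '*';
	char ctrl[CMSG_SPACE(sizeof(int))];

	memset(&msg, 0, sizeof msg);
	iov.iov_base = &dummy;		/* must send at least one data byte */
	iov.iov_len = 1;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = ctrl;
	msg.msg_controllen = sizeof ctrl;

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;	/* payload is file descriptors */
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}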
Earlier I thought that the select() call may have come from Accent, but it was inspired by the Ada select statement. Conceptually, it was preceded on Unix by Haverty’s await() call (also ’79).
For clarity: I wasn’t there, just commenting on what I see in the code.
Paul
The recent discussion of long-lived applications, and backwards
compatibility in Unix, got me thinking about the history of shared
objects. My own experience with Linux and MacOS is that
statically-linked applications tend to continue working from release
to release, but shared objects provided by the OS tend not to be
backwards compatible, and one often has to carry around with the
application the exact C runtime and other shared objects the program
was linked against. This is in stark contrast to shared libraries on
VMS, where great care is taken to maintain strict backward
compatibility release to release.
What is the history of shared objects on Unix? When did they first
appear, and with what object/executable file format? The a.out ZMAGIC
format doesn't seem to support them. I don't recall if COFF does.
MACH-O, at least the MacOS dialect of it, supports dynamic libraries.
ELF supports them.
Also, when was symbol preemption invented? Traditional shared library
designs such as in IBM System/370, VMS, and Windows NT don't have
it. As one who worked on optimizations in compilers, I came to hate
symbol preemption because it prohibits many useful optimizations. ELF
does provide a way to turn it off, but it's on by default--you have to
explicitly declare symbols as protected or hidden via source language
pragmas to get rid of it.
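(To make the ELF behaviour concrete, a small sketch, assuming GCC or Clang
on an ELF platform and compilation with -fPIC into a shared library:)

/* foo() has default visibility: a same-named symbol in the main
 * program or an earlier library preempts it at run time, so even
 * calls from inside this library must go through the PLT, and the
 * compiler cannot inline or otherwise shortcut them. */
int foo(void) { return 42; }

/* protected visibility: still exported, but references from within
 * this library bind locally, so the optimizer is free again. */
__attribute__((visibility("protected")))
int foo_protected(void) { return 42; }

int bar(void) { return foo() + foo_protected(); }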
-Paul W.
Time for another hand-grenade in the duck pond :-) Or as we call it
down-under, "stirring the possum".
On this day in 2010, it was found unanimously that Novell, not SCO, owned
"Unix". SCO appealed later, and it was dismissed "with prejudice"; SCO
shares plummeted as a result.
As an aside, this was the first and only time that I was on IBM's side,
and I still wonder whether M$ was bankrolling SCO in an effort to wipe
Linux off the map; what sort of an idiot would take on IBM?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On this day in 1778 businessman Oliver Pollock created the "$" sign, and
now we see it everywhere: shell prompts and variables, macro strings, Perl
variables, system references (SYS$BLAH, SWAP$SYS, etc), etc; where would
we be without it?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
[TUHS] long lived programs (was Re: RIP John Backus)
> Every year someone takes some young hotshot and points them at some
"impossible" thing and one of them makes it work. I don't see that
changing.
Case in point.
We hired Tom Killian, a young high-energy physicist disenchanted
with contributing to hundred-author papers. He'd done plenty of
instrument programming, but no operating systems. So, high-energy
as he was, he cooked up an exercise to get his feet wet.
The result: /proc
Doug
On 3/17/2018 12:22 PM, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Leave it to IBM to do something backwards.
>
> Of course, that was in 1954, so I can't complain, it was 11 years before
> I was born. But that's ... odd.
>
> Was subtraction easier than addition with digital electronics back then?
> I would think that they were both the same level of effort (clock
> cycles) so why do something obviously backwards logically?
Subtraction was done by taking the two's complement and adding. I
suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
intended that programmers store array elements at increasing memory
addresses, and reference an array element relative to the address of the
last element plus one. This would allow a single index register (and
there were only three) to be used as the index and the (decreasing)
count. See the example on page 97 of:
James A. Saxon
Programming the IBM 7090: A Self-Instructional Programmed Manual
Prentice-Hall, 1963
http://www.bitsavers.org/pdf/ibm/7090/books/Saxon_Programming_the_IBM_7090_…
The Fortran compiler writers decided to reverse the layout of array
elements so a Fortran subscript could be used directly in an index register.
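(A sketch of the addressing trick in C, for illustration only -- the 704/7090
reality was index registers and assembly: one variable serves as both the
decreasing count and the offset back from the address of the last element
plus one.)

#include <stddef.h>

/* sum n elements that end just before 'end', i.e. end == &a[n] */
int
sum(const int *end, size_t n)
{
	int s = 0;
	size_t i;

	for (i = n; i > 0; i--)	/* i is loop count and index at once */
		s += end[-i];	/* effective address: end minus i */
	return s;
}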
Hi,
The Hacker's Dictionary says that daemons were so named in CTSS. I'm
guessing then that Ken Thompson brought them into Unix? I've noticed that
more recent implementations of init have shunned the traditional
terminology in favor of the more prosaic word "services". For example,
Solaris now has SMF, the Service Management Facility, and systemd, the
Linux replacement for init, has services as well. It makes me a little sad,
because it feels like some of the imaginativeness, fancifulness, and
playfulness that imbue the Unix spirit are being lost.
[try-II]
On Fri, Mar 23, 2018 at 6:43 AM, Tim Bradshaw <tfb(a)tfeb.org> wrote:
> On 22 Mar 2018, at 21:05, Bakul Shah <bakul(a)bitblocks.com> wrote:
>
>
> I was thinking about a similar issue after reading Bradshaw's
> message about FORTRAN programs being critical to his country's
> security. What happens in 50-100 years when such programs have
> been in use for a long time but none of the original authors
> may be alive? The world may have moved on to newer languages
> and there may be very few people who study "ancient" computer
> languages and even they won't have in-depth experience to
> understand the nuances & traps of these languages well enough.
> No guarantee that FORTRAN will be in much use then! Will it be
> like in science fiction where ancient spaceships continue
> working but no one knows what to do when they break?
>
>
> My experience of large systems like this is that this isn't how they work
> at all. The program I deal with (which is around 5 million lines naively
> (counting a lot of stuff which probably is not source but is in the source
> tree)) is looked after by probably several hundred people. It's been
> through several major changes in its essential guts and in the next ten
> years or so it will be entirely replaced by a new version of itself to deal
> with scaling problems inherent in the current implementation. We get a new
> machine every few years onto which it needs to be ported, and those
> machines have not all been just faster versions of the previous one, and
> will probably never be so.
>
> What it doesn't do is to just sit there as some sacred artifact which
> no-one understands, and it's unlikely ever to do so. The requirements for
> it to become like that would be at least that the technology of large-scale
> computers was entirely stable, compilers, libraries and operating systems
> had become entirely stable and people had stopped caring about making it do
> what it does better. None of those things seems very likely to me.
>
> (Just to be clear: this thing isn't simulating bombs: it's forecasting the
> weather.)
>
+1 - exactly my point.
We have drifted a bit from pure UNIX, but I actually do think this is
relevant to UNIX history. Once UNIX started to run on systems targeting
HPC loads where Fortran was the dominant programming language, UNIX quickly
displaced custom OSs and became the dominant target, even if at the
beginning of that transition the 'flavor' of UNIX did vary (we probably
can and should discuss how that happened and why independently -- although
I will point out the UNIX/Linux implementation running at say LLNL != the
version running at say NASA Moffett). And the truth is today, for small
experiments you probably run Fortran on Windows on your desktop. But for
'production' the primary OS for Fortran is a UNIX flavor of some type,
and has been that way since the mid-1980s - really starting with the UNIX
wars of that time.
As I also have said here and elsewhere, HPC and very much its
lubricant, Fortran, are not something 'academic CS types' like to study
these days - even though Fortran (HPC) pays my and many of our
salaries. Yet it runs on the system those same academic types all prefer -
*i.e.* Ken and Dennis' ideas. The primary difference is the type of
program the users are running. But Ken and Dennis' ideas work well for
almost all users and span specific application markets.
Here is a picture I did a few years ago for a number of Intel exec's. At
the time I was trying to explain to them that HPC is not a single style of
application and also to help them understand that there are two types of
value - the code itself and the data. Some markets (*e.g.* financial) use
public data but the methods they use to crunch it (*i.e.* the codes) are
private, while other market segments might have private data (*e.g.*
oil and gas) but different customers use the same or similar codes to
crunch it.
For this discussion, think about how much of the code I show below is
complex arithmetic - while much of it is searching, google style, a lot is
just plain nasty math. The 'nasty math' has not changed over the years,
and thus those codes are dominated by Fortran. [Note Steve has pointed out
that with AI maybe the math could change in the future - but certainly so
far, the history of these markets is basically differential equation
solvers.]
As Tim says, I really can not see that changing, and the reason (I
believe) is that I do not see any compelling economic reason to do so.
Clem
On this day in 1978 Kurt Shoens placed the following comment in
def.h (the fork i maintain keeps it in nail.h):
/*
* Mail -- a mail program
*
* Author: Kurt Shoens (UCB) March 25, 1978
*/
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
"The only thing I can think of is to use have programs that
translate programs in todays languages to a common but very
simple universal language for their "long term storage". May
be something like Lamport's TLA+? A very tough job.
"
Maybe not so hard. An existence proof is Brenda Baker's "struct",
which was in v7. It converted Fortran to Ratfor (which of course
turned it back to Fortran). Interestingly, authors found their
completely reorganized code easier to read than what they had
written in the first place.
Her big discovery was a canonical form--it was not a matter of
taste or choice how the code got rearranged.
It would be harder to convert the code to say, Matlab,
because then you'd have to unravel COMMON statements and
format strings. It's easy to cook up nasty examples, like
getting away with writing beyond the end of an array, but
such things are rare in working code.
Doug
A core package in a lot of the geospatial applications is an old piece of
mathematical code originally written in Fortran (probably in the sixties).
Someone probably in the 80's recoded the thing pretty much line for line
(maintaining the horrendous F66 variable names etc.) into C. It's
probably ripe for a jump to something else now.
We've been through four major generations of the software. The original
was all VAX based with specialized hardware (don't know what it was written
in). We followed that on with a portable UNIX (but mostly Suns, but ours
worked on SGI, Ardent, Stellar, various IBM AIX platforms, Apollo DN1000's,
HP, DEC Alphas). This was primarily a C application. Then right about
the year 2000, we jumped to C++ on Windows. Subsequently it got back
ported to Linux. Yes there are some modules that have been unchanged for
decades, but the system on the whole has been maintained.
The bigger issue than software becoming obsolete is that the platform needed
to run it goes away.
> From: Larry McVoy <lm(a)mcvoy.com>
> Going forward, I wish that people tried to be simple as they tackle the
> more complicated problems we have.
I have a couple of relevant quotations on my 'Some Computer-Related Lines'
page:
"Deliberate complexity is the mark of an amateur. Elegant simplicity is the
mark of a master."
-- Unknown, quoted by Robert A. Crawford
"Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses
remove it."
-- Alan Perlis
"The most reliable components are the ones you leave out."
-- Gordon Bell
(For software, the latter needs to be read as 'The most bug-free lines of
code are the ones you leave out', of course.)
I remember watching the people building the LISP machine, and thinking 'Wow,
that system is complex'. I eventually decided the problem was that they were
_too_ smart. They could understand, and retain in their minds, all that
complexity.
Noel
On Wed, Mar 21, 2018 at 7:50 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I was sys admin for a Masscomp with a 40MB disk
Right - an early MC-500/DP system, although I think the minimum was a 20M
[ST-506 based] disk.
On Wed, Mar 21, 2018 at 8:55 PM, Mike Markowski <mike.ab3ap(a)gmail.com>
wrote:
> I remember Masscomp ... it allowed data acquisition to not be swapped
> out.
> Actually, not quite right in how it worked, but in practice you got the
desired behavior. More in a minute.
On Wed, Mar 21, 2018 at 9:00 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I remember the Masscomps we had fondly. The ones we had were 68000 based
> and they had two of those CPUs running in lock step
Not lock step ... 'executor' and 'fixor' -- actually a solution Forest
Baskett proposed at Stanford. The two implementations that actually did
this were the Masscomp MC-500 family and the original Apollo.
> ... because the
> 68K didn't do VM right. I can't remember the details, it was something
> like they didn't handle faults right so they ran two CPUs and when the
> fault happened the 2nd CPU somehow got involved.
>
Right, the original 68000 did not save the faulting address properly
when it took the exception, so there was not enough information to roll
back from the fault and restart. The microcode in the 68010 corrected
this flaw.
>
> Clem, what model was that?
We called the dual 68000 board the CPU board and the 68010/68000 board the
MPU. The difference was which chip was in the 'executor' socket and some
small changes in some PALs on the board to allow the 68010 to actually take
the exception. Either way, the memory fault was processed by the
'fixor'. BTW: the original MC-500/DP was a dual processor system, so it
had 4 68000s just for the 2 CPU boards -- i.e. an executor and fixor on
each board. The later MC-5000 family could handle up to 16 processor
boards depending on the model (MC-5000/300 was dual, MC-5000/500 up to 4 and
the MC-5000/700 up to 16 processor boards).
Also for the record, the MC-500/DP and MC-5000/700 predate the
multiprocessor 'Symmetry System' that Sequent would produce by a number of
years. The original RTU ran master/slave (Purdue VAX style) for the first
generation (certainly through RT 2.x and maybe 3.x). That restriction was
removed and a fully symmetric OS was created, as we did the 700
development. I've forgotten which OS release brought that out.
A few years later Stellix, while not a direct source-development child,
had the same Teixeira/Cole team as RTU - and was always fully symmetric -
i.e. lesson learned.
> And can you provide the real version of what I was trying say?
>
Sure ... soon after Motorola released the 68000, Stanford's Forest Baskett,
in a paper I have sadly lost, proposed that the solution to the 68000's
issue of not saving enough information when an illegal memory exception
occurred was, instead of allowing the memory exception, to return 'wait
states' to the processor - thus never letting it fault.
More specifically, the original microprocessor designs had a fixed time
in which the memory system needed to respond on a read or write cycle,
defined by the system clock. I don't have either Motorola 68000 or MOS
6502 processor books handy, but as a for instance, IIRC on a 1 MHz MOS 6502
you had 100 ns for this operation. Because EPROMs of the day could not
respond that fast (IIRC 400 ns for a read), the CPU chip designers created a
special 'WAIT' signal that would tell the processor to look for the data
on the next clock tick (and the wait signal could be repeated for each
tick, for an indefinite wait if need be). *i.e.* in effect, when running
from ROM a 6502 would run at .25 MHz if it was doing direct fetches from
something like an Intel 1702 ROM on every cycle. Similarly, early dynamic
RAM, while faster than ROM, had access issues with needing to ensure that
the refresh cycles and the access cycles aligned. Static RAMs, which were
fast, did not have these problems and could interface directly to the
processor, but static RAMs of the day were quite expensive chips (5-10
times the cost of DRAM), so small caches were built to front-end the
memory system.
Hence, the HW 'wait state' feature became standard (and I think is still
supported in today's processors), since it allows the memory system
and the processor to work at differing rates. *i.e.* the difference in
speed could be 'biased' to each side -> processor vs memory.
In his paper, Forest Baskett observed that if a HW designer used the wait
state feature, a memory system could be built that refilled the cache
using another processor, as long as the second processor that was 'fixing
up' the exception (i.e. refilling the cache with proper data) could be
ensured never to have a memory fault itself.
Baskett's idea was exactly how both the original Apollo and Masscomp CPU
boards were designed. On the Masscomp CPU board, there was a 68000 that
always ran from a static-RAM-based cache (which we called the 'executor').
If it tried to access a memory location that was not yet in cache but
was a legal memory access, the memory system sent wait states until the
cache was refilled, as you might expect. If the location was illegal, the
memory system also returned an error exception as expected. However, if it
was a legal address but not yet in memory, the second 68000 (the 'fixor')
was notified of the desire to access that specific memory location, and the
fixor ran the paging code to put the page into live memory. Once the cache
was properly set up, the executor could be released from its wait state
and the instruction allowed to complete.
When Motorola released the 68010 with the internal fixes that would allow
a faulting instruction to be restarted, Masscomp revised the CPU board
(creating the MPU board) to install a 68010 in the executor socket and
changed a few PALs in the memory system on the board. If a memory address
was discovered to be legal, but not in memory, the executor (now a 68010)
was allowed to return a paging exception; the stack was saved
appropriately, and the executor did a context switch to a different
process - allowing the executor to do something useful while the fixor
processed the fault. We did have to add kernel code for the executor
processor to restart at the faulting instruction on a context switch back
to the original process. The original fixor code did not need to be
changed, other than to remove the clearing of the 'waiting flop' that
restarted the executor. [RTU would detect which board was plugged into a
specific system automatically, so it was an invisible change to the end
user - other than that you got a few percent performance back if there
were a lot of page faults in your application, since the executor never
wait-stated.]
As for the way real-time and analog I/O worked, which Mike commented upon:
yes, RTU - Real Time Unix - supported features that could guarantee that I/O
would not be lost from the DMA front end. For those not familiar with
the system, it was a 'federated system' with a number of subsystems: the
primary UNIX portion that I just described, and then a number of other
specialized processors that surrounded it for specific tasks. This
architecture was chosen because the market we were attacking was the
scientific laboratory and in particular real-time oriented - for instance,
some uses were Mass General in the cardiac ward; on board every AWACS plane
to collect and retain all signals during any sortie [very interesting
application]; NASA used 75 of them in Mission Control as the 'consoles'
you see on TV, *etc* [which have only recently been replaced by PCs, as some
were still in use at least 2 years ago when I was last in Houston].
So besides the 2/4 68000s in the main UNIX system, there was another
dedicated 68000 running a graphics system on the graphics board, another
80186 running TCP/IP, and an AM2900 bit-slice processor called the Data
Acquisition and Control Processor (DA/CP) - which Mike hinted at, as well as
29000s for the FP unit and array processor functionality. Using the
DA/CP, in 1984, an MC-500 system could handle multiple 1 MHz analog 16-bit
signals - just by sampling them -> in fact, a cute demo at the Computer
Museum in Boston just connected a TV camera to the DA/CP and, without any
special HW, it could 'read' the video signal (to connect a TV camera
normally, it usually takes a bunch of special logic to interface to the TV
signals -- in this case it was just SW in the DA/CP).
The resultant OS was RTU, where we had modified a combination of 4.1BSD and
System III 'unices' (later updated to 4.2 and V.2) to support RT behaviors,
with pre-emption, ASTs [which I still miss - a great idea, and nicer than
traditional UNIX signals], async I/O, *etc*. Another interesting idea was
the concept of 'universes' that allowed, on a per-user basis, seeing a BSD
view of the system or a System V view.
One of the important things we did in RTU was rethink the file system
(although only a little then). Besides all the file types and storage
schemes you have with UFS, Masscomp added support for pre-allocated
extent-based files (like VMS and RSX) and then created a new file type,
the contiguous file, that used it [by the time of Stellar and
Stellix, we just made all files extent-based and got rid of the special
file type - *i.e.* it looked like UFS to the user, but under the covers was
fully extent-based]. So under RTU, once we had preallocated files that we
knew were on contiguous blocks, we could make all sorts of speed-ups and
guarantees that traditional UNIX cannot, because of the randomized location
of its disk blocks.
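(A modern analogue, offered for flavor only - this is not Masscomp's API:
preallocating a file's blocks up front with posix_fallocate(3), so later
real-time writes need no allocation work; unlike RTU's extent-based
contiguous files, on-disk contiguity is then merely likely, not guaranteed.)

#include <fcntl.h>
#include <unistd.h>

int
make_capture_file(const char *path, off_t bytes)
{
	int fd = open(path, O_CREAT | O_RDWR, 0644);

	if (fd < 0)
		return -1;
	if (posix_fallocate(fd, 0, bytes) != 0) {	/* reserve blocks now */
		close(fd);
		return -1;
	}
	return fd;	/* writes into [0, bytes) allocate nothing further */
}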
So the feature Mike was using was actually less a UNIX change, and did
not actually use the preemption and other types of RT features. We had
added the ability for the DA/CP to write/read information directly from the
disk without the OS getting in the way (we called it Direct-to-Disk I/O).
An RTU application created a contiguous file on disk large enough to store
the data the DA/CP might receive - many megabytes typically - but the data
itself was never passed through or stored in the UNIX buffer cache *etc*.
The microcode of the DA/CP and the RTU disk driver could/would cooperate,
and all kernel or user processing on the UNIX side was bypassed. Thus,
when the real-time data started to be produced by the DA/CP (say the AWACS
started getting radio signals on one of those 16-bit analog converters), the
digital representation of those analog signals was stored in the user's
file system independent of what RTU was doing.
The same idea, by the way, was why we chose to run IP/TCP on a
co-processor. So much of network I/O is 'unexpected' that we could
potentially lose the ability to make real-time guarantees. By keeping
all the protocol work outside of UNIX, the network I/O just operated on
'final bits' at the rate that was right for it. As an interesting aside,
this worked until we switched to a Motorola 68020 at 16 MHz or 20 MHz
(whatever it was, I've forgotten now) in the MC-5000 system. The host
ended up being so much faster than the 80186 in the ethernet coprocessor
board that we ended up offering a mode where they swapped roles and just
used the '186 board as a very heavily buffered ethernet controller, so we
could keep protocol processing at very low priority compared to other tasks.
However, if we did that, it did mean some of the system guarantees had to be
removed. My recollection is that only a couple of die-hard RT style
customers ended up continuing to use the original networking configuration.
Clem
> From: Dave Horsfall <dave(a)horsfall.org>
> Yep, and I'm glad that I had bosses to whom I didn't have to explain why
> my comments were so voluminous.
>
> And who first said "Write your code as though the next person to maintain
> it is a psychotic axe-wielding murderer who knows where you live"? I've
> often thought that way (as the murderer I mean, not the murderee).
>
> I'd name names, but he might be on this list...
>
I’ve always said that the person was a ‘reformed’ axe-wielding murderer
who knows where you live.
Please, a little decorum.
W.R.T. comments in code, and a bit of Unix, when I taught Unix Systems
programming at UCSD one of my students wanted to use his personal
computer for the programming. I didn’t care as long as it would pass the
tests I ran after the fact.
After his second homework was turned in, I stopped looking at his code
or even running it, as the comment blocks were just so well done. Nary
a comment in the function itself, just a block before each one very clearly
explaining what was happening.
David
"I was told in school (in 1985) that if I was ever lucky enough to
have access to the unix source, I'd find there were no comments. The
reason, I was told at the time, was that comments would make the
source code useful, and selling software was something AT&T couldn't
do due to some consent decree."
I can't speak for SYS V, but no such idea was ever mentioned in
Research circles. Aside from copyright notices, the licensing folks
took no interest in comments. Within Research there was tacit
recognition of good coding style--Ken's cut-to-the-bone code was
universally admired. This cultural understanding did not extend
to comments. There was disparagement for the bad, but not honor
for the good. Whatever comments you find in the code were put
there at the author's whim.
My own commenting style is consistent within a project, but
wildly inconsistent across projects, and not well correlated
with my perception of the audience I might be writing for.
Why? I'm still groping for the answer.
For important code, the custom is to describe it in a separate
paper, which is of course not maintained in parallel with
the code. In fact, comments are often out of date, too.
Knuth offered the remedy of "literate programming", which
might help in academic circles. In business, probably not.
Yet think of the possibility of a "literate spec", where
the code grows organically with the understanding of what
has to be done.
Doug
> From: Warren Toomey <wkt(a)tuhs.org>
> there is next to no commenting in the early code bases.
By 'early' you must mean the first 'C' PDP-11 Unixes, because certainly
starting with V6, it is reasonably well commented (to the point where I like
to say that I learned how to comment by reading the V6 code), e.g.:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/slp.c
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/dmr/bio.c
to pick examples from each author; and there are _some_ comments in the
assembler systems (both PDP-7 and PDP-11).
> Given that the comments never made it into the compiled code, there was
> no space reason to omit comments. There must have been another reason.
I was going to say 'the early disks were really small', but that hypothesis
fails because the very earliest versions (in assembler) do have some comments.
Although assembler is often so cryptic, the habit of putting a comment on each
instruction isn't so unreasonable.
So maybe the sort of comments one sees in assembler code (line-by-line
descriptions of what's happening; for subroutines, which arguments are in
which registers; etc) aren't needed in C code, and it took a while for them to
work out what sort of commenting _was_ appropriate/useful for C code?
The sudden appearance in V6 does make it seem as if there was a deliberate
decision to comment the code, and they went through it and added them in a
deliberate campaign.
> From: Andy Kosela <akosela(a)andykosela.com>
> "Practice of Programming" by Rob Pike and Brian Kernighan.
> ...
> They also state: "Comments ... do not help by saying things the code
> already plainly says ... The best comments aid ... by briefly pointing
> out salient details or by providing a larger-scale view of the
> proceedings."
Exactly.
Noel
Revision 1.1, Sun Mar 21 09:45:37 1993 UTC (25 years ago) by cgd
http://cvsweb.netbsd.org/bsdweb.cgi/src/sbin/init/init.c?rev=1.1&content-ty…
Today is commonly considered the birthday of NetBSD.
Theo told me (seven years ago) that he, cgd, and glass (and one other
person) planned it within 30 minutes after discussing with the CSRG and
BSDI guys in the hot tub at the Town & Country Resort in San Diego at
the January 25-29 1993 USENIX conference. (Does anyone have more to
share about this discussion?) Soon, cgd had set up a CVS repository
(forked 386BSD with many patchkits) which was re-rolled a few times (due
to corrupted CVS). (So maybe March 21 is later than the real birthday.)
As far as I know, it is the oldest continuously-maintained complete
open source operating system. (It predates Slackware Linux, FreeBSD,
and Debian Linux by some months.)
"NetBSD" wasn't mentioned by name in the April 19. 1993 release files
(but was named in the announcement).
ftp://ftp.netbsd.org/pub/NetBSD/misc/release/NetBSD/NetBSD-0.8
On April 28, the kernel was renamed to /netbsd, the boot loader
identified it as NetBSD, and various references of 386BSD were changed
to NetBSD.
https://github.com/NetBSD/src/commit/a477732ff85d5557eef2808b5cbf221f3c7455…
https://github.com/NetBSD/src/commit/446115f2d63299e52f34977fb4a88c289dcae9…
On 2018-03-21 14:48, Paul Winalski<paul.winalski(a)gmail.com> wrote:
>
> On 3/20/18, Clem Cole<clemc(a)ccc.com> wrote:
>> Paul can correct me, but I don't think DEC even developed a Pascal for TOPS
>> originally - IIRC the one I used came from the universities. I think the
>> first Pascal sold was targeted for the VAX. Also, RT11 and RSX were
>> 'laboratory' systems and those systems were dominated by Fortran back in
>> the day - so DEC marketing thought in those terms.
>>
> DEC did do a Pascal for RSX. I don't remember if it supported RT11 or
> RSTS. DEC did a BASIC compiler for RSTS and RSX. RSX and especially
> RT were designed mainly for real-time process control in laboratories.
DEC did COBOL, DIBOL, PASCAL, FORTRAN (-IV, -IV-PLUS, -77), and C, as
well as Datatrieve, for RSX and RSTS/E. Some of these were also available
for RT-11. Admittedly, the C compiler was very late to the game.
> A lot of the programming was in assembler for efficiency reasons
> (both time and space).
Yes. And MACRO-11 is pretty nice.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Let's see how much this thread can drift...
The venerable PDP-8 was introduced in 1965 today (or tomorrow if you're on
the wrong side of the date line). It was the first computer I ever used,
back around 1970 (I think I'd just left school and was checking out the
local University's computer department, and played with BASIC and FOCAL).
And (hopefully) coincidentally the Pentium first shipped in 1993; the
infamous FDIV defect was discovered a year later (and it turned out that
Intel was made aware of it by a post-grad student a bit earlier), and what
followed next was an utter farce, with some dealers refusing to accept the
results of a widely-distributed program as evidence of a faulty FPU.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: "Steve Johnson"
So, I have this persistent memory that I read, in some early Multics (possibly
CTSS, but ISTR it was Multics) document, a footnote explaining the origin of
the term 'daemon'. I went looking for it, but couldn't find it in a 'quick'
scan.
I did find this, though, which is of some interest: R. A. Freiburghouse, "The
Multics PL/1 Compiler" (available online here:
http://multicians.org/pl1-raf.html
if anyone is interested).
> There was a group that was pushing the adoption of PL/1, being used to
> code Multics, but the compiler was late and not very good and it never
> really caught on.
So, in that I read:
The entire compiler and the Multics operating system were written in EPL, a
large subset of PL/1 ... The EPL compiler was built by a team headed by
M. D. McIlroy and R. Morris ... Several members of the Multics PL/1 project
modified the original EPL compiler to improve its object code performance,
and utilized the knowledge acquired from this experience in the design of
the Multics PL/1 compiler.
The EPL compiler was written when the _original_ PL/1 compiler (supposedly
being produced by a consulting company, Digitek) blew up. More detail is
available here:
http://multicians.org/pl1.html
I assume it's the Digitek compiler you were thinking of above?
Noel
We lost computer pioneer John Backus on this day in 2007; amongst other
things he gave us FORTRAN (yuck!) and BNF, which is ironic, really,
because FORTRAN has no syntax to speak of.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I've put online at https://dspinellis.github.io/unix-history-man/ nine
timelines detailing the evolution of 15,596 unique documented facilities
(commands, system calls, library functions, device drivers, etc.) across
93 major Unix releases tracked by the Unix history repository.
For each facility you get a timeline starting from the release it first
appeared. Clicking on the timeline opens up the manual page for the
corresponding release. (Sadly, the formatting is often messed up,
because more work is needed on the JavaScript troff renderer I'm using.)
The associated scripts and the raw data are all available on GitHub.
Diomidis
A while ago someone was asking about the mt Xinu Unix manuals. I have a
found a complete set, currently owned by Vance Vaughan, one of the mt
Xinu founders. He is willing to donate them to Warren's Unix archive.
However, they are too expensive to ship to Australia.
Would anyone be willing to scan them in for the archive? Ah, there are
a lot of them (8? volumes). If so, I might be able to ship them to
somewhere in the US.
Let me know.
Thanks.
Deborah
Peter Guthrie Tait (1831--1901) seems to have recorded the oldest
mention of the thermodynamic demon of James {Clerk Maxwell} in the
page 213 image from Tait's book ``Sketch of Thermodynamics'' at
https://archive.org/stream/lifescientificwo00knotuoft#page/212/mode/2up
that was posted to this list by Bakul Shah <bakul(a)bitblocks.com> on
Tue, 20 Mar 2018 12:10:37 -0700.
I've been working on a bibliography (still unreleased) of Clerk
Maxwell, and the oldest reference that I had so far found to Maxwell's
demon is from an address by Sir William Thomson (later raised to Lord
Kelvin) entitled
The sorting demon of Maxwell: [Abstract of a Friday evening
Lecture before the Royal Institution of Great Britain,
February 28, 1879]
Proceedings of the Royal Institution of Great Britain 9,
113--114 (1882)
However, I've not been able to find that volume online. Hathi Trust
has only volumes 30--71, with numerous holes, and often, it will not
show page contents at all. The journal issue is old enough that few
university libraries are likely to have it, but it is probably
available through the Interlibrary Loan service.
I had also recorded
Harold Whiting
Maxwell's demons
Science (new series) 6(130), 83, July 1885
https://doi.org/10.1126/science.ns-6.130.83
and
W. Ehrenberg
Maxwell's demon
Scientific American 217(5) 103--110, November 1967
https://doi.org/10.1038/scientificamerican1167-103
plus numerous later papers and books.
I also went through a score of books on my shelf about physics or
thermodynamics, and finally found a brief mention of Maxwell's demon
in G. N. Lewis & M. Randall's famous text ``Thermodynamics'', first
published in 1923 (I have a 1961 reprint). The other books that I
checked remain strangely silent on that topic.
The Oxford English Dictionary (OED) online has this definition and
etymology:
>> ...
>> Maxwell's demon n. (also Maxwell demon) an entity imagined by Maxwell
>> as allowing only fast-moving molecules to pass through a hole in one
>> direction and only slow-moving ones in the other direction, so that if
>> the hole is in a partition dividing a gas-filled vessel, one side
>> becomes warmer and the other cooler, in contradiction of the second
>> law of thermodynamics.
>>
>> 1879 W. Thomson in Proc. Royal Inst. 9 113 Clerk Maxwell's `demon' is
>> a creature of imagination.., invented to help us to understand the
>> `Dissipation of Energy' in nature.
>>
>> 1885 Science 31 July 83/1 (heading) Maxwell's demons.
>>
>> 1956 E. H. Hutten Lang. Mod. Physics iv. 152 It would require a
>> Maxwell demon..to select the rapidly moving molecules according to
>> their velocity and concentrate them in one corner of the vessel.
>>
>> 1971 Sci. Amer. Sept. 182/2 Maxwell's demon became an intellectual
>> thorn in the side of thermodynamicists for almost a century. The
>> challenge to the second law of thermodynamics was this: Is the
>> principle of the increase of entropy in all spontaneous processes
>> invalid where intelligence intervenes?
>>
>> 1988 Nature 27 Oct. 779/2 Questions about the energy needed in
>> measurement began with Maxwell's demon.
>> ...
For the word `daemon', the OED has this:
>> ...
>> Etymology: Probably an extended use of demon ....
>>
>> A program (or part of a program), esp. within a Unix system, which
>> runs in the background without intervention by the user, either
>> continuously or only when automatically activated by a particular
>> event or condition. A distinction is sometimes made between the form
>> daemon, referring to a program on an operating system, and demon,
>> referring to a portion of a program, but the forms seem generally to
>> be used interchangeably, daemon being more usual.
>>
>> 1971 A. Bhushan Request for Comments (Network Working Group)
>> (Electronic text) No. 114. 2 The cooperating processes may be
>> `daemon' processes which `listen' to agreed-upon sockets, and
>> follow the initial connection protocol.
>>
>> 1983 E. S. Raymond Hacker's Dict. 53 The printer daemon is just a
>> program that is always running; it checks the special directory
>> periodically, and whenever it finds a file there it prints it
>> and then deletes it.
>>
>> 1989 DesignCenter ii. 41/3 The file server runs a standard set of
>> HP-UX system and network daemons.
>>
>> 1992 New Scientist 18 Jan. 35/2 These programs, which could recognise
>> simple patterns, were made up of several independent
>> information-processing units, or `demons', and a `master
>> demon'.
>>
>> 2002 N.Y. Times 7 Mar. d4/5 A mailer daemon installed on an e-mail
>> system can respond to a piece of incorrectly addressed e-mail
>> by generating an automated message to the sender that the
>> message was undeliverable.
>> ...
----------------------------------------
From The Hacker's Dictionary (1983), reproduced in the Emacs info node
Jargon, I find another `explanation' of daemon:
>> ...
>> :daemon: /day'mn/ or /dee'mn/ /n./ [from the mythological
>> meaning, later rationalized as the acronym `Disk And Execution
>> MONitor'] A program that is not invoked explicitly, but lies
>> dormant waiting for some condition(s) to occur. The idea is that
>> the perpetrator of the condition need not be aware that a daemon is
>> lurking (though often a program will commit an action only because
>> it knows that it will implicitly invoke a daemon). For example,
>> under {{ITS}} writing a file on the {LPT} spooler's directory
>> would invoke the spooling daemon, which would then print the file.
>> The advantage is that programs wanting (in this example) files
>> printed need neither compete for access to nor understand any
>> idiosyncrasies of the {LPT}. They simply enter their implicit
>> requests and let the daemon decide what to do with them. Daemons
>> are usually spawned automatically by the system, and may either
>> live forever or be regenerated at intervals.
>>
>> Daemon and {demon} are often used interchangeably, but seem to
>> have distinct connotations. The term `daemon' was introduced to
>> computing by {CTSS} people (who pronounced it /dee'mon/) and
>> used it to refer to what ITS called a {dragon}. Although the
>> meaning and the pronunciation have drifted, we think this glossary
>> reflects current (1996) usage.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> I'll have to redo my kludgy fix to gmtime() ... I guess I'll have to fix
> it for real, instead of my kludgy fix (which extended it to work for
> 16-bit results). :-)
> ...
> And on the -11/23:
> Note that the returned 'quotient' is simply the high part of the dividend.
Heh. I had decided that the easiest clean and long-lived fix was to just to do
it right, using the long division routine used in the V7 C compiler runtime:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/libc/crt/ldiv.s
and I vaguely recalled reading a DMR story that talked about that, so just for
amusement I decided to re-read it, and looked it up:
https://www.bell-labs.com/usr/dmr/www/odd.html
(the section "Comments I do feel guilty about"), and it's lucky I did, because
I found this:
Addendum 18 Oct 1998
Amos Shapir of nSOF (and of long memory!) just blackened (or widened) the
spot a bit more in a mail message, to wit:
'I gather the "almost" here is because this trick almost worked... It has a
nasty bug which I had to find the hard way!
The "clever part" relies on the fact that if the "bvc 1f" is not taken, it
means that the result could not fit in 16 bits; in that case the long value
in r0,r1 is left unchanged. The bug is that this behavior is not documented;
in later models (I found this on an 11/34) when the result does fit in 16
bits but not in 15 bits ... which makes this routine provide very strange
results!'
So this code won't work on an 11/23 either (which bashes the low register of
the pair; above). I'd have been groveling in buggy math, again...
Caveat Haquur (if you're trying to run stock V7 on a /23 or /34)!
Noel
So, I have discovered, to my astonishment, that the double-word version of the
DIV instruction on the PDP-11 won't do a divide if the result won't fit into
15 bits. OK, I can understand it bitching if the quotient wouldn't fit into 16
bits - but what's the problem with returning an unsigned quotient?
And, just for grins, the results left in the registers which hold the quotient
and remainder are different in the -11/23 (KDF11-A) and the -11/73 (KDJ11-A).
(Although, to be fair, the PDP-11 Architecture Manual says 'register contents
are unpredictable if there's an overflow'.)
Oh well, guess I'll have to redo my kludgy fix to gmtime() (the distributed
version of which in V6 has a problem when the number of 8-hour periods since
the epoch overflows 15 bits)! I guess I'll have to fix it for real, instead of
my kludgy fix (which extended it to work for 16-bit results). :-)
I discovered this when I plugged in an -11/73 to make sure the prototype QSIC
(our RK11/etc emulator for the QBUS) worked with the -11/73 as well as the
-11/23 (which is what we'd mostly been using - when we first started working
on the DMA and interrupts, we did try them both). I noticed that with the
-11/73, the date printed incorrectly:
Sun Mar 10 93:71:92 EST 1991
After a certain amount of poking and prodding, I discovered the issue - and
on further reading, discovered the limitation to 15-bit results.
For those who are interested in the details, here's a little test program that
displays the problem:
r = ldiv(a, b, d);	/* a,b: high and low halves of the 32-bit dividend; d: divisor */
m = ldivr;		/* ldiv() leaves the remainder in the global ldivr */
printf("a: 0%o %d. b: 0%o %d. d: 0%o %d.\n", a, a, b, b, d, d);
printf("q: 0%o %d. r: 0%o %d.\n", r, r, m, m);
and, for those who don't have V6 source at hand, here's ldiv():
mov 2(sp),r0	/ high half of dividend
mov 4(sp),r1	/ low half of dividend
div 6(sp),r0	/ 32-bit divide: quotient -> r0, remainder -> r1
mov r1,_ldivr	/ save remainder in the global ldivr
rts pc		/ return with quotient in r0
So here are the results, first from a simulator:
tld 055256 0145510 070200
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 0147132 -12710. r: 037110 15944.
This is _mathematically_ correct: 055256,0145510 = 1521404744., 070200 =
28800., and 1521404744./28800. = 0147132.
And on the -11/23:
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 055256 23214. r: 037110 15944.
Note that the returned 'quotient' is simply the high part of the dividend.
And on the -11/73:
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 055256 23214. r: 037110 15944.
Note that in addition to the quotient behaviour, as with the /23, the
'remainder' is the low part of the dividend.
Noel
> From: Paul McJones <paul(a)mcjones.org>
> I suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
> intended programmers store array elements at increasing memory
> addresses, and reference an array element relative to the address of the
> last element plus one. This would allow a single index register (and
> there were only three) to be used as the index and the (decreasing)
> count.
I suspect the younger members of the list, who've only ever lived in a world
in which one lights one's cigars with mega-gates, so to speak, may be missing
the implication here.
Back when the 704 (a _tube_ machine) was built, a register meant a whole row
of tubes. That's why early machines had few/one register(s).
So being able to double up on what a register did like this was _HYYUUGE_.
Noel
On 3/17/2018 8:54 AM, Dave Horsfall <dave(a)horsfall.org> wrote:
> ... Was it the 704, or the 709? I recall that the
> array indexing order mapped directly into its index register or something
> ...
It first ran on the IBM 704, whose index registers subtracted (as did
the follow-on 709, 7090, etc), so array indexing went from higher memory
addresses to lower.
> The bookshelf: I had most of those books once; what's the one on the
> bottom right? It has a "paperback" look about it, but I can't quite make
> it out because of the reflection on the spine.
I'm not sure, and things have shifted since then on the shelves, but I
sent the original photo to your email address.
On 3/17/2018 12:22 PM, Steve Simon <steve(a)quintile.net> wrote:
> on the subject of fortran’s language, i remember hearing tell of a French version. anyone ever meet any?
Yes: here is the French version of the original Fortran manual, with
keywords in French (via
http://www.softwarepreservation.org/projects/FORTRAN/)
Anonymous. FORTRAN Programmation Automatique de L'Ordinateur IBM 704 :
Manuel du Programmeur. IBM France, Institut de Calcul Scientifique,
Paris. No date, 51 pages. Given to Paul McJones by John Backus.
http://archive.computerhistory.org/resources/text/Fortran/102663111.05.01.a…
Dave Horsfall <dave(a)horsfall.org> wrote:
> We lost computer pioneer John Backus on this day in 2007; amongst other
> things he gave us FORTRAN (yuck!) and BNF, which is ironic, really,
> because FORTRAN has no syntax to speak of.
I think of FORTRAN as having established the very idea of high-level programming languages. For example, John McCarthy’s first idea for what became LISP was to extend FORTRAN with function subroutines written in assembly language for list-manipulation. (He had to give up on this idea when he realized a conditional expression operator wouldn’t work correctly since both the then-expression and the else-expression would be evaluated before the condition was tested.) The original FORTRAN compiler pioneered code optimization, generating code good enough for the users at the physics labs and aerospace companies. For more on this compiler, see:
http://www.softwarepreservation.org/projects/FORTRAN/
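(The problem McCarthy hit is easy to reconstruct; a sketch in C of why a
conditional must be a special form rather than an ordinary function:)

/* 'choose' is an ordinary function, so C evaluates both a and b
 * before the call - just as subroutine arguments were evaluated in
 * McCarthy's extended-FORTRAN scheme. */
int choose(int p, int a, int b) { return p ? a : b; }

int
fact(int n)
{
	/* fact(n - 1) is evaluated even when n == 0, so the
	 * recursion never bottoms out */
	return choose(n == 0, 1, n * fact(n - 1));
}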
Disclosure: I worked with John in the 1970s (on functional programming) — see:
http://www.mcjones.org/dustydecks/archives/2007/04/01/60/ .
Paul McJones
The first Internet domain, symbolics.com, was registered in 1985 at 0500Z
("Zulu" time, i.e. UTC).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> So what are its origins? Where did it first appear?
It was a direct copy from CTSS, which already had it
in 1965 when we BTL folk began to use it.
The greatest MOTD story of all time happened at CTSS.
To set the stage, the CTSS editor made a temp file,
always of the same name, in one's home directory.
The MOTD was posted by the administrator account.
The password file was plain text, maintained by
editing it.
And multiple people had access to the administrator
account.
It happened one day that one administrator was
working on the password file at the same time
another was posting MOTD. The result: the password
file (probably the most secret file on the system)
got posted as the MOTD (the most public).
Upon seeing the password file type out before him,
an alert user shut the machine down by writing
and running one line of assembly code:
HERE TRA *HERE
(The star is for indirect addressing, and indirection
was transitive.)
Doug
One of the things that's always fascinated me about Unix is the community
aspect; in particular, I imagine that in the early days when machines were
multiplexed among many simultaneous users, I wonder whether there was a
greater sense of knowing what others were up to, working on, or generally
doing.
I think of the /etc/motd file as being a part of this. It is, in some very
real sense, a way to announce things to the entire user community.
So what are its origins? Where did it first appear? I haven't dug into
this, but I imagine it was at Berkeley. What was it used for early on at
individual sites?
- Dan C.
> From: Dave Horsfall
> he would've been the registraNT, no?
Symbolics was the registrant.
I may have spoken too soon, Postel/ISI might not have been the registrar when
".com" was set up, so maybe it was someone at SRI/NIC. (The memory is dim.) I
don't remember how "MIT.EDU" got registered - I'm not sure if I did it. It
was definitely Jon handing out addresses, not SRI - I do recall us going to
Jon to get 128.30 & 31.
Noel
> From: Michael Kjörling
> the DNS RFCs (initially 1034, 1035) were only published in 1987...
Ah, those were later versions; the originals were:
0882 Domain names: Concepts and facilities. P.V. Mockapetris. November
1983.
0883 Domain names: Implementation specification. P.V. Mockapetris.
November 1983.
Both were updated by RFC0973 before being replaced by 1034/1035.
You might also want to look at:
0881 Domain names plan and schedule. J. Postel. November 1983.
0897 Domain name system implementation schedule. J. Postel. February 1984.
0921 Domain name system implementation schedule - revised. J. Postel. October 1984.
Note that ".com" didn't exist in the early revs.
Noel
> From: Lars Brinkhoff
> Is this "Network Unix" available?
??? This was announced here not long ago:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC
It's called 'NOSC' because that's where it came from, but it has the Illinois
NCP code in it.
Noel
hi,
I'm looking for Unix/Unix-like/Linux-friendly hardware (desktops, laptops, phones, etc.) free of proprietary software, or compatible with free software (OS, BIOS firmware, etc.) - something where the stock software is easy to replace, or that comes with free software preinstalled which I can replace if I want to.
I've seen some lists of vendors that are Unix/Linux friendly, and also the hardware endorsed by the FSF, which seems to be Lenovo Thinkpads, etc. The thing is, it seems most of the hardware requires external flashing to replace the BIOS, etc., which makes the task harder.
My questions are:
what are the best Unix/Unix-like/Linux-friendly hardware manufacturers?
which hardware is the best for making a computer 100% free (free BIOS and OS), and is optimized for and behaves best under Unix/Unix-like/Linux-based OSs?
Thank you.
--
PHACT Phreakers / Hackers / Anarchists / Cyberpunks / Technologists
Back when the dinosaurs were using card readers (and yes, I've used a card
reader on Unix; I think it was a desktop CDC model, and the driver would
handle two modes: strict 80-column i.e. one 12-bit column per 16-bit word
and you got 80 of 'em on a DMA channel, or ASCII NL-terminated after last
non-blank column, and no, I have no idea whether it handled EBCDIC or CDC
etc, but I digress as usual).
Where was I? Oh yes, sleeps...
Back when sleep(3) was sleep(2) (yes, Virginia, sleep() used to be a
system call, then it became alarm()/pause(), and now it seems to be
nanosleep(), and I'm wandering again), you never called sleep(1) because
its granularity was +/-1 second (and all the way up to +infinity,
actually, on a really busy machine), thus it could return right away, with
ensuing hilarity.
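(For the record, the alarm()/pause() version is only a few lines; a minimal
sketch - a real libc sleep() also had to save and restore any alarm and
handler already set, and to close the classic race where the alarm fires
before pause() is reached:)

#include <signal.h>
#include <unistd.h>

static void wakeup(int sig) { (void)sig; }	/* exists only to interrupt pause() */

unsigned
my_sleep(unsigned secs)	/* hypothetical name, to avoid clashing with sleep(3) */
{
	signal(SIGALRM, wakeup);
	alarm(secs);	/* +/- 1 second granularity: my_sleep(1) may return almost at once */
	pause();
	return alarm(0);	/* unslept seconds, as sleep(3) returns */
}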
So, I'm curious:
When did sleep(2) become sleep(3)? Was it V7, or some BSD? Or Babbage
help me, SysVile?
When did the caveat about sleeping for 1 second become known? I don't
think that I ever saw it documented, but was one of those "lore" things
passed around at Unix conferences and the like.
And when did it start using nanosleep() instead of alarm()/pause()? I see
that my Penguin box has a bet both ways: it "may" use SIGALRM[a] (thus
"mixing calls to alarm(2) and sleep() is a bad idea"; well, I've used
both), and it also refers to nanosleep().
[a]
Alpine's spell-checker suggested "SICKROOM" here; pretty close when
dealing with timed-out reads on a TTY connection[ii] :-)
[ii]
Have you tried this with Perl? You can't rely on EINTR[3], so you have to
use eval{} blocks instead, and it starts getting pretty fugly...
[3]
And here it suggested "ENTREE".
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>Date: Wed, 7 Mar 2018 13:54:32 -0500
>From: Paul Winalski <paul.winalski(a)gmail.com>
>To: Clem Cole <clemc(a)ccc.com>
>Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: Re: [TUHS] Sleep()y musings
>
>...
>VAX also has a Time-of-Year Clock Register (colloquially called the
>TOY clock), a 32-bit unsigned value whose LSB represents a resolution
>of 10 milliseconds (0.01 second). All VAX models except the
>VAX-11/730 provided battery backup for the TOY clock so that it
>continued to operate even when the system was powered off. A VAX can
>thus be powered off for about 497 days and still remember the
>date/time.
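(Sanity-checking that 497-day figure takes one line of arithmetic, 2^32 ticks at 10 ms each; a throwaway C sketch:)

    #include <stdio.h>

    int main(void)
    {
        double secs = 4294967296.0 * 0.01;      /* 2^32 ticks of 0.01 s */
        printf("%.1f days\n", secs / 86400.0);  /* prints: 497.1 days */
        return 0;
    }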
AlphaServers also still have this TOY, the clock and the battery that is.
From a DS10 running Digital Unix 4.0G, here is the /var/adm/messages file;
I only removed the BEL characters:
Dec 12 03:01:27 br0011 vmunix: You must reset the system time manually
Dec 12 03:01:27 br0011 vmunix: Time of year (TOY) clock returned zero
as the current time
Dec 12 03:01:27 br0011 vmunix:
Dec 12 03:01:27 br0011 vmunix:
Dec 12 03:01:27 br0011 vmunix: WARNING: preposterous time in TOY clock
-- CHECK AND RESET THE DATE!!
Dec 12 03:01:27 br0011 vmunix:
Dec 12 03:01:27 br0011 vmunix: i2c: Server Management Hardware Present
Dec 12 03:01:27 br0011 vmunix: datalink: links=128, macs=6
Dec 12 03:01:27 br0011 vmunix: NOTE: dxb_configure: Configure values:
dxb, ffffffffffffff9d, ffffffff90bfbf80, ffffffff90bf9a20
Dec 12 03:01:27 br0011 vmunix: WARNING: dxb_configure:
configure_driver error = 22
Dec 12 03:01:28 br0011 vmunix: Node ID is 00-10-64-30-ae-38 (from device tu0)
Dec 12 03:01:28 br0011 vmunix: WARNING: Time of year (TOY) clock
battery is dead, time and NVR contents ignored
Dec 12 03:01:28 br0011 vmunix:
Dec 12 03:01:28 br0011 vmunix: You must reset the system time manually
Cheers,
uncle rubl
> From: Dave Horsfall
> When did sleep(2) become sleep(3)? Was it V7, or some BSD?
Before V7. The MIT system (~PWB1) says, on the man page for sleep (II), "As of
this writing the system call is still available although the C routine
implementing the function uses 'alarm' and 'pause' (II). It will be withdrawn
when convenient."
They probably left the system call there for already-compiled commands, etc., which used it?
Noel
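(For reference, the library replacement looks roughly like this; a minimal
sketch, not the actual PWB/V7 source, which also had to cope with an alarm
already pending. Note the classic race Dave mentioned: on a loaded machine
the SIGALRM can arrive before pause() does.)

    #include <signal.h>
    #include <unistd.h>

    static void wakeup(int sig)
    {
        /* nothing to do: its only job is to interrupt pause() */
    }

    unsigned sleep_via_alarm(unsigned secs)     /* hypothetical name */
    {
        signal(SIGALRM, wakeup);
        alarm(secs);        /* ask the kernel for a SIGALRM in secs seconds */
        pause();            /* sleep until any signal arrives */
        return alarm(0);    /* cancel the timer; returns seconds remaining */
    }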
> > But the concept of email goes way back.
> Indeed, it does, but only on the same system.
Very far back. CTSS had a mail utility.
If communication within one system is not
recognized as email, then the exchange that
opened in Boston in 1877 was not a
telephone system.
Doug
We lost Ray Tomlinson on this day in 2016; known as the inventor of email,
he sent the first message between two hosts on the ARPAnet (prior to that
the users had to be on the same host), and pioneered the use of the "@"
sign.
In the meantime, some tosser (his name is not important) is claiming that
he invented email first; I recall that APL\360 had a "mailbox" facility,
but it certainly didn't use "@".
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I hadn't realized that groff hyphenation had been taken from
TeX, not troff. Is that because TeX did a better job, or
because troff's was deemed proprietary?
A new paper comparing Unix kernel designs was published earlier today:
Stergios Papadimitriou and Lefteris Moussiades
Mac OS versus FreeBSD: A Comparative Evaluation
[IEEE] Computer 52(2), 44--53, February 2018
https://doi.org/10.1109/MC.2018.1451648
Despite the title, GNU/Linux is also included in the comparisons. The
abstract is:
>> ...
>> FreeBSD (an open source Unix-like OS) and Apple's Mac OS use similar
>> BSD functionality but take different approaches. FreeBSD implements a
>> traditional compact monolithic Unix kernel, whereas Mac OS builds the
>> BSD Unix functionality on top of the Mach microkernel. The authors
>> provide an in-depth technical investigation of both approaches.
>> ...
Our fellow list member Larry McVoy, and his lmbench suite, are
mentioned in the article, along with results of that suite.
There are about 200 numbers in the two large tables of performance
measurements, and while many are comparable across operating systems,
there are some benchmarks that show up to a 40x difference between
systems on the same hardware.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
pipes in SCO UNIX 3.2V4.2:
It's long, long ago, so excuse the vagueness. I think the issue was
not pipe() per se, but the difference in functionality between
filesystem pipes and streams pipes.
By using pipe() you create a FIFO pipe with certain limitations
(including a 5120-byte write limit). When you open the streams device twice
and ioctl() the two file descriptors together you have more flexibility.
Apologies for the possible confusion.
Following the Claude Shannon discussion:
http://www.jaycar.com.au/useless-box/p/GT3706
I tried to explain to the Young Thing (tm) behind the shop counter that it
was invented several decades ago etc, but I suspect it was beyond her
ken...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> [Bob Fano] still has a reverential photograph of Shannon
> hanging in his office.
Alas, no, Fano died in 2016 at age 98.
More memories: Fano was among the grad students who came to
ice-skating parties at our house in the mid-40s--the house near
Shannon's later abode. I did not really get to know him until
Multics days. His gravelly mafioso voice would scare you off--until
you saw the irrepressible twinkle in his eye. A beloved and
inspiring leader, worthy of Dave Horsfall's calendar.
Doug
From a piece of code I have in some SCO UNIX 3.2V4.2 source. SCO
doesn't have pipes, but you can simulate them.
/*
 * Stream-pipe emulation from SCO UNIX 3.2V4.2, reconstructed here as a
 * self-contained function. The gen_trace()/gtr_flag tracing helpers come
 * from elsewhere in the original source; the fragment's stray 'break'
 * statements have become error returns.
 */
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <stropts.h>            /* struct strfdinsert, I_FDINSERT */
#include <sys/stream.h>         /* queue_t */

#define SPX_DEVICE "/dev/spx"   /* the STREAMS clone device */

int
gen_pipe(int fd[2])
{
    struct strfdinsert ins;
    queue_t *pointer = (queue_t *)NULL;

    /*
     * First open the stream clone device "/dev/spx" twice,
     * obtaining the two file descriptors.
     */
    if ((fd[0] = open(SPX_DEVICE, O_RDWR)) < 0)
    {
        gen_trace(gtr_flag, ">gen_pipe(): -open(fd[0]): %s\n", strerror(errno));
        return -1;
    }
    if ((fd[1] = open(SPX_DEVICE, O_RDWR)) < 0)
    {
        gen_trace(gtr_flag, ">gen_pipe(): -open(fd[1]): %s\n", strerror(errno));
        close(fd[0]);
        return -1;
    }
    /*
     * Now link these two streams together with an
     * I_FDINSERT ioctl.
     */
    ins.ctlbuf.buf = (char *) &pointer;  /* no ctl info, just the ptr */
    ins.ctlbuf.maxlen = sizeof(queue_t *);
    ins.ctlbuf.len = sizeof(queue_t *);
    ins.databuf.buf = (char *) 0;  /* no data to send */
    ins.databuf.len = -1;          /* magic: must be -1, not 0, for stream pipe */
    ins.databuf.maxlen = 0;
    ins.fildes = fd[1];            /* the fd to connect with fd[0] */
    ins.flags = 0;                 /* nonpriority message */
    ins.offset = 0;                /* offset of pointer in control buffer */
    if (ioctl(fd[0], I_FDINSERT, (char *) &ins) < 0)
    {
        gen_trace(gtr_flag, ">gen_pipe(): -ioctl(I_FDINSERT): %s\n", strerror(errno));
        close(fd[0]);
        close(fd[1]);
        return -1;
    }
    return 0;
}
If I'm remembering correctly, mini-Unix didn't have pipes. The shell faked
it by writing the output of the first program to a file and then using that
as the input for the second.
It didn't really multitask anyhow, so it was pretty much fine.
-----Original Message-----
From: TUHS [mailto:tuhs-bounces@minnie.tuhs.org] On Behalf Of Ian Zimmerman
Sent: Monday, February 26, 2018 11:58 AM
To: tuhs(a)minnie.tuhs.org
Subject: Re: [TUHS] EOF on pipes?
On 2018-02-26 23:03, Rudi Blom wrote:
> From a piece of code I have in some SCO UNIX 3.2V4.2 source. SCO
> doesn't have pipes, but you can simulate them.
Is this a SCO speciality, or are there other UNIXes like that?
Does it not even have pipe() in its libc?
Many years ago (when the dinosaurs were using V6), I had a crazy idea[*]
that a write(fd, NULL, 0) would somehow signal EOF to the reader, i.e. a
subsequent read would wait for further data instead of ENOTOBACCO.
Did any *nix ever implement that? I have no idea how it would be done.
Have an ENOGORILLA.
To answer the real question: stream pipes, which became the only
sort of pipe in the Research stream (sic) sometime between the
8/e and 9/e manuals.
The implementation was trivial, because from the beginning the
metadata within a stream admitted delimiters: markers that meant
`when this object reaches read(2) at the head end, return from
read with whatever has already been delivered.' Empty messages
(two consecutive delimiters) were explicitly allowed.
If a stream was marked as using delimiters (and pipes always
were), a delimiter was inserted after every write(2). So a
zero-length write(2) generated an empty message, and read(2) returned it.
Norman Wilson
Toronto ON
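(A sketch of the user-visible semantics; hypothetical on a POSIX system,
where the zero-length write is simply discarded and the read blocks instead:)

    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char c;

        pipe(fd);
        write(fd[1], "", 0);    /* on a delimited stream pipe: an empty message */
        /* 9/e behaviour: read() returns 0 here, an EOF-like empty read, even
         * though the write end is still open; POSIX behaviour: read() blocks */
        return (int)read(fd[0], &c, 1);
    }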
We lost Claude Shannon on this day in 2001. He was a mathematician,
electrical engineer, and cryptographer; he is regarded as the "father" of
information theory, and he pioneered digital circuit design. Amongst
other things he built a barbed-wire telegraph, the "Ultimate Machine" (it
reached up and switched itself off), a Roman numeral computer ("THROBAC"),
the Minivac 601 (a digital trainer), a Rubik's Cube solver, a mechanical
mouse that learned how to solve mazes, and outlined a chess program
(pre-Belle). He formulated the security mantra "The enemy knows the
system", and did top-secret work in WW-2 on crypto and fire-control
systems.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Re: RIP Claude Shannon
> Never heard of Claude Shannon. So a good opportunity to do some
> searching and reading to 'catch up'.
Shannon did some amazing work. My field is information science, and without Shannon it would be a dull field indeed. His master's thesis laid out an elegant use of digital logic for switching systems that we take for granted today; his mathematical theory of communication, while dense, is foundational (ever heard of a bit?); and he loved juggling so much he created a juggling machine; what's not to love? :) All that said, he was also in the right place at the right time and was surrounded by genius.
Will
> But a note on Dijkstra's algorithm: Moore and Dijsktra both published
> in 1959.
I was off by one on the year, but the sign of the error is debatable.
Moore's paper was presented in a conference held in early April, 1957,
proceedings from which were not issued until 1959. I learned about it
from Moore when I first met him, in 1958. Then he described the
algorithm in vivid, instantly understandable terms: imagine a flood
spreading at uniform speed through the network and record the
distance to nodes in order of wetting.
> But it is documented that Dijkstra's algorithm was invented and used
> by him in 1956.
Taking into account the lead time for conference submissions, one
can confidently say that Moore devised the algorithm before 1957.
I do not know, though, when it first ran on a Bell Labs computer.
That said, Moore's paper, which presented the algorithm essentially
by example, was not nearly as clear as the capsule summary he gave
me. It seems amateurish by comparison with Dijkstra's elegant treatment.
Dijkstra's name has been attached to the method with good reason.
Doug
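(To make the flood picture concrete, here is a minimal C sketch; my own
illustration, not from either paper. It is the O(n^2) array-scan form:
repeatedly wet the nearest dry node, recording distances in order of wetting.)

    #include <limits.h>

    #define N   6           /* hypothetical node count */
    #define INF INT_MAX     /* INF in w[][] means "no edge" */

    int w[N][N];            /* edge weights, filled in by the caller */
    int dist[N];            /* shortest distance from the source */

    void flood(int src)
    {
        int wet[N] = {0};
        int i, j, u;

        for (i = 0; i < N; i++)
            dist[i] = INF;
        dist[src] = 0;

        for (i = 0; i < N; i++) {
            u = -1;                 /* the flood reaches the nearest dry node next */
            for (j = 0; j < N; j++)
                if (!wet[j] && (u < 0 || dist[j] < dist[u]))
                    u = j;
            if (u < 0 || dist[u] == INF)
                break;              /* everything left is unreachable */
            wet[u] = 1;             /* wetted: dist[u] is now final */
            for (j = 0; j < N; j++) /* let the flood advance along u's edges */
                if (w[u][j] != INF && dist[u] + w[u][j] < dist[j])
                    dist[j] = dist[u] + w[u][j];
        }
    }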
>> pipe, ch(e)root.... Any more unix connections to smoking?
I have a slide that's a quadralingual pun (French, English, Art, shell)
in which Magritte's painting of a pipe with the words "Ceci n'est pas
une pipe" has been altered to read "Ceci n'est pas une |". The
altered phrase was borrowed from Jay Misra et al, who used it as
an example of a message in a paper on communicating processes.
Many years ago (when the dinosaurs were using V6), I had a crazy idea[*]
that a write(fd, NULL, 0) would somehow signal EOF to the reader, i.e. a
subsequent read would wait for further data instead of ENOTOBACCO.
Did any *nix ever implement that? I have no idea how it would be done.
[*]
I was full of crazy ideas then, such as extending stty() to an arbitrary
device and was told by the anti-CSU mob that it was a stupid idea...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
So many memories. The "ultimate machine" (which was brought out and
demonstrated from time to time while I was at the Labs) was built in
collaboration with Ed Moore (he of Moore-model automata, who published
"Dijkstra's algorithm" for shortest paths a year before Dijkstra) and
(I believe) Dave Hagelbarger. Moore endowed the machine with a longevity
property seldom remarked on: majority logic so that any electrical
component can be removed without harming its observable behavior.
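(Majority logic in miniature: triplicate each signal and vote, so any single
failed part is simply outvoted. The voter, rendered in C:)

    /* 2-of-3 majority: output is correct whenever any two inputs are */
    int maj3(int a, int b, int c)
    {
        return (a & b) | (a & c) | (b & c);
    }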
Shannon moved to MIT from Bell Labs some weeks before I moved the
other way, so I only met him much later when he visited the Unix room
(an excuse, albeit weak, for this distant detour from TUHS). By that
time Shannon was descending into Alzheimer's fog, but his wife who
accompanied him was a memorably curious and perceptive visitor. I have
wondered what role she may have played as a sounding board or more in
Shannon's research.
As a child, I used to ski on the 50-foot hill that was the lawn of the
mansion that Shannon would buy when he moved to Massachusetts. We kids
would ski down and climb back up. Not Shannon. He installed a chairlift.
One house separated mine from the ski hill. It belonged to John Trump,
another MIT prof who engineered the Van de Graaff generator into a
commercial product for generating million-volt x-rays and, yes, was uncle
of the Donald. John, as kind as he was bright, fortunately did not live
to see the apotheosis of his wayward nephew.
Doug
> We lost Claude Shannon on this day in 2001. He was a mathematician,
> electrical engineer, and cryptographer; he is regarded as the "father" of
> information theory, and he pioneered digital circuit design. Amongst
> other things he built a barbed-wire telegraph, the "Ultimate Machine" (it
> reached up and switched itself off), a Roman numeral computer ("THROBAC"),
> the Minivac 601 (a digital trainer), a Rubik's Cube solver, a mechanical
> mouse that learned how to solve mazes, and outlined a chess program
> (pre-Belle). He formulated the security mantra "The enemy knows the
> system", and did top-secret work in WW-2 on crypto and fire-control
> systems.
Never heard of Claude Shannon. So a good opportunity to do some
searching and reading to 'catch up'.
Interesting person, and this quote tends to make him my type of guy:
"I just wondered how things were put together.
– C.E. Shannon"
http://themathpath.com/documents/ee376a_win0809_aboutClaudeShannon_willywu.…
Now wondering if I should register for this THROBAC project or just
read some more and do it myself. Not in the mood for learning Python, I'd
probably do some fumbling in C.
https://www.engage-csedu.org/find-resources/shannons-throbac
Keeps me busy and amused,
uncle rubl
Just curious; am I the only who, back in the early days of V6, used pipes
as temporary files? I mean that after calling pipe(), instead of then
forking and playing "file-descriptor footsie" you just read and wrote
within the same process.
I seem to recall that it worked, as long as you avoided the 8-block limit
(or whatever it was then); I have no idea why I tried it, other than to
see if it worked i.e. avoid the creat() (without the "e") etc.
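(The trick still works, pipe-capacity limits and all; a minimal sketch,
writing and reading back within one process:)

    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[6];

        pipe(fd);                   /* no fork, no creat(): both ends stay here */
        write(fd[1], "hello\n", 6); /* stash data as in a temporary file... */
        read(fd[0], buf, 6);        /* ...and read it straight back, FIFO order */
        /* caveat: exceed the pipe's capacity (8 x 512-byte blocks on V6) and
         * the write blocks forever, since nobody else will ever drain it */
        return write(1, buf, 6) == 6 ? 0 : 1;
    }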
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On 2/20/18, Donald ODona <mutiny.mutiny(a)india.com> wrote:
> since '86 he was working on an operating system, named Mica, which failed.
>
> At 19 Feb 2018 18:13:59 +0000 (+00:00) from Paul Winalski
> <paul.winalski(a)gmail.com>:
>> Dave Cutler was in the VMS group only for VMS version 1. He rarely
>> stayed around for version 2 of anything. Hustvedt's and Lipman's
>> contributions for VMS were more extensive and longer-lasting than
>> Cutler's.
Cutler had already left the VMS OS group by the time I joined DEC in
February of 1980. After VMS he led the team developing PL/I and C
compilers for VMS. These shared a common back end called the VAX Code
Generator (VCG). The other VMS compilers at the time (Fortran,
Pascal, Cobol) had their own separate optimizers and code generators.
The VAX Ada compiler would also use VCG.
When version 1 of VAX PL/I and VAX C shipped, Cutler worked on
subsetting the VAX architecture so that a single-chip implementation
could be done, and led the team that produced the MicroVAX I.
MicroVAX architecture emulated expensive instructions such as packed
decimal. All of the later, single-chip VAXes used this architecture.
When the MicroVAX I shipped, Cutler devised a microkernel-based
real-time operating system for the VAX called VAXeln.
After VAXeln, Cutler led the team developing a RISC architecture
follow-on to the VAX called PRISM, and an operating system for it
called Mica. Mica had a VAXeln-like microkernel, and the plan was to
layer personality modules on top of that to implement VMS and
Unix-style ABIs and system calls.
Alpha was chosen instead of PRISM as the VAX successor, and that is
when Cutler left DEC for Microsoft. Windows NT has a lot of design
concepts and details previously seen in PRISM and VMS.
-Paul W.
At Rutgers Newark, we had a VMS system with Whitesmiths C on it. At one point, Whitesmiths decided to "fight piracy" by sending you a sticker you were supposed to stick on the front of your computer to show that you had a licensed copy. I suppose I might have been in trouble if the Whitesmiths police had come to my machine room. I was a bit miffed when one of the other employees actually stuck the thing to the machine.
Years later I was loosely affiliated with Unipress. I did some consulting for them when I was between jobs. I went out to dinner with their principal, a man named Mark Krieger. After a bit of conversation it occurred to me: "Didn't you get booed off the stage at the University of Delaware UNIX users group meeting?" He admitted he had; he was half of Whitesmiths, with Paul Plauger. It then came back to me about Idris and the software stamps. I mentioned the stamps and he said he was gone by then, but that it was his sign that Plauger had gone over the edge. I carefully peeled our sticker off the VAX and gave it to him the next time I saw him.
Let's see how much thread-drift I can generate this time...
Dick Hustvedt was born on this day in 1946; an architect of RSX-11 and
VMS, he also had a weird sense of humour which he demonstrated by
enshrining the "microfortnight" into VMS.
Sadly, we lost him in a car accident in 2008.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I've sent a couple of you private messages with some more details of why I
ask this, but I'll bring the larger question to debate here:
Have POSIX and LSB lost their usefulness/relevance? If so, we know ISVs
like Ansys are not going to go 'FOSS' and make their sources available
(ignore religious beliefs, it just is not their business model); how do
we get the level of precision that allows the 'binary only' part of the
market to continue to create applications?
Seriously, please try to stay away from religion on this
question. Clearly, there are a large number of ISVs that have traditionally
used interface specifications. To me it started with things like the old
Cobol and Fortran standards for the languages. That was not good enough
once the systems diverged, and /usr/group and then IEEE/ANSI/ISO did POSIX.
Clearly, POSIX enabled Unix implementations such as Linux to shine, although
Linux does not doggedly follow it. Apple was once POSIX-conformant, but
I'd not think they worry too much about it now. Linux created the LSB, but I
see fewer and fewer references to it.
I worry that without a real binary definition, it's darned hard (at least
in the higher end of the business that I live in day-to-day) to get ISVs to
care.
What do you folks think?
Clem
As an aside about Wolfram and SMP (and one that actually has
something to do with UNIX):
I ran the VAX on which Wolfram et al (and it was very much et al)
developed SMP. It started out running UNIX/TS 1.0. I know how
that system was snuck out of Bell Labs, but if I told you I'd have
to terminate you with extreme prejudice. (I wasn't involved
anyway.)
SMP really needed dynamic paging; the TS 1.0 kernel had only
swapping. We had quite a few discussions about what to do.
Moving wholesale to 3BSD or early 4BSD (this was in 1981)
would have been a big upheaval for our entire user community.
Those systems were also notorious at the time for their delicate
stability: some people reported that they ran well, others that
they crashed frequently. Our existing system was pretty solid,
and I had already put some work into making it more so (better
handling of low-level machine errors, for example).
Somehow we ended up deciding that the least-painful path was
to lift the VM code out of 4BSD and shoehorn it into our
existing kernel, creating what we called Bastardized Paging
UNIX. I did most of the work; I was younger and more energetic
back then. Also considerably grumpier. In the heart of the
page-in (I think) code, the Berkeley guys had written a single
C function that stretched to about ten printed pages. (For those
too young to remember printers, that means about 600 lines.)
I was then and still am adamant that that's the wrong way to
write anything, but I didn't want to take the time to rewrite
it all, so (being young and grumpy) I relieved my feelings by
adding a grumpy comment at the top of the source file.
I also wrote a paper about the work, which was published in
(of all places) AUUGN. I haven't read it in years but it was
probably a bit snotty. It nevertheless ended up causing a
local UNIX-systems-software company to head-hunt me (but at
the time I had no interest in leaving Caltech), so it must
not have been too rude.
What days those were, when a single person could understand
enough of the OS to do stuff like that in only a month or two,
and get it pretty much right too. I did end up finding some
interesting race-condition bugs, probably introduced by me, but
fascinating to track down; e.g. something that went wrong only
if a page fault happened at exactly the right time with respect
to something else.
Norman Wilson
Toronto ON
Donald ODona:
already 20 years ago I met a guy (masters degree, university) who never
freed dynamically allocated memory. He told me he was 'instantiating
an object', but had no idea what a heap is, or what dynamically
allocated memory means.
====
This is the sort of programmer for whom garbage collection was named:
his programs are a collection of garbage.
Norman Wilson
Toronto ON
(In 1127-snark mode this evening)
>Date: Sat, 17 Feb 2018 17:47:22 +1100 (EST)
>From: Dave Horsfall <dave(a)horsfall.org>
>To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: [TUHS] Of birthdays etc
>
>...
>Harris' Lament? Look it up with your favourite search engine (I don't use Google).
>
Probably early 1995 I dabbled a bit in AltaVista Search, so even now I'm
still using Yahoo somehow :-)
Keep it coming Dave, it's appreciated, at least by me.
From a former DECcie,
uncle rubl
Blimey... How was I to know that a throw-away remark would almost develop
into a shitfight? It would help if people changed the Subject line too,
as I'm sure that Ken must've been a little peeved... It would also help
if users didn't bloody top-post either, but I suspect that I've lost that
fight.
Anyway, this whole business started when I thought it might be a good idea
to post reminders of historical events here, as I do with some of the
other lists that I infest^W infect^W inhabit. I figured that the old
farts here might like to be reminded of (IMHO) significant events, and
similarly the youngsters might want to be reminded that there was indeed
life before Linux (which, by the way, I happen to loathe, but that's a
different story).
I'm glad that some people appreciate it; and don't worry, Steffen, you'll
soon catch up, as they should all be in the archives :-) A long-term goal
(if I live that long) is to set up one of those "this day in history"
sites, but it looks like Harris' Lament[*] has already applied :-(
I've had a number of corrections (thanks!), some weird comments on
pronunciation (an Englishman can probably pick my ancestry from me saying
"castle" as "c-AH-stle" and "dance" as "d-A-nce" etc), but oddly enough no
criticism (well, unless I'm talking about mounting a magtape as a
filesystem; no, I will not forget the implication that I was a liar), and
Warren has yet to spank me...
For the morbidly curious I keep these events in Calendar on my MacBook
(which actually spends most of its time in Terminal, and I don't even know
how to use the Finder!), and am always noting things which interest me and
therefore possibly others.
Anyway, thanks all; it is an honour and a privilege to share a mailing
list with some of the people who wrote the software that I have both used
in the past and still use to this day.
[*]
Harris' Lament? Look it up with your favourite search engine (I don't use
Google).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Feb 14, 2018, Dave Horsfall <dave(a)horsfall.org> wrote:
>
> Computer pioneer Niklaus Wirth was born on this day in 1934; he basically
> designed ALGOL, one of the most influential languages ever, with just
> about every programming language in use today tracing its roots to it.
Wirth designed many languages, including Euler, Algol W, Pascal, Modula, and Oberon, but he did not design Algol; more specifically, he did not design Algol 60. Instead, a committee (J. W. Backus, F. L. Bauer, J. Green, C. Katz, J. McCarthy, P. Naur, A. J. Perlis, H. Rutishauser, K. Samelson, B. Vauquois, J. H. Wegstein, A. van Wijngaarden, and M. Woodger) designed it, and Peter Naur edited the remarkable Algol 60 specification. A few others, including Edsger Dijkstra, who completed the first implementation, participated in meetings leading up to the final design.
From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Like PL/I, it also
> borrowed the indispensable notion of structs from business languages
> (Flowmatic, Comtran, Cobol).
That is an interesting insight. I always thought that structs were
inspired by the assembler DORG construct, and hence the shared namespace
for members.
The above insight goes some way to explain why PDP11 “as” did not have
a DORG construct, but early C did have ‘struct'.
Paul
So people have called me on the claim that lisp is not fast. Here's a
challenge by way of rebuttal:
Please write a clone of GNU grep in lisp to demonstrate that the claim
that lisp is slower than C is false.
Best of luck, and I'll be super impressed if you can get even remotely
close without dropping into C or assembler. If you do get close, I
will withdraw my claim, stand corrected, point future "lisp is slow"
people at the lisp-grep, and buy you dinner and/or drinks.
--lm
> From: Larry McVoy <lm(a)mcvoy.com>
> the proof here is to show up with a pure lisp grep that is fast as the C
> version. ... I've never seen a lisp program that out performed a well
> written C program.
Your exact phrase (which my response was in reply to) was "lisp and
performance is not a thing". You didn't say 'LISP is not just as fast as C' -
a different thing entirely. I disagreed with your original statement, which
seems to mean 'LISP doesn't perform well'.
Quite a few people spent quite a lot of time making LISP compiler output fast,
to the point that it was possible to say "this compiler is also intended to
compete with the S-1 Pascal and FORTRAN compilers for quality of compiled
numeric code" [Brooks,Gabriel and Steele, 1982] and "with the development of
the S-1 Lisp compiler, it once again became feasible to implement Lisp in Lisp
and to expect similar performance to the best hand-tuned,
assembly-language-based Lisp systems" [Steele and Gabriel, 1993].
Noel
> Computer pioneer Niklaus Wirth was born on this day in 1934; he basically
> designed ALGOL, one of the most influential languages ever, with just
> about every programming language in use today tracing its roots to it.
Rather than "tracing its roots to it", I'd say "has some roots in it".
Algol per se hardly made a ripple in the US market, partly due to
politics and habit, but also because it didn't espouse separate
compilation. However, as asserted above, it had a profound impact on
language designers and counts many languages as descendants.
To bring the subject back to Unix, C followed Fortran's modularity and
Algol's block structure. (But it reached back across the definitive Algol
60 to pick up the "for" statement from Algol 58.) Like PL/I, it also
borrowed the indispensable notion of structs from business languages
(Flowmatic, Comtran, Cobol). It adopted pointers from Lisp, as polished
by BCPL (pointer arithmetic) and PL/I (the -> operator). For better or
worse, it went its own way by omitting multidimensional arrays.
So C has many roots. It just isn't fashionable in computer-language
circles to highlight Cobol in your family tree.
Doug
> From: Larry McVoy <lm(a)mcvoy.com>
> I don't know all the details but lisp and performance is not a thing.
This isn't really about Unix, but I hate to see inaccuracies go into
archives...
You might want to read:
http://multicians.org/lcp.html
Of course, when it comes to the speed/efficiency of the compiled code, much
depends on the program/programmer. If one uses CONS wildly, there will have to
be garbage collection, which is of course not fast. But properly coded to stay
away from expensive constructs, my understanding is that 'lcp' and NCOMPLR
produced pretty amazing object code.
Noel
Actually, Algol 60 did allow functions and procedures as arguments (with correct static scoping), but not as results, so they weren’t “first class” in the Scheme sense. The Algol 60 report (along with its predecessor and successor) is available, among other places, here:
http://www.softwarepreservation.org/projects/ALGOL/standards/
On Feb 16, 2018, Bakul Shah <bakul(a)bitblocks.com> wrote:
> They did lexical scoping "right", no doubt. But back when
> Landin first found that lambda calculus was useful for
> modeling programming languages these concepts were not clearly
> understood. I do not recall reading anything about whether
> Algol designers not allowing full lexical scoping was due to an
> oversight or realizing that efficient implementation of
> functional argument was not possible. May be Algol's call by
> name was deemed sufficient? At any rate Algol's not having
> full lexical scoping does not mean one can simply reject the
> idea of being influenced by it. Often at the start there is
> lots of fumbling before people get it right. May be someone
> should ask Steele?
Clueless or careless?
A customer program worked for many years till one of the transaction
messages had a few bytes added.
Looking into it, I discovered that the program had only worked because
the receive buffer was followed by another buffer which was used in a
later sequence. Only when that buffer also overflowed did some critical
integers get overwritten and get used as indexes into tables, which gave
a lot of fun.
Well, as all here know, C is fun :-)
> From: Larry McVoy <lm(a)mcvoy.com>
I am a completely non-LISP person (I think my brain was wired in C before C
existed :-), but...
> Nobody has written a serious operating system
Well, the LISP Machine OS was written entirely in LISP. Dunno if you call that
a 'serious OS', but it was a significantly more capable OS than, say,
DOS. (OK, there was a lot of microcde that did a lot of the low-level stuff,
but...)
> or a serious $BIG_PROJECT in Lisp.
Have you ever seen a set of Symbolics manuals? Sylph-like, it wasn't!
> Not one that has been commercially successful, so far as I know.
It's true that Symbolics _eventually_ crashed, but I think the biggest factor
there was that commodity microprocessors (e.g. Pentium) got faster so much
faster than Symbolics' custom LISP hardware, so that the whole rationale for
Symbolics (custom hardware to run LISP fast) went away. They still exist as a
software company selling their coding environment, FWIW.
> C performs far better even though it is, in the eyes of lisp people, far
> more awkward to do things.
I think it depends on what you're doing. For some kinds of things, LISP is
probably better.
I mean, for most of the kind of things I do, I think C is the bees' knees
(well, except I had to add conditions and condition handlers when I went to
write a compiler in it), but for some of the AI projects I know a little
about, LISP seems (from a distance, admittedly) to be a better match.
Noel
On Feb 15, 2018, Ian Zimmerman <itz(a)very.loosely.org> wrote:
>>
>> So, how's this relevant to Unix? Well, I'd like to know more about the
>> historical interplay between the Unix and Lisp communities. What about
>> the Lisp done at Berkeley on the VAX (Franz Lisp).
>
> I know one of the Franz founders, I'll ask him when I have a chance.
There is some information about Franz Lisp and its origins here:
http://www.softwarepreservation.org/projects/LISP/maclisp_family/#Franz_Lis…
(And lots more information about many other varieties of Lisp at the same web site.)
On Sat, Feb 3, 2018 at 5:59 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Sat, 3 Feb 2018, Arthur Krewat wrote:
>
>> I would imagine that Windows wouldn't be what it is today without UNIX.
>> Matter of fact, Windows NT (which is what Windows has been based on since
>> Windows ME went away) is really DEC's VMS underneath the covers at least to
>> a small extent.
>>
>
> I thought that NT has a POSIX-y kernel, which is why it was so reliable?
> Or was VMS a POSIX-like system? I only used it for a couple of years in
> the early 80s (up to 4.0, I think), and never dug inside it; to me, it was
> just RSX-11/RSTS-11 on steroids.
The design of the original NT kernel was overseen by Dave Cutler, of VMS
and RSX-11M fame, and had a very strong and apparent VMS influence. Some
VAX wizards I know told me that they saw a lot of VMS in NT's design, but
that it probably wasn't as good (different design goals, etc: apparently
Gates wanted DOS++ and a quick time to market; Cutler wanted to do a *real*
OS and they compromised to wind up with VMS--).
It's true that there was (is? I don't know anymore...) a POSIX subsystem,
but that seemed more oriented at being a marketing check in the box for
sales to the US government and DoD (which had "standardized" on POSIX and
made it a requirement when investing in new systems).
Nowadays, I understand that one can run Linux binaries natively; the
Linux-compatibility subsystem will even `apt-get install` dependencies for
you. Satya Nadella's company isn't your father's Microsoft anymore. VSCode
(their new snazzy editor that apparently all the kids love) is Open Source.
Note that there is some irony in the NT/POSIX thing: the US Government
standardized on Windows about two decades ago and now can't seem to figure
out how to get off of it.
A short story I can't resist telling: a couple of years ago, some folks
tried to recruit me back into the Marine Corps in some kind of technical
capacity. I asked if I'd be doing, you know, technical stuff and was told
that, since I was an officer no, I wouldn't. Not really interested. I ended
up going to a bar with a recon operator (Marine special operations) to get
the straight scoop and talking to a light colonel (that's a Lieutenant
Colonel) on the phone for an hour for the hard sell. Over a beer, the recon
bubba basically said, "It was weird. I went back to the infantry." The
colonel kept asking me why I didn't run Windows: "but it's the most popular
operating system in the world!" Actually, I suspect Linux and BSD in the
guise of iOS/macOS is running on a lot more devices than Windows at this
point. I didn't bother pointing that out to him.
>> Would VMS become what it was without UNIX's influence? Would UNIX become
>> what it later was without VMS?
>>
>> Would UNIX exist, or even be close to what it became without DEC?
>>
>
> I've oft wondered that, but we have to use a new thread to avoid
> embarrassing Ken :-)
>
The speculation of, "what would have happened?" is interesting, though of
course unanswerable. I suspect that had it not been for Unix, we'd all be
running software that was closer to what you'd find on a mainframe or RT-11.
- Dan C.
> already 20 years ago I met a guy (masters degree, university) who never freed dynamically allocated memory. He told me he was 'instantiating an object', but had no idea what a heap is, or what dynamically allocated memory means.
Years ago, I had a new programmer who I just couldn't teach. He never understood the difference between an array and a pointer, and apparently couldn't be bothered to learn.
After stringing him along for three months, I was on my way into his office to fire him when I found out he had quit, but not before he checked a bunch of drek into our source code control system.
I thought I backed all his commits out at the time.
Years later I was running "purify" on our product looking for memory leaks. I found this small utility function that predated the source code control system leaking. This, I thought, was odd, as it had been there FOREVER and was well tested. I brought up the source code system and checked it anyhow, and found the aforementioned programmer had checked in one change: he deleted the "free" call in it.
I KNOW what happened. He did something else to corrupt the malloc heap in his code and often this causes a core dump in a subsequent malloc/free call. Apparently this was the place it struck him, so he just deleted the free call there.
So, in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s2/mv.c
what's the point of this piece of code:
p = place;
p1 = p;
while(*p++ = *argp3++);
p2 = p;
while(*p++ = *argp4++);
execl("/bin/cp","cp", p1, p2, 0);
I mean, I get that it's copying the two strings pointed to by 'argp3' and
'argp4' into a temporary buffer at 'place', and leaving 'p1' and 'p2' as
pointers to the copies of said strings, but... why is it doing that?
I at first thought that maybe the execl() call was smashing the stack (and
thus the copies pointed to by 'argp3' and 'argp4'), or something, but I don't
think it does that. So why couldn't the code have just been:
execl("/bin/cp","cp", argp3, argp4, 0);
Is this code maybe just a left-over from some previous variant?
Noel
> From: Dave Horsfall <dave(a)horsfall.org>
> I'd like to see it handle "JSR PC,@(SP)+"...
Heh!
But that does point out that the general concept is kind of confused - at
least, if you hope to get a fully working program out the far end. The only
way to do that is build (effectively) a simulator, one that _exactly_
re-creates the effects on the memory and registers of the original program.
Only instead of reading binary machine code, this one's going to read in the
machine language source, and produce a custom simulator, one that can run only
one program - the one fed into it.
Think of it as a 'simulator compiler'! :-)
Noel
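(For the curious, here is what such a 'simulator compiler' might emit for
that one instruction; a hypothetical sketch, with the machine state as
globals. JSR PC,@(SP)+ pops a target address off the stack, pushes the
return address in its place, and jumps: the classic PDP-11 coroutine swap.)

    /* hypothetical simulator state: PDP-11 registers and 64KB of memory */
    static unsigned short r[8];
    static unsigned char mem[65536];

    #define SP r[6]
    #define PC r[7]

    static unsigned short rdw(unsigned short a)          /* read a word */
    {
        return mem[a] | (mem[a + 1] << 8);               /* little-endian */
    }

    static void wrw(unsigned short a, unsigned short v)  /* write a word */
    {
        mem[a] = v & 0377;
        mem[a + 1] = v >> 8;
    }

    /* emitted code for JSR PC,@(SP)+ */
    static void jsr_pc_deferred_autoinc(void)
    {
        unsigned short dst = rdw(SP);   /* @(SP)+: fetch target from stack top */
        SP += 2;                        /* ...and pop it */
        SP -= 2;                        /* JSR then pushes the linkage register, */
        wrw(SP, PC);                    /* which for JSR PC is the return address */
        PC = dst;                       /* net effect: PC and top-of-stack swap */
    }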
>> I was wondering what it would take to convert the v6/v7 basic program
>> into something that can be run today.
>
> Hmmm... If it were C-generated then it would be (somewhat) easy, but it's
> hand-written and hand-optimised... You'd have to do some functional
> analysis on it e.g. what does this routine do, etc.
>
>> Its 2128 lines. It doesn't have that fun instruction in it :)
>
> I know! Say ~2,000 lines, say ~100 people on this list, distributed
> computing to the rescue! That's only 20 lines each, so it ought to be a
> piece of cake :-)
I'm up for that! However, only if the resulting C program can be compiled/run
on a V6/PDP11 again.
Let's assume that reverse engineering a subroutine of 20 lines takes
an hour. That then makes for 100 hours. If 10 people participate and
contribute one hour/routine per week, it will be done by May.
However, the initial analysis of the code architecture is a (time) hurdle.
Paul
PS: the Fortran66 of V6 is also assembler only...
IMO:
1) It kinda did catch on, in the form of macOS, but there was a time
when it was nearly dead as the major vendors moved to System V. For
some reason, Sun was the last major vendor to make the move, but they
caught most of the flack.
2) I think the main reason BSD nearly died, was the AT&T lawsuit. At
the time, Linux appeared to be a safer bet legally.
3) Linux got a reputation as an OS you had to be an expert to install,
so lots of people started installing it to "prove themselves".
This was sort of true back when Linux came as 2 floppy images, but
didn't remain true for very long.
4) I believe the SCO lawsuit "against Linux" was too little, too late
to kill Linux's first mover advantage in the opensource *ix
department.
5) I think FreeBSD's ports and similar huge-source-tree approaches
didn't work out as well as Linux developers contributing their changes
upstream.
Hi all,
Would anyone here be able to help me troubleshoot my qd32 controller? I
have a pdp11/73 that's mostly working, boots 2.11 from rl02 okay, but I
need my big disk to work so I can load the rest of the distro.
I've been following the manual for the qd32 to enter the geometry of my
real working m2333 (jumpered correctly according to the manuals), but when
I load the special command into the qd32's SP register that's supposed to
load the geometry table from the pdp11 memory to the novram, I get a bad
status value from the qd32's SP register and it remains unresponsive when I
try to store the geometry. If I go ahead and try the built-in qd32 format
command, it responds similarly. When I pull in mkfs from tape (vtserver)
and try anyway, despite the failures, to run mkfs on the m2333, I get an
!online error from the standalone unix mkfs. The disk does respond (the
select light flashes and I can hear heads actuating), but without geometry
and format, I'm obviously dead in the water.
Any suggestions on how to proceed?
thx
jake
> Why is it that umount(2) took the device special file name rather than the mount point directory name, anyway?
Symmetry. You unmount what you mount.
A competing model is that of links. Link makes an old file available
under a new name. But you unlink by the new name. Necessarily so,
because there may be many new names for one old file.
This is reminiscent of Don Norman's screed about the unnaturalness
of Unix. He didn't like strcpy because the arguments come in the
opposite order to those of cp. But strcpy is part of C, and in
C the destination of assignment comes before the source. But Norman
didn't rail at C. You pays your money and takes your choice.
Doug
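(The contrast in one glance:)

    strcpy(dst, src);    /* C: destination on the left, as in dst = src */
                         /* shell: cp src dst -- source first */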
These were in by the time I came along, but I was wondering when they got in.
They've also always felt to me a bit like a thing that did not fit. I'm
pretty sure I was not alone, given that the Unix authors worked out a way
to get rid of them in later efforts. I know what came after, in Plan 9;
what came before, in Unix, that led to special files?
We lost computer pioneer John von Neumann on this day in 1957; the "von
Neumann" architecture (stored program etc) is the basis of all modern
computers, and he almost certainly borrowed it from Charles Babbage.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Greg Lehey
>> V3 and earlier still *called* them special files, but it seems they
>> were essentially just magic inode numbers (there was no physical file
>> on disk, just any directory entry with the given inode would be the
>> special file).
> Isn't that still the case?
From reading the manual page (URL sent earlier), in V3 and before it really
was just an inode _number_ (less than 50, IIRC). The first inode, in the
first disk block after the super-block, was inode #51. This is of course
different from later Versions, where there is an _inode_ for devices, but
still no actual _file_.
Noel
Ken Thompson, co-inventor of Unix, was born on this day in 1943. Just think: without
those two, we'd all be running M$ Windoze and thinking that it's
wonderful (I know, it's an exaggeration, but think about it).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."