Thanks for clearing up the whole 'members out of nowhere' thing.
I had thought (ha ha) that since I don't have a working fork, I could just
rebuild CC as a native
executable, and then just call apout for each stage, but I never realized
how interdependent
they all are, at least C0 to C1.
It's crazy to think of how much this stuff cost once upon a time.
And now we live in the era of javascript pdp-11's
http://pdp11.aiju.de/
-----Original Message-----
From: Brantley Coile
To: Jason Stevens
Cc: tuhs(a)minnie.tuhs.org
Sent: 10/27/14 9:03 PM
Subject: Re: [TUHS] speaking of early C compilers
Early C allowed you to use the '->' operator with any scalar; see early
C reference manuals. This is the reason there is one operator to access
a member of a structure using a pointer and another, '.', to access a
member in a static structure. The B language had no types, everything
was a word, and dmr evolved C from B. At first it made sense to have the
'->' operator mean: add a constant to whatever is on the left and use
the result as an l-value.
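To make that concrete, the c01.c fragment quoted below would need a cast in modern C. A minimal sketch, with a made-up struct and constant (not the real compiler's definitions):

#include <stdint.h>

struct tnode {
    int op;
    int type;
};

#define UNSIGN 04                /* illustrative value only */

void set_type(int t1)
{
    /* Early C accepted "t1->type = UNSIGN;" on a plain int: add the
       offset of 'type' to t1 and store through the result. */
    /* Modern C requires the conversion to be spelled out: */
    ((struct tnode *)(intptr_t)t1)->type = UNSIGN;
}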
You will also find that member names share a single name space. The
simple symbol table had a bit in each entry to delineate members from
normal variables. You could only use the same member name in two
different structs if the members had the same offsets. In other words,
it was legal to add a member name to the symbol table that was already
there if the value of the symbol was the same as the existing entry.
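A hedged illustration of that rule (made-up declarations, not V6 source): both structs may declare 'link' because it sits at offset 0 in each, while a struct placing a member named 'link' at a different offset would have been rejected as a redefinition.

struct blk  { struct blk  *link; int size;  };
struct node { struct node *link; int value; };   /* same offset: legal then */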
Dennis' compilers kept some backward compatibility even after the
language evolved away from them.
This really shows the value of evolving software instead of thinking one
has all the answers going into development. If one follows the
development of C one sees the insights learned as they went. The study
of these early Unix systems has a great deal to teach that will be
valuable in the post-Moore's-law age. Much of the world's software will
need a re-evolution.
By the way, did you notice the compiler overwrites itself? We used to
have to work in tiny spaces. Four megabytes was four million dollars.
Sent from my iPad
> On Oct 27, 2014, at 6:42 AM, Jason Stevens
<jsteve(a)superglobalmegacorp.com> wrote:
>
> has anyone ever tried to compile any of the old C compilers with a
> 'modern' C compiler?
>
> I tried a few from the 80's (Microsoft/Borland) and there is a bunch of
> weird stuff where integers suddenly become structs, structures reference
> fields that aren't in that struct,
>
> c01.c
> register int t1;
> ....
> t1->type = UNSIGN;
>
>
> And my favorite which is closing a bunch of file handles for the heck of
> it, and redirecting stdin/out/err from within the program instead of just
> opening the file and using fread/fwrite..
>
> c00.c
> if (freopen(argv[2], "w", stdout)==NULL ||
> (sbufp=fopen(argv[3],"w"))==NULL)
>
>
> How did any of this compile? How did this stuff run without clobbering
> each other?
>
> I don't know why but I started to look at this stuff with some
> half-hearted attempt at getting Apout running on Windows. Naturally there
> is no fork, so when a child process dies, the whole thing crashes out. I
> guess I could simulate a fork with threads and containing all the cpu
> variables to a structure for each thread, but that sounds like a lot of
> work for a limited audience.
>
> But there really is some weird stuff in v7's c compiler.
> From: random832(a)fastmail.us
> Did casting not exist back then?
No, not in the early V6 compiler. It was only added as of the Typesetter
compiler. (I think if you look in those 'Recent C Changes' things I sent in
recently {Oct 17}, you'll find mention of it.)
Noel
Have you looked at http://real-votrax.no-ip.org/
they have a votrax hooked up, and yes it'll use your phonemes that speak
generates.
It just likes things to be upper case though.
So..
hello
!p
,h,e0,l,o0,o1,-1
works more like
H E0 L O0 O1 PA1
I wonder if anyone's generated wav's for each of the phonemes, then you
could hook up a line printer or something that'll read it as a pipe and just
play the wav's as needed..
It is rough 1970's speech synthesis, but I had one of those Intellivoice
things as a kid, so I kinda like it.
-----Original Message-----
From: Mark Longridge
To: tuhs
Sent: 10/13/14 8:57 AM
Subject: [TUHS] Getting Unix v5 to talk
Thanks to the efforts of Jonathan Gevaryahu I have managed
to get the Unix v5 speak utility to compile and execute.
All this was done using the simh emulator emulating a
PDP-11/70.
Jonathan managed to extract enough of speak.c to reconstruct it
to the point it could be compiled with v5 cc. I believe it
was necessary to look at speak.o to accomplish this.
Jonathan also states that there are more interesting things
that could possibly be recovered from v6doc.tar.gz
One can look at speak.c source here:
http://www.maxhost.org/other/speak.c
Now that we have speak compiled we can go a bit further:
cat speak.v - | speak -v null
generates speak.m from ascii file speak.v
speak speak.m
computer
!p (prints out phonetics for working word)
which outputs:
,k,a0,m,p,E2,U1,t,er,-1
ctrl-d exits
Looking at speak.c we can see that it opens /dev/vs.
Fortunately we have the file /usr/sys/dmr/vs.c to look at
so this could be compiled into the kernel although I haven't
done this as yet.
speak.c looks like Unix v5 era code. My understanding is that
Unix v5 appeared in June 1974 and the comments say 'Copyright 1974'
so it seems plausible.
I'm intrigued by the possibility of getting Unix v5 to talk.
Mark
> From: "Engel, Michael" <M.Engel(a)leedsbeckett.ac.uk>
> The machine has a Multibus FD controller with its own 8085 CPU and a
> uPD765, connected to a Toshiba 5.25" DD floppy drive (720 kB, 80
> tracks, 9 sectors of 512 bytes), the model identifier is DDF5-30C-34I
> ... I couldn't find any information on that drive online, so I hesitate
> to simply connect a more modern drive due to possible pinout differences.
> ...
> I also found out a bit more on the SMD disk controller. It seems to be
> an OEM variant of the Micro Computer Technology MCT-4300 controller.
> The only place I could find this mentioned was in a catalog of Multibus
> boards on archive.org.
> ...
> So, if you happen to have any information on the Codata floppy
> controller, the Toshiba floppy or the MCT-4300 SMD disk controller, I
> would be happy to hear from you...
I don't, but can I suggest the Classic Computers mailing list:
http://www.classiccmp.org/mailman/listinfo/cctalk
They seem to have an extremely deep well of knowledge, and perhaps someone
there can help? (I'd rate the odds very high on the floppy drive.)
Noel
Hi,
it's time for an update on our progress with the Codata machine.
The serial interface problem was not caused by a defective transceiver
chip (which I found out after buying a couple…), but by an extreme
amount of noise on the (quite long and old) serial cable we used to
connect the machine to the PC acting as a terminal. Using a USB
to serial adapter and a short 9-to-25-pin adapter cable solved this
problem. Well, try the obvious things first (using a scope helped).
The second CPU board also works, so we could build a complete
second machine with our spare boards if we have another multibus
backplane...
We could get the machine up and booting from the first 8" hard disk
last Friday. Luckily, an old version of Kermit was installed and we
were able to transmit a large part of the root file system from single
user mode - especially the Unix kernels, driver sources, the build
directories for the kernel, include files and the build directory for
the standalone boot floppies. All with a speed of 500 bytes/s (9600
bps serial line minus kermit overhead). cksum was used to confirm
that the files were transferred correctly (this was the only checksumming
tool that was available on the Codata, I didn't want to mount the fs
read-write and compile software before completing the backup).
I had to shut the machine down on Friday evening (security policy
that kicks you out of the building here), since I didn't want to leave
it running unattended over the weekend. Unfortunately, the disk
seems to have developed a bad sector in the autoconfiguration
region (the system seems to be quite modern in this respect).
The kernel can be booted successfully, but it refuses to mount the
root fs, complaining about a timeout. There seems to be a complete
root file system on the second disk (the firmware is able to read files
from the disk, but it doesn't offer a feature to list directories…), but the
kernel on the second disk also is hardwired to mount its root fs from the
first disk. Trying to connect disk 2 as disk 1 resulted in a non-booting
system...
The good news is that both root file systems seem to be reasonably
intact so far, I can read text files from the boot monitor. So our next
step to backup the rest of the system is to build an emergency boot
floppy. At the moment, however, the Codata refuses to talk to its
floppy drive. The machine has a Multibus FD controller with its own
8085 CPU and a uPD765, connected to a Toshiba 5.25" DD floppy
drive (720 kB, 80 tracks, 9 sectors of 512 bytes), the model identifier
is DDF5-30C-34I (printed on the motor assembly). I couldn't find
any information on that drive online, so I hesitate to simply
connect a more modern drive due to possible pinout differences.
I also found out a bit more on the SMD disk controller. It seems to
be an OEM variant of the Micro Computer Technology MCT-4300
controller. The only place I could find this mentioned was in a
catalog of Multibus boards on archive.org. It has its own driver
(cd.c), there is a separate one for the Interphase 2180 and an
additional one for the Codata MFM controller.
So, if you happen to have any information on the Codata floppy
controller, the Toshiba floppy or the MCT-4300 SMD disk controller,
I would be happy to hear from you...
-- Michael
> From: Greg 'groggy' Lehey <grog(a)lemis.com>
> This is really an identifier issue
Probably actually a function of the relocatable object format / linker on the
machines in question, which in most (all?) cases predated C itself.
> it's documented in K&R 1st edition, page 179:
Oooh, good piece of detective work!
Noel
Hi folks,
I've been looking at Unix v5 cc limitations.
It seems like early cc could only use variable and function names up
to 8 characters.
This limitation occurs in v5, v6 and v7.
But when using the nm utility to print out the name list I see
function test1234() listed as:
000044T _test123
That seems to suggest that only the first 7 characters are
significant, but when looking at other sources they stated that one
can use up to 8 characters.
I hacked up a short program to test this:
main()
{
        test1234();
        test1235();
}

test1234()
{
        printf ("\nWorking");
}

test1235()
{
        printf ("\nAlso working");
}
This generated:
Multiply defined: test5.o;_test123
So it would seem that function names can only be 7 characters in
length. I am not sure if limitations of early cc were documented
anywhere. When I backported unirubik to v5 it compiled the longer
functions without any problem.
Did anyone document these sorts of limitations of early cc? Does
anyone remember when cc started to use function names longer than 7
characters?
Mark
> From: Mark Longridge <cubexyz(a)gmail.com>
> It seems like early cc could only use variable and function names up to
> 8 characters.
> This limitation occurs in v5, v6 and v7.
> ...
> That seems to suggest that only the first 7 characters are significant,
> but when looking at other sources they stated that one can use up to 8
> characters.
The a.out symbol tables use 8-character fields to hold symbol names. However,
C automagically and unavoidably prepends an _ to all externals (I forget
about automatics, registers, etc - too tired to check right now), making the
limit for C names 7 characters.
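For reference, the V6-era symbol table entry, sketched from memory (take the exact field names as an assumption): the 8-byte name field, less the prepended '_', is exactly where the 7 significant characters come from.

/* a.out symbol table entry, V6 era (sketched from memory): */
struct nlist {
    char n_name[8];   /* NUL-padded, not necessarily NUL-terminated */
    int  n_type;      /* text/data/bss/external flags */
    int  n_value;     /* symbol value */
};
/* "_test1234" and "_test1235" both truncate to "_test123" in n_name,
   hence Mark's "Multiply defined" error. */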
> I am not sure if limitations of early cc were documented anywhere.
I remember reading the above.
Other limits... well, you need to remember that C was still changing in that
period, so limits were a moving target.
> When I backported unirubik to v5 it compiled the longer functions
> without any problem.
ISTR that C truncated external names longer than 7 characters. Probably the
ones in that program were all unique within 7, so you won.
> Did anyone document these sorts of limitations of early cc?
I seem to recall at least one document from that period (I think pertaining
to the so-called 'Typesetter C') about 'changes to C'.
Also, I have started a note with a list of 'issues with C when you're
backporting V7 and later code to V6', I'll see if I can dig them out tomorrow.
Noel
Afternoon,
# /etc/mkfs /dev/rrp1g 145673
isize = 65488
m/n = 3 500
write error: 2
# file rp0g
rp0g: block special (0/6)
# file rp1g
rp1g: block special (0/14)
# file rp0a
rp0a: block special (0/0)
# file rp1a
rp1a: block special (0/8)
# file rrp0a
rrp0a: character special (4/0)
# file rrp1a
rrp1a: character special (4/8)
# file rrp0g
rrp0g: character special (4/6)
# file rrp1g
rrp1g: character special (4/14)
DESCRIPTION
Files with minor device numbers 0 through 7 refer to various
portions of drive 0; minor devices 8 through 15 refer to
drive 1, etc.
The origin and size of the pseudo-disks on each drive are as
follows:
What am I forgetting? I have an image attached, I have modified hp.c to
have NHP as 2.
Is it a conflict between rp.c and hp.c? (I patched hp.c to have NHP 2 after
patching NURP in rp.c to be 2).
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
>> Did anyone document these sorts of limitations of early cc?
> I seem to recall at least one document from that period (I think
> pertaining to the so-called 'Typesetter C') about 'changes to C'.
> ...
> I'll see if I can dig them out tomorrow.
OK, there are three documents which sort of fall into this class. First,
there is something titled "New C Compiler Features", no date, available here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=Interdata732/usr/doc/cdoc/news…
It appears to describe an early version of the so-called 'Typesetter C',
mentioned in other documents, so this would be circa 1976 or so.
There is a second document, untitled, no date, which I have not been able to
locate online at all. I scanned my hard-copy, available here:
http://ana-3.lcs.mit.edu/~jnc/history/unix/CImprovements1.jpg
..
http://ana-3.lcs.mit.edu/~jnc/history/unix/CImprovements5.jpg
From the content, it seems to be from shortly after the previous one, so say,
circa 1977.
Sorry about the poor readability (it looked fine on the monitor of the
machine my scanner is attached to); fudging with contrast would probably make
it more readable. When I get the MIT V6 Unix tapes read (they have been sent
off to a specialist in reading old tapes, results soon, I hope) I might be
able to get more info (e.g. date/filename), and machine-readable source.
Finally, there is "Recent Changes to C", from November 15, 1978, available
here:
http://cm.bell-labs.com/cm/cs/who/dmr/cchanges.pdf
which documents a few final bits.
There is of course also Dennis M. Ritchie, "The Development of the C
Language", available here:
http://cm.bell-labs.com/who/dmr/chist.html
which is a good, interesting history of C.
> Also, I have started a note with a list of 'issues with C when you're
> backporting V7 and later code to V6'
I found several documents which are bits and pieces of this.
http://ana-3.lcs.mit.edu/~jnc/history/unix/C_Backport.txt
http://ana-3.lcs.mit.edu/~jnc/history/unix/V6_C.txt
Too busy to really clean them up at the moment.
Noel
Back in the 80s, in my university days, I was using ISPS (Instruction Set Processor Simulator, if I remember correctly), a software tool to simulate CPUs. It ran on a VAX with BSD 4.2. I have been unable to find any reference to it on the Internet. Does someone on this list know anything about this software?
Thanks
Luca
> From: Mark Longridge
> Fortunately we have the file /usr/sys/dmr/vs.c to look at so this could
> be compiled into the kernel although I haven't done this as yet.
The vs.c seems to be a Votrax speech synthesizer hooked up to a DC11
interface. Do any of the simulators support the DC11? If not, adding the
driver won't do you much good.
Noel
PS: I seem to recall the DSSR group on the 4th floor at LCS actually had one
of these, back in the day. The sound quality was pretty marginal, as I recall!
Hi,
after carefully examining the power supply and checking the generated
voltages, we were convinced that this wouldn't kill our Multibus boards.
Maybe some of you are interested in our progress, so I thought I would
send you an update.
After reconnecting the Multibus backplane, we started the system with
only a CPU board and a memory board. On one of our CPU boards
the smaller (P2) Multibus connector is masked with tape, I'll have to dig
deeper to find out what is deactivated by this…
One of our two CPU boards is currently non-functional (the one without
the masking tape; this one doesn't say a thing on the console UART, so I
will bring in the scope on Monday to check for details). The other one brings up the
monitor startup message and prompt on a connected serial terminal
(emulator) - however, we are unable to get any characters echoed back.
The serial cable is working, we tried all sorts of handshake configurations.
If we get any characters back (the system is running at 9600 baud, I tried
all combinations of 7/8 bit, none/even/odd/mark/space parity and 1/2 stop
bits), these are garbled and contain mostly "1" bits (0xfc, 0xfe, 0xff or
similar).
The UART itself seems to work (exchanged it with the one from the non
working board - same result), so now I suspect the AM26LS32 RS423
driver to be the culprit.
I uploaded some pictures to
http://s1372.photobucket.com/user/michaelengel/library/Codata?sort=3&page=1
- there you can see that this machine is far from being in any sort of
original condition. Nevertheless, it's great to see it come alive again!
Btw., current versions of MAME/MESS include a rudimentary Codata
simulator. This doesn't do very much so far, but it can successfully run
the firmware ROM code (picture also uploaded to photobucket).
Best wishes,
Michael
On 2014-10-07 03:00, norman(a)oclsc.org (Norman Wilson) wrote:
>
> The 11/70 service manual is all good, but it's definitely not enough.
> Ideally, you should have access to the full drawings, the service manual
> for the CPU, the service manual for the memory subsystem, I seem to
> remember that the FP11 has its own service manual, and I think the
> massbus interface also has its own documentation set.
> Also, the memory system consists of both the Unibus map, the cache and
> memory bus system, and then you have separate documentation for the
> memory boxes (either MJ11 or MK11 box).
>
> It might be worth while to contact the Living Computer Museum.
> I forget whether they have an 11/70 running or just an 11/45,
> but I do know that they collect all the documentation they can
> get for old computers--I saw the room where they store it.
> Whenever they need to use it, or there's some other need to
> access it, they try to make time to scan it, so the precious
> copy can stay in the archive room.
LCM have at least one 11/70 running. Although they are not really doing
anything fun on it. I hope to maybe help them with that next time I'm
there. I can't remember seeing any 11/45 running, but I'm pretty sure
there are some in their storage if nothing else...
I'm not going to try dragging a lot of documentation from Sweden to
Seattle, though (I'm not even in Sweden myself lots of the time). On the
other hand, I know they have plenty of documentation, so I would hope
they (and/or CHM) already have most of it.
> Since their goal is to have ancient computers actually
> running, they are certainly interested in having all the
> documents (even if you can't get the wood, as Warren might
> remark at this point), including full engineering drawings.
>
> It's also a neat place to visit if you have some free time in
> Seattle. I'm disappointed to have figured out that, although
> I'll be in Seattle for a conference in about a month, I won't
> be able to visit LCM while they're open unless I skip some
> conference sessions ... or unless I can convince them to open
> up specially. Anyone else on this list planning to attend
> LISA and interested in visiting a museum of old running
> computers?
I know of the place, and have known Rich Alderson for a long time.
It is a fun place, and I could see myself working there, if I just had
the right offer. Don't expect that to happen, though...
I'll be there for different reasons in about a month from now. But my
weekends are free... :-)
Johnny
Also, I had this e-mail sent to me from Jacob who is a long-time TUHS
person. Again, he has questions I don't know the answers to. Anybody?
Cheers, Warren
----- Forwarded message from Jacob Ritorto -----
Greetings Warren,
 It's been decades since we last corresponded and I'm delighted to
see that you're still active in the pdp11 unix community! I've found
some free time and have been kicking around the idea of repairing
the11/45 I scored some years ago (11/45 system number 273 from Stanford
University) and installing 2.9bsd on it. You helped me out years ago
when I had an 11/34 and I managed to do it back then, so I have some
hope this time around too, though there are some more serious hurdles
now. Glad to see that a lot of the license trolling finally appears
to be settled and we can have unfettered access to all the good stuff!
 Any pointers to who
has parts and troubleshooting knowledge would be a big help.
 Softwarewise, I was also thinking I'd like to get my Fuji160 disks
working on the machine. Has work like this been done already, or
would you have pointers as to how to go about it?
  Also, has anyone written a miniature httpd for any of the ancient
bsds?
thanks
jake
----- End forwarded message -----
The 11/70 service manual is all good, but it's definitely not enough.
Ideally, you should have access to the full drawings, the service manual
for the CPU, the service manual for the memory subsystem, I seem to
remember that the FP11 has its own service manual, and I think the
massbus interface also has its own documentation set.
Also, the memory system consists of both the Unibus map, the cache and
memory bus system, and then you have separate documentation for the
memory boxes (either MJ11 or MK11 box).
It might be worth while to contact the Living Computer Museum.
I forget whether they have an 11/70 running or just an 11/45,
but I do know that they collect all the documentation they can
get for old computers--I saw the room where they store it.
Whenever they need to use it, or there's some other need to
access it, they try to make time to scan it, so the precious
copy can stay in the archive room.
Since their goal is to have ancient computers actually
running, they are certainly interested in having all the
documents (even if you can't get the wood, as Warren might
remark at this point), including full engineering drawings.
It's also a neat place to visit if you have some free time in
Seattle. I'm disappointed to have figured out that, although
I'll be in Seattle for a conference in about a month, I won't
be able to visit LCM while they're open unless I skip some
conference sessions ... or unless I can convince them to open
up specially. Anyone else on this list planning to attend
LISA and interested in visiting a museum of old running
computers?
Norman Wilson
Toronto ON
On 2014-10-05 03:00, Dave Horsfall<dave(a)horsfall.org> wrote:
>
> On Sat, 4 Oct 2014, Noel Chiappa wrote:
>
>> >Anyone seriously working on bringing old hardware back to life needs to
>> >get in touch with the Classic Computer Talk list:
> Is John Dodson on this list? He has an 11/70 in his house.
No idea. I occasionally scan cctalk, but most of the time don't bother.
Too much noise and irrelevant or ignorant posts. However, unfortunately
I don't have any better suggestions where to go if you are trying to
restore old hardware and don't have enough knowledge.
And yes, I keep lots of different PDP-8, PDP-11 and VAXen running,
including 11/70 systems.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Sun, Oct 5, 2014, at 13:47, Jacob Ritorto wrote:
> awesome, man, thanks. If I fork tinyhttpd on github, mind if I use 'em?
> I'll attribute to you of course If ok, any license preference? I
> usually
> use MIT..
tinyhttpd itself is not mine and appears to be under the GPL.
On Sat, Oct 4, 2014, at 12:37, Jacob Ritorto wrote:
> nice. may i see your difffs?
>
> On Sat, Oct 4, 2014 at 4:06 AM, <random832(a)fastmail.us> wrote:
> >
> > Seeing this question, I figured "it can't be that hard", and managed to
> > get tinyhttpd http://tinyhttpd.sourceforge.net/ to compile on 2.11BSD.
> > Mostly just required K&R-ification, though I also had to fix some bugs
> > in the way it uses buffers to get it to work at all. Normal GET requests
> > work; The whole CGI thing I disabled because I couldn't get it to work
> > reliably even on modern linux.
> >
> > Not actually tested, though - I couldn't get simh to emulate a network
> > device.
Sure.
Security note: the httpd itself doesn't appear to do anything about e.g.
".." in pathnames, and I didn't do anything about that.
Some guy on eBay has a flock of RL02 drives available (in New York, USA) for a
pretty reasonable price:
http://www.ebay.com/itm/261288230510
I just bought a flock of them, and they are in very good condition. They were
only recently withdrawn from service (at the FAA, so they were professionally
maintained up until they went), and were properly prepared for moving (heads
immobilized, _and_ the motor was locked down - very rare to see that last
step, as it involves finding the right machine screws - or having saved them).
They are late-production ones, too (looked, but couldn't find a date) - they
have the anti-RFI/EMI 'finger' strips (the kind that make a pressure-loaded
contact with the incoming connector shell), which I personally had never seen
on any RL0x drives.
Alas, they have no packs or terminators available, nor cables or slides (any
more :-). But other than that, recommended.
Noel
> From: Warren Toomey <wkt(a)tuhs.org>
>> have been kicking around the idea of repairing the11/45 I scored some
>> years ago (11/45 system number 273 from Stanford University)
>> ..
>> Any pointers to who has parts and troubleshooting knowledge would be a
>> big help.
Anyone seriously working on bringing old hardware back to life needs to get
in touch with the Classic Computer Talk list:
http://www.classiccmp.org/mailman/listinfo/cctalk
A wealth of experience and knowledge on every conceivable topic involved in
old hardware is available there, and people are very helpful.
Noel
> The machine is a Codata 300
Wow.
I went to Leeds Poly (as it was then) in the mid 1980s.
There were two Codatas in the electronics dept., one in its
original plastic case and one 19-inch rackmount - built as an
IEEE 488 controller; I assume what you have is one of those.
The former machine was loaded with Whitesmiths cross compilers
and I learnt z80 assembly language on it ☺
It ran V7 indeed, and was a friend of the Interdata/Perkin Elmer
3210, the main electronics teaching machine. If it is this
machine then it should have the V7 source from the 3210 (Xelos
as it was called) and the source for the drivers for the
codata (which we gained by "accident").
I may be able to remember some other snippets - contact me
off-list with specific questions. I can give you the names
of the lecturers who would know most about it but I guess they
are now retired (though they may still be in Headingley somewhere).
(fondly remembers Leeds).
-Steve
Hi All.
This is really gonna stretch the memories. (I may have even asked about
it on this list before.)
At one of the earlier USENIX conferences that I attended, maybe in
Atlanta, there was a contest to make up humorous new errno values.
The one that won, which I remember to this day was:
EMISTERED:
A host is a host from coast to coast,
and no-one can talk to a host that's close,
unless the host that isn't close
is busy, hung, or dead.
I have quoted this in the gawk doc for many years.
I'm wondering if there is a way to find out who was the author
of this gem? I'd like to give him/her credit.
Thanks,
Arnold
I just received a new TUHS subscription request along with
an interesting query. Can anybody help Michael with his problem?
Cheers, Warren
----- Forwarded message from "Engel, Michael" -----
Dear Warren,
Could you please subscribe me to the TUHS mailing list?
I haven't worked with old Unix systems for quite some time, but I was appointed as a
Senior Lecturer at Leeds Beckett University (UK) two months ago and - to my big
surprise - I found an old Unix machine collecting dust in a corner..
The machine is a Codata 300, a Multibus-based system using a licensed clone of the
original Sun 100U CPU board and a number of additional Multibus controllers. The
machine seems to be complete, including two 8" SMD disks and a Cipher 9-track
tape drive, we also seem to have a set of replacement boards and the CPU board
manual (including schematics and code snippets explaining how to access the onboard
devices - some Codata documentation can also be found on bitsavers).
We haven't tried to power up the machine yet (and, built around 1983, it certainly needs
a close examination of the power supply and capacitors). From information on the net,
this machine runs a Unisoft port of 7th Edition Unix - similar to the Sun 1 machines and
probably a Whitesmiths C compiler (there's a Whitesmiths license badge attached to the
case). Definitely a very interesting and probably quite rare machine and we would like
to revive it (and, if things go well, create an FPGA reimplementation of the system in
the context of a student design project).
Now, I would love to know more details about the implementation of 7th Edition Unix on
the 68000 and the use of the custom MMU built out of fast SRAM and TTL logic.
I do not think that source code to any of the various 68k 7th Edition ports produced by
Unisoft is available somewhere - do you know of a possible source?
Alternatively, do you think it would be worthwhile asking Unisoft for the source code or
do you know if anyone has tried this already? According to the Unisoft history web page
(http://www.unisoft.com/history.html) they still seem to know that they were porting Unix
30 years ago...
The only remotely related source code I could find in my archives is the A/UX 0.7 source
(SVR2, if I'm not mistaken), which probably already required a 68020 with 68851 MMU.
Best regards,
Michael Engel
----- End forwarded message -----
> From: arnold(a)skeeve.com
> it was clear at the time that UNIX and what was happening was extremely
> special .. Those of you who were there really were part of a "golden
> age".
I once observed to Jerry Saltzer that when I started at the MIT CS labs, I
had been bummed because I had missed what I considered their 'Golden Age' -
the work on CTSS and Multics.
When I showed up there, I did a deal where they let me use a PDP-11/40 to
explore some OS ideas I had, if I would write diagnostics on it for a ring
interface they were working on - part of a project on networking that
included work on something called internetworking.
I had no idea what I was getting into!
And of course the networking work soon sucked me in completely. In that
message to Jerry, I said something to the effect of 'clearly I've lived
through a second golden age, and only now am I understanding that'.
Noel
TMG was in v2-v6, sometimes in man1 and sometimes in man6.
I have an apparently complete source listing. The year is
uncertain. (Back then date headings from pr didn't mention the year!)
The accompanying manual, though, is dated 1972. There is also, besides
the TMGL definition of TMGL, a TMGL definition of pig Latin as a
simple test case.
None of this is useful, though, without a PDP-11 binary for tmg--
the usual chicken-and-egg problem with a full-blown compiler written
in its own language.
Doug
> >Speaking of TMG, is there an implementation for FreeBSD/Mac/Linux? Or do
> > I have to find a CDC/PDP-7 emulator first? :-)
> >
> >-- Dave
>
> TMG is mentioned in the v3 manual:
>
> /sys/tmg/tmgl.o -- the compiler-compiler
>
> There's no files for it for v5 but it is in v6 and it seems to
> disappear after that.
> On TUHS V6/usr/source/tmg/tmgl.t would seem to be a source file.
>
> I did manage to get something running for pdp-7 on simh and got to the
> GA prompt.
> Didn't get it to do much beyond printing "CAB DECSYS7 COPY 15 JUNE 1966"
>
> Mark
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> None of this is useful, though, without a PDP-11 binary for tmg
There seems to be a binary for TMG in the V6 distribution.
Noel
Eric S. Raymond has written an article about the history of the time.h
functions at http://www.catb.org/esr/time-programming/
From his blog post announcing it (http://esr.ibiblio.org/?p=6311):
> The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.
>
> Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.
>
> What I’ve released is an 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject matter expert as a collaborator.
When I saw it I thought it might be generally interesting to people who
subscribe to this list.
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Sorry for straying from direct Unix history
Au contraire. This was a fascinating account, and very valuable to have it
noted down for history. Please stray like that anytime you feel like it! :-)
Noel
scj wrote:
> There was a compiler/compiler in use at the Labs, imported I think by Doug
> McIlroy, called TMG.
Sorry for straying from direct Unix history, but this remark spurred a lot
of memories.
TMG (from "transmogrify", defined in Webster as "to change or alter, often
with grotesque or humorous effect") was imported from Bob McClure, erstwhile
Bell Labs person, then at Texas Instruments. And an interesting import
job it was. McClure had written TMG for the CDC 1604 in machine language. He
sent me green coding sheets hand-transliterated into 7090 code. Interesting
debugging: one knew that the logic of the code was sound, but the opcodes
might not always be right. Sometimes, for example, the wrong one of the two
accumulator-load instructions, CLA and CAL, was used.
Clem Pease converted TMG to the GE 635 for Multics by the artifice of defining
7090 opcodes as 635 macros--sometimes many instructions long to slavishly
emulate the 7090's peculiar accumulator (which mixed 38-bit sign-magnitude
with 37-bit twos-complement and 36-bit ones-complement arithmetic). It's
amusing to speculate about the progressive inflation of TMG had McClure sent
me a similar translation for the 7090.
TMG had a higher-level language written in TMG, which evolved during the
Multics project into something considerably more elaborate than McClure's
original, including features like syntax functions, e.g. seplist(a,b) denoted
a sequence of a's separated by b's, for arbitrary syntactic categories a and
b. Syntax functions took TMG beyond the domain of context-free languages.
Multics was to be written in PL/I, a compiler for which was commissioned from
Digitek. They had brilliant Fortran technology, but flubbed PL/I. When it
appeared that the Digitek compiler was hopeless, Bob Morris proposed that
we write a quick and dirty one in TMG. Despite being slow (an interpreter
running on an emulated 7090) and providing only three diagnostics, this
compiler carried the project for a couple of years.
When Unix came along, we were again faced with how to bootstrap TMG across
machines. This time I wrote a bare-bones interpreter in PDP7 assembler, then
by stages grew the language back to the Multics state. Ken, in a compliment
I still treasure, once called this the most amazing program on the original
Unix machine.
I believe TMG was involved in the initial evolution of B, but the
real tour-de-force in B was the ability of an interpreted version to
exploit software paging and transcend the limited memory of the PDP-11.
The following scenario was to be repeated several times in the early days
of Unix. When the native version of B ran out of steam, the interpreted
version would be used to introduce some new optimization that would squeeze
the native version back to fit. (Bigger input, smaller output!) Subsequently
we saw the same thing happen with C and the kernel. When the kernel grew
too big, a new optimization would be introduced in C to squeeze it. (And to
squeeze the compiler, too. The compiler, though, never enjoyed B's advantage
of being able temporarily to run in a bigger arena.)
Doug
I've been looking into the history of yacc (yet another compiler compiler).
The earliest reference I can find is the man page for yacc from v3
which indicates that yacc was written in B language. The files actn.b
and /crp/scj/bigyacc are mentioned. No luck so far in locating those
files.
There is a man page for v4 which is very brief.
There is a yacc executable for v5 but so far I haven't found any v5
era code that works with it. My attempts to compile bc.y from v6 using
yacc from v5 were not successful.
Also the source code for yacc in v5 is missing.
On a happier note I was able to use yacc and cc to regenerate the bc
calculator in v6. It needed a fair amount of swap space to compile,
otherwise an "Intermediate file error" would occur. It seemed to
require at least 300 blocks of hard drive space.
It's a bit mysterious what the Unix v5 yacc was used on. It predated
bc and expr. There's no v5 era *.y files available to look at.
Mark
I've been comparing Unix v5 libc to modern linux and various other
Unix versions and I found something odd.
I made a list of functions which occur in both Unix v5 libc.a and modern
Linux glibc.a, and while there is no problem calling the ecvt function on
modern Linux, it doesn't seem to appear in the archive listing:
ar t /usr/lib/libc.a | grep ecvt
...doesn't find ecvt.
But if you do:
grep ecvt /usr/lib/libc.a
then
Binary file /usr/lib/libc.a matches
So it seems it is in there somewhere. While searching for ecvt.c I
found it as part of openbsd. I assume in modern Linux ecvt must be
part of a larger function but I couldn't find it in the glibc source.
Of course in Unix v5 things were completely straightforward as TUHS
has the file V5/usr/source/s3/ecvt.s
I just want to find all the functions that are still in modern glibc.a
which also existed in Unix v5 libc.a
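As a sanity check, ecvt can still be called through glibc's legacy SVID interface; a minimal test (the feature-test macro and the exact output shown are my assumptions for current glibc):

#define _DEFAULT_SOURCE          /* exposes the legacy ecvt() declaration */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int decpt, sign;
    char *digits = ecvt(3.14159, 6, &decpt, &sign);

    /* expected: digits=314159 decpt=1 sign=0 */
    printf("digits=%s decpt=%d sign=%d\n", digits, decpt, sign);
    return 0;
}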
At 01:04 PM 8/15/2014, Brian Zick wrote:
>Would it still be possible today for someone like me to go out, and find an old teletype terminal (an old ASR or DECwriter or something), set up a phone line and modem and get a roll of paper, and then actually use it to connect to other computers?
Yes, lots of people do it. There is a "Greenkeys" mailing list
http://mailman.qth.net/mailman/listinfo/greenkeys
populated by mostly ham radio RTTY types, but it's also a great archive
of posts about hook-ups and repairs. Yes, there are current-loop
adapters and RS-232 to USB adapters that can be used to connect
to contemporary machines. There are also streaming audio web sites
that send RTTY-style signals if you'd like to emulate your old radio
over the Internet but still use your RTTY audio decoding hardware.
There's also a fellow http://aetherltd.com/ who connects even older teletype
hardware to cell-phone texting.
The Teletype Model 33 was very popular among early computer users
because it was relatively low-priced compared to heavier-duty
teletypes. The old RTTY folks tend to look down their nose at it
because it wasn't as robust as other models.
They routinely huff and puff at recent auction prices for the Model 33,
though, as old computer collectors routinely pay $1,000 for them, while
it's tough to give away the better-built (and heavier!) teletypes.
Last summer I picked up a Western Union-branded Teletype Model 28 KSR
(circa mid-1950s) in near-pristine condition for $50. Almost twenty
years ago I found a Model 33 in a university dumpster.
- John
At 07:46 PM 9/8/2014, you wrote:
>Traffic this evening on the pcc compiler list <pcc.lists.ludd.ltu.se>
>alerted me to the existence of the Software Preservation Group, a
>branch of the Computer History Museum, with a Web site at
>
> http://www.softwarepreservation.org/
Also at http://www.computerhistory.org/groups/spg/ .
It's run by Al Kossow:
http://www.computerhistory.org/staff/Al,Kossow/
He worked at Apple for a long time. His early years were at UW-Milwaukee.
He also frequents the Classic Computer Collector mailing list, a place
where you might acquire and learn to repair the old iron.
- John
The recent UUCP network conversation has me wondering ... is anyone collecting/curating the UUCP maps that represented the way we communicated (outside the ARPANET) from the time of Chesson's paper until the death of comp.mail.maps? Brian Reid's postscript maps were a work of genius; the hand-drawn ASCII maps that predated those are even more wonderful bits of Internet history, let alone art.
--lyndon
> Speaking of using a pipe to an existing command, I originally mis-read the
> code to think there was only _one_ process involved, and that it was buffering
> its output into the pipe before doing the exec() itself - something like this:
>
> pipe(p);
> write_stuff(p[1]);
> close(0);
> dup(p[0]);
> close(p[0]);
> close(p[1]);
> execl("/bin/other", "other", arg, 0);
>
> Which is kind of a hack, but... it does avoid using a second process, although
> the amount of data that can be passed is limited. (IIRC, a pipe will hold at
> most 8 blocks, at least on V6.) Did this hack ever get used in anything?
I didn't notice anybody commenting on the fact that this hack doesn't
work for the purpose--an interactive desk calculator. Running bc then
dc serially defeats interactivity.
The scheme of filling a pipe completely before emptying it is used
in Rob Pike's plan9 editor, sam, to pipe text through some transforming
process. Because the transformer (think sort) may not produce any
output until it has read all its input, sam can't read the result
back until it has finished stuffing the pipe.
Of course sam has to create two pipes, not just one, and it happens
that the initial writer and ultimate reader are the same. But the
basic idea is the same.
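A minimal modern sketch of that two-pipe arrangement, with sort standing in for the transformer (illustrative code, not sam's):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int in[2], out[2];          /* in: us -> filter, out: filter -> us */
    char buf[4096];
    ssize_t n;

    pipe(in);
    pipe(out);
    if (fork() == 0) {          /* child becomes the transforming filter */
        dup2(in[0], 0);         /* stdin: the text we stuff in */
        dup2(out[1], 1);        /* stdout: the result going back */
        close(in[0]);  close(in[1]);
        close(out[0]); close(out[1]);
        execlp("sort", "sort", (char *)0);
        _exit(127);
    }
    close(in[0]);
    close(out[1]);

    /* Stuff the whole text in before reading anything back; safe only
       while the text fits in the pipe's buffer, which is the constraint
       described above. */
    write(in[1], "pear\napple\nquince\n", 18);
    close(in[1]);               /* EOF lets the filter run to completion */

    while ((n = read(out[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    wait(NULL);
    return 0;
}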
Doug
Speaking of things etymological, I've heard two versions of that for
dsw(1).
Delete from Switch Register (delete file whose i-num is in CSR)
Delete Sh*t Work (same, but expressed a bit more robustly)
-- Dave
> From: Dave Horsfall <dave(a)horsfall.org>
> On the *nix systems to which I have access, bc(1) is a standalone
> program on FreeBSD and OSX, but pipes to dc(1) on OpenBSD. I cannot
> check my Penguin box (Ubuntu)
Dude! Trying to answer questions about the origins of BC by looking at
systems this late is like trying to learn Latin by studying Italian! :-)
I looked at the Version 6 source, and it's a bunch of YACC code, but it pulls
the same technique of using a pipe to an instance of DC, viz:
pipe(p);
if (fork()==0) {
        close(1);
        dup(p[1]);
        close(p[0]);
        close(p[1]);
        yyinit(argc, argv);
        yyparse();
        exit();
}
close(0);
dup(p[0]);
close(p[0]);
close(p[1]);
execl("/bin/dc", "dc", "-", 0);
There's likely even older versions than the V6 one, but that's the earliest
source I have online on my PC for easy access.
Noel
Hello,
on de.comp.os.unix.shell there is a recent thread about bc(1) which
turned into a discussion about why it is called "bc". dc(1) is pretty
clearly "desk calculator" as by the man page, but the etymology of bc
seems to be unclear.
I've heard the following plausible theories:
- basic calculator (Wikipedia)
- beauty calculator (some people apparently dislike RPN)
- better calculator
- bench calculator (Wikipedia)
- b is the letter d mirrored (RPN vs algebraic)
- bundle calculator (the word "bundle" appears 97 times in bc.y of V6)
...but nobody had a really conclusive argument. Perhaps someone here
remembers the real story?
Thanks,
--
Christian Neukirchen <chneukirchen(a)gmail.com> http://chneukirchen.org
> From: Mark Longridge <cubexyz(a)gmail.com>
> I have a version of Unix v6 that has a file called /usr/doc/bc that
> describes bc at length
Oh, right, I missed that. I'm a source kind of person... :-)
Speaking of using a pipe to an existing command, I originally mis-read the
code to think there was only _one_ process involved, and that it was buffering
its output into the pipe before doing the exec() itself - something like this:
pipe(p);
write_stuff(p[1]);
close(0);
dup(p[0]);
close(p[0]);
close(p[1]);
execl("/bin/other", "other", arg, 0);
Which is kind of a hack, but... it does avoid using a second process, although
the amount of data that can be passed is limited. (IIRC, a pipe will hold at
most 8 blocks, at least on V6.) Did this hack ever get used in anything?
Noel
Traffic this evening on the pcc compiler list <pcc.lists.ludd.ltu.se>
alerted me to the existence of the Software Preservation Group, a
branch of the Computer History Museum, with a Web site at
http://www.softwarepreservation.org/
I do not recall hearing of it before today, and perhaps a few TUHS
list readers have not either. It may be desirable to add a link to it
from the Unix block of the http://minnie.tuhs.org/ site.
I think that it could also be good to record a link to the Bitsavers
site at
http://bitsavers.trailing-edge.com/
and to make a list of TUHS mirrors more prominent (e.g., we have one
at
http://www.math.utah.edu/mirrors/minnie.tuhs.org/
).
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Hi folks,
I was wondering if Unix had any form of networking before uucp
appeared in Unix v7. It would be interesting to know if one could pass
a file from one Unix v5 machine to another without having to store it
on a magnetic tape.
There's some reference to a mysterious "Spider Interface" in the Unix
v5 manual. It seems to have something to do with DR-11B (which is a
general purpose direct memory access interface to the PDP-11 Unibus).
There's also reference to the "Spider line-printer" :)
Mark
> From: "Jeremy C. Reed" <reed(a)reedmedia.net>
> Later, they considered an LNI, an early token ring (if I understand
> correctly), device
Yes. See:
http://ana-3.lcs.mit.edu/~jnc/history/RingMIT.txt
for more - that's a pre-print version of an article just published in the
_IEEE Annals of the History of Computing_; slight differences with the final
version, but nothing significant.
Thumbnail: There were two versions; V1 was 1MBit/second, produced in very
limited numbers (~10 or so) at MIT, most used there, although IIRC a pair
(at least - one would be of no use :-) went to UCLA (I remember flying
out to LA to help them get them going). V2 was 10Mbit/second, produced as a
commercial product by Proteon in cooperation with MIT, large numbers sold.
Noel
The 3b1 emulator now kind of boots!..
There are some issues with stuff, but for the most part, it works
http://virtuallyfun.superglobalmegacorp.com/?p=4149
-----Original Message-----
From: arnold(a)skeeve.com [mailto:arnold@skeeve.com]
Sent: Tuesday, August 26, 2014 2:17 PM
To: lyndon(a)orthanc.ca; lm(a)mcvoy.com
Cc: rob(a)bolabs.com; tuhs(a)minnie.tuhs.org
Subject: Re: [TUHS] networking on unix before uucp
Larry McVoy <lm(a)mcvoy.com> wrote:
> On Mon, Aug 25, 2014 at 01:00:45PM -0700, Lyndon Nerenberg wrote:
> > It was quite astounding to see the wide range of performance impacts
> > this had on various systems. 3B* systems would tip over and die, except
> > for the (built by Convergent Tech) 3B1.
>
> Sheesh, you people keep bringing up stuff from my past. My buddy Rob
> Netzer (used to be a prof at Brown, now works on BitKeeper with me)
> had one of those 3B1s. Neat machine. Sort of like a desktop VAX.
I had one too. (Also a trailblazer and then a worldblazer.) The 3B1 ran
SVR2; the BSD networking was available as an add-on with the ethernet
card.
I spent many happy hours working on that box, developing gawk and its
documentation; it was slow enough that you could see algorithmic
differences, e.g. standard diff vs. GNU diff.
It had one of those great AT&T keyboards (as did the blit). The UI
wasn't anything special to write home about though.
For a while there was a separate 3b1.* set of newsgroups and an
archive of stuff at Ohio State; there remains a comp.sys.3b1 group
that still has some activity as new people try to revive some of
these machines and others who had them help out. Someone was writing
an emulator, but I don't think it ever got finished.
Ah, the memories .... :-)
Arnold
On Mon, Aug 25, 2014 at 08:42:53AM -0600, emanuel stiebler wrote:
> On 2014-08-23 22:59, Larry McVoy wrote:
> > On Sat, Aug 23, 2014 at 03:32:45PM -0400, Clem Cole wrote:
> >> BEGIN - old guy's memories ....
> >
> > I see your old guy memories and "raise" my sort of old guy memories.
> > This is a bell labs blit story. It relies heavily on 7 bit clean stuff.
> > I'm not entirely sure this ever worked reliably but here is what we did.
> >
> > I was a grad student at UW Madison and shared an office with
> another guy. We
> > had a serial line to the computing center across the street. We
> had a blit,
> > loved it. We wanted two.
>
> Any chance, you still have any software for the BLIT?
Nope, all we were doing was muxing a serial line and that was 8051 assembler.
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
Doug McIlroy wrote:
> > I was wondering if Unix had any form of networking before uucp
>
> Right from the time Unix came up on the PDP-11 it was
> networked in the sense that it had dial-in and dial-out
> modems. Fairly early on, when Unixes appeared in other
> Bell Labs locations, Charlie Roberts provided a program
> for logging into another machine. It had an escape for
> file transfer, so it covered the basic functionality
> of rsh and ftp. It was not included in distributions,
> however, and its name escapes me. Maybe scj can add
> further details.
>
> Doug
Are you thinking of the cu (call unix) command? But that was included in v7,
and I don't think it was part of uucp. The escape was ~, so a ~. to hang up,
~%put to send a file to the remote, ~%take to get one, and ~~ to send a literal ~.
Later on, there was a ct (call terminal) command, expecting a terminal at the
end of the phone line instead of another machine.
On Sat, Aug 23, 2014 at 07:01:40AM +1000, Dave Horsfall wrote:
> On Fri, 22 Aug 2014, Larry McVoy wrote:
>
> > If anyone wants the stuff we use, the stuff mentioned above, I can put
> > it up on the web.
>
> Pretty please! For private use only, of course.
You'all can use it anywhere you like.
http://www.mcvoy.com/lm/tcp.shar
It's not that big a deal (other than 20 years of bug fixes :)
Somewhere I have a bigger deal, at least I think it is, I made a library
to talk to Sun RPC servers in parallel. I called it rpc vectors and Ron
Minnich used it to put a bunch of nfs servers together, he called that
bigfoot. Paper below, if someone wants that code I can ship that too.
It was pretty neat, back in the days of 10Mbit ethernet I was querying
thousands of machines in a single call. The code dealt with the fact
that you had to start eating the replies before you were done sending
the question :)
http://wenku.baidu.com/view/797c4ac62cc58bd63186bd1c.html
or for old school people
http://www.mcvoy.com/lm/bitmover/lm/papers/bigfoot.ps
The code was pretty small, pretty clever, it's a shame it didn't catch on.
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
> I was wondering if Unix had any form of networking before uucp
Right from the time Unix came up on the PDP-11 it was
networked in the sense that it had dial-in and dial-out
modems. Fairly early on, when Unixes appeared in other
Bell Labs locations, Charlie Roberts provided a program
for logging into another machine. It had an escape for
file transfer, so it covered the basic functionality
of rsh and ftp. It was not included in distributions,
however, and its name escapes me. Maybe scj can add
further details.
Doug
> From: Dan Cross <crossd(a)gmail.com>
> Unix was on the ARPAnet circa 1975 (if not earlier):
> http://tools.ietf.org/html/rfc681
Good catch; I didn't know of that document. There is a later, more extensive
document (set) about it, "A Network Unix System for the ARPANET", but that's
from several years later, and doesn't include anything about the history of
the implementation.
> it's entirely possible that the ARPAnet Unix work was done before V6
> ...
> if I had to hazard a guess I'd say they were running V5; perhaps
> heavily patched.
The RFC says this (translated to lower case since the all-upper made my
eyes hurt :-):
For further information concerning the different I/O calls the reader is
directed to The Unix Programmer's Manual, Fifth Edition, K. Thompson,
D. M. Ritchie, June 1974.
which I think makes it pretty definitive...
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I was wondering if Unix had any form of networking before uucp appeared
> in Unix v7.
In general, no, but I know of a number of networked Unixes prior to V6.
ISTR that there were a number of Unixes attached to the ARPANET; I know at
least one (at UIllinois) was - that was a V6 machine.
There were several different TCP/IP implementations done under V6; the
UIllinois guys did one (in C), BBN did one (by Jack Haverty, who ported one
done in assembler by IIRC SRI), and one was done at MIT (by Liza Martin, in
C). I don't think any of them saw significant deployment.
Noel
> From: Brian Zick <brian(a)zickzickzick.com>
> The fun of trying to do something in this now novel way is really
> great. I was thinking I might try using it for my email. The
> news-ticker idea also seems great
I suspect you'll find that the charm wears off pretty quickly, if you try and
use it for Real Stuff, day in, and day out. There's a reason this technology
is not used any more! :-)
> I'm really excited that this not only seems possible but nearly in
> reach.
I share your enthusiasm for the fun of computer archaeology. (Thanks to Milo,
I now have an 11/84 that I'm in the process of trying to get up and running.) Good luck!
Noel
On Fri, Aug 15, 2014 at 11:27 AM, Lyndon Nerenberg <lyndon(a)orthanc.ca>
wrote:
>
> On Aug 15, 2014, at 11:04 AM, Brian Zick <brian(a)zickzickzick.com> wrote:
>
> > Would it still be possible today for someone like me to go out, and find
> an old teletype terminal (an old ASR or DECwriter or something), set up a
> phone line and modem and get a roll of paper, and then actually use it to
> connect to other computers?
> >
> > I know it's not really practical today - but is it possible?
>
> Certainly it's possible. Although you would really only be able to do it
> with an ASCII terminal. A DECwriter would work fine. For a Teletype
> beast, you would need to make sure it used ASCII. But lacking lower case,
> I think you would find it too painful to use, even though all the current
> versions of UNIX (and Linux) I'm aware of still seem to support the
> necessary case conversion in the tty drivers.
>
Hmm. So for a TTY that old there would probably be no option for
lowercase. That does sound a little painful, especially if I wanted to edit
modern programs...
> Your biggest obstacle might be finding a host machine that still has a
> modem attached that you could dial in to :-)
>
So perhaps I could simplify it and attach to a machine sitting next to the
TTY - which then in theory could connect to the outside world via the usual
means. I wonder, has anyone tried something like this?
> And, of course, everyone KNOWS the entire universe runs in terminals that
> support ANSI escape sequences for colour and cursor positioning. Who needs
> termcap? (I'm looking at you, git. And clang.) So you might find setting
> TERM=dumb isn't quite enough.
>
> Also, ed(1) is a wonderful editor on a hardcopy terminal. Unless you run
> it on Linux, which KNOWS the whole world runs on 24 line terminal windows,
> and therefore ed needs to pause its output.
I usually use vim, but before learning vim I learned ed and used it for
about a 2 month space for editing config files and things, so that should
hopefully be the easy part. :-)
Brian Zick
zickzickzick.com
.:/
,,///;, ,;/
o:::::::;;///
>::::::::;;\\\
''\\\\\'' ';\
\
> From: Ernesto Celis <ecelis(a)sdf.org>
> I own a USRobotics modem which I've been thinking about connecting to
> the home server and using it to dial in to get access to my shell
Just out of curiosity, what are you going to dial in _with_? :-)
Noel
> From: Brian Zick <brian(a)zickzickzick.com>
> Would it still be possible today for someone like me to go out, and
> find an old teletype terminal (an old ASR or DECwriter or something),
> set up a phone line and modem and get a roll of paper, and then
> actually use it to connect to other computers?
Well, although I used ASR33's for two years (attached to an 11/20 running
RSTS :-), it was a long time ago (I was 15/16 :-), and they aren't something
I _really_ know about, but ... Here are some issues you need to watch out for:
First, I think most Teletypes used what is called '20mA current loop' serial
line electrical interface standard (although some of the later ones could use
'EIA' - the now-usual, although fast disappearing, serial line electrical
interface standard). They are logically (i.e. at the framing level) the same,
but the voltages/etc are different.
The only Teletype I see listed (in a _very_ quick search, don't take this for
gospel) that used EIA is the Model 37. So if you get a Teletype Model 33 or
35, and want to plug it into a computer, either the computer is going to have
to have an _old_ serial line interface (e.g. DL-11A/C, on a PDP-11), or you're
going to have to locate a 20mA/EIA converter (I've never seen such a thing,
but I expect they existed).
And if you want to plug it into a modem... all modems I ever heard of are EIA
(at least, the ones you could plug terminals into - e.g. in most PC modem
cards, the serial interface is entirely internal to the card).
Second, most of those Teletypes were 110 baud (mechanical hardware
limitation).
So that means that first, if you plug into a computer, your serial interface
has to be able to go that slow. Second, if you're dialing up, you need to find
a dial-up port that supports 110 baud. (I would be seriously amazed if any are
left...)
Of course, if you go with a DecWriter, some of these issues go away, but be
careful: some older DecWriters were 20mA too, and the speeds were almost as
slow on many (probably 300 baud, but I don't know much about DecWriters).
Sorry to be so much cold water, but...
As for finding one... I suggest eBay. There's a broken ASR33 there at the
moment - if you're _really_ serious, might be worth buying as a parts
source. But if you wait, I'm pretty sure one will eventually float by...
Noel
Howdy folks -
So I'm mostly a lurker here and love the history and the way things used to
be done. But being born in '91 I pretty much missed all of it, although I
did grow up with 80s machines in the house.
There is one thing that I would love to do, and may seem a curious thing to
most, but I think about it from time to time, and it's enticing. But I'm
not sure where one would get started.
Would it still be possible today for someone like me to go out, and find an
old teletype terminal (an old ASR or DECwriter or something), set up a
phone line and modem and get a roll of paper, and then actually use it to
connect to other computers?
I know it's not really practical today - but is it possible?
Brian Zick
zickzickzick.com
.:/
,,///;, ,;/
o:::::::;;///
>::::::::;;\\\
''\\\\\'' ';\
\
On Fri, Aug 1, 2014 at 8:13 AM, Andy Kosela <akosela(a)andykosela.com> wrote:
>
>
> On Friday, August 1, 2014, Dario Niedermann <dnied(a)tiscali.it> wrote:
>
>> Tim Newsham <tim.newsham(a)gmail.com> wrote:
>>
>> > just for fun, you might want to run your
>> > ancient unix in simh using this terminal:
>> > https://github.com/Swordifish90/cool-old-term
>>
>> Cool! I've been waiting for ages for something like the Cathode terminal
>> emulator
>> to appear on Linux too. Cathode is Mac OS X only, unfortunately.
>> Homepage: http://devio.us/~ndr/
>> Gopherhole: gopher://retro-net.org/1/dnied/
>>
>>
> I still prefer my old Digital VT terminal though. Nothing will beat CRT
> screen when it comes to low resolution text-only mode.
>
> --Andy
>
> From: John Cowan <cowan(a)mercury.ccil.org>
>> if you're dialing up, you need to find a dial-up port that supports
>> 110 baud.
> I dialed up The World's local dialup line for my area, and heard a
> large variety of tones including Bell 103-compatible FSK, which is 300
> baud. I suspect that anything that can do Bell 103 can fall back to
> Bell 101, which was 110 baud.
There are two more things one needs to have for the port to support 110: i)
the serial interface needs to support 110 (even if the modem is integrated
with the serial hardware on one board, the serial hardware might not do 110),
and ii) the software needs to be willing to go 110.
I don't know anything about how contemporary dial-up ports work, so maybe
there's some side-channel from the modem to the interface which allows the
software to find out directly what speed the modem is using. However, 'back in
the day' with multi-speed ports, there was no such mechanism (the RS-232
interface spec didn't provide for speed indication), and one had to hit BREAK
and the serial line device driver would see that, and try the next speed in a
list. You can still see this in the big table of terminal types in getty.c,
e.g.:
/* table '0'-1-2 300,150,110 */
which tried 300, 150, 110. So if the software isn't looking for 110...
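In modern termios terms (a hypothetical sketch, not the actual getty.c,
which drives this from the quoted table and the old stty word), the hunt
loop is essentially:

	#include <termios.h>
	#include <unistd.h>

	/* speeds mirror getty's "300,150,110" table entry */
	static speed_t speeds[] = { B300, B150, B110 };

	void
	hunt(int fd)
	{
		struct termios t;
		int i = 0;
		char c;

		for (;;) {
			tcgetattr(fd, &t);
			cfsetispeed(&t, speeds[i]);
			cfsetospeed(&t, speeds[i]);
			tcsetattr(fd, TCSAFLUSH, &t);
			write(fd, "login: ", 7);
			if (read(fd, &c, 1) == 1 && c != '\0')
				return;		/* plausible input: speed is right */
			/* BREAK at the wrong speed shows up as a NUL or
			 * framing error, so step to the next table entry */
			i = (i + 1) % (int)(sizeof speeds / sizeof speeds[0]);
		}
	}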
Noel
Rats :( :( :(
Did they have power supplies, and did they still work?
On Fri, 15 Aug 2014, Clem Cole wrote:
> Date: Fri, 15 Aug 2014 17:07:28 -0400
> From: Clem Cole <clemc(a)ccc.com>
> To: Brian Zick <brian(a)zickzickzick.com>
> Cc: "tuhs(a)minnie.tuhs.org" <tuhs(a)minnie.tuhs.org>
> Subject: Re: [TUHS] Teletype
[...]
>
> Funny, just this AM, I put into the electronics recycling box at work 4
> telebit "Worldblazer" modems and a POTS line emulator (and a bunch of other
> old junk). I've been cleaning out my basement and I knew I would never use
> those again.
>
Doug McIlroy:
The single-token rule meant that, if you wanted to supply an
option to wc in the pipeline
ls > wc >
you couldn't write
ls > wc -l >
as one would expect, but instead had to write
ls > "wc -l" >
Yet a quoted "wc -l" as a bare command or (I suspect) as the
first command in a pipeline would lead to "command not found".
What a mess!
======
Then as now, a quoted "wc -l" would be taken by the shell
to be a single word, so
"wc -l" file
would be a request to find a file named "wc -l" (without
the quotes but with the embedded blank) somewhere in the
search path, and execute it with argv[0] = "wc -l" and
argv[1] = "file". But the shell's parser bound only the
word following > or < to the operator, so the command had
to be quoted (if it had arguments) to make it a single word.
So in the old syntax, if you needed to quote an argument
of a command that was part of a pipeline but not at the
head, you'd have to embed quotes within quotes; e.g.
ls > "grep '[0-9]'" >
Decidedly a quick hack, just like the original implementation
of fork(2) (which was, approximately, swap the current process
out, but keep the in-core copy, and give one of the two a new
process ID and process-table entry). Though unlike the original
fork, the original pipeline syntax was rough enough to be
worth fixing early on.
As a side note, when I was writing my earlier message, I was
going to construct an example using wc -l, until I checked the
manual and discovered that when pipelines were invented wc
didn't yet take options. I also thought about an example
using grep, except grep hadn't appeared yet either. Pipelines
(especially once they were attractive and convenient to use)
made a bigger difference than we remember in how commands
worked and which commands were useful.
And of course Doug gets at least as much credit as Ken for
changing our lives with all that.
Norman Wilson
Toronto ON
> From: Mark Longridge <cubexyz(a)gmail.com>
> The first problem I had was I couldn't just cp over all the
> /usr/source/s1 files to the new drive because of "Arg list too long"
John Cowan nailed this; as an aside, I don't know about V5, but in vanilla V6
the entire argument list had to fit into one disk buffer (I would assume V5 is
the same).
The PWB changes to v6 included a rewrite of exec() to accumulate the argument
list in swap space, so it could be much longer; the maximum length was a
parameter, NCARGS, which was set to 5120 (10 blocks) by default.
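The mechanics are easy to sketch (hypothetical code, not the V6 or PWB
source): exec() totals up every argument byte, NULs included, and fails with
E2BIG once the total exceeds its buffer space - one 512-byte disk buffer in
stock V6 versus NCARGS bytes of swap in PWB:

	#define NCARGS	5120	/* PWB default: ten 512-byte blocks of swap */

	/* total the bytes exec() must copy for an argument list */
	argbytes(argv)
	char **argv;
	{
		int n;
		char *p;

		n = 0;
		while ((p = *argv++) != 0)
			do
				n++;		/* count the trailing NUL too */
			while (*p++);
		return(n);			/* greater than NCARGS means E2BIG */
	}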
Noel
Since there's a pdf of the unix v5 man pages I figured I might as well
recreate all the necessary files to have man pages in v5.
There's a very simple thompson shell script that I used from v6 to create man:
if X$2 != X"" nroff man0/naa man$1/$2.$1
if X$2 = X"" nroff man0/naa man1/$1.1
I borrowed the assembly language source from v6 to recreate nroff for
v5. After that it was just a matter of matching the date from various
files in v4 and v6. If the date was exactly the same for a given man
page file I just copied it straight into v5. If the date was different
then I used the version in v6 and just edited it until it matched what
was shown in the v5 manual pdf.
It should also help me figure out a lot of the differences between v5
and v6. When it's all done I'll put the disk images and configuration
files on archive.org and post the URL here.
I developed a sort of philosophy for adding stuff to unix v5 which
goes beyond the v5root.tar.gz files donated by Dennis Ritchie:
No changing of cc or as.
No changing of the kernel code beyond recompiling the existing v5 code.
No changing of the existing device drivers (adding new ones is OK).
No backporting of iolib or stdio into v5.
No changing libc.
Adding userland programs is OK as long as the above rules are followed.
Mark
Ok, I was just thinking that we have a lack of Unix version 5 (and
older) source code but since the Unix v5 era was the era of
teletypewriters perhaps there could be a stockpile of old teletype
printouts somewhere. Assuming they didn't run out of paper all the
time there would have been an automatic record generated of everything
Thompson and Ritchie did. Some of those printouts must have been kept
somewhere.
Mark
Firstly, I should mention I'm using simh to simulate Unix version 5.
Well I tried to reorganize the files in unix v5. Mainly I wanted more
room on rk0 so I figured I'd create a new drive and put all the source
from /usr/source/s1 on it.
The first problem I had was I couldn't just cp over all the
/usr/source/s1 files to the new drive because of "Arg list too long"
so I figured I would just create an archive file called all.a which
would include all the files in /usr/source/s1 and copy that over.
But then I got "phase error" when I tried to keep adding files to the
archive (I had to do this in stages, e.g. ar r all.a /usr/source/s1/a*
then ar u all.a /usr/source/s1/b* etc). Phase error seemed to occur
when the archive got larger than around 160,000 bytes. So I ended up
creating 3 archive files to keep from getting "phase error".
I was wondering does anyone understand what the limits are for the cp
and ar commands?
Mark
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Yet a quoted "wc -l" as a bare command or (I suspect) as the first
> command in a pipeline would lead to "command not found".
I don't know about earlier versions of Unix, but FWLIW on V6 it does indeed
barf in both of these cases (just tried it).
Noel
Thanks, Norman, for reminding me of the actual pipe syntax in v3.
This reinforces the title of one history item on Dennis's
website: "Why Ken had to invent |".
I'd suppressed all memory of the fact that in the pipeline
... > cmd > ...
cmd had to be a single token. That was certainly not the intent
of my original proposal. It is understandable, though, as a
minimal hack to the existing shell syntax and its parser, which
accepted occurrences of <token and >token anywhere in a command.
The single-token rule meant that, if you wanted to supply an
option to wc in the pipeline
ls > wc >
you couldn't write
ls > wc -l >
as one would expect, but instead had to write
ls > "wc -l" >
Yet a quoted "wc -l" as a bare command or (I suspect) as the
first command in a pipeline would lead to "command not found".
What a mess!
Soon after, Ken was inspired to invent the | operator, lest
he should have to describe an ugly duckling in public at an
upcoming symposium in London.
Is it possible that the ugliness of the token hack was the
precipitating factor that gave us the sublime | ? But for
the hack, perhaps we'd still be writing
ls > wc >
Doug
> From: Mark Longridge <cubexyz(a)gmail.com>
> In Unix v6 there is a file in the TUHS archives
> V6/usr/source/s4/alloc.s
> ..
> In v5 the command "ar t /lib/libc.a" lists the files in the c library
> and that includes alloc.o
Well, at least you have the binary. Dis-assembly time! :-)
> so there should be a source file somewhere.
Ha-ha-ha-ha-HAH! You _are_ an optimist!
I'm not joking about the dis-assembly. Not to worry, it's not too bad (I had
to do it to retrieve the source for the V6 RL bootstraps - and you've got
symbols too), and you've got the V6 alloc.s to guide you - with luck, it did
not get re-written between V5 and V6, and you may have minimal (no?) changes
to deal with?
I don't know which debugger V5 has (db or cdb); if neither, you can spin up a
V6 and do the disassembly there.
Do "db alloc.o > alloc.s" and then type '0?<RETURN>' followed by a whole
bunch of RETURNs. (You can do plain 'db alloc.o' first, to see about how many
CR's you need.) Next do a "nm alloc.o" to get as many symbols as you can.
At that point, I would extract your prototype alloc.s from the emulated
machine so you can use a real editor to work on it. (You should have a way
to get files in and out of the simulated machine; that was one of the
first things I did with my V6 work:
http://www.chiappa.net/~jnc/tech/V6Unix.html
although if you're using SIMH I don't know if that has a way to import
files - a big advantage to using Ersatz-11, although one I didn't know
about when I picked it.)
You may need to go back and do a "xxx/" {with appropriate value for "xxx"}
plus a few CR's to get static constants, but at that point you should have
all the raw data you need to re-create the V5 alloc.s. Obviously, start
by having your proto-allocV5.s in one window, and compare with the
allocV6.s in another... like I said, you may luck out.
The final step, of course, is 'as alloc.s', and then 'cmp a.out alloc.o'.
Noel
> Interesting that they had both - I don't remember hearing about the 37
> but that doesn't mean much. :-)
The only model 33 on any PDP11 in Bell Labs research was the console.
Otherwise all terminals were ASCII devices. Model 37's predated Unix.
doug
Ok, I was trying to understand exactly what the alloc(III) subroutine
does in unix version 5 and I've discovered that the source code for it
appears to be missing.
In Unix v6 there is a file in the TUHS archives
V6/usr/source/s4/alloc.s so I assume there should be a
V5/usr/source/s4/alloc.s as well but I can't find it anywhere.
In v5 the command "ar t /lib/libc.a" lists the files in the c library
and that includes alloc.o so there should be a source file somewhere.
Mark
Dave Horsfall:
I was surprised when "chdir" became "cd", but I suppose it fits the
philosophy of 2-letter commands.
======
Don't forget that the original spelling, in the PDP-7 UNIX that
had no published manual, was ch.
The 1/e manual spells it chdir. I remember that in one of
Dennis's retrospective papers, he remarks on the change, and
says he can't remember why.
I once asked in the UNIX room if anyone could recall why ch
changed to chdir. Someone, I forget who, suggested it was
because the working directory was the only thing that could
be changed: no chmod or chown in the PDP-7 days. I don't know
whether that's true.
Norman Wilson
Toronto ON
Lyndon Nerenberg:
Do you still consider '^' the shell's inter-command pipe character?
======
No. By the time I first used UNIX, | was well-established as
the official pipeline character; ^ was just a quirky synonym.
I had the impression somehow that ^ was there just to make
life easier on the Model 33 Teletype, which had no | key.
Digging into old manuals, ^ and | appear simultaneously, in
sh(1) in the Fourth Edition. Pipelines first appeared in
3/e, though with a clumsier syntax (not supported by
any current shell I know): where we would now type
ls | wc
the original syntax was
ls > wc >
The trailing > was required to tell the shell to make a pipeline
of the two commands, rather than to redirect output to a file
named wc. One could of course redirect the final command's
output to a file as well:
ls > wc > filecount
Even clumsier: where we would now type
ls | pr -h 'Look at all these files!'
the 3/e shell expected
ls > "pr -h 'Look at all these files!'" >
because its parser bound > to only the single following word.
The original syntax could be reversed too:
wc < ls <
The manual implies this was required if the pipeline's
ultimate input came from a file. Maybe someone with more
energy than I have right now can dig out the source code
and see.
I was originally going to use an example like
who | grep 'r.*' | wc -l
but the old-style version would be anachronistic. There
was no grep yet in 3/e, and wc took no arguments.
I do still have the habit of quoting ^ in command arguments,
but that's still necessary on a few current systems; e.g.
/bin/sh on Solaris 10. Fortunately, that makes it easier
to remember to quote ! as well, something required by the
clumsy command-history mechanism some people like but I don't.
(I usually turn off history but occasionally it gets turned on
by accident anyway.)
Norman Wilson
Toronto ON
>> From: Mark Longridge <cubexyz(a)gmail.com>
>> I thought you folks might be interested in looking at the changes I had
>> to make. It was a bit harder than the port to v6 but porting to v6
>> first did make things a bit easier.
> To save me from poring over 'diff' output :-), what (at a high level) were
> the changes you had to make to get it to run on v5?
> Noel
Briefly the differences were these:
modern to v7: remove all references to void
no vi on v7, v6, and v5 so using ed instead.
no conditional compilation so no way to make a truly universal version
which works on everything.
v7 to v6: use iolib instead of stdio: fopen -> copen, fclose ->
cclose, fgetc -> cgetc, fputc -> cputc
use int (no long or short in v6)
call to srand uses different argument
copen returns an int instead of a file pointer
no strcat in v6 so the function had to be added
v6 to v5: no iolib: fopen -> creat + open, copen -> open, cgetc ->
read, cputc -> write, cclose -> close
no scanf in v5 so I used the source for gets from v7 instead
getchar() leaves a newline in the buffer so I added an extra call to
getchar() immediately before each call to gets
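For anyone who hasn't seen iolib, a hedged sketch (not Mark's actual diff) of
what the v7-to-v6 end of that mapping looks like in practice; copen() hands
back a small int rather than a FILE pointer, so the error test changes shape
too:

	/* hypothetical V6 iolib copy loop; under V7 stdio this would
	 * be fopen/fgetc/fputc/fclose on FILE pointers */
	copyfile(from, to)
	char *from, *to;
	{
		int in, out, c;

		in = copen(from, 'r');		/* an int, not a FILE *: test < 0 */
		out = copen(to, 'w');
		if (in < 0 || out < 0)
			return(-1);
		while ((c = cgetc(in)) >= 0)	/* -1 assumed at end of file */
			cputc(c, out);
		cclose(in);
		cclose(out);
		return(0);
	}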
The size of the unirubik executable was 8K for modern Linux, 10K for
v7, 10K for v6, and 5492 bytes for v5.
Generally I was a lot slower trying to edit files with ed rather than
vi but I'm a lot better with ed now. There wasn't really much unix v5
code to look at and I found the v5 manual a bit spartan.
Mark
Larry McVoy:
Looking at git like that is sort of like looking at the size of
a dynamically linked app. Ya gotta add in libc and all the extensions
people use to make it not suck.
=====
In which case one should also add the size of the kernel, or at
least the code paths exercised by a given program.
Not to mention the layers of window systems, networking, desktops,
message buses, name-space managers, programs to emulate 40-year-old
terminal hardware, flashy icons, and so on.
I say all this to underscore Larry's point, not to dispute it.
Everything has gotten more complicated. Some of the complexity
involves reasonable tradeoffs (the move toward replacing binary
interfaces with readable text where space and time are nowhere
near critical, like the /proc and /sys trees in modern Linux).
Some reflects the more-complex world we live in (networking).
But a lot of it seems, to my mind that felt really at home when
it first settled into UNIX in 1981, just plain tasteless.
There are certainly legitimate differences in aesthetic taste
involved, though. I think taste becomes technically important
when it can be mapped onto real differences in how easily a
program can be understood, whether its innards or its external
interface; how easily the program can adapt to different tasks
or environments; and so on.
Norman Wilson
Toronto ON
Tim Newsham:
I was referring to the bell labs guys who wrote linux and later plan9...
=======
Which Bell Labs guys wrote Linux?
I assume you're not referring to Andy Tanenbaum, erstwhile teacher
of a certain famous Finn ...
Norman Wilson
Toronto ON
PS: it's true that the Plan 9 folks at Bell Labs were early
champions of both Unicode and the UTF-8 encoding. Source:
personal memory.
> Remember that writing programs on terminals was a relative latecomer --
> FORTRAN was designed for punched cards.
Remember that FORTRAN also was a latecomer. It was a shock
to convert from the full character set of the Flexowriters at
Whirlwind to the rebarbative upper-case-only of the 704.
In that vein, there was a period when the Chicago Manual of Style
disparaged uppercase text, with an exception being made for
computer programs, which of course should be presented in
upper case.
Doug
> no conditional compilation so no way to make a truly universal version
> which works on everything.
If cc -I is there, it should be able to do the tailoring.
Also conditional compilation of non-declaration statements can
be replaced by regular if statements that typically can be
optimized away (though the old C compilers may not do so).
Incidentally, I would say that the use of conditional compilation
is evidence that the code is NOT truly universal, but has to be
specially adapted to various environments.
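A trivial illustration of the substitution (names hypothetical):

	#include <stdio.h>

	int debug;	/* run-time flag, set e.g. from the command line */

	void
	trace(int x)
	{
		/* an ordinary if instead of #ifdef DEBUG ... #endif:
		 * the same source compiles everywhere, and when the
		 * condition is constant the compiler can drop the branch */
		if (debug)
			printf("x = %d\n", x);
	}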
Doug
> "Reining in", please (peeve, peeve)
Ouch. Doubly so for the erstwhile curator of "spell".
(Could "reigning in" somehow have been implanted by this
headline that I saw in The Economist a few days ago:
"The reign in Maine is easy to explain"?)
Doug
> From: "A. P. Garcia" <a.phillip.garcia(a)gmail.com>
> the spirit of emacs without the bloat :-)
Exactly. I've often wondered what the heck exactly it is that GNU Emacs, GCC,
etc are all doing with those megabytes of code. (In part, probably all those
options: "Options. We've got lots of them. So many in fact, that you need two
strong people to carry the documentation around.", as that classic hack says.
But there's no way the options alone can explain it all.)
The thing is that it's not just aesthetics that makes large programs bad;
there are very good practical reasons why they are bad, too. The 'takes more
resources' isn't such a big deal today, because memory is massive, and
there's a ton of computing power to be thrown at things. (Although I'm always
amazed at how the active content in Web pages seems to run incredibly slowly
on all but the very latest and greatest machines. WTF are they doing?)
But more code = more material that someone new has to understand before they
can make some change (and long-lived code is always being changed/upgraded by
new people). And when people understand a system poorly, their changes tend
to be 'a bag on the side', and that's the kind of 'code cancer' that tends to
kill systems in the long run. More code is also more places where there
can be bugs (especially when it's changed by someone who understands it
poorly, repeat previous comment).
Etc, etc. And those will never go away - human brain power is finite, and
unlike hardware, not expanding.
There's just no reason to have N megabytes of code when .N will do. (I've
often thought we ought to make new programmers serve an apprenticeship of a
year or two on a PDP-11 - to teach them to 'think small', and to realize you
_can_ do a lot in a small space.)
Noel
Doug McIlroy:
A symptom of why I have always detested emacs and vi. With ^D, ^C,
and ^\, Unix has more than enough mystery chords to learn.
====
What is this ^C mystery chord?
Or can it be that I am actually more wedded to the past than
Doug, in that I still use DEL as my interrupt character?
And, for that matter, @ for kill (though in the modern world
one has to type @ often enough to require learning a different
modern-world mystery chord, ^V).
I break with the past for character-erase, though: backspace,
not #.
Norman Wilson
Toronto ON
> From: John Cowan <cowan(a)mercury.ccil.org>
> because of all the Elisp code it ships with
So why is /usr/bin/emacs 4.5 megabytes?
> That's basically just the kind of peeving that objects to the use of
> computers as calculators and spelling checkers.
I just gave several good reasons why large programs (well, technically,
systems) are bad. Are you saying those reasons are fallacious?
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I thought you folks might be interested in looking at the changes I had
> to make. It was a bit harder than the port to v6 but porting to v6
> first did make things a bit easier.
To save me from poring over 'diff' output :-), what (at a high level) were
the changes you had to make to get it to run on v5?
Noel
> From: "A. P. Garcia" <a.phillip.garcia(a)gmail.com>
> Being so small, I expected the editor to lack a scripting language.
Well, there is a companion 'compiler' which converts extension source into
the intermediate form (byte-code) which is interpreted by the editor. But
it's even smaller (67KB!) and as fast as the editor itself.
> I was pleasantly surprised that it does have one, and that it's a c
> derivative ... "Extensible and modifiable" doesn't always mean the same
> thing to everyone, and well, you're a kernel hacker.
Take a quick look at a source file, e.g. one of mine:
http://ana-3.lcs.mit.edu/~jnc/tech/cmd.e
and you'll see i) what it's like (except for a few new editing-specific
keywords, such as 'on <key>' in function definitions, it's pretty much C),
and ii) it will give you a sense of the kind of things one writes in it, and
how easy it is to do so.
The underlying run-time basically just provides buffer, display, etc
primitives, and pretty much all the actual editor commands are written in the
'extension' language, even simple things like 'forward character' (^F), etc.
The complete manual is available online, the run-time system is described
here:
http://www.lugaru.com/man/Primitives.and.EEL.Subroutines.html
Epsilon comes (as of a few versions back, I haven't bothered to upgrade) with
about 22K lines of source, which is the bulk of the actual editor; that turns
into about 190KB of intermediate code.
Noel
On Sat, 2 Aug 2014, John Cowan wrote:
> > Hadn't really noticed; I went straight from CP/M to Unix, giving
> > MS-DOS a miss.
>
> I was actually thinking about OS/8 and RT-11.
Ahh... RT-11 and TECO... Who here hasn't typed their name into it to see
what it did?
I was thinking of home systems.
-- Dave
> From: Benjamin Huntsman <BHuntsman(a)mail2.cu-portland.edu>
> I thought it stood for Escape-Meta-Alt-Control-Shift :)
> From: Dave Horsfall <dave(a)horsfall.org>
> EMACS - Editor too large
Those are both pretty funny!
BTW, Epsilon (that 250KB Emacs that I was raving about) not only runs under
Windows, it also runs under Linux, Mac OS, FreeBSD, etc. Here:
http://lugaru.com/
I can't say enough good things about it (hence my 30-year addiction to it).
If you want an Emacs clone that is very small; very fast; and wildly
extensible and modifiable (it comes with almost all the source), in C
(effectively); this is the one.
Noel
> To each their own.
Indeed.
> As a Vi user, nothing beats having Esc on the home row.
A symptom of why I have always detested emacs and vi. With ^D, ^C,
and ^\, Unix has more than enough mystery chords to learn. Emacs
and vi raised that number to a high power--an interface at least
as arcane and disorganized as the DD card in OS 360--baroque
efflorescences totally out of harmony with the spirit of Unix.
(Perhaps one could liken learning vi to learning how to finger
the flute, but the flute pays off with beautiful music. To put the
worst face on vi, it "pays off" only by promoting frantic tinkering.)
A modern-day analog of the undisciplined exuberance of emacs and vi:
for a good time on linux try
less --help | wc
Does comment on taste belong in a discussion of history? I think
so. Unix was born of a taste for achieving big power by small
means rather than by unbounded accumulation of facilities. But
evolution, including the evolution of Unix, does not work that
way. An interesting question is how the corrective of taste manages
ever to recenter the exuberance of evolution. The birth of Unix shows
it can happen. When will it happen again? Can one cite small-scale
examples that gained traction within the larger evolution of Unix?
Doug
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> A symptom of why I have always detested emacs and vi. With ^D, ^C, and
> ^\, Unix has more than enough mystery chords to learn. Emacs and vi
> raised that number to a high power--an interface at least as arcane and
> disorganized as the DD card in OS 360--baroque efflorescences totally
> out of harmony with the spirit of Unix.
I will agree that the Emacs user interface is not simple - although there are
levels, and one can start out e.g. without knowing the commands to move by
word, and get by with the commands to move by character - and of course
nowadays, the arrows, etc, keys on keyboards are bound to the appropriate
commands, for novices.
But it's a subtle debate; yes, it's not for everyone, but i) as an
application, not everyone has to use it (unlike a kernel), and ii) as the
editor is the principal tool which most programmers spend hours a day using,
it is not insensible to have a more complex but powerful tool which takes a
while to fully master. (Like playing a violin...)
Back on V6, we started using one written by someone at BBN (memory fails me as
to exactly who), and it improved my productivity immensely (with 'WYSIWYG'
editing - i.e. you always see the current contents of the buffer, multiple
buffers, multiple windows, etc).
I had been using 'ed' (although I had access to Emacs on the ITS machines),
and although I was (and remain) fairly skilled at 'ed', the factors I just
listed are immense, IMO. Being able to see the code as I work on it really,
really helps (for me, at least).
But a lot of that is orthogonal to Emacs command interface; you can have
'WYSIWYG', multiple buffers, etc with a wholly different command interface,
and get much the same benefit. (E.g. uSoft Word has most of those; real
WYSIWYG [i.e. with multiple fonts], multiple files open at once, etc, etc.)
Does something like Word produce the same reaction for you? I don't use it
much, but my wife does (she's an engineer, and uses it to write papers), and
its complexity drives her crazy sometimes.
As for the size of Emacs, everyone needs to distinguish between GNU Emacs, and
Emacs-like editors. Just as GCC is a beast, but other C compilers are and were
much smaller, there are small Emacses out there.
Back on V6 (on a PDP-11, of course), it had to fit into 64KB; the one we used
didn't have the kind of extensibility common in them now, but it was still
a much better tool for me than 'ed'.
As I recall the performance was pretty good (albeit it chewed CPU time, since
it woke up on every character - Multics had an Emacs which tried to avoid
that, and only woke up on non-printing characters, and used system echoing for
the others). I don't know for sure (I don't have the source to hand at the
moment - that's one of the things I hope to recover if/when I can read those
backup tapes), but I suspect that it 'windowed' files (i.e. didn't read the
whole thing in); with the 64KB address space of the 11, that would be almost
inevitable.
I have been using another Emacs, Epsilon, for almost 30 years now; it started
as basically Emacs for MS-DOS, and later became Emacs for Windows, and it is
small and very fast. The Windows executable is about 250KB, and it loads a
'state file' (mostly interpreted 'compiled' intermediate code, written in
something that's 99.2% 'C', in which a lot of the editor is actually written)
of about 200K (for mine, which has a lot of extensions I wrote). It starts
fast, and runs blindingly fast. It also uses the file 'windowing' techniques,
and can handle much larger files than its address space (this dates back to
its MS-DOS days).
So Emacs != big (at least, not necessarily).
> A modern-day analog of the undisciplined exuberance of emacs and vi:
> for a good time on linux try
I basically agree with you on this; I want to go away and collect my thoughts
before responding, though.
Noel
Arrow keys? Vi does arrow keys? But then I'd have to move my hand from home.
That's not vi.
--
Ed Skinner, ed(a)flat5.net, http://www.flat5.net/
> So Doug, ed? Or what?
Yes, ed for small things. It loads instantly and works in the
current window without disturbing it. And it has been ingrained
in my fingers since Multics days.
But for heavy duty work, I use sam, in Windows as well as Linux.
Sam marries ed to screen editing much more cleanly than vi.
It has recursive global commands and infinite undo. Like qed
(whence came ed's syntax) and Larry's xvi it can work on
several files (or even several areas in one file) at once.
I would guess that a vi adept would miss having arrow keys
as well as the mouse, but probably not much else. Sam offers
one answer for my question about examples of taste reigning
in featurism during the course of Unix evolution.
Doug
> From: Dave Horsfall <dave(a)horsfall.org>
> I recall that there were other differences as well, but only minor. My
> paper in AUUGN titled "Unix on the LSI-11/23" reveals all
> about porting V6 to the thing.
I did a google for that, but couldn't find it. Is it available anywhere
online? (I'd love to read it.) I seem to recall vaguely that AUUGN stuff was
online, but if so, I'm not sure why the search didn't turn it up.
> I vaguely remember that the LTC had to be disabled during the boot
> process, for example, with an external switch.
I think you might be right, which means the simulated 11/23 I tested on
wasn't quite right - but keep reading!
I remember being worried about this when I started doing the V6 11/23 version
a couple of months back, because I remembered the 11/03's didn't have a
programmable clock, just a switch. So I was reading through the 11/23
documentation (I had used 11/23s, but on this point my memory had faded),
trying to see if they too did not have a programmable clock.
As best I can currently make out, the answer is 'yes/no, depending on the
exact model'! E.g. the 11/23-PLUS _does_ seem to have a programmable clock
(see pg. 610 of the 1982 edition of "microcomputers and memories"), but the
base 11/23 _apparently_ does not.
Anyway, the simulated 11/23 (on Ersatz11) does have the LTC (I just checked,
and 'lks' contains '0177546', so it thinks it has one :-).
But this will be easy to code around; if no line clock is found (in main.c),
I'd probably set 'lks' to point somewhere harmless (054, say - I'm using
050/052 to hold the pointer to the CSW, and the software CSW if there isn't a
hardware one). That way I can limit the changes to be in main.c, I won't have
to futz with clock.c too.
Noel
PS: On at least the 11/40 (and maybe the /45 too), the line clock was an
option! It was a single-height card, IIRC.
> From: Mark Longridge <cubexyz(a)gmail.com>
> I was digging around trying to figure out which Unixes would run on a
> PDP-11 with QBUS. It seems that the very early stuff like v5 was
> strictly UNIBUS and that the first version of Unix that supported QBUS
> was v7m (please correct me if this is wrong).
That may or may not be true; let me explain. The 11/23 is almost
indistinguishable, in programming terms, from an 11/40. There is only one
very minor difference (which UNIX would care about) that I know of - the
11/23 does not have a hardware switch register.
Yes, UNIBUS devices can't be plugged into a QBUS, and vice versa, _but_ i)
there are programming-compatible QBUS versions of many UNIBUS devices, and ii)
there were UNIBUS-QBUS converters which actually allowed a QBUS processor to
have UNIBUS peripherals.
So I don't know which version of Unix was the first run on an 11/23 - but it
could have been almost any.
It is quite possible to run V6 on an 11/23, provided you make a very small
number of very minor changes, to avoid use of the CSWR. I have done this, and
run V6 on a simulated 11/23 (I have a short note explaining what one needs to
do, if anyone is interested.) Admittedly, this is not the same as running it
on a real 11/23, but I see no reason the latter would not be doable.
I had started in on the work needed to get V6 running on a real 11/23, which
was the (likely) need to load Unix into the machine over a serial line. WKT
has done this for V7:
http://www.tuhs.org/Archive/PDP-11/Tools/Tapes/Vtserver/
but it needs a little tweaking for V6; I was about to start in on that.
> I have hopes to eventually run a Unix on real hardware
As do a lot of us... :-)
> It seems like DEC just didn't make a desktop that could run Bell Labs
> Unix, e.g. we can't just grab a DEC Pro-350 and stick Unix v7 on it.
I'm not sure about that; I'd have to check into the Pro-350. If it has memory
mapping, it should not be hard.
Also, even if it doesn't have memory mapping, there was a Mini-Unix done for
PDP-11's without memory mapping; I can dig up some URLs if you're interested.
The feeling is, I gather, very similar.
> it would be nice to eventually run a Unix with all the source code at
> hand on a real machine.
Having done that 'back in the day', I can assure you that it doesn't feel
that different from the simulated experience (except that the latter are
noticeably faster :-).
In fact, even if/when I do have a real 11, I'll probably still mostly use the
simulator, for a variety of reasons; e.g. the ability to edit source with a
nice modern editor, etc, etc is just too nice to pass up! :-)
Noel
Hi folks,
I was digging around trying to figure out which Unixes would run on a
PDP-11 with QBUS. It seems that the very early stuff like v5 was
strictly UNIBUS and that the first version of Unix that supported QBUS
was v7m (please correct me if this is wrong).
I was thinking that the MicroPDP-11's were all QBUS and that it would
be easier to run a Unix on a MicroPDP because they are the most
compact. So I figured I would try to obtain a Unix v7m distribution
tape image. I see the Jean Huens files on tuhs but I'm not sure what
to do with them.
I have hopes to eventually run a Unix on real hardware but for now I'm
going to stick with simh. It seems like DEC just didn't make a desktop
that could run Bell Labs Unix, e.g. we can't just grab a DEC Pro-350
and stick Unix v7 on it. Naturally I'll still have fun checking out
Unix v5 on the emulator but it would be nice to eventually run a Unix
with all the source code at hand on a real machine.
Mark
Many Q-bus devices were indeed programmed exactly as if
on a UNIBUS. This isn't surprising: Digital wanted their
own operating systems to port easily as well.
That won't help make UNIX run on a Pro-350 or Pro-380,
though. Those systems had standard single-chip PDP-11
CPUs (F11, like that in the 11/23, for the 350; J11,
like that in the 11/73, for the 380), but they didn't
have a Q-bus; they used the CTI (`computing terminal
interconnect'), a bus used only for the Pro-series
systems. DEC's operating systems wouldn't run on
the Pro either without special hacks. I think the
P/OS, the standard OS shipped with those systems, was
a hacked-up RSX-11M. I don't know whether there was
ever an RT-11 for the Pro. There were UNIX ports but
they weren't just copies of stock V7.
I vaguely remember, from my days at Caltech > 30 years
ago, helping someone get a locally-hacked-up V7
running on an 11/24, the same as an 11/23 except it
has a UNIBUS instead of a Q-bus. I don't think they
chose the 11/24 over the 11/23 to make it easier to
get UNIX running; probably it had something to do with
specific peripherals they wanted to use. It was a
long time ago and I didn't keep notebooks back then,
so the details may be unrecoverable.
Norman Wilson
Toronto ON
>> the downstream process is in the middle of a read call (waiting for
>> more data to be put in the pipe), and it has already computed a pointer
>> to the pipe's inode, and it's looping waiting for that inode to have
>> data.
> I think it would be necessary to make non-trivial adjustments to the
> pipe and file reading/writing code to make this work; either i) some
> sort of flag bit to say 'you've been spliced, take appropriate action'
> which the pipe code would have to check on being woken up, and then
> back out to let the main file reading/writing code take another crack
> at it
> ...
> I'm not sure I want to do the work to make this actually work - it's
> not clear if anyone is really that interested? And it's not something
> that I'm interested in having for my own use.
So I decided that it was silly to put all that work into this, and not get it
to work. I did 'cut a corner', by not handling the case where it's the first
or last process which is bailing (which requires a file-pipe splice, not a
pipe-pipe; the former is more complex); i.e. I was just doing a 'working proof
of concept', not a full implementation.
I used the 'flag bit on the inode' approach; the pipe-pipe case could be dealt
with entirely inside pipe.c/readp(). Here's the added code in readp() (at the
loop start):
if ((ip->i_flag & ISPLICE) != 0) {	/* our upstream got spliced away */
	closei(ip, 0);			/* release the orphaned pipe inode */
	ip = rp->f_inode;		/* re-fetch: splice() re-pointed the
					 * file table entry at the new source */
}
It worked first time!
In more detail, I had written a 'splicetest' program that simply passed input
to its output, looking for a line with a single keyword ("stop"); at that
point, it did a splice() call and exited. When I did "cat input | splicetest
| cat > output", with appropriate test data in "input", all of the test data
(less the "stop" line) appeared in the output file!
For the first time (AFAIK) a process successfully departed a pipeline, which
continued to operate! So it is do-able. (If anyone has any interest in the
code, let me know.)
Noel
Hi folks,
I've been typing sync;sync at the shell prompt then hitting ctrl-e to
get out of simh to shutdown v5 and v6 unix.
So far this has worked fairly well but I was wondering if there might
be a better way to do a shutdown on early unix.
There's a piece of code for Unix v7 that I came across for doing a shutdown:
http://www.maxhost.org/other/shutdown.c
It doesn't work on pre-v7 unix, but maybe it could be modified to work?
Mark
> the downstream process is in the middle of a read call (waiting for
> more data to be put in the pipe), and it has already computed a pointer
> to the pipe's inode, and it's looping waiting for that inode to have
> data.
> So now I have to regroup and figure out how to deal with that. My most
> likely approach is to copy the inode data across
So I've had a good look at the pipe code, and it turns out that the simple
hack won't work, for two reasons.
First, the pipe on the _other_ side of the middle process is _also_ probably
in the middle of a write call, and so you can't snarf its inode out from
underneath it. (This whole problem reminds me of 'musical chairs' - I just
want the music to stop so everything will go quiet so I can move things
around! :-)
Second, if the process that wants to close down and do a splice is either the
start or end process, its neighbour is going to go from having a pipe to
having a plain file - and the pipe code knows the inode for a pipe has two
users, etc.
So I think it would be necessary to make non-trivial adjustments to the pipe
and file reading/writing code to make this work; either i) some sort of flag
bit to say 'you've been spliced, take appropriate action' which the pipe code
would have to check on being woken up, and then back out to let the main file
reading/writing code take another crack at it, or ii) perhaps some sort of
non-local goto to forcefully back out the call to readp()/writep(), back to
the start of the read/write sequence.
(Simply terminating the read/write call will not work, I think, because that
will often, AFAICT, return with 0 bytes transferred, which will look like an
EOF, etc; so the I/O will have to be restarted.)
I'm not sure I want to do the work to make this actually work - it's not
clear if anyone is really that interested? And it's not something that I'm
interested in having for my own use.
Anyway, none of this is in any way a problem with the fundamental service
model - it's purely kernel implementation issues.
Noel
Ok, this is cheating a bit but I was wondering if I could possibly
compile my unix v6 version of unirubik which has working file IO and
run it under unix v5.
At first I couldn't figure out how to send a binary from unix v6 to
unix v5 but I did some experimenting and found:
tp m1r unirubik
which would output unirubik to mag tape #1 and
tp m1x unirubik
which would input unirubik from mag tape #1.
I don't know what cc does exactly but I thought "well if it compiles
to PDP-11 machine code and it's statically linked it could work". And
it actually does work!
I still want to try to get unirubik to compile under Unix v5 cc but
it's interesting that a program that uses iolib functions can work
under unix v5.
Mark
> From: Norman Wilson <norman(a)oclsc.org>
> I believe that when sync(2) returned, all unflushed I/O had been queued
> to the device driver, but not necessarily finished
Yes. I have just looked at update() (the internal version of 'sync') again,
and it does three things: writes out super-blocks, any modified inodes, and
(finally) any cached disk blocks (in that order).
In all three cases, the code calls (either directly or indirectly) bwrite(),
the exact operation of which (wait for completion, or merely schedule the
operation) on any given buffer depends on the flag bits on that buffer.
At least one of the cases (the third), it sets the 'ASYNC' bit on the buffer,
i.e. it doesn't wait for the I/O to complete, merely schedules it. For the
first two, though, it looks like it probably waits.
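In outline, the logic is (a hedged paraphrase, not the verbatim source;
writesb() is a stand-in name for the inline super-block write):

	update()
	{
		register struct mount *mp;
		register struct inode *ip;

		for (mp = &mount[0]; mp < &mount[NMOUNT]; mp++)
			if (mp->m_bufp != NULL)
				writesb(mp);	/* bwrite(): waits for completion */

		for (ip = &inode[0]; ip < &inode[NINODE]; ip++)
			if ((ip->i_flag & ILOCK) == 0)
				iupdat(ip, time);	/* modified inodes: waits */

		bflush(NODEV);	/* dirty cache blocks go out B_ASYNC:
				 * scheduled, not awaited - hence the pause
				 * (or the second sync) before halting */
	}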
> so the second sync was just a time-filling no-op. If all the disks were
> in view, it probably sufficed just to watch them until all the lights
> ... had stopped blinking.
Yes. If the system is single-user, and you say 'sync', if you wait a bit for
the I/O to complete, any later syncs won't actually do anything.
I don't know of any programmatic way to make sure that all the disk I/O has
completed (although obviously one could be written); even the 'unmount' call
doesn't check to make sure all the I/O is completed (it just calls update()).
Watching the lights was as good as anything.
> I usually typed sync three or four times myself.
I usually just type it once, wait a moment, and then halt the machine. I've
never experienced disk corruption from so doing.
With modern ginormous disk caches, you might have to wait more than a moment,
but we're talking older machines here...
Noel
After a day and an evening of fighting with modern hardware,
the modern tangle that passes for UNIX nowadays, and modern
e-merchandising, I am too lazy to go look up the details.
But as I remember it, two syncs was indeed probably enough.
I believe that when sync(2) returned, all unflushed I/O had
been queued to the device driver, but not necessarily finished,
so the second sync was just a time-filling no-op. If all the
disks were in view, it probably sufficed just to watch them
until all the lights (little incandescent bulbs in those days,
not LEDs) had stopped blinking.
I usually typed sync three or four times myself. It gave me
a comfortable feeling (the opposite of a syncing feeling, I
suppose). I still occasionally type `sync' to the shell as
a sort of comfort word while thinking about what I'm going
to do next. Old habits die hard.
(sync; sync; sync)
Norman Wilson
Toronto ON
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Process A spawns process B, which reads stdin with buffering. B gets
> all it deserves from stdin and exits. What's left in the buffer,
> intended for A, is lost.
Ah. Got it.
The problem is not with buffering as a generic approach, the problem is that
you're trying to use a buffering package intended for simple,
straight-forward situations in one which doesn't fall into that category! :-)
Clearly, either B has to i) be able to put back data which was not for it
('ungets' as a system call), or ii) not read the data that's not for it - but
that may be incompatible with the concept of buffering the input (depending
on the syntax, and thus the ability to predict the approaching of the data B
wants, the only way to avoid the need for ungetc() might be to read a byte at
a time).
If B and its upstream (U) are written together, that could be another way to
deal with it: if U knows where B's syntatical boundaries are, it can give it
advance warning, and B could then use a non-trivial buffering package to do
the right thing. E.g. if U emits 'records' with a header giving the record
length X, B could tell its buffering package 'don't read ahead more than X
bytes until I tell you to go ahead with the next record'.
Of course, that's not a general solution; it only works with prepared U's.
Really, the only general, efficient way to deal with that situation that I can
see is to add 'ungets' to the operating system...
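The underlying sin is easy to reproduce on a modern system, for anyone who
wants to see it concretely (a minimal demo, assuming nothing beyond standard
stdio and fork):

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/wait.h>

	/* run as:  printf 'one\ntwo\n' | ./a.out
	 * The child (B) wants only line one, but stdio has already read
	 * the whole pipe into its buffer, so the parent (A) sees EOF. */
	int
	main(void)
	{
		char c, line[64];

		if (fork() == 0) {
			fgets(line, sizeof line, stdin);	/* slurps a block */
			_exit(0);				/* "two\n" dies here */
		}
		wait(NULL);
		while (read(0, &c, 1) == 1)	/* A wants the rest... */
			write(1, &c, 1);	/* ...but prints nothing */
		return 0;
	}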
Noel
>> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
>> The spec below isn't hard: just hook two buffer chains together and
>> twiddle a couple of file descriptors.
> In thinking about how to implement it, I was thinking that if there was
> any buffered data in an output pipe, that the process doing the
> splice() would wait (inside the splice() system call) on all the
> buffered data being read by the down-stream process.
> ...
> As a side-benefit, if one adopted that line, one wouldn't have to deal
> with the case (in the middle of the chain) of a pipe-pipe splice with
> buffered data in both pipes (where one would have to copy the data
> across); instead one could just use the exact same code for both cases
So a couple of days ago I suffered a Big Hack Attack and actually wrote the
code for splice() (for V6, of course :-).
It took me a day or so to get 'mostly' running. (I got tripped up by pointer
arithmetic issues in a number of places, because V6 declares just about
_everything_ to be "int *", so e.g. "ip + 1" doesn't produce the right value
for sleep() if ip is declared to be "struct inode *", which is what I did
automatically.)
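The trap itself, as a runnable aside (illustrative, not V6 code): pointer
arithmetic scales by the pointed-to type, so an expression written when
everything was "int *" names a different address once the pointer is
properly declared:

	#include <stdio.h>

	struct inode { int i_stuff[16]; };

	int
	main(void)
	{
		struct inode in, *ip = &in;
		int *p = (int *)&in;

		/* both are "+ 1", but they name different addresses */
		printf("ip + 1 = %p\n", (void *)(ip + 1)); /* + sizeof(struct inode) */
		printf("p  + 1 = %p\n", (void *)(p + 1));  /* + sizeof(int) */
		return 0;
	}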
My code only had one real bug so far (I forgot to mark the user's channels as
closed, which resulted in their file entries getting sub-zero usage counts
when the middle (departing) process exited).
However, now I have run across a real problem: I was just copying the system
file table entry for the middle process' input channel over to the entry for
the downstream's input (so further reads on its part would read the channel
the middle process used to be reading). Copying the data from one entry to
another meant I didn't have to go chase down file table pointers in the other
process' U structure, etc.
Alas, this simple approach doesn't work.
Using the approach I outlined (where the middle channel waits for the
downstream pipe to be empty, so it can discard it and do the splice by
copying the file table entries) doesn't work, because the downstream process
is in the middle of a read call (waiting for more data to be put in the
pipe), and it has already computed a pointer to the pipe's inode, and it's
looping waiting for that inode to have data.
So now I have to regroup and figure out how to deal with that. My most likely
approach is to copy the inode data across (so I don't have to go mess with the
downstream process to get it to go look at another inode), but i) I want to
think about it a bit first, and ii) I have to check that it won't screw
anything else up if I move the inode data to another slot.
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I was wondering if there might be a better way to do a shutdown on
> early unix.
Not really; I don't seem to recall our having one on the MIT V6 machine.
(We did add a 'reboot' system call so we could reboot the machine without
having to take the elevator up to the machine room [the console was on our
floor, and the reboot() call just jumped into the hardware bootstrap], but in
the source it doesn't even bother to do an update(). Well, I shouldn't say
that: I only have the source for the kernel, which doesn't; I don't at the
moment have access to the source for the rest of the system - although I do
have some full dump tapes, once I can work out how to read them. Anyway, so
maybe the user command for rebooting the system did a sync() first.)
I suppose you could set the switch register to 173030 and send a 'kill -1 1',
which IIRC kills of all shells except the one on the console, but somehow
I doubt you're running multi-user anyway... :-)
Noel
>> the cp command seems different from all other versions, I'm not sure I
>> understand it so I used the mv command instead which worked as expected.
>
> I'm intrigued; in what way is it different?
It seems that one must first cp a file to another file then do a mv to
actually put it into a different directory:
e.g. while in /usr/src
as ctr0.s
cp a.out ctr0.o
mv ctr0.o /usr/lib
...rather than trying to just "cp a.out /usr/lib/ctr0.o"
Mark
Yes, an evil necessary to get things going.
The very definition of original sin.
Doug
Larry McVoy wrote:
>>>> For stdio, of course, one would need fsplice(3), which must flush the
>>>> in-process buffers--penance for stdio's original sin of said buffering.
>>> Err, why is buffering data in the process a sin? (Or was this just a
>>> humorous aside?)
>> Process A spawns process B, which reads stdin with buffering. B gets
>> all it deserves from stdin and exits. What's left in the buffer,
>> intended for A, is lost. Sinful.
> It really depends on what you want. That buffering is a big win for
> some use cases. Even on today's processors reading a byte at a time via
> read(2) is costly. Like 5000x more costly on the laptop I'm typing on:
> Err, why is buffering data in the process a sin? (Or was this just a
> humorous aside?)
Process A spawns process B, which reads stdin with buffering. B gets
all it deserves from stdin and exits. What's left in the buffer,
intended for A, is lost. Sinful.
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> The spec below isn't hard: just hook two buffer chains together and
> twiddle a couple of file descriptors.
How amusing! I was about to send a message with almost the exact same
description - it even had the exact same syntax for the splice() call! A
couple of points from my thoughts which were not covered in your message:
In thinking about how to implement it, I was thinking that if there was any
buffered data in an output pipe, that the process doing the splice() would
wait (inside the splice() system call) on all the buffered data being read by
the down-stream process.
The main point of this is for the case where the up-stream is the head of the
chain (i.e. it's reading from a file), where one more or less has to wait,
because one will want to set the down-stream's file descriptor to point to
the file - but one can't really do that until all the buffered data was
consumed (else it will be lost - one can't exactly put it into the file :-).
As a side-benefit, if one adopted that line, one wouldn't have to deal with
the case (in the middle of the chain) of a pipe-pipe splice with buffered
data in both pipes (where one would have to copy the data across); instead
one could just use the exact same code for both cases, and in that case the
wait would be until the down-stream pipe can simply be discarded.
One thing I couldn't decide is what to do if the upstream is a pipe with
buffered data, and the downstream is a file - does one discard the buffered
data, write it to the file, abort the system call so the calling process can
deal with the buffered data, or what? Perhaps there could be a flag argument
to control the behaviour in such cases.
Speaking of which, I'm not sure I quite grokked this:
> If file descriptor fd0 is associated with a pipe and fd1 is not, then
> fd1 is updated to reflect the effect of buffered data for fd0, and the
> pipe's other descriptor is replaced with a duplicate of fd1.
But what happens to the data? Is it written to the file? (That's the
implication, but it's not stated directly.)
> The same statement holds when "fd0" is exchanged with "fd1" and "write"
> is exchanged with "read".
Ditto - what happens to the data? One can't simply stuff it into the input
file? I think the 'wait in the system call until it drains' approach is
better.
Also, it seemed to me that the right thing to do was to bash the entry in the
system-wide file table (i.e. not the specific pointers in the u area). That
would automatically pick up any children.
Finally, there are 'potential' security issues (I say 'potential' because I'm
not sure they're really problems). For instance, suppose that an end process
(i.e. reading/writing a file) has access to that file (e.g. because it
executed a SUID program), but its neighbour process does not. If the end
process wants to go away, should the neighbour process be allowed access to
the file? A 'simple' implementation would do so (since IIRC file permissions
are only checked at open time, not read/write time).
I don't pretend that this is a complete list of issues - just what I managed
to think up while considering the new call.
> For stdio, of course, one would need fsplice(3), which must flush the
> in-process buffers--penance for stdio's original sin of said buffering.
Err, why is buffering data in the process a sin? (Or was this just a
humorous aside?)
Noel
Larry wrote in separate emails
> If you really think that this could be done I'd suggest trying to
> write the man page for the call.
> I already claimed splice(2) back in 1998; the Linux guys did
> implement part of it ...
I began to write the following spec without knowing that Linux had
appropriated the name "splice" for a capability that was in DTSS
over 40 years ago under a more accurate name, "copy". The spec
below isn't hard: just hook two buffer chains together and twiddle
a couple of file descriptors. For stdio, of course, one would need
fsplice(3), which must flush the in-process buffers--penance for
stdio's original sin of said buffering.
Incidentally, the question is not abstract. I have code that takes
quadratic time because it grows a pipeline of length proportional
to the input, though only a bounded number of the processes are
usefully active at any one time; the rest are cats. Splicing out
the cats would make it linear. Linear approaches that don't need
splice are not nearly as clean.
Doug
SPLICE(2)
SYNOPSIS
int splice(int fd0, int fd1);
DESCRIPTION
Splice connects the source for a reading file descriptor fd0
directly to the destination for a writing file descriptor fd1
and closes both fd0 and fd1. Either the source or the destination
must be another process (via a pipe). Data buffered for fd0 at
the time of splicing follows such data for fd1. If both source
and destination are processes, they become connected by a pipe. If
the source (destination) is a process, the file descriptor
in that process becomes write-only (read-only).
If file descriptor fd0 is associated with a pipe and fd1 is not,
then fd1 is updated to reflect the effect of buffered data for fd0,
and the pipe's other descriptor is replaced with a duplicate of fd1.
The same statement holds when "fd0" is exchanged with "fd1" and
"write" is exchanged with "read".
Splice's effect on any file descriptor propagates to shared file
descriptors in all processes.
NOTES
One file must be a pipe lest the spliced data stream have no
controlling process. It might seem that a socket would suffice,
ceding control to a remote system; but that would allow the
uncontrolled connection file-socket-socket-file.
The provision about a file descriptor becoming either write-only or
read-only sidesteps complications due to read-write file descriptors.
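A sketch of how a middle process might excise itself with the proposed
call (hypothetical usage; splice() here is only the call specified above,
not an implemented one):

	/* stdin (fd 0) is the upstream pipe, stdout (fd 1) the downstream
	 * one; once no internally-buffered data remains, the neighbours
	 * can be joined directly and this process can exit. */
	if (splice(0, 1) < 0) {
		perror("splice");
		exit(1);
	}
	exit(0);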
> From: Dave Horsfall <dave(a)horsfall.org>
> crt0.s -> C Run Time (support). It jiggers the stack pointer in some
> obscure manner
It's the initial startup; it sets up the arguments into the canonical C form,
and then calls main(). (It does not do the initial stack frame, a canonical
call to CSV from inside main() will do that.) Here are the exact details:
On an exec(), once the exec() returns, the arguments are available at the
very top of memory: the arguments themselves are at the top, as a sequence of
zero-terminated byte strings. Below them is an array of word pointers to the
arguments, with a -1 in the last entry. (I.e. if there are N arguments, the
array of pointers has N+1 entries, with the last being -1.) Below that is a
word containing the size of that array (i.e. N+1).
The Stack Pointer register points to that count word; all other registers
(including the PC) are cleared.
All crt0.s does is move that argument count word down one location on the
stack, adjust the SP to point to it, and put a pointer to the argument
pointer table in the now-free word (between the argument count, and the first
element of the argument pointer table). Hence the canonical C main() argument
list of:
int argc;
char **argv;
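As a picture (reconstructed from the description above, not taken from the
actual source; addresses increase down the page, so the top of memory is at
the bottom):

	sp ->	count word (N+1)	/* moved down one slot by crt0.s */
		&argv[0]		/* the word crt0.s inserts */
		argv[0]			/* the original pointer table... */
		...
		argv[N-1]
		-1			/* ...with its -1 terminator */
		"arg0\0" "arg1\0" ...	/* the strings, at the very top */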
If/when main() returns, it takes the return value (passed in r0) and calls
exit() with it. (If using the stdio library, that exit() flushes the buffers
and closes all open files.) Should _that_ return, it does a 'sys exit'.
There are two variant forms: fcrt0.s arranges for the floating point
emulation to be loaded, and hooked up; mcrt0.s (much more complicated)
arranges for process monitoring to be done.
Noel
Hi folks,
Yes I have managed to compile Hello World on v1/v2.
the cp command seems different from all other versions, I'm not sure I
understand it so I used the mv command instead which worked as
expected.
I had to "as crt0.s" and put crt0.o in /usr/lib and then it compiled
without issue.
Is the kernel in /etc? I saw a core file in /etc that looked like it
would be about the right size. No unix file in the root directory
which surprised me.
At least I know what crt0.s does now. I guess a port of unirubik to
v1/v2 is in the cards (maybe).
Mark
Hi folks,
I'm interested in comparing notes with C programmers who have written
programs for Unix v5, v6 and v7.
Also I'm interested to know if there's anything similar to the scanf
function for unix v5. Stdio and iolib I know well enough to do file IO
but v5 predates iolib.
Back in 1988 I tried to write a universal rubik's cube program which I
called unirubik and after discovering TUHS I tried to backport it to
v7 (which was easy) and v6 (which was a bit harder) and now I'm trying
to backport it to v5. The v5 version currently doesn't have any
file IO capability as yet. Here are a few links to the various
versions:
http://www.maxhost.org/other/unirubik.c.v7
http://www.maxhost.org/other/unirubik.c.v6
http://www.maxhost.org/other/unirubik.c.v5
Also I've compiled the file utility from v6 in v5 and it seemed to
work fine. Once I got /dev/mt0 working for unix v5 (thanks to Warren's
help) I transferred the binary for the paging utility pg into it. This
version of pg I believe was from 1BSD.
I did some experimenting with math functions which can be seen here:
http://www.maxhost.org/other/math1.c
This will compile on unix v5.
My initial impression of Unix v5 was that it was a primitive and
almost unusable version of Unix but now that I understand it a bit
better it seems a fairly complete system. I'm a bit foggy on what the
memory limits are with v5 and v6. Unix v7 seems to run under simh
emulating a PDP-11/70 with 2 megabytes of ram (any more than that and
the kernel panics).
Also I'd be interested in seeing the source code for Ken Thompson's
APL interpreter for Unix v5. I know it does exist as it is referenced
in the Unix v5 manual. The earliest version I could find was dated Oct
1976 and I've written some notes on it here:
http://apl.maxhost.org/getting-apl-11-1976-to-work.txt
Ok, that's about it for now. Is there any chance of going further back
to v4, v3, v2 etc?
Mark
here's the e-mail that I sent on to Mark in the hope that it would
give him enough information to get his 5th Edition kernel working
with a tape device. He has also now joined the list. Welcome aboard, Mark.
Warren
----- Forwarded message from Warren Toomey <wkt(a)tuhs.org> -----
On Thu, Jul 10, 2014 at 05:56:04PM -0400, Mark Longridge wrote:
> There was no m40.s in v5 so I substituted mch.s for m40.s and that
> seemed to create a kernel and it booted but I can't access /dev/mt0.
Mark, glad to hear you were able to rebuild the kernel. I've never tried
on 5th Edition. Just reading through the 6th Edition docs, it says this:
-----
Next you must put in all of the special files in the
directory /dev using mknod‐VIII. Print the configuration
file c.c created above. This is the major device switch of
each device class (block and character). There is one line
for each device configured in your system and a null line
for place holding for those devices not configured. The
block special devices are put in first by executing the fol‐
lowing generic command for each disk or tape drive. (Note
that some of these files already exist in the directory
/dev. Examine each file with ls‐I with −l flag to see if
the file should be removed.)
/etc/mknod /dev/NAME b MAJOR MINOR
The NAME is selected from the following list:
c.c NAME device
rf rf0 RS fixed head disk
tc tap0 TU56 DECtape
rk rk0 RK03 RK05 moving head disk
tm mt0 TU10 TU16 magtape
rp rp0 RP moving head disk
hs hs0 RS03 RS04 fixed head disk
hp hp0 RP04 moving head disk
The major device number is selected by counting the line
number (from zero) of the device’s entry in the block con‐
figuration table. Thus the first entry in the table bdevsw
would be major device zero.
The minor device is the drive number, unit number or
partition as described under each device in section IV. The
last digit of the name (all given as 0 in the table above)
should reflect the minor device number. For tapes where the
unit is dial selectable, a special file may be made for each
possible selection.
The same goes for the character devices. Here the
names are arbitrary except that devices meant to be used for
teletype access should be named /dev/ttyX, where X is any
character. The files tty8 (console), mem, kmem, null are
already correctly configured.
The disk and magtape drivers provide a ‘raw’ interface
to the device which provides direct transmission between the
user’s core and the device and allows reading or writing
large records. The raw device counts as a character device,
and should have the name of the corresponding standard block
special file with ‘r’ prepended. Thus the raw magtape files
would be called /dev/rmtX.
When all the special files have been created, care
should be taken to change the access modes (chmod‐I) on
these files to appropriate values.
-----
Looking at the c.c generated, it has:
int (*bdevsw[])()
{
	&nulldev, &nulldev, &rkstrategy, &rktab,	/* 0 */
	&tmopen, &tmclose, &tmstrategy, &tmtab,		/* 1 */
	&nulldev, &tcclose, &tcstrategy, &tctab,	/* 2 */
	0
};
int (*cdevsw[])()
{
	&klopen, &klclose, &klread, &klwrite, &klsgtty,	/* 0 */
	&nulldev, &nulldev, &mmread, &mmwrite, &nodev,	/* 1 */
	&nulldev, &nulldev, &rkread, &rkwrite, &nodev,	/* 2 */
	&tmopen, &tmclose, &tmread, &tmwrite, &nodev,	/* 3 */
	&dcopen, &dcclose, &dcread, &dcwrite, &dcsgtty,	/* 4 */
	&lpopen, &lpclose, &nodev, &lpwrite, &nodev,	/* 5 */
	0
};
Following on from the docs, you should be able to make the /dev/mt0
device file by doing:
/etc/mknod /dev/mt0 b 1 0
And possibly also:
/etc/mknod /dev/rmt0 c 3 0
Cheers,
Warren
All, just received this from a fellow who isn't on the TUHS mail list (yet).
I've answered him about using mknod (after reading the 6e docs: we don't have
5e docs). I thought I'd forward the e-mail here as a record of an attempt to
rebuild the 5e kernel.
Cheers, Warren
----- Forwarded message from Mark -----
I hope you don't mind me asking you about compiling the unix v5
kernel. I haven't been able to find any documentation for it.
I tried this:
./mkconf
rk
tm
tc
dc
lp
ctrl-d
# as mch.s
# mv a.out mch.o
# cc -c c.c
# as l.s
# ld -x a.out mch.o c.o ../lib1 ../lib2
There was no m40.s in v5 so I substituted mch.s for m40.s and that
seemed to create a kernel and it booted but I can't access /dev/mt0.
Any pointers are appreciated. Thanks for all your work on early unix,
I thought it was very interesting.
Mark
----- End forwarded message -----
PS: I see I have over-generalized the problem. Doug's original message says "a
process could excise itself from a pipeline". So presumably the initiation
would come from process2 itself, and it would know when it had no
internally-buffered data.
So now we're back to the issue of 'either we need a system call to merge two
pipes into one, or the process has to hang around and turn itself into a cat'.
Noel
> From: Larry McVoy <lm(a)mcvoy.com>
> Making what you are talking about work is gonna be a mess of buffer
> management and it's going to be hard to design system calls that would
> work and still give you reasonable semantics on the pipe. Consider
> calls that want to know if there is data in the pipe
Oh, I didn't say it would work well, and cleanly! :-) I mean, taking one
element in an existing, operating, chain, and blowing it away, is almost
bound to cause problems.
My previous note was merely to say that the file descriptor/pipe
re-arrangement involved might be easier done with a system call - in fact, now
that I think about it, as someone has already sort of pointed out, without a
system call to merge the two pipes into one, you have to keep the middle
process around, and have it turn into a 'cat'.
Thinking out loud for a moment, though, along the lines you suggest....
Here's one problem - suppose process2 has read some data, but not yet
processed it and output it towards process3, when you go to do the splice.
How would anything outside the process (be it the OS, or the command
interpreter or whatever is initiating the splice) even detect that, much less
retrieve the data?
Even using a heuristic such as 'wait for process2 to try and read data, at
which point we can assume that it no longer has any internally buffered data,
and it's OK to do the splice' fails, because process2 may have decided it
didn't have a complete semantic unit in hand (e.g. a complete line), and
decided to go back and get the rest of the unit before outputting the
complete, processed semantic unit (i.e. including data it had previously
buffered internally).
And suppose the reads _never_ happen to coincide with the semantic units
being output; i.e. process2 will _always_ have some buffered data inside it,
until the whole chain starts to shut down with EOFs from the first stage?
In short, maybe this problem isn't solvable in the general case. In which
case I guess we're back to your "Every utility that you put in a pipeline
would have to be reworked".
Stages would have to have some way to say 'I am not now holding any buffered
data', and only when that state was true could they be spliced out. Or there
could be some signal defined which means 'go into "not holding any buffered
data" state'. At which point my proposed splice() system call might be some
use... :-)
Noel
> From: Larry McVoy <lm(a)mcvoy.com>
> Every utility that you put in a pipeline would have to be reworked to
> pass file descriptors around
Unless the whole operation is supported in the OS directly:
if (((pipe1 = process1->stdout) == process2->stdin) &&
    ((pipe2 = process2->stdout) == process3->stdin)) {
	prepend_buffer_contents(pipe1, pipe2);	/* pipe1's buffered data
						 * goes ahead of pipe2's */
	process1->stdout = pipe2;	/* process1 now writes downstream */
	kill_pipe(pipe1);
}
to be invoked from the chain's parent (e.g. shell).
(The code would probably want to do something with process2's stdin and
stdout, like close them; I wouldn't have the call kill process2 directly, that
could be left to the parent, except in the rare cases where it might have some
use for the spliced-out process.)
Noel
It's easy for a process to insert a new process into a
pipeline either upstream or downstream. Was there ever a
flavor of Unix in which a process could excise itself
from a pipeline without breaking the pipeline?
Doug
Armando P. Stettner:
If I've said it once, I've said it a million times: DECvax is NOT connected to kremvax. :)
====
Not any more, anyway.
Norman Wilson
Toronto ON
(once upon a time, research!norman)
Keywords: encoding, charset, latin, roman, accented, diacritical
Hi! Does anyone know which was the earliest Unix release to support the
ISO-8859-1 character set?
Hi there,
Sorry for an eventual Offtopic but this is strictly
related to our Computer Museum activity...
We launched a campaign to get some help for our upcoming
initiative, that will be the starting point for a big step
forward to a new Museum!
So please take time to read it, and please share everywhere to
anyone interested!
http://igg.me/at/insertcoin
love
asbesto
Knowing Dave and his long history with Unix, I suspect it was simply a typo. Just like vi commands are now hardwired into my fingers, I guess K&R is imprinted on his fingers.
Cheers, Warren
On 28 June 2014 17:12:14 AEST, Armando Stettner <aps(a)ieee.org> wrote:
>K&R usually refers to Brian Kernighan and Dennis Ritchie, writers of
>the (I think) first book on C. If there were two people to acknowledge
>for getting it right, it would be Ken and Dennis.
>
> aps
>
>
>Begin forwarded message:
>
>> From: Warren Toomey <wkt(a)tuhs.org>
>> Subject: [TUHS] 40 years of Unix CACM Article
>> Date: June 28, 2014 at 12:02:06 AM PDT
>> To: tuhs(a)tuhs.org
>>
>> Just in from an early Unix devotee.
>> Warren
>>
>> From: Dave Horsfall <dave(a)horsfall.org>
>> Sent: 28 June 2014 16:14:18 AEST
>> To: Auug Talk <talk(a)lists.auug.org.au>
>> Subject: [AUUG-Talk]: 40 years of Unix
>>
>> Next month sees the 40th anniversary of the article "The Unix
>Timesharing
>> System" published in Communications of the ACM; I was at UNSW at the
>time,
>> and we bought the first tape for subsequent distribution.
>>
>> At the time its only competitor was RSTS-11, and to a lesser extent
>> RSX-11D and RSX-11M (all DEC systems). It saw CP/M vanish, MS-DOS
>come
>> and go, NT tried to challenge it, and even Windows hasn't beaten it.
>>
>> It spawned Linux, which Billy Gates regarded as a serious threat
>("any box
>> running Linux is not running Windows") and even tried a smear
>campaign
>> against it.
>>
>> Unix was a design that "just worked" because K&R simply got it right,
>
>> right from the start.
>>
>> It'll never go away.
>>
>> -- Dave
>>
Just in from an early Unix devotee.
Warren
-------- Original Message --------
From: Dave Horsfall <dave(a)horsfall.org>
Sent: 28 June 2014 16:14:18 AEST
To: Auug Talk <talk(a)lists.auug.org.au>
Subject: [AUUG-Talk]: 40 years of Unix
Next month sees the 40th anniversary of the article "The Unix Timesharing
System" published in Communications of the ACM; I was at UNSW at the time,
and we bought the first tape for subsequent distribution.
At the time its only competitor was RSTS-11, and to a lesser extent
RSX-11D and RSX-11M (all DEC systems). It saw CP/M vanish, MS-DOS come
and go, NT tried to challenge it, and even Windows hasn't beaten it.
It spawned Linux, which Billy Gates regarded as a serious threat ("any box
running Linux is not running Windows") and even tried a smear campaign
against it.
Unix was a design that "just worked" because K&R simply got it right,
right from the start.
It'll never go away.
-- Dave
> Ahh! I wonder if they'll be making images and PDFs available?
If you want to check, my contact has been:
William Harnack <wharnack(a)computerhistory.org>
Doug
> > Have Xinu media to go with it? It's something I've been trying to track down. ;)
>
> Have you asked Doug? I've copied him
I just donated my extra copy of Xinu tapes and floppies to the computer museum
(along with first editions of the books).
> (Hey Doug, you should be on this list, all the long time unix nerds seem to be
> here, lots of fun with memory lane).
I'd be happy to join.
Doug
They pitched a PDP-10 for a similar reason--hardware to build a
bigger Unix on. When a small pot of end-of-year money appeared,
they took a PDP-11 instead--serendipitously, because university
folks started proving this elegant system on cheap hardware
in many projects in small labs, which they never could have
done had the system existed on a PDP-10 mainframe. While
upper management did not directly cause Unix to be built,
their decisions to abandon Multics and not to buy a PDP-10
were notable causes for its creation and spread.
Doug
> Date: Wed, 18 Jun 2014 07:43:51 -0700
> From: iking(a)killthewabbit.org
> To: tuhs(a)minnie.tuhs.org,Doug McIlroy <doug(a)cs.dartmouth.edu>
> Subject: Re: [TUHS] Happy birthday, core dumped
>
> Interesting - what's your source? It was also my understanding they used the -7 'because it was there' but that they had pitched for a PDP-10, which had TOPS-10. - Ian
>
> Sent from my android device.
>
> -----Original Message-----
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> To: tuhs(a)minnie.tuhs.org
> Sent: Wed, 18 Jun 2014 4:06 AM
> Subject: Re: [TUHS] Happy birthday, core dumped
>
>
> > It's always been a bit of a mystery to me why Thompson and Ritchie decided they needed to write a new executive - UNICS - rather than use DECsys.
>
> It was the other way around. They had conceived a clean, simple, yet
> powerful, operating system and needed a machine to build it on. A
> cast-off PDP-7 happened to be at hand.
>
> Doug
> From: "A. P. Garcia" <a.phillip.garcia(a)gmail.com>
> that's like asking george martin for his source regarding a beatles
> song...
Reminds me of the person on Wikipedia who tried to argue with me about the
'History of the Internet' article... :-)
> From: John Cowan <cowan(a)mercury.ccil.org>
>> scj(a)yaccman.com scripsit:
>> a Dec repair person who ran "preventive maintenance" on our disc that
>> wiped out the file system! His excuse was that Dec didn't support
>> "permanent storage" on the disc at the time...
> Next time, mount a scratch monkey.
It was probably a fixed-head disk (RS11 or RS04); can't exactly stick a
different pack in! :-) Probably the DEC OS's only used it for swapping or
something, since they were both relatively small - 512KB.
(Speaking of RS11's: the first PDP-11 I used - an 11/20 running RSTS - had a
grand total disk storage of _one_ RS11!)
And speaking of putting file systems on them: I recently wrote this command
for V6 called 'si' which allowed me (among many other interesting things) to
watch the contents of the disk buffer(s). It turns out that even with other
packs mounted, the buffer is almost always completely full of blocks from the
root device; it makes plain the value of having the root on a _really_
fast disk.
I don't know if that usage pattern is because /bin is there, or because pipes
get created on the root, or what. When I get up the energy I'll move /bin to
another drive (yeah, yeah, I know - good way to lose and create a system
won't boot, so I'll actually make a _copy_ of /bin and mount it _over_ the
original /bin - probably a host of interesting errors there, e.g. if a process
has the old /bin as its current dir), and see what the cache contents look
like then.
Noel
For reconstructing Unix history on a single repository [1], I'd need to
represent the branches, merges, and chronological sequence of the late
BSD releases (after 4.3). However, I've found on the internet some
conflicting and simplistic information, so I'd welcome your input on how
to straighten things up.
First, consider this widely reproduced BSD family tree [2]. It has
4.4BSD-Encumbered derive from a line that includes Net/1, which was
freely redistributable. Wouldn't it be clearer to create two branches,
one with distributions free of AT&T code (4.3 BSD Net/1, 4.3 BSD Net/2,
4.4 BSD Lite1, 4.4 BSD Lite2) and one with full distributions (4.4 BSD,
...)? On which side would Tahoe and Reno stand?
Also, the same tree [2] shows 4.4 BSD having as its ancestor 4.3 BSD
Net/2, whereas another tree depicted on Wikipedia [3] shows 4.4 BSD
and 4.3 BSD Net/2 having as their ancestor 4.3 BSD Reno. What's the
correct genealogy?
Finally, I have a conflict with release dates. Wikipedia gives the
following dates for Tahoe and Net/1 [4]:
4.3 BSD Tahoe June 1988
4.3 BSD Net/1 June 1989
However, looking at the time-stamps of the newest files available under the
corresponding directories in the CSRG CD-ROMs [5] I find the opposite order:
cd2/net.1/sendmail/src/util.c 1989-01-01 12:15:58
cd2/4.3tahoe/usr/src/sys/tahoevba/vx.c 1989-05-23 13:47:43
What's the actual time sequence, and what's the corresponding genealogy?
[1] https://github.com/dspinellis/unix-history-repo
[2]
http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-t…
[3] https://en.wikipedia.org/wiki/File:Unix_history-simple.svg
[4] https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[5] https://www.mckusick.com/csrg/
Many thanks,
Diomidis Spinellis
PS Thank you all for the help you've provided so far.
Interesting - what's your source? It was also my understanding they
used the -7 'because it was there' but that they had pitched for a
PDP-10, which had TOPS-10.
======
I think Doug's source is in the class `personal observation.'
He was there at the time; Ken and Dennis's department head, if
I've got it right.
Remember that Bell Labs had just disengaged itself from the
Multics project. The interest in a new OS sprang partly
from the desire to have a comfortable multi-user system
now that Multics was no longer available. That's why the
DEC operating systems of the time, which were (as I understand
it) simple single-user monitors, didn't fill the bill.
The character of the players matters too: remember that
Ken is the guy who one night sat down to write a Fortran
compiler because real systems have Fortran, and ended up
inventing B instead.
I've read that there was indeed a pitch to buy a PDP-10; that
there was some complicated plan to lower the effective cost;
and that upper management (not Doug) turned it down because
`Bell Labs doesn't do business that way.' I think I got that
from Dennis's retrospective paper, published in the 1984
all-UNIX issue of the Bell Labs Technical Journal, a must-read
(along with the late-1970s all-UNIX issue of BSTJ) for anyone
on this list.
> It's always been a bit of a mystery to me why Thompson and Ritchie decided they needed to write a new executive - UNICS - rather than use DECsys.
It was the other way around. They had conceived a clean, simple, yet
powerful, operating system and needed a machine to build it on. A
cast-off PDP-7 happened to be at hand.
Doug
Jay Forrester, who invented core memory, first described it in
a lab notebook 65 years ago today.
(Thanks to the Living Computer Museum, through whose Twitter
feed I learned this tidbit. It's a place--the real museum,
not just the Twitter feed--many on this list might enjoy:
among their aged-but-working computers are a Xerox Star and
a PDP-7.)
Norman Wilson
Toronto ON
Greetings,
I have a pdp11/84 and various peripherals available to whomever is willing to take them. I'm in La Crosse, Wisconsin, USA.
pdp11/84
RX01
TU80
2 cabs of 2ea RA81
Thanks,
Milo
--
Milo Velimirović
La Crosse, Wisconsin 54601 USA 43 48 48 N 91 13 53 W
Hello all, recent subscriber to this list...but some might
recognise me.
I'm currently fighting (and mostly succeeding) with getting a pure
4.3BSD-Tahoe install rolling in a VAX simulator (all my tape drives are
currently nonfunctional or I'd fire up a real II or III). Still in the
middle of compiling (I know there are easier ways...but I like the feeling
of doing it purely "from scratch"...feels more historically accurate).
Anyways, while waiting for the compile I ended up with a working 4.1c
installation and I began poking around its source tree (I also did a
little bit of poking at the CSRG discs). I stumbled upon the fact 4.1c
supports the DMC (which SIMH git now emulates!) and I see it's also still
in src/old on some versions of 4.3 (I wonder if it'll still build...).
Looking at the source is immensely confusing...does anyone have any
knowledge/notes from the time period/documentation that will help me with
my little experiment in archaic networking? I might just be tired...but
the berknet configuration doesn't make a whole lot of sense to me.
(There's also the fact my grasp of C is minimal).
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
Speaking of old publications, is there an on-line archive of old ;login:
or EUUG newsletters? I did one for the AUUG newsletters here:
http://minnie.tuhs.org/Archive/Documentation/AUUGN/
but it would be nice to have an archive for the U.S and Europe.
Cheers,
Warren
Doug McIlroy:
> Does anyone know why [Computing Systems] folded?
Arnold Skeeve:
ISTR that they simply ran out of content; they weren't getting
enough submissions to keep it going, and journal production isn't
an inexpensive undertaking.
======
That's what I remember too, though there may also have been
insufficient interest from the members. The front matter
in the last issue suggests that.
Computing Systems was published from Winter 1988 to Fall 1996.
(More years than I'd have guessed, even looking at the physical
journals on my shelf; it was a quarterly.) It would probably
not have lasted much longer no matter what, as the USENIX
community was likely in the forefront of putting papers online
on the World-Wide Web.
USENIX now makes all their conference papers available online,
free to anyone, except that only those registered for a
conference can read them before the conference actually happens.
That's not a bad substitute for a journal, I suppose.
Norman Wilson
Toronto ON
'skeeve' is my domain name. Robbins is my surname.
Sorry about that; up too late with too many balls
in the air (packing, finishing a tax return, listening
to our provincial election results).
At least I didn't further truncate it to skeev, as
Ken might have done.
UNIX/WORLD started in 1984, was renamed UnixWorld Magazine: Open
Systems Computing in 1991 and then UnixWorld's Open Computing in 1994,
and folded in 1995.
SunExpert started in 1989, was renamed Server/Workstation Expert in
1999, and folded in 2001. I always enjoyed Mike O'Brien's offbeat
"Ask Mr. Protocol" column.
> From: Dan Cross <crossd(a)gmail.com>
> There were several, starting I guess in the 80s mostly. The one I remember
> in particular was "Unix Review", but there were a few "journal" type
> magazines that also specialized in Unix-y things (e.g., ";login:" from
> USENIX; still published, I believe), and several associated with particular
> vendors: "SunExpert" was one, if I recall correctly.
>
> Occasionally, Unix and related things showed up in the "mainstream"
> consumer computer press of the time. I can remember in particular an issue
> of "PC Magazine" (I think June of 1993) that ran a lengthy couple of
> articles proving machines from Sun and SGI, in addition to version of Unix
> that ran on PCs (interestingly, Linux was omitted despite really starting
> to capture a lot of the imagination in that space; similarly I don't recall
> any mention of BSD).
>
> Some of these old magazines are definitely blasts from the past.
>
> - Dan C.
>
>
>
> On Wed, Jun 11, 2014 at 11:10 PM, Sergey Lapin <slapinid(a)gmail.com> wrote:
>
> > Hi, all!
> >
> > I've read recently published link to byte article and got an idea....
> > Was there a magazine related to UNIX systems in 70s-80s?
> > I had so much fun reading that Byte issue, even ads (especially ads!)
> > It is so fun...
> >
> >
> ;login: is alive and well.
For a few years Usenix even published a refereed technical
journal, "Computing Systems", quite different in tone from
;login: It had some nice content. Does anyone know why
it folded?
Doug
Dan Cross:
... there were a few "journal" type
magazines that also specialized in Unix-y things (e.g., ";login:" from
USENIX; still published, I believe) ...
======
;login: is alive and well. So is USENIX. It's no longer
the UNIX user's group it started as many decades ago; the
focus has broadened to advanced computing and systems
research, though the descendants of UNIX are still prominent
in those areas.
For an old-fashioned programmer/systems hack/systems generalist
like me, it's still quite a worthwhile journal and a worthwhile
organization. They've even been known to have a talk or two
about resurrecting old versions of UNIX.
I'm just off to the federation of medium-sized conferences
and workshops that has grown out of the former USENIX
Annual Technical Conference. I'm looking forward to it.
Norman Wilson
Toronto ON
Hi, all!
I've read the recently published link to the Byte article and got an idea....
Was there a magazine related to UNIX systems in the 70s-80s?
I had so much fun reading that Byte issue, even ads (especially ads!)
It is so fun...
Phil Garcia wrote:
I've always wondered about something
else, though: Were the original Unix authors annoyed when they learned that
some irascible young upstart named Richard Stallman was determined to make
a free Unix clone? Was he a gadfly, or just some kook you decided to
ignore? The fathers of Unix have been strangely silent on this topic for
many years. Maybe nobody's ever asked?
Gnu was always taken as a compliment. And of course the Unix clone
was pie in the sky until Linus came along. I wonder about the power
relationship underlying "GNU/Linux", as rms modestly styles it.
There are certain differences in taste between Unix and Gnu, vide
emacs and texinfo. (I grit my teeth every time a man page tells me,
"The full documentation for ___ is maintained as a Texinfo file.")
But all disagreement is swept away before the fact that the old
familiar environment is everywhere, from Cray to Apple, with rms
a very important contributor.
Doug
Does anyone have that running on anything? If so, I'd like a copy of the
lint libraries, probably /usr/lib/ll* or something like that.
It's not well known but I spent a pile of time creating lint libraries for
pure BSD, System V, etc, so you could lint your code against a target and
know if you let some non-standard stuff creep in.
I suppose I could fire up a Sun3 emulator like this and find them:
http://www.abiyo.net/retrocomputing/installingsunos411tosun3emulatedintme08…
If someone has a SunOS 4.1.1 box on the net and can give me a login (non-root)
that would be appreciated.
Thanks,
--lm
Just as I sent my previous posting with two references to fuzz-test
papers, I noted that the abstract of the second mentions two earlier
ones.
I've just tracked them down, and added them to various bibliographies.
Here are short references to them:
Fuzz Revisited: A Re-examination of the Reliability of UNIX
Utilities and Services
ftp://ftp.cs.wisc.edu/pub/techreports/1995/TR1268.pdf
An Empirical Study of the Robustness of MacOS Applications
Using Random Testing
http://dx.doi.org/10.1145/1228291.1228308
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Ken and Dennis and the other guys behind
> the earliest UNIX code were smart guys and good programmers,
> but they were far from perfect; and back in those days we
> were all a lot sloppier.
The observation that exploits may be able to parlay
mundane bugs into security holes was not a commonplace
back then--even in the Unix room. So input buffers were
often made "bigger than ever will be needed" and left
that way on the understanding that crashes are tolerable
on outlandish data. In an idle moment one day, Dennis fed
a huge line of input to most everything in /bin. To the
surprise of nobody, including Dennis, lots of programs
crashed. We WERE surprised a few years later, when a journal
published this fact as a research result. Does anybody
remember who published that deep new insight and/or where?
Doug
> From: norman(a)oclsc.org (Norman Wilson)
> SP&E published a paper by Don Knuth discussing all the many bugs found
> in TeX, including some statistical analysis.
> From: John Cowan <cowan(a)mercury.ccil.org>
> "The Errors of TeX" was an excellent article.
Thanks for the pointer; it sounds like a great paper, but alas the only
copies I could find online were behind paywalls.
> From: Clem Cole <clemc(a)ccc.com>
> btw. there is a v6 version of fsck floating around.
Yes, we had it at MIT.
> I'm wonder if I can find a readable copy.
As I've mentioned, I have this goal of putting the MIT Unix (the kernel was
basically PWB1, with a host of new applications) sources online.
I have recently discovered (in my basement!) two sets of full dump tapes
(1/2" magtape) of what I think are the whole filesystem, so if I can find a
way to get them read, we'll have the V6 fsck - and much more besides (such
as a TCP/IP for V6). So I think you may soon get your wish!
Noel
> From: "Ron Natalie" <ron(a)ronnatalie.com>
> The variable in question was a global static, 'ino' (the current inode
> number),
> Static is a much overloaded word in C, it's just a global variable.
Sorry; I was using 'static' in the general CS sense, not C-specific!
> in the version 7 version of icheck .. they appear to have fixed it.
Actually, they seem to have got all three bugs I saw (including the one I
hadn't actually experienced yet, which would cause a segmentation violation).
> From: Tim Newsham <tim.newsham(a)gmail.com>
> There are bugs to be found .. Here are some more (security related, as
> thats my inclination):
> ...
> http://minnie.tuhs.org/pipermail/unix-jun72/2008-May/000126.html
Fascinating mailing list! Thanks for the pointer.
Noel
A. P. Garcia <a.phillip.garcia(a)gmail.com> wrote:
> Were the original Unix authors annoyed when they learned that
> some irascible young upstart named Richard Stallman was determined to make
> a free Unix clone?
A deeper, more profound question would be: how did these original Unix
authors feel about their employer owning the rights to their creation?
Did they feel any guilt at all for having had to sign over all rights
in exchange for their paychecks?
Did Dennis and/or Ken personally wish their creation were free to the
world, public domain, or were they personally in agreement with the
licensing policies of their employer? I argue that this question is
far more important than how they felt about RMS (if they cared at all).
Ronald Natalie <ron(a)ronnatalie.com> wrote:
> [RMS] If you read his earlier manifesto rants he hated UNIX
> with a passion.
> Holding out the TOPS operating systems as the be-all and end-all of user
> interface.
I wish more people would point out this aspect of RMS and GNU. While
I wholeheartedly agree with Richard on the general philosophy of free
software, i.e., the *ethics* part and the Four Freedoms, when it comes
to GNU as a specific OS, in technical terms, I've always disliked
everything about it. I love UNIX, and as Ron pointed it out like few
people do, GNU was fundamentally born out of hatred for the thing I
love.
SF
So it turns out the 'dcheck' distributed with V6 has two (well, three, but
the third one was only a potential problem for me) bugs in it.
The first was a fence-post error on a table clearing operation; it could
cause the entry for the last inode of the disk in the constructed table of
directory entry counts to start with a non-zero count when a second disk was
scanned. However, it was only triggered in very specific circumstances:
- A larger disk was listed before a smaller one (either in the command line,
or compiled in)
- The inode on the larger disk corresponding to the last inode on the smaller
one was in use
I can understand how they never ran across this one.
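The shape of the thing, as a hypothetical reconstruction (invented names;
not the actual dcheck source):

	/* clear the per-inode directory-entry count table for this disk */
	for(i = 0; i < nfiles-1; i++)	/* fence-post: should run to
					 * i < nfiles; as written it skips
					 * the last entry */
		ecount[i] = 0;

leaving the last slot holding whatever the previous (larger) disk put there.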
The other one, however, which was an un-initialized variable, should have
bitten them anytime they had more than one disk listed! It caused the
constructed table of directory entry counts to be partially or wholly
(depending on the size of the two disks) blank in all disks after the first
one, causing numerous (bogus) error reports.
(It was also amusing to find an un-used procedure in the source; it looks
like dcheck was written starting with the code for 'icheck' - which explains
the second bug; since the logic in icheck is subtly different, that variable
_is_ set properly in icheck.)
How this bug never bit them I cannot understand - unless they saw it, and
couldn't be bothered to find and fix it!
To me, it's completely amazing to find such a serious bug in such a critical
piece of widely-distributed code! A lesson for archaeologists...
Anyway, a fixed version is here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/ucmd/dcheck.c
if anyone cares/needs it.
Noel
Larry McVoy scripsit:
> I love Rob Pike, he's spot on on a lot of stuff. I'm a big fan of
> "if you think you need threads then your processes are too fat".
Oh, he's a brilliant fellow. I don't know him personally, but I know
people who do, and I don't think I'd love him if I knew him. Humanity has
always found it useful to keep its (demi)gods at arm's length at least.
--
John Cowan http://www.ccil.org/~cowan cowan(a)ccil.org
Barry thirteen gules and argent on a canton azure fifty mullets of five
points of the second, six, five, six, five, six, five, six, five, and six.
--blazoning the U.S. flag
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
> the second (the un-initialized variable) should have happened every
> time.
OK, so I was wrong! The variable in question was a global static, 'ino' (the
current inode number), so the answer isn't something simple like 'it was an
auto that happened to be cleared for each disk'. But now that I look closely,
I think I see a way it might have worked.
'dcheck' is a two-pass per disk thing: it begins each disk by clearing its
'inode link count' table; then the first pass does a pass over all the inodes,
and for ones that are directories, increments counts for all the entries; the
second pass re-scans all the inodes, and makes sure that the link count in the
inode itself matches the computed count in the table.
'ino' was cleared before the _second_ pass, but not the _first_. So it was
zero for the first pass of the first disk, but non-zero for the first pass on
the second disk.
This looks like the kind of bug that should almost always be fatal, right?
That's what I thought at first... (and I tried the original version on one of
my machines to make sure it did fail). But...
The loop in each pass has two index variables, one of which is 'ino', which it
compares with the maximum inode number for that disk (per the super-block),
and bails if it reaches the max:
for(i=0; ino<nfiles; i =+ NIBLK)
If the first disk is _larger_ than the second, the first pass will never
execute at all for the second disk (producing errors).
However, if the _second_ is larger, then the second disk's first pass will in
fact examine the starting (nfiles_second - nfiles_first) inodes of the
second disk to see if they are directories (and if so, count their links).
So if the last nfiles_first inodes of the second disk are empty (which is
often the case with large drives - I had modified 'df' to count the free
inodes as well as disk blocks, and after doing so I noticed that Unix seems to
be quite generous in its default inode allocations), it will in fact work!
The fact that 'ino' is wrong all throughout the first pass of the second disk
(it counts up from nfiles_first to nfiles_second) turns out to be
harmless, because the first pass never uses the current inode number, it only
looks at the inode numbers in the directories.
Note that with two disks of _equal size_, it fails. Only if the second is
larger does it work! (And this generalizes out to N disks - as long as each
one is enough larger than the one before!) So for the config they were
running (rk2, dp0) it probably did in fact work!
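In outline, the two-pass structure described above (a reconstruction, not
the verbatim source; only the quoted loop header is real):

	for each disk:
		clear the directory-entry count table;
		/* pass 1: 'ino' is NOT reset here - the bug; it still
		 * holds whatever it reached on the previous disk */
		for(i=0; ino<nfiles; i =+ NIBLK)
			count entries in each directory inode;
		ino = 0;	/* the only reset, before pass 2 */
		for(i=0; ino<nfiles; i =+ NIBLK)
			check each inode's link count against the table;

The one-line fix is to zero 'ino' before the first pass as well.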
Noel
Noel Chiappa:
To me, it's completely amazing to find such a serious bug in such a critical
piece of widely-distributd code! A lesson for archaeologists...
======
To me it's not surprising at all.
On one hand, current examples of widely-distributed critical
code containing serious flaws are legion. What, after all,
were the Heartbleed and OS X goto fail; bugs? What is every
version of Internet Explorer?
On the other hand, Ken and Dennis and the other guys behind
the earliest UNIX code were smart guys and good programmers,
but they were far from perfect; and back in those days we
were all a lot sloppier.
So surprising? No. Interesting? Certainly. All bugs are
interesting.
(To me, anyway. Back in the 1980s, when I was at Bell Labs,
SP&E published a paper by Don Knuth discussing all the many
bugs found in TeX, including some statistical analysis. I
thought it fascinating and revealing and think reading it
made me a better programmer. Rob Pike thought it was terribly
boring and shouldn't have been published. Decidedly different
viewpoints.)
Norman Wilson
Toronto ON
> From: Ronald Natalie <ron(a)ronnatalie.com>
> If I understand what you are saying, it only occurs when you run dcheck
> with multiple volumes at one time?
Right, _both_ bugs have that characteristic. But the first one (the
fence-post) only happens in very particular circumstances; the second (the
un-initialized variable) should have happened every time.
> From: norman(a)oclsc.org (Norman Wilson)
> To me it's not surprising at all.
> On one hand, current examples of widely-distributed critical code
> containing serious flaws are legion.
What astonished me was not that there was a bug (which I can easily believe),
but that it was one that would have happened _every time they ran it_.
'dcheck' has this list of disks compiled into it. (Oh, BTW, my fixed version
now reads a file, /etc/disks; I am running a number of simulated machines,
and the compiled-in table was a pain.)
So I would have thought they must have at least tried that mode of operation
once? And running it that way just once should have shown the bug. Or did
they try it, see the bug, and 'dealt' with it by just never running it that
way?
Noel
> From: asbesto <asbesto(a)freaknet.org>
> We have about 40 disks, with RT-11 on them
Ah. You should definitely try Unix - a much more pleasant computing/etc
environment!
Although without a video editor... although I hope to have one available
'soon', from the MIT V6+ system (I think I have found some backup tapes from
it).
> This PDP-11/34 was used in medical CAT equipment
Ah, so it probably has the floating point, then. If so, you should be able to
use the Shoppa V6 Unix disk as it is, then - that has a Unix on it which will
work on an 11/23 (which doesn't have the switch register that V6 normally
requires).
But if not, let me know, and I can provide a V6 Unix for it (I already have
the tweaked version running on a /23 in the simulator).
Noel
PS: For those who downloaded the 'fixed' ctime.c (if anyone :-), it turns out
there was a bug in my fix - in some cases, one variable wasn't initialized
properly. There's a fixed one up there now.
> From: asbesto <asbesto(a)freaknet.org>
> Just in these days we restored a PDP-11/23PLUS here at our Museum! :)
> ...
> CPU is working
That is good to hear! You all seem to have been very resourceful in making
the power supply for it!
> and we're trying to boot from a RL02 unit :)
Is your RL02 drive and RLV11 controller all working? Here are some
interesting pages:
http://www.retrocmp.com/pdp-11/pdp-1144/my-pdp-1144/rl02-disk-trouble
http://www.retrocmp.com/pdp-11/pdp-1144/my-pdp-1144/more-on-rl01rl02
from someone in Germany about getting their RL11 and RL02 to work.
Also, when you say "boot from an RL02", what are you trying to boot? Do you
have an RL02 pack with a working system on it? If so, what kind - a Unix
of some sort, or some DEC operating system?
> From: SPC <spedraja(a)gmail.com>
> I'll keep a reference of this message and try it as soon as possible...
Speaking of getting Unix to run on an 11/23 with an RL02... I just realized
that the hard part of getting a Unix running, for you, will not be getting V6
to run on a machine without a switch register (which is actually pretty easy
- I have worked out a way to do it that involves changing one line in
param.h, and adding two lines of code to main.c).
The hard part is going getting the bits onto the disk! If all you have is an
RL02, you are going to have to load bits into the computer over a serial line.
WKT has done this for V7 Unix:
http://www.tuhs.org/Archive/PDP-11/Tools/Tapes/Vtserver/
but V7 really wants a machine with split I/D (which the /23 does not have). I
guess V7 'sort of' works on a machine without I/D, but I'm not a V7 expert,
so I can't say for sure.
It would not be hard to do something similar to the VTServer thing for V6,
though. If you would like to go this way, let me know, I would be very
interested in helping with this.
Also, do you only have one working RL02 drive, or more than one? If you only
have one, you will not be able to do backups (unless you have something else
connected to the machine, e.g. some sort of tape drive, or something).
Noel
> From: SPC <spedraja(a)gmail.com>
> I'll keep a reference of this message and try it as soon as possible...
No rush! Take your time...
> the disruptive part (in terms of time) here is bringing both
> the PDP-11/23-PLUS and RL02 up to date.
My apologies, I just now noticed that you have an 11/23-PLUS (it is slightly
different from a plain 11/23).
I am not very familiar with the 11/23-PLUS (I never worked with one), but from
documentation I just dug out, it seems that they normally come with the MMU
chip, so we don't need to worry about that. However, the FPP is not standard,
so that is still an issue for bringing up Unix.
In fact, there are two different FPP options for the 11/23-PLUS (and,
actually, for the 11/23 as well): one is the KEF-11AA chip which goes on the
CPU card (on the 11/23-PLUS, in the middle large DIP holder), and the other is
something called the FPF-11 card, which is basically hardware floating point
(the KEF-11A is just microcode), for people who are doing serious number
crunching. It's a quad-size card which has a cable with a DIP header on the
end which plugs into the same DIP holder on the CPU card as the KEF-11A. They
look the same to software; one is just faster than the other.
Anyway, if you don't have either one, we'll have to produce a new Unix
load for you (not a big problem, if it is needed).
Noel
Does anyone know if the source for an early PDP-11 version of MERT is
available anywhere?
(For those who aren't familiar with MERT, it was a micro-kernel [as we would
name it now] which provided message-passing and [potentially shared] memory
segments, intended for real-time applications; it supported several levels of
protection, using the 'Supervisor' mode available in the 11/45 and 11/70. One
set of supervisor processes provided a Unix environment; the combination was
called UNIX/RT - hence my asking about it here.)
Thanks!
Noel
>> I got one PDP-11/23-PLUS without any kind of disk (by now, I got one
>> RL12 board plus one RL02 drive pending cleaning and arrangement)...
>> I guess it could be possible to run V6 on this machine. Is there any
>> kind of adaptation of this Unix version (or whatever) to run under it?
> IIRC the README page for that set of disk images indicates that in fact
> they originally came off an 11/23, so they should run fine on yours.
So I was idly looking through main.c for the Shoppa Unix (because it printed
some unusual messages when it started, and I wanted to see that code), and I
noticed it had some fancy code for dealing with the clock, and that tickled a
very dim memory that LSI-11's had some unusual clock thing. So I decided I
had better check up on that...
I got out an LSI-11 manual, and it looked like the 23 should work, even for
the 'vanilla' V6 from the Bell distro. But I decided I had better check it to
be sure, so I fired up the simulator, mounted a Bell disk, set the cpu type
to '23', and booted 'rkunix'. Which promptly halted!
After a bit of digging, it turned out that the problem is that the 11/23
doesn't have a switch register! It hit a kernel NXM trying to touch it -
and then another trying to read it in the putchar() routine trying to do a
panic(), at which point it died a horrible death.
So I added a SR (you can create all sorts of bizarre hybrids like that with
Ersatz-11, like 11/40's with 11/45 type floating point :-), and then it
booted fine. The clock even worked!
So you will have to use the Shoppa disk to boot (but see below), or we'll
have to spin you a special vanilla V6 Unix that doesn't try to touch the SR -
that shouldn't be much work, I only found two place in the code that touch it.
I did try the Shoppa 'unix', and it booted fine on an 11/23.
Two things to check for, though: first, your 11/23 _has_ to have the MMU chip
(that's the large DIP package with one chip on it nearest the edge of the
card), so if yours looks like this:
http://www.psych.usyd.edu.au/pdp-11/Images/23.jpeg
you're OK. Without the MMU chip, most variants of Unix will not run on the 23
(although there's something called MiniUnix, IIRC, which runs on an LSI-11,
which would probably run on a /23 without an MMU).
Here's the part that might be a problem: to run any of the Unixes on the
Shoppa disk, you also have to have the FPP chip (that's the second large
DIP package with two chips on it - the image above does not include that
chip, so if yours looks like that, you have a minor problem, and I will have
to build you a Unix or something).
All of the Unixes on the Shoppa disk have to have the FPP, except one - and
that one wants an RX floppy as the root/swap device! The others will all
crash (I tried one, to make sure) if you try and boot them on an 11/23
without the FPP.
I could try patching the binary of the one that doesn't expect the FPP so
that it uses the RL as the root; or i) build you a vanilla V6 for a 23
(above); or ii) figure out how to build systems on the Shoppa disk, and
build you a Unix there which both uses the RL as the root/swap and does not
expect to have the FPP.
But let's first find out exactly what you have...
Noel
> From: SPC <spedraja(a)gmail.com>
> I got one PDP-11/23-PLUS without any kind of disk (by now, I got one
> RL12 board plus one RL02 drive pending of cleaning and arrangement)...
> I guess if could be possible to run V6 in this machine. There's any
> kind of adaptation of this Unix version (or whatever) to run under ?
As I mentioned in a previous message on this thread, when I took that root
pack image from the Shoppa group, I could get it to boot to Unix right off.
All it needs is a single RL02 drive (RL/0) (and the console terminal, of
course).
I looked at the 'unix' on it, and it's for an 11/40 type machine (which
includes 11/23's); IIRC the README page for that set of disk images indicates
that in fact they originally came off an 11/23, so they should run fine on
yours.
That Unix has a couple of other devices built into it (looks like an RX and
some sort of A-D), but as long as you don't try and touch them, they will not
be an issue.
Let me know if you need any help getting it up (once you have a working RL02).
Noel
> From: John Cowan <cowan(a)mercury.ccil.org>
> Well, provided the compiler is honest, contra [Ken].
A thought on this:
The C compiler actually produces assembler, which can be (fairly easily)
visually audited; yes, yes, I know about disassembly, but trust me, having
done some recently (the RL bootstrap(s)), disassembled code is a lot harder
to grok!
So, really, to find the Thompson hack, we'd have to edit the binaries of the
assembler!
For real grins, we could write a program to convert .s format assembler to
.mac syntax, run the results through Macro-11, and link it with the other
linker... :-)
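If anyone actually wants to start on that, here is a deliberately toy first
pass - it swaps only the comment delimiter (Unix as uses '/', Macro-11 uses
';'); a real converter would also have to rewrite directives, label syntax,
and expressions:

    #include <stdio.h>

    /* Toy .s -> .mac pass: swap the comment delimiter and nothing else.
     * A sketch of the idea, not a working converter. */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(c == '/' ? ';' : c);
        return 0;
    }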
Also, I found out what's going on here:
> What was weird was that in the new one, the routine csv is one word
> shorter (and so is csv.o). So now I don't understand what made them the
> same sizes!? The new ones should have been one word shorter!? Still
> poking into this...
The C compiler is linked with the -n flag, which produces pure code. What
the linker documentation doesn't say (and I never realized this 'back in the
day') is that when this option is used, it rounds up the size of the text
segment to the next click boundary (0100 bytes).
So, in c2 (which is what I was looking at), the last instruction is at
015446, _etext is at 015450, but if you look at the executable header, it
lists a text size of 015500 - i.e. 030 more bytes. And indeed there are 014
words of '0' in the executable file before the data starts.
And if you link c2 _without_ the -n flag, it shows 015450 in the header as
the text size.
So that's why the two versions of all the C compiler phases were the same
size (as files); it rounded up to the same place in both, hiding the one-word
difference in text size.
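A quick arithmetic check of that, assuming only that -n pads the text
segment up to the next multiple of 0100 bytes:

    #include <stdio.h>

    /* Click-rounding check: 015450 rounded up to a 0100-byte boundary
     * is 015500, a pad of 030 bytes (014 words), as observed above. */
    int main(void)
    {
        unsigned etext = 015450;                  /* _etext in c2 */
        unsigned padded = (etext + 077) & ~077u;  /* round up to next click */

        printf("etext 0%o -> header text size 0%o (0%o bytes of padding)\n",
               etext, padded, padded - etext);
        return 0;
    }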
Noel
Erg! Discussion of file name length brought back some chilling memories
of very early Unix, when file names were at most 6 characters long.
Longer names were accepted but truncated at 6 characters. So you could
edit ABCDE.c, store it, read it and edit it again, but the file system
knew it as "ABCDE." So when you compiled the program, the compiler
produced ABCDE.o, which overwrote the source code!
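A little demo of that collision, in modern C just to illustrate the
arithmetic of the anecdote - with names truncated at 6 characters, source
and object collapse to the same directory entry:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *names[] = { "ABCDE.c", "ABCDE.o" };
        char entry[7];
        int i;

        for (i = 0; i < 2; i++) {
            strncpy(entry, names[i], 6);   /* the file system kept 6 chars */
            entry[6] = '\0';
            printf("%-8s -> \"%s\"\n", names[i], entry);  /* both: ABCDE. */
        }
        return 0;
    }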
> To boot up the root pack (I don't think I did this at any point; I've
> always mounted it as a subsidiary drive)
> ...
> The disk has a working RL bootstrap in block 0, it should boot OK.
So I recently had to reboot my machine, and I took the opportunity to try
this; it worked right off, booted 'unix' OK. (I didn't try any of the other
Unixes in the root directory.) I had only that pack mounted on DL0, nothing
else.
> So, just for grins, because I was curious (after your question), I did
> try recompiling the C compiler, to see what I'd get.
> What I got were three files (c0, c1 and c2) which were _the exact same
> size_ (down to the byte) as the binaries on the V6 Research distro, but
> had a number of differences when compared with 'cmp -l'. Odd!
> ...
> I'll take a gander tomorrow and try and work it out.
So, this turned out to be because I had replaced the csv.o in libc.a with a
new one, because the standard V6 one doesn't work with long returns (which
use R1 as well as R0, and the V6 cret bashed R1). I put the old csv.o back,
and re-linked them, and this time c? all turned out identical.
So the source in the distro really is the source for the running compiler on
it.
What was weird was that in the new one, the routine csv is one word shorter
(and so is csv.o). So now I don't understand what made them the same sizes!?
The new ones should have been one word shorter!? Still poking into this...
I understand most of the differences between the versions of c? with the old
and new csv.o; in all the jumps to cret, the indirect word in the instruction
was off by two (because cret was one word lower because csv was one word
shorter); that, along with different contents in csv.o, created most of the
differences.
Why one word shorter? Because in csv:
tst -(sp) / creates a temporary on top of the stack
jmp (r0)
had been replaced with:
jsr pc,(r0) / pushes pc, which serves as the temporary, and jumps
(saving one instruction, and making it one word smaller).
Noel
The web page mentions files-11 which is ODS-1.
Technically (it is all coming back to me now):
FILES-11 is a family of file systems that started in RSX-11
(or perhaps before but that's the oldest instance I know).
ODS-1 is really FILES-11 ODS-1; ODS is `On-Disk Structure.'
RSX-11 used ODS-1. VMS used ODS-2. I'm not sure of all
the differences offhand, but they were substantial enough
that we ended up writing two different programs to fetch
files to UNIX from RSX and VMS volumes (we had the latter
to deal with too). Certainly the directory entries were
different between the two: ODS-1 used RADIX-50-encoded
file names with at most six characters plus an at-most-three-
character `extension' (a term which newbies sometimes
improperly import into UNIX as well); I forget the exact
filename rules in VMS, but filenames certainly could be
longer than six characters.
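As an aside, the RADIX-50 packing is simple enough to show in a few lines.
Here is a minimal decoder, with the caveat that the exact 40-character
alphabet (in particular the 30th code point, '%' below) varied a little by
convention:

    #include <stdio.h>

    /* Minimal RADIX-50 decoder: each 16-bit word packs three characters
     * as c0*050*050 + c1*050 + c2 over a 40-character alphabet. */
    static const char r50[] = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789";

    static void unrad50(unsigned w, char out[4])
    {
        out[0] = r50[(w / (050 * 050)) % 050];
        out[1] = r50[(w / 050) % 050];
        out[2] = r50[w % 050];
        out[3] = '\0';
    }

    int main(void)
    {
        char s[4];
        unrad50(075273, s);   /* decodes to "SYS" */
        printf("%s\n", s);
        return 0;
    }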
I've found the early-1980s programs I remembered. There
were two, getrsx.c and getvms.c: one for
each file-system format. They are surely full of ancient
sloppiness that won't compile or won't work right under
a modern C compiler, and they make assumptions about byte
order. I'll spend some time in the next few days going over
them and see if I can quickly get something workable.
A footnote as to their origin: in the world where we wrote
these programs, we had not only multiple systems, but
shared disk drives. The disk drives themselves were
dual-ported; the controllers we used could connect to
multiple hosts as well. Each system had its own dedicated
disk drives, but the UNIX systems could also see the drives
belonging to the RSX and VMS systems; hence the file-fetching
programs, since this was well before the sort of networking
we take for granted these days.
On the other hand, we had several UNIX systems which spoke
uucp to one another, and that was occasionally used for
large file transfers. To speed that up, I taught uucico
a new protocol, whereby control information still went over
a serial line, but data blocks were transferred over a
chunk of raw shared disk (with appropriate locks, of course).
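In outline, a hedged sketch of the idea (not Norman's actual uucico code;
the block size and control-message format here are made up):

    #include <stdio.h>
    #include <unistd.h>

    /* Sender side of the shared-disk trick: write a data block to an
     * agreed region of the raw shared disk, then tell the receiver over
     * the serial line which block to pick up. */
    int send_block(int diskfd, int serialfd, long blkno,
                   const char *buf, int len)
    {
        char note[32];
        int n;

        if (lseek(diskfd, blkno * 512L, SEEK_SET) < 0 ||
            write(diskfd, buf, len) != len)
            return -1;
        n = snprintf(note, sizeof note, "BLK %ld %d\n", blkno, len);
        return write(serialfd, note, n) == n ? 0 : -1;
    }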
It was a simpler world back then, but that made it a lot more fun.
Norman Wilson
Toronto ON
This is also an RSX pack (I think), but when I tried to boot it, it said
"THIS VOLUME DOES NOT CONTAIN A HARDWARE BOOTABLE SYSTEM", and since I don't
know how to mount disks under RSX-11 I left it at that.
There exists somewhere a UNIX program that reads an ODS-1 file-system
image and produces directory listings and extracts named files.
I know it exists because it was written by a friend (and probably hacked
around a little by me) almost 35 years ago, when we both worked in a
place that had some UNIX and some RSX-11. That means `somewhere'
probably includes some place in my old files. I'll see if I can
dig it out, upgrade it to work cleanly with modern C (it's just possible
I have an ODS-1 file system image lying around somewhere too), and
post it either somewhere on the web or just to the list.
Norman Wilson
Toronto ON
> From: Larry McVoy <lm(a)bitmover.com>
> have you gotten to a point where you can rebuild the world and install
> your newly built stuff?
Well, I haven't tried to do that (it's not something that I'm that interested
in), but it _should_ be possible, since the 'vanilla' V6 distribution does
include the source for pretty much everything (including the C compiler,
assembler, loader, etc).
(This does not include the stuff from the Shoppa disk, like the new C
compiler, where I don't have the source. [The PWB distribution, which
includes a C compiler from 1977, is probably pretty close. Looking into
the PWB stuff is one of my next projects; we have 17 different versions
of that stuff, and I'd like to see what the differences among them are,
and maybe create a 'canonical' PWB.]
Also, per the 'Improvements' page, I have source for the Standard I/O
Library, but I'm using the binary library from the Shoppa disk, which may or
may not correspond to that source.)
Noel
Found in:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=1BSD/s6/man.c
this minor gem:
execl("/bin/ssp", "ssp", 0);
execl("/usr/bin/ssp", "ssp", 0);
printf("Can't find ssp!\n");
execl("/bin/cat", "cat", 0);
printf("or cat - gott in himmel!\n");
exit(1);
(Each execl() returns only if it fails, so the code just falls through: if
ssp can't be found in either place, the page gets piped through cat instead.)
Not as good as "hodie natus" (Google it if you don't know of it - it's a
classic), but mildly amusing.
Noel
> From: Gregg Levine <gregg.drwho8(a)gmail.com>
> By Shoppa disk do you mean, Shoppe disk
The root pack (linked to from my page):
http://www.tuhs.org/Archive/PDP-11/Distributions/other/Tim_Shoppa_v6/unix_v…
It contains lots of V6 goodies - but not, alas, source for most of them. (E.g.
I had to disassemble the RL bootstrap(s) from it.)
> I've been trying to figure out how to attach them to the E11 one to
> bring the whole thing up.
The others are junk (see below). To boot up the root pack (I don't think I
did this at any point; I've always mounted it as a subsidiary drive) you'd
need to say:
mount dl0: unix_v6.rl02 /RL02
and then:
boot dl0:
The disk has a working RL bootstrap in block 0, it should boot OK. Then you
get to take your pick of which unix to boot! There are 7 to choose from:
77 -rwxrwxr-x 1 root 38598 Aug 22 1984 oldunix
77 -rwxrwxr-x 1 root 38504 Jul 19 1984 oldunix.25.7
77 -rwxrwxr-x 1 root 38568 Feb 20 1985 unix
74 -rwxrwxr-x 1 4 36956 Mar 9 1983 unix.jones
69 -rwxrwxr-x 1 root 34408 Aug 16 1983 unix.mlab
76 -rwxrwxr-x 1 4 38316 Sep 3 1982 unix.rxrl
68 -rwxrwxr-x 1 root 33958 Jun 6 1983 unix.tmp
and I have no idea how they all differ - or what each one expects to use for
a root device and swap device.
Looking at 'unix', it gives both rootdev and swapdev as '01000' (which is
probably the RL, I'm too lazy to grovel around in bdevsw and make sure). The
super block reports 19000 as the size of the file system, and sure enough,
swplo is reported as 045070 (19000), and nswap as 02710 (1480). So it's
probably set up to run and swap on RL/0.
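For anyone following along, here is the decoding being done above, assuming
the usual V6 dev_t layout (major device number in the high byte, minor in
the low). Note that 19000 + 1480 = 20480 blocks, which is exactly an RL02's
capacity in 512-byte blocks - consistent with root plus swap filling one
pack:

    #include <stdio.h>

    int main(void)
    {
        unsigned dev = 01000;                    /* rootdev/swapdev value */
        unsigned swplo = 045070, nswap = 02710;  /* from the super block  */

        printf("dev 0%o -> major %u, minor %u\n",
               dev, (dev >> 8) & 0377, dev & 0377);
        printf("swplo %u + nswap %u = %u blocks\n",
               swplo, nswap, swplo + nswap);
        return 0;
    }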
> and any of the set of disks that are in the distributions portion of
> the site with his name on them?
All of the other disks in the V6 folder with his name on it:
http://www.tuhs.org/Archive/PDP-11/Distributions/other/Tim_Shoppa_v6
are junk; here's a brief rundown on each one:
copy_num1_user.rl02:
user0_backup.rl02:
user_backup2.rl02:
All very similar to the 'unlabeled' disk (below) - lots of random user files
for biology stuff, little if anything of any use/interest.
There is a file which isn't listed in the README:
unlabeled.rl02:
It looks like another /user disk, full of random biology stuff; nothing
interesting except a copy of a nice-looking Kermit written in C (by someone
at Columbia, IIRC).
junk:
An RL01 pack (the others are all RL02's); it has a boot block with PDP-11
code in it; I mounted it on a simulator and booted it, and it says it's an
RSX-11M V3.2 disk.
user01.rl02:
This is also an RSX pack (I think), but when I tried to boot it, it said
"THIS VOLUME DOES NOT CONTAIN A HARDWARE BOOTABLE SYSTEM", and since I don't
know how to mount disks under RSX-11 I left it at that.
scratch_disk_1123.rl02:
This does indeed seem to be something that was used for disk diagnostics: the
boot block contains gubble, including (in the first RL11 block) lots of words
with all ones, and a lot of 52525's.
Noel
>> From: Larry McVoy <lm(a)bitmover.com>
>> have you gotten to a point where you can rebuild the world and install
>> your newly built stuff?
> Well, I haven't tried to do that (it's not something that I'm that
> interested in), but it _should_ be possible, since the 'vanilla' V6
> distribution does include the source for pretty much everything
> (including the C compiler, assembler, loader, etc).
So, just for grins, because I was curious (after your question), I did try
recompiling the C compiler, to see what I'd get.
What I got were three files (c0, c1 and c2) which were _the exact same size_
(down to the byte) as the binaries on the V6 Research distro, but had a
number of differences when compared with 'cmp -l'. Odd!
I don't know what the differences result from (and it's too late now to dig
into why, I'm fading). I'll take a gander tomorrow and try and work it out.
Too bad the binaries in the Research distro have had their symbol tables
stripped! That would have made it much easier...
My guess is something like 'libraries in different order, so two library
routines are swapped around in the linked binary', or something like that
(given that the size is an exact match). But I'll need to dig a bit...
Noel
> OK, I have whipped up some material on how to bring V6 up under
> Ersatz-11. See here:
> http://ana-3.lcs.mit.edu/~jnc/tech/unix/V6Unix.html
New location (although the old one still works):
http://www.chiappa.net/~jnc/tech/V6Unix.html
So it turns out there was a bogon on the old version of this page: it claimed
the Windoze version of 'tar' needed Unix system mods (which is incorrect, it
was the version for V6 which needed the mods - cut and paste error). Fixed now
(it's useful to have a Windoze 'tar' because some of the old TAR files in the
archive can't be read by modern tools), and a certain amount of new material
added to the page overall.
> There is more content/pages coming: the start of an 'advanced things
> you can do to improve your V6 Unix' page
I've upgraded that somewhat to a complete page (although I'll probably add
more at some point in the future, e.g. I have a version of 'ps' which shows
sleep channels symbolically, text slots as ordinals, etc but I need to tweak
it a bit before I bring it out). The page is here:
http://www.chiappa.net/~jnc/tech/ImprovingV6.html
now, although the main page links to it too.
And I have another page coming, e.g. a 'what to look out for when you're
porting stuff to and from V6' guide. Etc, etc.
Noel
> From: John Cowan <cowan(a)mercury.ccil.org>
> If you haven't already, see
> <http://pdos.csail.mit.edu/6.828/2004/homework/hw5.html> et seqq.
I had seen a later variant of that course; I wasn't aware that an earlier one
used Unix; thanks for the pointer.
It's a bit unfortunate (from my PoV) that in the Unix coverage they seemed to
mostly focus on the low-level mechanics (e.g. how stacks are switched, etc,
etc), and not on the (to me) more interesting lessons to be learned from V6 -
most notably, how to get so much bang for so little buck! (I am convinced
that one of the primary challenges facing computer science these days is
control of complexity, but I don't want to get way off-topic, so I'll stop
there.)
Although I suppose things like that have to be covered at some point, and
they might as well do it in V6 as in anything else!
Noel
Hello, all: I'm working (long-term) on a project to bring back to life the
V6+ Unix system (it wasn't vanilla V6 - it looks like it had some PWB stuff
added) that was used on a number of machines at the Laboratory for Computer
Science at MIT in the late 70s - early 80s.
As part of that, I've been playing with bringing up V6 on a PDP11 simulator,
and have written some stuff that would probably be useful to anyone who's
interested in bringing up Unix on a PDP-11 simulator.
I used the Ersatz-11 simulator from D-Bit (for no particularly good reason,
except it runs under Windoze, and the "FAQ on the Unix Archive and Unix on
the PDP-11" page said it was the fastest).
I have been very pleased with this simulator; it is indeed fast (my simulated
11/70 runs at about 100 MIPS on a relatively elderly Athlon, which is about
30 times as fast as a real one used to :-), and it has lots of nice features
(e.g. you can TELNET in to a terminal port on the simulated PDP-11).
It also has this nice virtual device that allows a program running on the
simulated PDP-11 i) access to files in the Windows file system, and ii) to
issue commands to the emulator. I have written a V6 driver for it (should be
fairly easy to adapt to V7 or later), and a suite of Unix commands to grab a
file off the Windows file system (both binary and text mode), and issue
various commands to the simulator.
Finally, I have a number of Windows commands to do various useful things,
such as read a file off a simulated Unix V6 file system (hosted in a Windows
file), including ports of a number of Unix commands (e.g. ncheck, nm, etc); I
don't detail them all here as I don't want this email to get too long (and
boring).
I'm not sure if anyone's interested in any of this; if so, I can send
in more info (or whip up a Web page, whichever would be better).
I also ran into a number of pitfalls on the way to getting V6 running, using
RK05 disk images from the TUHS archive, and I can do a short writeup on 'How
to bring up V6 under Ersatz-11' if anyone's interested.
Noel
> On Sat, May 03, 2014 at 06:20:55PM -0400, Gregg Levine wrote:
> What he said. I believe we all are interested.
OK, I have whipped up some material on how to bring V6 up under Ersatz-11.
See here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/V6Unix.html
That page covers i) how to get the emulator and V6 Unix disks, and what you
need to do to start it running; and then ii) some of the initial steps to
take past that to improve the working environment. Included in that are a couple of
things I tripped over, and how to avoid them.
The latter part assumes you want to do something more than just start it, so
you can see it start up; it includes coverage of the commands that work with
the emulator's "DOS device" to do things like read files off the host machine
into the Unix; a serious problem on V6 Unix having to do with 21st Century
dates; a 'more' command for V6 Unix (Vanilla V6 was back before the days of
video terminals... :-); the ability to TELNET into the emulator; and some
useful Windows commands (e.g. to read files out of the Unix into the host
machine, and create blank disk pack files); and finally a configuration file
for the E11 emulator.
There is more content/pages coming: the start of an 'advanced things you can
do to improve your V6 Unix' page - which includes the new C compiler [the
'vanilla' V6 C compiler does not handle longs, unsigned, casts, and a bunch
of other things], tar, the Standard I/O library, etc - is already there, but
unfinished. And I have some material on things you can trip over in porting
stuff back and forth (I found some doozies trying to make V6 commands run
under Windoze), etc. But that will be later.
For now, I'm interested in hearing of any errors or issues with the first
page; whether people find it completely incomprehensible, or totally fantastic
(or wherever on the axis between them it lies); what additional topics I
should cover; etc, etc.
Let me know! And enjoy your V6 experience!
Noel
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> I didn't realize that MIT had PDP-11 Unixes.
Yes; I don't know exactly when the first one arrived, but I'm pretty sure it
was around the end of my freshman year (i.e. the summer of '75), because I
remember a friend showing me Unix sometime in my sophomore year, and they
already had it as a going concern then.
The first one (I think) at MIT was the 11/70 belonging to the Domain-Specific
Systems Research group at LCS (DSSR). I got a copy of theirs for the Computer
Systems Research group (CSR) at LCS, that would have been in the fall of '77.
I don't think the AI Lab ever had one; with us all together in 545 Tech Sq I
think I would have heard (although maybe the Turtle guys on the third floor
had one).
There were probably others on campus, but Unixes could be pretty small, and
MIT is big enough that the left hand wouldn't know of the right (and 545 Tech
Sq was kind of insulated from the rest of campus anyway). One I recollect on
main campus later on was somewhere in the EE department - maybe Speech? They
had the CHAOS protocols installed on it - that was later, say '79-'80 or so.
> Other places didn't worry about it, with John Lions' V6 book being the
> biggest leak.
Not to mention things like the ACM paper, which was public domain...
> From: Win Treese <treese(a)acm.org>
> As I understood it, MIT's main objection was that they didn't want to
> get entangled in anything that would require students to sign
non-disclosure agreements.
That sounds likely.
> At some point, MIT did have a license with Western Electric that did
> not have such a requirement. I'm pretty sure it was at least V7, and
> possibly 32V; not sure about V6.
Well, I don't know about the license (MIT had a student who was an intern at
Bell, and I think a lot of stuff slipped out the back door with him :-), but
MIT definitely had V6 in 1976 or so, and V7 wasn't released until 1979. So we
had it long before V7. Plus, I have listings of the kernel; it's
definitely pre-V7, but it's not vanilla V6 - it has some other stuff in it (I
think it's probably PWB).
Noel
> V6 ... on a number of machines at the Laboratory for Computer
> Science at MIT in the late 70s - early 80s
Interesting. I didn't realize that MIT had PDP-11 Unixes. When
university CS departments were snapping up licenses right and
left, MIT demurred because AT&T licensed it as a trade secret
and MIT's lawyers (probably rightly) feared there was no way
they could keep Unix knowledge from contaminating research
projects. Other places didn't worry about it, with John Lions'
V6 book being the biggest leak. AT&T lawyers did clamp down
on general distribution of the book, but Bell Labs eagerly
hired Lions for a sabbatical visit.
Did MIT's lawyers relent by V6 time, or did LCS somehow
circumvent them?
Doug
the short: yes, you could chown your own files in 1st to 5th editions; the
first pwb was a derivation of 4th ed, so it's not the originator of the idea.
the long:
that "superuser could not even chown setuid files" awoke a long dead memory.
I needed to go back to read the 1st ed man entries for chown(1) and (2)
again ( see http://cm.bell-labs.com/cm/cs/who/dmr/1stEdman.html inc. below)
and it is documented that yes, one indeed could give away their
own files. also note 1st ed was pre-gid, files had only owner and
non-owner, so no setgid to worry about. looking more 2nd and 3rd ed
were the same (see http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v2/v2man.pdf
and http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v3/v3man.t…)
however in the 3rd ed there were now gids, yet no restrictions on chown of
setgid files. 4th and 5th ed still allowed file give-away even if the
setuid bit was set, by stripping that bit out (unless superuser) (but they
did not strip the setgid bit)
(see http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v4/v4man.t…
and http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v5/v5man.p…)
The 6th ed and on is when only the superuser could change file owners.
Since the first PWBs were derived from 4th and 5th editions, they just
did not buy into the new chown() restrictions from v6 (and added the
missed stripping of the setgid bit).
1st Ed manual entries --
11/3/71 CHOWN (I)
NAME chown -- change owner
SYNOPSIS chown owner file
DESCRIPTION owner becomes the new owner of the files. The owner may be
either a decimal UID or a name found in /etc/uids.
Only the owner of a file is allowed to change the owner. It
is illegal to change the owner of a file with the set-user-
ID mode.
FILES /etc/uids
SEE ALSO stat
DIAGNOSTICS
BUGS
OWNER ken, dmr
11/3/71 SYS CHOWN (II)
NAME chown -- change owner of file
SYNOPSIS sys chown; name; owner / chown = 16.
DESCRIPTION The file whose name is given by the null-terminated string
pointed to by name has its owner changed to owner. Only
the present owner of a file (or the super-user) may donate
the file to another user. Also, one may not change the
owner of a file with the set-user-ID bit on, otherwise one
could create Trojan Horses able to misuse other's files.
FILES
SEE ALSO /etc/uids has the mapping between user names and user
numbers.
DIAGNOSTICS The error bit (c-bit) is set on illegal owner changes.
BUGS
OWNER ken, dmr
> From Doug McIlroy <doug(a)cs.dartmouth.edu>
>
> Indeed, research Unix never allowed ordinary users to
> change a uid. And even in the first edition, the superuser
> was not allowed to do so on set-uid files, presumably to
> prevent inadvertent laying of cuckoo eggs. The v6 note
> about interaction with accounting reflected field
> experience with the overly liberal stance of other Unixes.
>
On Tuesday, January 14, 2014 2:00 AM [GMT+1=CET], John Cowan wrote:
> SZIGETI Szabolcs scripsit:
>
> > Well, with the same reasoning, we don't need passwords or protection
> > bits on files, since I can always take a piece of steel pipe and
> > beat the owner, until he gives out the data, so why bother?
>
> More like beating my argument with a pipe than the owner.
>
> > Blocking chown for general users is one level of several controls.
>
> Its specific purpose was to make per-user quotas practical, but since
> per-user quotas are as dead as the dodo, it no longer serves any known
> purpose.
I don't think quotas are dead. It seems nowadays the "preferred" storage
backend for email on Unix/Linux mail servers is Maildir, and Maildir uses the
filesystem as its own backend, together with the filesystem's quota facility
to give or take storage space to/from mailboxes -- yes, provided the users
are real system users and not "virtual users", but still.
What is "dead as the dodo" is multi-user shell access. But that does not mean
multi-user shell access should be removed from modern systems, no matter how
dead it may be.
-Pepe.
Sorry if this is off-topic but I bet someone here will know.
I recently had a significant surprise when I discovered that on HP-UX ordinary users can still give away files. Various of us who remember fairly old Unixes then sat around trying to remember which systems had this and where it came from: getting it almost entirely wrong, it turns out.
What we remembered was that it came from BSD, but this seems to be entirely wrong. It looks like it originated with System III / V, and perhaps was imported from there into some SunOS (possibly it was in all versions before Solaris?) which is why we remember it as being in BSD. It must have been in some 80s system or we would not remember it. POSIX still allows it but clearly reluctantly.
So the questions are: (a) did it originate in System III or does it go back further than that, and (b) was it in the BSD-derived SunOSes or are we just making that up?
And I guess: who thought this was a good idea?
Thanks
--tim
Yea, but that was all much later. It wasn't a problem with PWB, as its
file system structure, in practice, was /u[0-9]/{projectname}/{username},
so when a mount point ran low on space it wasn't the individual user that
got hassled, but the projects under the mount. Makes disc usage
accounting easy too: if it's under the project, the project pays for it.
You can see that structure and how PWB was actually used by a project at
http://9grid.org.uk/pwb/users-view.pdf
> From: Ed Carp <erc(a)pobox.com>
>
> But it was fun to give away large files to someone else to avoid getting
> hassled by a sysadmin when you were close to filling up your disk quota. :)
Very, very true. PWB was all about introducing UNIX into the comp
centers which were already in existence, with big iron sitting
in them. Existing staff was tapped to run PWB as well; it typically
consisted of operators, system administrators, system programmers,
program counselors, and accounting personnel. None were researchers;
UNIX was a service to be provided, and the goal was to keep the machines
up and running to charge for usage.
> From: <scj(a)yaccman.com>
>
> Recall that in those days, "systems administrator" was an entry level,
> minimum wage job. Most worked third shift, and their primary duties were
> to mount and dismount disc backup tapes. Those people who actually did
> administration in the sense we think of it were greatly underpaid and
> disrespected. The next decade or two, particularly with networking,
> caused a huge change. The Usenix LISA conferences did a lot to raise
> consciousness that there was a real there there.
>
so RJE first: yes, as written it did require that chown() work as non-root,
since it ran as the "rje" euid and did chown() files to the user's uid. I do
not believe that was the cause of the chown() semantics change, just a use
of it; one could do the same thing by other means (run as root, have a
setuid-0 file-owner changer, etc) if chown(2) was root only. Why do I
believe that?
First, in section V --
(iv) Make changes to the UNIX system only after much deliberation,
and only when major gains can be made. Avoid changing the UNIX
system's interfaces, and isolate any such changes as much as possible.
Stay close to the Research UNIX system, in order to take advantage of
continuing improvements.
OK, now let's look at a passage from VI --
A good many UNIX (as opposed to PWB/UNIX) systems are run
as "friendly-user" systems, and are each used by a fairly small
number of people who often work closely together. A large fraction
of these users have read/write permissions for most (or all)
of the files on the system, have permission to add commands to
the public directories, are capable of "re-booting" the operating
system, and even know how to repair damaged file systems.
The PWB/UNIX system, on the other hand, is most often found
in a computer-center environment.
So in the old way, no one really needed chown(); everybody had access to
everything.
Then 8.1 --
The first major set of reliability improvements concerned the
handling of disk files. It is a fact of life that time-sharing
systems are continually short of disk space;
...
long-term tape backup copies, on the other hand, offer users the
chance to delete files that they might want back at some time in
the future, without requiring them to make "personal" copies
The disk is always full, and users are discouraged from making multiple
copies of files - even encouraged to remove stuff they do not need right now
and get it from backup later - let alone making copies of someone else's
files.
Next, from 8.4: while it's about trying to solve the uid shortage
(only 256 at the time), it shows you the users' mindset; groups tended to
operate functionally as a single user, but everyone still wanted a unique id.
.... depended heavily on the
characteristics of the PWB/UNIX user community, which, as mentioned
above, consists mostly of groups of cooperating users,
rather than of individual users working in isolation from one
another. Typical behavior and opinions in these groups were:
(i) Users in such a group cared very little about how much
protection they had from each other, as long as their files
were protected from damage by users outside their group.
(ii) A common password was often used by members of a
group, even when they owned distinct user-IDs. This was
often done so that a needed file could be accessed without
delay when its owner was unavailable.
(iii) Most users were willing to have only one or two
user-IDs per group, but wanted to retain their own login names
and login directories. We also favored such a distinction, because
experience showed that the use of a single login name by
more than a few users almost always produced cluttered
directory structures containing useless files.
so the group members would know each other's passwords (but there were many
groups on the same machine); thus non-root chown() became self-service
sharing of files between group members, without the need for system
administrator involvement. While one could give files to someone
outside the group, it was not productive.
And then --
to improve the security of files, a few commands were
changed to create files with read/write permission for their
owners, but read-only for everyone else. The net effect of these
changes was to greatly enlarge the size of the user community
that could be served, without destroying the convenience of the
UNIX system and without requiring widespread and fundamental
changes.
This shows the need to lock down file access. In Research UNIX, if you were
sharing, you did not need to chown() files - you could just write them - but
PWB locked things down so a different group could not muck with them.
Now you are going to say this could all be done with proper use of group ids
and group permissions. I agree, but in practice it was not done; PWB
even considered the complete removal of gids, and only decided against it
because they would have to change too much software --
... considered was that of
decreasing the available number of the so-called "group-IDs," or
removing them entirely, and using the bits thus freed to increase
the number of distinct user-IDs. Although attractive in many
ways, this solution required a change in the interpretation of
information stored with every single disk file (and every backup
copy thereof), changes to large numbers of commands, and a
fundamental departure from the Research UNIX system during a time
when thought was being given to possible changes to that
system's protection mechanisms. For these reasons, this solution
was deemed unwise.
> From: Clem Cole <clemc(a)ccc.com>
>
> Brian - I know the paper and Mash - the 3rd author and lived the times ;-)
>
> I just don't see how having the ability to give away a file to some one
> else made it easier for anyone - system programmer or admin. The idea of
> giving a device back begs the question of how did you get ownership in the
> first place.
>
> The one thing I could think of was something like the RJE system that you
> would wanted to have made your files be owned by the RJE system, have them
> send it to the mainframe, get back information and then give the results
> back to you. If they wanted to do that subsystem with out a root style
> privilege, you would need some way to give files away.
>
> But I can think of other ways to do that without needing the chown(2) call
> to work with that semantic, so I really don't understand what it was used.
>
> To me, it does not seem to be worth much. As I said have to ask Mash if
> he remembers why it was considered a good idea.
>
> Clem
Yep, but where did the user base for PWB come from? They were
existing professional programmers from the mainframe world, still
writing for the mainframe, now submitting via UNIX RJE.
Where did the sysadmins of PWB that added these users come from?
Same answer. If users are not added to the right groups, and
the users don't know about (or need, or care about, or are able to change)
groups, then groups don't get implemented properly.
And if you don't have gids, want to collaborate, and are discouraged
from copying, you need to do a ton of chown()s
> From: Larry McVoy <lm(a)bitmover.com>
>
> > Now you are going to say this could all be done with proper use of group ids
> > and group permissions. I agree, but in practice it was not done
>
> Bzzt. We have a solution, they should have used it.
>
it follows the philosophy of pwb -- a usable system for disparate small groups
of developers on the same hardware that could be managed by admins not
system programmers.
read http://www3.alcatel-lucent.com/bstj/vol57-1978/articles/bstj57-6-2177.pdf
for the flavor of that time, and you'll understand better.
> From: Clem Cole <clemc(a)ccc.com>
>
> Brian - right as I showed in the code snippet from V6 and PWB. The idea
> came into being with PWB.
> The question that is still open is why it was added/needed in the first
> place. I always thought it was a crazy/miss-feature,
>
> I think the argument is that if you owned the file, you should be allowed
> to give it to anyone else [including root] - but that actions opens up a
> number of issues (you pointed the big security one that was handled by
> and-ing off the SUID/SGID bits). There are accounting issues as well as
> the practical one that Tim and I pointed out with importing of files on a
> tape.
>
> As I said, the file give-away feature comes into UNIX with PWB, so I would
> ask Mash is he remembers why it was needed and why the SVID folks wanted
> it. As I said, I personally found it not useful/a bad idea/miss-feature.
> I remember that I soon after I learned about it/got bitten by the side
> effect, I ran into dmr and srb at a USENIX and asked them about that a few
> other System III features that I found a little strange. I don't remember
> much of the conversation. But, if there had been a "good" reason I think
> I would have remembered it and not always thought it to be a bad idea.
>
> Clem
Indeed, research Unix never allowed ordinary users to
change a uid. And even in the first edition, the superuser
was not allowed to do so on set-uid files, presumably to
prevent inadvertent laying of cuckoo eggs. The v6 note
about interaction with accounting reflected field
experience with the overly liberal stance of other Unixes.
non-su chown worked in pwb, if the caller owned the file. code had to be
added then to the system call to strip the setuid/setgid bits if you were
not su, for obvious security reasons. you didn't see that bit stripping
in, say, the v6/v7 code.
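For concreteness, a hedged, standalone rendering of that rule (the names
are illustrative, not the PWB kernel's):

    #include <sys/stat.h>

    /* PWB-era chown(2) semantics, sketched: the owner may give a file
     * away, but unless the caller is su the set-id bits are stripped. */
    struct fake_inode { int i_uid; int i_mode; };

    int pwb_style_chown(struct fake_inode *ip, int caller_uid, int new_uid)
    {
        int su = (caller_uid == 0);

        if (ip->i_uid != caller_uid && !su)
            return -1;                          /* not owner, not su */
        if (!su)
            ip->i_mode &= ~(S_ISUID | S_ISGID); /* strip set-id bits */
        ip->i_uid = new_uid;
        return 0;
    }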
> From: Tim Bradshaw <tfb(a)tfeb.org>
>
> Sorry if this is off-topic but I bet someone here will know.
>
> I recently had a significant surprise when I discovered that on HP-UX ordinary users can still give away files. Various of us who remember fairly old Unixes then sat around trying to remember which systems had this and where it came from: getting it almost entirely wrong, it turns out.
>
> What we remembered was that it came from BSD, but this seems to be entirely wrong. It looks like it originated with System III / V, and perhaps was imported from there into some SunOS (possibly it was in all versions before Solaris?) which is why we remember it as being in BSD. It must have been in some 80s system or we would not remember it. POSIX still allows it but clearly reluctantly.
>
> So the questions are: (a) did it originate in System III or does it go back further than that, and (b) was it in the BSD-derived SunOSes or are we just making that up?
>
> And I guess: who thought this was a good idea?
>
> Thanks
>
> --tim
OK, let me try this one more time with links to get around the
restrictions on the message size.
I was cleaning out my basement and I found a box of stuff from my office at
BRL (where I left in 1987). Most of it was buttons I'd picked up at trade
shows and SF cons (I had a fabric partition next to my desk that I had them
all stuck to). Of course in the box (among a couple of later editions) was
Armando's original UNIX license (note no reference to DEC).
Also in the box were some buttons from various UNIX conferences. I
particularly remember the Sex, Drugs, and Unix one. Some of you will also
remember the year I was giving out the No Ada shirts. There's a picture of
dmr wearing one floating around somewhere.
Sun was giving out these one year. Peter Langston thought this was a little
conceited on Bill Joy's part, so the next show he arrived with buttons to
hand out that said things like "The psl of UNIX" and "The dmr of UNIX". I
had a "The ron of UNIX" somewhere but I couldn't find it in the box.
Finally there was this wooden nickel, courtesy of Bill Yost...
> The wikipedia description
> <http://en.wikipedia.org/wiki/CAT_(phototypesetter)>
> seems pretty accurate although I have never seen the beast myself.
I can confirm the wikipedia description. At Bell Labs, however, we
did not use paper tape input. As soon as the machine arrived, Joe
Ossanna bypassed the tape reader so the C/A/T could be driven
directly from the PDP-11. The manufacturer was astonished.
The only operational difficulty we had was with the separate
developer. If you didn't hand feed the end of the paper perfectly
straight into that machine, the paper would tear. Joe Condon
fixed that by arranging for the canister to sit on rollers so
it could give when the paper pulled sideways.
The first technical paper that came off the C/A/T drew a query
from the journal editor, who'd never seen a phototypeset
manuscript before: had it been published elsewhere?
Doug
> Didn't the BSTJ get phototypeset on your typesetter??
Not that I know of. But Charlie Brown's office did use it.
Brown, the CEO of AT&T, did not like wearing reading glasses
when he made a speech, so his speechwriters produced the
text in large type. Once they were into document preparation
they began to use our machine for other things. When I
discovered that confidential minutes of AT&T board meetings
were in our file system, I told them the facts of life
about computer security, especially at the center of the
UUCP universe, and persuaded the executive suite to get
its own machine.
Doug
the one at WH was directly connected to a vax 11/780, no paper tape either.
so that now finally explains why /dev/cat was write-only: it substituted for
the paper tape reader. it was always a curiosity that you could write
to it, yet never read it (i.e. get a status). a "cat /dev/cat" would
get you a "cat: cannot open /dev/cat", while a "cat /some/file > /dev/cat"
would succeed, but act like you used /dev/null instead
(as /some/file was not valid phototypesetter input).
> From: Doug McIlroy
>
> > The wikipedia description
> > <http://en.wikipedia.org/wiki/CAT_(phototypesetter)>
> > seems pretty accurate although I have never seen the beast myself.
>
> I can confirm the wikipedia description. At Bell Labs, however, we
> did not use paper tape input. As soon as the machine arrived, Joe
> Ossanna bypassed the tape reader so the C/A/T could be driven
> directly from the PDP-11. The manufacturer was astonished.
>
> The only operational difficulty we had was with the separate
> developer. If you didn't hand feed the end of the paper perfectly
> straight into that machine, the paper would tear. Joe Condon
> fixed that by arranging for the canister to sit on rollers so
> it could give when the paper pulled sideways.
>
> The first technical paper that came off the C/A/T drew a query
> from the journal editor, who'd never seen a phototypeset
> manuscript before: had it been published elsewhere?
>
> Doug
>
Look at United States Patent 4074285
http://patentimages.storage.googleapis.com/pdfs/US4074285.pdf
Figure 1 is identical to the machine I ran at Whippany Bell Labs
in the early to mid 80s. It was about 4 1/2 feet tall.
Figure 4 is the font wheel (seen as 16 in Fig 1); there were 4 distinct
sectors, each with a different font: one with Times Roman, one with
Times Roman Bold, one with Times Roman Italic, and the last with the
symbol fonts (math, Greek chars, left hand \(lh, right hand \(rh, etc.; this
one was made specifically for the Labs, as it had a Bell logo \(bs on it).
The paper was a roll of photo paper, glossy on the text side, rough on
the reverse; it was thick. It would end up going into the cassette
(20 in Fig 1) and would need to be developed. Not shown in the patent
figures was the developing and drying apparatus. At the end of
a job the exposed paper was in the cassette; you'd remove
it from the typesetter and put it into a device with rollers that would pull
it out and run it through developer and fixer liquid chemicals. Exiting
that, it would go into a dryer drum.
After it was completely dry, as it was still a continuous roll, you
would need to cut all the pages apart by hand (that is why there was
the cut mark macro (.CM) in -ms, so you could tell where to cut).
As it came from a roll, no pages ever lay completely flat.
The chemical baths were nasty smelling and gummed up the rollers.
You'd need to regularly take the developer roller and gear guts into
the janitor's closet and scrub them with a toothbrush in the slop sink
under running water.
By the second half of the 80s it was replaced by QMS PostScript
laser printers.
> From: "Jacob Goense" <dugo(a)xs4all.nl>
> All, I'm looking for images of the cat device as mentioned several
> times in the 7th edition manual, see e.g. TROFF(1) and CAT(4).
>
> From what I gathered during my digs, it should look like a
> GSI 8400, but that didn't help. Can anyone here help me find out
> what these machines looked like? A picture would be the best, but
> information on what to look for in images of unnamed typesetters will
> do as well.
>
> /Jacob
>
>
All, I'm looking for images of the cat device as mentioned several
times in the 7th edition manual, see e.g. TROFF(1) and CAT(4).
From what I gathered during my digs, it should look like a
GSI 8400, but that didn't help. Can anyone here help me find out
what these machines looked like? A picture would be the best, but
information on what to look for in images of unnamed typesetters will
do as well.
/Jacob
Hi all.
This may be of some interest. From a friend at Utah:
> Date: Sat, 30 Nov 2013 16:06:25 -0700 (MST)
> Subject: [it-professionals] computer history: Arpanet IMPs resurrected
>
> The simh list about simulators for early computers recently carried
> traffic about an effort to reconstruct and resurrect the Arpanet
> Interface Message Processors (IMPs), which were the network boxes that
> connected hosts on the early Arpanet, which later became the Internet.
>
> There is a draft of a paper about the work here:
>
> The ARPANET IMP Program: Retrospective and Resurrection
> http://walden-family.com/bbn/imp-code.pdf
>
> Utah was one of the original gang-of-five hosts on the Arpanet, and we
> received IMP number 4. Utah is mentioned twice in the article, and
> also appears in the map in Figure 3 on page 14.
>
> One amusing remark in the article (bottom of page 7) has to do with
> the fail-safe design of the IMPs:
>
> In addition ``reliability code'' was developed to allow a
> Pluribus IMP to keep functioning as a packet switch in the
> face of various bits of its hardware failing, such as a
> processor or memory [Katsuki78, Walden11 pp. 534-538]. This
> was so successful there was no simple off switch for the
> machine; a program had to be run to shut parts of the machine
> down faster than the machine could ``fix itself'' and keep
> running.
>
> As happened with early Unix releases, machine-readable code for the
> IMPs was lost, but fortunately, some old listings that turned up
> recently allowed its laborious reconstruction, verification, assembly,
> and simulation.
Arnold
Clem Cole <clemc(a)ccc.com> wrote:
>If the original BBN code had
>been left alone, since most people did not have the issues Berkeley did,
>they would never have bothered with sendmail.cf.
Now I might be badly wrong, but nonetheless this strikes me as
badly revisionist history.
The motivation for sendmail.cf was the collision of multiple
namespaces (Arpanet, Bitnet, Usenet, etc.), each implemented
in varying nonstandard ways by different mail clients and servers,
resulting in messes like "IJQ3SRA%UCLAMVS.BITNET%SU-LINDY(a)SU-CSLI.ARPA",
as one of many, many examples, as observed in the famous
"The Hideous Name", Rob Pike & P.J. Weinberger, 1985
http://pdos.csail.mit.edu/~rsc/pike85hideous.pdf
The thing is, although sendmail.cf was/is itself hideous to understand
and therefore make maintenance changes to (although I have), it is
quite capable of actually handling the above kinds of messes, and
being extended to handle new messes as they turn up.
In short, it got the job done, despite its weaknesses.
I may be wrong, but it was my strong impression that, back in the
day, this could not be said of anyone else's code, BBN or otherwise.
Doug Merritt
I am reading the delivermail (later known as postbox and then sendmail)
code from 4.0BSD and from sccs history from June 1980.
Its arpa-mailer(8) manual says it just spools the letter and actual
delivery will be performed by the ARPANET mailer daemon and refers to
mailer(ARPA) manual. The arpa.c code says "is stuck away in the
outgoing arpanet mail queue for delivery by the true arpanet mailer."
Where is this true arpanet mailer? I am guessing it periodically looks
in /usr/spool/netmail/ and delivers the messages using FTP and RFC458.
Where is this mailer(ARPA) manual?
Where is the ftp server code used for the incoming mail? (Example
code mail-dm.c is provided for the ftp server to "handle the MAIL <user>
command over the command connection.")
Also where is the uucp-mailer(8) manpage referenced in delivermail(8)?
Jeremy C. Reed
echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \
tr '#-~' '\-.-{'
On Mon, Nov 25, 2013 at 04:55:58PM -0600, Jeremy C. Reed wrote:
> Kashtan's VMS performance comparison paper and Joy's followup from early
> 1980 both refered to the VM/UNIX as Version 2.1 of the Berkeley system;
> this was the "Third" distribution; by April the kernel was known as 3.1.
>
> 2.4BSD was mentioned in the kermit source's Makefile. But maybe a
> mistake.
I stand well corrected!
Thanks,
Warren
Just saw this on the ClassicCMP list. Wonder if anyone could read it out... or if it's actually something that's already out there.
--Dave
Begin forwarded message:
> From: Chuck Guzis <cclist(a)sydex.com>
> Subject: 4.3BSD source tape offered on FreeBSD
> Date: November 25, 2013 at 2:29:19 PM EST
> To: General Discussion: On-Topic and Off-Topic Posts <cctalk(a)classiccmp.org>
> Reply-To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk(a)classiccmp.org>
>
> http://forums.freebsd.org/showthread.php?t=43346