Hi,
Don't forget the Zuse machines, which were later proven to be Turing
complete. It is certainly fascinating to see binary floating-point
arithmetic handled in a purely mechanical device (check it out if you
happen to be in Berlin). Later machines were electromechanical and
electronic.
Regards,
Szabolcs
>
> On 4 Dec 2015 at 15:52, "John Cowan" <cowan(a)mercury.ccil.org> wrote:
>>
>> Greg 'groggy' Lehey scripsit:
>>
>> > Take a look at CSIRAC in the Melbourne museum, the oldest computer in
>> > the world. It's worth it, even if they don't have it running.
>>
>> Well, there's the Antikythera mechanism.
>>
>> --
>> John Cowan http://www.ccil.org/~cowan cowan(a)ccil.org
>> In the sciences, we are now uniquely privileged to sit side by side
>> with the giants on whose shoulders we stand. --Gerald Holton
>> _______________________________________________
>> TUHS mailing list
>> TUHS(a)minnie.tuhs.org
>> https://minnie.tuhs.org/mailman/listinfo/tuhs
On Fri, Dec 4, 2015 at 12:02 AM, Will Senn <will.senn(a)gmail.com> wrote:
> 1. a utility on the host that is capable of copying a directory and its
> contents, recursively, onto a blank magtape/dectape/rk image that is then
> readable in the v6 environment
>
Right - you want a common archive format between the two systems that
talks to the tape device.
You can either create your own or, better, adopt one of the old ones that
already exist.
> 2. a tar and unzip binary for v6 that is capable of dealing with the
> tarball (but isn't the tarball going to exceed the max file size anyway, if
> so this won't work)
>
You have many to choose from; off the top of my head I can think of these,
each with different advantages (more in a minute):
- tar
- cpio
- tp/stp
- ar (new format)
You seem to also want a compression tool, but you might try compressing on
the modern system - there are solutions here also:
- pack/unpack was the old v5/v6 compression tool - I've forgotten where
it was sourced; check the first USENIX tape from '77
- porting a modern zip/gzip/bzip
> 3. an alternative archiver that runs on FreeBSD or Mac OS X, that can
> create a single file archive from a subdirectory's contents on the host
> (the resultant file would need to be extractable on v6, and if file size is
> too limited, won't work either).
>
That is a lot of work, and unless this is going to be a very long term
thing, I'm not so sure it's worth it. Basically you want a virtual FS on
the v6 system and the simulator. If you are going to do this a lot, then
it's worth it. Think of the virtual FS that VMware and the like offer.
> 4. some kind of directory transfer utility that works over telnet that can
> be executed from a FreeBSD or Mac OS X host and that can be executed on the
> v6 system as well.
>
The original unix kermit will compile using the v6 compiler (maybe the
v5 compiler). You have to dig in the archives, but you want a version
from Columbia circa 1977 and you'll be fine. The latest version uses
things in the language first described in the white book - aka Typesetter
C (Dennis was writing the book starting with v6, but it was not published
until v7). If you have a later compiler running on v6 you'll be fine.
> 5. a utility capable of creating an empty magtape/dectape/rk image and
> another capable of making a filesystem on the image and another of
> populating the image (analogous to fdisk rkimage; mkfs rkimage; rkcopy dir
> rkimage)
>
You could move the file system creation tools and set up a virtual v6 FS.
It's a lot of work, and unless this is going to be a very long term thing,
I'm not so sure it's worth it.
As for the archivers which in the short term is likely to be your best bet:
1. tar - there are a couple of versions of tar for v6, including
binaries. I personally would start there.
2. cpio was written for PWB 1.0, which is v6 kernel based. That binary
should run. But IIRC the original cpio wrote only binary headers
(the -c/ASCII headers were added later). So you'll need to be careful on
the modern computer and make sure you set the switches so that it creates
the proper endianness/byte order in the header.
3. tp/stp - on the original USENIX tape is a "super tp" that replaced
the original one. The binary should run as is. The code for it is
pre-K&R so compiling it with a modern compiler will be a little bit of
work. Also, IIRC the "directory" which is on the front of the tape is
binary, so you'll need to make sure you write everything in PDP-11 format.
4. ar - was updated by the community. Eventually, V7 took the "new ar"
from the original USENIX tape. Again, that binary should just run fine,
although I don't think its directory handling is recursive, so it may
fail that requirement for you.
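Since the crux is a common archive format, here is a minimal sketch of the host-side step (assuming GNU tar on the modern machine; the file names are hypothetical). GNU tar can still write the old Seventh Edition header format, which avoids the ustar extensions that the early tars listed above would not understand:

```shell
# Hypothetical demo tree standing in for the files to be moved.
mkdir -p demo
echo 'hello from the host' > demo/readme

# Write an old-style (Seventh Edition) tar archive; --format=v7 is a
# GNU tar option that omits the later ustar header extensions.
tar --format=v7 -cf demo.tar demo

# List the members before attaching the image to the simulator.
tar -tf demo.tar
```

Whether a particular v6 tar binary will read the result still depends on byte order and blocking, so treat this as a starting point, not a guarantee.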
Clem
All,
I am trying to figure out how to get parts of 1BSD added into a pristine
v6 install, but the question I have relates to moving more than a
handful of files from a host system into v6, which lacks several
capabilities that are taken for granted from v7 onward (tar, unzip, and
so on).
For background, in looking at the 1bsd tarball, exploded out, I saw that
ex was available on the tape in a binary form that is suitable for a
PDP-11/40, and I thought it would make life easier in v6 to have ex. So,
I used dd to copy the a.outNOID file onto a file that can be used as
a raw RK image, and then, with the RK image loaded in the PDP-11, copied
it off into the v6 system as the executable file ex, and that worked. I
was able to run ex (well, sort of, I get the colon prompt anyway... I
haven't figured out how it actually works yet). Yeeha! Having had success
of a sort with a single executable from the 1BSD tape, I would like to
see if other parts of 1BSD will work in the environment and if I can
properly install those parts.
Individually moving files using dd is tedious in the extreme (there are
many files in the tarball). I know there has to be a better way. Since
v6 doesn't have tar or unzip, it doesn't seem likely that using dd to
move the tarball into v6 will help matters. But, if there were a way
to dd a subdirectory and its contents onto an RK image and get them off
again into a usable v6 file system, that would work.
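The per-file round trip described above can at least be scripted. A minimal sketch of the host-side half (file names hypothetical; a dummy payload stands in for 1BSD's a.outNOID):

```shell
# Dummy payload standing in for the 1BSD binary.
printf 'binary payload' > a.outNOID

# Pad the file out to whole 512-byte blocks and use the result as a
# raw RK image; conv=sync zero-fills the final partial block.
dd if=a.outNOID of=scratch.rk bs=512 conv=sync

# Inside v6, with scratch.rk attached as (say) rk1, the reverse step
# would be roughly:  dd if=/dev/rrk1 of=/usr/bin/ex count=N
# where N is the size in 512-byte blocks; any zero padding in the last
# block has to be trimmed by hand if the exact length matters.
```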
My question for the group is based on the preceding discussion and the
following assumptions:
1. given a tarball such as 1bsd.tar.gz from the TUHS archive located at:
/PDP-11/Distributions/ucb
2. with a running SimH PDP-11/40 instance
with a virtual TU10 magtape
with a virtual TU56 dectape
with a virtual RK05 hard drive
3. running v6 as the operating system
What is an efficient method of moving the files of the 1bsd
distribution, or any other set of files and directories, into the v6
operating environment?
Here are some approaches that seem reasonable but that I haven't been
able to figure out; if you know better, please do tell:
1. a utility on the host that is capable of copying a directory and its
contents, recursively, onto a blank magtape/dectape/rk image that is
then readable in the v6 environment
2. a tar and unzip binary for v6 that is capable of dealing with the
tarball (but isn't the tarball going to exceed the max file size anyway?
If so, this won't work)
3. an alternative archiver that runs on FreeBSD or Mac OS X, that can
create a single file archive from a subdirectory's contents on the host
(the resultant file would need to be extractable on v6, and if file size
is too limited, won't work either).
4. some kind of directory transfer utility that works over telnet that
can be executed from a FreeBSD or Mac OS X host and that can be executed
on the v6 system as well.
5. a utility capable of creating an empty magtape/dectape/rk image and
another capable of making a filesystem on the image and another of
populating the image (analogous to fdisk rkimage; mkfs rkimage; rkcopy
dir rkimage)
If I am asking the wrong questions, or thinking badly, I would
appreciate a steer in the right direction.
Regards,
Will
> From: Will Senn <will.senn(a)gmail.com>
> I am studying Unix v6 using SimH and I am documenting the process
I did a very similar exercise using the Ersatz11 simulator; I have a lot
of stuff about the process here:
http://www.chiappa.net/~jnc/tech/V6Unix.html
It contains a number of items that you might find useful, e.g.: "V6 as
distributed is strictly a 20th Century operating system. Literally. You can't
set the date to anytime in the 21st century, for two reasons. First, the
'date' command only take a 2-digit year number. Second, even if you fix that,
the ctime() library routine has a bug in it that makes it stop working in the
closing months of 1999."
> the PDP architecture
Technically, a PDP-11 - there were a number of different PDP architectures:
https://en.wikipedia.org/wiki/Programmed_Data_Processor
is a decent listing of them; several (PDP-8, PDP-10, etc) were very popular
and successful.
A few things I noted in your first post:
> I am using the Ken Wellsch tape because it boots and is stated to be
> identical to Dennis Ritchie's tape other than being bootable and having
> a different timestamp on root.
The only differences I could discover between the two are that in the Wellsch
versions i) a Western Electric rights notice (which prints on booting) has
been added to ken/main.c, and the Unix bootable images; and ii) the RK pack
images do have, as you noted, the bootstrap in block 0.
> Note: sh is critically important, don't muck it up :). The issue is
> that if you do, there really isn't an easy way to recover.
One should _never_ install a new shell version as '/bin/sh' until it has been
run and tested for a while (for the exact reason you mention). Happily, in
Unix, as far as the OS is concerned, the command interpreter is just another
program, so it's trivial to name a new binary of the shell 'nsh' or
something, and run that for a while to make sure it's working OK, before
installing it as '/bin/sh'.
> a special file (whatever that is)
Special files are UNIXisms for 'devices'. _All_ devices in Unix appear as
'special files' in the file system, usually (but not necessarily) in /dev -
that location is a convention, not a requirement of the OS.
Noel
On Sun, Nov 29, 2015 at 08:55:23PM -0800, Paul McJones wrote:
> Thanks very much for making the original and the OCR-enhanced versions
> of Doug’s scan of the “UnixEditionZero” document available
> on tuhs.org. I notice that even with Nelson’s enhanced version,
> the file size is still large for a scanned text document, apparently
> because it was originally scanned in RGB mode, 24 bits/pixel. The
> attached version is 2.5MB, and to my eye looks identical
> to UnixEditionZero-OCR.pdf.
Paul, I've added your version into the same directory. Thanks!
Warren
Hi all,
In AUUGN v2 no. 5 (Jun-Jul 1980), Andy Tanenbaum announced the availability of a Portable Pascal Compiler for the then-proposed ISO standard. A tape was made for v6, v7, and non-unix platforms. Does anyone know if there is a tape image around that has the distro?
On a related note, has anyone successfully installed 1BSD on a v6 install running in SimH? 1BSD has the Berkeley Pascal Instructional system on it.
Regards,
Will
Sent from my iPhone
I'm too tired to dig for the exact words in the ISO standard,
but I had the impression that the official C rule these days
is that the effect of writing on a string literal is undefined.
So it's legal for an implementation to make strings read-only,
or to point several references to "What's the recipe today, Jim"
to one copy of the string in memory, and even to point uses of
"Jim" to the tail of the same string. Or both.
It is also legal for every string literal to reside in its own
memory and to be writable, but since the effect is undefined,
code that relies on that is on thin ice, especially if meant to
be portable code.
I have used, and even fixed (unrelated) bugs in, a compiler
that merged identical strings. I forget whether it also looked
for suffix matches. Whether the strings went in read-only
memory was up to the code generator (of course); in the new
back-end I wrote for it, I made them so. This turned up quite a
few fumbles in very-old UNIX code that assumed unique, writable
string literals, especially those that called mktemp(3). To my
mind that just meant the programs needed to be fixed to match
current standards (just as many old programs needed fixes to
compile without error in ISO C), so I fixed them.
I didn't (and still don't) like Joy's heavy-handed hack, but I
see his point, and think it's just fine for the language rules
to allow the compiler to do it hacklessly.
Norman Wilson
Toronto ON
I've gotten sucked into an embedded system project and they are running out of
memory. I have a vague memory of some sort of tool that I think Bill Joy
wrote (or maybe he told me about it) that would do some magic processing of
all the string constants and somehow it de-dupped the space.
Though now that I'm typing this that doesn't seem possible. Does this ring
a bell with anyone? I'm sure it was for the PDP 11 port.
Thanks,
--lm
Thanks, Doug and Warren, for the new files at
http://www.tuhs.org/Archive/PDP-11/Distributions/research/McIlroy_v0/
At the TUHS mirror at my site, you can find an additional file
http://www.math.utah.edu/pub/mirrors/minnie.tuhs.org/PDP-11/Distributions/r…ftp://ftp.math.utah.edu/pub/mirrors/minnie.tuhs.org/PDP-11/Distributions/re…
that is less than half the size, and is (somewhat) searchable, thanks
to Adobe Acrobat Pro 11 OCR conversion. Please include that in the
TUHS master archive, even renaming it to the original file, if you
wish.
I like the beginning of "Section 2. Hardware", where Dennis wrote:
>> ...
>> The PDP-11 on which UNIX is implemented is a 16-bit 12K computer,
>> and UNIX occupies 8K words. More than half of this space, however, is
>> utilized for a variable number of disk buffers; with some loss of
>> speed the number of buffers could be cut significantly.
>> ...
How much more powerful early Unix was compared to CP/M and MS-DOS, in
a small fraction of their memory space!
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Woe betide the user if any string was changed at run time...
That was then. Now it would be OK to do so for const data.
(Unless the tool chain has discarded const indications.)
Doug
> It's worth noting that Unix was built for troff. Typesetting patents
> if I recall correctly.
This is a stretch. Unix was really built because Ken and Dennis
had a good idea. The purchase of a PDP-11 for it was in part
justified by the goal of making a word-processing system. The
first in-house "sale" of Unix was indeed to the patent department
for typing patents--the selling point was that roff could be
made (by an overnight modification) to print line numbers as
USPTO required, whereas that was not a feature of a commercial
competitor. The timeline is really roff--Unix--patent--nroff--troff.
Though roff antedated Unix, it did not motivate Unix.
> Is this The UNIX Time-Sharing System, or related to it? The same
> claim appears in the first paragraph:
> https://www.bell-labs.com/usr/dmr/www/cacm.html
This draft clearly dates from 1971. Pieces of it were worked
into subsequent versions of the manual as well as published
descriptions of Unix, including the SIGOPS/CACM paper.
Doug
Hi,
I wanted to at least give porting 2.11 BSD to my Z8001 machine
a try. I haven't really written any kernel code until now, so it
will be a huge learning curve for sure. No idea what my spare
time permits, but... at least I'm planning to give it a try.
I didn't find something like "things you should do first when
porting 2.11 BSD to another architecture" online, so I thought to
myself... maybe it would be good to start with the standalone
utilities - more precisely with "disklabel".
Is there a good "HOWTO" for "first things first"? Implementing
disklabel seems to require quite some "device work" before the
first "hello world" is there - is there something else which
could be done first and does not require so much porting?
(The whole disk subsystem on that machine is different from
"usual" disk subsystems, as it is handled via a PIO.)
Regards, Oliver
I know that I'm jumping the gun a bit, but does anyone have any news
of any 50th anniversary celebrations for Unix in mid-2019?
I'd love to start planning things now, given I'm in Australia and I also
need to convince my darling wife of the need for a holiday in the U.S.
[or elsewhere 8-) ].
I will keep asking every six months.
Cheers, Warren
> I've not seen anything before Dennis' scan of the 1st
> Edition manuals. Can you make a scan of this one available?
I shall, as I had intended to do if this document was as
unknown or forgotten by others as it was by me.
Doug
> The phototypesetter version of Unix was V7.
I'm not sure of what's being said here. Manuals from
the 4th edition on were phototypeset, first on a
CAT and later a Linotron (if I remember the name right).
Doug
Hi all, I just received this e-mail from Will Senn who has just joined
the TUHS mailing list:
----- Forwarded message from Will Senn -----
Hi,
I am conducting research on older UNIX operating systems and came
across a letter from Richard Wolf to Ian Johnstone, dated Feb 5, 1979.
On p. 29 of the AUUGN, Volume 1 number 3, Mr. Wolf refers to a set of
101 fixes for research version 6. In my research, I am currently using
v6 and wondered if you knew where I might find the fixes or if the
bits are known to exist?
Kind Regards,
Will
----- End forwarded message -----
Will, there was a "50 bugs" tape for 6th Edition Unix that was "released"
to Unix owners in a very interesting distribution method: see
http://www.groklaw.net/articlebasic.php?story=20060616172103795
You can find it in the Unix Archive. Look in Applications/Spencer_Tapes/unsw3.tar.gz. It is the file usr/sys/v6unix/unix_changes.
Does anybody know of something which could be described as "101 fixes for
research version 6"? The phototypesetter version of Unix was V7.
Cheers all and welcome to the list Will.
Warren
https://groups.google.com/d/msg/net.unix/Cya18ywIebk/2SI8HrSciyYJ
Apparently the 8th Edition shell had the ability to export functions via
the environment.
I'm wondering - were there (are there?) any other shells other than bash
that picked up this feature? How was it implemented, considering this
was the cause of the "Shellshock" vulnerability?
I was amused to see it come up in one of the olduse.net newsgroups I've
been following.
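For reference, the feature itself is easy to demonstrate in bash (a minimal sketch; what changed after Shellshock was patched is the encoding of the function in the environment, not the feature itself):

```shell
# Define a function and export it into the environment (bash-specific).
greet() { echo "hello from an exported function"; }
export -f greet

# A child bash finds the definition in its environment and can run it.
bash -c greet
```

Pre-patch bash treated any environment variable whose value began with "() {" as a function definition and parsed it on startup, which is exactly the path Shellshock abused.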
Interestingly, the SysIII version of cut.c does not have the line
mentioned here. That's because it doesn't initialize _any_ of the flag
variables. The line was added some time between then and SysV, and that
is the _only_ significant change between the SysIII and pdp11v versions.
https://groups.google.com/d/msg/net.bugs.usg/iAkgNVBJNSo/PgXAC2vi044J
Hi,
I'm struggling on reimplementing the C code for the link()
syscall.
Usually on SYSIII and V7 you have something like:
link()
{
register struct inode *ip, *xp;
register struct a {
char *target;
char *linkname;
} *uap;
[...]
u.u_dirp = (caddr_t)uap->linkname;
[...]
}
The problem now on my system is that u_dirp in the user struct
is saddr_t (long *) and not caddr_t (char *), and I wonder how
I have to assign uap->linkname.
The original ASM code looks like:
_link::
{
dec fp,#~L2
ldm _stkseg+4(fp),r8,#6
ldl rr8,_u+36
[...]
ldl rr2,rr8(#4)
ldl rr4,rr2
and r4,#32512
ldl _u+78,rr4
[...]
I had the same problem 7 years ago already but didn't come up
with a solution back then.
http://home.salatschuessel.net/quest/problems.php
What came to my mind in the meantime is the following and maybe
someone can check if this is right:
1. _u+78 (u.u_dirp) contains a pointer - so what is assigned
here in ASM is a memory address.
2. The memory notation for accessing segmented data on Z8001
seems to be 0xSS00XXXX where SS is the segment number up
to 127 and XXXX is the relative address in that segment.
3. This means that ANDing 0xSS00 with 0x7F00 strips out
all invalid bits from the segment position of the address,
making sure it can only be between 0 and 127 (0x0000 and
0x7F00).
I wonder how the assignment of uap->linkname to u.u_dirp has
to be done correctly?!
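Point 3 can be checked with ordinary shell arithmetic (a sketch; the constants are the ones quoted above, 32512 being 0x7F00):

```shell
# UBASE from the kernel: segment number in the top byte (0xSS00XXXX),
# 16-bit offset in the low word.
addr=$(( 0x3E00F600 ))

# 'and r4,#32512' masks the high word with 0x7F00, keeping only a
# 7-bit segment number (0..127) in bits 8..14.
high=$(( (addr >> 16) & 0x7F00 ))
low=$((  addr & 0xFFFF ))

printf 'segment word %04x, offset %04x\n' "$high" "$low"   # 3e00, f600
```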
I see http://archive.org/details/BillJoyInterview but the source is
unknown. Does anyone know who conducted this interview or where it came
from? (I tried to contact the archive years ago but didn't hear back.)
Most of the stories I have alternative sources for but I'd like to cite
some of this content in a book I am authoring.
Also it doesn't seem to have a starting place. It appears the beginning
of the interview is missing. (Also it has eight sections marked with
"[Skipped portion you requested.]" and 27 page breaks.)
It appears it may have been OCR'd (Exacfiy = Exactly, correcfiy =
correctly, f'mally = finally, f'md = find, f'n'st = first, llliac =
ILLIAC, Riogin = Rlogin, HTrP: = HTTP: and many other OCR-like typos),
plus misspelled names where the original was typed wrong (so I assume the
transcriber wasn't directly related to this story, like deck = DEC,
Favory = Fabry, Gerwitz = Gurwitz, "E-bid(?) ASCH" = maybe EBCDIC to
ASCII).
If anyone knows the source for this interview or a proper bibtex entry
for it, I'd appreciate it.
Howdy,
I'm the secretary of the Atlanta Historical Computing Society, and a lurker
here on the TUHS list.
We're starting our process of looking for speakers at our upcoming VCF SE
4.0. It'll be April 2nd and 3rd 2016
in the Atlanta area. Since I've enjoyed reading and hearing about all the
UNIX history on this list,
I was wondering if anybody here might be willing to speak at our event.
It seems there is a good
deal of history that is captured in the minds of the members of this
list... which might make a number
of good presentations.
We're open on ranges of topics. We've had many different people speak...
the first editor of Byte,
the artist who did the covers of many Byte magazines, Jason Scott from
archive.org, some early SWTPC
engineers, some early Apple engineers including Daniel Kottke. We also
have members from the
various Vintage Computer groups from around the U.S. speak (and of course
some local members),
and some University Archivists who are starting to have to deal with old
media. This year we will have
Jerry Manock (the designer from Apple who established their design
group...designed cases for Mac, etc.)
as one speaker.
We love to learn about the history, esp. from the folks who lived it. I am
just slightly too young to have
been there (was born in '65) but always enjoy the talks. We can
accommodate from a 30 min talk to
an hour. We have a professional sound set up and stage. We have a
co-sponsor, the Computer
Museum of America that is being established in the area as well. We have
between 5 and 10 slots to fill.
We aren't a large group, but we do have a limited budget to assist with
travel, lodging, etc. We can handle
"nice" but not the Ritz :-)
If anybody is interested, please contact me and I can provide further
details. And if you'd be interested
but can't make this year, please still contact me, maybe we can work
something out in the future.
Thanks!
Earl Baugh
Secretary
Atlanta Historical Computing Society.
Hi,
does someone know where "u" is defined on SYSIII or V7?
sys/user.h states:
extern struct user u;
But I wonder where it is defined? On ZEUS I have u.o but I'm
not able to correctly disassemble it. Right now I'm guessing
that it should be something like:
u module
$segmented
$abs %F600
global
_u array [%572 byte]
end u
But the resulting object (u.o.hd) does not match 100% the existing
u.o on the system (u.o.orig.hd).
--- u.o.orig.hd 2008-05-16 21:52:12.000000000 +0200
+++ u.o.hd 2008-05-16 21:52:16.000000000 +0200
@@ -3,6 +3,6 @@
 00000020 00 00 00 01 00 00 00 00 01 00 00 00 00 00 00 00 |................|
 00000030 00 00 00 02 00 00 00 00 00 00 00 00 1e 00 75 5f |..............u_|
 00000040 70 00 00 00 00 00 01 00 00 00 1e 01 75 5f 64 00 |p...........u_d.|
-00000050 00 00 00 00 3e 00 f6 00 61 3e 5f 75 00 00 00 00 |....>..a>_u....|
+00000050 00 00 00 00 01 00 f6 00 61 01 5f 75 00 00 00 00 |.......a._u....|
 00000060 00 00 |..|
 00000062
iPhone email
> On Nov 13, 2015, at 2:38 PM, Brantley Coile <brantleycoile(a)icloud.com> wrote:
>
> For performance reasons an assembly symbol "u" was defined to be a fixed address. That allowed us to use constructions like u.u_procp to generate a single address. It was very fast. Does this help?
>
> iPhone email
>
>> On Nov 13, 2015, at 2:33 PM, Oliver Lehmann <lehmann(a)ans-netz.de> wrote:
>>
>>
>> Oliver Lehmann <lehmann(a)ans-netz.de> wrote:
>>
>>> u module
>>> $segmented
>>> $abs %F600
>>>
>>> global
>>>
>>> _u array [%572 byte]
>>>
>>> end u
>>
>> By any chance - is there someone on the list who understands Z8000 PLZ/ASM? ;)
>>
>> The problem is, that "u" must be available in the address space on this
>> location for the kernel to function correctly:
>>
>> # define UBASE 0x3E00F600 /* kernel virtual addr of user struct */
>>
>> And with the above ASM code, it is placed on 0x0100F600. I also tried
>> of course $abs 0x3E00F600 but it makes no difference. It is always
>> placed at 0x0100F600 and I have zero clue why
>>
>> the original object from the system:
>>
>> #67 nm /usr/sys/conf/u.o
>> 3e00f600 A _u
>> 01000000 s u_d
>> 0000 s u_p
>>
>>
>> my object generated from my u.s:
>>
>> #68 nm u.o
>> 0100f600 A _u
>> 01000000 s u_d
>> 0000 s u_p
>>
>> Somehow I need to get the address right.... This is why I wanted to
>> look up how the original SYSIII or V7 was doing it (even if the asm
>> would be of course completely different).
I'm not sure how old cut is, but a quick look at the code gave me the
idea it could be backported to V7, as I'm fairly sure that cut wasn't
in V7.
It doesn't look like it needs a lot of stuff, just fclose, puts, do
and while loops. Even a v6 or v5 backport doesn't seem too difficult.
Mark
> /* (-s option: serial concatenation like old (127's) paste command */
>
> For that matter, what's the "old (127's) paste command" it refers to?
I can't remember 127 ever having a "paste" command. We did have "ov",
which overlaid adjacent pairs of formatted pages to make two-column
text. "Serial concatenation" would seem to be what was done by "pr"
or "cat".
"ov" figured in the flurry of demos on the day of pipes' birth.
nroff | ov | ov
made four-column output.
> For that matter, what's the "old (127's) paste command" it refers to?
Every organization at AT&T had a number as well as a name.
In the early days of UNIX, the number for Computer Science
Research was 127. At some point a 1 was prepended, making
it 1127, but old-timers still used the three-digit code.
So it's a good guess that `127's paste command' means
one that came from, or had been modified in, Research.
I don't know when or where, though. I don't see a paste
command in V7. paste.c in V8 has exactly the same comment
at the top.
Norman Wilson
Toronto ON
>> I thought PWB (makers of "make") came from Harvard?
> PWB ... came straight out of Bell. Not sure about all the
> applications (well, SCCS came from Bell).
PWB did not create make; Stu Feldman did it in research.
PWB did make SCCS. I believe it also originated cico,
find and eval. Probably more, too, but I can't reliably
separate PWB's other contributions from USG's.
Doug
Hi,
I have an old Z8001-based SysIII variant and I would love to have
TCP/IP on it (SLIP first, later with a homebrew ethernet device).
I wonder if someone ever saw TCP/IP available on a System III?
I have, let's say, 90% of the kernel source available and running
on it, and I started digging in the available 4.2 BSD sources.
It looks like there would be much to do to hack in TCP/IP on my
own (no IPC, no net, no PTY, no....).
I got K5JB running (a userland TCP/IP implementation) after I fixed
some C code, because the C compiler available on the system is.....
kinda limited.
telnetd is of course not working as there are no pseudo-teletypes
on this SYSIII. At least I got ping, echoping and ftpd up and
running via SLIP
(10.1.1.2 is my SysIII box:)
# ping -c3 10.1.1.2
PING 10.1.1.2 (10.1.1.2): 56 data bytes
64 bytes from 10.1.1.2: icmp_seq=0 ttl=254 time=316.317 ms
64 bytes from 10.1.1.2: icmp_seq=1 ttl=254 time=297.328 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=254 time=296.369 ms
--- 10.1.1.2 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 296.369/303.338/316.317/9.186 ms
# ftp 10.1.1.2
Connected to 10.1.1.2.
220 FTP version K5JB.k37 ready at Tue Apr 30 22:25:47 1991
Name (10.1.1.2:root): test
331 Enter PASS command
Password:
230 Logged in
ftp> get sa.timer
local: sa.timer remote: sa.timer
500 Unknown command
500 Unknown command
200 Port command okay
150 Opening data connection for RETR sa.timer
2571 0.53 KB/s
226 File sent OK
2571 bytes received in 00:05 (0.48 KB/s)
ftp> get wega
local: wega remote: wega
200 Port command okay
150 Opening data connection for RETR wega
98723 0.51 KB/s
226 File sent OK
98723 bytes received in 03:05 (0.51 KB/s)
ftp> exit
221 Goodbye!
#
So I wonder if someone has got anything SYSIII -> Net/TCP/IP related
which could help me in any way to get a SYSIII kernel capable of
TCP/IP and PTYs; getting a telnetd up and running via SLIP is my
first goal.
Regards,
Oliver
I just got on this list today, and I see that Larry McVoy asks:
"I wish Marc was on this list, be fun to chat."
I'd be happy to chime in on SCCS or early PWB questions, to the extent I
remember anything.
I did see a thread about PWB contributions in which people are trying to
sort out what came from research and what from the PWB group (under Evan
Ivie). As I recall, PWB was always based on research. Dick Haight would
install the latest research system from time-to-time, and then the
so-called "PWB UNIX" was whatever he had taken from research plus stuff we
were developing, such as SCCS. Unlike, say, Columbus UNIX, our kernel
always matched research at the system call level, so there never was such a
thing as a PWB-kernel dependency.
(I think the USG system was run quite differently: They had their own
system, and would merge improvements from research into it. I could be
wrong about this, as I never worked in the USG group.)
--Marc
Anyone have some sun4c or hp300 gear they'd be persuaded to part with, preferably in the SF Bay Area? It's getting a bit too difficult using broken emulators and broken cross compilers...
Sent from my iPhone
Hi Marc,
TUHS == The Unix Heritage Society; it's a mailing list as well as a
repository of Unix source code (including yours). A lot of the Bell
Labs guys are on the list, it has weird topics like the current one of
how to get System III booting on a Zilog something that is 16 bits but
can address 8MB in segments.
There was a side discussion of PWB and SCCS came up and I started talking
about how cool SCCS was and how RCS gave it an undeserved bad rap. In
the process I said "I wish Marc was on this list" and John Cowan said
here is his email, go ask him.
I think you'd have fun on the list, it's old school unix. Lots of signal,
very little noise. I personally would love to have you there, SCCS was
brilliant. It would be fun to pick your brain about how that happened.
And for the record your advanced unix programming book has influenced
how I code. It error checks when there could be errors and passes when
there shouldn't be errors. I feel like that book threaded the needle -
error checking matters except when it doesn't. It taught me a lot and
I pass it on to anyone who will listen.
If you want to get on the list send an email to wkt(a)tuhs.org. Be good
to have your voice here.
--lm
> cpio, expr, xargs, yacc, and lex first appeared outside
> the Bell Labs boundary in the PWB release
This gently corrects a statement in my posting: the name
of one of the PWB-originated programs is expr, not eval.
Doug
> From: Dave Horsfall <dave(a)horsfall.org>
> I thought PWB (makers of "make") came from Harvard?
PWB? As in "Programmer's Work Bench"? The OS part of that came straight out
of Bell - see pg. 266 in the first Unix BSTJ issue. Not sure about all the
applications (well, SCCS came from Bell).
Noel
Dan,
I wrote:
Quiz for the occasion: which major Unix utility adopted IPL's
unprecedented expression syntax?
You correctly responded:
troff.
I suppose, in a sense, that 'dc' also fits the bill, but given that that is
inherent in its stack-based nature, I doubt that is what you meant.
The notion of precedence pertains specifically to infix notation, so
postfix dc is definitely not in the running.
Idle thought about my typo: Though APL is famously inscrutable, IPL
(specifically IPL-V) outshined it in that department.
Doug
Loved or loathed for inventing APL, we lost him in 2004. The best thing
you can say about APL (I used APL\360) is that it's, err, concise...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Sent to me by a friend:
https://youtu.be/vT_J6xc-Az0
There's another one there about "The C Programming Language" book
as well. And looks like more to come.
Arnold
On Fri, 2 Oct 2015 12:00:08 -0600, I posted to this list a summary of the
earliest mentions of Unix in several corporate technical journals.
This morning, I made a similar search in the complete bibliographies of
29 journals on the history of computing, mathematics, and science listed at
http://ftp.math.utah.edu/pub/tex/bib/index.html#content
As might be expected, there is little mention of Unix (or Linux) in those
publications: the only ones that I found are these:
+-----------------------+------------------+----------------------------------------------------------------------------------+
| filename | label | substr(title,1,80) |
+-----------------------+------------------+----------------------------------------------------------------------------------+
| cryptologia.bib | Morris:1982:CFU | Cryptographic Features of the UNIX Operating System |
| annhistcomput.bib | Tomayko:1989:ACI | Anecdotes: a Critical Incident; The First Port of UNIX |
| annhistcomput.bib | Tomayko:1989:AWC | Anecdotes: The Windmill Computer---An Eyewitness Report of the Scheutz Differenc |
| ieeeannhistcomput.bib | Toomey:2010:FEU | First Edition Unix: Its Creation and Restoration |
| ieeeannhistcomput.bib | Sippl:2013:IIM | Informix: Information Management on Unix |
+-----------------------+------------------+----------------------------------------------------------------------------------+
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Recent traffic on the TUHS list has discussed early publications about
UNIX at DECUS.
The Digital Technical Journal of Digital Equipment Corporation began
publishing in August 1985, and there is a nearly complete bibliography
at
http://www.math.utah.edu/pub/tex/bib/dectechj.bib
Change .bib to .html for a version with live hyperlinks.
The first publication there that mentions ULTRIX in its title is from
March 1986. Unix appears in a title first in Spring 1995.
The document collection at
http://bitsavers.trailing-edge.com/pdf/dec/decus/
doesn't appear to have much that might be related to Unix ports to DEC
hardware.
The Hewlett-Packard Journal is documented in
http://www.math.utah.edu/pub/tex/bib/hpj.bib
The first paper recorded there that mentions Unix or HP-UX is
from March 1984.
The Intel Technical Journal is covered in those archives as well at
http://www.math.utah.edu/pub/tex/bib/intel-tech-j.bib
but it only began relatively recently, in 1997.
The IBM Systems Journal began in 1962, and the IBM Journal of Research
and Development in 1957, and both are in those archives at
http://www.math.utah.edu/pub/tex/bib/ibmsysj.bib
http://www.math.utah.edu/pub/tex/bib/ibmjrd.bib
In the Systems Journal, the first mention of Unix or AIX is in Fall
1979 (Unix) and then December 1987 (AIX). In the Journal of R&D, AIX
appears in January 1990, and Unix appears in abstracts sporadically,
but is in a title first in late Fall 2002.
In the Bell Systems Technical Journal, covered at
http://www.math.utah.edu/pub/tex/bib/bstj1970.bib
(and other decades from 1920 to 2010), the first mention of Unix in a
title is July/August 1978.
There may have been similar corporate technology journals at other
computer companies, such as CDC, Cray, Data General, English Electric,
Ferranti, Gould, Harris, NCR, Pr1me, Univac, Wang, and others, but
I've so far made no attempt to track them down and add bibliographic
coverage. Suggestions are welcome!
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Dave Horsfall:
Oh, and I also wrote many articles for AUUGN, and presented the original
Unix paper at a DECUS conference, just to stir up the VMSoids.
=====
Do you mean the first UNIX-related paper ever at a DECUS? If so,
do you mean DECUS Australia or DECUS at all? I'm pretty sure there
was UNIX-related activity in DECUS US in 1980, probably earlier, and
am quite sure there was by 1981 when I was on the sidelines of what
eventually became the UNIX SIG.
It was initially called the Special Software and Operating Systems SIG,
because DECUS US leadership always included a somewhat stodgy subgroup
who were more afraid of offending Digital's marketing people than of
serving the membership. So we ended up with a code name.
Since there were in fact Digital technical and marketing people supporting
the new SIG, it was only a couple of years before the name was fixed.
Norman Wilson
Toronto ON
(Lived in Los Angeles and then New Jersey during that period)
On Thu, Sep 24, 2015 at 9:27 AM, <arnold(a)skeeve.com> wrote:
> I think the Berkeley guys had an underground
> pipeline to Bell labs and some stuff got out that way. :-)
>
It was not underground at all. Tools packaged in BSD came from all over
the community. style and diction were released into the wild by
themselves before they were packaged into an AT&T USG UNIX or Research UNIX
release. I got them personally, directly, and had them installed at
Tektronix soon after their first publication and a talk about them at USENIX
(IIRC that was the Boulder conference in the "Black Hole" movie theatre).
Since I had a minor stake in it (as my first C program), fsck is another
good example of the path to UCB. Ted started the predecessor program
when he was at UMich (with Bill Joy). He did his OYOC year and later a
full PhD at CMU. He was one of my lab partners in his OYOC year. fsck
as we know it now was done during that time (and I helped him a bit).
He was bringing the sources back and forth from Summit to CMU (at the time
on an RK05 or sometimes a bootable DOS tape image of one - I may still have
one of these). I believe he gave a copy of the sources very early to wnj
-- which is how it ended up in 4.1BSD. I don't think it was in the
original 3.0 or 4.0 packages, as it was not in V5, V6 or V7 either. I
believe it was released in PWB 2.0 - not sure, and Minnie does not seem to
have them.
I'm pretty sure the SCCS and cpio sources came through one of the PWB
releases (1 or 2) that UCB got from AT&T.
Clem
In late 2010, I released decade-specific bibliographies of the Bell
System Technical Journal (BSTJ) at
http://www.math.utah.edu/pub/tex/bib/bstj1920.bib
...
http://www.math.utah.edu/pub/tex/bib/bstj2010.bib
(change .bib to .html for versions with live hyperlinks).
I get weekly status reports for the hundreds of bibliographies in the
archives to which the bstj*.bib files belong, and until recently, I'd
been puzzling about the apparent cessation of publication of the Bell
Labs Technical Journal (its current name) in March 2014.
I now understand why: according to the Wiley Web site for the journal,
ownership and the archives have been transferred to IEEE, effective
with volume 19 (2014).
The bstj2010.bib file has accordingly been updated today with coverage
of (so far, only four) articles published by IEEE in volume 19. [The
first of those is a 50-year retrospective on the discovery of the
Cosmic Microwave Background that provided some of the first solid
evidence for the Big Bang theory of the origin and evolution of the
universe, and led to the award of the 1978 Nobel Prize in Physics to
Arno Penzias and Robert Wilson. The article also includes a timeline
of important Bell Labs developments, and Unix is listed there.]
Older list readers may remember that a lot of the early research
publications about Unix appeared in the BSTJ issues, so this journal
should have considerable interest for TUHS list users, and the move of
the archives from Wiley to IEEE may make the back issues somewhat more
accessible outside those academic environments that have library
subscriptions for Wiley journals in electronic form.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
The disappearance of some troff-related documents that had
been on line at Bell Labs was recently reported on this
list. That turns out to have been a bureaucratic snafu.
Plan 9 and v7 are back now. It is hoped that CSTRs will
follow.
Doug
It seems that nroff had the ability to show underlined text very early
on, possibly as early as v3 according to the v3 manual.
I haven't managed to get this to work right under simh but I was
thinking maybe there's a way to do it. It needs an 'underline font'
but the mechanism of how this worked in the old days is a bit of
mystery to me. The output device would have to have the ability to
either display or print underlined text. Maybe someone can remember
which terminal devices supported this in the old days which worked
"out of the box" in the v5,v6 era.
Maybe there was the ability to use overstrike characters on the teletype?
In bash I can use:
echo -e "\e[4munderline\e[0m"
Shouldn't be too hard to hack up something that works in emulated v5.
Mark
> It seems that nroff had the ability to show underlined text very early
Pre-Unix roff had the .ul request. Thus I expect (but haven't checked)
that it was in Unix roff. It would be very surprising if nroff, which was
intended to be more capable than roff, didn't have some underlining
facility right from the start.
Doug
Unix was what the authors wanted for a productive computing environment,
not a bag of everything they thought somebody somewhere might want.
One objective, perhaps subliminal originally, was to make program
behavior easy to reason about. Thus pipes were accepted into research
Unix, but more general (and unruly) IPC mechanisms such as messages
and events never were.
The infrastructure had to be asynchronous. The whole point was to
surmount that difficult model and keep everyday programming simple.
User visibility of asynchrony was held to a minimum: fork(), signal(),
wait(). Signal() was there first and foremost to support SIGKILL; it
did not purport to provide a sound basis for asynchronous IPC.
The complexity of sigaction() is evidence that asynchrony remains
untamed 40 years on.
Doug
Hi All.
Here is BWK's contribution.
| Date: Thu, 24 Sep 2015 17:28:21 -0400 (EDT)
| From: Brian Kernighan <bwk(a)cs.princeton.edu>
| To: arnold(a)skeeve.com
| Subject: Re: please get out your flash light...
|
| Some answers interpolated, but lots remain mysteries...
|
| On Thu, 24 Sep 2015, arnold(a)skeeve.com wrote:
|
| > Hi. Can you shed some light?
| >
| >> From: Diomidis Spinellis <dds(a)aueb.gr>
| >> To: tuhs(a)minnie.tuhs.org
| >> Date: Thu, 24 Sep 2015 12:27:03 +0300
| >> Subject: [TUHS] Questions regarding early Unix contributors
| >>
| >> I found out that the book "Life with Unix" by Don Libes and Sandy
| >> Ressler has a seven page listing of Unix notables, and I'm using that to
| >> fill gaps in the contributors of the Unix history repository [1,2].
| >> Working through the list, the following questions came up.
| >>
| >> - Lorinda Cherry is credited with diction. But diction.c first appears
| >> in 4BSD and 2.10BSD. Did Lorinda Cherry implement it at Berkeley?
|
| Nina Macdonald, maybe? Lorinda worked with people in that group.
|
| >> - Is Chuck Haley listed in the book as the author of tar the same as
| >> Charles B. Haley who co-authored V7 usr/doc/{regen,security,setup}? He
| >> appears to have worked both at Bell labs (tar, usr/doc/*) and at
| >> Berkeley (ex, Pascal). Is this correct?
|
| I think so.
|
| >> - Andrew Koenig is credited with varargs. This is a four-line header
| >> file in V7. Did he actually write it?
| >>
| >> - Ted Dolotta is credited with the mm macros, but the document "Typing
| >> Documents with MM" is written by D. W. Smith and E. M. Piskorik. Did
| >> its authors only write the documentation? Did Ted Dolotta also write
| >> mmcheck?
|
| I don't think Ted wrote -mm; he might have been the manager of that
| group? Ask him: ted(a)dolotta.org
|
| >> Also, I'm missing the login identifiers for the following people. If
| >> anyone remembers them, please send me a note.
| >>
| >> Bell Labs, PWB, USG, USDL:
|
| ark
|
| >> Andrew Koenig
| >> Charles B. Haley
| >> Dick Haight
|
| Maybe rhaight, but don't quote me. Last address I have is from
| long ago: rhaight(a)jedi.accn.org
|
| >> Greg Chesson
|
| Can't remember whether it was grc or greg
|
| >> Herb Gellis
| >> Mark Rochkind
|
| You probably mean Marc J Rochkind. I think it was mmr, but
| ask him: rochkind(a)basepath.com
|
| >> Ted Dolotta
| >>
| >> BSD:
| >> Bill Reeves
| >> Charles B. Haley
| >> Colin L. Mc Master
| >> Chris Van Wyk
|
| Was Chris ever part of BSD? He was at Stanford, then Bell Labs,
| where he was cvw.
|
| >> Douglas Lanam
| >> David Willcox
| >> Eric Schienbrood
| >> Earl T. Cohen
| >> Herb Gellis
| >> Ivan Maltz
| >> Juan Porcar
| >> Len Edmondson
| >> Mark Rochkind
|
| See above
|
| >> Mike Tilson
| >> Olivier Roubine
| >> Peter Honeyman
|
| honey (remember honeydanber?)
|
| >> R. Dowell
| >> Ross Harvey
| >> Robert Toxen
| >> Tom Duff
|
| td
|
| >> Ted Dolotta
| >> T. J. Kowalski
|
| frodo
|
| >> Finally, I've summarized all contributions allocated through file path
| >> regular expressions [3] into two tables ordered by author [4]. (The
| >> summary is auto-generated by taking the last significant part of each
| >> path regex.) If you want, please have a look at them and point out
| >> omissions and mistakes.
| >>
| >> I will try to commit all responses I receive with appropriate credit to
| >> the repository. (You can also submit a GitHub pull-request, if you prefer.)
| >>
| >> [1] https://github.com/dspinellis/unix-history-repo
| >> [2]
| >> http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
| >> [3]
| >> https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
| >> [4] http://istlab.dmst.aueb.gr/~dds/contributions.pdf
| >>
| >> Diomidis
| >> _______________________________________________
| >> TUHS mailing list
| >> TUHS(a)minnie.tuhs.org
| >> https://minnie.tuhs.org/mailman/listinfo/tuhs
I can assure you that Lorinda Cherry wrote most of the important
code in WWB, including style and diction. The idea for them
came from Bill Vesterman at Rutgers. Lorinda already had parts,
a real tour de force, which assigned parts of speech to words
in a text. Style was the killer app for parts and was running
within days of his approach to the labs wondering whether
such a thing could be built. Lorinda also wrote deroff, which
these tools of course needed. WWB per se was packaged by
USDL; I am sorry I can't remember the name of the guiding
spirit. So Lorinda's code detoured through there on its
way into research Unix.
Chris van Wyk was cvw. He was at Bell Labs, not BSD.
Chuck Haley is indeed Charles B. Haley.
Andy Koenig was ark.
A few scattered answers, some redundant with those of others:
-- Lorinda Cherry (llc) worked at Bell Labs. She wrote diction (and
the rest of the Writer's Workbench tools) there, in the early
1980s; if some people saw it first in BSD releases that is just
an accident of timing (too late for V7) and exposure (I'm pretty
sure it was available in the USG systems, which weren't generally
accessible until a year or two later).
Lorinda is one of the less-known members of the original Computer
Science Research Center who nevertheless wrote or co-wrote a lot
of things we now take for granted, like dc and bc and eqn and
libplot.
Checking some of this on the web, I came across an interesting
tidbit apparently derived from an interview with Lorinda:
http://www.princeton.edu/~hos/frs122/precis/cherry2.htm
I wholly endorse what she says about UNIX and the group it came from.
One fumble in the text: `Bob Ross' who liked to break programs is
surely really Bob Morris.
-- So far as I know, Tom Duff (td) was never at Berkeley. He's
originally from Toronto; attended U of T; was at Lucasfilm for a
while (he has a particular interest in graphics, though he is a
very sharp and subtle programmer in general); started at Bell Labs
in 1984, not long before I did. He left sometime in the 1990s,
lives in Berkeley CA, but works neither at UCB nor at Google but
at Pixar.
-- T. J. Kowalski (frodo) was at Bell Labs; when I was there he
worked in the research group down the hall (Acoustics, I think), with
whom Computer Science shared a lot of UNIX-related stuff. Ted is
well-known for his work on fsck, but did a lot of other stuff, including
being the first to get Research UNIX to work on the MicroVAX II. He
also had a high-quality mustache.
-- Andrew Koenig (ark) was part of the Computer Science group when
I was there in the latter 1980s. He was an early adopter of C++.
asd, the automatic-software distributor we used to keep the software
in sync on the 20-or-so systems that ran Research UNIX, was his work.
-- Mike Tilson was, I think, one of the founders of HCR (Human Computing
Resources), a UNIX-oriented software company based in Toronto in the
early 1980s. The company was later acquired by SCO, in the days when
SCO was still a technical company rather than a den of lawyers.
-- Peter Honeyman (honey) was never, I think, at Berkeley, though
he is certainly of the right character. In the 1980s he was variously
(sometimes concurrently?) working for some part of AT&T and at Princeton.
For many years now he has been in Ann Arbor MI at the University of
Michigan, where his still-crusty manner appears not to interfere with
his being a respected researcher and much-liked teacher.
Norman Wilson
Toronto ON
(Bell Labs Computing Science Research, 1984-1990)
I found out that the book "Life with Unix" by Don Libes and Sandy
Ressler has a seven page listing of Unix notables, and I'm using that to
fill gaps in the contributors of the Unix history repository [1,2].
Working through the list, the following questions came up.
- Lorinda Cherry is credited with diction. But diction.c first appears
in 4BSD and 2.10BSD. Did Lorinda Cherry implement it at Berkeley?
- Is Chuck Haley listed in the book as the author of tar the same as
Charles B. Haley who co-authored V7 usr/doc/{regen,security,setup}? He
appears to have worked both at Bell labs (tar, usr/doc/*) and at
Berkeley (ex, Pascal). Is this correct?
- Andrew Koenig is credited with varargs. This is a four-line header
file in V7. Did he actually write it?
- Ted Dolotta is credited with the mm macros, but the document "Typing
Documents with MM" is written by D. W. Smith and E. M. Piskorik. Did
its authors only write the documentation? Did Ted Dolotta also write
mmcheck?
Also, I'm missing the login identifiers for the following people. If
anyone remembers them, please send me a note.
Bell Labs, PWB, USG, USDL:
Andrew Koenig
Charles B. Haley
Dick Haight
Greg Chesson
Herb Gellis
Mark Rochkind
Ted Dolotta
BSD:
Bill Reeves
Charles B. Haley
Colin L. Mc Master
Chris Van Wyk
Douglas Lanam
David Willcox
Eric Schienbrood
Earl T. Cohen
Herb Gellis
Ivan Maltz
Juan Porcar
Len Edmondson
Mark Rochkind
Mike Tilson
Olivier Roubine
Peter Honeyman
R. Dowell
Ross Harvey
Robert Toxen
Tom Duff
Ted Dolotta
T. J. Kowalski
Finally, I've summarized all contributions allocated through file path
regular expressions [3] into two tables ordered by author [4]. (The
summary is auto-generated by taking the last significant part of each
path regex.) If you want, please have a look at them and point out
omissions and mistakes.
I will try to commit all responses I receive with appropriate credit to
the repository. (You can also submit a GitHub pull-request, if you prefer.)
[1] https://github.com/dspinellis/unix-history-repo
[2]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
[3]
https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
[4] http://istlab.dmst.aueb.gr/~dds/contributions.pdf
Diomidis
> From: Clem Cole
> Eric Schienbrood
> .. Noel might remember his MIT moniker
No, alas; and I tried 'finger Schienbrood(a)lcs.mit.edu' and got no result.
Maybe he was in some other part of MIT, not Tech Sq?
> From: Arnold Skeeve
> Here too I think stuff written at ATT got out through Berkeley. (SCCS)
That happened at MIT, too - we had SCCS quite early (my MIT V6 manual has
it), plus all sorts of other stuff (e.g. TROFF).
I think some of it may have come through Jon Sieber, who, while he was in high
school, had been part of (IIRC) a Scout troop which had some association with
Bell Labs, and continued to have contacts there after he became an MIT
undergrad.
Noel
> From: Peter Jeremy <peter(a)rulingia.com>
> Why were the original read(2) and write(2) system calls written to
> offer synchronous I/O only?
A very interesting question (to me, particularly, see below). I don't think
any of the Unix papers answer this question?
> It's relatively easy to create synchronous I/O functions given
> asynchronous I/O primitives but it's impossible to do the opposite.
Indeed, and I've seen operating systems (e.g. a real-time PDP-11 OS I worked
with a lot called MOS) that did that.
I actually did add asynchronous I/O to V6 UNIX, for use with very early
Internet networking software being done at MIT (in a user process). Actually,
it wasn't just asynchronous, it was _raw_ asynchronous I/O! (The networking
device was DMA, and the s/w did DMA directly into the user process' memory.)
The code also allowed more than one outstanding I/O request, too. (So the
input could be re-enabled on the device ASAP, without having to wake up a
process, have it run, do a new read call, etc.)
We didn't redo the whole Unix I/O system, to support/use asyn I/O throughout,
though; I just kind of warted it onto the side. (IIRC, it notified the user
process via a signal that the I/O had completed; the user software then had
to do an sgtty() call to get the transfer status, size, etc.)
Anyway, back to the original topic: I don't want to speculate (although I
could :-); perhaps someone who was around 'back then' can offer some insight?
If not, time for speculation! :-)
Noel
Why were the original read(2) and write(2) system calls written to offer
synchronous I/O only? It's relatively easy to create synchronous I/O
functions given asynchronous I/O primitives but it's impossible to do the
opposite.
Multics (at least) supported asynchronous I/O so the concept wasn't novel.
And any multi-tasking kernel has to support asynchronous I/O internally so
suitable code exists in the kernel.
--
Peter Jeremy
As I was dropping off to sleep last night, I wondered why the superuser
account on Unix is called root.
There's a hierarchy of directories and files beginning at the tree root /.
There's a hierarchy of processes rooted with init. But there's no hierarchy
of users, so why the moniker "root"?
Any ideas?
Cheers, Warren
> Did any Unix or Unix like OS ever zero fill on realloc?
> On zero fill, I doubt many did that. Many really early on when memory
> was small.
This sparks reminiscence. When I wrote an allocation strategy somewhat
more sophisticated than the original alloc(), I introduced realloc() and
changed the error return from -1 to the honest pointer value 0. The
latter change compelled a new name; "malloc" has been with us ever since.
To keep the per-byte cost of allocation low, malloc stuck with alloc's
nonzeroing policy. The minimal extra code to handle calls that triggered
sbrk had the startling property that five passes through the arena might
be required in some cases--not exactly scalable to giant virtual address
spaces!
It's odd that the later introduction of calloc() as a zeroing malloc()
has never been complemented by a similar variant of realloc().
> Am I the only one that remembers realloc() being buggy on some systems?
I've never met a particular realloc() bug, but realloc does inherit the
portability bug that Posix baked into malloc(). Rob Pike and I
requested that malloc(0) be required to return a pointer distinct from
any live pointer. Posix instead allowed an undefined choice between
that behavior and an error return, confounding it with the out-of-memory
indication. Maybe it's time to right the wrong and retire "malloc".
The name "alloc" might be recycled for it. It could also clear memory
and obsolete calloc().
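[The missing variant is simple to sketch; "rezalloc" is a made-up name
here, and the caller must supply the old size because the malloc interface
offers no portable way to recover it. OpenBSD's later recallocarray()
takes roughly this shape:]

```c
#include <stdlib.h>
#include <string.h>

/* A zeroing counterpart to realloc(): grow (or shrink) a block and
 * clear any newly exposed bytes.  oldsz must be the size the caller
 * originally requested for p. */
void *rezalloc(void *p, size_t oldsz, size_t newsz)
{
    void *q = realloc(p, newsz);
    if (q == NULL)
        return NULL;              /* old block is still valid and unzeroed */
    if (newsz > oldsz)
        memset((char *)q + oldsz, 0, newsz - oldsz);
    return q;
}
```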
Doug
Dave Horsfall:
Today is The Day of the Programmer, being the 0x100'th day of the year.
===
Are you sure you want to use that radix as your standard?
You risk putting a hex on our profession.
Norman Wilson
Toronto ON
> Today is The Day of the Programmer, being the 0x100'th day of the year.
Still further off topic, but it reminds me of a Y2K incident circa 1960.
Our IBM 7090 had been fitted with a homegrown time-of-day clock (no, big
blue did not build such into their machines back then). The most significant
bits of the clock registered the day of the year. On day 0x100 the clock
went negative and the system went wild.
Doug
Hi there,
we just restored our PDP-11/23+, rebuilding a new PSU around a
normal PC PSU and creating the real-time clock needed by some
OSes.
We're wondering what UNIX might run on it :)
http://museo.freaknet.org/en/restauro-pdp1123plus/
bye,
Gabriele
--
[ ::::::::: 73 de IW9HGS : http://freaknet.org/asbesto ::::::::::: ]
[ Freaknet Medialab :: Poetry Hacklab : Dyne.Org :: Radio Cybernet ]
[ NON SCRIVERMI USANDO LETTERE ACCENTATE - NON MANDARMI ALLEGATI ]
[ *I DELETE* EMAIL > 100K, ATTACHMENTS, HTML, M$-WORD DOC and SPAM ]
Today is The Day of the Programmer, being the 0x100'th day of the year.
Take a bow, all programmers...
Did you know that it's an official professional holiday in Russia?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I'll support shark-culling when they have been observed walking on dry land.
On Thu, 10 Sep 2015, Norman Wilson wrote:
> #$%^&*\{{{
>
> NO CARRIER
>
> +++
> ATH
My favourite would be:
+++
(pause - this was necessary)
ATZ
I.e. a reset.
I think there were even worse ones in the Hayes codes, like ATH3 or
something. Dammit, but mental bit-rot is setting in.
Of course, I never did such an evil thing, your honour... Honest! Never!
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I'll support shark-culling when they have been observed walking on dry land.
I've used realloc a lot, and never run into bugs. Maybe I've just
been lucky, or maybe it's that I probably didn't use it that much
until the latter 1980s, and then more with pukka Doug malloc code
than with the stuff floating around elsewhere.
Never mind that sometime around 1995 I found a subtle bug in the
pukka Doug malloc (not realloc): arena blew up badly when presented
with a certain pattern of alternating small and large allocs and frees,
produced by a pukka Brian awk regression test. I had a lot of (genuine)
fun tracking that down, writing some low-level tools to extract the
malloc and free calls and sizes and then a simulator in (what else?)
awk to test different fixes. Oh, for the days when UNIX was that
simple.
I've never heard before of a belief that the new memory belonging
to realloc is always cleared, except in conjunction with the utterly-
mistaken belief that that's true of malloc as well. I don't think it
was ever promised to be true, though it was probably true by accident
just often enough (just as often as with malloc) to fool the careless.
Norman Wilson
Toronto ON
I’ve just had a discussion with my boss about this and he claimed that it did at one point and I said it hasn’t in all the unix versions I’ve ever played with (v6, v7, sys III, V, BSD 2, 3, 4, SunOS and Solaris).
My question to this illustrious group is: Did any Unix or Unix like OS ever zero fill on realloc?
David
I never used realloc(), only malloc() and calloc().
Checking a few unixes I have access to, all realloc()s seem to say
either nothing about the contents of the added memory, or state explicitly
'undefined'.
The only function which zeroes allocated memory is calloc() it seems.
Unixes checked: SCO UNIX 3.2V4.2, Digital Unix 4.0G, Tru64 Unix V5.1B,
HP-UX 11.23, HP-UX 11.31
Cheers
On 9/11/15, tuhs-request(a)minnie.tuhs.org <tuhs-request(a)minnie.tuhs.org> wrote:
> Send TUHS mailing list submissions to
> tuhs(a)minnie.tuhs.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://minnie.tuhs.org/mailman/listinfo/tuhs
> or, via email, send a message with subject or body 'help' to
> tuhs-request(a)minnie.tuhs.org
>
> You can reach the person managing the list at
> tuhs-owner(a)minnie.tuhs.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of TUHS digest..."
>
>
> Today's Topics:
>
> 1. Did realloc ever zero the new memory? (David)
> 2. Re: Did realloc ever zero the new memory? (Jim Capp)
> 3. Re: Did realloc ever zero the new memory? (David)
> 4. Re: Did realloc ever zero the new memory? (Larry McVoy)
> 5. Re: Did realloc ever zero the new memory? (David)
> 6. Re: Did realloc ever zero the new memory? (Larry McVoy)
> 7. Re: Did realloc ever zero the new memory? (Clem Cole)
> 8. Re: Did realloc ever zero the new memory? (Dave Horsfall)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 10 Sep 2015 12:52:45 -0700
> From: David <david(a)kdbarto.org>
> To: tuhs(a)minnie.tuhs.org
> Subject: [TUHS] Did realloc ever zero the new memory?
> Message-ID: <E798E102-2C50-4AB2-92CC-188D05AA951F(a)kdbarto.org>
> Content-Type: text/plain; charset=utf-8
>
> I?ve just had a discussion with my boss about this and he claimed that it
> did at one point and I said it hasn?t in all the unix versions I?ve ever
> played with (v6, v7, sys III, V, BSD 2, 3, 4, SunOS and Solaris).
>
> My question to this illustrious group is: Did any Unix or Unix like OS ever
> zero fill on realloc?
>
> David
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 10 Sep 2015 16:10:41 -0400 (EDT)
> From: Jim Capp <jcapp(a)anteil.com>
> To: david(a)kdbarto.org
> Cc: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Did realloc ever zero the new memory?
> Message-ID: <5962857.12872.1441915841364.JavaMail.root@zimbraanteil>
> Content-Type: text/plain; charset="utf-8"
>
> On every system that I've ever used, I believe that realloc did not do a
> zero fill. There might have been a time when malloc did a zero fill, but I
> never counted on it. I always performed a memset or bzero after a malloc.
> I'm pretty sure that I counted on realloc to NOT perform a zero fill.
>
>
> $.02
>
>
>
> From: "David" <david(a)kdbarto.org>
> To: tuhs(a)minnie.tuhs.org
> Sent: Thursday, September 10, 2015 3:52:45 PM
> Subject: [TUHS] Did realloc ever zero the new memory?
>
> I've just had a discussion with my boss about this and he claimed that it
> did at one point and I said it hasn't in all the unix versions I've ever
> played with (v6, v7, sys III, V, BSD 2, 3, 4, SunOS and Solaris).
>
> My question to this illustrious group is: Did any Unix or Unix like OS ever
> zero fill on realloc?
>
> David
>
> _______________________________________________
> TUHS mailing list
> TUHS(a)minnie.tuhs.org
> https://minnie.tuhs.org/mailman/listinfo/tuhs
>
Can't say much more, really...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Concerned about shark attacks? Then don't go swimming in their food bowl...
---------- Forwarded message ----------
From: Jim Haynes
Cc: greenkeys(a)mailman.qth.net
Subject: Re: [GreenKeys] Teletype Industrial Design
On Fri, 28 Aug 2015, Jack wrote:
> How were they still applying for patents in 1993?
>
> D332,465 (1993) Sokolowski
>
It was filed for in 1988 and was assigned to AT&T Bell Laboratories.
So I guess that was after what was left of Teletype had gone to
Naperville. And what was left of Bell Labs was still part of AT&T,
before the spinoff of Lucent in 1996.
Incidentally if you google for Bell Laboratories the first thing that
comes up is
Bell Laboratories - Home
www.belllabs.com/
An exclusive manufacturer of rodent control products, Bell Laboratories produces
the highest quality rodenticides and other rodent control products available to
...
Just stirring up the gene pool, so to speak... And who hasn't played
chess with a computer, and caught it cheating?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer"
RIP Cecil the Lion; he was in pain for two days, thanks to some brave hunter.
---------- Forwarded message ----------
Date: Sat, 15 Aug 2015 16:49:19 -0400
From: Christian Gauger-Cosgrove
To: David Tumey
Cc: GREENKEYS BULLETIN BOARD <greenkeys(a)mailman.qth.net>
Subject: Re: [GreenKeys] TELETYPE Chess Anyone??
On 15 August 2015 at 16:37, David Tumey via GreenKeys
<greenkeys(a)mailman.qth.net> wrote:
> house. I got to play a game of Chess on a Model 33/PDP-?? and it totally
> blew my mind. I knew that I wanted that to be part of my current teletype
You know, the current version of the SIMH emulator can connect to
serial ports now. If you want, I can help you set up SIMH's PDP-11
simulator running a PDP-11 UNIX, which of course has chess you can
play, so that it'll work on your TTY.
The only required information is: "What serial port is your Teletype's
current loop adapter connected to?" and "What do you want to run? V6, V7,
Ultrix-11, RT-11 (V4, V5.3, V5.7), RSTS/E (V7, V10.1-L), RSX-11/M+,
DSM-11 (kill it with fire)? All of the above?"
Cheers,
Christian
--
Christian M. Gauger-Cosgrove
STCKON08DS0
Contact information available upon request.
______________________________________________________________
GreenKeys mailing list
Home: http://mailman.qth.net/mailman/listinfo/greenkeys
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:GreenKeys@mailman.qth.net
2002-to-present greenkeys archive: http://mailman.qth.net/pipermail/greenkeys/
1998-to-2001 greenkeys archive: http://mailman.qth.net/archive/greenkeys/greenkeys.html
Randy Guttery's 2001-to-2009 GreenKeys Search Tool: http://comcents.com/tty/greenkeyssearch.html
This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html
Ah hah! My stray memory of `Not War?' must date from my TOPS-10 days.
I can't find a trace of the string `love' anywhere in any of the make
sources in Kirk's multi-CD compendium of historic BSD, so it certainly
can't have been from there.
Norman Wilson
Toronto ON
So me, being an uber-geek, tried it on a few boxen again...
On the Mac:
ozzie:~ dave$ make love
make: *** No rule to make target `love'. Stop.
Boring...
On FreeBSD:
aneurin% make love
Not war.
Thank you for keeping the faith!
And on my tame penguin:
dave@debbie:~$ make love
-bash: make: command not found
Sigh...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Watson never said: "I think there is a world market for maybe five computers."
On 30 July 2015 at 17:11, Norman Wilson <norman(a)oclsc.org> wrote:
> My vague memory is that the original make, e.g. in V7, printed
> `Don't know how to make love.' This was not a special case:
> `don't know how to make XXX' was the normal message.
>
> There was a variant make that printed
> Not war?
> if asked to make love without explicit instructions. I thought
> that appeared in 3BSD or 4BSD, but I could be mistaken.
On Solaris 10, /usr/css/bin/make reports:
make: Fatal error: Don't know how to make target `love'
N.
My vague memory is that the original make, e.g. in V7, printed
`Don't know how to make love.' This was not a special case:
`don't know how to make XXX' was the normal message.
There was a variant make that printed
Not war?
if asked to make love without explicit instructions. I thought
that appeared in 3BSD or 4BSD, but I could be mistaken.
Norman Wilson
Toronto ON
I recall playing with this on the -11, but it seems to have become extinct
(the program, I mean). I seem to recall that it was written in PDP-11
assembly; did it ever get rewritten in C?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer"
"The complexity for minimum component costs has increased at a rate of
roughly a factor of two per year." -- G.Moore, Electronics, Vol 38 No 8, 1965
> The punning pseudonym, the complaint at the end that Unix and C are dead
> and nothing is even on the horizon to replace them, and the general snarky
> tone suggest to me that it's Rob "Mark V. Shaney" Pike. In that case,
> the affiliation with Bellcore is a blind ("not Goodyear, Goodrich").
VSM, MVS: what other mystery authors in Unix land identify thus with VMS?
Doug
I authored those files so I could render the Seventh Edition manuals
as PDF in 1998 (long after I had departed the Labs). As pic did not
exist yet (Kernighan had not written it) there were never any
original pic files for these documents. I do not know what 1127 was
doing to publish diagrams at the time.
The Bell logo I did directly in PostScript so \(bs would render. The logo
was originally its own custom "character", just like an A, B or C, on the
phototypesetter's optical font wheel.
You can see what they looked like from the v7 PDF manuals --
In Volume 2A (http://plan9.bell-labs.com/7thEdMan/v7vol2a.pdf)
bs.ps is on variety of pages such as 129, 130, 216
ms.pic is on page 127
make.ps is on page 282
In Volume 2B (http://plan9.bell-labs.com/7thEdMan/v7vol2b.pdf)
implfig1.pic is on page 162
implfig2.pic is on page 168
these are the PDF page numbers (where the title is page 1)
> From: Mark Longridge <cubexyz(a)gmail.com>
>
> I came across some Unix files in v7add such as bs.ps for the Bell logo
> and ms.pic (described as Figure 1 for msmacros).
>
> http://www.maxhost.org/other/ms.pic
>
> I was wondering if there was some viewer or conversion program so we
> could look at pic files from this era?
>
> Mark
>
>
I came across some Unix files in v7add such as bs.ps for the Bell logo
and ms.pic (described as Figure 1 for msmacros).
http://www.maxhost.org/other/ms.pic
I was wondering if there was some viewer or conversion program so we
could look at pic files from this era?
Mark
Peter Salus noted there was a workshop in Newport, RI, in 1984 concerning
"Distributed UNIX." The report on “Distributed UNIX” by Veigh S. Meer [a
transparent pseudonym] appeared in /;login:/ 9.5 (November 1984), pp.
5-9. So who was "Veigh S. Meer"? The affiliation says "Bellcore," but
who was there in 1984? Peter's first thought was Peter Langston.
Any ideas?
Debbie
Dave Horsfall:
Call me slow, but can someone please explain the joke? If it's American
humo[u]r, then remember that I'm British/Australian...
There is no such thing as American humour, because Yanks don't know
how to spell.
They can't get the wood either.
Norman Wilson
(reformed Yank)
Toronto ON
When I get back to a keyboard I thought maybe it would be nice to share
some Greg stories; I have enough of them. So I'll try and do that.
If anyone wants to do that in person, USENIX ATC is next
week and would be an appropriate venue. Perhaps a Greg
Chesson Memorial BOF?
Norman Wilson
Toronto ON
Much to my surprise I see there isn't a Wiki page on Greg Chesson yet.
Maybe some of his friends could get together and submit one?
Cheers,
rudi
On 6/30/15, tuhs-request(a)minnie.tuhs.org <tuhs-request(a)minnie.tuhs.org> wrote:
> Send TUHS mailing list submissions to
> tuhs(a)minnie.tuhs.org
>
> Today's Topics:
>
> 1. We've lost Greg Chesson (Dave Horsfall)
> 2. Re: We've lost Greg Chesson (Clem Cole)
> 3. Re: We've lost Greg Chesson (Larry McVoy)
> 4. Re: We've lost Greg Chesson (Larry McVoy)
> 5. Re: We've lost Greg Chesson (Mary Ann Horton)
> 6. Re: We've lost Greg Chesson (Norman Wilson)
> 7. Re: We've lost Greg Chesson (Dave Horsfall)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 29 Jun 2015 17:30:16 +1000 (EST)
> From: Dave Horsfall <dave(a)horsfall.org>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: [TUHS] We've lost Greg Chesson
> Message-ID: <alpine.BSF.2.11.1506291728540.96902(a)aneurin.horsfall.org>
> Content-Type: text/plain; charset="utf-8"
>
> Haven't found any more info...
>
> --
> Dave Horsfall (VK2KFU) "Those who don't understand security will suffer"
> http://www.horsfall.org/ It's just a silly little web site, that's all...
>
> ---------- Forwarded message ----------
> Date: Sun, 28 Jun 2015 18:50:42 -0400
> From: Dave Farber
> To: ip <ip(a)listbox.com>
> Subject: [IP] Death of Greg Chesson
>
> ---------- Forwarded message ----------
> From: "Lauren Weinstein"
> Date: Jun 28, 2015 6:43 PM
> Subject: Death of Greg Chesson
> To: <dave(a)farber.net>
> Cc:
>
>
> Dave, fyi.
>
> https://plus.google.com/u/0/+LaurenWeinstein/posts/bRdbj1B1qQG
>
> --Lauren--
> Lauren Weinstein (lauren(a)vortex.com) http://www.vortex.com/lauren
> Founder:
> ?- Network Neutrality Squad: http://www.nnsquad.org
> ?- PRIVACY Forum: http://www.vortex.com/privacy-info
> Co-Founder: People For Internet Responsibility:
> http://www.pfir.org/pfir-info
> Member: ACM Committee on Computers and Public Policy
> Lauren's Blog: http://lauren.vortex.com
> Google+: http://google.com/+LaurenWeinstein
> Twitter: http://twitter.com/laurenweinstein
> Tel: +1 (818) 225-2800 / Skype: vortex.com
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 29 Jun 2015 07:44:58 -0500
> From: Clem Cole <clemc(a)ccc.com>
> To: Dave Horsfall <dave(a)horsfall.org>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] We've lost Greg Chesson
> Message-ID:
> <CAC20D2MqtPfMe_SdkCfFFthF_5Rth8y2Q6U2xm-ZgbkAA0tQZg(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Greg had been sick for a while. Sad loss.
> Clem
>
> On Mon, Jun 29, 2015 at 2:30 AM, Dave Horsfall <dave(a)horsfall.org> wrote:
>
>> Haven't found any more info...
>>
>> --
>> Dave Horsfall (VK2KFU) "Those who don't understand security will
>> suffer"
>> http://www.horsfall.org/ It's just a silly little web site, that's
>> all...
>>
>> ---------- Forwarded message ----------
>> Date: Sun, 28 Jun 2015 18:50:42 -0400
>> From: Dave Farber
>> To: ip <ip(a)listbox.com>
>> Subject: [IP] Death of Greg Chesson
>>
>> ---------- Forwarded message ----------
>> From: "Lauren Weinstein"
>> Date: Jun 28, 2015 6:43 PM
>> Subject: Death of Greg Chesson
>> To: <dave(a)farber.net>
>> Cc:
>>
>>
>> Dave, fyi.
>>
>> https://plus.google.com/u/0/+LaurenWeinstein/posts/bRdbj1B1qQG
>>
>> --Lauren--
>
---------- Forwarded message ----------
Subject: Re: [GreenKeys] Replacing Chad, was: Black tape
On Sun, 21 Jun 2015, Jones, Douglas W wrote:
> The flaws in the Votomatic were a bit subtle, but in retrospect, if you
> read the patents for the IBM Portapunch (the direct predecessor of the
> Votomatic) and for the Votomatic, you find that the flaw that was its
> eventual downfall -- at least in the public's mind -- is fully
> documented in the patents. Of course IBM's (and later CESI's, after IBM
> walked away from the Votomatic in the late 1960s) salesmen never
> mentioned those flaws.
Portapunch? Arrgghh!
We had to put up with that Satan-spawn in our University days... Only 40
columns wide, it had specific encodings for FORTRAN words, and it was just
as well that FORTRAN ignored white space (I know this for a fact, when I
fed a deck into a "real" card reader one time).
Being but mere impecunious Uni students and having to actually buy the
things, we resorted to fixing mis-punches by literally sticky-taping the
chads back.
For some reason, the computer operators (IBM 360/50) hated us...
I really do hope that the inventor of Portapunch is still having holes
punched through him with a paper-clip.
Anyone got Plan 9 4th edition manuals or Inferno manuals? I would be
interested in buying them. Vitanuova used to sell them, but their payment
system has been down for years now; this has been brought up to Charles
many times, but it seems to still be down, so there is no way to buy them
from them.
--
Ramakrishnan
Hi All.
As mentioned a few weeks ago, I have a full set of the O'Reilly X11 books
from the early 90s.
I'm willing to send them on to a better home for the cost of mailing
from Israel.
One person said they were interested but didn't follow up with me, so
I'm again offering to the list. First one to get back to me wins. :-)
Thanks,
Arnold
Prompted by another thread, I decided to share about some of my
experience with providing printed BSD manuals.
I was given a 4.4BSD set with the understanding that I would work on
preparing new print editions using NetBSD. It was a significant
undertaking. I ended up just doing Section 8 System. Here is a summary
of what I did:
- Build the NetBSD distribution (which gets the manual pages generated
or at least put in place).
- Manual cleanup, like removing a link to a manual page that wasn't needed
and removing a duplicated manual (in two sub-sections).
- Learned about permuted index (the long KWIC index cross-reference).
Generated a list of characters and terms to ignore for building my
permuted index. Wrote script to generate it, including converting to
LaTeX using longtable. This resulted in 2937 entries and was 68 pages in
printed book.
- Create a list of all section 8 pages, pruned for duplicate inodes.
- Also have a list of 40 filenames of other manpages to include in the
man8 book. These are system maintenance procedures or commands that are
in wrong section or could be section 8 (or weren't installed). (Examples
are ipftest, pkg_admin, postsuper.)
- Generate a sorted list of all the manuals.
- Look for any missing manual pages. Wrote a script to check for libexec
or sbin tools not in man8; for example, supfilesrv and supscan are really
supservers.8, and kdigest.8 was missing. Got those files in place as
needed. I cannot remember now, but I think I may have written some missing
manuals or got others to submit some (officially).
- Script to make sure all man pages are in order. This found some
duplicate manual pages with different inodes, wrong man macro DT values,
wrong filename, wrong sections, etc. Some of these were reported
upstream or fixed.
- Script to create the book as a single huge postscript file, then a
PDF. Reviewed the possible ROFF related errors and warnings.
(On 2009-10-23, it was 1304 pdf pages from 572 manuals.)
- Script to figure out licenses. This was substantial! It looked for
copyright patterns in manual source, excluded junk formatting like
revision control markers, include some extra licenses that weren't
included in the manpage itself (like GPL2). Then another script to
generate LaTeX from the copyrights and licenses. It removed duplicate
license statements and sorted the copyrights. So some license
statements had many copyrights using the same license verbiage.
This represented 620 copyrighted files with approximately 683 copyright
lines and 109 different licenses. Yes 109! That resulted in
approximately 68 printed pages, pages 1461 through 1529. This didn't
duplicate any license verbiage. (I just realized that was the same
length as my permuted index.)
A few things to note: Some authors chose to use different names for
different copyright statements. Some authors used their names or
assigned the copyright to the project. Some licenses included software
or authors names instead of generic terms. Many BSD style licenses were
slightly changed with different grammar, etc. Many contributors created
their own license or reworded someone else's existing license text. As an
example: "If it breaks then you get to keep both pieces." :)
The four most common license statements represented 113 "THE REGENTS AND
CONTRIBUTORS" manuals, 75 "THE NETBSD FOUNDATION, INC." manuals, and 35
"IBM" (aka Postfix) manuals, and 30 generic "THE AUTHOR AND
CONTRIBUTORS" manuals.
I found many were missing licenses. I hunted down original authors,
looked in CVS history, etc to help resolve some of these. I also
reported about still missing licenses to the project. We will assume
they meant it is open source and can be distributed since nobody has
complained for years (even prior to my printed work) :)
- Generated a list of required advertising acknowledgments in LaTeX to
import into one printed book (and for webpage).
- Split my long PDF into two volumes. Used LaTeX pdfpages package and
includepdf to include the generated PDFs. I made sure the page numbers
continued in the second book from end of previous volume. (After
printing, I realized a mistake where the second volume had odd numbers
on left pages, but order is still correct, so I assume nobody else
noticed.)
Historically, the System Manager's Manual (SMM) also included other
system installation and administration documentation (in addition to the
manual pages). My work didn't include that documentation (some of which
was unmaintained since 4.4BSD in 1993 and covers some software and
features that are no longer included with NetBSD). That could be another
project.
I only did the SMM / manual section 8. I realized if I did all manuals,
my book set would be well over ten thousand printed pages. The amount of
work and initial printing costs would not be worth it with the little
money it could bring in. It was certainly a learning experience, plus
some benefit such as cleanup of some mandoc/roff code, filename renames,
copyright/license additions, and manual pages added.
Jeremy C. Reed
echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \
tr '#-~' '\-.-{'
I recently came across this:
http://www.cs.princeton.edu/~bwk/202
It's been there for a while but I hadn't noticed it. It describes the
trials and tribulations of getting the Mergenthaler 202 up and running
at Bell Labs and is very interesting reading.
I have already requested that they archive their work with TUHS and
gotten a positve response about this from David Brailsford. In the
meantime, it's fun reading!
Enjoy,
Arnold
I noticed that the assembly source file for blackjack is missing from
the source tree so I tried to recreate it, so far unsuccessfully.
My first idea was to grab bj.s from 2.11BSD and assemble it with the Unix
v5 as command. That seems to generate a bunch of errors. Also, other
assembly source files don't seem to have .even in them.
Another idea would be to generate the source code from the executable
itself, but there doesn't seem to be a disassembler for early Unix.
It's possible that v5 bj.s was printed out somewhere but so far no
luck finding it.
Mark
Mark Longridge:
chmod 0744 bj
Dave Horsfall:
That has to be the world's oddest "chmod" command.
======
Not by a long shot.
Recently, for reasons related both to NFS permissions and to
hardware testing, I have occasionally been making directories
with mode 753.
At the place I worked 20 years ago, we wanted a directory
into which anonymous ftp could write, so that people could
send us files; but we didn't want it to become a place for
creeps to stash their creepy files. I thought about the
problem briefly, then made the directory with mode 0270,
owned by the user used for anonymous ftp and by a group
containing all the staff members allowed to receive files
that way. That way creeps could deposit files but couldn't
see what was there.
I also told cron to run every ten minutes, changing the
permissions of any file in that directory to 0060.
Oh, and I had already maniacally (and paranoiacally)
excised from ftpd the code allowing ftp to change permissions.
I admit I can't think of a reason to use 744 offhand, since
if you can read the file you can copy it and make the copy
executable. But UNIX permissions can be used in so many
interesting ways that I'm not willing to claim there is no
such reason just because I can't see what it is.
Norman Wilson
Toronto ON
OK, success...
in Unix v5:
as bj.s etc.s us.s
ld a.out -lc
mv a.out bj
chmod 0744 bj
It seems to work OK now. Probably should work on v6 and v7 as well.
Mark
> From: Mark Longridge <cubexyz(a)gmail.com>
> My first idea was to grab bj.s from 2.11BSD and assemble it the Unix v5
> as command. That seems to generate a bunch of errors. Also other
> assembly source files don't seem to have .even in them.
My first question was going to be 'Maybe try an earlier version of the
source?', but I see there is no earlier version online. Odd. ISTR that some
of the fun things in V6 came without source, maybe blackjack was the same way?
> Another idea would be generate the source code from the executable
> itself, but there doesn't seem to be a disassembler for early Unix.
Where's the binary? I'd like to take a look at it, and see if the source was
assembler, or C (there's a C version in the source tree, too). Then I can
look and see how close it is to that 2.11 source - that may be a
re-implementation, and totally different.
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> OK, success...
Yeah, I just got there too, but by a slightly longer route!
(Google wasn't turning up the matches for the routines I needed, which you
found in etc.s, etc - it seems the source archive on Minnie isn't being
indexed by Google. So I wound up cobbling them together with a mix of stuff
from other places, along with stuff I wrote/modified.)
> Probably should work on v6 and v7 as well.
Does on V6, dunno about V7.
> It seems to work OK now.
Yes, but this is _not_ the source for the V5/V6 'bj'. (I just checked,
and the V5 and V6 binaries are identical.)
Right at the moment, I've used enough time on this - I may get back to
it later, and disassemble the V5/V6 binary and see what the original
source looks like.
Noel
> From: Noel Chiappa
> another assembler source file, which contains the following routines
> which are missing from bj.s:
I missed some. It also wants quest1, quest2 and quest5 (and maybe more).
This may present a bit of a problem, as I can't find any trace of them
anywhere, and will have to work out from the source what their arguments,
etc are, what they do, etc.
I wonder how on earth the 2.11 people got this to assemble? (Or maybe they
didn't?)
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> My first idea was to grab bj.s from 2.11BSD and assemble it the Unix v5
> as command. That seems to generate a bunch of errors.
I saw that there's a SysIII bj.s, which is almost identical to the 2.11 one;
so the latter is probably descended from the first, which I assume is Bell
source. So I grabbed it and tried to assemble it.
The errors are because bj.s is designed to be assembled along with another
assembler source file, which contains the following routines which are
missing from bj.s:
mesg
decml
nline
Dunno if you're aware of this, but, the line 'as a.s b.s' _doesn't_
separately assemble a.s and b.s, rather it's as if you'd typed
'cat a.s b.s > temp.s ; as temp.s'. (This is used in the demi-enigmatic
"as data.s l.s" in the system generation procedure.)
I looked around in the sources that come with V6, and I didn't see any obvious
such file. I'm going to whip the required routines up really quickly, and see
if the results assemble/run.
I looked to see if I could steal them from the binary of 'bj' on V6, and...
it looks like that binary is totally different from this source. Let me look
into this...
> Also other assembly source files don't seem to have .even in them.
The V6 assembler groks '.even'.
Noel
Hello All.
FYI.
Warren - can you mirror?
> Date: Thu, 11 Jun 2015 04:41:39 -0400 (EDT)
> From: Brian Kernighan <bwk(a)cs.princeton.edu>
> Subject: dmr web site (fwd)
>
> Finally indeed. I can't recall who else asked me about
> Dennis's pages, so feel free to pass this on.
> And someone ought to make a mirror. If I were not far
> away at the moment, I'd do so myself.
>
> Brian
>
> ---------- Forwarded message ----------
> Date: Tue, 09 Jun 2015 16:32:13 -0400
> To: Brian Kernighan <bwk(a)CS.Princeton.EDU>
> Subject: dmr web site
>
> finally, try this: https://www.bell-labs.com/usr/dmr/www/
>
> It's almost a complete copy of Dennis Ritchie's pages, with some
> adaptation needed for the new location. There are a few broken links,
> but hopefully they're not too annoying.
This new paper may be of interest to list readers:
Dan Murphy
TENEX and TOPS-20
IEEE Annals of the History of Computing 37(1) 75--82 (2015)
http://dx.doi.org/10.1109/MAHC.2015.15
In particular, the author notes on page 81:
>> ...
>> The fact that UNIX was implemented in a reasonably portable language
>> (at least as compared with 36-bit MACRO) also encouraged its spread to
>> new and less expensive machines. If I could have done just one thing
>> differently in the history of TENEX and TOPS-20, it would be to have
>> coded it in a higher-level language. With that, it's probable that the
>> system, or at least large parts of it, would have spread to other
>> architectures and ultimately survived the demise of the 36-bit
>> architecture.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Just cuz of this list and recent comments about SVRx, I noticed this comment in ftrap.s:
fpreent: # this is the point we return to
# when we are executing the n+1th
# floating point instruction in a
# contiguous sequence of floating
# point instructions (floating
# pointlessly forever?)
Makes me wonder how many other humorous comments are buried in the code.
David
Oh, if no one out there has a SVR3.1 distribution (apparently for the 3b2), I’ve got one to send out….
Since early 2013 I've occasionally asked this list for help, and shared
the progress regarding the creation of a Unix Git repository containing
Unix releases from the 1970s until today [1].
On Saturday I presented this work [2, 3] at MSR '15: The 12th Working
Conference on Mining Software Repositories, and on Sunday I discussed
the work with the participants over a poster [4] (complete with commits
shown in a teletype (lcase) and a VT-220 font). Amazingly, the work
received the conference's "Best Data Showcase Award", for which I'm
obviously very happy.
I'd like to thank again the many individuals who contributed to the
effort. Brian W. Kernighan, Doug McIlroy, and Arnold D. Robbins helped
with Bell Labs login identifiers. Clem Cole, Era Eriksson, Mary Ann
Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze, and Anatole Shaw
helped with BSD login identifiers. The BSD SCCS import code is based on
work by H. Merijn Brand and Jonathan Gray.
A lot of work remains to be done. Given that the build process is
shared as open source code, it is easy to contribute additions and fixes
through GitHub pull requests on the build software repository [5], but
if you feel uncomfortable with that, just send me email. The most useful
community contribution would be to increase the coverage of imported
snapshot files that are attributed to a specific author. Currently,
about 90 thousand files (out of a total of 160 thousand) are getting
assigned an author through a default rule. Similarly, there are about
250 authors (primarily early FreeBSD ones) for which only the identifier
is known. Both are listed in the build repository's unmatched directory
[6], and contributions are welcomed (start with early editions; I can
propagate from there). Most importantly, more branches of open source
systems can be added, such as NetBSD, OpenBSD, DragonFlyBSD, and illumos.
Ideally, current right holders of other important historical Unix
releases, such as System III, System V, NeXTSTEP, and SunOS, will
release their systems under a license that would allow their
incorporation into this repository. If you know people who can help in
this, please nudge them.
--Diomidis
[1] https://github.com/dspinellis/unix-history-repo
[2]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
(HTML)
[3]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
(PDF)
[4]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/poster.pdf
(105MB)
[5] https://github.com/dspinellis/unix-history-make
[6]
https://github.com/dspinellis/unix-history-make/tree/master/src/unmatched
All, I finally remembered to export the unix-jun72 project over to Github:
https://github.com/DoctorWkt/unix-jun72
This was our effort to bring the 1st Edition Unix kernel back to life
along with the early C compilers and the 2nd Edition userland binaries.
Cheers, Warren
>
> On Thu, May 21, 2015 at 11:49 AM, Clem Cole <clemc(a)ccc.com <mailto:clemc@ccc.com>
> <mailto:clemc@ccc.com <mailto:clemc@ccc.com>>> wrote:
>
> HP/UX is an SVR3 & OSF/1 ancestor. Solaris is SVR4. In fact
> it was the SVR4 license and deal between Sun and AT&T that
> forced the whole OSF creation. One of the "principles" of the
> OSF was "Fair and Stable" license terms.
>
> Which begs a question - since Solaris was SVR4 based and was
> made freely available via OpenSolaris et al, does that not
> make SVR4 open? I'm not a lawyer (nor play one on TV), but
> it does seem like that sets some sort of precedent.
This is indeed an interesting question. During the IBM vs SCO debacle,
IBM requested that TMGE be used as an example to prove
that the SVR4 kernel algorithms were already out in the public domain,
and thus set the precedent. And this was also (eventually) approved by
AT&T for publication.
> From: Mary Ann Horton
> I have 5 AT&T SVR4 tapes among them .. Is it worth recovering them?
I would say that unless they are _known_ to be in a repository somewhere, yes
(unless it's going to cost a fortune - SVR4 isn't _that_ key a step in the
evolution, I don't think [but I stand to be corrected :-]).
Noel
Hi.
Can anyone still read 9 track tapes? We just uncovered two that date
from my wife's time in grad school, circa 1989 - 1990. They would
be tar tapes.
Thanks!
Arnold
A fantastic curatorial exploit!
> Deadly quote "and nobody cares about that early code history any more
> --so this is all water under the bridge."
This particular metaphor always reminds me of the Farberism: "That's
water over the bridge." Dave, a major presence at Bell Labs, master
malaprop, friend of many and collaborator with several of the early
Unix team, may be counted as an honorary Unixian.
Doug
> From: Aharon Robbins
> Can anyone still read 9 track tapes? We just uncovered two that date
> from my wife's time in grad school, circa 1989 - 1990. They would be
> tar tapes.
That old, the issue is not going to be the format (TAR is still grokkable),
but the physical condition of the tapes; that old, they might have issues
with shedding of oxide, etc (which a heat soak can mostly cure). If you
really want them read, I would recommend a specialist in reading old tapes;
it will cost, but if you really want the data... I have used Chuck Guzis, in
Washington (state) in the US.
Noel
Hoi.
What started as the plan to write a short portrait of cut(1)
for a free German online magazine (translation to English is
not done yet) became a closer look at the history of cut(1).
Well, the topic got me hooked at some point. The text is still
only about eight pages long and far from scientific quality,
but it features some facts not found in Wikipedia. ;-)
So, let me come to my questions.
1) The oldest version of cut that I found is this one in System III.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd
(The file date says 1980-04-11.) As the sccsid reads version 1.5,
there must be older code. How can I find it? Is there a story of
how cut appeared for the first time?
2) As far as I found out, POSIX.2-1992 introduced the byte mode
(-b) and added multi-byte support for the character mode. Is
this correct?
3) Old BSD sources reference POSIX Draft 9 (which, it seems,
they implement) but lack multi-byte support and the byte mode.
They also support decreasing lists, which, they state, POSIX
Draft 9 would not.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-Reno/src/usr.bin/cut/cu…
The only POSIX.2 Draft I have access to is Draft 11.2.
http://www.nic.funet.fi/pub/doc/posix/p1003.2/d11.2/all.ps
It does specify the multi-byte stuff and does also allow
decreasing lists. Hence, it appears that these things were
added sometime between Draft 9 and Draft 11.2. Does anyone
know details?
It would be great, if you can give me some pointers for
further research or even share some cut-stories from the
good old days. :-)
meillo
P.S.
In case you understand German, feel free to have a look at the
current version of the text: http://hg.marmaro.de/docs/cut/file/
I welcome your comments, but bear with me, the text isn't meant
to become a doctoral thesis; I just want to write it for fun and
to learn about the historical background.
Hello All.
I have a full set of the O'Reilly X reference manuals - Volumes 1-5, 6a
and 6b. Also "The X Window System In A Nutshell". These are all from
like the mid-1990s.
Are they worth hanging onto?
If not, does anyone on this list want them? If so, I'll send them for
the cost of postage from Israel.
Thanks,
Arnold
> From: Dave Horsfall <dave(a)horsfall.org>
>> In V6, the bootstrap in block 0 prompts for a file name, and when that
>> is entered, it loads that file into memory and starts it. (It doesn't
>> have to be in the root directory, IIRC - I'm pretty sure the bootstrap
>> will accept full path names.)
> I'm pretty sure that it didn't have the full namei() functionality, so
> all files had to be in the root directory.
It depends on what you mean by the first "it" above - if you meant 'V6', then
no. From the Distribution V6's /src/mdec/fsboot.s:
/ read in path name
/ breaking on '/' into 14 ch names
The process of breaking the name up into segments, and then later finding
each name in the appropriate directory, can be seen. The code is kind of
obscure, but if you look at the RL bootstrap I disassembled:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/rlboot.s
it's pretty much the same code, and a little better commented in the 'read
in directories' part.
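For the curious, the 14-character splitting the bootstrap does can be sketched in C along these lines (a paraphrase with invented names, not the actual fsboot.s logic):

```c
#define DIRSIZ 14  /* a V6 directory entry holds a 14-byte name */

/* Break a path such as "usr/mdec/fsboot" into successive DIRSIZ-byte
 * fields, each NUL-padded, the way the block-0 bootstrap does before
 * looking each component up in the current directory.  Returns the
 * number of components stored; 'names' must have room for 'max'. */
int splitpath(const char *path, char names[][DIRSIZ], int max)
{
    int n = 0;

    while (*path == '/')                /* skip leading slashes */
        path++;
    while (*path != '\0' && n < max) {
        int i = 0;
        while (*path != '\0' && *path != '/') {
            if (i < DIRSIZ)             /* names longer than 14 truncate */
                names[n][i++] = *path;
            path++;
        }
        while (i < DIRSIZ)              /* NUL-pad to the full field */
            names[n][i++] = '\0';
        n++;
        while (*path == '/')
            path++;
    }
    return n;
}
```

Feeding it "/usr/bin/sh" yields the three padded fields "usr", "bin", and "sh", each ready to be matched byte-for-byte against directory entries.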
Noel
> From: Mark Longridge
> I'm not sure where Unix v1 is loading the kernel from.
> From: Warren Toomey
> Have a look here: https://code.google.com/p/unix-jun72/
Thanks for the pointer! From poking around there, it looks like V1 had
special 'cold boot' and 'warm boot' disk partitions.
I wonder why they lost the 'warm boot' capability in later versions? Maybe it
became reliable enough that the extra complexity of supporting it wasn't
worth it?
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I'm not sure where Unix v1 is loading the kernel from. .. In all the
> other versions of Unix there was always a file like 'unix' in the root
> directory but I guess Unix v1 was different?
I don't know much about the other versions, but it would all depend on what's
in the bootstrap (usually contained in block 0 of drive 0, at least on older
11's). In V6, the bootstrap in block 0 prompts for a file name, and when that
is entered, it loads that file into memory and starts it. (It doesn't have
to be in the root directory, IIRC - I'm pretty sure the bootstrap will accept
full path names.)
How did you create a V1 filesystem? (I don't know, BTW, what they look like -
is that documented anywhere?) It's probably not the same layout as the V6
(which I think is the same as V5).
Noel
Ok, I looked around for the instructions on how to assemble the Unix
v1 kernel and couldn't find anything so I tried:
as u0.s u1.s u2.s u3.s u4.s u5.s u6.s u7.s u8.s u9.s ux.s
and that made a.out and I stripped it and it looked like it was around
the same size as /etc/core (it was 16400 bytes rather than 16448 for
some reason).
I'm not sure where Unix v1 is loading the kernel from. I'm guessing
it's /etc/core and if that's the case then I must have been successful
building the kernel. In all the other versions of Unix there was
always a file like 'unix' in the root directory but I guess Unix v1
was different?
Mark
> From: Sergey Lapin
> Is there some archives of project Athena?
> I'd like to see how it was back then...
There is a _very_ extensive online archive of stuff here:
http://web.mit.edu/afs/
and what you're looking for might be in there _somewhere_.
If not, I know some people I can ask (I never used it myself). But, if so, a
bit more detail? Athena was huge, presumably you don't want all the students'
files! But just the operating system? (IIRC it was initially mostly 4.3BSD,
with some minor extensions.) Or the tools and applications they wrote as well?
(E.g. X-Windows, IIRC, came from Athena.) Most of that does seem to be in
that archive.
Noel
To tell whether Ken installed v6 or a copy of his home
system, look at /usr/dict/words. On the home system
that file is the wordlist from Webster's Collegiate
Dictionary, 7th edition, licensed for Bell Labs use
only. On distribution systems we substituted the wordlist
for "spell". The latter list contains many more proper
names, acronyms, etc than the dictionary did, in
particular names that appear in Unix documentation
such as Ritchie, Kernighan, and McIlroy. It also lacks
lots of trivially derivable words like "organizationally".
If you do have the Webster file rather than the spell
file, please don't propagate it.
Doug
I'm experimenting with adapting Unix history and lore using the new
EXPECT/SEND feature in simh. My favorite guinea pig is the story of Ken
Thompson's sabbatical at Berkeley, where he brings up V6 on a new 11/70 with
Bob Kridle and Jeff Schriebman. Any details not yet recorded in obvious
places[1] are of course more than welcome!
One of the things I'm trying to get right is what they actually brought
up there initially in 1975. This must have been standard V6 or the
Bell UNIX Ken brought with him, but I can't figure it out.
Salus has Schriebman, Haley and Joy installing the fixes on the 50 bugs
tape late summer 1976. This suggests it was stock V6 initially, but they
might have been playing on a different system or working from a fresh
install in 1976.
If it was stock V6 initially, what were they waiting for? Legal stuff?
If it was 1975 Bell UNIX, can I reconstruct this using the 54 patches
collected by Mike O'Brien[2], or is that going to be way off from what
Thompson left in Urbana-Champaign with Greg Chesson in 1975?
[1] http://www.tuhs.org/books.html minus the Bell journals for example
[2] Hidden in /usr/sys/v6unix/unix_changes in one of the Spencer tapes
http://www.tuhs.org/Archive/Applications/Spencer_Tapes/unsw3.tar.gz
> Does anything at all exist of PDP-7 Unics? All I know about is that
> there was a B language interpreter. Maybe a printout of the manual has
> survived?
There was no manual.
doug
Ok, the first question is:
Has anyone got Unix sysv running on PDP-11 via simh?
I downloaded some files from archive.org which included the file
'sys_V_tape' but so far I haven't got anywhere with it. Looks
interesting though.
Second question is:
What is the deal with Unix version 8? Except for the manuals v8 seems
to have disappeared into the twilight zone. Wikipedia doesn't say
much, only "Used internally, and only licensed for educational use".
So can we look at the source code? Was it sold in binary form only?
Ok, now the big question:
Does anything at all exist of PDP-7 Unics? All I know about is that
there was a B language interpreter. Maybe a printout of the manual has
survived?
Mark
Mark Longridge:
What is the deal with Unix version 8? Except for the manuals v8 seems
to have disappeared into the twilight zone. Wikipedia doesn't say
much, only "Used internally, and only licensed for educational use".
So can we look at the source code? Was it sold in binary form only?
=======
The Eighth Edition system was never released in any general way,
only to a few educational institutions (I forget the number but
it was no more than a dozen) under specific letter agreements that
forbade redistribution. It was never sold, in source or binary or
any other form; the tape included a bootstrap image and full source
code.
I was involved in all this--in fact one of the first nontrivial
things I did after arriving at Bell Labs was to help Dennis assemble
the tape--but that was more than 30 years ago and the details have
faded. The system as distributed ran only on the VAX-11/750 and
11/780. The bootstrap image on the tape was probably more restrictive
than that; if one of the licensees needed something different to
get started we would have tried to make it, but I don't remember
whether that ever happened.
Later systems (loosely corresponding to the Ninth and Tenth editions
of the manual) ran on a somewhat wider set of VAXes, in particular
the MicroVAX II and III and the VAX 8700 and 8550 (but not the dual-
processor 8800). There was never a real distribution of either of
those systems, though a few sites made special requests and got
hand-crafted snapshots under the same restrictive letter agreement.
So far as I know, no Research UNIX system after 7/e has ever been made
available under anything but a special letter agreement. There was
at one point some discussion amongst several interested parties
(including me and The Esteemed Warren Toomey) about strategies to
open up the later source code, but that was quashed by the IBM vs
The SCO Group lawsuit. It would likely be very hard to make happen
now, because I doubt there's anyone left inside Bell Labs with both
the influence and the interest, though I'd be quite happy to be
proven wrong on that.
I know of one place in the world where (a descendant of) that
system is still running, but I am not at the moment in a position
to say where that is. I do know, however, of at least two places
where there are safe copies of the source code, so it is unlikely
to disappear from the historic record even if that record cannot
be made open for a long time.
Norman Wilson
Toronto ON
(Computing Science Research Centre, Bell Labs, 1984-1990)
There was a posting on the SIMH list today from Joerg Hoppe
<j_hoppe(a)t-online.de> about a project to build a microfiche scanner
that has now successfully converted 53,545 document pages to
electronic form, and the files are being uploaded to the PDP-11
section of bitsavers.org. The scanner is described here:
http://retrocmp.com/projects/scanning-micro-fiches
There are links on that page to the rest of the story. It is an
amazing piece of work for a single person.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Claude Shannon passed away on this day in 2001.
Regarded as the Father of Information Theory, I doubt whether you'll go
through a day without bumping into him: computers, electronics, file
compression, audio sampling, you name it and he was probably behind it.
Please take a moment to remember him.
--
Dave Horsfall DTM (VK2KFU) "Bliss is a MacBook with a FreeBSD server."
http://www.horsfall.org/spam.html (and check the home page whilst you're there)
> From: Mark Longridge
> There's no reason for it to be mode 777 is there?
Not that I know of. Once UNIX has booted, it has no use for 'unix' (or
whatever file it booted from), and the boot loader doesn't even read the mode.
I think I habitually set mine to 644. (The 'execute' bits are, of course,
pointless...)
Noel
I just had it brought to my attention that the unix kernel is mode 777
in Unix v5 and v6:
ls -l /unix
-rwxrwxrwx 1 root 27066 Mar 23 1975 /unix
There's no reason for it to be mode 777 is there? It seems rather dangerous.
In Unix v7 it defaults to mode 775 and in 32v it is 755. I figure
setting it to mode 755 will work, and so far it seems fine in v5.
Mark
> From: Dave Horsfall <dave(a)horsfall.org>
>> Once UNIX has booted, it has no use for 'unix' (or whatever file it
>> booted from)
> Didn't "ps" try and read its symbol table?
Sorry, meant 'UNIX the monolithic kernel'; yes, ps and siblings (e.g. iostat)
need to get the running system's symbol table.
> I had fun days when I booted, say, "/unix.new", and "ps" wouldn't
> sodding work...
Know that feeling! I added the following to one of the kernel data files:
char *endsys &end;
and then in programs which grab the system's symbol table, I have an nlist()
entry:
"_endsys",
with the following code:
/* Check that the namelist applies to the current system.
 */
checknms(symfile)
char *symfile;
{
	char *chkloc, *chkval;

	if (nl[0].type == 0)
		cerror("No namelist\n");
	chkloc = nl[ENDSYS].value;
	chkval = rdloc(chkloc);
	if (chkval != nl[END].value) {
		cerror("Symbol table in %s doesn't match running system\n",
			symfile);
	}
}
on the theory that pretty much any change at all is going to result in a
change in the system's size (and thus the address of 'end').
Although in a split I/D system, this may not be true (you could change the
code, and have the data+BSS remain the same size); I should probably check
the location of 'etext' as well...
Anyway, a rebuilt system may result in the address of 'endsys' changing, and
thus the rdloc() won't return the contents of the running system's 'endsys',
but the chances of an essentially-random fetch being the same as the value of
'end' in /unix are pretty slim, I would say...
Noel
> From: Jacob Ritorto
> found a copy here, i think..
Ah, thanks.
You might want to look around in the parent directory; apparently there are
two differences between the 11/34 and 11/40, other than the clock and switch
register: the stack limit register, and different handling of
segmentation-violation aborted instructions (which affects instruction
restart on stack extension).
I don't know about 2.9, maybe it knows about these. For V6, the SLR won't be
an issue; the SLR is an option on the 11/40, so not all machines had it, and
m40.s in V6 doesn't use it. The instruction restart thing sounds like it will
be an issue with running V6 on a /34.
Noel
Would anyone know if it's still possible to just replace the platters and
clean the heads?
If the heads are really crashed, the only safe course is
to replace both the damaged heads and the damaged disk pack.
Anything else admits a substantial risk of carrying the
crash forward.
Cleaning the heads probably isn't an option; when they
crash, they don't just pick up material from the disk
platter, they may themselves be damaged enough that sharp
bits of the heads themselves are sticking out.
Norman Wilson
Toronto ON
> From: Noel Chiappa
> apparently there are two differences between the 11/34 and 11/40, other
> than the clock and switch register
Too early in the morning here, clearly... I was thinking of the 11/23 and the
11/40 here in the clock/SR comment, not the /34 and the /40.
_Iff_ the 11/34 is using the standard DL11-W console interface board (which
includes an LTC), there's no difference that I know of between the 11/34 and
the 11/40 on the LTC front (although the LTC is an option in the /40, so a /40
might not have one, in which case the V6 will panic on trying to boot unless
it has a KW11-P).
As for the switch register... I guess that on machines with a KY11-A, there
is no switch register? (Too lazy/busy to go read the manual(s) to confirm...)
Noel
> From: Jacob Ritorto
> I think it's something to do with the fact that he compiled it to run on
> an 11/23. Maybe it lacks unibus support.
No, the UNIBUS and QBUS appear (from the programming level) to be identical.
There are subtle differences (the /23 and its devices can address more than
256KB of memory, and some devices have minor differences between the QBUS and
UNIBUS - e.g. the QBUS DZ has only 4 lines, not 8), but in general, they
should be interchangeable.
> Maybe something to do with clock differences.
Again, if it boots at all, that's not it. (The vanilla /23 doesn't have a
software-controllable clock, and when booting Unix on one, one has to leave
the clock switched off till UNIX is running - at least, for the early versions
of UNIX.)
> I fired 2.9MSCP up in simh emulating an 11/23 and it works fine. Just to
> corroborate my hardware experience of it on the '34, I switched the cpu
> emulation to 11/34 and got a mostly identical crash sequence as with my
> real hardware.
Ah. Now we're getting somewhere! If the simulator crashes in the same way, it's
not flaky hardware (my first guess as to the cause).
What are the symptoms (in as much detail as you can give us)? What, if anything,
is printed before it dies?
> I changed ...
> UNIBUS_MAP = 0
> to
> UNIBUS_MAP = 1
The /34 doesn't have a UNIBUS map.
Noel
> From: Jacob Ritorto
> I jiggled the memory board and the seqfault went away.
Ugh. A flaky. I hate those....
> So now the real box is behaving more like the simh and just hanging,
> not panicing anymore.
Does it _always_ hang at the same point in time? If so, what are the
circumstances - have you just tried to start multi-user operation, or what?
> How can I find this startup() you mention?
It's in machdep.c in sys/sys.
Noel
> From: Jacob Ritorto
> I set simh to 11/34 and I managed to get actual panics before (that I
> didn't record)
Ah.
> now I'm just getting hangs, mostly when hitting ctrl-D to bring system to
> multiuser.
The fact that it boots to the shell OK indicates things are mostly OK. I
wonder what it's trying to do, in going to multi-user, that's wedging the
system?
> Same if I mount -a in single user and then try to access /usr (works for
> a while, then hangs.).
Ah. That sounds very much like a 'lost' interrupt. The process is waiting
(inside the kernel) for the device to complete, and ..... crickets.
> When hung, I can still get character echo to my terminal
So the machine is still running OK (most echoing is done inside the TTY
interrupt handler).
> but can't interrupt or background the running command, etc.
Like I said, it's sleeping inside the kernel, and missed the wakeup event.
If you have another console logged in, it would be interesting to see if that
one is frozen too. If not, we can use tools like 'ps', running on the second
line, to look at the first process and see what it's waiting for.
Single user, the following hack:
sh < /dev/ttyX > /dev/ttyX &
can be used to start a shell on another tty line (since going full multi-user
seems to wedge it).
> Would it help if I traced memory and single-stepped through the
> (apparently) infinite loop?
No, because it's very likely not a loop! ;-)
> here are some examples of crashes on the real pdp11/34 (booting via
> vtserver, then bringing in system from the MSCP disk), with the original
> 2.9bsd-MSCP kernel (the one specifically built for 11/23):
>
> CONFIGURE SYSTEM:
> ka6 = 2200
> aps = 141572
> pc = 50456 ps = 30250
> __ovno = 7
> trap type 11
> panic: trap
That's a segmentation fault. Very odd trap to get! 2.9 uses overlays, right?
Maybe there's a problem with how some overlay fits, or something? I don't know
much about the overlay feature, never used it, sorry.
Most of the other data (PS address, PC, KDSA6 contents, etc) aren't much use
without a dump.
> and another: plain boring old hang at boot when trying to size devices.
> Can't even echo characters this time.
If the init process hasn't got as far an opening the TTY, you might not get
character echoing.
If that interrupt got randomly lost very early on, you might see
this sort of behaviour.
> One thing I think is interesting is that it's claiming 158720KW of
> memory. Is that weird? ... Where's it getting that odd number? Vanilla
> 2.9.1 on the real 11/34 boots with
>
> Berkeley UNIX (Rev. 2.9.1) Sun Nov 20 14:55:50 PST 1983
> mem = 135872
No idea where it's coming from, but remember Beowulf Shaeffer's advice to
Gregory Pelton in "Flatlander".
And now that I think about it, if the system thinks it has more memory than it
actually does, that would definitely cause problems.
Probably you should put some printf()'s in startup() and see where it's coming
from.
Noel
> From: Cory Smelosky <b4(a)gewt.net>
> Only the 11/23+ can, early rev 11/23s couldn't go above 256K.
Correctamundo on the last part, but not the first. I have both 11/23+'s and
11/23's, and I can assure you that Rev C and later 11/23's (the vast majority
of them) can do more than 256KB. See:
http://www.ibiblio.org/pub/academic/computer-science/history/pdp-11/hardwar…
for more.
Noel
Hi,
Since my Fuji160 drive is rather head-crashed, I've replaced it with an
M2333k, which is a smaller SMD rig with more sectors than the 160.
Unfortunately, after many dip switch settings and config changes, I have to
conclude that the sc21 just doesn't work with this new disk.
I've plugged in my SC41 controller that speaks MSCP and supports the
M2333k correctly. So now it's a matter of getting a unix small enough to
run on the 11/34 that can also speak MSCP. Enter Jonathan Engdahl's
2.9bsd-MSCP.
I managed to restor a root dump from his distribution and am able to
occasionally boot it on my 11/34, but it crashes very soon after booting
and I don't understand why. I think it's something to do with the fact that
he compiled it to run on an 11/23. Maybe it lacks unibus support. Maybe
something to do with clock differences. Not sure. But I was thinking that I
could make it work by recompiling the kernel with 11/34 support.
I fired 2.9MSCP up in simh emulating an 11/23 and it works fine. Just to
corroborate my hardware experience of it on the '34, I switched the cpu
emulation to 11/34 and got a mostly identical crash sequence as with my
real hardware.
So I switched the emulation back to '23, rebooted and edited the assym.s
file found in Jonathan's /usr/src/sys/RA directory. I changed
PDP11 = 23.
to
PDP11 = 34.
as well as
UNIBUS_MAP = 0
to
UNIBUS_MAP = 1
and recompiled with 'make unix,' then copied the resultant unix to /unix.
I switched simh back to emulating an 11/34 and rebooted. It crashes
randomly just as it did before the kernel recompile.
Any idea what I'm missing here? My hope was to simply move this
freshly-compiled 11/34-friendly kernel onto my real 11/34 and have a
working hardware system.
thx
jake
Ok folks,
I've uploaded what I call Unix v5a to:
http://www.maxhost.org/other/unix-v5-feb-2015.tar.gz
I use simh on Linux to emulate the PDP-11/70.
The tarball contains:
unix_v5_rk.dsk
unix_v5_rk1.dsk
unix_v5_rk2.dsk
pdp11-v5.ini
readme-v5.txt
unix-v5a.sh
The original file is uv5swre.zip if anyone wants to compare them.
Mark
> From: Clem Cole <clemc(a)ccc.com>
> Once people started to partition them, then all sort of new things
> occurred and I that's when the idea of a dedicated swap partition came
> up. I've forgotten if that was a BSDism or UNIX/TS.
Well, vanilla V6 had i) partitioned drives (that was the only way to support
an RP03), and ii) the swap device in the c.c config file. That's all you need
to put swap in its own partition. (One of the other MIT-LCS groups with V6
did that; we didn't, because it was better to swap off the RK, which did
multi-block transfers in a single I/O operation.)
> As I recall in V6 and I think V7, the process was first placed in the
> swap image before the exec (or at least space reserved for it).
As best I understand it, the way fork worked in V6 was that if there was not
enough memory for an in-core fork (in which case the entire memory of the
process was just copied), the process was 'swapped out', and the swapped out
image assumed the identity of the child.
But this is kind of confusing, because when I was bringing up V6 under the
Ersatz11 simulator, I had a problem where the swapper (process 0) was trying
to fork (with the child becoming 1 and running /etc/init), and it did the
'swap out' thing. But there was a ton of free memory at that point, so... why
was it doing a swap? Eh, something to investigate sometime...
Noel
> From: Jacob Ritorto
> I'm having trouble understanding how to get my swap configured. Since
> rl02s are so little, the MAKE file in /dev doesn't partition them into
> a, b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses
> only 8500 of its 10000 blocks, so what would presumably be intended as
> swap space does exist. Swap is usually linked to the b partition,
> right? So how do I create this b partition on an rl02?
I don't know how the later systems work, but in V6, the swap device, and the
start block / # of blocks are specified in the c.c configuration file (i.e.
they are compiled into the system). So you can take one partition, and by
specifying less than the full size to 'mkfs', you can use the end of the
partition for swap space (which is presumably what's happening with /dev/rl0
here).
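For reference, the tail of a V6 conf/c.c (as generated by mkconf) looks roughly like this - I'm writing this from memory, and the device and block numbers are illustrative, for an RK05-sized layout, not from any particular system:

```c
int	rootdev	{(0<<8)|0};	/* major 0, minor 0: root on drive 0 */
int	swapdev	{(0<<8)|0};	/* swap on the same drive... */
int	swplo	4000;		/* ...starting past the filesystem; cannot be zero */
int	nswap	872;		/* number of swap blocks */
```

(Note the old-style initializers; this is 1975 C.) So if mkfs is told 4000 blocks on a 4872-block partition, the remaining 872 blocks are free for swplo/nswap to claim.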
Noel
Dave Horsfall:
I also wrote a paper on a "bad block" system, where something like inum
"-1" contained the list of bad sectors, but never saw it through.
====
During the file system change from V6 to V7, the i-number of
the root changed from 1 to 2, and i-node 1 became unused.
At least some versions of the documentation (I am too harried
to look it up at the moment) claimed i-node 1 was reserved
for holding bad blocks, to keep them out of the free list,
but that the whole thing was unimplemented.
I vaguely remember implementing that at some point: writing
a tool to add named sectors to i-node 1. Other tools needed
fixing too, though: dump, I think, still treated i-node 1
as an ordinary file, and tried to dump the contents of
all the bad blocks, more or less defeating the purpose.
I left all that behind when I moved to systems with MSCP disks,
having written my own driver code that implemented DEC's
intended port/class driver split, en passant learning how
to inform the disk itself of a bad block so it would hide it
from the host.
I'd write more but I need to go down to the basement and look
at one of my modern* 3TB SATA disks, which is misbehaving
even though modern disks are supposed to be so much better ...
Norman Wilson
Toronto ON
* Not packaged in brass-bound leather like we had in the old days.
You can't get the wood, you know.
what about using another minor device? Is xp0d mapped elsewhere?
Since it's a BSD, won't it try by default to read a partition
table from the first few sectors of the disk?
Norman Wilson
Toronto ON
Hi,
I'm having trouble understanding how to get my swap configured. Since
rl02s are so little, the MAKE file in /dev doesn't partition them into a,
b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses only 8500
of its 10000 blocks, so what would presumably be intended as swap space
does exist. Swap is usually linked to the b partition, right? So how do I
create this b partition on an rl02? Or am I getting this horribly wrong?
thx
jake
Hi,
I'm running 2.9BSD on a pdp11/34 with an Emulex sc21 controller to some
Fuji160 disks. Booting with root on RL02 for now, but want to eventually
have the whole system on the Fujis and disconnect the rl02s.
While the previous owner of the disks appears to have suffered a
headcrash near cylinder 0, I'm having an impressive degree of success
writing to other parts of the disk.
However, when I try to mkfs, I can see the heads trying to write on the
headcrashed part of the disk. (Nice having those plexiglass covers!)
Is there a way to tell mkfs (or perhaps some other program) to not try to
write on the damaged cylinders?
thx
jake
So, I have a chance to buy a copy of a Version 5 manual, but it will be a
lot. I looked, and the Version 5 manual doesn't appear to be online. So while
normally at the price this is at, I would pass, it might be worth it for me
to buy it, and scan it to make it available.
But, I looked in the "FAQ on the Unix Archive and Unix on the PDP-11", and it
says:
5th Edition has its on-line manual pages missing. ... Fortunately, we do
have paper copies of all the research editions of UNIX from 1st to 7th
Edition, and these will be scanned in and OCR'd.
Several questions: First, when it says "we do have paper copies of all the
research editions of UNIX", I assume it means 'we do have paper copies of
_the manuals for_ all the research editions of UNIX', not 'we do have paper
copies of _the source code for_ all the research editions of UNIX'?
Second, if it is 'manuals', did the scan/OCR thing ever happen, or is it
likely to anytime in the moderate future (next couple of years)?
Third, would a scanned (which I guess we could OCR) version of this manual be
of much use (it would not, after all, be the NROFF source, although probably
a lot of the commands will be identical to the V6 ones, for which we do have
the NROFF)?
Advice, please? Thanks!
Noel
> From: Tom Ivar Helbekkmo
> There was no fancy I/O order juggling, so everything was written in the
> same chronological order as it was scheduled.
> ...
> What this means is that the second sync, by waiting for its own
> superblock writes, will wait until all the inode and file data flushes
> scheduled by the first one have completed.
Ah, I'm not sure this is correct. Not all disk drivers handled requests in a
'first-come, first-served' order (i.e. where a request for block X, which was
scheduled before a request for block Y, actually happened before the
operation on block Y). It all depends on the particular driver; some drivers
(e.g. the RP driver) re-organized the waiting request queue to optimize head
motion, using a so-called 'elevator algorithm'.
(PS: For a good time, do "dd if=/dev/[large_partition] of=/dev/null" on a
running system with such a disk, and a lot of users on - the system will
apparently come to a screeching halt while the 'up' pass on the disk
completes... I found this out the hard way, needless to say! :-)
Since the root block is block 1 in the partition, one might think that, even
with an elevator algorithm, writing it would more or less guarantee that all
other pending operations had completed (since it could only happen at the end
of a 'down' pass); _but_ the elevator algorithm works in terms of actual
physical block numbers, so blocks in another, lower partition might still
remain to be written.
But now that I think about it a bit, if such blocks existed, that partition's
super-block would also need to be written, so when that one completed, the
disk queue would be empty.
But the point remains - because there's no guarantee of _overall_ disk
operation ordering in V6, scheduling a disk request and waiting for it to
complete does not guarantee that all previously-requested disk operations
will have completed before it does.
I really think the whole triple-sync thing is mythology. Look through the V6
documentation and although IIRC there are instructions on how to shut the
system down, it's not mentioned. We certainly never used it at MIT (and I
still don't), and I've never seen a problem with disk corruption _when the
system was deliberately shut down_.
Noel
Yo Jacob,
I'm ex-sun but I don't know too much about Illumos. Care to give us
the summary of why I might care about it?
On Wed, Dec 31, 2014 at 01:16:00AM -0500, Jacob Ritorto wrote:
> Hey, thanks, Derrik.
> I don't mess with Linux much (kind of an Illumos junkie by trade ;), but
> I bet gcc would. I did out of curiosity do it with the Macintosh cc (Apple
> LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)) and it throws
> warnings about our not type-defining functions because you're apparently
> supposed to do this explicitly these days, but it dutifully goes on to
> assume int and compiles our test K&R stuff mostly fine. It does
> unfortunately balk pretty badly at the naked returns we initially had,
> though. Wish it didn't because it strikes me as being beautifully simple..
>
> thx again for the encouragement!
> jake
>
>
> On Wed, Dec 31, 2014 at 1:02 AM, Derrik Walker v2.0 <dwalker(a)doomd.net>
> wrote:
>
> > On Wed, 2014-12-31 at 00:44 -0500, Jacob Ritorto wrote:
> >
> > >
> > > P.S. if anyone's bored enough, you can check out what we're up to at
> > > https://github.com/srphtygr/dhb. I'm trying to get my 11yo kid to
> > > spend a little time programming rather than just playing video games
> > > when he's near a computer. He's actually getting through this stuff
> > > and is honestly interested when he understands it and sees it work --
> > > and he even spotted a bug before me this afternoon! Feel free to
> > > raise issues, pull requests, etc. if you like -- I'm putting him
> > > through the git committing and pair programming paces, so outside
> > > interaction would be kinda fun :)
> > >
> > >
> > > P.P.S. We're actually using 2.11bsd after all..
> > >
> > I'm curious, will gcc on a modern Linux system compile K&R c?
> >
> > Maybe when I get a little time, I might try to see if I can compile it
> > on a modern Fedora 21 system with gcc.
> >
> > BTW: Great job introducing him to such a classic environment. A few
> > years ago, my now 18 year old had expressed some interest in graphics
> > programming and was in awe over an SGI O2 I had at the time, so I got
> > him an Indy. He played around with a bit of programming, but
> > unfortunately, he lost interest.
> >
> > - Derrik
> >
> >
> > _______________________________________________
> > TUHS mailing list
> > TUHS(a)minnie.tuhs.org
> > https://minnie.tuhs.org/mailman/listinfo/tuhs
> >
--
---
Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
> when you - say - run less to display a file, it switches to a dedicated
> region in the terminal memory buffer while printing its output, then
> restores the buffer to back where you were to begin with when you exit
> the pager
Sorry for veering away from Unix history, but this pushed one of the hottest
of my buttons. Less is the epitome of modern Unix decadence. Besides the
maddening behavior described above, why, when all screens have a scroll bar,
should a pager do its own scrolling? But for a quantitative measure of
decadence, try less --help | wc. It takes excess to a level not dreamed of
in Pike's classic critique, "cat -v considered harmful".
Doug
Hi all, I came across this last week:
http://svnweb.freebsd.org/
It's a Subversion VCS of all the CSRG releases. I'm not sure if it
has been mentioned here before.
Cheers, Warren
<much discussion about quadratic search removed>
All I remember (and still support to this day) is that I’ve got a TERMCAP=‘string’ in my login scripts to set termcap to the specific terminal I’m logging in with.
Long ago this made things much faster. Today I think that it is just a holdover that I’m not changing due to inertia, rather than any real need for it.
David
—
David Barto
david(a)kdbarto.org
> Noel Chiappa
> The change is pretty minor: in this piece of code:
>
> case reboot:
> termall();
> execl(init, minus, 0);
> reset();
>
> just change the execl() line to say:
>
> execl(init, init, 0);
I patched init in v5 and now ps shows /etc/init as expected, even
after going from multi to single to multi mode.
Looks like init.c was the same in v5 and v6.
> Noel Chiappa:
> Just out of curiousity, why don't you set the SR before you boot the machine?
> That way it'll come up single user, and you can icheck/dcheck before you go
> to multi-user mode. I prefer doing it that way, there's as little as possible
> going on, in case the disk is slightly 'confused', so less chance any bit-rot
> will spread...
I actually do file system checks on v5 as it's the early unix I use the most:
check -l /dev/rk0
check -u /dev/rk0
same for rk1, rk2.
The v5 manual entry for check references the 'restor' command,
although the man page for that is missing.
Your idea of starting up in single user mode is a good one although
I'm not sure if it's necessary to check the file system on each boot
up. I've been running this disk image of v5 for about two years and no
blow-ups as yet. I also keep various snapshots of v5, v6 and v7 disk
images for safety reasons.
And there are text files of all the source code changes I've made, so
if disaster strikes I can redo it all.
Mark
> From: Clem Cole
> ps "knew" about some kernel data structures and had to be compiled with
> the same sources that your kernel used if you wanted the command
> field in particular to look reasonable.
Not just the command field!
The real problem was that all the parameters (e.g. NPROC) were not stored in
the kernel anywhere, so if you wanted to have one copy of the 'ps' binary
which would work on two different machines (but which were running the same
version of the kernel)... rotsa ruck.
I have hacked my V6 to include lines like:
int ninode NINODE;
int nfile NFILE;
int nproc NPROC;
etc so 'ps' can just read those variables to find the table sizes in the
running kernel. (Obviously if you modify a table format, then you're SOL.)
> From: Ronald Natalie
> The user structure of the currently running process is the only one
> that is guaranteed to be in memory ... For any processes that were swapped
> out, you couldn't read the user structure, so things that were stored there
> were often unavailable (particularly the command name).
Well, 'ps' (even the V6 stock version) was actually prepared to poke around
on the swap device to look at the images of swapped-out processes. And the
command name didn't come from the U area (it wasn't saved there in stock V6),
'ps' actually had to look on the top of the user stack (which is why it
wasn't guaranteed to be accurate - the user process could smash that).
> From: Clem cole
> IIRC we had a table of sleep addresses so that ps could print the
> actual thing you were waiting for not just an address.
I've hacked my copy of 'ps' to grovel around in the system's symbol table,
and print 'wchan' symbolically. E.g. here's some of the output of 'ps' on
my system:
TTY F S UID PID PPID PRI NIC CPU TIM ADDR SZ TXT WCHAN COMMAND
?: SL S 0 0 0-100 0 -1 127 1676 16 runout <swapper>
?: L W 0 1 0 40 0 0 127 1774 43 0 proc+26 /etc/init
?: L W 0 11 1 90 0 0 127 2405 37 tout /etc/update
8: L W 0 12 1 10 0 0 127 2772 72 2 kl11 -
a: L W 0 13 1 40 0 0 127 3122 72 2 proc+102 -
a: L R 0 22 13 100 0 10 0 3422 138 3 ps axl
b: L W 0 14 1 10 0 0 127 2120 41 1 dz11+40 - 4
It's pretty easy to interpret this to see what each process is waiting for.
Noel
> From: Noel Chiappa
> For some reason, the code for /etc/init .. bashes the command line so
> it just has '-' in it, so it looks just like a shell.
BTW, that may be accidental, not a deliberate choice - e.g. someone copied
another line of code which exec'd a shell, and didn't change the second arg.
> I fixed my copy so it says "/etc/init", or something like that. ... I
> can upload the 'fixed' code tomorrow.
The change is pretty minor: in this piece of code:
case reboot:
termall();
execl(init, minus, 0);
reset();
just change the execl() line to say:
execl(init, init, 0);
>> I'm not sure if unix of the v6 or v5 era was designed to go from multi
>> user to single user mode and then back again.
> I seem to recall there's some issue, something like in some cases
> there's an extra shell left running attached to the console
So the bug is that in going from single-user to multi-user, by using "kill -1
1" in single-user with the switch register set for multi-user, it doesn't
kill the running single-user shell on the console. The workaround to that bug
which I use is to set the CSWR and then ^D the running shell.
In general, the code in init.c isn't quite as clean/clear as would be optimal
(which is part of why I haven't tried to fix the above bug), but _in general_
it does support going back and forth.
> From: Ronald Natalie
> our init checked the switch register to determine whether to bring up
> single or multiuser
I think that's standard from Bell, actually.
> I believe our system shutdown if you kill -1-1 (HUP to init).
The 'stock' behaviour is that when that happens, it checks the switch
register, and there are three options (the code is a little hard to follow,
but I'm pretty sure this is right):
- if it's set for single-user, it shuts down all the other children, and
brings up a console shell; when done, it does the next
- if it's set for 'reboot', it just shuts down all children, and restarts
the init process (presumably so one can switch to a new version of the init
without restarting the whole machine);
- if it's not set for either, it re-reads /etc/ttys, and for any lines which
have switched state in that file, it starts/kills the process listening to
that line (this allows one to add/drop lines dynamically).
> From: Clem Cole
> it's probably worth digging up the v6 version of fsck.
That's on that MIT V6 tape, too. Speaking of which, time to write the code to
grok the tape...
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I've finally managed to get Unix v5 and v6 to go into single user mode
> while running under simh.
> ...
> dep system sr 173030 (simh command)
Just out of curiousity, why don't you set the SR before you boot the machine?
That way it'll come up single user, and you can icheck/dcheck before you go
to multi-user mode. I prefer doing it that way, there's as little as possible
going on, in case the disk is slightly 'confused', so less chance any bit-rot
will spread...
> Now I'm in multi user mode .. but then if I do a "ps -alx" I get:
>
> TTY F S UID PID PRI ADDR SZ WCHAN COMMAND
> ?: 3 S 0 0-100 1227 2 5676 ????
> ?: 1 W 0 1 40 1324 6 5740 -
> The ps command doesn't show the /etc/init process explicitly, although
> I'm pretty sure it is running.
No, it's there: the second line (PID 1). For some reason, the code for
/etc/init (in V6 at least, I don't know anything about V5) bashes the command
line so it just has '-' in it, so it looks just like a shell.
I fixed my copy so it says "/etc/init", or something like that. The machine
my source is on is powered down at the moment; if you want, I can upload the
'fixed' code tomorrow.
> I'm not sure if unix of the v6 or v5 era was designed to go from multi
> user to single user mode and then back again.
I seem to recall there's some issue, something like in some cases there's an
extra shell left running attached to the console, but I don't recall the
details (too lazy to look for the note I made about the bug; I can look it up
if you really want to know).
> Would it be safer to just go to single user and then shut it down?
I don't usually bother; I just log out all the shells except the one on the
console, so the machine is basically idle; then do a 'sync', and shortly
after than completes, I just halt the machine.
Noel
adding the list back
On Tue, Jan 6, 2015 at 10:42 AM, Michael Kerpan <mjkerpan(a)kerpan.com> wrote:
> This is a cool development. Does this code build into a working version of
> Coherent or is this mainly useful to study? Either way, it should be
> interesting to look at the code for a clone specifically aimed at low-end
> hardware.
>
> Mike
>
Ok, I've finally managed to get Unix v5 and v6 to go into single user
mode while running under simh.
I boot up unix as normal, that is to say in multi-user mode.
Then a ctrl-e and
dep system sr 173030 (simh command)
then c to continue execution of the operating system and finally "kill -1 1".
This gets me from multi user mode to single user mode. I can also go
back to multi user mode with:
ctrl-e and dep system sr 000000
then once again c to continue execution of the operating system and "kill -1 1".
Now I'm in multi user mode, and I can telnet in as another user so it
seems to be working but then if I do a "ps -alx" I get:
TTY F S UID PID PRI ADDR SZ WCHAN COMMAND
?: 3 S 0 0-100 1227 2 5676 ????
?: 1 W 0 1 40 1324 6 5740 -
8: 1 W 0 51 40 2456 19 5766 -
?: 1 W 0 55 10 1377 6 42066 -
?: 1 W 0 5 90 1734 5 5440 /etc/update
?: 1 W 0 32 10 2001 6 42126 -
?: 1 W 0 33 10 2054 6 42166 -
?: 1 W 0 34 10 2127 6 42226 -
?: 1 W 0 35 10 2202 6 42266 -
?: 1 W 0 36 10 2255 6 42326 -
?: 1 W 0 37 10 2330 6 42366 -
?: 1 W 0 38 10 2403 6 42426 -
8: 1 R 0 59 104 1472 17 ps alx
The ps command doesn't show the /etc/init process explicitly, although
I'm pretty sure it is running. I'm not sure if unix of the v6 or v5
era was designed to go from multi user to single user mode and then
back again. Would it be safer to just go to single user and then shut
it down?
Mark
Friend asked an odd question:
Were VAXen ever used to send/receive faxes large-scale? What software was
used and how was it configured?
Was any of this run on any of the UCB VAXen?
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 2015-01-06 23:56, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Tue, Jan 6, 2015 at 5:45 PM, Noel Chiappa<jnc(a)mercury.lcs.mit.edu>
> wrote:
>
>> >I have no idea why DEC didn't put it in the 60 - probably helped kill that
>> >otherwise interesting machine, with its UCS, early...
>> >
> "Halt and confuse ucode" had a lot to do with it IMO.
>
> FYI: The 60 set the record of going from production to "traditional
> products" faster than anything else in DEC's history. As I understand
> it, the 11/60 was expected to be a business system and run RSTS. Why the WCS
> was put in, I never understood, other than I expect the price of static RAM
> had finally dropped and DEC was buying it in huge quantities for the
> Vaxen. The argument was that they could update the ucode cheaply in the
> field (which to my knowledge they never did). But I asked that question
> many years ago of one of the HW managers, who explained to me that it was
> felt separate I/D was not needed for the targeted market and would have
> somehow increased cost. I don't understand why it would have cost any
> more but I guess it was late.
No, field upgrade of microcode cannot have been it. The WCS for the
11/60 was an option. Very few machines actually had it. It was for
writing your own extra microcode as an addition to the architecture.
The basic microcode for the machine was in ROM, just like on all the
other PDP-11s. And DEC sold a compiler and other programs required to
develop microcode for the 11/60. Not that I know of anyone who had them.
I've "owned" four PDP-11/60 systems in my life. I still have a set of
boards for the 11/60 CPU, but nothing else left around.
The 11/60 was, by the way, not the only PDP-11 with WCS. The 11/03 (if I
remember right) also had such an option. Obviously the microcode was not
compatible between the two machines, so you couldn't move it over from
one to the other.
Also, reportedly, someone at DEC implemented a PDP-8 on the 11/60,
making it the fastest PDP-8 ever manufactured. I probably have some
notes about it somewhere, but I'd have to do some serious searching if I
were to dig that up.
But yes, the 11/60 went from product to "traditional" extremely fast.
Split I/D space was one omission from the machine, but even more serious
was the decision to only do 18-bit addressing on it. That killed it very
fast.
Someone else mentioned the MFPI/MFPD instructions as a way of getting
around the I/D restrictions. As far as I know (can tell), they can be
used to read/write instruction space on a machine. I would
assume that any OS would set both current and previous mode to user when
executing in user space.
The documentation certainly claims they will work. I didn't think of
those previously, but they would allow you to read/write to instruction
space even when you have split I/D space enabled.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Yep, the only time this was ever trully useful was so you could put an
> a.out directly into the boot block I think.
Well, sort of. If you had non position-independent code, it would blow out
(it would be off by 020). Also, some bootstraps (e.g. the RL, IIRC) were so
close to 512. bytes long that the extra 020 was a problem. And it was so easy
to strip off:
dd if=a.out of=fooboot bs=1 skip=16
I'm not sure that anything actually used the fact that 407 was 'br .+020', by
the V6 era; I think it was just left over from older Unixes (where it was not
in fact stripped on loading). Not just on executables running under Unix; the
boot-loader also stripped it, so it wasn't even necessary to strip the a.out
header off /unix.
Noel
On 2015-01-06 20:57, Milo Velimirović<milov(a)cs.uwlax.edu> wrote:
> Bringing a conversation back online.
> On Jan 6, 2015, at 6:22 AM,arnold(a)skeeve.com wrote:
>
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind?)
> (Or even earlier than '81.) How did pdp11 UNIXes handle per process memory? It's suggested above that there was a 50-50 split of the 64KB address space between instructions and data. My own recollection is that you got any combination of instruction and data space that was <64KB. This would also be subject to limits of pdp11 memory management unit.
>
> Anyone have a definitive answer or pointer to appropriate man page or source code?
You are conflating two things. :-)
A standard PDP-11 has 64Kb of virtual memory space. This can be divided
any way you want between data and code.
Later model PDP-11 processors had a hardware feature called split I/D
space. This meant that you could have one 64Kb virtual memory space for
instructions, and one 64Kb virtual memory space for data.
(This also means that the text you quoted was incorrect: it stated that
you had 32Kb, when it was/is actually 32 Kwords.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2015-01-06 22:59, random832(a)fastmail.us wrote:
> On Tue, Jan 6, 2015, at 15:20, Johnny Billquist wrote:
>> >Later model PDP-11 processors had a hardware feature called split I/D
>> >space. This meant that you could have one 64Kb virtual memory space for
>> >instructions, and one 64Kb virtual memory space for data.
> Was it possible to read/write to the instruction space, or execute the
> data space? From what I've seen, the calling convention for PDP-11 Unix
> system calls read their arguments from directly after the trap
> instruction (which would mean that the C wrappers for the system calls
> would have to write their arguments there, even if assembly programs
> could have them hardcoded.)
Nope. A process cannot read or write to instruction space, nor can it
execute from data space.
It's inherent in the MMU. All references related to the PC will be done
from I-space, while everything else will be done through D-space.
So the MMU has two sets of page registers. One set maps I-space, and
another maps D-space. Of course, you can have them overlap, in which
case you get the traditional appearance of older models.
The versions of Unix I am aware of push arguments on the stack. But of
course, the kernel can remap memory, and so can of course read the
instruction space. But the user program itself would not be able to
write anything after the trap instruction.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole <clemc(a)ccc.com>
> Depends the processor. For the 11/45 class processors, you had a 17th
> address bit, which was the I/D choice. For the 11/40 class you shared
> the instructions and data space.
To be exact, the 23, 24, 34, 35/40 and 60 all had a single shared space.
(I have no idea why DEC didn't put it in the 60 - probably helped kill that
otherwise interesting machine, with its UCS, early...). The 44, 45/50/55, 70,
73, 83/84, and 93/94 had split.
> From: random832(a)fastmail.us
> the calling convention for PDP-11 Unix system calls read their
> arguments from directly after the trap instruction (which would mean
> that the C wrappers for the system calls would have to write their
> arguments there, even if assembly programs could have them hardcoded.)
Here's the code for a typical 'wrapper' (this is V6, not sure if V7 changed
the trap stuff):
_lseek:
jsr r5,csv
mov 4(r5),r0
mov 6(r5),0f
mov 8(r5),0f+2
mov 10.(r5),0f+4
sys indir; 9f
bec 1f
jmp cerror
1:
jmp cret
.data
9:
sys lseek; 0:..; ..; ..
Note the switch to data space for storing the arguments (at the 0: label
hidden in the line of data), and the 'indirect' system call.
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Some access at the kernel level can be done with MFPI and MPFD
> instructions.
Unless you hacked your hardware, in which case it was possible from user mode
too... :-)
I remember how freaked out we were when we tried to use MFPI to read
instruction space, and it didn't work, whereupon we consulted the 11/45
prints, only to discover that DEC had deliberately made it not work!
> From: Ronald Natalie <ron(a)ronnatalie.com>
> After the changes to the FS, you'd get missing blocks and a few 0-0
> inodes (or ones where the links count was higher than the links). These
> while wasteful were not going to cause problems.
It might be worth pointing out that due to the way pipes work, if a system
crashed with pipes open, even (especially!) with the disk perfectly sync'd,
you'll be left with 0-0 inodes. Although as you point out, those were merely
crud, not potential sourdes of file-rot.
Noel
Apparently the message I was replying to was off-list, but it seems like
a waste to have typed all this out (including answering my own question)
and have it not go to the list.
On Tue, Jan 6, 2015, at 17:35, random832(a)fastmail.us wrote:
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/factor.s
> wrchar:
> mov r0,ch
> mov $1,r0
> sys write; ch; 1
> rts r5
>
> Though it looks like the C wrappers use the "indirect" system call which
> reads a "fake" trap instruction from the data segment. Looking at the
> implementation of that, my question is answered:
>
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/sys/trap.c
> if (callp == sysent) { /* indirect */
> a = (int *)fuiword((caddr_t)(a));
> pc++;
> i = fuword((caddr_t)a);
> a++;
> if ((i & ~077) != SYS)
> i = 077; /* illegal */
> callp = &sysent[i&077];
> fetch = fuword;
> } else {
> pc += callp->sy_narg - callp->sy_nrarg;
> fetch = fuiword;
> }
>
> http://minnie.tuhs.org/TUHS/Archive/PDP-11/Trees/V7/usr/man/man2/indir.2
> The main purpose of indir is to allow a program to
> store arguments in system calls and execute them
> out of line in the data segment.
> This preserves the purity of the text segment.
>
> Note also the difference between V2 and V5 libc - clearly support for
> split I/D machines was added some time in this interval.
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V2/lib/write.s
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s4/write.s
Quoting Dan Cross <crossd(a)gmail.com>:
> On Tue, Jan 6, 2015 at 12:33 PM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>
>> On 2015-01-06 17:56, Dan Cross wrote:
>>>
>>> I believe that Mary Ann is referring to repeatedly looking up
>>> (presumably different) elements in the entry. Assuming that e.g. `vi`
>>> looks up O(n) elements, where $n$ is the number of elements, doing a
>>> linear scan for each, you'd end up with quadratic behavior.
>>>
>>
>> Assuming that you'd look up all the elements of the termcap entry at
>> startup, and did each one from scratch, yes.
>
>
> Yes. Isn't that exactly what Mary Ann said was happening? :-)
Yes
> But that would beg the question, why is vi doing a repeated scan of the
>> terminal entry at startup, if not to find all the capabilities and store
>> this somewhere? And if doing a look for all of them, why not scan the
>> string from start to finish and store the information as it is found? At
>> which point we move from quadratic to linear time.
>>
>
> I don't think she said it did things intelligently, just that that's how it
> did things. :-)
>
> But now we're getting into the innards of vi, which I never looked at
> anyway, and I guess is less relevant in this thread anyway.
vi does indeed look up all the various capabilities it will need,
once, when it starts up. It uses the documented interface, which
is tgetent followed by tgetstr/tgetnum/tgetflag for each capability.
tgetent did a linear search.
>> The short of it (from what I got out of it) is that the move from termcap
>> to terminfo was mostly motivated by attribute name changing away from fixed
>> 2 character names.
>>
>> A secondary motivation would be performance, but I don't really buy that
>> one. Since we only moved to terminfo on systems with plenty of memory,
>> performance of termcap could easily be on par anyway.
>>
>
> I tend to agree with you and I'll go one further: it seems that frequently
> we tend to identify a problem and then go to 11 with the "solution." I can
> buy that termcap performance was an issue; I don't know that going directly
> to hashed terminfo files was the optimal solution. A dbm file of termcap
> data and a hash table in whatever library parsed termcap would go a long
> way towards fixing the performance issues. Did termcap have to be
> discarded just to add longer names? I kind of tend to doubt it, but I
> wasn't there and don't know what the design criteria were, so my
> very-much-after-the-fact second guessing is just that.
It's been 30+ years, so the memory is a little fuzzy. But as I recall,
I did measure the performance and that's how I discovered that the
quadratic algorithm was causing a big performance hit on the hardware
available at the time (I think I was on a VAX 11/750, this would have
been about 1982.)
I was making several improvements at the same time. The biggest one
was rewriting curses to improve the screen update optimization, so it
would use insert/delete line/character on terminals supporting it.
Cleaning up the mess termcap had become (the format had become horrible
to update, and I was spending a lot of time making updates with all
the new terminals coming out) and improving startup time (curses also
had to read in a lot of attributes) were part of an overall cleanup.
IIRC, there may also have been some restrictions on parameters to string
capabilities that needed to be generalized.
Hashing could have been done differently, using DBM or some other method.
In fact, I'd used DBM to hash /etc/aliases in delivermail years earlier
(I have an amusing story about the worlds slowest email I'll tell some
other time) but at the time, it seemed best to break with termcap
and go with a cleaner format.
On 2015-01-01 17:11, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
> The move was from termcap to terminfo. Termlib was the library for termcap.
Doh! Thanks for the correction. Finger fart.
> There were two problems with termcap. One was that the two-character
> name space was running out of room, and the codes were becoming less and
> less mnemonic.
Ah. Yes, that could definitely be a problem.
> But the big motivator was performance. Reading in a termcap entry from
> /etc/termcap was terribly slow. First you had to scan all the way
> through the (ever-growing) termcap file to find the correct entry. Once
> you had it, every tgetstr (etc) had to scan from the beginning of the
> entry, so starting up a program like vi involved quadratic performance
> (time grew as the square of the length of the termcap entry.) The VAX
> 11/780 was a 1 MIPS processor (about the same as a 1 MHz Pentium) and
> was shared among dozens of timesharing users, and some of the other
> machines of the day (750 and 730 Vaxen, PDP-11, etc.) were even slower.
> It took forever to start up vi or more or any of the termcap-based
> programs people were using a lot.
Hum. That seems like it would be more of an implementation issue. Why
wouldn't you just cache all the entries for the terminal once and for
all? terminfo never came to 16-bit systems anyway, so we're talking
about systems with plenty of memory. Caching the terminal information
would not be a big memory cost.
Thanks for the insight.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Bob Swartz, founder of Mark Williams Co, has allowed the sources for
COHERENT to be published under a three-clause BSD license. Steve Ness is
hosting them. They are available here:
http://nesssoftware.com/home/mwc/source.php
For reference, for folks who don't know what COHERENT is, it started as a
clone of 7th Edition, but grew more modern features over time. Dennis
Ritchie's recollections of his interaction with it:
https://groups.google.com/forum/#!msg/alt.folklore.computers/_ZaYeY46eb4/5B…
And of course the requisite Wikipedia link:
http://en.wikipedia.org/wiki/Coherent_(operating_system)
- Dan C.
PS: I hold a soft spot for COHERENT in my heart. I became interested in
Unix in high school, but this was before Linux was really a thing and
access to other systems was still hard to come by. I spotted an ad for
COHERENT in the back of one of the PC-oriented publications at the time,
"Computer Shopper" or some such, and realized that it was *almost* within
my reach financially and that I could install it on the computer I already
owned. Over the next month or so, I scraped up enough money to buy a copy,
did so, and put it on my PC. It was quirky compared to actual Unix
distributions, but it was enough to give one a flavor for things. The
manual, in particular, did not follow the traditional Unix format, but
rather was an alphabetical "lexicon" of commands, system calls and
functions and was (I've always thought) particularly well done. Links to
the COHERENT lexicon and various other documents:
http://www.nesssoftware.com/home/mwc/.
I graduated onto other systems rather quickly, but COHERENT served as my
introduction to Unix and Unix-like systems.
PPS: Bob Swartz is the father of the late Aaron Swartz.
On 2015-01-06 17:32, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
> On 01/06/2015 04:22 AM,arnold(a)skeeve.com wrote:
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind...)
>> >
>> >On a Vax with 2 Meg of memory, 512 bytes is a whole page, and it might
>> >even be paged out, and BSD on the vax didn't have copy-on-write.
>> >
>> >ISTR that the /etc/termcap file had a comment saying something like
>> >"you should move the entries needed at your site to the top of this file."
>> >Or am I imagining it?:-)
>> >
>> >In short - today, sure, no problem - back then, carrying around a large
>> >environment made more of a difference.
>> >
>> >Thanks,
>> >
>> >Arnold
> Even with TERMCAP in the environment, there's still that quadratic
> algorithm every time vi starts up.
I must be stupid or something. What quadratic algorithm?
vi gets the "correct" terminal database entry directly from the
environment. Admittedly, getting any variable out of the environment
means a linear search of the environment, but that's about it.
What am I missing? And once you have that, any operation still means
searching through the terminal definition for the right function, which
in itself is also linear, unless you hash it up in your program.
But I fail to see where the quadratic behavior comes in.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
A very nice addition to the archives. Thank you.
I well remember our disbelief that Mark Williams wrote all its own
code, unlike other vendors who had professed the same. As Dennis
described, we had fun putting together black-box tests that
recognized undocumented quirks (or bugs) in our code. We were
duly impressed when the results came back negative.
Doug
A prosperous New Years to all us old UNIX farts.
Years ago the USENIX conference was in Atlanta. It was a stark contrast between us and the Southern Baptists who were in town for their conference as well (punctuated by some goofball Baptist standing up in the middle of one of the restaurants to sing God Bless America or some such).
Anyhow, right before the conference someone (I think it was Dennis) made some comment about nobody ever having asked him for a cast of his genitals. A couple of friends decided we needed to issue genital casting kits to certain of the UNIX notables. I went out to an art supply store and bought plaster, paper cups, popsicle sticks to mix with, etc… Gould computers let me use one of their booth machines and a printer to print out the instructions. I purloined some bags from the hotel. It was pointed out that you need vaseline in order for the plaster to not stick to the skin. Great, I head into the hotel gift shop and grab ten tiny jars of vaseline. As I plop these on the counter at the cashier, she looks at me for a minute and then announces…
I guess y’all aren’t with the baptists.
People took it pretty tongue in cheek when they were presented. All except Redman who flew off the handle.
Dave Horsfall:
> At yet another, we had a Sun 3/50 window connected to a Convex, and acted
> all innocent when various dweebs did the old "echo 99k2vp..." etc trick.
John Cowan:
High-precision approximation to sqrt(2). What's the trick?
======
Not really a trick, just a hoary old zero-order CPU benchmark:
echo 99k2vp | time dc
You can see why letting people type that on a Convex thinking it was
a Sun 3/50 might have entertainment value.
Modern systems are far too fast for that to be worth while, though.
I still use a variant of it: a shell script called dc-N, containing
dc <<!
99k[2vszla1-dsa0<b]sb${1-500}salbx
!
meant to be invoked as
time dc-N 10000
or the like. (The internal default of 500 has long since gone
stale too, of course.)
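For anyone decoding that dc one-liner, here is the same script with each step annotated (my reading of it, worth double-checking against dc(1)):

```shell
# Same program as above, with comments added:
#   99k             set precision to 99 decimal places
#   [...]sb         define macro b:
#     2v              push sqrt(2), computed to 99 places
#     sz              pop the result into register z (i.e. discard it)
#     la 1 - d sa     load counter a, decrement, dup, store back to a
#     0 <b            pop 0 and the counter; rerun b while counter > 0
#   ${1-500}sa      store the iteration count (default 500) in a
#   lbx             load macro b and execute it
time dc <<!
99k[2vszla1-dsa0<b]sb${1-500}salbx
!
```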
Norman Wilson
Toronto ON
On 2014-12-31 21:14, Clem Cole<clemc(a)ccc.com> wrote:
>
> Jake - you have lots of help from others and using curses(3) is definitely
> the right way to program.
>
> But to answer your specific question about printf(string), according to
> Chapter 3 (Programmer's Info) of my old VT-100 user's guide, I think what
> is you are doing wrong is that "\033c" is not the ANSI clear to end of
> screen command.
Right...
> When I saw your message on my iPhone last night, the cache said - wait that
> can't be correct. But I could not remember why. So I had to wait until
> I got back home today to look in my basement.
>
> As I suspected, it's not an ANSI sequence. So are you running in VT-100
> (ANSI) mode or VT52 mode? I ask because it is close to the VT52 cursor
> right command which is actually: "\033C" but I do not remember is case
> mattered.
Case does matter.
> In VT52 mode you need to send the terminal: "\033H\033J" to clear the
> screen.
>
> In ANSI mode, it becomes: "\033[1;1H\033[0J"
Shorter form: "\033[H\033[J"
> A few things to remember:
> 1.) Clear takes the current cursor position and clears from there to end of
> X (where X depends on mode, and type of clear). So you need to move the
> cursor to home position (aka 1,1).
Not really. It's way more advanced than that.
If we start with the generic clear screen, it is CSI Pn J
Where CSI can be expressed as ESC [ (or "\033[" in the same parlance as
above.)
Pn, then is an argument to the function, while J is the actual function
(clear screen in this case).
Now, Pn can cause many things:
0 Clear from cursor to end of screen
1 Clear from beginning of screen to cursor
2 Clear whole screen
If no argument is given, the default is 0.
> 2.) VT-100's did not implement the full ANSI spec like Ann Arbor, Heathkit,
> Wyse etc. So there are a number of things that those terminals did
> better. A really good reason to use curses(3), because all the knowledge is
> kept in the termcap and as a programmer you don't need to worry about it.
Probably true. However, I'm not sure Ann Arbor or Heathkit did much
better. As far as I can remember, they were always more "weird", but I
might just be confused. However, curses(3) is definitely a good way of
not having to care about different terminal oddities.
> 3.) I saw sites where VT52 mode was sometimes preferred because it was good
> enough for most editing, and needed fewer chars to do manipulation. On
> slow serial lines, this sometimes was helpful. That said, give me an AAA
> any day. Like others, I still miss that terminal.:-)
Yeah, the VT52 was simpler, and had shorter control strings. But of
course, with the additional limitations that came with that.
Personally, I'd give an AAA or a Heathkit away if one was dropped on me.
A VT100 I would keep. :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
, but I can't see how you're supposed to clear the screen on a vt100 in
2.9BSD. I guess printf'ing ("\033c") would do the trick, but I assumed
there was a more proper way; something that leverages the vt100 termcap
entry and does the right thing. Anyone?
thx
jake
Evening all,
Am I correct in my guess that 4.4BSD was built cross on an HP300? I have
never found a binary dist of anything other than HP300 4.4...and my
attempts to build 4.4 on ULTRIX/SunOS have so far not succeeded...it had
to have been built SOMEHOW.
I picked up an HP300 to help me get somewhere...but it seems to only have
a 68010. :(
I either need to find a definitive 68020-minimum one on ebay...someone
with one available...or some tips of actually cross-building 4.4 for MIPS
or SPARCv7 (I have physical hardware for either)
I am very determined to run pure 4.4 on something bigger than a PIC32. ;)
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
> From: Mary Ann Horton <mah(a)mhorton.net>
> I loved my Ambassador!
Ditto!
> Still have one.
Argh! Now you've made me want one for my vintage -11's! Alas, I don't see one
on eBay.... :-(
Noel
The information given is correct. You could possibly argue that you
shouldn't be using those functions, but should be using the curses(3)
library instead, which in turn uses this stuff... But it's all up to how
complex you want to be. :-)
Johnny
On 2014-12-31 07:16, Jacob Ritorto<jacob.ritorto(a)gmail.com> wrote:
> Mary, this is exactly what I needed -- good to go now; thank you!
>
> As a side note: Man, what an intimidating can of braindamage I've opened!:)
>
> thanks all!
> jake
>
> P.S. if anyone's bored enough, you can check out what we're up to at
> https://github.com/srphtygr/dhb. I'm trying to get my 11yo kid to spend a
> little time programming rather than just playing video games when he's near
> a computer. He's actually getting through this stuff and is honestly
> interested when he understands it and sees it work -- and he even spotted a
> bug before me this afternoon! Feel free to raise issues, pull requests,
> etc. if you like -- I'm putting him through the git committing and pair
> programming paces, so outside interaction would be kinda fun:)
>
> P.P.S. We're actually using 2.11bsd after all..
>
>
> On Tue, Dec 30, 2014 at 9:33 PM, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
>> >This is the right info. Be sure to scroll up to see how to use tgetent,
>> >tgetstr, and tputs. You aren't likely to need any padding.
>> >
>> >Essentially:
>> > tgetent using getenv("TERM") gets you the whole entry from
>> >/etc/termcap
>> > tgetstr of "cl" gets you the "clear"
>> >sequence
>> > tputs outputs the "clear"
>> >sequence
>> >
>> >
>> >On 12/30/2014 06:22 PM, Dan Stromberg wrote:
>> >
>>> >>Check out https://www.gnu.org/software/termutils/manual/termcap-1.3/html_mono/termcap.html#SEC30
>>> >>- especially the "cl" entry.
>>> >>
>>> >>ISTR the database being at /etc/termcap normally.
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
This was brought to my attention on another mailing list.
"Marking a new chapter for one of the country’s most architecturally
and historically significant buildings, Somerset Development has
announced a new name for the iconic former Bell Labs complex in
Holmdel, N.J. The two-million-square-foot building – now named Bell
Works – is currently undergoing a more than $100-million adaptive
reuse redevelopment that will transform the facility into a dynamic
mixed-use center.”
http://patch.com/new-jersey/holmdel-hazlet/somerset-development-unveils-bel…
http://bell.works/
> From: Dave Horsfall <dave(a)horsfall.org>
> a 9-track tape followed me home, but even if I knew where it was now it
> ain't gonna be readable after over 30 years...
Umm, don't be too sure!
I have several sets of backup tapes from one of the V6 machines at MIT, and
those are also roughly 30 years old, and they are not in the best shape (they
sat in my basement for most of that time). I sent one off to someone who
specializes in reading old tapes, and he's gotten almost all the bits off of
it (a few records had unrecoverable read errors, but the vast majority were
OK - like roughly 15 read errors in around 1500 records).
So do look for that tape (unless the material is all already online).
I hope to announce a vast trove of stuff soon from my tapes (once I figure out
how to interpret the bits - they are written by a sui generis application
called 'saveRVD', and the _only_ documentation of how it did it is... on that
tape! :-) That includes a lot of code written at MIT, as well as stuff
from elsewhere.
Coming should be BCPL, Algol, LISP and some other languages; MACRO-11 and the
DEC linker (which I guess are also available from UNSW tapes), but _also_
programs to convert back and forth from .REL to a.out format, and to .LDA
format; and a whole ton of other applications (I have no idea what all is
there - if anyone is interested, I can make a pass through my manuals and try
and make a list).
Noel
I've seen a couple of less than flattering references here; what was the
problem with them?
At UNSW, we couldn't afford the DH-11, so ended up with the crappy DJ-11
instead (the driver for the DH-11 had the guts ripped out of it in an
all-nighter by Ian Johnston as I recall), and when the DZ-11 came along we
thought it was the bees' knees.
Sure, the original driver was as slow as hell, but the aforesaid IanJ
reworked it and made it faster by at least 10x; amongst other things, I
think he did away with the character queues and used the buffer pool
instead, getting 9600 on all eight channels simultaneously, possibly even
full-duplex.
--
Dave Horsfall DTM (VK2KFU) "Bliss is a MacBook with a FreeBSD server."
http://www.horsfall.org/spam.html (and check the home page whilst you're there)
> From: Clem Cole
A few comments on aspects I know something of:
> BTW: the Arpanet was not much better at the time
The people at BBN might disagree with you... :-)
But seriously, throughout its life, the ARPANET had 'load-dependent routing',
i.e. paths were adjusted not just in response to links going up or down, but
depending on load (so that traffic would avoid loaded links).
The first attempt at this (basically a Destination-Vector algorithm, i.e. like
RIP but with non-static per-hop costs) didn't work too well, for reasons I
won't get into unless anyone cares. The replacement, the first Link-State
routing algorithm, worked much, much, better; but it still had minor issues
damping fixed most of those too).
> DH11's which were a full "system unit"
Actually, two; they were double (9-slot, I guess?) backplanes.
> The MIT guys did ARP for ChaosNet which quickly migrated down the street
> to BBN for the 4.1 IP stack.
Actually, ARP was jointly designed by David Plummer and I for use on both
TCP/IP and CHAOS (which is why it has that whole multi-protocol thing going);
we added the multi-hardware thing because, well, we'd gone half-way to making
it totally general by adding multi-protocol support, so why stop there?
As soon as it was done it was used on a variety of IP-speaking MIT machines
that were connected to a 10MBit Ethernet; I don't recall them all, but one
kind was the MIT C Gateway multi-protocol routers.
> Hey it worked just fine at the time.
For some definition of 'work'! (Memories of wrapping protocol A inside
protocol B, because some intervening router/link didn't support protocol A,
only B...)
Noel
Hi all,
Wanting to set up an 11/34 or 11/23 with a unix that's at least
contemporary enough to run telnet and ftp. From what I can gather on line,
I guess 2.10 is the best shot, but it's apparently a little less popular
and I can't find enough docs about it to determine if it'll run with the
hardware I have. Am I on the right track here, or should I be considering
backporting the programs to 2.9? Pointers to 2.10 Setup manual would be
most welcome as well as suggestions on where to find other resources to
meet this goal.
thx
jake
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
> The replacement, the first Link-State routing algorithm, worked much,
> much, better; but it still had minor issues
>
> damping fixed most of those too).
Oop, the editor 'ate' a line there (or, rather the editor's operator spaced
out :-): it should say "it still had minor issues, such as oscillation
between two equal-cost paths, with the traffic 'chasing itself' from path to
path; proper damping fixed most of those too".
> I always give Dave Clark credit (what I call "Clark's Observation") for
> the most powerful part of the replacement for the ARPAnet - aka the
> idea of a network of network.
Not sure exactly what you're referring to here (the concept of an internet
as a collection of networks seems to have occurred to a number of people,
see the Internet Working Group notes from the early 70s).
> Dave once quipped: "Why does a change at CMU have to affect MIT?"
Subnets (which first appeared at MIT, due to our, ah, fractured
infrastructure) again were an idea which occurred to a number of people all
at the same time; in part because MIT's CHAOSNET already had a collection of
subnets (the term may in fact come from CHAOSNET, I'd have to check) inside
MIT.
> I've forgotten what we did at CMU at the time, but I remember the MIT
> folk were not happy about it.
I used to know the answer to that, but I've forgotten what it was! I have this
bit set that CMU did something sui generis, not plain ARP sub-netting, but I
just can't remember the details! (Quick Google search...) Ah, I see, it's
described in RFC-917 - it's ARP sub-netting, but instead of the first-hop
router answering the ARP based on its subnet routing tables, it did something
where ARP requests were flooded across the entire network.
No wonder we disapproved! :-)
> Thought, didn't you guys have the 3Mbit stuff like we did at CMU and
> UCB first?
MIT, CMU and Stanford all got the 3Mbit Ethernet at about the same time, as
part of the same university donation scheme. (I don't recall UCB being part
of that - maybe they got it later?)
The donation included a couple of UNIBUS 3Mbit Ethernet cards (a hex card,
IIRC). The first 3Mbit connections at MIT were i) kludged into one of the MIT-AI
front-end 11's (I forget the details, but I know it just translated CHAOS
protocol packets into EFTP so they could print things on the Dover laser
printer), and ii) a total kludge I whipped up which could forward IP packets
back and forth between the 3M Ethernet, and the other MIT IP-speaking LANs.
(It was written in MACRO-11, and with N interfaces, it used hairy macro
expansions to create separate code for each of all 'N^2' possible forwarding
paths!) Dave Clark did the Alto TCP/IP implementation (which was used to
create a TFTP->EFTP translating spooler for IP access to the Dover).
I can give you the exact date, if you care, because Dave Clark and I had
a competition to see who could be done first, and the judge (Karen Sollins)
declared it a draw, and I still have the certificate! :-)
Noel
> From: Clem Cole <clemc(a)ccc.com>
> two issues. first DEC subsetted the modem control lines so running
> modems - particularly when you wanted hardware flow control like the
> trailblazers - did not work.
?? We ran dialup modems on our DZ11s (1200 bps Vadics, IIRC) with no problems,
so you must be speaking only some sort of high-speed application where you
needed the hardware flow control, or something, when you say "running modems
... did not work".
Although, well, since the board didn't produce an interrupt when a modem
status line (e.g. 'carrier detect') changed state, we did have to do a kludge
where we polled the device to catch such modem control line changes. Maybe
that's what you were thinking of?
> To Dave the DZ was great because it was two boards to do what he
> thought was the same thing as a DH
To prevent giving an incorrect impression to those who 'were not there', each
single DZ hex board supported 8 lines (fully independent of any other cards);
the full DH replacement did need two boards, though.
Noel
> From: Ronald Natalie
>> each single DZ hex board supported 8 lines (fully independent of any
>> other cards); the full DH replacement did need two boards, though.
> Eh? The DH/DM from Able was a single hex board and it supported 16
> lines.
To be read in the context of Clem's post which I was replying to: to replace
the line capacity of a DH (16 lines), one needed two DZ cards.
Noel
> From: Dave Horsfall <dave(a)horsfall.org>
> what was the problem with them?
Well, not a _problem_, really, but.... 'one interrupt per output character'
(and no way around that, really). So, quite a bit of overhead when you're
running a bunch of DZ lines, at high speeds (e.g. 9600 baud).
I dunno, maybe there was some hackery one could pull (e.g. only enabling
interrupts on _one_ line which was doing output, and on a TX interrupt,
checking all the other lines to see if any were i) ready to take another
character, and ii) had a character waiting to go), but still, it's still going
to be more CPU overhead than DMA (which is what the DH used).
Noel
> From: Clem Cole <clemc(a)ccc.com>
> an old Able "Enable" board which will allow you to put 4Megs of memory
> in an 40 class processor (you get a cache plus a new memory MAP with 22
> bits of address like the 45 class processors).
But it doesn't add I/D to a machine without it, though, right? (I tried
looking for Enable documentation online, couldn't find any. Does anyone know
of any?)
I recall at MIT we had a board we added to our 11/45 which added a cache, and
the ability to have more than 256KB of memory, but I am unable to remember
much more about it (e.g. who made it, or what it was called) - it must have
been one of these.
I recall we had to set up the various memory maps inside the CPU to
permanently point to various ranges of UNIBUS address space (so that, e.g.
User I space was assigned 400000-577777), and then the memory map inside the
board mapped those out to the full 4MB space; the code changes were (IIRC)
restricted to m45.s; for the rest of the code, we just redefined UISA0 to
point to the one on the added board, etc. And the board had a UNIBUS map to
allow UNIBUS devices access to all of real memory, just like on an 11/70.
> From: Jacob Ritorto <jacob.ritorto(a)gmail.com>
> So does that single board contain the memory and everything, or is this
> a backplane mod/special memory kind of setup?
I honestly don't recall much about how the board we had at MIT worked, but i)
the memory was not on the board itself, and ii) there had to be some kind of
special arrangements for the memory, since the stock UNIBUS only has 18 bits
of address space. IIRC, the thing we had could use standard Extended UNIBUS
memory cards.
I don't recall how the mapping board got access to the memory - whether the
whole works came with a small custom backplane which one stuck between the
CPU and the rest of the system, and into which the new board and the EUB
memory got plugged, or what. I had _thought_ it somehow used the FastBUS
provision in the 11/45 backplane for high-speed memory (since with the board
in, the machine had a basic instruction time of 300nsec if you hit the cache,
just like an 11/70), and plugged in there somewhere, but maybe not, since
apparently this board you have is for a /34? Or maybe there were two
different models, one for the /45 and one for the /34?
> With the enable34 board, do I have some hope of getting 2.11bsd on this
> one?
Since I doubt it adds I/D to machines that don't already have it, I would
guess no. Unless somehow one can use overlays, etc, to fit 2.11 into 56KB of
address space (note, not 'memory').
> I do have an 11/73 I'm working on that could run that build much more
> easily and appropriately..
That's where I'd go.
I do have that MIT V6 Unix with TCP/IP, where the TCP is almost entirely in
user space (only incoming packet demux, etc is in the kernel), and I have
found backup tapes for it, which are off at an old tape specialist being
read, and the interim reports on readability are good, but until that
happens, I'm not sure we'll be seeing TCP/IP on non-split-I/D machines.
Noel
Jim / Nick, That's kinda my problem: can't find enough documentation on
2.10 to ascertain if I can / should run it. I know 2.9 is OK for the 11/34,
but 2.9 doesn't have telnet or ftp and I want this machine to be easily
reachable on the net.
From what I've read, 2.11 is right out except for the little glimmer of
hope in the docs that it "would probably only require a moderate amount of
squeezing to fit on machines with less memory, but it would also be very
unhappy about the prospect," which I think roughly translates to, "don't
try it on a puny thing like an 11/34."
I wonder if porting telnet and ftp to 2.9 on the 11/34 would be my best
hope? But with a much more antique tcp stack, it sounds daunting.
On Thu, Nov 20, 2014 at 9:05 PM, Jim Carpenter <jim(a)deitygraveyard.com>
wrote:
>
>
> 2.10 and 2.11 require split I/D, right? I'm positive 2.9 was the
> latest I could run on my 11/34.
>
> Jim
>
Hi all,
I've been using window(1) on my simh-emulated 11/73, but it can't handle
terminals much larger than 80x24, failing with "Out of memory."
I'd like to use window(1) to drive a big xterm, like 132x66, for
instance, because I'd like to reduce the number of telnet connections to
the host.
How does one go about analyzing and remediating the memory contention in
this environment?
If anyone's interested, we could set up a pair programming session to
work on it together, which I think would be most instructive, for me, at
least.
Bear in mind that this is just for pdp11 voyeurism / fun.
thx
jake
I've been thinking more about early yacc.
It's not mentioned explicitly but I'm wondering if early Yacc's output
(say in Unix version 3) was in B language since it was written in B
language? It seems logical but I can't back up this assertion as
there's no executable or source code that I can find. I assume there
had to be some sort of B language compiler at some point but the
hybrid v1/v2 unix I've looked at doesn't have it.
And I'm still wondering what yacc was used for in the Unix v5 era.
There's no *.y at all, e.g. no expr and no bc. I still have some hopes
of modifying bc to run on Unix v5, or at least getting some simple
yacc program to work under the v5 version.
Mark
I just saw this video mentioned on reddit...
https://www.youtube.com/watch?v=XvDZLjaCJuw
UNIX: Making Computers Easier To Use -- AT&T Archives film from 1982, Bell
Laboratories
It features many of Bell UNIX folks, and even includes a brief example of
speak in action at about the 15:20 mark.
It's really cool to see the proliferation of UNIX by 1982 inside Bell.
The Xerox Alto and CP/M are not Unix-derived, but the first in
particular influenced the design of Unix workstations and the X11
Window System in the 1980s, so this story may be of interest to list
readers:
Exposed: Xerox Alto and CP/M OS source code released
The Computer History Museum has made the code behind yet more
historic software available for download
http://www.itworld.com/article/2838925/exposed-xerox-alto-and-cp-m-os-sourc…
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
On 2014-10-28 13:42, Clem Cole <clemc(a)ccc.com> wrote:
> yes:
> http://repository.cmu.edu/cgi/viewcontent.cgi?article=3241&context=compsci
Cool. I knew CMU did a lot of things with 11/40 machines. I didn't know
they had modified them to be able to write their own microcode, but
thinking about it, it should have been obvious. As they did
multiprocessor systems based on the 11/40, they would have had to modify
the microcode anyway.
> I had a 60 running v7 years later. we also toyed with adding CSV/CRET
> but never did it because we got an 11/70
>> >On Oct 27, 2014, at 9:09 PM, Dave Horsfall<dave(a)horsfall.org> wrote:
>> >
>>> >>On Mon, 27 Oct 2014, Clem Cole wrote:
>>> >>
>>> >>[...] because the CMU 11/40E had special CSV/CRET microcode which we
>>> >>could not use on the 11/34.
>> >
>> >The 40E had microcode whilst the vanilla 40 didn't? I thought only the 60
>> >was micro-programmable; I never did get around to implementing CSV/CRET on
>> >our 60 (Digital had a bunch of them when a contract with a publishing
>> >house fell through).
DEC actually made two PDP-11s that were microprogrammable. The 11/60
and the 11/03 (if I remember right). DEC never had microprogramming for
the 11/40, but obviously CMU did that.
Ronald Natalie<ron(a)ronnatalie.com> wrote:
>> >On Oct 27, 2014, at 10:06 PM, Clem Cole<clemc(a)ccc.com> wrote:
>> >
>> >yes: http://repository.cmu.edu/cgi/viewcontent.cgi?article=3241&context=compsci
>> >
>> >I had a 60 running v7 years later. we also toyed with adding CSV/CRET but never did it because we got an 11/70
> Problem with the 60 was it lacked Split I/D (as did the 40's). We kind of relied on that for the kernels towards the end of the PDP-11 days,
> We struggled with the lack of I/D on the 11/34 and 11/23 at BRL but finally gave up when TCP came along. We just didn't have enough segments to handle all the overlaying we needed to do. I recycled all the non split-I/D machines into BRL GATEWAYS.
>
> Of course, there was the famous (or infamous) MARK instruction. This thing was sort of a kludge, you actually pushed the instruction on the stack and then did the RTS into the stack to execute the MARK to pop the stack and jump back to the caller. I know of no compiler (either DEC-written or UNIX) that used the silly thing. It obviously wouldn't work in split I/D mode anyhow. Years later while sitting in some DEC product announcement presentation, they announced the new T-11 chip (the single chip PDP-11) and the speaker said that it supported the entire instruction set with the exception of MARK. Me and one other PDP-11 trivia guy are going "What? No mark instruction?" in the back of the room.
Yurg... The MARK instruction was just silly. I never knew of anyone who
actually used it. Rumors have it that DEC just came up with it to be
able to extend some patent for a few more years related to the whole
PDP-11 architecture.
Clem Cole<clemc(a)ccc.com> wrote:
>> >Problem with the 60 was it lacked Split I/D (as did the 40's).
>
> A problem was that it was a 40 class processor and as you say that means it
> was shared I/D (i.e. pure 16 bits) - so it lacked the 45 class 17th bit.
> The 60 has gone into history as the machine that went from product to
> "traditional products" faster than any other DEC product (IIRC 9 months).
> I'm always surprised to hear of folks that had them because so few were
> actually made.
I picked up four 11/60 machines from a place in the late 80s. I still
have a complete set of CPU cards, but threw the last machine away about
10 years ago.
> I've forgotten the details nows, but they also had some issues when running
> UNIX. Steve Glaser and I chased those for a long time. The 60 had the HCM
> instruction sequences (halt and confuse microcode) which were somewhat
> random although UNIX seemed to hit them. DEC envisioned it as a commercial
> machine and added decimal arithmetic to it for RSTS Cobol. I'm not sure
> RSX was even supported on it.
RSX-11M supports it. So do RSTS/E and RT-11. RSX-11M-PLUS obviously
doesn't, since it has a minimum requirement of 22-bit addressing.
The microcode-specific instructions are interesting, but in general they
shouldn't crash things; the kernel, of course, is a different story. :-)
Johnny
Has anyone ever tried to compile any of the old C compilers with a 'modern'
C compiler?
I tried a few from the 80s (Microsoft/Borland) and there is a bunch of
weird stuff: integers suddenly become structs, and structures reference
fields that aren't in that struct, e.g.:
c01.c:
    register int t1;
    ....
    t1->type = UNSIGN;
And my favorite, which is closing a bunch of file handles for the heck of it,
and redirecting stdin/stdout/stderr from within the program instead of just
opening the file and using fread/fwrite:
c00.c:
    if (freopen(argv[2], "w", stdout)==NULL ||
        (sbufp=fopen(argv[3],"w"))==NULL)
How did any of this compile? How did this stuff run without clobbering
each other?
I don't know why but I started to look at this stuff with some half-hearted
attempt at getting Apout running on Windows. Naturally there is no fork, so
when a child process dies, the whole thing crashes out. I guess I could
simulate a fork with threads and containing all the cpu variables to a
structure for each thread, but that sounds like a lot of work for a limited
audience.
But there really is some weird stuff in v7's c compiler.
Wow BSD on a supercomputer! That sounds pretty cool!
http://web.ornl.gov/info/reports/1986/3445600639931.pdf
From there it mentions it could scale to 16 process execution modules
(CPUs?)
while here http://ftp.arl.mil/mike/comphist/hist.html it mentions 4 PEMs
which each could run 8 processes.
It still looks like an amazing machine.
-----Original Message-----
From: Ronald Natalie
To: Noel Chiappa
Cc: tuhs(a)minnie.tuhs.org
Sent: 10/27/14 11:09 PM
Subject: Re: [TUHS] speaking of early C compilers
We thought the kernels got cleaned up a lot by the time we got to the
BSD releases. We were wrong.
When porting our variant of the 4 BSD to the Denelcor HEP supercomputer
we found a rather amusing failure.
The HEP was a 64 bit word machine but it had partial words of 16 and 32
bits. The way it handled these was to encode the word size in the
lower bits of the address (since the bottom three weren't used in word
addressing anyhow). If the bottom three were zero, then it was the
full word. If it was 2 or 6, it was the left or right half word, and
1,3, 5, and 7 yielded the four quarter words. (Byte operations used
different instructions so they directly addressed the memory).
Now Mike Muuss who did the C compiler port made sure that all the casts
did the right thing. If you cast "int *" to "short *" it would tweak
the low order bits to make things work. However the BSD kernel in
several places did what I call conversion by union: essentially this:
    union carbide {
        char  *c;
        short *s;
        int   *i;
    } u;
    u.s = ...some valid short* ...
    int* ip = u.i;
Note the compiler has no way of telling that you are storing and
retrieving through different union members and hence the low order bits
ended up reflecting the wrong word size and this led to some flamboyant
failures. I then spent a week running around the kernel making these
void* and fixing up all the access points to properly cast the accesses
to it.
The other amusing thing was what to call the data types. Since this
was a word machine, there was a real predisposition to call the 64 bit
sized thing "int" but that meant we needed another typename for the 32
bit thing (since we decided to leave short for the 16 bit integer).
I lobbied hard for "medium" but we ended up using int32. Of course,
this is long before the C standards ended up reserving the _ prefix for
the implementation.
The aforementioned fact that all the structure members shared the same
namespace in the original implementation is why the practice of using
letter prefixes on them (like b_flags and b_next, rather than just
flags or next) persisted long after the C compiler got this issue
resolved.
Frankly, I really wish they'd have fixed arrays in C to be properly
functioning types at the same time they fixed structs to be proper types
as well. Around the time of the typesetter or V7 releases we could
assign and return structs but arrays still had the silly "devolve into
pointers" behavior that persists unto this day and still causes problems
among the newbies.
_______________________________________________
TUHS mailing list
TUHS(a)minnie.tuhs.org
https://minnie.tuhs.org/mailman/listinfo/tuhs
> From: Dave Horsfall <dave(a)horsfall.org>
> What, as opposed to spelling creat() with an "e"?
Actually, that one never bothered me at all!
I tended to be more annoyed by _extra_ characters; e.g. the fact that 'change
directory' was (in standard V6) "chdir" (as opposed to just plain "cd") I
found far more irritating! Why make that one _five_ characters, when most
common commands are two?! (cc, ld, mv, rm, cp, etc, etc, etc...)
Noel
Norman Wilson writes today:
>> ...
>> -- Dennis, in one of his retrospective papers (possibly that
>> in the 1984 all-UNIX BLTJ issue, but I don't have it handy at
>> the moment) remarked about ch becoming chdir but couldn't
>> remember why that happened.
>> ...
The reference below contains on page 5 this comment by Dennis:
>> (Incidentally, chdir was spelled ch; why this was expanded when we
>> went to the PDP-11 I don't remember)
@String{pub-PH = "Pren{\-}tice-Hall"}
@String{pub-PH:adr = "Upper Saddle River, NJ 07458, USA"}
@Book{ATT:AUS86-2,
author = "AT{\&T}",
key = "ATT",
title = "{AT}{\&T UNIX} System Readings and Applications",
volume = "II",
publisher = pub-PH,
address = pub-PH:adr,
pages = "xii + 324",
year = "1986",
ISBN = "0-13-939845-7",
ISBN-13 = "978-0-13-939845-2",
LCCN = "QA76.76.O63 U553 1986",
bibdate = "Sat Oct 28 08:25:58 2000",
bibsource = "http://www.math.utah.edu/pub/tex/bib/master.bib",
acknowledgement = ack-nhfb,
xxnote = "NB: special form AT{\&T} required to get correct
alpha-style labels.",
}
That chapter of that book comes from this paper:
@String{j-ATT-BELL-LAB-TECH-J = "AT\&T Bell Laboratories Technical Journal"}
@Article{Ritchie:1984:EUT,
author = "Dennis M. Ritchie",
title = "Evolution of the {UNIX} time-sharing system",
journal = j-ATT-BELL-LAB-TECH-J,
volume = "63",
number = "8 part 2",
pages = "1577--1593",
month = oct,
year = "1984",
CODEN = "ABLJER",
  DOI =          "http://dx.doi.org/10.1002/j.1538-7305.1984.tb00054.x",
ISSN = "0748-612X",
ISSN-L = "0748-612X",
bibdate = "Fri Nov 12 09:17:39 2010",
bibsource = "Compendex database;
http://www.math.utah.edu/pub/tex/bib/bstj1980.bib",
abstract = "This paper presents a brief history of the early
development of the UNIX operating system. It
concentrates on the evolution of the file system, the
process-control mechanism, and the idea of pipelined
commands. Some attention is paid to social conditions
during the development of the system.",
acknowledgement = ack-nhfb,
fjournal = "AT\&T Bell Laboratories Technical Journal",
topic = "computer systems programming",
}
Incidentally, on modern systems with tcsh and csh, I use both chdir
and cd; the long form does the bare directory change, whereas the
short form is an alias that also updates the shell prompt string and
the terminal window title.
I also have a personal alias "xd" (eXchange Directory) that is short
for the tcsh & bash sequence "pushd !*; cd .", allowing easy jumping
back and forth between pairs of directories, with updating of prompts
and window titles.
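A rough bash rendering of the idea (the function names follow the message; the prompt and title handling are guesses at the effect described, not Beebe's actual dotfiles):

```shell
# cd: change directory, then refresh the prompt and the xterm title.
cd() {
    builtin cd "$@" || return
    PS1='\w \$ '
    printf '\033]0;%s\007' "$PWD"    # set terminal window title
}

# xd: push the new directory (or swap the top two stack entries when
# called with no argument), then re-run cd to update prompt and title.
xd() {
    if [ "$#" -gt 0 ]; then
        pushd "$@" >/dev/null || return
    else
        pushd >/dev/null || return
    fi
    cd .
}
```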
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Jason Stevens
> has anyone ever tried to compile any of the old C compilers with a
> 'modern' C compiler?
> ...
> How did any of this compile? How did this stuff run without clobbering
> each-other?
As Ron Natalie said, the early kernels are absolutely littered with all sorts
of stuff that, by today's standards, are totally unacceptable. Using a
variable declared as an int as a pointer, using a variable declared as a
'foo' pointer as a 'bar' pointer, yadda-yadda.
I ran (tripped, actually :-) across several of these while trying to get my
pipe-splicing code to work. (I used Version 6 since i) I am _totally_
familiar with it, and ii) it's what I had running.)
For example, I tried to be all nice and modern and declared my pointer
variables to be the correct type. The problem is that Unix generated unique
ID's to sleep on with code like "sleep(p+1, PPIPE)", and the value generated
by "p+1" depends on what type "p" is declared as - and if you look in pipe.c,
you'll see it's often declared as an int pointer. So when _I_ wrote
"sleep((p + 1), PPIPE)", with "p" declared as a "struct file pointer", I got
the wrong number.
I can only speculate as to why they wrote code like this. I think part of it
is, as Brantley Coile points out, historical artifacts due to the evolution
of C from (originally) BCPL. That may have gotten them used to writing code
in a certain way - I don't know. I also expect the modern mindset (of being
really strict about types, and formal about converting data from one to
another) was still evolving back then - partly because they often didn't
_have_ the tools (e.g. casts) to do it right. Another possibility is that
they were using line editors, and maintaining more extensive source is a pain
with an editor like that. Why write "struct file *p" when you can just write
"*p"? And of course everyone was so space-conscious back then, with those tiny
disks (an RK05 pack is, after all, only 2.5MB - only slightly larger than a
3.5" floppy!) every byte counted.
I have to say, though, that it's really kind of jarring to read this stuff.
I have so much respect for their overall structure (the way the kernel is
broken down into sub-systems, and the sub-systems into routines), how they
managed to get a very powerful (by anyone's standards, even today's) OS into
such a small amount of code... And the _logic_ of any given routine is
usually quite nice, too: clear and efficient. And I love their commenting
style - no cluttering up the code with comments unless there's something that
really needs elucidation, just a short header to say, at a high level, what
the routine does (and sometimes how and why).
So when I see these funky declarations (e.g. "int *p" for something that's
_only_ going to be used to point to a "struct file"), I just cringe - even
though I sort of understand (see above) why it's like that. It's probably the
thing I would most change, if I could.
Noel
Noel Chiappa:
> I tended to be more annoyed by _extra_ characters; e.g. the fact that
> 'change directory' was (in standard V6) "chdir" (as opposed to just
> plain "cd") I found far more irritating! Why make that one _five_
> characters, when most common commands are two?! (cc, ld, mv, rm, cp,
> etc, etc, etc...)
In the earliest systems, e.g. that on the PDP-7, the change-directory
command was just `ch'.
Two vague memories about the change:
-- Dennis, in one of his retrospective papers (possibly that
in the 1984 all-UNIX BLTJ issue, but I don't have it handy at
the moment) remarked about ch becoming chdir but couldn't
remember why that happened.
-- Someone else, possibly Tom Duff, once suggested to me that
in the earliest systems, the working directory was the only
thing that could be changed: no chown, no chmod. Hence just
ch for chdir. I don't know offhand whether that's true, but
it makes a good story.
Personally I'd rather have to type chdir and leav off th
trailing e on many other words than creat if it let me off
dealing with pieces of key system infrastructure that insist
on printing colour-change ANSI escape sequences (with, so far
as I can tell, no way to disable them) and give important files
names beginning with - so that grep pattern * produces an error.
But that happens in Linux, not UNIX.
Norman Wilson
Toronto ON