On 2015-12-12 07:16, William Pechter<pechter(a)gmail.com> wrote:
>
> Warren Toomey wrote:
>> On Sat, Dec 12, 2015 at 03:54:16PM +1100, Peter Jeremy wrote:
>>> Also, I've seen suggestions that there's a 2.11BSD patch later than
>>> 447 but I can't find anything "official" and www.2bsd.com is either
>>> down or inaccessible from all the systems I have access to. Does
>>> anyone know if 448 or later were released? And given the issues with
>>> www.2bsd.com would someone be willing to mirror it (assuming we can
>>> get a copy of it)?
>> [ Back to a real keyboard ]. Yes I'd be very happy to mirror 2bsd.com.
>> Does anybody know what's happened to Steven Schultz?
>>
>> Cheers, Warren
> Last patch is 447 from June 2012.
Uh. No. 447 is from December 31, 2008.
See /VERSION in the patch set, which holds the patch version and date
for the patch.
And I did an unofficial 448 in 2010, which I have tried to spread, and
which I suspect is the patch referred to above...
> I can get to the site just fine... pasted the patch below if it helps
> anyone.
> I haven't heard anything about him. Haven't worked at the same company
> since the early 1990's...
I used to talk with him a lot in the past, but have not been able to
raise him, and haven't seen anything from him in over 5 years... No idea
what he is up to nowadays...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Random832
> Interestingly, the SysIII sum.c program, which I assume yields the same
> result for this input, appears to go through the whole input
> accumulating the sum of all the bytes into a long, then adds the two
> halves of the long at the end rather than after every byte.
That's the same hack a lot of TCP/IP checksums routines used on machines with
longer words; add the words, then fold the result in the shorter length at the
end. The one I wrote for the 68K back in '84 did that.
> This suggests that the two programs would give different results for
> very large files that overflow a 32-bit value.
No, I don't think so, depending on the exact details of the implementation. As
long as when folding the two halves together, you add any carry into the sum,
you get the same result as doing it into a 16-bit sum. (If my memory of how
this all works is correct - the neurons aren't what they used to be,
especially late in the day... :-)
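FWIW, the fold itself is only a couple of lines; a minimal sketch in modern C
(my own illustration of the trick, not code from any of the systems mentioned,
and the function name is made up):

	/* fold a 32-bit byte sum down to 16 bits with end-around carry */
	unsigned int
	fold(unsigned long s)
	{
		while (s >> 16)
			s = (s & 0xFFFF) + (s >> 16);
		return (unsigned int)s;
	}

Feeding the carries back in like this is what makes the wide accumulation come
out the same as a 16-bit add-with-carry loop.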
> Also, if this sign extends, then its behavior on "negative" (high bit
> set) bytes is likely to be very different from the SysIII one, which
> uses getc.
I have this bit set that in C, 'char' is defined to be signed, and
furthermore that when you assign a shorter int to a longer one, the sign is
extended. So if one has a char holding '0200' octal (i.e. -128), assigning it
to a 16-bit int should result in the latter holding '0177600' (i.e. still
-128). So in fact I think they probably act the same.
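A two-line check, assuming a compiler where plain 'char' is signed (it is
implementation-defined in modern C, but that is how the PDP-11 compilers
behaved):

	char c = 0200;	/* 0200 = -128 in a signed 8-bit char */
	int  i = c;	/* sign extension: i ends up 0177600, i.e. still -128 */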
Noel
> From: Will Senn
> I noticed that the sum utility from v6 reports a different checksum
> than it does using the sum utility from v7 for the same file.
> ... does anyone know what's going on here?
> Why is sum reporting different checksums between v6 and v7?
The two use different algorithms to accumulate the sum (I have added comments
to the relevant portion of the V6 assembler one, to help understand it):
V6:
mov $buf,r2 / Pointer to buffer in R2
2: movb (r2)+,r4 / Get new byte into R4 (sign extends!)
add r4,r5 / Add to running sum
adc r5 / If overflow, add carry into low end of sum
sob r0,2b / If any bytes left, go around again
Read the description of MOVB in the PDP-11 Processor manual.
V7:
	while ((c = getc(f)) != EOF) {
		nbytes++;
		if (sum&01)
			sum = (sum>>1) + 0x8000;
		else
			sum >>= 1;
		sum += c;
		sum &= 0xFFFF;
	}
I'm not clear on some of that, so I'll leave its exact workings as an
exercise, but I'm pretty sure it's not an equivalent algorithm (as in,
something that would produce the same results); it's certainly not
identical. (The right shift is basically a rotate, so it's not a straight sum,
it's more like the Fletcher checksum used by XNS, if anyone remembers that.)
Among the parts I don't get, for instance, sum is declared as 'unsigned',
presumably 16 bits, so the last line _should_ be a NOP!? Also, with 'c' being
implicitly declared as an 'int', does the assignment sign extend? I have this
vague memory that it does. And does the right shift if the low bit is one
really do what the code seems to indicate it does? I have this bit that ASR on
the PDP-11 copies the high bit, not shifts in a 0 (check the processor
manual). That is, of course, assuming that the compiler implements the '>>'
with an ASR, not a ROR followed by a clear of the high bit, or something.
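For comparison, here's a rough C transcription of the V6 loop above (mine, not
from any distribution; 'v6sum' is a made-up name). The (signed char) cast
models MOVB's sign extension, and folding the carry back in models the ADC:

	unsigned int
	v6sum(unsigned char *buf, int n)
	{
		unsigned int sum = 0;
		int i;

		for (i = 0; i < n; i++) {
			unsigned int w = (unsigned short)(signed char)buf[i];
			sum += w;				/* add r4,r5 */
			sum = (sum & 0xFFFF) + (sum >> 16);	/* adc r5 */
		}
		return sum;
	}

As for the V7 fragment: since 'sum' is unsigned, '>>' is a logical shift, so
the if/else pair really is a 16-bit rotate right; and the final '& 0xFFFF'
would indeed be a no-op on a 16-bit unsigned, though it does no harm.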
Noel
Ok, it definitely sounds like the v6tar source is around somewhere so
if someone could point me in the right direction...
I've only seen the binary, and I can't remember where I got it.
Mark
All,
While working on the latest episode of my saga about moving files
between v6 and v7, I noticed that the sum utility from v6 reports a
different checksum than it does using the sum utility from v7 for the
same file. To confirm, I did the following on both systems:
# echo "Hello, World" > hi.txt
# cat hi.txt
Hello, World
Then on v6:
# sum hi.txt
1106 1
But on v7:
# sum hi.txt
37264 1
There is no man page for the utility on v6, and it's in assembler. On v7,
there's a man page and it's in C:
man sum
...
Sum calculates and prints a 16-bit checksum for the named
file, and also prints the number of blocks in the file.
...
A few questions:
1. I'll eventually be able to read assembly and learn what the v6
utility is doing the hard way, but does anyone know what's going on here?
2. Why is sum reporting different checksums between v6 and v7?
3. Do you know of an alternative to check that the bytes were
transferred exactly? I used od and then compared the text representation
of the bytes on the host using diff (other than differences in output
between v6 and v7 related to duplicate lines, it worked ok but is clunky).
Thanks,
Will
All,
In my exploration of v6, I followed the advice in "Setting up Unix -
Seventh Edition" and copied v6tar from v7 to v6. Life is good. However,
tar is using mt1 and it is hard coded into the source, tar.c:
char magtape[] = "/dev/mt1";
As the subject line suggested, I have two questions for those of you who
might know:
1. Why is it hard coded?
2. Why is it the second device and not the first?
Interestingly, it took me a little while to figure out it was doing this
because I didn't actually move files between v6 and v7 until today.
Before this my tests had been limited to separate tests on v6 and v7
along the lines of:
cd /wherever
tar c .
followed by
tar t
list of files
cd /elsewhere
tar x
files extracted and matching
What it was doing was writing to the non-existent /dev/mt1, which it
then created, tarring up stuff, and exiting. Then when I listed the
contents of the tarfile, or extracted the contents, it was successful.
But, when I went to move the tape between v6 and v7, the tape (mt0) was
blank, of course. It was at this point that I followed Noel's advice
and "Used the source", and figured out that it was hard-coded as you see
above.
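(For the record: since v6tar is built from the V7 tar source, the 'f' key in
its usage message should let me name the tape explicitly instead of editing
magtape[] - something like the following, though I haven't gone back and
retried it yet:)

	# v6tar cf /dev/mt0 .
	# v6tar tf /dev/mt0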
Thanks,
Will
That's exactly right. ld performs the same task as LOAD did on BESYS,
except it builds the result in the file system rather than user
space. Over time it became clear that "linker" would be a better
term, but that didn't warrant canning the old name. Gresham's law
then came into play and saddled us with the ponderous and
misleading term, "link editor".
Doug
> My understanding, which predates my contact with Unix, is that the
> original toolchains for single-job machines consisted of the assembler
> or compiler, the output of which was loaded directly into core with
> the loader. As things became more complicated (and slow), it made
> sense to store the memory image somewhere on drum, and then load that
> image directly when you wanted to run it. And that in some systems
> the name "loader" stuck, even though it no longer loaded. Something
> like the modern ISP use of the term "modem" to mean "router". But I
> don't have anything to back up this version; comments welcome.
> estabur (who thought these names up, I know 8 characters is limiting,
> but c'mon)
'establish user mode registers'
> the 411 header is read by a loader
Actually, it's read by the exec() system call (in sys1.c).
Noel
> From: Dave Horsfall
> I love those PDP-11 instructions, such as "blos" and "sob" :-)
Yes, but alas, there is no 'jump [on] no carry' instruction! (Yes, yes, I
know about BCC! :-) Although I guess the x86 has one...
Noel
> Yes the V6 kernel runs in split I and D mode, but it doesn't end up
> supporting any more data. I.e. the kernel is still a 407 (or 410) file.
> _etext/_edata/_end are still referencing the same 64K space.
Err, actually, not really.
The thing is that to build the split-I/D kernel, one sets the linker to
produce an output file which still contains the relocation bits. That is then
post-processed by 'sysfix', which does weird magic (moves the text up to
020000, in terms of address space; and puts the data _below_ the text, in the
actual output file). So while the files concerned may have a '407' in their
header, they definitely aren't what one normally finds in a linked 407 or 410
file.
In particular, data addresses start at 0, and can run up to 0140000 (i.e. up
to 56KB), while text addresses start at 020000 and can run up to 0160000. So,
_etext/_edata/_end are not, in fact, in the same 64K space. And the total of
data (initialized and un-initialized) together with the text can be much
larger than 64KB - up to 112KB or so.
Noel
J.F. Ossanna (jfo) was born in 1928; he helped give us Unix, and developed
the ROFF series (which I still use).
And Ada Lovelace, the world's first computer programmer, was coded in 1815.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Ronald Natalie
> I'm pretty sure the V6 kernel didn't run in split I/D.
Nope. From 'SETTING UP UNIX - Sixth Edition':
"Another difference is that in 11/45 and 11/70 systems the instruction and
data spaces are separated inside UNIX itself."
And if you don't believe that, check out:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
the source! ;-)
> It wasn't too involved of a change to make a split I/D kernel.
> Mike Muuss and his crew at JHU did it.
Maybe you're remembering the process on a pre-V6 system?
> We spent more time getting the bootstrap to work than anything else I
> recall.
It's possible you're remembering that, as distributed, V6 didn't support load
images where the text+initialized-data was larger than 24KW-delta; it would
have been pretty easy to up that to 28KW-delta (change a parameter in the
bootstrap source, and re-assemble), but after that, the V6 bootstrap would
have had to have been extensively re-worked.
And there were _also_ a variety of issues with handling maximal large images
in the startup code. Once operating, the kernel has segments KI1-KI7 available
to hold the system's code; however, it's not clear that all of KI1-7 are
really usable, since the system can't 'see' enough code while in the code
relocation phase in the startup to fill them all. E.g. during code relocation,
KI7 is ripped off to hold a pointer to I/O space (since KD7 is set to point to
low memory just after the memory that KD6 points to).
These might have been issues in systems which were ARPANET-connected (i.e.
ran NCP), as that added a very large amount of code to the kernel.
Noel
> From: Will Senn
> my now handy-dandy PDP11/40 processor handbook
That's good for the instruction set, but for the memory management hardware,
etc you'll really want one of the {/44, /45, /70, /73, etc} group, since only
those models support split I+D.
> the 18 bits holding the word 000407
You mean '16 bits', right? :-)
> This means that branches are to 9th, 10th, 11th and 7th words,
> respectively. It'll be a while before I really understand what the
> ramifications are.
Only the '407' is functional. (IIRC, in early UNIX versions, the OS didn't
strip the header on loading a new program into memory, so the 407 was actually
executed.) The others are just magic numbers, inspired by the '407' - the
code always starts at byte 020 in the file.
> Oh and by the way, jumping between octal and decimal is weird, but
> convenient once you get the hang of it - 512 is 1000, which is nifty
> and makes finding buffer boundaries in an octal dump easy :).
The _real_ reason octal is optimal for PDP-11 is that when looking at core,
most instructions are more easily understood in octal, because the PDP-11 has
8 registers (3 bits), and 3 bits worth of mode modifier, and the fields are
aligned to show up in octal.
I.e. in the double-op instruction '0awxyz', the 'a' octit gives the opcode,
'w' is the mode for the first operand, 'x' is the register for the first
operand, and 'y' and 'z' similarly for the second operand. So '12700' is
'MOV (PC)+, R0' - AKA 'MOV #bbb, R0', where 'bbb' is the contents of the word
after the '12700'.
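A throwaway fragment (my own, variable names invented) that pulls a
double-operand word apart the same way you do it in your head:

	int w = 012700;			/* MOV (PC)+,R0 */
	int byteop = (w >> 15) & 01;	/* 0 = word op (1 = byte op for most of this group) */
	int op     = (w >> 12) & 07;	/* 1 = MOV */
	int smode  = (w >> 9) & 07;	/* 2 = autoincrement */
	int sreg   = (w >> 6) & 07;	/* 7 = PC, so (PC)+ is an immediate */
	int dmode  = (w >> 3) & 07;	/* 0 = register */
	int dreg   = w & 07;		/* 0 = R0 */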
Noel
> From: Will Senn <will.senn(a)gmail.com>
> The problem is this, when I attempt to execute the v6tar binary on the
> v6 system (it works in v7) it errors out:
> v6tar
> v6tar: too large
That's an error message from the shell; the exec() call on the command
('v6tar') is returning an ENOMEM error. Looking in the kernel, that comes from
estabur() in main.c; there are a number of potential causes, but the most
likely is that 'v6tar' is linked to be split I+D, and your V6 emulation is on
a machine that doesn't have split I+D (e.g. an 11/40). If that's not it,
please dump the a.out header of 'v6tar', so we can work out what's causing the
ENOMEM.
Noel
> From: Will Senn
> Thanks for supplying the logic trail you followed as well!
"Use the source, Luke!" This is particularly true on V6, where it's assumed
that recourse to the source (which is always at hand - long before RMS and
'Free Software', mind) will be an early step.
> when you say dump the a.out header, how do you do that?
On vanilla V6? Hmm. On a system with 'more' (hint, hint ;-), I'd just do 'od
{file} | more', and stop it after the first page. Without 'more', I'd probably
send the 'od' output to a file, and use 'ed' to look at the first couple of
lines.
Back in the day, of course, on a (slow) printing terminal, one could have just
said 'od', and aborted it after the first couple of lines. These days, with
video terminals, 'more' is kind of really necessary. Grab the one off my V6
Unix site, it's V6-ready (should be a compile-and-go).
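For reference, the header is eight 16-bit words at the front of the file, so
the first line of 'od' output is all you need. From memory of a.out(5) - do
double-check against your system's header file - the words are, in order:

	int a_magic;	/* 0407, 0410 or 0411 (0411 = separate I and D) */
	int a_text;	/* size of text segment, bytes */
	int a_data;	/* size of initialized data, bytes */
	int a_bss;	/* size of uninitialized data, bytes */
	int a_syms;	/* size of symbol table, bytes */
	int a_entry;	/* entry point */
	int a_unused;
	int a_flag;	/* nonzero if relocation info has been stripped */

So the very first octal word printed tells you whether the binary is a 407,
410 or 411 file.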
Noel
> From: Mark Longridge
> I've never been able to transfer any file larger than 64K to Unix V5 or
> V6.
Huh?
# hrd /windows/desktop/TheMachineStops.htm Mach.htm
Xfer complete: 155+38
# l Mach.htm
154 -rw-rw-r-- 1 root 78251 Oct 25 12:13 Mach.htm
#
'more' shows that the contents are all there, and fine. ('hrd' is a command
in my V6 under Ersatz11 that reads an arbitrary file off the host file
system. Guess I need to set the date on the system!)
V6 definitely supports fairly large files; see the code in bmap() in subr.c,
which shows that the basic structure on disk can describe files of 7*256
(1792) + 256*256 (65536) blocks, or 67328 blocks total (34MB).
(In reality, of course, a file can't reach that limit; first, a disk
partition in V6 is limited to 64K blocks, but from that one has to deduct
blocks for the ilist, etc; further, the argument to bmap() is an int, which
limits the 'block number in file' to 16 bits, and in fact the code returns an
error if the high bit in the 'block number in file' is set.)
> I also don't recall seeing any file on V5 or V6 larger than 65536
> bytes
I don't think there is one; the largest are just less than 64KB. I don't
think this is deliberate, other than in the sense that they didn't put any
huge files in the distro so it would fit on a couple of RK packs.
> dd if=/dev/mt0 of=cont.a bs=1 count=90212
> ..
> 24676+0 records in
> 24676+0 records out
> Now, if we take 90212 and subtract 65536 we get 24676 bytes. So there
> definitely seems to be some 64K limit here
Probably 'count' is an 'int' in dd, i.e. limited to 16 bits. No longs in V6 C
(as distributed, although later versions of the C compiler for V6 do support
longs - see my 'bringing up Unix' pages).
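In other words (a trivial illustration, not the actual dd source):

	long requested = 90212L;		/* 90212 = 65536 + 24676 */
	unsigned count = requested & 0xFFFF;	/* what a 16-bit int keeps: 24676 */

which matches the 24676 records reported above exactly.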
Noel
> From: Noel Chiappa
> the most likely is that 'v6tar' is linked to be split I+D, and your V6
> emulation is on a machine that doesn't have split I+D (e.g. an 11/40)
Now that I think about it, the linked systems that are part of the V6 distro
tape are all linked to run on an 11/40. They will boot and run OK on a more
powerful machine (/45 or /70), but they will act like they are on a /40 -
i.e. no split I+D support/use (user or kernel). So to get split I+D support,
you need to build a new Unix binary, with m45.s instead of m40.s. If you
haven't done that, that's probably what the problem is.
Aside: V6 comes in two flavours: no split I+D at all, or split I+D in both
the kernel and user. For some reason that I can't recall, we actually
produced an 'm43.s', BITD at MIT, which ran the kernel in non-split-I-D, but
supported split I-D for the users.
I wish I could remember why we did this - it couldn't have been to save
memory (the machine didn't have a great deal on it when this was done -
although I have this vague memory that that was why we did it), because
running split I+D in the kernel does not, I think, use any more physical
memory (provided you don't fiddle with the parameters like the number of
buffers) than running non-split. Or maybe it does?
One possible reason was that the odd layout of memory with split I+D in the
kernel made debugging kernel code harder (we were doing a lot of kernel
hacking to support early networking work); another was that we were just being
conservative, didn't need the extra space in the kernel that I+D allowed, and
so didn't want to run it.
Noel
All, in the next few days I'm migrating minnie.tuhs.org from one VM to
another, so as to upgrade the OS and clean out the system. I think I've
got the mail subsystem up and running, but as usual there may be bugs.
I'll send out another message when the system is cut over. If things
don't seem to be right, e-mail me at:
wkt at tuhs.org, or
warren.toomey at tafe.qld.edu.au if the tuhs.org one fails.
Cheers all, Warren
On Tue, 8 Dec 2015, Brantley Coile wrote:
> We were indeed lucky that admiral hooper was with us. I know people who
> still cherish their "nano" seconds.
Ah yes, the 1ft piece of wire... Got a photo of it?
> By the way, she wouldn't have said she coined the term "debugging". That
> is at least as old as Thomas Edison. She said she was the first to a
> actually find a real bug!
For those who may be new around here:
https://en.wikipedia.org/wiki/Grace_Hopper#/media/File:H96566k.jpg
Yes, that is a real bug, found inside a real computer.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
All,
According to "Setting Up Unix - Seventh Edition", by Haley and Ritchie:
The best way to convert file systems from 6th edition (V6) to 7th
edition (V7) format is to use tar(1). However, a special version of tar
must be prepared to run on V6.
The document goes on to describe a reasonable method to make v6tar on v7
and copy the binary over to the v6 system. I successfully built the
v6tar binary, which will execute in the v7 environment. I then moved it
over to the v6 system and did a byte compare on the file using od to
dump the octal bytes and then compare them to the v7 version. The
match was perfect.
The problem is this, when I attempt to execute the v6tar binary on the
v6 system (it works in v7) it errors out:
v6tar
v6tar: too large
on the v7 system, it works:
v6tar
tar: usage tar -{txru}[cvfblm] [tapefile] [blocksize] file1 file2...
I don't think the binary is too large, it is only 18148 bytes.
ls -l v6tar
-rwxrwxrwx 1 root 18148 Oct 10 14:09 v6tar
Help. First, what does too large mean? Second, does this sound familiar
to anyone? etc.
Thanks,
Will
OK, slightly OT...
Rear Admiral Grace ("Amazing") Hopper PhD was given unto us in 1906. She
was famous for coining the term "debugging", whereby a moth was removed
from a relay contact in a *real* computer[*].
However, she must be condemned for giving us COBOL; yes, I know that vile
language, but I carefully leave it off my CV, as it seemed to be designed
for suits (Business Studies of course, but nothing technical) to spy upon
their programmers.
[*]
Defined, of course, where you could open a door and step inside it; I
actually did that once.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> It might not be so much a set of macros as just using a
> subset of raw groff.
Yes, there were no macros back then. If you format the
document using raw groff, the odds are that you will be
speaking the same roff that Dennis did.
> Doug having been there, might know/remember the actually lineage.
Aside from some fuzziness about who wrote what and in what
language, here's what happened:
To port Jerry Saltzer's Runoff (presumably written in MAD)
to Multics, either Dennis or Bob Morris or both together
reimplemented it (presumably in PL/I). To coexist with
Saltzer's version on CTSS, the new program needed a
distinct name, hence roff.
The early Multics PL/I compiler was far from a production
tool. Justifiably, the Bell Labs comp center didn't
support it. To get roff into general use at the Labs,
I undertook yet another implementation in BCPL. I added
functionality (number registers, three-part headings, etc)
and kept the new name. Molly Wagner added hyphenation.
Eventually, I added macros that were usable either as
commands or (when parameterless) embedded in text.
Almost as soon as Unix was up on the PDP-11 one of Ken, Dennis
or Ossanna reimplemented a pre-macro version of roff (presumably
in assembler or B). I'm quite sure roff never ran on the PDP-7.
Ossanna had a grander plan and undertook nroff. When he learned
of the availability of the Graphic Systems CAT phototypesetter,
he promptly generalized nroff to handle it. Joe replaced the
CAT's paper tape reader with a direct wire to the computer.
It all worked swimmingly--nothing like the travails when the
CAT was replaced by the more capable Mergenthaler Linotron.
An interesting question of priority is whether nroff or
BCPL roff was first to have a macro capability. Though
I don't remember for sure, the fact that BCPL roff unified
registers, macros, strings and diversions suggests that
I abstracted from nroff facilities.
Doug
All,
In the same vein as my prior note, I have made a note available on the
process of getting up and running on Unix Seventh Edition in a SimH
PDP-11 environment. The text is located at:
http://decuser.blogspot.com/2015/12/installing-and-using-research-unix.html
I welcome comments, suggestions, and even criticisms.
While I have learned a lot since my last blog entry (many thanks to
Hellwig Geisse, Nelson Beebe, Noel Chiappa, Clement Cole and several
others), I am still learning about these environments. I originally
invested time in getting v7 running so that I could more easily work
with v6, after having gone there, I believe that it was time very well
spent. I know a lot more about special devices, tape formats, and so on
than I did before as a result of taking the fork in the road.
Thanks for everyone's help.
Oh, and by the way, there appears to be quite a bit of active interest
in this topic - the blog post has been viewed several thousand times
since I posted it, two weeks ago.
Kind regards,
Will
I have set up v7 following [1] and I would like to better understand the
process of adding a disk to the environment. Here is what I know:
The system has one RP06 with two partitions rp0 and rp3 which correspond
to the two block devices rp0, rp3, and the two character devices rrp0,
and rrp3. The special files look like so:
brw-r--r-- 1 root 6, 0 Dec 31 19:05 /dev/rp0
brw-r--r-- 1 root 6, 7 Dec 31 19:04 /dev/rp3
crw-r--r-- 1 root 14, 0 Dec 31 19:01 /dev/rrp0
crw-r--r-- 1 root 14, 7 Dec 31 19:01 /dev/rrp3
This meshes with the device switch tables in c.c:
The block device switch:
struct bdevsw bdevsw[] =
{
...
nulldev, nulldev, hpstrategy, &hptab, /* hp = 6 */
...
}
The character device switch:
struct cdevsw cdevsw[] =
{
...
nulldev, nulldev, hpread, hpwrite, nodev, nulldev, 0, /* hp = 14 */
...
}
I would like to add another RP disk to the environment. After I attach
an RP04/05/06 to the system, what should I use as the major/minor device
numbers? To put it differently, it doesn't seem correct to me to use 6,1
for the block device or 14,1 for the character device on the new drive
as it's a completely different disk from rp0 and rp3 which are just
partitions on the first drive and have 6,0, 6,7, and 14,0, 14,7. If each
RP can have 8 partitions and there can be 8 drives, what is the correct
major, minor numbers to use with v7 for multiple devices?
c.c only lists one vector each for the hp device (one block vector where
hp = 6, and one char vector where hp = 14).
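My working guess - which I'd appreciate someone confirming or correcting
against hp.c - is that the low three bits of the minor number select the
partition and the next bits select the drive, so drive 1, partition 0 would be
minor 8, with the majors staying 6 and 14 since it's all one hp driver. I.e.
something like this (the device names are just my choice):

	# /etc/mknod /dev/rp4  b 6  8
	# /etc/mknod /dev/rrp4 c 14 8

and presumably the driver also has to be configured to know about more than
one drive.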
Thanks,
Will
[1] Haley, C. B. & Ritchie, D. M. (1979). Setting Up Unix – Seventh
Edition (pp. 497-505) in UNIX programmer's manual, Vol. 2, Revised and
Expanded Version. Bell Laboratories: NY.
In exploring v6, I have found some uses for having a running v7 instance...
When I try to install the RP bootblock during the installation procedure
for installing Version 7 Unix following the original documentation:
ftp://ftp.uvsq.fr/pub/tuhs/PDP-11/Documentation/v7_setup.html
I am unable to boot from the RP06 disk that I installed the boot
block onto via:
dd if=/usr/mdec/hpuboot of=/dev/rp0 count=1
No error, it just hangs. I compared hpuboot to the bootblock and it
matches byte for byte. I also compared it to the hpuboot file in Henry
Spencer's tape image (I am using Keith Bostic's tape) and it matches
that as well.
I am asking this list because I thought y'all might know if there was a
problem with:
1) the hpuboot binary on the tapes
2) v7 using RP06
3) something else helpful :) (maybe it's not supposed to be loaded to
byte 0 on the disk image, although that's how it works with v6?)
I am aware that the system can be booted from tape, but that seems hokey
(obviously it works, since that's how the installation process works in
the first place, but I think it is reasonable to expect to be able to
boot from the RP06). Interestingly, there are RL02 and RK05 v7
images that boot from disk available, but not RP.
I will ask on the SimH list, if y'all don't think it's appropriately
directed.
Thoughts?
Thanks,
Will
All,
I am studying Unix v6 using SimH and I am documenting the process, as I
go, as part of my own learning process. I have much to learn about Unix,
Unix v6 in particular, the PDP architecture and its relationship with
v6, and SimH's emulation of the PDP, so, I am taking notes... I thought
that I would share the notes in raw form as occasional blog posts in the
hope that the knowledge that I work to obtain, might be made available
and useful to others. I also believe that these forms of communication,
as insignificant as they may seem individually, are part of helping to
preserve the knowledge of our computing history, in the aggregate. Here
is a link to the first post, a run at an installation walk-through:
http://decuser.blogspot.com/2015/11/installing-and-using-research-unix.html
I am open to feedback and criticism, but please keep in mind that I am a
relative newbie to v6 and PDP land, some of my assumptions are
probably/undoubtedly wrong, but definitely fixable :).
Regards,
Will
> From: Will Senn <will.senn(a)gmail.com>
> a deeper read will require the reader to have knowledge beyond what is
> required of most modern software developers (PDP-11 architecture,
> assembly language, and UNIX are prerequisite).
Well, for pretty much any _operating system_ (as opposed to applications),
one will need to know something about the details of the machine it is
intended to run on; depending on which part of the OS one is looking at, it
will be more or less. E.g. switching processes probably requires a fair
amount, since one needs to know about internal CPU registers, etc; whereas
working on the file system, one probably doesn't need to know very much about
the machine.
> It will also require access to a lab where the ideas covered can be
> experimented with.
Actually, Lions/V6 was used in operating systems courses using simulated
machines; one at MIT, 6.828 "Operating Systems Engineering":
https://pdos.csail.mit.edu/6.828/
used it for a while before the students started complaining about being
forced to learn an obsolete machine. They thereupon wrote a V6 clone for the
x86 architecture, 'XV6' (see the top of that page), which is apparently now
used for similar courses at quite a few other universities.
> The v6 kernel ... packs in features that were either unavailable in
> larger more established systems or may have been present in some form,
> but were orders of magnitude more lines of code and attendant
> complexity. It was and remains an amazing operating system and worthy
> of contemporary study.
I don't think you will find too many people here who disagree! ;-)
> So, I was thinking that next up, I would write up notes to help the
> modern reader engage with v6 more easily in order to follow works like
> Lyons.
Check around online to see what exists, first; there has been stuff written
since Lions! ;-)
Noel
Hi,
Don't forget the Zuse machines, which were later proven to be Turing
complete. It is certainly fascinating to see handling binary floating point
numbers in a purely mechanical device (check it out if you happen to be in
Berlin). Later machines were electromechanical and electronic.
Regards,
Szabolcs
>
> 2015.12.04. 15:52 ezt írta ("John Cowan" <cowan(a)mercury.ccil.org>):
>>
>> Greg 'groggy' Lehey scripsit:
>>
>> > Take a look at CSIRAC in the Melbourne museum, the oldest computer in
>> > the world. It's worth it, even if they don't have it running.
>>
>> Well, there's the Antikythera mechanism.
>>
>> --
>> John Cowan http://www.ccil.org/~cowan cowan(a)ccil.org
>> In the sciences, we are now uniquely privileged to sit side by side
>> with the giants on whose shoulders we stand. --Gerald Holton
below
On Fri, Dec 4, 2015 at 12:02 AM, Will Senn <will.senn(a)gmail.com> wrote:
> 1. a utility on the host that is capable of copying a directory and its
> contents, recursively, onto a blank magtape/dectape/rk image that is then
> readable in the v6 environment
>
Right - you want a common archive format between the two systems that talk
to the tape device.
You can either create your own or, better, take on one of the old ones that exist.
> 2. a tar and unzip binary for v6 that is capable of dealing with the
> tarball (but isn't the tarball going to exceed the max file size anyway, if
> so this won't work)
>
I think you have many to choose from; off the top of my head I can think of
each with different advantages (more in a minute):
- tar
- cpio
- tp/stp
- ar (new format)
You seem to also want a compression tool, but you might try compressing on
the modern system - but there are solutions here also.
- pack/unpack was the old v5/v6 compression tool - I've forgotten where
it was sourced; check the first USENIX tape in '77
- porting a modern zip/gzip/bzip
> 3. an alternative archiver that runs on FreeBSD or Mac OS X, that can
> create a single file archive from a subdirectory's contents on the host
> (the resultant file would need to be extractable on v6, and if file size is
> too limited, won't work either).
>
That is a lot of work and unless this is going to be a very long term
thing, I'm not so sure it's worth it. Basically you want a virtual FS on
the v6 system and the simulator. If you are going to do this a lot, then
it's worth it. Think of the VFS that vmware and the like offer.
> 4. some kind of directory transfer utility that works over telnet that can
> be executed from a FreeBSD or Mac OS X host and that can be executed on the
> v6 system as well.
>
the original unix kermit will compile using the v6 (maybe even the v5)
compiler. You have to dig in the archives, but you want a version
from Columbia circa 1977 and you'll be fine. The latest version will use
things in the language first described in the white book - aka Typesetter
C (Dennis wrote the book starting with v6, but it's not published until
v7). If you have a later compiler running on v6 you'll be fine.
> 5. a utility capable of creating an empty magtape/dectape/rk image and
> another capable of making a filesystem on the image and another of
> populating the image (analogous to fdisk rkimage; mkfs rkimage; rkcopy dir
> rkimage)
>
You could move the file system creation tools over and set up a virtual v6 FS.
It's a lot of work and unless this is going to be a very long term thing,
I'm not so sure it's worth it.
As for the archivers which in the short term is likely to be your best bet:
1. tar - there are a couple of versions of tar for v6, including binaries.
I personally would start there (a rough sketch of this route follows below).
2. cpio was written for PWB 1.0, which is v6 kernel based. That binary
should run. But IIRC the original cpio used only binary headers
(the -c/ASCII headers were added later). So you'll need to be careful on
the modern computer and make sure you set the switches so that it creates
the headers with the proper endianness/byte order.
3. tp/stp - on the original USENIX tape is a "super tp" that replaced
the original one. The binary should run as is. The code for it is
pre-K&R so compiling it with a modern compiler will be a little bit of
work. Also, IIRC the "directory" which is on the front of the tape is
binary, so you'll need to make sure you write everything in PDP-11 format.
4. ar - was updated by the community. Eventually, V7 took the "new ar"
from the original USENIX tape. Again that binary should just run fine.
Although I don't think it is recursive, so it may fail that
requirement for you.
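A rough sketch of the tar route, with plenty of assumptions baked in (device
names, a v6 tar binary already installed, a spare RK slot in the simulator,
and a host tar that can write the old format - GNU tar's --format=v7, for
instance):

	$ tar --format=v7 -cf files.tar somedir		(on the host)
	$ dd if=files.tar of=extra.rk conv=sync		(pad to full 512-byte blocks)
	... attach extra.rk as a second RK drive in the simulator ...
	# dd if=/dev/rrk1 of=files.tar count=NNN	(inside v6; NNN = size of the tar in blocks)
	# tar xf files.tar

Adjust the device name and count to taste; the point is just that a raw disk
image is a perfectly good container for an archive.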
Clem
All,
I am trying to figure out how to get parts of 1BSD added into a pristine
v6 install, but the question I have relates to moving more than a
handful of files from a host system into v6, which lacks several
capabilities that are taken for granted from v7 onward (tar, unzip, and
so on).
For background, in looking at the 1bsd tarball, exploded out, I saw that
ex was available on the tape in a binary form that is suitable for a
PDP-11/40 and I thought it would make life easier in v6 to have ex. So,
I used dd to move the a.outNOID file onto a file which can be used as
a raw RK image, attached that image to the PDP-11, and then copied it off
the RK image into the v6 system as the executable file ex, and that
worked. I was able to run
ex (well, sort of, I get the colon prompt anyway... I haven't figured
out how it actually works yet). Yeeha! Having had success of a sort with
a single executable from the 1BSD tape, I would like to see if other
parts of 1BSD will work in the environment and if I can properly install
those parts.
Individually moving files using dd is tedious in the extreme (there are
many files in the tarball). I know there has to be a better way. Since
v6 doesn't have tar, or unzip, it doesn't seem likely that using dd to
move the tarball into v6 will help matters. But, if there was a way
to dd a subdirectory and its contents onto an RK image and get them off
again into a useable v6 file system, that would work.
My question for the group is based on the preceding discussion and the
following assumption:
1. given a tarball such as 1bsd.tar.gz from the TUHS archive located at:
/PDP-11/Distributions/ucb
2. with a running SimH PDP-11/40 instance
with a virtual TU10 magtape
with a virtual TU56 dectape
with a virtual RK05 hard drive
3. running v6 as the operating system
What is an efficient method of moving the files of the 1bsd
distribution, or any other set of files and directories, into the v6
operating environment?
Here are some approaches that seem reasonable, but that I haven't been
able to figure out, if you know better, please do tell:
1. a utility on the host that is capable of copying a directory and its
contents, recursively, onto a blank magtape/dectape/rk image that is
then readable in the v6 environment
2. a tar and unzip binary for v6 that is capable of dealing with the
tarball (but isn't the tarball going to exceed the max file size anyway,
if so this won't work)
3. an alternative archiver that runs on FreeBSD or Mac OS X, that can
create a single file archive from a subdirectory's contents on the host
(the resultant file would need to be extractable on v6, and if file size
is too limited, won't work either).
4. some kind of directory transfer utility that works over telnet that
can be executed from a FreeBSD or Mac OS X host and that can be executed
on the v6 system as well.
5. a utility capable of creating an empty magtape/dectape/rk image and
another capable of making a filesystem on the image and another of
populating the image (analogous to fdisk rkimage; mkfs rkimage; rkcopy
dir rkimage)
If I am asking the wrong questions, or thinking badly, I would
appreciate a steer in the right direction.
Regards,
Will
> From: Will Senn <will.senn(a)gmail.com>
> I am studying Unix v6 using SimH and I am documenting the process
I did a very similar exercise using the Ersatz11 simulator; I have a lot
of stuff about the process here:
http://www.chiappa.net/~jnc/tech/V6Unix.html
It contains a number of items that you might find useful, e.g.: "V6 as
distributed is strictly a 20th Century operating system. Literally. You can't
set the date to anytime in the 21st century, for two reasons. First, the
'date' command only take a 2-digit year number. Second, even if you fix that,
the ctime() library routine has a bug in it that makes it stop working in the
closing months of 1999."
> the PDP architecture
Technically, a PDP-11 - there were a number of different PDP architectures:
https://en.wikipedia.org/wiki/Programmed_Data_Processor
is a decent listing of them; several (PDP-8, PDP-10, etc) were very popular
and successful.
A few things I noted in your first post:
> I am using the Ken Wellsch tape because it boots and is stated to be
> identical to Dennis Ritchie's tape other than being bootable and having
> a different timestamp on root.
The only differences I could discover between the two are that in the Wellsch
versions i) a Western Electric rights notice (which prints on booting) has
been added to ken/main.c, and the Unix bootable images; and ii) the RK pack
images do have, as you noted, the bootstrap in block 0.
> Note: sh is critically important, don't muck it up :). The issue is
> that if you do, there really isn't an easy way to recover.
One should _never_ install a new shell version as '/bin/sh' until it has been
run and tested for a while (for the exact reason you mention). Happily, in
Unix, as far as the OS is concerned, the command interpreter is just another
program, so it's trivial to name a new binary of the shell 'nsh' or
something, and run that for a while to make sure it's working OK, before
installing it as '/bin/sh'.
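I.e. something like:

	# cp a.out /bin/nsh
	# /bin/nsh			(live in it for a while)
	# cp /bin/nsh /bin/sh		(only once you trust it)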
> a special file (whatever that is)
Special files are UNIXisms for 'devices'. _All_ devices in Unix appear as
'special files' in the file system, usually (but not necessarily) in /dev -
that location is a convention, not a requirement of the OS.
Noel
On Sun, Nov 29, 2015 at 08:55:23PM -0800, Paul McJones wrote:
> Thanks very much for making the original and the OCR-enhanced versions
> of Doug’s scan of the “UnixEditionZero” document available
> on tuhs.org. I notice that even with Nelson’s enhanced version,
> the file size is still large for a scanned text document, apparently
> because it was originally scanned in RGB mode, 24 bits/pixel. The
> attached version is 2.5MB, and to my eye looks identical
> to UnixEditionZero-OCR.pdf.
Paul, I've added your version into the same directory. Thanks!
Warren
Hi all,
In v2 no5 AUUGN Jun-Jul 1980, Andy Tanenbaum announced the availability of a Portable Pascal Compiler for the then proposed ISO standard. A tape was made for v6, v7, and non-unix platforms. Does anyone know if there is a tape image around that has the distro?
On a related note, has anyone successfully installed 1BSD on a v6 install running in SImH? 1BSD has the Berkeley Pascal Instructional system on it.
Regards,
Will
Sent from my iPhone
I'm too tired to dig for the exact words in the ISO standard,
but I had the impression that the official C rule these days
is that the effect of writing on a string literal is undefined.
So it's legal for an implementation to make strings read-only,
or to point several references to "What's the recipe today, Jim"
to one copy of the string in memory, and even to point uses of
"Jim" to the tail of the same string. Or both.
It is also legal for every string literal to reside in its own
memory and to be writable, but since the effect is undefined,
code that relies on that is on thin ice, especially if meant to
be portable code.
I have used, and even fixed (unrelated) bugs in, a compiler
that merged identical strings. I forget whether it also looked
for suffix matches. Whether the strings went in read-only
memory was up to the code generator (of course); in the new
back-end I wrote for it, I made them so. This turned up quite a
few fumbles in very-old UNIX code that assumed unique, writable
string literals, especially those that called mktemp(3). To my
mind that just meant the programs needed to be fixed to match
current standards (just as many old programs needed fixes to
compile without error in ISO C), so I fixed them.
I didn't (and still don't) like Joy's heavy-handed hack, but I
see his point, and think it's just fine for the language rules
to allow the compiler to do it hacklessly.
Norman Wilson
Toronto ON
I've gotten sucked into an embedded system project and they are running out of
memory. I have a vague memory of some sort of tool that I think Bill Joy
wrote (or maybe he told me about it) that would do some magic processing of
all the string constants and somehow it de-dupped the space.
Though now that I'm typing this that doesn't seem possible. Does this ring
a bell with anyone? I'm sure it was for the PDP 11 port.
Thanks,
--lm
Thanks, Doug and Warren, for the new files at
http://www.tuhs.org/Archive/PDP-11/Distributions/research/McIlroy_v0/
At the TUHS mirror at my site, you can find an additional file
http://www.math.utah.edu/pub/mirrors/minnie.tuhs.org/PDP-11/Distributions/r…
ftp://ftp.math.utah.edu/pub/mirrors/minnie.tuhs.org/PDP-11/Distributions/re…
that is less than half the size, and is (somewhat) searchable, thanks
to Adobe Acrobat Pro 11 OCR conversion. Please include that in the
TUHS master archive, even renaming it to the original file, if you
wish.
I like the beginning of "Section 2. Hardware", where Dennis wrote:
>> ...
>> The PDP-11 on which UNIX is implemented is a 16-bit 12K computer,
>> and UNIX occupies 8K words. More than half of this space, however, is
>> utilized for a variable number of disk buffers; with some loss of
>> speed the number of buffers could be cut significantly.
>> ...
How much more powerful early Unix was compared to CP/M and MS-DOS, in
a small fraction of their memory space!
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Woe betide the user if any string was changed at run time...
That was then. Now it would be OK to do so for const data.
(Unless the tool chain has discarded const indications.)
Doug
> It's worth noting that Unix was built for troff. Typesetting patents
> if I recall correctly.
This is a stretch. Unix was really built because Ken and Dennis
had a good idea. The purchase of a PDP-11 for it was in part
justified by the goal of making a word-processing system. The
first in-house "sale" of Unix was indeed to the patent department
for typing patents--the selling point was that roff could be
made (by an overnight modification) to print line numbers as
USPTO required, whereas that was not a feature of a commercial
competitor. The timeline is really roff--Unix--patent--nroff--troff.
Though roff antedated Unix, it did not motivate Unix.
> Is this The UNIX Time-Sharing System, or related to it? The same
> claim appears in the first paragraph:
> https://www.bell-labs.com/usr/dmr/www/cacm.html
This draft clearly dates from 1971. Pieces of it were worked
into subsequent versions of the manual as well as published
descriptions of Unix, including the SIGOPS/CACM paper.
Doug
Hi,
I wanted to at least give porting 2.11 BSD to my Z8001 machine a
try. I haven't really written any kernel parts until now, so it
will be a huge learning curve for sure. No idea what my spare
time permits, but... at least I'm planning to give it a try.
I didn't find something like "things you should do first when
porting 2.11 BSD to another architecture" online, so I thought to
myself... maybe it would be good to start with the standalone
utilities - more precisely with "disklabel".
Is there a good "HOWTO" for "first things first"? Implementing
disklabel seems to require quite some "device work" before the
first "hello world" is there - is there something else which
should or could be done first and does not require so much
porting? (The whole disk subsystem on that machine is different
from "usual" disk subsystems, as it is handled via a PIO.)
Regards, Oliver
I know that I'm jumping the gun a bit, but does anyone have any news
of any 50th anniversary celebrations for Unix in mid-2019?
I'd love to start planning things now, given I'm in Australia and I also
need to convince my darling wife of the need for a holiday in the U.S
[or elsewhere 8-) ].
I will keep asking every six months.
Cheers, Warren
> I've not seen anything before Dennis' scan of the 1st
> Edition manuals. Can you make a scan of this one available?
I shall, as I had intended to do if this document was as
unknown or forgotten by others as it was by me.
Doug
> The phototypesetter version of Unix was V7.
I'm not sure of what's being said here. Manuals from
the 4th edition on were phototypeset, first on a
CAT and later a Linotron (if I remember the name right).
Doug
Hi all, I just received this e-mail from Will Senn who has just joined
the TUHS mailing list:
----- Forwarded message from Will Senn -----
Hi,
I am conducting research on older UNIX operating systems and came
across a letter from Richard Wolf to Ian Johnstone, dated Feb 5, 1979.
On p. 29 of the AUUGN, Volume 1 number 3, Mr. Wolf refers to a set of
101 fixes for research version 6. In my research, I am currently using
v6 and wondered if you knew where I might find the fixes or if the
bits are known to exist?
Kind Regards,
Will
----- End forwarded message -----
Will, there was a "50 bugs" tape for 6th Edition Unix that was "released"
to Unix owners in a very interesting distribution method: see
http://www.groklaw.net/articlebasic.php?story=20060616172103795
You can find it in the Unix Archive. Look in Applications/Spencer_Tapes/unsw3.tar.gz. It is the file usr/sys/v6unix/unix_changes.
Does anybody know of something which could be described as "101 fixes for
research version 6"? The phototypesetter version of Unix was V7.
Cheers all and welcome to the list Will.
Warren
https://groups.google.com/d/msg/net.unix/Cya18ywIebk/2SI8HrSciyYJ
Apparently the 8th Edition shell had the ability to export functions via
the environment.
I'm wondering - were there (are there?) any other shells other than bash
that picked up this feature? How was it implemented, considering this
was the cause of the "Shellshock" vulnerability?
I was amused to see it come up in one of the olduse.net newsgroups I've
been following.
Interestingly, the SysIII version of cut.c does not have the line
mentioned here. That's because it doesn't initialize _any_ of the flag
variables. The line was added some time between then and SysV, and that
is the _only_ significant change between the SysIII and pdp11v versions.
https://groups.google.com/d/msg/net.bugs.usg/iAkgNVBJNSo/PgXAC2vi044J
Hi,
I'm struggling with reimplementing the C code for the link()
syscall.
Usually on SYSIII and V7 you have something like:
link()
{
	register struct inode *ip, *xp;
	register struct a {
		char	*target;
		char	*linkname;
	} *uap;
	[...]
	u.u_dirp = (caddr_t)uap->linkname;
	[...]
}
The problem now on my system is, u_dirp in the user struct
is saddr_t (*long) and not caddr_t (*char) and I wonder how
I have to assign uap->linkname.
The original ASM code looks like:
_link::
{
	dec	fp,#~L2
	ldm	_stkseg+4(fp),r8,#6
	ldl	rr8,_u+36
	[...]
	ldl	rr2,rr8(#4)
	ldl	rr4,rr2
	and	r4,#32512
	ldl	_u+78,rr4
	[...]
I had the same problem already 7 years ago but didn't come up
with a solution back then.
http://home.salatschuessel.net/quest/problems.php
What came to my mind in the meantime is the following and maybe
someone can check if this is right:
1. _u+78 (u.u_dirp) contains a pointer - so what is assigned
here in ASM is a memory address.
2. The memory notation for accessing segmented data on Z8001
seems to be 0xSS00XXXX where SS is the segment number up
to 127 and XXXX is the relative address in that segment.
3. This means ANDing 0xSS00 with 0x7F00 means to strip out
all invalid data from the segment-position of the address,
to make sure it can only be between 0 and 127 (0x0000 and
0x7F00).
I wonder how the assignment of uap->linkname to u.u_dirp has
to be done correctly?!
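My current guess - which someone can hopefully confirm or shoot down - is that
it boils down to a cast plus exactly that mask, e.g. (using my saddr_t):

	/*
	 * Keep the 7-bit segment number in the high word and the 16-bit
	 * offset in the low word; clear everything else. The 0x7F00 on
	 * the high word is the 'and r4,#32512' from the assembler output.
	 */
	u.u_dirp = (saddr_t)((long)uap->linkname & 0x7F00FFFFL);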
I see http://archive.org/details/BillJoyInterview but the source is
unknown. Does anyone know who conducted this interview or where it came
from? (I tried to contact the archive years ago but didn't hear back.)
Most of the stories I have alternative sources for but I'd like to cite
some of this content in a book I am authoring.
Also it doesn't seem to have a starting place. It appears the beginning
of the interview is missing. (Also it has eight sections marked with
"[Skipped portion you requested.]" and 27 page breaks.)
It appears it may have been OCR'd (Exacfiy = Exactly, correcfiy =
correctly, f'mally = finally, f'md = find, f'n'st = first, llliac =
ILLIAC, Riogin = Rlogin, HTrP: = HTTP: and many other OCR-like typos),
plus misspelled names where the original was typed wrong (so I assume the
transcriber wasn't directly related to this story, like deck = DEC,
Favory = Fabry, Gerwitz = Gurwitz, "E-bid(?) ASCH" = maybe EBCDIC to
ASCII).
If anyone knows the source for this interview or a proper bibtex entry
for it, I'd appreciate it.
Howdy,
I'm the secretary of the Atlanta Historical Computing Society, and a lurker
here on the TUHS list.
We're starting our process of looking for speakers at our upcoming VCF SE
4.0. It'll be April 2nd and 3rd 2016
in the Atlanta area. Since I've enjoyed reading and hearing about all the
UNIX history on this list,
I was wondering if anybody here might be willing to speak at our event.
It seems there is a good
deal of history that is captured in the minds of the members of this
list... which might make a number
of good presentations.
We're open on ranges of topics. We've had many different people speak...
the first editor of Byte,
the artist who did the covers of many Byte magazines, Jason Scott from
archive.org, some early SWTPC
engineers, some early Apple engineers including Daniel Kottke. We also
have members from the
various Vintage Computer groups from around the U.S. speak (and of course
some local members),
and some University Archivists who are starting to have to deal with old
media. This year we will have
Jerry Manock (the designer from Apple who established their design
group...designed cases for Mac, etc.)
as one speaker.
We love to learn about the history, esp. from the folks who lived it. I am
just slightly too young to have
been there (was born in '65) but always enjoy the talks. We can
accommodate from a 30 min talk to
an hour. We have a professional sound set up and stage. We have a
co-sponsor, the Computer
Museum of America that is being established in the area as well. We have
between 5 and 10 slots to fill.
We aren't a large group, but we do have a limited budget to assist with
travel, lodging, etc. We can handle
"nice" but not the Ritz :-)
If anybody is interested, please contact me and I can provide further
details. And if you'd be interested
but can't make this year, please still contact me, maybe we can work
something out in the future.
Thanks!
Earl Baugh
Secretary
Atlanta Historical Computing Society.
Hi,
does someone know where "u" is defined on SYSIII or V7?
sys/user.h states:
extern struct user u;
But I wonder where it is defined? On ZEUS I have u.o but I'm
not able to correctly disassemble it. Right now I'm guessing
that it should be something like:
u module
$segmented
$abs %F600
global
_u array [%572 byte]
end u
But the resulting object (u.o.hd) does not match 100% the existing
u.o on the system (u.o.orig.hd).
--- u.o.orig.hd 2008-05-16 21:52:12.000000000 +0200
+++ u.o.hd 2008-05-16 21:52:16.000000000 +0200
@@ -3,6 +3,6 @@
00000020 00 00 00 01 00 00 00 00 01 00 00 00 00 00 00 00
|................|
00000030 00 00 00 02 00 00 00 00 00 00 00 00 1e 00 75 5f
|..............u_|
00000040 70 00 00 00 00 00 01 00 00 00 1e 01 75 5f 64 00
|p...........u_d.|
-00000050 00 00 00 00 3e 00 f6 00 61 3e 5f 75 00 00 00 00 |....>..a>_u....|
+00000050 00 00 00 00 01 00 f6 00 61 01 5f 75 00 00 00 00 |.......a._u....|
00000060 00 00 |..|
00000062
iPhone email
> On Nov 13, 2015, at 2:38 PM, Brantley Coile <brantleycoile(a)icloud.com> wrote:
>
> For performance reasons an assembly symbol "u" was defined to be a fixed address. That allowed us to use constructions like u.u_procp to generate a single address. It was very fast. Does this help?
>
> iPhone email
>
>> On Nov 13, 2015, at 2:33 PM, Oliver Lehmann <lehmann(a)ans-netz.de> wrote:
>>
>>
>> Oliver Lehmann <lehmann(a)ans-netz.de> wrote:
>>
>>> u module
>>> $segmented
>>> $abs %F600
>>>
>>> global
>>>
>>> _u array [%572 byte]
>>>
>>> end u
>>
>> By any way - is here someone on the list understanding Z8000 PLZ/ASM? ;)
>>
>> The problem is, that "u" must be available in the address space on this
>> location for the kernel to function correctly:
>>
>> # define UBASE 0x3E00F600 /* kernel virtual addr of user struct */
>>
>> And with the above ASM code, it is placed on 0x0100F600. I also tried
>> of course $abs 0x3E00F600 but it makes no difference. It is always
>> placed at 0x0100F600 and I have zero clue why
>>
>> the original object from the system:
>>
>> #67 nm /usr/sys/conf/u.o
>> 3e00f600 A _u
>> 01000000 s u_d
>> 0000 s u_p
>>
>>
>> my object generated from my u.s:
>>
>> #68 nm u.o
>> 0100f600 A _u
>> 01000000 s u_d
>> 0000 s u_p
>>
>> Somehow I need to get the address right.... This is why I wanted to
>> look up how the original SYSIII or V7 was doing it (even if the asm
>> would be of course completely different).
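For the original question: on the PDP-11 versions, if I remember right, u isn't
defined in C at all - it's an absolute symbol in the machine-dependent
assembler source (m40.s/m45.s on V6, mch.s on V7), something like

	_u = 140000		/ kernel virtual address of the user structure

so the 'extern struct user u;' is satisfied at link time, which is presumably
what the u.o on the ZEUS system is imitating.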
I'm not sure how old cut is, but a quick look at the code gave me the
idea it could be backported to V7, as I'm fairly sure that cut wasn't
in V7.
It doesn't look like it needs a lot of stuff, just fclose, puts, do
and while loops. Even a v6 or v5 backport doesn't seem too difficult.
Mark
> /* (-s option: serial concatenation like old (127's) paste command */
>
> For that matter, what's the "old (127's) paste command" it refers to?
I can't remember 127 ever having a "paste" command. We did have "ov",
which overlaid adjacent pairs of formatted pages to make two-column
text. "Serial concatenation" would seem to be what was done by "pr"
or "cat".
"ov" figured in the flurry of demos on the day of pipes' birth.
nroff | ov | ov
made four-column output.
For that matter, what's the "old (127's) paste command" it refers to?
Every organization at AT&T had a number as well as a name.
In the early days of UNIX, the number for Computer Science
Research was 127. At some point a 1 was prepended, making
it 1127, but old-timers still used the three-digit code.
So it's a good guess that `127's paste command' means
one that came from, or had been modified in, Research.
I don't know when or where, though. I don't see a paste
command in V7. paste.c in V8 has exactly the same comment
at the top.
Norman Wilson
Toronto ON
>> I thought PWB (makers of "make") came from Harvard?
> PWB ... came straight out of Bell. Not sure about all the
> applications (well, SCCS came from Bell).
PWB did not create make; Stu Feldman did it in research.
PWB did make SCCS. I believe it also originated cpio,
find and eval. Probably more, too, but I can't reliably
separate PWB's other contributions from USG's.
Doug
Hi,
i have an old Z8001 based SysIII variant and I would love to have
TCP/IP on it (SLIP first, later with a homebrew ethernet device).
I wonder if someone ever saw TCP/IP available on a System III?
I have, let's say, 90% of the kernel running on it available as
source, and I started digging into the available 4.2BSD sources.
It looks like there would be much to do to hack in TCP/IP on my
own (no IPC, no Net, no PTY, no....).
I got K5JB running (userland TCP/IP implementation) after I fixed
some C code because the C Compiler available on the system is.....
kinda limited.
telnetd is of course not working as there are no pseudo-teletypes
on this SYSIII. At least I got ping, echoping and ftpd up and
running via SLIP
(10.1.1.2 is my SysIII box:)
# ping -c3 10.1.1.2
PING 10.1.1.2 (10.1.1.2): 56 data bytes
64 bytes from 10.1.1.2: icmp_seq=0 ttl=254 time=316.317 ms
64 bytes from 10.1.1.2: icmp_seq=1 ttl=254 time=297.328 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=254 time=296.369 ms
--- 10.1.1.2 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 296.369/303.338/316.317/9.186 ms
# ftp 10.1.1.2
Connected to 10.1.1.2.
220 FTP version K5JB.k37 ready at Tue Apr 30 22:25:47 1991
Name (10.1.1.2:root): test
331 Enter PASS command
Password:
230 Logged in
ftp> get sa.timer
local: sa.timer remote: sa.timer
500 Unknown command
500 Unknown command
200 Port command okay
150 Opening data connection for RETR sa.timer
2571 0.53 KB/s
226 File sent OK
2571 bytes received in 00:05 (0.48 KB/s)
ftp> get wega
local: wega remote: wega
200 Port command okay
150 Opening data connection for RETR wega
98723 0.51 KB/s
226 File sent OK
98723 bytes received in 03:05 (0.51 KB/s)
ftp> exit
221 Goodbye!
#
So I wonder if someone has got anything SYSIII -> Net/TCP/IP related
which could help me in any way. Getting a SYSIII kernel capable of
TCP/IP and PTYs, so that a telnetd can run via SLIP, is my
first goal.
Regards,
Oliver
I just got on this list today, and I see that Larry McVoy asks:
"I wish Marc was on this list, be fun to chat."
I'd be happy to chime in on SCCS or early PWB questions, to the extent I
remember anything.
I did see a thread about PWB contributions in which people are trying to
sort out what came from research and what from the PWB group (under Evan
Ivie). As I recall, PWB was always based on research. Dick Haight would
install the latest research system from time-to-time, and then the
so-called "PWB UNIX" was whatever he had taken from research plus stuff we
were developing, such as SCCS. Unlike, say, Columbus UNIX, our kernel
always matched research at the system call level, so there never was such a
thing as a PWB-kernel dependency.
(I think the USG system was run quite differently: They had their own
system, and would merge improvements from research into it. I could be
wrong about this, as I never worked in the USG group.)
--Marc
Anyone have some sun4c or hp300 gear they'd be persuaded to part with? Preferably in the SF Bay Area? It's getting a bit too difficult using broken emulators and broken cross compilers...
Sent from my iPhone
Hi Marc,
TUHS == The Unix Heritage Society, it's a mailing list as well as a
repository of Unix source code (including yours). A lot of the Bell
Labs guys are on the list, it has weird topics like the current one of
how to get System III booting on a Zilog something that is 16 bits but
can address 8MB in segments.
There was a side discussion of PWB, and SCCS came up; I started talking
about how cool SCCS was and how RCS gave it an undeserved bad rap. In
the process I said "I wish Marc was on this list" and John Cowan said
here is his email, go ask him.
I think you'd have fun on the list, it's old school unix. Lots of signal,
very little noise. I personally would love to have you there, SCCS was
brilliant. It would be fun to pick your brain about how that happened.
And for the record your advanced unix programming book has influenced
how I code. It error checks when there could be errors and passes when
there shouldn't be errors. I feel like that book threaded the needle -
error checking matters except when it doesn't. It taught me a lot and
I pass it on to anyone who will listen.
If you want to get on the list send an email to wkt(a)tuhs.org. Be good
to have your voice here.
--lm
> cpio, expr, xargs, yacc, and lex first appeared outside
> the Bell Labs boundary in the PWB release
This gently corrects a statement in my posting: the name
of one of the PWB-originated programs is expr, not eval.
Doug
> From: Dave Horsfall <dave(a)horsfall.org>
> I thought PWB (makers of "make") came from Harvard?
PWB? As in "Programmer's Work Bench"? The OS part of that came straight out
of Bell - see pg. 266 in the first Unix BSTJ issue. Not sure about all the
applications (well, SCCS came from Bell).
Noel
Dan,
I wrote:
Quiz for the occasion: which major Unix utility adopted IPL's
unprecedented expression syntax?
You correctly responded:
troff.
I suppose, in a sense, that 'dc' also fits the bill but given that that is
inherent in its stack-based nature, I doubt that is what you meant.
The notion of precedence pertains specifically to infix notation, so
postfix dc is definitely not in the running.
Idle thought about my typo: Though APL is famously inscrutable, IPL
(specifically IPL-V) outshined it in that department.
Doug
Loved or loathed for inventing APL, Ken Iverson was lost to us in 2004. The best thing
you can say about APL (I used APL\360) is that it's, err, concise...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Sent to me by a friend:
https://youtu.be/vT_J6xc-Az0
There's another one there about "The C Programming Language" book
as well. And looks like more to come.
Arnold
On Fri, 2 Oct 2015 12:00:08 -0600, I posted to this list a summary of the
earliest mentions of Unix in several corporate technical journals.
This morning, I made a similar search in the complete bibliographies of
29 journals on the history of computing, mathematics, and science listed at
http://ftp.math.utah.edu/pub/tex/bib/index.html#content
As might be expected, there is little mention of Unix (or Linux) in those
publications: the only ones that I found are these:
+-----------------------+------------------+----------------------------------------------------------------------------------+
| filename | label | substr(title,1,80) |
+-----------------------+------------------+----------------------------------------------------------------------------------+
| cryptologia.bib | Morris:1982:CFU | Cryptographic Features of the UNIX Operating System |
| annhistcomput.bib | Tomayko:1989:ACI | Anecdotes: a Critical Incident; The First Port of UNIX |
| annhistcomput.bib | Tomayko:1989:AWC | Anecdotes: The Windmill Computer---An Eyewitness Report of the Scheutz Differenc |
| ieeeannhistcomput.bib | Toomey:2010:FEU | First Edition Unix: Its Creation and Restoration |
| ieeeannhistcomput.bib | Sippl:2013:IIM | Informix: Information Management on Unix |
+-----------------------+------------------+----------------------------------------------------------------------------------+
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Recent traffic on the TUHS list has discussed early publications about
UNIX at DECUS.
The Digital Technical Journal of Digital Equipment Corporation began
publishing in August 1985, and there is a nearly complete bibliography
at
http://www.math.utah.edu/pub/tex/bib/dectechj.bib
Change .bib to .html for a version with live hyperlinks.
The first publication there that mentions ULTRIX in its title is from
March 1986. Unix appears in a title first in Spring 1995.
The document collection at
http://bitsavers.trailing-edge.com/pdf/dec/decus/
doesn't appear to have much that might be related to Unix ports to DEC
hardware.
The Hewlett-Packard Journal is documented in
http://www.math.utah.edu/pub/tex/bib/hpj.bib
The first paper recorded there that mentions Unix or HP-UX is
from March 1984.
The Intel Technical Journal is covered in those archives as well at
http://www.math.utah.edu/pub/tex/bib/intel-tech-j.bib
but it only began relatively recently, in 1997.
The IBM Systems Journal began in 1962, and the IBM Journal of Research
and Development in 1957, and both are in those archives at
http://www.math.utah.edu/pub/tex/bib/ibmsysj.bib
http://www.math.utah.edu/pub/tex/bib/ibmjrd.bib
In the Systems Journal, the first mention of Unix or AIX is in Fall
1979 (Unix) and then December 1987 (AIX). In the Journal of R&D, AIX
appears in January 1990, and Unix appears in abstracts sporadically,
but is in a title first in late Fall 2002.
In the Bell System Technical Journal, covered at
http://www.math.utah.edu/pub/tex/bib/bstj1970.bib
(and other decades from 1920 to 2010), the first mention of Unix in a
title is July/August 1978.
There may have been similar corporate technology journals at other
computer companies, such as CDC, Cray, Data General, English Electric,
Ferranti, Gould, Harris, NCR, Pr1me, Univac, Wang, and others, but
I've so far made no attempt to track them down and add bibliographic
coverage. Suggestions are welcome!
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Dave Horsfall:
Oh, and I also wrote many articles for AUUGN, and presented the original
Unix paper at a DECUS conference, just to stir up the VMSoids.
=====
Do you mean the first UNIX-related paper ever at a DECUS? If so,
do you mean DECUS Australia or DECUS at all? I'm pretty sure there
was UNIX-related activity in DECUS US in 1980, probably earlier, and
am quite sure there was by 1981 when I was on the sidelines of what
eventually became the UNIX SIG.
It was initially called the Special Software and Operating Systems SIG,
because DECUS US leadership always included a somewhat stodgy subgroup
who were more afraid of offending Digital's marketing people than of
serving the membership. So we ended up with a code name.
Since there were in fact Digital technical and marketing people supporting
the new SIG, it was only a couple of years before the name was fixed.
Norman Wilson
Toronto ON
(Lived in Los Angeles and then New Jersey during that period)
On Thu, Sep 24, 2015 at 9:27 AM, <arnold(a)skeeve.com> wrote:
> I think the Berkeley guys had an underground
> pipeline to Bell labs and some stuff got out that way. :-)
>
It was not underground at all. Tools packaged in BSD came from all over
the community. style and diction were released into the wild by
themselves before they were packaged into an AT&T USG UNIX or Research UNIX
release. I got them personally, directly, and had them installed at
Tektronix soon after their first publication and a talk about them at USENIX (IIRC
that was the Boulder conference in the "Black Hole" movie theatre).
Since I had a minor stake in it (as my first C program), fsck is another
good example of the path to UCB. Ted started the predecessor program
when he was at UMich (with Bill Joy). He did his OYOC year and later a
full PhD at CMU. He was one of my lab partners in his OYOC year. fsck
as we know it now was done during that time (and I helped him a bit).
He was bringing the sources back and forth from Summit to CMU (at the time on
an RK05, or sometimes a bootable DOS tape image of one - I may still have
one of these). I believe he gave a copy of the sources very early to wnj
-- which is how it ended up in 4.1BSD. I don't think it was in the
original 3.0 or 4.0 packages, as it was not in V5, V6 or V7 either. I
believe it was released in PWB 2.0 - not sure, and Minnie does not seem to
have them.
I'm pretty sure the SCCS and cpio sources came through one of the PWB releases
(1 or 2) that UCB got from AT&T.
Clem
In late 2010, I released decade-specific bibliographies of the Bell
System Technical Journal (BSTJ) at
http://www.math.utah.edu/pub/tex/bib/bstj1920.bib
...
http://www.math.utah.edu/pub/tex/bib/bstj2010.bib
(change .bib to .html for versions with live hyperlinks).
I get weekly status reports for the hundreds of bibliographies in the
archives to which the bstj*.bib files belong, and until recently, I'd
been puzzling about the apparent cessation of publication of the Bell
Labs Technical Journal (its current name) in March 2014.
I now understand why: according to the Wiley Web site for the journal,
ownership and the archives have been transferred to IEEE, effective
with volume 19 (2014).
The bstj2010.bib file has accordingly been updated today with coverage
of (so far, only four) articles published by IEEE in volume 19. [The
first of those is a 50-year retrospective on the discovery of the
Cosmic Microwave Background that provided some of the first solid
evidence for the Big Bang theory of the origin and evolution of the
universe, and led to the award of the 1978 Nobel Prize in Physics to
Arno Penzias and Robert Wilson. The article also includes a timeline
of important Bell Labs developments, and Unix is listed there.]
Older list readers may remember that a lot of the early research
publications about Unix appeared in the BSTJ issues, so this journal
should have considerable interest for TUHS list users, and the move of
the archives from Wiley to IEEE may make the back issues somewhat more
accessible outside those academic environments that have library
subscriptions for Wiley journals in electronic form.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
The disappearance of some troff-related documents that had
been on line at Bell Labs was recently reported on this
list. That turns out to have been a bureaucratic snafu.
Plan 9 and v7 are back now. It is hoped that CSTRs will
follow.
Doug
It seems that nroff had the ability to show underlined text very early
on, possibly as early as v3 according to the v3 manual.
I haven't managed to get this to work right under simh but I was
thinking maybe there's a way to do it. It needs an 'underline font'
but the mechanism of how this worked in the old days is a bit of
mystery to me. The output device would have to have the ability to
either display or print underlined text. Maybe someone can remember
which terminal devices supported this in the old days which worked
"out of the box" in the v5,v6 era.
Maybe there was the ability to use overstrike characters on the teletype?
In bash I can use:
echo -e "\e[4munderline\e[0m"
Shouldn't be too hard to hack up something that works in emulated v5.
Mark
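One way underlining got onto paper in that era was plain overstriking:
underscore, backspace, character. A tiny sketch of the trick (hypothetical
demo code, only useful on a device or emulator that honours backspace):

    #include <stdio.h>

    /* Underline a word by overstriking: print '_', back up, print the character. */
    int main()
    {
        char *s = "underline";

        for (; *s != '\0'; s++)
            printf("_\b%c", *s);
        putchar('\n');
        return 0;
    }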
> It seems that nroff had the ability to show underlined text very early
Pre-Unix roff had the .ul request. Thus I expect (but haven't checked)
that it was in Unix roff. It would be very surprising if nroff, which was
intended to be more capable than roff, didn't have some underlining
facility right from the start.
Doug
Unix was what the authors wanted for a productive computing environment,
not a bag of everything they thought somebody somewhere might want.
One objective, perhaps subliminal originally, was to make program
behavior easy to reason about. Thus pipes were accepted into research
Unix, but more general (and unruly) IPC mechanisms such as messages
and events never were.
The infrastructure had to be asynchronous. The whole point was to
surmount that difficult model and keep everyday programming simple.
User visibility of asynchrony was held to a minimum: fork(), signal(),
wait(). Signal() was there first and foremost to support SIGKILL; it
did not purport to provide a sound basis for asynchronous IPC.
The complexity of sigaction() is evidence that asynchrony remains
untamed 40 years on.
Doug
Hi All.
Here is BWK's contribution.
| Date: Thu, 24 Sep 2015 17:28:21 -0400 (EDT)
| From: Brian Kernighan <bwk(a)cs.princeton.edu>
| To: arnold(a)skeeve.com
| Subject: Re: please get out your flash light...
|
| Some answers interpolated, but lots remain mysteries...
|
| On Thu, 24 Sep 2015, arnold(a)skeeve.com wrote:
|
| > Hi. Can you shed some light?
| >
| >> From: Diomidis Spinellis <dds(a)aueb.gr>
| >> To: tuhs(a)minnie.tuhs.org
| >> Date: Thu, 24 Sep 2015 12:27:03 +0300
| >> Subject: [TUHS] Questions regarding early Unix contributors
| >>
| >> I found out that the book "Life with Unix" by Don Libes and Sandy
| >> Ressler has a seven page listing of Unix notables, and I'm using that to
| >> fill gaps in the contributors of the Unix history repository [1,2].
| >> Working through the list, the following questions came up.
| >>
| >> - Lorinda Cherry is credited with diction. But diction.c first appears
| >> in 4BSD and 2.10BSD. Did Lorinda Cherry implement it at Berkeley?
|
| Nina Macdonald, maybe? Lorinda worked with people in that group.
|
| >> - Is Chuck Haley listed in the book as the author of tar the same as
| >> Charles B. Haley who co-authored V7 usr/doc/{regen,security,setup}? He
| >> appears to have worked both at Bell labs (tar, usr/doc/*) and at
| >> Berkeley (ex, Pascal). Is this correct?
|
| I think so.
|
| >> - Andrew Koenig is credited with varargs. This is a four-line header
| >> file in V7. Did he actually write it?
| >>
| >> - Ted Dolotta is credited with the mm macros, but the document "Typing
| >> Documents with MM is written by by D. W. Smith and E. M. Piskorik. Did
| >> its authors only write the documentation? Did Ted Dolotta also write
| >> mmcheck?
|
| I don't think Ted wrote -mm; he might have been the manager of that
| group? Ask him: ted(a)dolotta.org
|
| >> Also, I'm missing the login identifiers for the following people. If
| >> anyone remembers them, please send me a note.
| >>
| >> Bell Labs, PWB, USG, USDL:
|
| ark
|
| >> Andrew Koenig
| >> Charles B. Haley
| >> Dick Haight
|
| Maybe rhaight, but don't quote me. Last address I have is from
| long ago: rhaight(a)jedi.accn.org
|
| >> Greg Chesson
|
| Can't remember whether it was grc or greg
|
| >> Herb Gellis
| >> Mark Rochkind
|
| You probably mean Marc J Rochkind. I think it was mmr, but
| ask him: rochkind(a)basepath.com
|
| >> Ted Dolotta
| >>
| >> BSD:
| >> Bill Reeves
| >> Charles B. Haley
| >> Colin L. Mc Master
| >> Chris Van Wyk
|
| Was Chris ever part of BSD? He was at Stanford, then Bell Labs,
| where he was cvw.
|
| >> Douglas Lanam
| >> David Willcox
| >> Eric Schienbrood
| >> Earl T. Cohen
| >> Herb Gellis
| >> Ivan Maltz
| >> Juan Porcar
| >> Len Edmondson
| >> Mark Rochkind
|
| See above
|
| >> Mike Tilson
| >> Olivier Roubine
| >> Peter Honeyman
|
| honey (remember honeydanber?)
|
| >> R. Dowell
| >> Ross Harvey
| >> Robert Toxen
| >> Tom Duff
|
| td
|
| >> Ted Dolotta
| >> T. J. Kowalski
|
| frodo
|
| >> Finally, I've summarized all contributions allocated through file path
| >> regular expressions [3] into two tables ordered by author [4]. (The
| >> summary is auto-generated by taking the last significant part of each
| >> path regex.) If you want, please have a look at them and point out
| >> omissions and mistakes.
| >>
| >> I will try to commit all responses I receive with appropriate credit to
| >> the repository. (You can also submit a GitHub pull-request, if you prefer.)
| >>
| >> [1] https://github.com/dspinellis/unix-history-repo
| >> [2]
| >> http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
| >> [3]
| >> https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
| >> [4] http://istlab.dmst.aueb.gr/~dds/contributions.pdf
| >>
| >> Diomidis
I can assure you that Lorinda Cherry wrote most of the important
code in WWB, including style and diction. The idea for them
came from Bill Vesterman at Rutgers. Lorinda already had parts,
a real tour de force, which assigned parts of speech to words
in a text. Style was the killer app for parts and was running
within days of his approach to the labs wondering whether
such a thing could be built. Lorinda also wrote deroff, which
these tools of course needed. WWB per se was packaged by
USDL; I am sorry I can't remember the name of the guiding
spirit. So Lorinda's code detoured through there on its
way into research Unix.
Chris van Wyk was cvw. He was at Bell Labs, not BSD.
Chuck Haley is indeed Charles B. Haley.
Andy Koenig was ark.
A few scattered answers, some redundant with those of others:
-- Lorinda Cherry (llc) worked at Bell Labs. She wrote diction (and
the rest of the Writer's Workbench tools) there, in the early
1980s; if some people saw it first in BSD releases that is just
an accident of timing (too late for V7) and exposure (I'm pretty
sure it was available in the USG systems, which weren't generally
accessible until a year or two later).
Lorinda is one of the less-known members of the original Computer
Science Research Center who nevertheless wrote or co-wrote a lot
of things we now take for granted, like dc and bc and eqn and
libplot.
Checking some of this on the web, I came across an interesting
tidbit apparently derived from an interview with Lorinda:
http://www.princeton.edu/~hos/frs122/precis/cherry2.htm
I wholly endorse what she says about UNIX and the group it came from.
One fumble in the text: `Bob Ross' who liked to break programs is
surely really Bob Morris.
-- So far as I know, Tom Duff (td) was never at Berkeley. He's
originally from Toronto; attended U of T; was at Lucasfilm for a
while (he has a particular interest in graphics, though he is a
very sharp and subtle programmer in general); started at Bell Labs
in 1984, not long before I did. He left sometime in the 1990s,
lives in Berkeley CA, but works neither at UCB nor at Google but
at Pixar.
-- T. J. Kowalski (frodo) was at Bell Labs; when I was there he
worked in the research group down the hall (Acoustics, I think), with
whom Computer Science shared a lot of UNIX-related stuff. Ted is
well-known for his work on fsck, but did a lot of other stuff, including
being the first to get Research UNIX to work on the MicroVAX II. He
also had a high-quality mustache.
-- Andrew Koenig (ark) was part of the Computer Science group when
I was there in the latter 1980s. He was an early adopter of C++.
asd, the automatic-software distributor we used to keep the software
in sync on the 20-or-so systems that ran Research UNIX, was his work.
-- Mike Tilson was, I think, one of the founders of HCR (Human Computing
Resources), a UNIX-oriented software company based in Toronto in the
early 1980s. The company was later acquired by SCO, in the days when
SCO was still a technical company rather than a den of lawyers.
-- Peter Honeyman (honey) was never, I think, at Berkeley, though
he is certainly of the right character. In the 1980s he was variously
(sometimes concurrently?) working for some part of AT&T and at Princeton.
For many years now he has been in Ann Arbor MI at the University of
Michigan, where his still-crusty manner appears not to interfere with
his being a respected researcher and much-liked teacher.
Norman Wilson
Toronto ON
(Bell Labs Computing Science Research, 1984-1990)
I found out that the book "Life with Unix" by Don Libes and Sandy
Ressler has a seven page listing of Unix notables, and I'm using that to
fill gaps in the contributors of the Unix history repository [1,2].
Working through the list, the following questions came up.
- Lorinda Cherry is credited with diction. But diction.c first appears
in 4BSD and 2.10BSD. Did Lorinda Cherry implement it at Berkeley?
- Is Chuck Haley listed in the book as the author of tar the same as
Charles B. Haley who co-authored V7 usr/doc/{regen,security,setup}? He
appears to have worked both at Bell labs (tar, usr/doc/*) and at
Berkeley (ex, Pascal). Is this correct?
- Andrew Koenig is credited with varargs. This is a four-line header
file in V7. Did he actually write it?
- Ted Dolotta is credited with the mm macros, but the document "Typing
Documents with MM" is written by D. W. Smith and E. M. Piskorik. Did
its authors only write the documentation? Did Ted Dolotta also write
mmcheck?
Also, I'm missing the login identifiers for the following people. If
anyone remembers them, please send me a note.
Bell Labs, PWB, USG, USDL:
Andrew Koenig
Charles B. Haley
Dick Haight
Greg Chesson
Herb Gellis
Mark Rochkind
Ted Dolotta
BSD:
Bill Reeves
Charles B. Haley
Colin L. Mc Master
Chris Van Wyk
Douglas Lanam
David Willcox
Eric Schienbrood
Earl T. Cohen
Herb Gellis
Ivan Maltz
Juan Porcar
Len Edmondson
Mark Rochkind
Mike Tilson
Olivier Roubine
Peter Honeyman
R. Dowell
Ross Harvey
Robert Toxen
Tom Duff
Ted Dolotta
T. J. Kowalski
Finally, I've summarized all contributions allocated through file path
regular expressions [3] into two tables ordered by author [4]. (The
summary is auto-generated by taking the last significant part of each
path regex.) If you want, please have a look at them and point out
omissions and mistakes.
I will try to commit all responses I receive with appropriate credit to
the repository. (You can also submit a GitHub pull-request, if you prefer.)
[1] https://github.com/dspinellis/unix-history-repo
[2]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
[3]
https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
[4] http://istlab.dmst.aueb.gr/~dds/contributions.pdf
Diomidis
> From: Clem Cole
> Eric Schienbrood
> .. Noel might remember his MIT moniker
No, alas; and I tried 'finger Schienbrood(a)lcs.mit.edu' and got no result.
Maybe he was in some other part of MIT, not Tech Sq?
> From: Arnold Skeeve
> Here too I think stuff written at ATT got out through Berkeley. (SCCS)
That happened at MIT, too - we had SCCS quite early (my MIT V6 manual has
it), plus all sorts of other stuff (e.g. TROFF).
I think some of it may have come through Jon Sieber, who, while he was in high
school, had been part of (IIRC) a Scout troop which had some association with
Bell Labs, and continued to have contacts there after he became an MIT
undergrad.
Noel
> From: Peter Jeremy <peter(a)rulingia.com>
> Why were the original read(2) and write(2) system calls written to
> offer synchronous I/O only?
A very interesting question (to me, particularly, see below). I don't think
any of the Unix papers answer this question?
> It's relatively easy to create synchronous I/O functions given
> asynchronous I/O primitives but it's impossible to do the opposite.
Indeed, and I've seen operating systems (e.g. a real-time PDP-11 OS I worked
with a lot called MOS) that did that.
I actually did add asynchronous I/O to V6 UNIX, for use with very early
Internet networking software being done at MIT (in a user process). Actually,
it wasn't just asynchronous, it was _raw_ asynchronous I/O! (The networking
device was DMA, and the s/w did DMA directly into the user process' memory.)
The code also allowed more than one outstanding I/O request, too. (So the
input could be re-enabled on the device ASAP, without having to wake up a
process, have it run, do a new read call, etc.)
We didn't redo the whole Unix I/O system, to support/use async I/O throughout,
though; I just kind of warted it onto the side. (IIRC, it notified the user
process via a signal that the I/O had completed; the user software then had
to do an sgtty() call to get the transfer status, size, etc.)
Anyway, back to the original topic: I don't want to speculate (although I
could :-); perhaps someone who was around 'back then' can offer some insight?
If not, time for speculation! :-)
Noel
Why were the original read(2) and write(2) system calls written to offer
synchronous I/O only? It's relatively easy to create synchronous I/O
functions given asynchronous I/O primitives but it's impossible to do the
opposite.
Multics (at least) supported asynchronous I/O so the concept wasn't novel.
And any multi-tasking kernel has to support asynchronous I/O internally so
suitable code exists in the kernel.
--
Peter Jeremy
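As an aside, the "synchronous on top of asynchronous" direction really is only a
few lines of wrapper code. The sketch below uses POSIX aio purely as a
latter-day stand-in for "asynchronous primitives"; nothing like it existed when
the original calls were designed:

    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    /* A synchronous positioned read built from asynchronous primitives:
     * start the operation, then block until it completes. */
    ssize_t sync_read(int fd, void *buf, size_t n, off_t off)
    {
        struct aiocb cb;
        const struct aiocb *list[1];

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = n;
        cb.aio_offset = off;
        list[0] = &cb;

        if (aio_read(&cb) == -1)
            return -1;
        while (aio_error(&cb) == EINPROGRESS)   /* wait for completion */
            aio_suspend(list, 1, NULL);
        return aio_return(&cb);
    }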
As I was dropping off to sleep last night, I wondered why the superuser
account on Unix is called root.
There's a hierarchy of directories and files beginning at the tree root /.
There's a hierarchy of processes rooted with init. But there's no hierarchy
of users, so why the moniker "root"?
Any ideas?
Cheers, Warren
> Did any Unix or Unix like OS ever zero fill on realloc?
> On zero fill, I doubt many did that. Many really early on when memory
> was small.
This sparks reminiscence. When I wrote an allocation strategy somewhat
more sophisticated than the original alloc(), I introduced realloc() and
changed the error return from -1 to the honest pointer value 0. The
latter change compelled a new name; "malloc" has been with us ever since.
To keep the per-byte cost of allocation low, malloc stuck with alloc's
nonzeroing policy. The minimal extra code to handle calls that triggered
sbrk had the startling property that five passes through the arena might
be required in some cases--not exactly scalable to giant virtual address
spaces!
It's odd that the later introduction of calloc() as a zeroing malloc()
has never been complemented by a similar variant of realloc().
> Am I the only one that remembers realloc() being buggy on some systems?
I've never met a particular realloc() bug, but realloc does inherit the
portability bug that Posix baked into malloc(). Rob Pike and I
requested that malloc(0) be required to return a pointer distinct from
any live pointer. Posix instead allowed an undefined choice between
that behavior and an error return, confounding it with the out-of-memory
indication. Maybe it's time to right the wrong and retire "malloc".
The name "alloc" might be recycled for it. It could also clear memory
and obsolete calloc().
Doug
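A minimal sketch of what such a variant might look like; because the standard
interface never exposes the old size, the caller would have to supply it (the
name rezalloc is invented here purely for illustration):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical zeroing realloc: resize a block and clear any newly
     * added bytes.  oldsz must come from the caller, since malloc/realloc
     * give no portable way to recover it. */
    void *rezalloc(void *p, size_t oldsz, size_t newsz)
    {
        void *q = realloc(p, newsz);

        if (q != NULL && newsz > oldsz)
            memset((char *)q + oldsz, 0, newsz - oldsz);
        return q;
    }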
Dave Horsfall:
Today is The Day of the Programmer, being the 0x100'th day of the year.
===
Are you sure you want to use that radix as your standard?
You risk putting a hex on our profession.
Norman Wilson
Toronto ON
> Today is The Day of the Programmer, being the 0x100'th day of the year.
Still further off topic, but it reminds me of a Y2K incident circa 1960.
Our IBM 7090 had been fitted with a homegrown time-of-day clock (no, big
blue did not build such into their machines back then). The most significant
bits of the clock registered the day of the year. On day 0x100 the clock
went negative and the system went wild.
Doug
Hi there,
we just restored our PDP-11/23+, rebuilding a new PSU around a
normal PC PSU and creating the real-time clock needed by some
OSes.
We're wondering which UNIX could eventually run on it :)
http://museo.freaknet.org/en/restauro-pdp1123plus/
bye,
Gabriele
--
[ ::::::::: 73 de IW9HGS : http://freaknet.org/asbesto ::::::::::: ]
[ Freaknet Medialab :: Poetry Hacklab : Dyne.Org :: Radio Cybernet ]
[ NON SCRIVERMI USANDO LETTERE ACCENTATE - NON MANDARMI ALLEGATI ]
[ *I DELETE* EMAIL > 100K, ATTACHMENTS, HTML, M$-WORD DOC and SPAM ]
Today is The Day of the Programmer, being the 0x100'th day of the year.
Take a bow, all programmers...
Did you know that it's an official professional holiday in Russia?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I'll support shark-culling when they have been observed walking on dry land.
On Thu, 10 Sep 2015, Norman Wilson wrote:
> #$%^&*\{{{
>
> NO CARRIER
>
> +++
> ATH
My favourite would be:
+++
(pause - this was necessary)
ATZ
I.e. a reset.
I think there were even worse ones in the Hayes codes, like ATH3 or
something. Dammit, but mental bit-rot is setting in.
Of course, I never did such an evil thing, your honour... Honest! Never!
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I'll support shark-culling when they have been observed walking on dry land.
I've used realloc a lot, and never run into bugs. Maybe I've just
been lucky, or maybe it's that I probably didn't use it that much
until the latter 1980s, and then more with pukka Doug malloc code
than with the stuff floating around elsewhere.
Never mind that sometime around 1995 I found a subtle bug in the
pukka Doug malloc (not realloc): the arena blew up badly when presented
with a certain pattern of alternating small and large allocs and frees,
produced by a pukka Brian awk regression test. I had a lot of (genuine)
fun tracking that down, writing some low-level tools to extract the
malloc and free calls and sizes and then a simulator in (what else?)
awk to test different fixes. Oh, for the days when UNIX was that
simple.
I've never heard before of a belief that the new memory belonging
to realloc is always cleared, except in conjunction with the utterly-
mistaken belief that that's true of malloc as well. I don't think it
was ever promised to be true, though it was probably true by accident
just often enough (just as often as with malloc) to fool the careless.
Norman Wilson
Toronto ON
I’ve just had a discussion with my boss about this and he claimed that it did at one point and I said it hasn’t in all the unix versions I’ve ever played with (v6, v7, sys III, V, BSD 2, 3, 4, SunOS and Solaris).
My question to this illustrious group is: Did any Unix or Unix like OS ever zero fill on realloc?
David
I never used realloc(), only malloc() and calloc().
Checking a few Unixes I have access to, all the realloc() man pages seem
either to say nothing about the contents of the added memory or to state
explicitly 'undefined'.
The only function which zeroes allocated memory is calloc(), it seems.
Unixes checked: SCO UNIX 3.2V4.2, Digital Unix 4.0G, Tru64 Unix V5.1B,
HP-UX 11.23, HP-UX 11.31
Cheers
On 9/11/15, tuhs-request(a)minnie.tuhs.org <tuhs-request(a)minnie.tuhs.org> wrote:
> Message: 2
> Date: Thu, 10 Sep 2015 16:10:41 -0400 (EDT)
> From: Jim Capp <jcapp(a)anteil.com>
> To: david(a)kdbarto.org
> Cc: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Did realloc ever zero the new memory?
> Message-ID: <5962857.12872.1441915841364.JavaMail.root@zimbraanteil>
> Content-Type: text/plain; charset="utf-8"
>
> On every system that I've ever used, I believe that realloc did not do a
> zero fill. There might have been a time when malloc did a zero fill, but I
> never counted on it. I always performed a memset or bzero after a malloc.
> I'm pretty sure that I counted on realloc to NOT perform a zero fill.
>
>
> $.02
>
>
>
> From: "David" <david(a)kdbarto.org>
> To: tuhs(a)minnie.tuhs.org
> Sent: Thursday, September 10, 2015 3:52:45 PM
> Subject: [TUHS] Did realloc ever zero the new memory?
>
> I?ve just had a discussion with my boss about this and he claimed that it
> did at one point and I said it hasn?t in all the unix versions I?ve ever
> played with (v6, v7, sys III, V, BSD 2, 3, 4, SunOS and Solaris).
>
> My question to this illustrious group is: Did any Unix or Unix like OS ever
> zero fill on realloc?
>
> David
>
> _______________________________________________
> TUHS mailing list
> TUHS(a)minnie.tuhs.org
> https://minnie.tuhs.org/mailman/listinfo/tuhs
>
Can't say much more, really...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Concerned about shark attacks? Then don't go swimming in their food bowl...
---------- Forwarded message ----------
From: Jim Haynes
Cc: greenkeys(a)mailman.qth.net
Subject: Re: [GreenKeys] Teletype Industrial Design
On Fri, 28 Aug 2015, Jack wrote:
> How were they still applying for patents in 1993?
>
> D332,465 (1993) Sokolowski
>
It was filed for in 1988 and was assigned to AT&T Bell Laboratories.
So I guess that was after what was left of Teletype had gone to
Naperville. And what was left of Bell Labs was still part of AT&T,
before the spinoff of Lucent in 1996.
Incidentally if you google for Bell Laboratories the first thing that
comes up is
Bell Laboratories - Home
www.belllabs.com/
An exclusive manufacturer of rodent control products, Bell Laboratories produces
the highest quality rodenticides and other rodent control products available to
...
Just stirring up the gene pool, so to speak... And who hasn't played
chess with a computer, and caught it cheating?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer"
RIP Cecil the Lion; he was in pain for two days, thanks to some brave hunter.
---------- Forwarded message ----------
Date: Sat, 15 Aug 2015 16:49:19 -0400
From: Christian Gauger-Cosgrove
To: David Tumey
Cc: GREENKEYS BULLETIN BOARD <greenkeys(a)mailman.qth.net>
Subject: Re: [GreenKeys] TELETYPE Chess Anyone??
On 15 August 2015 at 16:37, David Tumey via GreenKeys
<greenkeys(a)mailman.qth.net> wrote:
> house. I got to play a game of Chess on a Model 33/PDP-?? and it totally
> blew my mind. I knew that I wanted that to be part of my current teletype
You know, the current version of the SIMH emulator can connect to
serial ports now. If you want, I can help you set up SIMH's PDP-11
simulator running a PDP-11 UNIX, which of course has chess you can play,
so that it'll work on your TTY.
The only required information is: "What serial port is your Teletype's
current loop adapter connected to?" and "What do you want to run? V6, V7,
Ultrix-11, RT-11 (V4, V5.3, V5.7), RSTS/E (V7, V10.1-L), RSX-11/M+,
DSM-11 (kill it with fire)? All of the above?"
Cheers,
Christian
--
Christian M. Gauger-Cosgrove
STCKON08DS0
Contact information available upon request.
Ah hah! My stray memory of `Not War?' must date from my TOPS-10 days.
I can't find a trace of the string `love' anywhere in any of the make
sources in Kirk's multi-CD compendium of historic BSD, so it certainly
can't have been from there.
Norman Wilson
Toronto ON
So me, being an uber-geek, tried it on a few boxen again...
On the Mac:
ozzie:~ dave$ make love
make: *** No rule to make target `love'. Stop.
Boring...
On FreeBSD:
aneurin% make love
Not war.
Thank you for keeping the faith!
And on my tame penguin:
dave@debbie:~$ make love
-bash: make: command not found
Sigh...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Watson never said: "I think there is a world market for maybe five computers."
On 30 July 2015 at 17:11, Norman Wilson <norman(a)oclsc.org> wrote:
> My vague memory is that the original make, e.g. in V7, printed
> `Don't know how to make love.' This was not a special case:
> `don't know how to make XXX' was the normal message.
>
> There was a variant make that printed
> Not war?
> if asked to make love without explicit instructions. I thought
> that appeared in 3BSD or 4BSD, but I could be mistaken.
On Solaris 10, /usr/css/bin/make reports:
make: Fatal error: Don't know how to make target `love'
N.
My vague memory is that the original make, e.g. in V7, printed
`Don't know how to make love.' This was not a special case:
`don't know how to make XXX' was the normal message.
There was a variant make that printed
Not war?
if asked to make love without explicit instructions. I thought
that appeared in 3BSD or 4BSD, but I could be mistaken.
Norman Wilson
Toronto ON
I recall playing with this on the -11, but it seems to have become extinct
(the program, I mean). I seem to recall that it was written in PDP-11
assembly; did it ever get rewritten in C?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer"
"The complexity for minimum component costs has increased at a rate of
roughly a factor of two per year." -- G.Moore, Electronics, Vol 38 No 8, 1965
> The punning pseudonym, the complaint at the end that Unix and C are dead
> and nothing is even on the horizon to replace them, and the general snarky
> tone suggest to me that it's Rob "Mark V. Shaney" Pike. In that case,
> the affiliation with Bellcore is a blind ("not Goodyear, Goodrich").
VSM, MVS: what other mystery authors in Unix land identify thus with VMS?
Doug
I authored those files so I could render the Seventh Edition manuals
as PDF in 1998 (long after I had departed the Labs). As pic did not
exist yet (Kernighan had not written it) there were never any
original pic files for these documents. I do not know what 1127 was
doing to publish diagrams at the time.
The Bell logo I did directly in PostScript so \(bs would render. The logo
was originally its own custom "character", just like an A, B or C, on the
phototypesetter's optical font wheel.
You can see what they looked like in the v7 PDF manuals --
In Volume 2A (http://plan9.bell-labs.com/7thEdMan/v7vol2a.pdf)
bs.ps is on a variety of pages such as 129, 130, 216
ms.pic is on page 127
make.ps is on page 282
In Volume 2B (http://plan9.bell-labs.com/7thEdMan/v7vol2b.pdf)
implfig1.pic is on page 162
implfig2.pic is on page 168
these are the PDF page numbers (where the title is page 1)
> From: Mark Longridge <cubexyz(a)gmail.com>
>
> I came across some Unix files in v7add such as bs.ps for the Bell logo
> and ms.pic (described as Figure 1 for msmacros).
>
> http://www.maxhost.org/other/ms.pic
>
> I was wondering if there was some viewer or conversion program so we
> could look at pic files from this era?
>
> Mark
>
>
I came across some Unix files in v7add such as bs.ps for the Bell logo
and ms.pic (described as Figure 1 for msmacros).
http://www.maxhost.org/other/ms.pic
I was wondering if there was some viewer or conversion program so we
could look at pic files from this era?
Mark
Peter Salus noted there was a workshop in Newport, RI, in 1984 concerning
"Distributed UNIX." The report on “Distributed UNIX” by Veigh S. Meer [a
transparent pseudonym] appeared in ;login: 9.5 (November 1984), pp.
5-9. So who was "Veigh S. Meer"? The affiliation says "Bellcore," but
who was there in 1984? Peter's first thought was Peter Langston.
Any ideas?
Debbie
Dave Horsfall:
Call me slow, but can someone please explain the joke? If it's American
humo[u]r, then remember that I'm British/Australian...
There is no such thing as American humour, because Yanks don't know
how to spell.
They can't get the wood either.
Norman Wilson
(reformed Yank)
Toronto ON
When I get back to a keyboard, I thought maybe it would be nice to share
some Greg stories; I have enough of them. So I'll try and do that.
If anyone wants to do that in person, USENIX ATC is next
week and would be an appropriate venue. Perhaps a Greg
Chesson Memorial BOF?
Norman Wilson
Toronto ON
Much to my surprise, I see there isn't a wiki page on Greg Chesson yet.
Maybe some of his friends could get together and submit one?
Cheers,
rudi
On 6/30/15, tuhs-request(a)minnie.tuhs.org <tuhs-request(a)minnie.tuhs.org> wrote:
> Today's Topics:
>
> 1. We've lost Greg Chesson (Dave Horsfall)
> 2. Re: We've lost Greg Chesson (Clem Cole)
> 3. Re: We've lost Greg Chesson (Larry McVoy)
> 4. Re: We've lost Greg Chesson (Larry McVoy)
> 5. Re: We've lost Greg Chesson (Mary Ann Horton)
> 6. Re: We've lost Greg Chesson (Norman Wilson)
> 7. Re: We've lost Greg Chesson (Dave Horsfall)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 29 Jun 2015 17:30:16 +1000 (EST)
> From: Dave Horsfall <dave(a)horsfall.org>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: [TUHS] We've lost Greg Chesson
> Message-ID: <alpine.BSF.2.11.1506291728540.96902(a)aneurin.horsfall.org>
> Content-Type: text/plain; charset="utf-8"
>
> Haven't found any more info...
>
> --
> Dave Horsfall (VK2KFU) "Those who don't understand security will suffer"
> http://www.horsfall.org/ It's just a silly little web site, that's all...
>
> ---------- Forwarded message ----------
> Date: Sun, 28 Jun 2015 18:50:42 -0400
> From: Dave Farber
> To: ip <ip(a)listbox.com>
> Subject: [IP] Death of Greg Chesson
>
> ---------- Forwarded message ----------
> From: "Lauren Weinstein"
> Date: Jun 28, 2015 6:43 PM
> Subject: Death of Greg Chesson
> To: <dave(a)farber.net>
> Cc:
>
>
> Dave, fyi.
>
> https://plus.google.com/u/0/+LaurenWeinstein/posts/bRdbj1B1qQG
>
> --Lauren--
> Lauren Weinstein (lauren(a)vortex.com) http://www.vortex.com/lauren
> Founder:
> - Network Neutrality Squad: http://www.nnsquad.org
> - PRIVACY Forum: http://www.vortex.com/privacy-info
> Co-Founder: People For Internet Responsibility:
> http://www.pfir.org/pfir-info
> Member: ACM Committee on Computers and Public Policy
> Lauren's Blog: http://lauren.vortex.com
> Google+: http://google.com/+LaurenWeinstein
> Twitter: http://twitter.com/laurenweinstein
> Tel: +1 (818) 225-2800 / Skype: vortex.com
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 29 Jun 2015 07:44:58 -0500
> From: Clem Cole <clemc(a)ccc.com>
> To: Dave Horsfall <dave(a)horsfall.org>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] We've lost Greg Chesson
> Message-ID:
> <CAC20D2MqtPfMe_SdkCfFFthF_5Rth8y2Q6U2xm-ZgbkAA0tQZg(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Greg had been sick for a while. Sad loss.
> Clem
>
---------- Forwarded message ----------
Subject: Re: [GreenKeys] Replacing Chad, was: Black tape
On Sun, 21 Jun 2015, Jones, Douglas W wrote:
> The flaws in the Votomatic were a bit subtle, but in retrospect, if you
> read the patents for the IBM Portapunch (the direct predecessor of the
> Votomatic) and for the Votomatic, you find that the flaw that was its
> eventual downfall -- at least in the public's mind -- is fully
> documented in the patents. Of course IBM's (and later CESI's, after IBM
> walked away from the Votomatic in the late 1960s) salesmen never
> mentioned those flaws.
Portapunch? Arrgghh!
We had to put up with that Satan-spawn in our University days... Only 40
columns wide, it had specific encodings for FORTRAN words, and it was just
as well that FORTRAN ignored white space (I know this for a fact, when I
fed a deck into a "real" card reader one time).
Being but mere impecunious Uni students and having to actually buy the
things, we resorted to fixing mis-punches by literally sticky-taping the
chads back.
For some reason, the computer operators (IBM 360/50) hated us...
I really do hope that the inventor of Portapunch is still having holes
punched through him with a paper-clip.
Anyone got Plan 9 4th edition manuals or Inferno manuals? I would be
interested in buying them. Vitanuova used to sell them, but their payment
system has been down for years now; it has been brought up to Charles many
times, but it seems to still be down, so there is no way to buy them from
them.
--
Ramakrishnan
Hi All.
As mentioned a few weeks ago, I have a full set of the O'Reilly X11 books
from the early 90s.
I'm willing to send them on to a better home for the cost of mailing
from Israel.
One person said they were interested but didn't follow up with me, so
I'm again offering to the list. First one to get back to me wins. :-)
Thanks,
Arnold
Prompted by another thread, I decided to share about some of my
experience with providing printed BSD manuals.
I was given a 4.4BSD set with the understanding that I would work on
preparing new print editions using NetBSD. It was a significant
undertaking. I ended up just doing Section 8 System. Here is a summary
of what I did:
- Build the NetBSD distribution (which gets the manual pages generated
or at least put in place).
- Manual clean up, like remove a link to manual page that wasn't needed
and remove a duplicated manual (in two sub-sections).
- Learned about permuted index (the long KWIC index cross-reference).
Generated a list of characters and terms to ignore for building my
permuted index. Wrote a script to generate it, including converting to
LaTeX using longtable. This resulted in 2937 entries and was 68 pages in
the printed book.
- Create a list of all section 8 pages, pruned for duplicate inodes.
- Also have a list of 40 filenames of other manpages to include in the
man8 book. These are system maintenance procedures or commands that are
in the wrong section or could be section 8 (or weren't installed). (Examples
are ipftest, pkg_admin, postsuper.)
- Generate a sorted list of all the manuals.
- Look for any missing manual pages. Wrote a script to check for libexec or sbin
tools not in man8; it found, for example, that supfilesrv and supscan are
really supservers.8, and that kdigest.8 was missing. Got those files in place
as needed. I cannot remember now, but I think I may have written some missing
manuals or got others to submit some (officially).
- Script to make sure all man pages are in order. This found some
duplicate manual pages with different inodes, wrong man macro DT values,
wrong filename, wrong sections, etc. Some of these were reported
upstream or fixed.
- Script to create the book as a single huge postscript file, then a
PDF. Reviewed the possible ROFF related errors and warnings.
(On 2009-10-23, it was 1304 pdf pages from 572 manuals.)
- Script to figure out licenses. This was substantial! It looked for
copyright patterns in manual source, excluded junk formatting like
revision control markers, and included some extra licenses that weren't
included in the manpage itself (like GPL2). Then another script to
generate LaTeX from the copyrights and licenses. It removed duplicate
license statements and sorted the copyrights. So some license
statements had many copyrights using the same license verbiage.
This represented 620 copyrighted files with approximately 683 copyright
lines and 109 different licenses. Yes 109! That resulted in
approximately 68 printed pages, pages 1461 through 1529. This didn't
duplicate any license verbiage. (I just realized that was the same
length as my permuted index.)
A few things to note: Some authors chose to use different names for
different copyright statements. Some authors used their names or
assigned the copyright to the project. Some licenses included software
or authors names instead of generic terms. Many BSD style licenses were
slightly changed with different grammar, etc. Many contributors created their
own license or reworded someone else's existing license text. As an
example: "If it breaks then you get to keep both pieces." :)
The four most common license statements represented 113 "THE REGENTS AND
CONTRIBUTORS" manuals, 75 "THE NETBSD FOUNDATION, INC." manuals, and 35
"IBM" (aka Postfix) manuals, and 30 generic "THE AUTHOR AND
CONTRIBUTORS" manuals.
I found many were missing licenses. I hunted down original authors,
looked in CVS history, etc to help resolve some of these. I also
reported about still missing licenses to the project. We will assume
they meant it is open source and can be distributed since nobody has
complained for years (even prior to my printed work) :)
- Generated a list of required advertising acknowledgments in LaTeX to
import into one printed book (and for webpage).
- Split my long PDF into two volumes. Used LaTeX pdfpages package and
includepdf to include the generated PDFs. I made sure the page numbers
continued in the second book from the end of the previous volume. (After
printing, I realized a mistake where the second volume had odd numbers
on left pages, but order is still correct, so I assume nobody else
noticed.)
Historically, the System Manager's Manual (SMM) also included other
system installation and administration documentation (in addition to the
manual pages). My work didn't include that documentation (some of which
was unmaintained since 4.4BSD in 1993 and covers some software and
features that are no longer included with NetBSD). That could be another
project.
I only did the SMM / manual section 8. I realized if I did all manuals,
my book set would be well over ten thousand printed pages. The amount of
work and initial printing costs would not be worth it with the little
money it could bring in. It was certainly a learning experience, and it
brought some benefits, such as cleanup of some mandoc/roff code, filename
renames, copyright/license additions, and added manual pages.
Jeremy C. Reed
echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \
tr '#-~' '\-.-{'
I recently came across this:
http://www.cs.princeton.edu/~bwk/202
It's been there for a while but I hadn't noticed it. It describes the
trials and tribulations of getting the Mergenthaler 202 up and running
at Bell Labs and is very interesting reading.
I have already requested that they archive their work with TUHS and
gotten a positive response about this from David Brailsford. In the
meantime, it's fun reading!
Enjoy,
Arnold
I noticed that the assembly source file for blackjack is missing from
the source tree so I tried to recreate it, so far unsuccessfully.
My first idea was to grab bj.s from 2.11BSD and assemble it with the Unix
v5 as command. That seems to generate a bunch of errors. Also other
assembly source files don't seem to have .even in them.
Another idea would be to generate the source code from the executable
itself, but there doesn't seem to be a disassembler for early Unix.
It's possible that v5 bj.s was printed out somewhere but so far no
luck finding it.
Mark
Mark Longridge:
chmod 0744 bj
Dave Horsfall:
That has to be the world's oddest "chmod" command.
======
Not by a long shot.
Recently, for reasons related both to NFS permissions and to
hardware testing, I have occasionally been making directories
with mode 753.
At the place I worked 20 years ago, we wanted a directory
into which anonymous ftp could write, so that people could
send us files; but we didn't want it to become a place for
creeps to stash their creepy files. I thought about the
problem briefly, then made the directory with mode 0270,
owned by the user used for anonymous ftp and by a group
containing all the staff members allowed to receive files
that way. That way creeps could deposit files but couldn't
see what was there.
I also told cron to run every ten minutes, changing the
permissions of any file in that directory to 0060.
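The cron entry was something along these lines (a reconstruction of the
idea only, not the real line; the drop directory path is made up):

    # every ten minutes, make anything dropped into the ftp drop
    # directory readable only by the receiving group
    0,10,20,30,40,50 * * * * find /usr/spool/ftp/incoming -type f -exec chmod 0060 {} \;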
Oh, and I had already maniacally (and paranoiacally)
excised from ftpd the code allowing ftp to change permissions.
I admit I can't think of a reason to use 744 offhand, since
if you can read the file you can copy it and make the copy
executable. But UNIX permissions can be used in so many
interesting ways that I'm not willing to claim there is no
such reason just because I can't see what it is.
Norman Wilson
Toronto ON
OK, success...
in Unix v5:
as bj.s etc.s us.s
ld a.out -lc
mv a.out bj
chmod 0744 bj
It seems to work OK now. Probably should work on v6 and v7 as well.
Mark
> From: Mark Longridge <cubexyz(a)gmail.com>
> My first idea was to grab bj.s from 2.11BSD and assemble it with the Unix v5
> as command. That seems to generate a bunch of errors. Also other
> assembly source files don't seem to have .even in them.
My first question was going to be 'Maybe try an earlier version of the
source?', but I see there is no earlier version online. Odd. ISTR that some
of the fun things in V6 came without source, maybe blackjack was the same way?
> Another idea would be to generate the source code from the executable
> itself, but there doesn't seem to be a disassembler for early Unix.
Where's the binary? I'd like to take a look at it, and see if the source was
assembler, or C (there's a C version in the source tree, too). Then I can
look and see how close it is to that 2.11 source - that may be a
re-implementation, and totally different.
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> OK, success...
Yeah, I just got there too, but by a slightly longer route!
(Google wasn't turning up the matches for the routines I needed, which you
found in etc.s, etc - it seems the source archive on Minnie isn't being
indexed by Google. So I wound up cobbling them together with a mix of stuff
from other places, along with stuff I wrote/modified.)
> Probably should work on v6 and v7 as well.
Does on V6, dunno about V7.
> It seems to work OK now.
Yes, but this is _not_ the source for the V5/V6 'bj'. (I just checked,
and the V5 and V6 binaries are identical.)
Right at the moment, I've used enough time on this - I may get back to
it later, and disassemble the V5/V6 binary and see what the original
source looks like.
Noel
> From: Noel Chiappa
> another assembler source file, which contains the following routines
> which are missing from bj.s:
I missed some. It also wants quest1, quest2 and quest5 (and maybe more).
This may present a bit of a problem, as I can't find any trace of them
anywhere, and will have to work out from the source what their arguments,
etc are, what they do, etc.
I wonder how on earth the 2.11 people got this to assemble? (Or maybe they
didn't?)
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> My first idea was to grab bj.s from 2.11BSD and assemble it with the Unix v5
> as command. That seems to generate a bunch of errors.
I saw that there's a SysIII bj.s, which is almost identical to the 2.11 one;
so the latter is probably descended from the first, which I assume is Bell
source. So I grabbed it and tried to assemble it.
The errors are because bj.s is designed to be assembled along with another
assembler source file, which contains the following routines which are
missing from bj.s:
mesg
decml
nline
Dunno if you're aware of this, but the line 'as a.s b.s' _doesn't_
separately assemble a.s and b.s, rather it's as if you'd typed
'cat a.s b.s > temp.s ; as temp.s'. (This is used in the demi-enigmatic
"as data.s l.s" in the system generation procedure.)
I looked around in the sources that come with V6, and I didn't see any obvious
such file. I'm going to whip the required routines up really quickly, and see
if the results assemble/run.
I looked to see if I could steal them from the binary of 'bj' on V6, and...
it looks like that binary is totally different from this source. Let me look
into this...
> Also other assembly source files don't seem to have .even in them.
The V6 assembler groks '.even'.
Noel
Hello All.
FYI.
Warren - can you mirror?
> Date: Thu, 11 Jun 2015 04:41:39 -0400 (EDT)
> From: Brian Kernighan <bwk(a)cs.princeton.edu>
> Subject: dmr web site (fwd)
>
> Finally indeed. I can't recall who else asked me about
> Dennis's pages, so feel free to pass this on.
> And someone ought to make a mirror. If I were not far
> away at the moment, I'd do so myself.
>
> Brian
>
> ---------- Forwarded message ----------
> Date: Tue, 09 Jun 2015 16:32:13 -0400
> To: Brian Kernighan <bwk(a)CS.Princeton.EDU>
> Subject: dmr web site
>
> finally, try this: https://www.bell-labs.com/usr/dmr/www/
>
> It's almost a complete copy of Dennis Ritchie's pages, with some
> adaptation needed for the new location. There are a few broken links,
> but hopefully they're not too annoying.
This new paper may be of interest to list readers:
Dan Murphy
TENEX and TOPS-20
IEEE Annals of the History of Computing 37(1) 75--82 (2015)
http://dx.doi.org/10.1109/MAHC.2015.15
In particular, the author notes on page 81:
>> ...
>> The fact that UNIX was implemented in a reasonably portable language
>> (at least as compared with 36-bit MACRO) also encouraged its spread to
>> new and less expensive machines. If I could have done just one thing
>> differently in the history of TENEX and TOPS-20, it would be to have
>> coded it in a higher-level language. With that, it's probable that the
>> system, or at least large parts of it, would have spread to other
>> architectures and ultimately survived the demise of the 36-bit
>> architecture.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Just cuz of this list and recent comments about SVRx, I noticed this comment in ftrap.s:
fpreent: # this is the point we return to
# when we are executing the n+1th
# floating point instruction in a
# contiguous sequence of floating
# point instructions (floating
# pointlessly forever?)
Makes me wonder how many other humorous comments are buried in the code.
David
Oh, if no one out there has a SVR3.1 distribution (apparently for the 3b2), I’ve got one to send out….
Since early 2013 I've occasionally asked this list for help, and shared
the progress regarding the creation of a Unix Git repository containing
Unix releases from the 1970s until today [1].
On Saturday I presented this work [2, 3] at MSR '15: The 12th Working
Conference on Mining Software Repositories, and on Sunday I discussed
the work with the participants over a poster [4] (complete with commits
shown in a teletype (lcase) and a VT-220 font). Amazingly, the work
received the conference's "Best Data Showcase Award", for which I'm
obviously very happy.
I'd like to thank again the many individuals who contributed to the
effort. Brian W. Kernighan, Doug McIlroy, and Arnold D. Robbins helped
with Bell Labs login identifiers. Clem Cole, Era Eriksson, Mary Ann
Horton, Kirk McKusick, Jeremy C. Reed, Ingo Schwarze, and Anatole Shaw
helped with BSD login identifiers. The BSD SCCS import code is based on
work by H. Merijn Brand and Jonathan Gray.
A lot of work remains to be done. Given that the build process is
shared as open source code, it is easy to contribute additions and fixes
through GitHub pull requests on the build software repository [5], but
if you feel uncomfortable with that, just send me email. The most useful
community contribution would be to increase the coverage of imported
snapshot files that are attributed to a specific author. Currently,
about 90 thousand files (out of a total of 160 thousand) are getting
assigned an author through a default rule. Similarly, there are about
250 authors (primarily early FreeBSD ones) for which only the identifier
is known. Both are listed in the build repository's unmatched directory
[6], and contributions are welcomed (start with early editions; I can
propagate from there). Most importantly, more branches of open source
systems can be added, such as NetBSD, OpenBSD, DragonFlyBSD, and illumos.
Ideally, current right holders of other important historical Unix
releases, such as System III, System V, NeXTSTEP, and SunOS, will
release their systems under a license that would allow their
incorporation into this repository. If you know people who can help in
this, please nudge them.
--Diomidis
[1] https://github.com/dspinellis/unix-history-repo
[2]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
(HTML)
[3]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.pdf
(PDF)
[4]
http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/poster.pdf
(105MB)
[5] https://github.com/dspinellis/unix-history-make
[6]
https://github.com/dspinellis/unix-history-make/tree/master/src/unmatched
All, I finally remembered to export the unix-jun72 project over to Github:
https://github.com/DoctorWkt/unix-jun72
This was our effort to bring the 1st Edition Unix kernel back to life
along with the early C compilers and the 2nd Edition userland binaries.
Cheers, Warren
>
> On Thu, May 21, 2015 at 11:49 AM, Clem Cole <clemc(a)ccc.com <mailto:clemc@ccc.com>
> <mailto:clemc@ccc.com <mailto:clemc@ccc.com>>> wrote:
>
> HP/UX is an SVR3 & OSF/1 ancestor. Solaris is SVR4. In fact
> it was the SVR4 license (and deal between Sun and AT&T) that
> forced the whole OSF creation. One of the "principles" of the
> OSF was "Fair and Stable" license terms.
>
> Which begs a question - since Solaris was SVR4 based and was
> made freely available via OpenSolaris et al, does that not
> make SVR4 open? I'm not a lawyer (nor play one on TV), but
> it does seem like that sets some sort of precedent.
This is indeed an interesting question. During the IBM vs SCO debacle,
IBM requested that TMGE be used as an example to show that the SVR4
kernel algorithms were already out in the public domain and thus set
the precedent. And this was also (eventually) approved by AT&T for
publication.
> From: Mary Ann Horton
> I have 5 AT&T SVR4 tapes among them .. Is it worth recovering them?
I would say that unless they are _known_ to be in a repository somewhere, yes
(unless it's going to cost a fortune - SVR4 isn't _that_ key a step in the
evolution, I don't think [but I stand to be corrected :-]).
Noel
Hi.
Can anyone still read 9 track tapes? We just uncovered two that date
from my wife's time in grad school, circa 1989 - 1990. They would
be tar tapes.
Thanks!
Arnold
A fantastic curatorial exploit!
> Deadly quote "and nobody cares about that early code history any more
> --so this is all water under the bridge."
This particular metaphor always reminds me of the Farberism: "That's
water over the bridge." Dave, a major presence at Bell Labs, master
malaprop, friend of many and collaborator with several of the early
Unix team, may be counted as an honorary Unixian.
Doug
> From: Aharon Robbins
> Can anyone still read 9 track tapes? We just uncovered two that date
> from my wife's time in grad school, circa 1989 - 1990. They would be
> tar tapes.
That old, the issue is not going to be the format (TAR is still grokkable),
but the physical condition of the tapes; that old, they might have issues
with shedding of oxide, etc (which a heat soak can mostly cure). If you
really want them read, I would recommend a specialist in reading old tapes;
it will cost, but if you really want the data... I have used Chuck Guzis, in
Washington (state) in the US.
Noel
Hoi.
What started as the plan to write a short portrait of cut(1)
for a free German online magazine (translation to English is
not done yet) became a closer look at the history of cut(1).
Well, the topic got me hooked at some point. The text is still
only about eight pages long and far from scientific quality,
but it features some facts not found in Wikipedia. ;-)
So, let me come to my questions.
1) The oldest version of cut that I found is this one in System III.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd
(The file date says 1980-04-11.) As the sccsid reads version 1.5,
there must be older code. How can I find it? Is there a story of
how cut appeared for the first time?
2) As far as I found out, POSIX.2-1992 introduced the byte mode
(-b) and added multi-byte support for the character mode. Is
this correct?
3) Old BSD sources reference POSIX Draft 9 (which, it seems,
they implement) but lack multi-byte support and the byte mode.
They also support decreasing lists, which, they state, POSIX
Draft 9 would not.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-Reno/src/usr.bin/cut/cu…
The only POSIX.2 Draft I have access to is Draft 11.2.
http://www.nic.funet.fi/pub/doc/posix/p1003.2/d11.2/all.ps
It does specify the multi-byte stuff and does also allow
decreasing lists. Hence, it appears that these things were
added sometime between Draft 9 and Draft 11.2. Does anyone
know details?
It would be great if you could give me some pointers for
further research or even share some cut-stories from the
good old days. :-)
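For what it's worth, a tiny illustration of the byte/character
distinction from question 2 (just a sketch; note that not every
implementation honours multi-byte characters in -c, GNU cut for
example still treats -c like -b):

    # 'ä' is one character but two bytes in UTF-8
    printf 'äbc\n' | cut -c 1     # character mode: should print the whole 'ä'
    printf 'äbc\n' | cut -b 1-2   # byte mode: the two bytes that encode 'ä'
    printf 'äbc\n' | cut -b 1     # only the first byte -- no longer valid UTF-8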
meillo
P.S.
In case you understand German, feel free to have a look at the
current version of the text: http://hg.marmaro.de/docs/cut/file/
I welcome your comments, but bear with me; the text isn't meant
to become a doctoral thesis; I just want to write it for fun and
to learn about the historical background.
Hello All.
I have a full set of the O'Reilly X reference manuals - Volumes 1-5, 6a
and 6b. Also "The X Window System In A Nutshell". These are all from
like the mid-1990s.
Are they worth hanging onto?
If not, does anyone on this list want them? If so, I'll send them for
the cost of postage from Israel.
Thanks,
Arnold
> From: Dave Horsfall <dave(a)horsfall.org>
>> In V6, the bootstrap in block 0 prompts for a file name, and when that
>> is entered, it loads that file into memory and starts it. (It doesn't
>> have to be in the root directory, IIRC - I'm pretty sure the bootstrap
>> will accept full path names.)
> I'm pretty sure that it didn't have the full namei() functionality, so
> all files had to be in the root directory.
It depends on what you mean by the first "it" above - if you meant 'V6', then
no. From the Distribution V6's /src/mdec/fsboot.s:
/ read in path name
/ breaking on '/' into 14 ch names
The process of breaking the name up into segments, and then later finding
each name in the appropriate directory, can be seen. The code is kind of
obscure, but if you look at the RL bootstrap I disassembled:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/rlboot.s
it's pretty much the same code, and a little better commented in the 'read
in directories' part.
Noel
> From: Mark Longridge
> I'm not sure where Unix v1 is loading the kernel from.
> From: Warren Toomey
> Have a look here: https://code.google.com/p/unix-jun72/
Thanks for the pointer! From poking around there, it looks like V1 had
special 'cold boot' and 'warm boot' disk partitions.
I wonder why they lost the 'warm boot' capability in later versions? Maybe it
became reliable enough that the extra complexity of supporting it wasn't
worth it?
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I'm not sure where Unix v1 is loading the kernel from. .. In all the
> other versions of Unix there was always a file like 'unix' in the root
> directory but I guess Unix v1 was different?
I don't know much about the other versions, but it would all depend on what's
in the bootstrap (usually contained in block 0 of drive 0, at least on older
11's). In V6, the bootstrap in block 0 prompts for a file name, and when that
is entered, it loads that file into memory and starts it. (It doesn't have
to be in the root directory, IIRC - I'm pretty sure the bootstrap will accept
full path names.)
How did you create a V1 filesystem? (I don't know, BTW, what they look like -
is that documented anywhere?) It's probably not the same layout as the V6
(which I think is the same as V5).
Noel
Ok, I looked around for the instructions on how to assemble the Unix
v1 kernel and couldn't find anything so I tried:
as u0.s u1.s u2.s u3.s u4.s u5.s u6.s u7.s u8.s u9.s ux.s
and that made a.out; I stripped it, and it looked like it was around
the same size as /etc/core (16400 bytes rather than 16448, for
some reason).
I'm not sure where Unix v1 is loading the kernel from. I'm guessing
it's /etc/core and if that's the case then I must have been successful
building the kernel. In all the other versions of Unix there was
always a file like 'unix' in the root directory but I guess Unix v1
was different?
Mark
> From: Sergey Lapin
> Is there some archives of project Athena?
> I'd like to see how it was back then...
There is a _very_ extensive online archive of stuff here:
http://web.mit.edu/afs/
and what you're looking for might be in there _somewhere_.
If not, I know some people I can ask (I never used it myself). But, if so, a
bit more detail? Athena was huge, presumably you don't want all the students'
files! But just the operating system? (IIRC it was initially mostly 4.3BSD,
with some minor extensions.) Or the tools and applications they wrote as well?
(E.g. X-Windows, IIRC, came from Athena.) Most of that does seem to be in
that archive.
Noel
To tell whether Ken installed v6 or a copy of his home
system, look at /usr/dict/words. On the home system
that file is the wordlist from Webster's Collegiate
Dictionary, 7th edition, licensed for Bell Labs use
only. On distribution systems we substituted the wordlist
for "spell". The latter list contains many more proper
names, acronyms, etc than the dictionary did, in
particular names that appear in Unix documentation
such as Ritchie, Kernighan, and McIlroy. It also lacks
lots of trivially derivable words like "organizationally".
If you do have the Webster file rather than the spell
file, please don't propagate it.
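A quick way to apply the test, going purely by the description above
(so treat the probe words as illustrative):

    grep Kernighan /usr/dict/words        # should match only on the spell list
    grep organizationally /usr/dict/words # should match only on the dictionary list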
Doug
I'm experimenting with adapting Unix history and lore using the new
EXPECT/SEND feature in simh. My favorite guinea pig is the story of Ken
Thompson's sabbatical at Berkeley, where he brings up V6 on a new 11/70 with
Bob Kridle and Jeff Schriebman. Any details not yet recorded in obvious
places[1] are of course more than welcome!
One of the things I'm trying to get right is what they actually brought
up there initially in 1975. This must have been standard V6 or the
Bell UNIX Ken brought with him, but I can't figure it out.
Salus has Schriebman, Haley and Joy installing the fixes on the 50 bugs
tape late summer 1976. This suggests it was stock V6 initially, but they
might have been playing on a different system or working from a fresh
install in 1976.
If it was stock V6 initially, what were they waiting for? Legal stuff?
If it was 1975 Bell UNIX, can I reconstruct this using the 54 patches
collected by Mike O'Brien[2], or is that going to be way off from what
Thompson left in Urbana-Champaign with Greg Chesson in 1975?
[1] http://www.tuhs.org/books.html minus the Bell journals for example
[2] Hidden in /usr/sys/v6unix/unix_changes in one of the Spencer tapes
http://www.tuhs.org/Archive/Applications/Spencer_Tapes/unsw3.tar.gz
> Does anything at all exist of PDP-7 Unics? All I know about is that
> there was a B language interpreter. Maybe a printout of the manual has
> survived?
There was no manual.
doug
Ok, the first question is:
Has anyone got Unix sysv running on PDP-11 via simh?
I downloaded some files from archive.org which included the file
'sys_V_tape' but so far I haven't got anywhere with it. Looks
interesting though.
Second question is:
What is the deal with Unix version 8? Except for the manuals v8 seems
to have disappeared into the twilight zone. Wikipedia doesn't say
much, only "Used internally, and only licensed for educational use".
So can we look at the source code? Was it sold in binary form only?
Ok, now the big question:
Does anything at all exist of PDP-7 Unics? All I know about is that
there was a B language interpreter. Maybe a printout of the manual has
survived?
Mark
Mark Longridge:
What is the deal with Unix version 8? Except for the manuals v8 seems
to have disappeared into the twilight zone. Wikipedia doesn't say
much, only "Used internally, and only licensed for educational use".
So can we look at the source code? Was it sold in binary form only?
=======
The Eighth Edition system was never released in any general way,
only to a few educational institutions (I forget the number but
it was no more than a dozen) under specific letter agreements that
forbade redistribution. It was never sold, in source or binary or
any other form; the tape included a bootstrap image and full source
code.
I was involved in all this--in fact one of the first nontrivial
things I did after arriving at Bell Labs was to help Dennis assemble
the tape--but that was more than 30 years ago and the details have
faded. The system as distributed ran only on the VAX-11/750 and
11/780. The bootstrap image on the tape was probably more restrictive
than that; if one of the licensees needed something different to
get started we would have tried to make it, but I don't remember
whether that ever happened.
Later systems (loosely corresponding to the Ninth and Tenth editions
of the manual) ran on a somewhat wider set of VAXes, in particular
the MicroVAX II and III and the VAX 8700 and 8550 (but not the dual-
processor 8800). There was never a real distribution of either of
those systems, though a few sites made special requests and got
hand-crafted snapshots under the same restrictive letter agreement.
So far as I know, no Research UNIX system after 7/e has ever been made
available under anything but a special letter agreement. There was
at one point some discussion amongst several interested parties
(including me and The Esteemed Warren Toomey) about strategies to
open up the later source code, but that was quashed by the IBM vs
The SCO Group lawsuit. It would likely be very hard to make happen
now, because I doubt there's anyone left inside Bell Labs with both
the influence and the interest, though I'd be quite happy to be
proven wrong on that.
I know of one place in the world where (a descendant of) that
system is still running, but I am not at the moment in a position
to say where that is. I do know, however, of at least two places
where there are safe copies of the source code, so it is unlikely
to disappear from the historic record even if that record cannot
be made open for a long time.
Norman Wilson
Toronto ON
(Computing Science Research Centre, Bell Labs, 1984-1990)
There was a posting on the SIMH list today from Joerg Hoppe
<j_hoppe(a)t-online.de> about a project to build a microfiche scanner
that has now successfully converted 53,545 document pages to
electronic form, and the files are being uploaded to the PDP-11
section of bitsavers.org. The scanner is described here:
http://retrocmp.com/projects/scanning-micro-fiches
There are links on that page to the rest of the story. It is an
amazing piece of work for a single person.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Claude Shannon passed away on this day in 2001.
Regarded as the Father of Information Theory, I doubt whether you'll go
through a day without bumping into him: computers, electronics, file
compression, audio sampling, you name it and he was probably behind it.
Please take a moment to remember him.
--
Dave Horsfall DTM (VK2KFU) "Bliss is a MacBook with a FreeBSD server."
http://www.horsfall.org/spam.html (and check the home page whilst you're there)
> From: Mark Longridge
> There's no reason for it to be mode 777 is there?
Not that I know of. Once UNIX has booted, it has no use for 'unix' (or
whatever file it booted from), and the boot loader doesn't even read the mode.
I think I habitually set mine to 644. (The 'execute' bits are, of course,
pointless...)
Noel
I just had it brought to my attention that the unix kernel is mode 777
in Unix v5 and v6:
ls -l /unix
-rwxrwxrwx 1 root 27066 Mar 23 1975 /unix
There's no reason for it to be mode 777 is there? It seems rather dangerous.
In Unix v7 it defaults to mode 775 and in 32v it is 755. I figure setting
it to mode 755 will work, and so far it seems fine in v5.
Mark
> From: Dave Horsfall <dave(a)horsfall.org>
>> Once UNIX has booted, it has no use for 'unix' (or whatever file it
>> booted from)
> Didn't "ps" try and read its symbol table?
Sorry, meant 'UNIX the monolithic kernel'; yes, ps and siblings (e.g. iostat)
need to get the running system's symbol table.
> I had fun days when I booted, say, "/unix.new", and "ps" wouldn't
> sodding work...
Know that feeling! I added the following to one of the kernel data files:
char *endsys &end;
and then in programs which grab the system's symbol table, I have an nlist()
entry:
"_endsys",
with the following code:
/* Check that the namelist applies to the current system.
 */
checknms(symfile)
char *symfile;
{
	char *chkloc, *chkval;

	if (nl[0].type == 0)
		cerror("No namelist\n");
	chkloc = nl[ENDSYS].value;
	chkval = rdloc(chkloc);
	if (chkval != nl[END].value) {
		cerror("Symbol table in %s doesn't match running system\n",
			symfile);
	}
}
on the theory that pretty much any change at all is going to result in a
change in the system's size (and thus the address of 'end').
Although in a split I/D system, this may not be true (you could change the
code, and have the data+BSS remain the same size); I should probably check
the location of 'etext' as well...
Anyway, a rebuilt system may result in the address of 'endsys' changing, and
thus the rdloc() won't return the contents of the running system's 'endsys',
but the chances of an essentially-random fetch being the same as the value of
'end' in /unix are pretty slim, I would say...
Noel
> From: Jacob Ritorto
> found a copy here, i think..
Ah, thanks.
You might want to look around in the parent directory; apparently there are
two differences between the 11/34 and 11/40, other than the clock and switch
register: the stack limit register, and different handling of
segmentation-violation aborted instructions (which affects instruction
restart on stack extension).
I don't know about 2.9, maybe it knows about these. For V6, the SLR won't be
an issue; the SLR is an option on the 11/40, so not all machines had it, and
m40.s in V6 doesn't use it. The instruction restart thing sounds like it will
be an issue with running V6 on a /34.
Noel
Would anyone know if it's still possible to just replace the platters and
clean the heads?
If the heads are really crashed, the only safe course is
to replace both the damaged heads and the damaged disk pack.
Anything else admits a substantial risk of carrying the
crash forward.
Cleaning the heads probably isn't an option; when they
crash, they don't just pick up material from the disk
platter, they may themselves be damaged enough that sharp
bits of the heads themselves are sticking out.
Norman Wilson
Toronto ON
> From: Noel Chiappa
> apparently there are two differences between the 11/34 and 11/40, other
> than the clock and switch register
Too early in the morning here, clearly... I was thinking of the 11/23 and the
11/40 here in the clock/SR comment, not the /34 and the /40.
_Iff_ the 11/34 is using the standard DL11-W console interface board (which
includes an LTC), there's no difference that I know of between the 11/34 and
the 11/40 on the LTC front (although the LTC is an option in the /40, so a /40
might not have one, in which case the V6 will panic on trying to boot unless
it has a KW11-P).
As for the switch register... I guess that on machines with a KY11-A, there
is no switch register? (Too lazy/busy to go read the manual(s) to confirm...)
Noel
> From: Jacob Ritorto
> I think it's something to do with the fact that he compiled it to run on
> an 11/23. Maybe it lacks unibus support.
No, the UNIBUS and QBUS appear (from the programming level) to be identical.
There are subtle differences (the /23 and its devices can address more than
256KB of memory, and some devices have minor differences between the QBUS and
UNIBUS - e.g. the QBUS DZ has only 4 lines, not 8), but in general, they
should be interchangeable.
> Maybe something to do with clock differences.
Again, if it boots at all, that's not it. (The vanilla /23 doesn't have a
software-controllable clock, and when booting Unix on one, one has to leave
the clock switched off till UNIX is running - at least, for the early versions
of UNIX.)
> I fired 2.9MSCP up in simh emulating an 11/23 and it works fine. Just to
> corroborate my hardware experience of it on the '34, I switch the cpu
> emulation to 11/34 and got a mostly identical crash sequence as with my
> real hardware.
Ah. Now we're getting somewhere! If the simulator crashes in the same way, it's
not flaky hardware (my first guess as to the cause).
What are the symptoms (in as much detail as you can give us)? What, if anything,
is printed before it dies?
> I changed ...
> UNIBUS_MAP = 0
> to
> UNIBUS_MAP = 1
The /34 doesn't have a UNIBUS map.
Noel
> From: Jacob Ritorto
> I jiggled the memory board and the seqfault went away.
Ugh. A flaky. I hate those....
> So now the real box is behaving more like the simh and just hanging,
> not panicing anymore.
Does it _always_ hang at the same point in time? If so, what are the
circumstances - have you just tried to start multi-user operation, or what?
> How can I find this startup() you mention?
It's in machdep.c in sys/sys.
Noel
> From: Jacob Ritorto
> I set simh to 11/34 and I managed to get actual panics before (that I
> didn't record)
Ah.
> now I'm just getting hangs, mostly when hitting ctrl-D to bring system to
> mutiuser.
The fact that it boots to the shell OK indicates things are mostly OK. I
wonder what it's trying to do, in going to multi-user, that's wedging the
system?
> Same if I mount -a in single user and then try to access /usr (works for
> a while, then hangs.).
Ah. That sounds very much like a 'lost' interrupt. The process is waiting
(inside the kernel) for the device to complete, and ..... crickets.
> When hung, I can still get character echo to my terminal
So the machine is still running OK (most echoing is done inside the TTY
interrupt handler).
> but can't interrupt or background the running command, etc.
Like I said, it's sleeping inside the kernel, and missed the wakeup event.
If you have another console logged in, it would be interesting to see if that
one is frozen too. If not, we can use tools like 'ps', running on the second
line, to look at the first process and see what it's waiting for.
Single user, the following hack:
sh < /dev/ttyX > /dev/ttyX &
can be used to start a shell on another tty line (since going full multi-user
seems to wedge it).
> Would it help if I traced memory and single-stepped through the
> (apparently) infinite loop?
No, because it's very likely not a loop! ;-)
> here are some examples of crashes on the real pdp11/34 (booting via
> vtserver, then bringing in system from the MSCP disk), with the original
> 2.9bsd-MSCP kernel (the one specifically built for 11/23):
>
> CONFIGURE SYSTEM:
> ka6 = 2200
> aps = 141572
> pc = 50456 ps = 30250
> __ovno = 7
> trap type 11
> panic: trap
That's a segmentation fault. Very odd trap to get! 2.9 uses overlays, right?
Maybe there's a problem with how some overlay fits, or something? I don't know
much about the overlay feature, never used it, sorry.
Most of the other data (PS address, PC, KDSA6 contents, etc) aren't much use
without a dump.
> and another: plain boring old hang at boot when trying to size devices.
> Can't even echo characters this time.
If the init process hasn't got as far an opening the TTY, you might not get
character echoing.
If an interrupt got randomly lost very early on, you might see
this sort of behaviour.
> One thing I think is interesting is that it's claiming 158720KW of
> memory. Is that weird? ... Where's it getting that odd number? Vanilla
> 2.9.1 on the real 11/34 boots with
>
> Berkeley UNIX (Rev. 2.9.1) Sun Nov 20 14:55:50 PST 1983
> mem = 135872
No idea where it's coming from, but remember Beowulf Shaeffer's advice to
Gregory Pelton in "Flatlander".
And now that I think about it, if the system thinks it has more memory than it
actually does, that would definitely cause problems.
Probably you should put some printf()'s in startup() and see where it's coming
from.
Noel
> From: Cory Smelosky <b4(a)gewt.net>
> Only the 11/23+ can, early rev 11/23s couldn't go above 256K.
Correctamundo on the last part, but not the first. I have both 11/23+'s and
11/23's, and I can assure you that Rev C and later 11/23's (the vast majority
of them) can do more than 256KB. See:
http://www.ibiblio.org/pub/academic/computer-science/history/pdp-11/hardwar…
for more.
Noel
Hi,
Since my Fuji160 drive is rather head-crashed, I've replaced it with an
M2333k, which is a smaller SMD rig with more sectors than the 160.
Unfortunately, after many dip switch settings and config changes, I have to
conclude that the sc21 just doesn't work with this new disk.
I've plugged in my SC41 controller that speaks MSCP and supports the
M2333k correctly. So now it's a matter of getting a unix small enough to
run on the 11/34 that can also speak MSCP. Enter Jonathan Engdahl's
2.9bsd-MSCP.
I managed to restor a root dump from his distribution and am able to
occasionally boot it on my 11/34, but it crashes very soon after booting
and I don't understand why. I think it's something to do with the fact that
he compiled it to run on an 11/23. Maybe it lacks unibus support. Maybe
something to do with clock differences. Not sure. But I was thinking that I
could make it work by recompiling the kernel with 11/34 support.
I fired 2.9MSCP up in simh emulating an 11/23 and it works fine. Just to
corroborate my hardware experience of it on the '34, I switch the cpu
emulation to 11/34 and got a mostly identical crash sequence as with my
real hardware.
So I switched the emulation back to '23, rebooted and edited the assym.s
file found in Jonathan's /usr/src/sys/RA directory. I changed
PDP11 = 23.
to
PDP11 = 34.
as well as
UNIBUS_MAP = 0
to
UNIBUS_MAP = 1
and recompiled with 'make unix,' then copied the resultant unix to /unix.
I switched simh back to emulating an 11/34 and rebooted. It crashes
randomly just as it did before the kernel recompile.
Any idea what I'm missing here? My hope was to simply move this
freshly-compiled 11/34-friendly kernel onto my real 11/34 and have a
working hardware system.
thx
jake
Ok folks,
I've uploaded what I call Unix v5a to:
http://www.maxhost.org/other/unix-v5-feb-2015.tar.gz
I use simh on Linux to emulate the PDP-11/70.
The tarball contains:
unix_v5_rk.dsk
unix_v5_rk1.dsk
unix_v5_rk2.dsk
pdp11-v5.ini
readme-v5.txt
unix-v5a.sh
The original file is uv5swre.zip if anyone wants to compare them.
Mark
> From: Clem Cole <clemc(a)ccc.com>
> Once people started to partition them, then all sort of new things
> occurred and I that's when the idea of a dedicated swap partition came
> up. I've forgotten if that was a BSDism or UNIX/TS.
Well, vanilla V6 had i) partitioned drives (that was the only way to support
an RP03), and ii) the swap device in the c.c config file. That's all you need
to put swap in its own partition. (One of the other MIT-LCS groups with V6
did that; we didn't, because it was better to swap off the RK, which did
multi-block transfers in a single I/O operation.)
> As I recall in V6 and I think V7, the process was first placed in the
> swap image before the exec (or at least space reserved for it).
As best I understand it, the way fork worked in V6 was that if there was not
enough memory for an in-core fork (in which case the entire memory of the
process was just copied), the process was 'swapped out', and the swapped out
image assumed the identity of the child.
But this is kind of confusing, because when I was bringing up V6 under the
Ersatz11 simulator, I had a problem where the swapper (process 0) was trying
to fork (with the child becoming 1 and running /etc/init), and it did the
'swap out' thing. But there was a ton of free memory at that point, so... why
was it doing a swap? Eh, something to investigate sometime...
Noel
> From: Jacob Ritorto
> I'm having trouble understanding how to get my swap configured. Since
> rl02s are so little, the MAKE file in /dev doesn't partition them into
> a, b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses
> only 8500 of its 10000 blocks, so what would presumably be intended as
> swap space does exist. Swap is usually linked to the b partition,
> right? So how do I create this b partition on an rl02?
I don't know how the later systems work, but in V6, the swap device, and the
start block / # of blocks are specified in the c.c configuration file (i.e.
they are compiled into the system). So you can take one partition, and by
specifying less than the full size to 'mkfs', you can use the end of the
partition for swap space (which is presumably what's happening with /dev/rl0
here).
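For instance (a sketch only; the exact mkfs argument syntax differs a bit
between versions, and the numbers are just the ones from your message):

    # make a file system on the first 8500 blocks of the RL02,
    # leaving blocks 8500-9999 at the end of the pack for swap
    /etc/mkfs /dev/rl0 8500

The kernel then has to be told, in its compiled-in configuration, that
swap on that drive starts at block 8500 and runs for 1500 blocks.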
Noel
Dave Horsfall:
I also wrote a paper on a "bad block" system, where something like inum
"-1" contained the list of bad sectors, but never saw it through.
====
During the file system change from V6 to V7, the i-number of
the root changed from 1 to 2, and i-node 1 became unused.
At least some versions of the documentation (I am too harried
to look it up at the moment) claimed i-node 1 was reserved
for holding bad blocks, to keep them out of the free list,
but that the whole thing was unimplemented.
I vaguely remember implementing that at some point: writing
a tool to add named sectors to i-node 1. Other tools needed
fixing too, though: dump, I think, still treated i-node 1
as an ordinary file, and tried to dump the contents of
all the bad blocks, more or less defeating the purpose.
I left all that behind when I moved to systems with MSCP disks,
having written my own driver code that implemented DEC's
intended port/class driver split, en passant learning how
to inform the disk itself of a bad block so it would hide it
from the host.
I'd write more but I need to go down to the basement and look
at one of my modern* 3TB SATA disks, which is misbehaving
even though modern disks are supposed to be so much better ...
Norman Wilson
Toronto ON
* Not packaged in brass-bound leather like we had in the old days.
You can't get the wood, you know.
what about using another minor device? Is xp0d mapped elsewhere?
Since it's a BSD, won't it try by default to read a partition
table from the first few sectors of the disk?
Norman Wilson
Toronto ON
Hi,
I'm having trouble understanding how to get my swap configured. Since
rl02s are so little, the MAKE file in /dev doesn't partition them into a,
b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses only 8500
of its 10000 blocks, so what would presumably be intended as swap space
does exist. Swap is usually linked to the b partition, right? So how do I
create this b partition on an rl02? Or am I getting this horribly wrong?
thx
jake
Hi,
I'm running 2.9BSD on a pdp11/34 with an Emulex sc21 controller to some
Fuji160 disks. Booting with root on RL02 for now, but want to eventually
have the whole system on the Fujis and disconnect the rl02s.
While the previous owner of the disks appears to have suffered a
headcrash near cylinder 0, I'm having an impressive degree of success
writing to other parts of the disk.
However, when I try to mkfs, I can see the heads trying to write on the
headcrashed part of the disk. (Nice having those plexiglass covers!)
Is there a way to tell mkfs (or perhaps some other program) to not try to
write on the damaged cylinders?
thx
jake
So, I have a chance to buy a copy of a Version 5 manual, but it will cost a
lot. I looked, and the Version 5 manual doesn't appear to be online. So while
I would normally pass at this price, it might be worth it for me
to buy it, and scan it to make it available.
But, I looked in the "FAQ on the Unix Archive and Unix on the PDP-11", and it
says:
5th Edition has its on-line manual pages missing. ... Fortunately, we do
have paper copies of all the research editions of UNIX from 1st to 7th
Edition, and these will be scanned in and OCR'd.
Several questions: First, when it says "we do have paper copies of all the
research editions of UNIX", I assume it means 'we do have paper copies of
_the manuals for_ all the research editions of UNIX', not 'we do have paper
copies of _the source code for_ all the research editions of UNIX'?
Second, if it is 'manuals', did the scan/OCR thing ever happen, or is it
likely to anytime in the moderate future (next couple of years)?
Third, would a scanned (which I guess we could OCR) version of this manual be
of much use (it would not, after all, be the NROFF source, although probably
a lot of the commands will be identical to the V6 ones, for which we do have
the NROFF)?
Advice, please? Thanks!
Noel
> From: Tom Ivar Helbekkmo
> There was no fancy I/O order juggling, so everything was written in the
> same chronological order as it was scheduled.
> ...
> What this means is that the second sync, by waiting for its own
> superblock writes, will wait until all the inode and file data flushes
> scheduled by the first one have completed.
Ah, I'm not sure this is correct. Not all disk drivers handled requests in a
'first-come, first-served' order (i.e. where a request for block X, which was
scheduled before a request for block Y, actually happened before the
operation on block Y). It all depends on the particular driver; some drivers
(e.g. the RP driver) re-organized the waiting request queue to optimize head
motion, using a so-called 'elevator algorithm'.
(PS: For a good time, do "dd if=/dev/[large_partition] of=/dev/null" on a
running system with such a disk, and a lot of users on - the system will
apparently come to a screeching halt while the 'up' pass on the disk
completes... I found this out the hard way, needless to say! :-)
Since the root block is block 1 in the partition, one might think that even
with an elevator algorithm, writing it would more or less guarantee that all
other pending operations had completed
(since it could only happen at the end of a 'down' pass); _but_ the elevator
algorithm works in terms of actual physical block numbers, so blocks in another,
lower partition might still remain to be written.
But now that I think about it a bit, if such blocks existed, that partition's
super-block would also need to be written, so when that one completed, the
disk queue would be empty.
But the point remains - because there's no guarantee of _overall_ disk
operation ordering in V6, scheduling a disk request and waiting for it to
complete does not guarantee that all previously-requested disk operations
will have completed before it does.
I really think the whole triple-sync thing is mythology. Look through the V6
documentation: although IIRC there are instructions on how to shut the
system down, the triple sync isn't mentioned. We certainly never used it at MIT (and I
still don't), and I've never seen a problem with disk corruption _when the
system was deliberately shut down_.
Noel
Yo Jacob,
I'm ex-sun but I don't know too much about Illumos. Care to give us
the summary of why I might care about it?
On Wed, Dec 31, 2014 at 01:16:00AM -0500, Jacob Ritorto wrote:
> Hey, thanks, Derrik.
> I don't mess with Linux much (kind of an Illumos junkie by trade ;), but
> I bet gcc would. I did out of curiosity do it with the Macintosh cc (Apple
> LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)) and it throws
> warnings about our not type-defining functions because you're apparently
> supposed to do this explicitly these days, but it dutifully goes on to
> assume int and compiles our test K&R stuff mostly fine. It does
> unfortunately balk pretty badly at the naked returns we initially had,
> though. Wish it didn't because it strikes me as being beautifully simple..
>
> thx again for the encouragement!
> jake
>
>
> On Wed, Dec 31, 2014 at 1:02 AM, Derrik Walker v2.0 <dwalker(a)doomd.net>
> wrote:
>
> > On Wed, 2014-12-31 at 00:44 -0500, Jacob Ritorto wrote:
> >
> > >
> > > P.S. if anyone's bored enough, you can check out what we're up to at
> > > https://github.com/srphtygr/dhb. I'm trying to get my 11yo kid to
> > > spend a little time programming rather than just playing video games
> > > when he's near a computer. He's actually getting through this stuff
> > > and is honestly interested when he understands it and sees it work --
> > > and he even spotted a bug before me this afternoon! Feel free to
> > > raise issues, pull requests, etc. if you like -- I'm putting him
> > > through the git committing and pair programming paces, so outside
> > > interaction would be kinda fun :)
> > >
> > >
> > > P.P.S. We're actually using 2.11bsd after all..
> > >
> > I'm curious, will gcc on a modern Linux system compile K&R c?
> >
> > Maybe when I get a little time, I might try to see if I can compile it
> > on a modern Fedora 21 system with gcc.
> >
> > BTW: Great job introducing him to such a classic environment. A few
> > years ago, my now 18 year old had expressed some interest in graphics
> > programming and was in awe over an SGI O2 I had at the time, so I got
> > him an Indy. He played around with a bit of programming, but
> > unfortunately, he lost interest.
> >
> > - Derrik
> >
> >
> > _______________________________________________
> > TUHS mailing list
> > TUHS(a)minnie.tuhs.org
> > https://minnie.tuhs.org/mailman/listinfo/tuhs
> >
> _______________________________________________
> TUHS mailing list
> TUHS(a)minnie.tuhs.org
> https://minnie.tuhs.org/mailman/listinfo/tuhs
--
---
Larry McVoy            lm at mcvoy.com          http://www.mcvoy.com/lm
> when you - say - run less to display a file, it switches to a dedicated
> region in the terminal memory buffer while printing its output, then
> restores the buffer to back where you were to begin with when you exit
> the pager
Sorry for veering away from Unix history, but this pushed one of the hottest
of my buttons. Less is the epitome of modern Unix decadence. Besides the
maddening behavior described above, why, when all screens have a scroll bar,
should a pager do its own scrolling? But for a quantitative measure of
decadence, try less --help | wc. It takes excess to a level not dreamed of
in Pike's classic critique, "cat -v considered harmful".
Doug
Hi all, I came across this last week:
http://svnweb.freebsd.org/
It's a Subversion VCS of all the CSRG releases. I'm not sure if it
has been mentioned here before.
Cheers, Warren
<much discussion about quadratic search removed>
All I remember (and still support to this day) is that I’ve got a TERMCAP=‘string’ in my login scripts to set termcap to the specific terminal I’m logging in with.
Long ago this made things much faster. Today I think that it is just a holdover that I’m not changing due to inertia, rather than any real need for it.
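One common way to arrange this, for anyone curious (a sketch assuming a
BSD-style tset; the terminal type is just an example):

    # tset -s emits the commands that put the full termcap entry, not a
    # file name, into $TERMCAP, so programs stop rescanning /etc/termcap
    eval `tset -s -Q vt100`
    echo "$TERMCAP"

With the entry itself in the environment, curses programs skip the
/etc/termcap search entirely, which is where the old speedup came from.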
David
—
David Barto
david(a)kdbarto.org
> Noel Chiappa
> The change is pretty minor: in this piece of code:
>
> case reboot:
> termall();
> execl(init, minus, 0);
> reset();
>
> just change the execl() line to say:
>
> execl(init, init, 0);
I patched init in v5 and now ps shows /etc/init as expected, even
after going from multi to single to multi mode.
Looks like init.c was the same in v5 and v6.
> Noel Chiappa:
> Just out of curiosity, why don't you set the SR before you boot the machine?
> That way it'll come up single user, and you can icheck/dcheck before you go
> to multi-user mode. I prefer doing it that way, there's as little as possible
> going on, in case the disk is slightly 'confused', so less chance any bit-rot
> will spread...
I actually do file system checks on v5 as it's the early unix I use the most:
check -l /dev/rk0
check -u /dev/rk0
same for rk1, rk2.
The v5 manual entry for check references the 'restor' command,
although the man page for that is missing.
Your idea of starting up in single user mode is a good one although
I'm not sure if it's necessary to check the file system on each boot
up. I've been running this disk image of v5 for about two years and no
blow-ups as yet. I also keep various snapshots of v5, v6 and v7 disk
images for safety reasons.
And there are text files of all the source code changes I've made, so
if disaster strikes I can redo it all.
Mark
> From: Clem Cole
> ps "knew" about some kernel data structures and had to compiled with
> the same sources that your kernel used if you want all the command
> field in particular to look reasonable.
Not just the command field!
The real problem was that all the parameters (e.g. NPROC) were not stored in
the kernel anywhere, so if you wanted to have one copy of the 'ps' binary
which would work on two different machines (but which were running the same
version of the kernel)... rotsa ruck.
I have hacked my V6 to include lines like:
int ninode NINODE;
int nfile NFILE;
int nproc NPROC;
etc so 'ps' can just read those variables to find the table sizes in the
running kernel. (Obviously if you modify a table format, then you're SOL.)
> From: Ronald Natalie
> The user structure of the currently running process is the only one
> that is guaranteed to be in memory ... For processes that were swapped
> you couldn't read the user structure, so things that were stored there were
> often unavailable (particularly the command name).
Well, 'ps' (even the V6 stock version) was actually prepared to poke around
on the swap device to look at the images of swapped-out processes. And the
command name didn't come from the U area (it wasn't saved there in stock V6),
'ps' actually had to look on the top of the user stack (which is why it
wasn't guaranteed to be accurate - the user process could smash that).
> From: Clem cole
> IIRC we had a table of sleep addresses so that ps could print the
> actual thing you were waiting for not just an address.
I've hacked my copy of 'ps' to grovel around in the system's symbol table,
and print 'wchan' symbolically. E.g. here's some of the output of 'ps' on
my system:
TTY F S UID PID PPID PRI NIC CPU TIM ADDR SZ TXT WCHAN COMMAND
?: SL S 0 0 0-100 0 -1 127 1676 16 runout <swapper>
?: L W 0 1 0 40 0 0 127 1774 43 0 proc+26 /etc/init
?: L W 0 11 1 90 0 0 127 2405 37 tout /etc/update
8: L W 0 12 1 10 0 0 127 2772 72 2 kl11 -
a: L W 0 13 1 40 0 0 127 3122 72 2 proc+102 -
a: L R 0 22 13 100 0 10 0 3422 138 3 ps axl
b: L W 0 14 1 10 0 0 127 2120 41 1 dz11+40 - 4
It's pretty easy to interpret this to see what each process is waiting for.
Noel
> From: Noel Chiappa
> For some reason, the code for /etc/init .. bashes the command line so
> it just has '-' in it, so it looks just like a shell.
BTW, that may be accidental, not a deliberate choice - e.g. someone copied
another line of code which exec'd a shell, and didn't change the second arg.
> I fixed my copy so it says "/etc/init", or something like that. ... I
> can upload the 'fixed' code tomorrow.
The change is pretty minor: in this piece of code:
case reboot:
termall();
execl(init, minus, 0);
reset();
just change the execl() line to say:
execl(init, init, 0);
>> I'm not sure if unix of the v6 or v5 era was designed to go from multi
>> user to single user mode and then back again.
> I seem to recall there's some issue, something like in some cases
> there's an extra shell left running attached to the console
So the bug is that in going from single-user to multi-user, by using "kill -1
1" in single-user with the switch register set for multi-user, it doesn't
kill the running single-user shell on the console. The workaround to that bug
which I use is to set the CSWR and then ^D the running shell.
In general, the code in init.c isn't quite as clean/clear as would be optimal
(which is part of why I haven't tried to fix the above bug), but _in general_
it does support going back and forth.
> From: Ronald Natalie
> our init checked the switch register to determine whether to bring up
> single or multiuser
I think that's standard from Bell, actually.
> I believe our system shut down if you did kill -1 1 (HUP to init).
The 'stock' behaviour is that when that happens, it checks the switch
register, and there are three options (the code is a little hard to follow,
but I'm pretty sure this is right):
- if it's set for single-user, it shuts down all the other children, and
brings up a console shell; when done, it does the next
- if it's set for 'reboot', it just shuts down all children, and restarts
the init process (presumably so one can switch to a new version of the init
without restarting the whole machine);
- if it's not set for either, it re-reads /etc/ttys, and for any lines which
have switched state in that file, it starts/kills the process listening to
that line (this allows one to add/drop lines dynamically).
> From: Clem Cole
> it's probably worth digging up the v6 version of fsck.
That's on that MIT V6 tape, too. Speaking of which, time to write the code to
grok the tape...
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I've finally managed to get Unix v5 and v6 to go into single user mode
> while running under simh.
> ...
> dep system sr 173030 (simh command)
Just out of curiosity, why don't you set the SR before you boot the machine?
That way it'll come up single user, and you can icheck/dcheck before you go
to multi-user mode. I prefer doing it that way, there's as little as possible
going on, in case the disk is slightly 'confused', so less chance any bit-rot
will spread...
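For example, the same deposit can go in the simh init file, so the machine
comes up single-user every time (the disk image name and CPU model below
are just placeholders):

    set cpu 11/40
    attach rk0 unix_v5_rk.dsk
    dep system sr 173030
    boot rk0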
> Now I'm in multi-user mode .. but then if I do a "ps -alx" I get:
>
> TTY F S UID PID PRI ADDR SZ WCHAN COMMAND
> ?: 3 S 0 0-100 1227 2 5676 ????
> ?: 1 W 0 1 40 1324 6 5740 -
> The ps command doesn't show the /etc/init process explicitly, although
> I'm pretty sure it is running.
No, it's there: the second line (PID 1). For some reason, the code for
/etc/init (in V6 at least, I don't know anything about V5) bashes the command
line so it just has '-' in it, so it looks just like a shell.
I fixed my copy so it says "/etc/init", or something like that. The machine
my source is on is powered down at the moment; if you want, I can upload the
'fixed' code tomorrow.
> I'm not sure if unix of the v6 or v5 era was designed to go from multi
> user to single user mode and then back again.
I seem to recall there's some issue, something like in some cases there's an
extra shell left running attached to the console, but I don't recall the
details (too lazy to look for the note I made about the bug; I can look it up
if you really want to know).
> Would it be safer to just go to single user and then shut it down?
I don't usually bother; I just log out all the shells except the one on the
console, so the machine is basically idle; then do a 'sync', and shortly
after that completes, I just halt the machine.
Noel
adding the list back
On Tue, Jan 6, 2015 at 10:42 AM, Michael Kerpan <mjkerpan(a)kerpan.com> wrote:
> This is a cool development. Does this code build into a working version of
> Coherent or is this mainly useful to study? Either way, it should be
> interesting to look at the code for a clone specifically aimed at low-end
> hardware.
>
> Mike
>
Ok, I've finally managed to get Unix v5 and v6 to go into single user
mode while running under simh.
I boot up unix as normal, that is to say in multi-user mode.
Then a ctrl-e and
dep system sr 173030 (simh command)
then c to continue execution of the operating system and finally "kill -1 1".
This gets me from multi user mode to single user mode. I can also go
back to multi user mode with:
ctrl-e and dep system sr 000000
then once again c to continue execution of the operating system and "kill -1 1".
Now I'm in multi user mode, and I can telnet in as another user so it
seems to be working but then if I do a "ps -alx" I get:
TTY F S UID PID  PRI ADDR SZ WCHAN COMMAND
 ?: 3 S   0   0 -100 1227  2  5676 ????
 ?: 1 W   0   1   40 1324  6  5740 -
 8: 1 W   0  51   40 2456 19  5766 -
 ?: 1 W   0  55   10 1377  6 42066 -
 ?: 1 W   0   5   90 1734  5  5440 /etc/update
 ?: 1 W   0  32   10 2001  6 42126 -
 ?: 1 W   0  33   10 2054  6 42166 -
 ?: 1 W   0  34   10 2127  6 42226 -
 ?: 1 W   0  35   10 2202  6 42266 -
 ?: 1 W   0  36   10 2255  6 42326 -
 ?: 1 W   0  37   10 2330  6 42366 -
 ?: 1 W   0  38   10 2403  6 42426 -
 8: 1 R   0  59  104 1472 17       ps alx
The ps command doesn't show the /etc/init process explicitly, although
I'm pretty sure it is running. I'm not sure if unix of the v6 or v5
era was designed to go from multi user to single user mode and then
back again. Would it be safer to just go to single user and then shut
it down?
Mark
Friend asked an odd question:
Were VAXen ever used to send/receive faxes large-scale? What software was
used and how was it configured?
Was any of this run on any of the UCB VAXen?
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 2015-01-06 23:56, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Tue, Jan 6, 2015 at 5:45 PM, Noel Chiappa<jnc(a)mercury.lcs.mit.edu>
> wrote:
>
>> >I have no idea why DEC didn't put it in the 60 - probably helped kill that
>> >otherwise interesting machine, with its UCS, early...
>> >
> ?"Halt and confuse ucode" had a lot to do with it IMO.
>
> FYI: The 60 set the record of going from production to "traditional
> products" faster than anything else in DEC's history. As I understand
> it, the 11/60 was expected to be a business system and run RSTS. Why the WCS
> was put in, I never understood, other than I expect the price of static RAM
> had finally dropped and DEC was buying it in huge quantities for the
> Vaxen. The argument was that they could update the ucode cheaply in the
> field (which to my knowledge they never did). But I asked that question
> many years ago of one of the HW managers, who explained to me that it was
> felt separate I/D was not needed for the targeted market and would have
> somehow increased cost. I don't understand why it would have cost any
> more but I guess it was late.
No, field upgrade of microcode can not have been it. The WCS for the
11/60 was an option. Very few machines actually had it. It was for
writing your own extra microcode as addition to the architecture.
The basic microcode for the machine was in ROM, just like on all the
other PDP-11s. And DEC sold a compiler and other programs required to
develop microcode for the 11/60. Not that I know of anyone who had them.
I've "owned" four PDP-11/60 systems in my life. I still have a set of
boards for the 11/60 CPU, but nothing else left around.
The 11/60 was, by the way, not the only PDP-11 with WCS. The 11/03 (if I
remember right) also had such an option. Obviously the microcode was not
compatible between the two machines, so you couldn't move it over from
one to the other.
Also, reportedly, someone at DEC implemented a PDP-8 on the 11/60,
making it the fastest PDP-8 ever manufactured. I probably have some
notes about it somewhere, but I'd have to do some serious searching if I
were to dig that up.
But yes, the 11/60 went from product to "traditional" extremely fast.
Split I/D space was one omission from the machine, but even more serious
was the decision to only do 18-bit addressing on it. That killed it very
fast.
Someone else mentioned the MFPI/MFPD instructions as a way of getting
around the I/D restrictions. As far as I know (or can tell), they can be
used to read/write instruction space on such a machine. I would
assume that any OS would set both current and previous mode to user when
executing in user space.
The documentation certainly claims they will work. I didn't think of
those previously, but they would allow you to read/write to instruction
space even when you have split I/D space enabled.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Yep, the only time this was ever truly useful was so you could put an
> a.out directly into the boot block I think.
Well, sort of. If you had non position-independent code, it would blow out
(it would be off by 020). Also, some bootstraps (e.g. the RL, IIRC) were so
close to 512. bytes long that the extra 020 was a problem. And it was so easy
to strip off:
dd if=a.out of=fooboot bs=1 skip=16
I'm not sure that anything actually used the fact that 407 was 'br .+020', by
the V6 era; I think it was just left over from older Unixes (where it was not
in fact stripped on loading). Not just on executables running under Unix; the
boot-loader also stripped it, so it wasn't even necessary to strip the a.out
header off /unix.
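For reference, the 020 bytes being stripped there are just eight 16-bit words.
A sketch of the layout, using the field names from the later a.out.h (the V6
code didn't declare it as a struct, so take the names as illustrative):

        /* PDP-11 a.out header: eight 16-bit words = 16 bytes = 020 octal.
           0407 is also the instruction "br .+020", i.e. a branch over this
           very header, which is why ancient systems could start execution
           at word 0 of the file. */
        struct exec {
            int a_magic;   /* 0407 normal, 0410 read-only text, 0411 split I/D */
            int a_text;    /* text segment size, bytes */
            int a_data;    /* initialized data size, bytes */
            int a_bss;     /* uninitialized data size, bytes */
            int a_syms;    /* symbol table size, bytes */
            int a_entry;   /* entry point */
            int a_unused;  /* unused */
            int a_flag;    /* 1 if relocation info has been stripped */
        };                 /* 'int' here is the PDP-11's 16-bit int */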
Noel
On 2015-01-06 20:57, Milo Velimirović<milov(a)cs.uwlax.edu> wrote:
> Bringing a conversation back online.
> On Jan 6, 2015, at 6:22 AM,arnold(a)skeeve.com wrote:
>
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind...)
> (Or even earlier than '81.) How did pdp11 UNIXes handle per-process memory? It's suggested above that there was a 50-50 split of the 64KB address space between instructions and data. My own recollection is that you got any combination of instruction and data space that was <64KB. This would also be subject to the limits of the pdp11 memory management unit.
>
> Anyone have a definitive answer or pointer to appropriate man page or source code?
You are conflating two things. :-)
A standard PDP-11 has 64KB of virtual address space. This can be divided
any way you want between data and code.
Later model PDP-11 processors had a hardware feature called split I/D
space. This meant that you could have one 64KB virtual address space for
instructions, and one 64KB virtual address space for data.
(This also means that the text you quoted was incorrect when it said you
only have 32KB: the address space was/is 32 Kwords, i.e. 64KB.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2015-01-06 22:59, random832(a)fastmail.us wrote:
> On Tue, Jan 6, 2015, at 15:20, Johnny Billquist wrote:
>> >Later model PDP-11 processors had a hardware feature called split I/D
>> >space. This meant that you could have one 64Kb virtual memory space for
>> >instructions, and one 64Kb virtual memory space for data.
> Was it possible to read/write to the instruction space, or execute the
> data space? From what I've seen, the calling convention for PDP-11 Unix
> system calls read their arguments from directly after the trap
> instruction (which would mean that the C wrappers for the system calls
> would have to write their arguments there, even if assembly programs
> could have them hardcoded.)
Nope. A process cannot read or write to instruction space, nor can it
execute from data space.
It's inherent in the MMU. All references related to the PC will be done
from I-space, while everything else will be done through D-space.
So the MMU has two sets of page registers. One set maps I-space, and
another maps D-space. Of course, you can have them overlap, in which
case you get the traditional behaviour of the older models.
The versions of Unix I am aware of push arguments on the stack. But the
kernel can of course remap memory, and so read the instruction space.
The user program itself, though, would not be able to write anything
after the trap instruction.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole <clemc(a)ccc.com>
> Depends the processor. For the 11/45 class processors, you had a 17th
> address bit, which was the I/D choice. For the 11/40 class you shared
> the instructions and data space.
To be exact, the 23, 24, 34, 35/40 and 60 all had a single shared space.
(I have no idea why DEC didn't put it in the 60 - probably helped kill that
otherwise interesting machine, with its UCS, early...). The 44, 45/50/55, 70,
73, 83/84, and 93/94 had split.
> From: random832(a)fastmail.us
> the calling convention for PDP-11 Unix system calls read their
> arguments from directly after the trap instruction (which would mean
> that the C wrappers for the system calls would have to write their
> arguments there, even if assembly programs could have them hardcoded.)
Here's the code for a typical 'wrapper' (this is V6, not sure if V7 changed
the trap stuff):
_lseek:
        jsr     r5,csv
        mov     4(r5),r0
        mov     6(r5),0f
        mov     8(r5),0f+2
        mov     10.(r5),0f+4
        sys     indir; 9f
        bec     1f
        jmp     cerror
1:
        jmp     cret
        .data
9:
        sys     lseek; 0:..; ..; ..
Note the switch to data space for storing the arguments (at the 0: label
hidden in the line of data), and the 'indirect' system call.
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Some access at the kernel level can be done with MFPI and MPFD
> instructions.
Unless you hacked your hardware, in which case it was possible from user mode
too... :-)
I remember how freaked out we were when we tried to use MFPI to read
instruction space, and it didn't work, whereupon we consulted the 11/45
prints, only to discover that DEC had deliberately made it not work!
> From: Ronald Natalie <ron(a)ronnatalie.com>
> After the changes to the FS, you'd get missing blocks and a few 0-0
> inodes (or ones where the link count was higher than the actual number of
> links). These, while wasteful, were not going to cause problems.
It might be worth pointing out that due to the way pipes work, if a system
crashed with pipes open, even (especially!) with the disk perfectly sync'd,
you'll be left with 0-0 inodes. Although as you point out, those were merely
crud, not potential sources of file-rot.
Noel
Apparently the message I was replying to was off-list, but it seems like
a waste to have typed all this out (including answering my own question)
and have it not go to the list.
On Tue, Jan 6, 2015, at 17:35, random832(a)fastmail.us wrote:
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/factor.s
> wrchar:
> mov r0,ch
> mov $1,r0
> sys write; ch; 1
> rts r5
>
> Though it looks like the C wrappers use the "indirect" system call which
> reads a "fake" trap instruction from the data segment. Looking at the
> implementation of that, my question is answered:
>
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/sys/trap.c
>     if (callp == sysent) {  /* indirect */
>         a = (int *)fuiword((caddr_t)(a));
>         pc++;
>         i = fuword((caddr_t)a);
>         a++;
>         if ((i & ~077) != SYS)
>             i = 077;    /* illegal */
>         callp = &sysent[i&077];
>         fetch = fuword;
>     } else {
>         pc += callp->sy_narg - callp->sy_nrarg;
>         fetch = fuiword;
>     }
>
> http://minnie.tuhs.org/TUHS/Archive/PDP-11/Trees/V7/usr/man/man2/indir.2
> The main purpose of indir is to allow a program to
> store arguments in system calls and execute them
> out of line in the data segment.
> This preserves the purity of the text segment.
>
> Note also the difference between V2 and V5 libc - clearly support for
> split I/D machines was added some time in this interval.
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V2/lib/write.s
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s4/write.s
Quoting Dan Cross <crossd(a)gmail.com>:
> On Tue, Jan 6, 2015 at 12:33 PM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>
>> On 2015-01-06 17:56, Dan Cross wrote:
>>>
>>> I believe that Mary Ann is referring to repeatedly looking up
>>> (presumably different) elements in the entry. Assuming that e.g. `vi`
>>> looks up O(n) elements, where $n$ is the number of elements, doing a
>>> linear scan for each, you'd end up with quadratic behavior.
>>>
>>
>> Assuming that you'd look up all the elements of the termcap entry at
>> startup, and did each one from scratch, yes.
>
>
> Yes. Isn't that exactly what Mary Ann said was happening? :-)
Yes
> But that would beg the question, why is vi doing a repeated scan of the
>> terminal entry at startup, if not to find all the capabilities and store
>> this somewhere? And if doing a look for all of them, why not scan the
>> string from start to finish and store the information as it is found? At
>> which point we move from quadratic to linear time.
>>
>
> I don't think she said it did things intelligently, just that that's how it
> did things. :-)
>
> But now we're getting into the innards of vi, which I never looked at
> anyway, and I guess is less relevant in this thread anyway.
vi does indeed look up all the various capabilities it will need,
once, when it starts up. It uses the documented interface, which
is tgetent followed by tgetstr/tgetnum/tgetflag for each capability.
tgetent did a linear search.
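For anyone who hasn't seen that interface, the startup sequence looks roughly
like this. It's a sketch, not vi's actual code; it assumes a termcap library
and header are available, and the capability names are just common examples:

        #include <stdio.h>
        #include <stdlib.h>
        #include <termcap.h>    /* on really old systems you declare tgetent() et al. yourself */

        int main(void)
        {
            char entry[1024];               /* 1KB was the documented entry-size limit */
            char strings[1024], *area = strings;
            char *term = getenv("TERM");

            if (term == 0 || tgetent(entry, term) != 1)
                return 1;                   /* no terminal type, or no database entry */

            /* Each lookup re-scans the entry just loaded - the per-capability
               linear cost that, repeated for every capability, adds up. */
            char *cl = tgetstr("cl", &area);    /* clear screen */
            char *cm = tgetstr("cm", &area);    /* cursor motion */
            int cols  = tgetnum("co");          /* columns */
            int wraps = tgetflag("am");         /* automatic margins? */

            printf("co=%d am=%d cl?%d cm?%d\n", cols, wraps, cl != 0, cm != 0);
            return 0;
        }

Linked against -ltermcap (or the ncurses equivalent) this is cheap today; the
point being made above is that in 1982, on a loaded machine, the sum of all
those linear scans over a long entry was not.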
>> The short of it (from what I got out of it) is that the move from termcap
>> to terminfo was mostly motivated by attribute name changing away from fixed
>> 2 character names.
>>
>> A secondary motivation would be performance, but I don't really buy that
>> one. Since we only moved to terminfo on systems with plenty of memory,
>> performance of termcap could easily be on par anyway.
>>
>
> I tend to agree with you and I'll go one further: it seems that frequently
> we tend to identify a problem and then go to 11 with the "solution." I can
> buy that termcap performance was an issue; I don't know that going directly
> to hashed terminfo files was the optimal solution. A dbm file of termcap
> data and a hash table in whatever library parsed termcap would go a long
> way towards fixing the performance issues. Did termcap have to be
> discarded just to add longer names? I kind of tend to doubt it, but I
> wasn't there and don't know what the design criteria were, so my
> very-much-after-the-fact second guessing is just that.
It's been 30+ years, so the memory is a little fuzzy. But as I recall,
I did measure the performance and that's how I discovered that the
quadratic algorithm was causing a big performance hit on the hardware
available at the time (I think I was on a VAX 11/750, this would have
been about 1982.)
I was making several improvements at the same time. The biggest one
was rewriting curses to improve the screen update optimization, so it
would use insert/delete line/character on terminals supporting it.
Cleaning up the mess termcap had become (the format had become horrible
to update, and I was spending a lot of time making updates with all
the new terminals coming out) and improving startup time (curses also
had to read in a lot of attributes) were part of an overall cleanup.
IIRC, there may also have been some restrictions on parameters to string
capabilities that needed to be generalized.
Hashing could have been done differently, using DBM or some other method.
In fact, I'd used DBM to hash /etc/aliases in delivermail years earlier
(I have an amusing story about the world's slowest email I'll tell some
other time) but at the time, it seemed best to break with termcap
and go with a cleaner format.
On 2015-01-01 17:11, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
> The move was from termcap to terminfo. Termlib was the library for termcap.
Doh! Thanks for the correction. Finger fart.
> There were two problems with termcap. One was that the two-character
> name space was running out of room, and the codes were becoming less and
> less mnemonic.
Ah. Yes, that could definitely be a problem.
> But the big motivator was performance. Reading in a termcap entry from
> /etc/termcap was terribly slow. First you had to scan all the way
> through the (ever-growing) termcap file to find the correct entry. Once
> you had it, every tgetstr (etc) had to scan from the beginning of the
> entry, so starting up a program like vi involved quadratic performance
> (time grew as the square of the length of the termcap entry.) The VAX
> 11/780 was a 1 MIPS processor (about the same as a 1 MHz Pentium) and
> was shared among dozens of timesharing users, and some of the other
> machines of the day (750 and 730 Vaxen, PDP-11, etc.) were even slower.
> It took forever to start up vi or more or any of the termcap-based
> programs people were using a lot.
Hum. That seems like it would be more of an implementation issue. Why
wouldn't you just cache all the entries for the terminal once and for
all? terminfo never came to 16-bit systems anyway, so we're talking
about systems with plenty of memory. Caching the terminal information
would not be a big memory cost.
Thanks for the insight.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Bob Swartz, founder of Mark Williams Co, has allowed the sources for
COHERENT to be published under a three-clause BSD license. Steve Ness is
hosting them. They are available here:
http://nesssoftware.com/home/mwc/source.php
For reference, for folks who don't know what COHERENT is, it started as a
clone of 7th Edition, but grew more modern features over time. Dennis
Ritchie's recollections of his interaction with it:
https://groups.google.com/forum/#!msg/alt.folklore.computers/_ZaYeY46eb4/5B…
And of course the requisite Wikipedia link:
http://en.wikipedia.org/wiki/Coherent_(operating_system)
- Dan C.
PS: I hold a soft spot for COHERENT in my heart. I became interested in
Unix in high school, but this was before Linux was really a thing and
access to other systems was still hard to come by. I spotted an ad for
COHERENT in the back of one of the PC-oriented publications at the time,
"Computer Shopper" or some such, and realized that it was *almost* within
my reach financially and that I could install it on the computer I already
owned. Over the next month or so, I scraped up enough money to buy a copy,
did so, and put it on my PC. It was quirky compared to actual Unix
distributions, but it was enough to give one a flavor for things. The
manual, in particular, did not follow the traditional Unix format, but
rather was an alphabetical "lexicon" of commands, system calls and
functions and was (I've always thought) particularly well done. Links to
the COHERENT lexicon and various other documents:
http://www.nesssoftware.com/home/mwc/.
I graduated onto other systems rather quickly, but COHERENT served as my
introduction to Unix and Unix-like systems.
PPS: Bob Swartz is the father of the late Aaron Swartz.
On 2015-01-06 17:32, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
> On 01/06/2015 04:22 AM,arnold(a)skeeve.com wrote:
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind...)
>> >
>> >On a Vax with 2 Meg of memory, 512 bytes is a whole page, and it might
>> >even be paged out, and BSD on the vax didn't have copy-on-write.
>> >
>> >ISTR that the /etc/termcap file had a comment saying something like
>> >"you should move the entries needed at your site to the top of this file."
>> >Or am I imagining it?:-)
>> >
>> >In short - today, sure, no problem - back then, carrying around a large
>> >environment made more of a difference.
>> >
>> >Thanks,
>> >
>> >Arnold
> Even with TERMCAP in the environment, there's still that quadratic
> algorithm every time vi starts up.
I must be stupid or something. What quadratic algorithm?
vi gets the "correct" terminal database entry directly from the
environment. Admittedly, getting any variable out of the environment
means a linear search of the environment, but that's about it.
What am I missing? And once you have that, any operation still means
searching through the terminal definition for the right capability,
which in itself is also linear, unless you hash it up in your program.
But I fail to see where the quadratic behavior comes in.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
A very nice addition to the archives. Thank you.
I well remember our disbelief that Mark Williams wrote all its own
code, unlike other vendors who had professed the same. As Dennis
described, we had fun putting together black-box tests that
recognized undocumented quirks (or bugs) in our code. We were
duly impressed when the results came back negative.
Doug
A prosperous New Years to all us old UNIX farts.
Years ago the USENIX conference was in Atlanta. It was a stark contrast between us and the Southern Baptists who were in town for their conference as well (punctuated by some goofball Baptist standing up in the middle of one of the restaurants to sing God Bless America or some such).
Anyhow, right before the conference someone (I think it was Dennis) made some comment about nobody ever having asked him for a cast of his genitals. A couple of friends decided we needed to issue genital casting kits to certain of the UNIX notables. I went out to an art supply store and bought plaster, paper cups, popsicle sticks to mix with, etc… Gould computers let me use one of their booth machines and a printer to print out the instructions. I purloined some bags from the hotel. It was pointed out that you need vaseline in order for the plaster to not stick to the skin. Great, I head into the hotel gift shop and grab ten tiny jars of vaseline. As I plop these on the counter at the cashier, she looks at me for a minute and then announces…
I guess y’all aren’t with the baptists.
People took it pretty tongue in cheek when they were presented. All except Redman who flew off the handle.