All, a status update on the PDP-7 Unix restoration project at
https://github.com/DoctorWkt/pdp7-unix
The system is pretty much complete now. We have as much of the original
code working as we can. We have rewritten things like the shell and some
other utilities (ls etc.). The ed editor and the native assembler both
work. We have also written a user-mode PDP-7 simulator to test things,
and an assembler to make building things faster.
The system boots up under SimH with a filesystem and you can see what things
were like back in 1970.
One big missing utility is roff. As of today, I've written a compiler that
inputs a vaguely C-like language and outputs PDP-7 code. Using this, I've
compiled a minimalist roff which is enough to format man pages. This is
a separate project here: https://github.com/DoctorWkt/h-compiler
Now we are hoping to get the Living Computer Museum people to bring it up
on their real PDP-7. Unfortunately, it doesn't have a disk drive. The
expected solution is to build a disk simulator with an FPGA and SD card.
There is no time frame for this, but it is in the works.
Thanks go to Phil Budne and Robert Swierczek for all their hard work
in building and testing things, and also to Norman Wilson for supplying
scans of the original documents.
Cheers, Warren
Hello everyone!
I have been lurking on this list for a long time; this is my first post
to it.
I read, with a lot of interest, an old USENIX paper by the late Richard
Stevens on a system called "Portals":
<https://www.usenix.org/legacy/publications/library/proceedings/neworl/steve…>
It explores a lot of ideas that later found their way into Plan 9, like a
filesystem interface for sockets, etc. I am wondering if any of this survived
in an existing, so-called "modern" Unix. I have always felt the need for
something like this in Unix.
Cheers
--
Ramakrishnan
On 2016-04-02 04:00, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Saturday, 2 April 2016 at 1:06:58 +1100, Dave Horsfall wrote:
>> On Mon, 28 Mar 2016, scj(a)yaccman.com wrote:
>>
>>> ... and I once heard an old-timer growl at a young programmer "I've
>>> written boot loaders that were shorter than your variable names!"
>>
>> Ah, the 512-byte boot blocks... We got pretty inventive in those days
>> (and this was before secondary loaders!) with line editing etc.
>
> I was thinking more of the RIM loader on the PDP-8. 16 words or 24
> bytes.
Bah! The RK8E bootloader for OS/8: 2 words... :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Hi TUHSers,
For a long time now, I have had a theory that I've never seen
substantiated (or disproved) in print. After Steve Johnson's recollection
of how hard it was to type on the Teletype terminals, I'm going to throw
this thought out for consideration.
One of Unix's signature hallmarks is its terseness: short command names
like mv, ln, cp, cc, ed; short options (a dash and a single letter),
programs with just a few, if any, options at all, and short path names:
"usr" instead of "user", "src" instead of "source" and so on.
I have long theorized that the reason for the short names is that since
typing was so physically demanding, it was natural to make the command
names (and all the rest) be short and easier to type. I don't know if
this was a conscious decision, but I suspect it more likely to have been
an unconscious / natural one.
Today, I started wondering if this wasn't at least part of the reason
for commands being simple, with few if any options. After all, if I
have to type 'man foo' to remember how foo works, I don't want to wait
for 85 pages of printout (at 110 baud, a mere 10 characters per second!) to finally see
what option -z does after wading through the descriptions of options -a
through -y.
I certainly think there's some truth to this idea; longer command
names and especially GNU style long options didn't appear until the
video terminal era when terminals were faster (9600 or 19200 baud!) and
much less physically demanding to use. How MUCH correlation is there,
I don't claim to know, but I think there's definitely some.
For the record, I did use the paper teletypes some, mainly at a university
where I took summer classes, connected to a Univac system. I remember
how hard it was to use them. You could almost set your watch by when
it would crash around noon time, as the load went up. :-) On Unix I
only used VDTs, except when I was at a console DECwriter.
Anyway, that's my thought. :-) Comments and/or insights, especially from
those who were there, would be welcome.
Thanks,
Arnold
The Unix History repository on GitHub [1] aims to provide the evolution
of Unix from the 1970s until today under Git revision control. Through
a few changes recently made [2] it's now possible for individual
contributors to have their GitHub profile linked to their early Unix
contributions. Ken Thompson graciously made this move last week
following a personal email invitation. I think it would be really cool
if more followed. This would send a powerful message of continuity and
tradition in computing to youngsters joining GitHub today.
What you need to do is the following.
- Create a GitHub profile (if you haven't already got one)
- Click on https://github.com/settings/emails
- Add the email address(es) associated with your early Unix commits
(e.g. foo(a)research.uucp or bar(a)ucbvax.berkeley.edu). You can easily find
an author's commits and email addresses recorded in the repository
through the web search form http://www.spinellis.gr/cgi-bin/namegrep.pl
- GitHub will tell you that a verification email has been sent to your
(probably defunct) email address. Don't worry. Your account will be
linked to the address even without the verification step.
- Adding your photograph to your profile will increase the vividness of
GitHub's revision listings.
If you're in contact with Unix contributors who are not on this list,
please forward them this message. Also, if your name isn't properly
associated with the repository's commits, drop me an email message (or a
GitHub pull request for the corresponding file [3]), and I'll add it.
[1] https://github.com/dspinellis/unix-history-repo
[2] The modifications involved the change of UUCP addresses to use the
.uucp pseudo-domain rather than a ! path and the listing of co-authors
within the commit message.
[3]
https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
Diomidis - http://www.spinellis.gr
Just a friendly word from the guy who runs the TUHS list.
Historical details, with verifiable facts: OK.
Questions and replies about old systems: OK.
Semi-off-topic threads: mostly OK, they usually peter out.
Comments about systems (good or bad): fine.
Comments about individuals and their motivations/actions
(especially if the comments are pejorative): not good at all.
If you think a thread is going to devolve into a slanging match
between people, then a) don't fuel the flames by posting replies,
b) walk away and calm down, c) let me know.
We've had a few threads recently which are coming close to the
edge, and I hate acting as a censor/wet blanket, so please
avoid saying things that will raise other people's hackles.
Back to your regularly scheduled nostalgia....
Warren
Marc Rochkind:
BSD is the new kid on the block. I don't think it came along until 1977 or
so. Research UNIX I don't think picked up SCCS ever. SCCS first appeared in
the PWB releases, if you don't count the earlier version in SNOBOL4 for the
IBM mainframes.
=====
Correct. We never needed no stinkin' revision control in Research.
More fairly, early systems like SCCS were so cumbersome that a
community that was fairly small, in which everyone talked to
everyone, and in which there was no glaring need, wasn't willing
to adopt them.
I remember trying SCCS for a few small personal projects back in
1979 or so (well before I moved to New Jersey), finding it just
too clunky for the benefits it gave me, and giving up. Much later,
I found RCS just as messy. One thing that really bugged me was
those systems' inherent belief that you rarely want to keep a
checked-out copy of something except while you're working on it.
Another, harder to work around, is that in any nontrivial project
there are often stages when I want to make changes of scope broader
than a single file: factor common stuff out into a new file, merge
things into a single file, rename files, etc.
CVS was a big step forward, but not enough. Subversion was the
first revision-control system that didn't feel like a huge burden to me.
None of which is to say that SCCS and RCS were useless; they were
important pioneers, and for the big projects that originally
spawned them I'm sure they were indispensable. But I can't imagine
Ken or Dennis putting up with them for very long, and I'm glad I
never had to.
Norman Wilson
Toronto ON
> These are USED cards. That's OK. No duty!
Quite the opposite happened to me in Britain. I wanted to
import an early computer-generated film to show. When I
inquired whether there would be any customs implications,
I was asked whether the film was exposed or not. Britain
charged duty only on exposed film.
With apologies for straying ever farther from Unix,
Doug
> From: Dave Horsfall
> That makes sense, and someone forgot to document it...
Or perhaps it was added precisely to get rid of the window, and then someone
discovered that it could be used to freeze the system, so they decided they'd
better not document it?
If the system had MOS memory, and you had to power cycle the machine to get it
out of this state, there wouldn't be any evidence left of who did the deed
(unless the system was writing extensive audit trailing to disk), so it would
be a great 'system assassin' (aka vandal) tool.
Noel
PS: I guess this is more PDP-11ish than UNIXish - apologies for the off-topic!
On 21 March 2016 at 17:43, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Tuesday, 22 March 2016 at 1:11:07 +1100, Dave Horsfall wrote:
>>
>> Walking down the corridors of Comp Sci, a student in front of me
>> dropped his entire deck of approx 2000 cards, all over the floor...
>> I have no idea whether he got them sorted, but I sure as hell used
>> rubber bands after that!
>
> But that's what the sequence numbers in columns 73 to 80 are for!
I did that religiously, even with my small PL/C runs -- PL/C runs were
free. One day, they decided to extend the code area to the entire
card.... and so I learned another feature of the card punch.
N.
Thanks for some additional information.
On 2016-03-28 18:16, Milo Velimirović wrote:
>
>> On Mar 28, 2016, at 9:44 AM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>
>> On 2016-03-28 16:18, Noel Chiappa wrote:
>>> > From: Dave Horsfall <dave(a)horsfall.org>
>>>
>
> [ Wait & RK discussion snipped.]
>
>>
>>
>>> > I know that Kevin Dawson (I think) tried it on my /40 as well
>>>
>>> The 11/40 does not have the SPL instruction; see the '75-'76 PDP-11 Processor
>>> Handbook, pg. 4-5. (Again, sorry, just want to be accurate.)
>>
>> This is also a pretty important point. But one which also raises the question of how the splxxx() functions in Unix worked back then. Or did Unix not use this pattern and these functions back when the 11/40 was relevant?
>
> These functions existed in V6 and can be found in the file, m40.s, that was assembled with the rest of the kernel to generate a unix that would run on a /40 class machine.
Aha. Great. Thanks. Yes, BIS and BIC on the PSW obviously work, but
they would definitely not block interrupts for the next instruction. So
at least in that case, a WAIT could result in the kernel sitting around
waiting for the next interrupt. I don't really think DEC intended WAIT to
be used in the way Unix uses it, and it doesn't really have the properties
that would be ideal for Unix. This is also somewhat indicated by the fact
that DEC did not use WAIT this way themselves.
Johnny
On 2016-03-27 23:49, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
>
> > From: Johnny Billquist
>
> > It would also be interesting if anyone can come up with a good reason
> > why SPL should work that way.
>
> So that when doing:
>
> SPL 0
> WAIT
>
> you don't lose by having the interrupt happen between the SPL and the WAIT?
Hmm. A good point. If you depend on WAIT waking you up at an interrupt,
then you need SPL to block here. But this also means that you must be at
SPL 7 before any of this, otherwise you are still exposed to this
problem (nothing says that the interrupt won't happen before the SPL as
well).
In general, I would say that this is not the way I would write code, but
checking RSX and 2.11BSD, I can tell that RSX does not use this pattern,
doing a WAIT without any SPL, while 2.11BSD does an SPL 0 followed by
WAIT. And the routine in 2.11BSD is also called at SPL 7.
So obviously, both ways have been done, and 2.11BSD will potentially work
with a slight degradation if the SPL does not block interrupts.
It will still work fine, as you will, at a minimum, get an interrupt at
the next clock tick, which will wake it up. But it might possibly be
sitting in a WAIT slightly longer than required.
RSX in fact just loops after the WAIT. If an interrupt should cause the
system to be able to do something more productive, it will not return to
the idle loop. So yes, it also detects, in the interrupt exit processing,
that it was/is in the idle loop.
I still haven't had time to investigate properly. But at least the processor
and chip manuals do not say that SPL will block interrupts. That is
no guarantee that it doesn't in reality, though.
Johnny
> From: Dave Horsfall <dave(a)horsfall.org>
> SPL 7 was only used by the clock interrupt
Err, according to the 1975 Peripherals Handbook, both are BR6. (Sorry, only
interested in accuracy.)
> even the published Unibus spec was known to be wrong, in order to keep
> 3rd-party kit out of it (it was something subtle to do with buss timing,
> so sometimes the card worked, and sometimes it didn't, doing wonders for
> your reputation).
I don't know about that, but we built two UNIBUS DMA networking devices,
relying on the UNIBUS description in the 1975 Peripherals Handbook, and they
both worked fine (one became a product for Proteon).
> Slightly longer? I think it was Lions himself who used to teach us that
> a lost interrupt is nasty :-(
The interrupt isn't lost, it's just that the OS does a WAIT when it should
perhaps return and start up some user process - but that resumption of doing
user computations is delayed by at most 1 clock tick (some other device may
interrupt during the WAIT, before the clock does).
> Anyone here remember overlapped seeks on the RK-11 failing under Unix
I'd be interested in the details of this. The V6 RK driver didn't use them,
but the RK11-D does claim to support them (having spent a modest amount of
time looking at the drawings), so I'd very much like to know what the bug was.
> I know that Kevin Dawson (I think) tried it on my /40 as well
The 11/40 does not have the SPL instruction; see the '75-'76 PDP-11 Processor
Handbook, pg. 4-5. (Again, sorry, just want to be accurate.)
> Christ, but this is starting to sound like some religion or other.
I am only interested in correct data.
Noel
> From: Johnny Billquist
> this also means that you must be at SPL 7 before any of this
Yes, I assumed that (since it wouldn't make sense otherwise :-).
> In general, I would say that this is not the way I would write code, but
> ... 2.11BSD do an SPL 0 followed by WAIT.
Right; even if one does something like have every interrupt set a flag (which
is cleared while interrupts are disabled), and check that after lowering the
priority, but before doing the WAIT, there's _still_ a window between that
check, and the WAIT (although I guess it's less likely to be hit, since the
interrupt request would have to be posted _in that window_, not be hanging
around waiting to be serviced).
The only way (that I can work out) to atomically lower the priority and wait
is to do an RTI with the PC on the stack pointing to the WAIT instruction, but
I'm not sure even that is guaranteed to work.
I guess it all depends on the CPU implementation: does it check for pending
interrupts before each instruction, or only at the end of each instruction, or
what? If before, and there's an interrupt pending, it will go off before the
WAIT is executed. Although I suppose if it's at the end, it would do the check
at the end of the RTI, and do the interrupt then.
And whether it's at the end or the beginning, WAIT itself must be
special-cased, to check for pending interrupts during its execution
(which can take an indeterminate amount of time).
> 2.11BSD will work potentially with a slight degration if the SPL do not
> block interrupts. It will still work fine, as you will, at a minimum,
> get an interrupt at the next clock tick, which will wake it up. But it
> might possibly be sitting in a WAIT slightly longer than required.
Yes, exactly.
> RSX in fact just loops after the WAIT. If an interrupt should cause the
> system to be able to do something more productive, it will not return to
> the idle loop. So yes, it also detects in the interrupt exit processing,
> that it was/is in the idle loop.
Does it detect if it was _before_ the WAIT instruction? I would assume it does,
but I don't know anything about RSX.
> But at least processor and chip manuals do not say that SPL will block
> interrupts.
Yes, I looked too, in a variety of places (PDP-11 Architecture, including in
the 'model differences' table, 11/73 Tech Manual, etc). Crickets...
Noel
> From: Warren Toomey
> I thought it would be nice to get a feel for what it was like to use a
> real tty
Make sure it only prints 10 characters per second, then. (I think TTY's were
10 cps?) R-e-a-l-l-y s-l-o-w.
Noel
On 2016-03-27 08:18, Greg 'groggy' Lehey<grog(a)lemis.com> wrote:
> Isn't it wonderful that we no longer have issues with character
> representation?
I hope that comment was meant as a joke, ironic, cynical, or whatever...
Johnny
> From: Johnny Billquist
> It would also be interesting if anyone can come up with a good reason
> why SPL should work that way.
So that when doing:
SPL 0
WAIT
you don't lose by having the interrupt happen between the SPL and the WAIT?
Noel
On 2016-03-27 03:50, Dave Horsfall<dave(a)horsfall.org> wrote:
>
> On Fri, 25 Mar 2016, Johnny Billquist wrote:
>
>>> Some instructions inhibit the "check for interrupts at the end of this
>>> instruction" check. I'm most familiar with the 8080 EI instruction,
>>> which enabled interrupts after the following instruction (so things
>>> like EI;HLT didn't have a window). It seems the PDP-11 SPL behaves
>>> the same.
>>
>> I don't think it should on the PDP-11, and the documentation does not
>> mention any such thing.
> It most certainly did, at least on the 11/70 that I used... Do you have
> experience otherwise?
I do not have any experience either way. I have never checked this. I'm
just saying that it doesn't make sense in my head, and the processor
handbook does not describe such a property of SPL. But now that I know,
I'm going to try and find out.
It might be correct. I'm just surprised if so, since there is no
technical need for SPL to act that way. And having SPL behave
differently from all other instructions means extra work for the people
who wrote the microcode.
It would also be interesting if anyone can come up with a good reason
why SPL should work that way.
Johnny
As attached, thanks to someone over on the RTTY list; not sure if it's
exactly what he wanted.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
---------- Forwarded message ----------
Date: Sat, 26 Mar 2016 18:52:59 -0700
From: tony.podrasky
To: Dave Horsfall <dave(a)horsfall.org>
Subject: Re: [GreenKeys] Teletype simulator? (fwd)
Hi Dave;
Attached is my TTY test program.
It is fairly simple. The only thing that might be
tricky is the type of UAR/T you are using.
Let me know if it compiles.
regards,
tony
On 03/26/2016 06:33 PM, Dave Horsfall wrote:
> On Fri, 25 Mar 2016, tony.podrasky wrote:
>
> > What do you mean, "a non-Windoze" TTY simulator?
>
> One that's in source form, not binary...
>
> > One of the programs I have takes e-mail and prints it on a 5-level ITA#2
> > machine.
> >
> > It is written in "C", and at such a low-level that it should compile on
> > ANY thing that runs "C".
>
> Got a copy you can send me to pass on?
>
--
"I read somewhere that 77 percent of all the
mentally ill live in poverty. Actually, I'm more
intrigued by the 23 per cent who are apparently
doing quite well for themselves." -Jerry Garcia
On 2016-03-24 03:00, "Ron Natalie"<ron(a)ronnatalie.com> wrote:
>
>> Closest I've ever been murdered was when I "accidentally" filled the local
>> 11/70 with an uninterruptible instruction sequence.
> SPL instruction. The PDP-11 was odd that while SPL was a "privileged"
> instruction, rather that trapping if you did it in user mode, it just
> "ignored" it.
> Well, what it ignored was the actual change of the processor level. What
> it still implemented was the side effect was that interrupts were locked out
> until the next instruction fetch.
> If you filled your instruction space up with SPLs you could lock up the
> computer so that even the HALT key didn't work (you had to do a bus RESET).
Ok. Color me stupid, but I don't get it. I totally do not understand how
this locks anything out.
It is the normal behavior of any instruction that interrupts are not
recognized until the next instruction fetch. This is how the microcode
works, and it is also pretty much the same in any processor today.
Except for instructions that take a long time, and which can be
interrupted in the middle, the context preserved, and the instruction
restarted and continued, instructions are normally atomic. You cannot
get interrupts in the middle of an instruction.
Second, I cannot understand how filling memory with SPL instructions
(or any other instruction) can lock up the CPU. As noted, they are
individual instructions. You still get a fetch between each instruction,
at which point interrupts will be recognized.
Now, if you instead talked about actually raising the CPU to SPL 7, then
I agree that no interrupts will happen. But that is because you
essentially disabled interrupts.
The front panel still works, though. It is not handled like an interrupt,
but it is true that it does interact with the processor state;
normally, if you pull HALT, the CPU will only halt when it's about to fetch
the next instruction. You can, of course, also set the front panel
switch for single microcode instruction, at which point the CPU will
halt at the next microcode instruction instead, and you can single-step
the microcode as well.
The one CPU I know you can lock up this way is the KA10 (PDP-10). I'm sure there
are others, but I have never seen how this could be done on a PDP-11, so
I'm most curious about this, and if you can provide more details I would
be most interested. As I also happen to know where a PDP-11/70 is
standing, I intend to test this out the next time I get close to it.
As for the KA10 (I think it was the KA10, but it might have been the
PDP-6), the problem is related to the indirect addressing feature. Since
memory is 36 bits, but addresses only 18, you have plenty of bits to
play with. And one of them is the indirect bit. If you refer to a
memory location that also has the indirect bit set, you get another
memory access to fetch the actual content. The fun thing happens if you
set the indirect bit and give your own address. This is then an
infinite memory reference, and the KA10 cannot be broken out of that
lookup. The only solution is to pull the power plug.
The CPU is essentially stuck in one state, tightly reading memory
over and over. Later PDP-10 models have an explicit check in the
microcode in this loop, to be able to break out of it.
Sorry for the off-topic content. :-)
Johnny
On 2016-03-26 20:43, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Fri, Mar 25, 2016 at 11:09 PM, Charles Anthony <
> charles.unix.pro(a)gmail.com> wrote:
>
>> And DEC's RADIX-50, packing 3 characters into 16 bits. (IIRC the origin of
>> the 6.3 filenames, but I can't document that.)
>
> Sort of.... before ASCII, DEC used a few other 5 bit codes that were
> around such as baudot (look at the PDP-1/4 etc and KSR 28). RAD50 was a
> natural scheme for storing file name and using bits efficiently.
>
> Which, of course, lead to the abomination of case folding - it's not a bug,
> it's a feature 😂
>
> RAD50 gave us the x.y file name form with the implied dot et al. 6.3 and
> later 8.3 were natural directions from that coding. Using the .3 ext as a
> type tag of course followed that naturally given that's all that was stored
> in the disk "catalog." [And CP/M and PC/MS-DOS inherit that scheme -
> including the case folding silliness even though by that time all keyboard
> were upper and lower case and they stored the files in 8 bits].
Some other people already mentioned this, but... - SIXBIT. DEC might
have used Baudot in the very early machines, but I would say that SIXBIT
dominated here for a long time. We see it in the PDP-8, but also in
the PDP-6 and its follow-ons. RAD50 was the natural extension of SIXBIT
on a machine whose word size was not a multiple of 6.
The x.y filename, as well as the 6+3 pattern predate the PDP-11. I would
say that in this area, the PDP-11 didn't come with anything new, but
just made life more complicated.
OS/8 for sure only has 6+2 filenames, but still in the x.y form.
TOPS-10 has, I think, 6+3. And the Monitor (I think that was the name
of the PDP-6 OS) was, I think, also 6+3.
And it was all SIXBIT.
And SIXBIT also gives you the case folding.
I say the PDP-11 complicated life just because DEC was already so much
into having filenames stored more compactly than normal text, and having a
6+3 pattern, so they came up with RAD50, which fits the bill, but it's
more headache than it was worth, if you ask me.
Since the PDP-11 has 8-bit bytes, it would have made much more sense to
just store filenames as 8-bit bytes. It would have cost some more
storage, but not that much. But it took time for DEC to realize that the
space savings here were not really a good tradeoff. Old habits die hard,
I guess.
By the way, RSX (and early VMS) actually use 9+3 filenames.
> UNIX of course, would put the "type" in the file itself (magic #) and force
> the storing of the dot, but removed the strict mapping of name and type.
> Having grown up in both systems, I see the value of each; but agree I think
> I find UNIX's scheme better and lot more flexible.
I think I agree on the point of having filenames in a free format. I'm not
sure I really like storing the type in the file itself, so I'm sort of
torn. Or rather, I would like to keep the type separate from both.
Johnny
Hey All,
I just saw, in another thread, the statement that there are no odd requests here.
So I thought I would test that.
The day NeXT took over Apple, I read a page somewhere on the internet that started with the line:
"Bow down to UNIX, children of Macintosh..." It then went on, in its Old Testament conquering tone, about the new way of computing that was coming.
Does anybody have any idea who wrote this or where to find it?
Thanks,
Ben
Hello everyone,
I am Rocky and this is my first message. Before starting, I would like to thank you all for the valuable information and stories you post here.
Regarding the history of Unix, another fellow and I were wondering why the rc script has that name. As many of you already know, and according to the NetBSD, FreeBSD, and OpenBSD (current) manuals,
"The rc utility is the command script which controls" the startup of various services, "and is invoked by init(8)" (from DESCRIPTION).
"The rc command appeared in 4.0BSD" (from HISTORY).
The wording varies slightly between the three systems, but the meaning and the information provided are the same. So, the etymology of rc does not appear in the man pages. Do you know how to recover it? Do (or did) the letters rc have some meaning in this context?
Cheers,
Rocky
On 2016-03-25 00:27, Milo Velimirovic wrote:
>
>> On Mar 24, 2016, at 6:06 PM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>
>> On 2016-03-24 23:50, Peter Jeremy wrote:
>>> On 2016-Mar-24 11:17:18 +0100, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>>> It is the normal behavior of any instruction that interrupts are not
>>>> recognized until the next instruction fetch. This is how the microcode
>>>> works, and it is also pretty much the same in any processor today.
>>> ...
>>>> individual instructions. You still get a fetch between each instruction,
>>>> at which point, interrupts will be recognized.
>>>
>>> Some instructions inhibit the "check for interrupts at the end of this
>>> instruction" check. I'm most familiar with the 8080 EI instruction,
>>> which enabled interrupts after the following instruction (so things like
>>> EI;HLT didn't have a window). It seems the PDP-11 SPL behaves the same.
>>
>> I don't think it should on the PDP-11, and the documentation does not mention any such thing.
>> There is a good reason the 8080 (and Z80, and others) have this property. The RETI instruction on these machines does not enable interrupts itself, so just as you note, you need to both enable interrupts and return from interrupt atomically, or else you get into a mess.
>>
>> The PDP-11 RETI instruction changes the processor priority as a part of the instruction. You do not use SPL (whatever) before a RETI.
>> Thus, it does not make sense for SPL on a PDP-11 to have this property. If it indeed does disable recognizing interrupts after an SPL, it sounds more like a bug. I guess I'll go and read the microcode to see if it mentions any of this, since I'm sort of into reading it anyway, as I was trying to debug a problem on an 11/70 only a couple of months ago…
>
> The PDP-11 has no RETI instruction; it has an RTT (ReTurn from Trap) and an RTI (ReTurn from Interrupt) instruction, but you already knew that, right? In some cases it's a problem that it's not possible to determine which is appropriate or correct to use. According to the PDP11 Architecture Handbook, the difference between them is in what happens when the RTx instruction loads a PSW that has the T bit set and when it forces a Trace trap: RTI traps immediately; RTT traps after the instruction following the RTT.
Oops. Yes, it's RTI and RTT. But the names are beside the point, and so
is the difference between these two. The point is that these
instructions do set the processor priority level, and you do not use
SPL in combination with them, so it makes no sense to have SPL inhibit
interrupts for any length of time at all. (And yes, I did know that.)
Oh, and as far as RTT vs. RTI goes, no, it's not hard to know which one
you want. You want RTT for your debugger and RTI for everything else.
The difference is about what happens after the return. With RTT, the
T-bit trap will not trap until another instruction has executed. With
RTI, you would never manage to step to the next instruction with the
T-bit, since every time you returned, you'd get another trap.
But I bet you knew that as well... ;-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Seems like we are all contributing old card stories... here is one of my
favorites from my past.
At CMU, all systems programmers working for the computer center had to put
shifts in as the operator behind the "glass door" doing the grunt stuff
(but we got all the computer time we wanted, an office and terminal - so it
was a good deal in those days). The student courses, in particular the
engineering intro to FORTRAN (WATFIV), used the TSS-based 360/67 which we
programmed and ran; but they used the batch system on cards, not timesharing
with the ASR-33s, which was quite expensive. There was a traditional glass
room with the computer, its tapes and other gear, and a counter with a
"call human for help" button where "paying users" could come ask questions
of the operator on duty. On the other side of the counter was the flock
of keypunch machines and a high-speed card reader. The printers were in
secure areas, so we would bring out student prints from their batch jobs as
needed and put them in the bins near the counter (as was pretty much
the standard of those days).
By rule, the systems programmers were also not supposed to help the
students with their assignments. They were supposed to get help from their
TAs and profs, etc., who had regular hours. But often folks were up
very late working on assignments and no one from the course was around to
ask questions. And as the operator, if you had a minute, it was not
uncommon to have a little empathy for your brothers and sisters in arms on
the other side of the counter. As long as this was not abused, the TAs
and profs, as well as our bosses in the computer center, tolerated the practice.
But if we were obviously busy, we really did not have the time to do much
to help them.
One night I was working the overnight operator shift with another coworker
who will be left nameless (but I will say that he's now an SVP at a large
computer firm these days). It was a very busy night for us for some
reason, probably something like printing bills or checks for the school or
some such, along with a full backup; so we had our hands full between
mounting tapes, changing types of paper and print heads, etc., security
procedures with the results, and the like. That night, there was also a big
assignment due shortly for one of the classes.
Sure enough, the buzzer started ringing, and it was a frustrated (and as I
remember somewhat clueless) student who needed help with his assignment.
He was claiming that his deck was being rejected / was not working.
Note: "turnaround" from depositing a card deck to receipt of a printout was
probably on the order of 10-15 minutes, and sometimes longer. One of us
came out, showed him something like a missing "BATCH WATFIV" command card
or some such, reminded him of the official policy, and probably pointed
to the sign, as we were very busy with our job. We would politely tell
him to try to find a TA or someone in the class who could help him.
The student went away, and we went back to work. A few minutes later the
buzzer went off again, same student, and the cycle repeated with some other
trivial issue. After the 4th or 5th time it was becoming a real issue
because we were really quite busy. At that point, my coworker came out and
said, "Here, bring me your deck." He looked at it and quickly said: "The
problem is you used the wrong color cards."😈
The student was completely dejected and walked away. I looked up and
said, "Man, that was cruel." But it did buy us time to finish our work.
I never found out if he re-keypunched his cards.
Clem
Steve Johnson:
This reminds me that someone at BTL threw together a "TSO Shell". It was
a wrapper around /bin/sh that slept for 10 seconds before executing each
line...
=====
And after each command exited. Discarding anything typed ahead
during the sleep, of course.
And printed all-upper-case IEFCRAPNONSENSE messages even when a
command exited successfully.
I think I still have a copy somewhere. It dates from the 6/e era,
so it would need a lot of work to compile and run on a modern system.
Occasionally I think of converting it to ISO and POSIX even though
that seems contradictory somehow.
Norman Wilson
Toronto ON
Not quite a self-reproducing program, but I did take down one of the UCSD servers one day.
I was writing a shell script to do some complex and long-running task. This was in the early days of the shell supporting functions. The script had a large number of functions, and I gave one of them the same name as the shell script itself.
I set it in motion and logged out, as I knew it would take several hours to finish the work.
The next day I logged in to find that the machine had had a load spike: the shell script had recursively started itself when it got to the function call that had the same name as the shell script. The admin kindly sent me a 'top' output showing the load at several hundred and all the jobs having my name and being my shell script.
Under this he wrote: “Never do this again.”
I haven’t.
David
> Btw. It does not emulate the smell of small machine oil
> or the mess of ppt punch chaff on the floor
Yes. I saw Clem's mail just as I was about to recommend
placing a small dish of machine oil beside the simulator.
Alas, it seems I missed out on the chad experience. Data was
regularly imported to the PDP-7 by ppt, but rarely exported.
Night-owl Ken must have taken some ppt backups, evidence of
which the midnight janitors whisked away.
Doug
It's a bit off-topic, but what were non-Unix filesystems like around 1969-1970?
The PDP-7 filesystem has i-nodes (file metadata) and filenames separate
from the i-nodes. This allows hard links and thus a non-tree structured
filesystem.
This has always struck me to be one of the most important features of
the Unix filesystem: names separated from the rest of the file metadata,
and arbitrary hard links so that there is no preferred filename.
Were these features in other contemporaneous filesystems?
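The name/metadata separation Warren describes survives in every Unix descendant, so it can be demonstrated directly (a minimal Python sketch; the file names and temp directory are arbitrary):

```python
# A minimal sketch: two directory entries are equally valid names for one
# inode, the link count lives with the file metadata, and removing the
# "original" name does not disturb the file. Requires a Unix-like system.
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")

with open(a, "w") as f:
    f.write("hello\n")
os.link(a, b)                     # a second hard link -- no copy is made

sa, sb = os.stat(a), os.stat(b)
assert sa.st_ino == sb.st_ino     # one inode behind both names
assert sa.st_nlink == 2           # reference count kept in the inode

os.unlink(a)                      # drop the first name...
with open(b) as f:                # ...and the file survives via the other
    print(f.read(), end="")       # prints: hello
```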
As a side note, the PDP-7 kernel knows about the top-level directory ("dd")
but it is agnostic to the concept of "." and "..". What that means is
that you can build a filesystem with "." and ".." links and the kernel
will deal with them as per all other links. But you can also build a
filesystem without "." or ".." and the kernel doesn't care.
There's not enough evidence (source code, papers, anecdotes) to confirm
or deny the presence of "." in the PDP-7 code that Norman scanned for us.
".." does seem to exist as it is mentioned in one source file, ds.s.
It's an intriguing mystery.
Cheers, Warren
This came up today at work; what's the origin of the open file table? The
suggestion was made that, instead, a ref-counted data structure could be
allocated at open() time to serve the same purpose, and that a table of
open files was superfluous. My guess was that this made it (relatively)
easy to look up what files referred to a particular device?
> Those file structures are collected into a single, global table. The
> question is why this latter table? One could rather imagine an
> implementation where open() allocates (e.g., via malloc())
Depending on how malloc() is implemented, fragmentation can be
serious in a program that runs forever with as many frees as
allocs. Separate allocations for each item can also be costly in time.
One malloc() strategy is to divide the arena into regions,
each of which caters for blocks of a single size, so
fragmentation doesn't occur. In essence that's how the
system tables work, except that these tables have
hard limits. Now, if the tables could be reallocated
as needed ...
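Doug's "regions of a single size" strategy can be sketched as a free-list pool (the names, like SlotPool, are mine, for illustration only): allocation and release are O(1) and fragmentation cannot occur, at the cost of the same hard limit the kernel's static tables have.

```python
# Sketch of a fixed-size-slot pool: preallocate N slots and keep a free
# list of indices. No fragmentation is possible because every allocation
# is the same size; running out raises an error, mirroring the hard
# limit of a static kernel table.

class SlotPool:
    def __init__(self, nslots):
        self.slots = [None] * nslots
        self.free = list(range(nslots))   # indices of unused slots

    def alloc(self, item):
        if not self.free:
            raise MemoryError("table full")   # the kernel's hard limit
        i = self.free.pop()
        self.slots[i] = item
        return i

    def release(self, i):
        self.slots[i] = None
        self.free.append(i)

pool = SlotPool(2)
a = pool.alloc("file A")
b = pool.alloc("file B")
pool.release(a)           # slot is immediately reusable, no fragmentation
c = pool.alloc("file C")  # reuses the freed slot
```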
Another problem with per-item allocations is performance
monitoring and debugging. It's hard to see what's
going on in a well mixed dynamic storage heap.
Doug
https://newsroom.intel.com/news-releases/andrew-s-grove-1936-2016/
I know some of the processor people at intel and I was looking around,
found this, interesting read if you are into the history:
http://people.cs.clemson.edu/~mark/330/chronques.html
For those that don't know, Colwell did the P6 pipeline, I think under
Groove or right after Groove got cancer. There was P5, then P6, then
they did a different pipeline that they called Pentium 4 (made no sense
to me but their names never do). The Pentium 4 was the one where
they speculated on what the answer would be for some instructions.
As in you could do a load and they'd guess that it was zero or not.
They were going for great clock rate, and they got it, but they also
got instructions that would take 2000 cycles to get through.
That pipeline got booted and so far as I know, the Colwell P6 pipeline
lives on in every Intel processor after the Pentium 4.
Getting back to Andy, I loved his time as CEO, I think he did a lot of
good for that company. Here's to him!
Sorry to continue the detour from disk file systems to card trays, but this
> Walking down the corridors of Comp Sci, a student in front of me
> dropped his entire deck of approx 2000 cards, all over the floor...
> I have no idea whether he got them sorted, but I sure as hell used
> rubber bands after that!
reminded me that Vic Vyssotsky liked to say of his BLODI (block diagram)
language for simulating sample-data systems that it was the only card-safe
language. You could toss a program deck down the stairs, pick it up at the
bottom, submit it to the compiler, and it would work. That was 10 years
before the filing of the famous "natural order" patent on spreadsheets,
which ordered execution the same way.
Doug
On 2016-03-18 03:00, Warren Toomey <wkt(a)tuhs.org> wrote:
> It's a bit off-topic, but what were non-Unix filesystems like around 1969-1970?
> The PDP-7 filesystem has i-nodes (file metadata) and filenames separate
> from the i-nodes. This allows hard links and thus a non-tree structured
> filesystem.
>
> This has always struck me to be one of the most important features of
> the Unix filesystem: names separated from the rest of the file metadata,
> and arbitrary hard links so that there is no preferred filename.
>
> Were these features in other contemporaneous filesystems?
I don't know exactly how contemporary ODS-1 is. It's the file system
used on RSX-11, and I think it should at least trace back to around 1972,
but I can't say more for sure.
Anyway, ODS-1, just like the Unix file system, has the directory hold
just the filename and a file identifier (pretty much the same as an inode
number). There are of course some differences in detail, but I would
say it is very similar to how Unix works. ODS-1 does not have reference
counting, but instead allows dangling directory entries that do not
point to valid files. Instead, ODS-1 has a generation counter for each
file identifier, so that when it is reused, links to the old file will
not accidentally refer to the new file.
I would think that something like Multics had something similar, but I
have no idea about that one...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Were these features [arbitrary hard links] in other contemporaneous filesystems?
Multics had arbitrary "links", which were distinguished from "branches"--the
actual file. Links and branches coexisted in directories. Unix was consciously
derived from this model, but simplified so that only links have names
and branches live elsewhere (the inode list). The architecture is
described in http://www.multicians.org/fjcc4.html. The paper
dates from 1964 or 1965; it was certainly authoritative in 1969.
I don't know whether it evolved further.
Christopher Strachey and Joe Stoy independently conceived a model
isomorphic to Unix for OS 6 at Oxford. The two systems were
essentially contemporaneous.
I am not aware of other systems that allowed multiple names for
one file, though clearly the scent was in the air at the time.
doug
> From: Johnny Billquist
> I would think that something like Multics had something similar
No, as far as I know, Multics was always 'old-style' - the directory contained
not just the name of the file, but also all the file's meta-data. Multics had
only symbolic links, I'm pretty sure.
To answer the original question, I can't think offhand of another system that
separated naming and file meta-data before Unix did it. I've always assumed
that that was one of Unix' novel ideas.
Noel
$ pdp7 unixv0.simh # Run the PDP-7 Unix kernel on SimH
PDP-7 simulator V4.0-0 Beta
sim> c # Start execution
login: ken
password: ken
@ date
Thu Jan 01 1970 00:00:05
@ ls -l
00004 drw-- 01 777 00050 dd
00035 drwr- 01 012 00060 .
00003 drw-- 01 777 00270 system
00036 lrwr- 01 777 00305 date
00037 lrwr- 01 777 00441 ls
@
inum perms lnks uid size file
Root was user-id -1, but the octal print routine sees it as an unsigned
int and prints it truncated to three octal digits, 777.
So, the kernel boots and runs.
Cheers, Warren
(Posted to both The Unix Heritage Society and the TZ mailing list)
I've been off-and-on reading the "live minus thirty years" old usenet
feed at olduse.net, and noticed something that may be of interest to
both of these groups: The original mod.sources posting of the (as far as
I can tell) earliest available version of Arthur David Olson's timezone
handling code, in 1986.
https://groups.google.com/d/msg/mod.sources/gcolqTxTt9w/04ZtaYCxLvcJ
For the files present in both, it matches revision 7441f6b6 from the git
repository, except for SCCS IDs vs %W%.
https://github.com/eggert/tz/tree/7441f6b6705782743f40b9fc40cdcc80a498fda5
The git repository contains a file ialloc.c that is not present in the
release.
Probable renamed files: these appear in the git repository under their
new names, but had the older names in the release.
New: localtime.c newctime.3 zdump.c zic.8 zic.c
Old: tzcomp.8 tzcomp.c tzdump.c settz.c settz.3
Files in the release but not this version of the git repository:
mkdir.c strchr.c: These never appear, though they're referenced in
Makefile edits.
pacificnew: doesn't appear until SCCS version 8.1 in revision aaf2a927
dated July 2006.
years.sh: Appears as SCCS 7.1 yearistype.sh, dated March 1992.
According to Ken, the inspiration for ++ and -- came
from the PDP-7 hardware, not from previous languages.
The PDP-7 supported only ++i, where i had to be one
of only a few memory addresses. "For symmetry", Ken
says, he put all four operations in B. (And he did
not use the hardware autoincrement.)
Doug
All,
Is there a good source of information about the Unix v6 filesystem
outside of the source code itself? Also, is there a source for the
history of the early Unix filesystems from v6 onward?
Thanks,
Will
> From: Brantley Coile
> But B's ++ and -- operators seem to be unique.
B seems to be like UNIX itself in this regard: a carefully selected set of
old ideas, along with a few new ones of equal value.
Noel
>> https://www.bell-labs.com/usr/dmr/www/kbman.html
>> https://www.bell-labs.com/usr/dmr/www/bintro.html
> Yup, there certainly were different versions of B.
Yes, kbman covers only one of the two implementations that
cohabited the PDP-11. The other was the same language, with
software paging, so it could have a larger data space.
Various aspects of the language were borrowed from PL/I,
BCPL and Algol 68. ++ and -- were novel operators. The
reversal of Algol's assignment operators (e.g. -=
became =-) was eventually repealed in C.
doug
So the PDP-7 code from Norman has a B interpreter. I know the history:
BCPL -> B -> NB -> C, but I don't recall seeing a decent description of
the B language. Does anybody know of such a document? We'll need something
like this so we can use the interpreter once we get it working :-)
Cheers, Warren
I grabbed a copy as well. A quick grep showed something I had forgotten:
I ran a Source redistribution service.
David
>
> On Tue, 2 Feb 2016, Warren Toomey wrote:
>
>> This is temporarily at http://www.tuhs.org/Usenet/Usenet.tar.bz2 if
>> anybody else would like to grab it.
>
> Suitably grabbed :-) I know, I must finish my mirror some day (got a few
> health issues right now)...
>
At www.skeeve.com/Usenet.tar.bz2 is a copy of UUNET's archives of the
various USENET source newsgroups. I created this file on September 2 2004.
I made it for myself, since it was clear that uu.net would disappear
sometime soon...
It's 139 Meg - Warren maybe you can put it into the archives and everyone
else can get it from there? I think the person who hosts www.skeeve.com
has some monthly limits on data transfer and I don't want him blown
out of the water.
Thanks,
Arnold
> I'm still trying to get my head around how a program such as "egrep"
> which handles complex patterns can be faster than one that doesn't.
> Is there a simple explanation, involving small words?
First, the big-word explanation. Grep is implemented as a nondeterministic
finite automaton, egrep as a deterministic finite automaton. Theory folk
abbreviate these terms to "NFA" and "DFA", but these are probably not
what you had in mind as "small words".
The way grep works is to keep a record of possible partial parsings
as it scans the subject text; it doesn't know which (if any) will
ultimately be the right one, hence "nondeterministic". For example,
suppose grep seeks pattern '^a.*bbbc' in text 'a.*bbbbbbc'. When
it has read past 3 b's, the possible partial parses are 'a.*', 'a.*b',
'a.*bb' and 'a.*bbb'. If it then reads another b, it splits the
first partial parse into 'a.*' and 'a.*b', updates the next two
to 'a.*bb' and 'a.*bbb', and kills off the fourth. If instead it
reads a c, recognition is successful; if anything else, all partials
are killed and recognition fails.
Egrep, by preprocessing the expression, produces separate code for
each of several possible states: "no b's", "1 b", "2 b's" and "3 b's".
When it's in state "1 b", for example, it switches on the next
character into "2 b's" or fails, depending on whether the
character is b or not--very fast. Grep, on the other hand, has to
update all the live parses.
So if egrep is so fast, why do we have grep? One reason is that
grep only needs to keep a list of possible progress points
in the expression. This list can't be longer than the expression.
In egrep, though, each state is a subset of progress points.
The number of possible subsets is exponential in the length
of the expression, so the recognition machine that egrep constructs
before attempting the parse can explode--perhaps overflowing
memory, or perhaps taking so long in construction that grep
can finish its slow parse before egrep even begins its fast parse.
To revert to the words of theory folks, grep takes time O(m*n) and
space O(m) worst case, while egrep takes time and space O(2^m+n).
(2^m is an overestimate, but you get the idea.)
That's the short story. In real life egrep overcomes the exponential
by lazily constructing the machine--not generating a state until it
is encountered in the parse, so no more than n states get constructed.
It's a complex program, though, for the already fancy preprocessing
must be interleaved with the parsing.
Egrep is a tour de force of classical computer science, and pays
off big on scanning big files. Still, there's a lot to say for
the simple elegance of grep (and the theoretical simplicity
of nondeterministic automata). On small jobs it can often win.
And it is guaranteed to work in O(m) memory, while egrep may need
O(n).
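Doug's description can be sketched in miniature. Below is an illustrative matcher for a tiny anchored pattern language (literals, '.', and '*'), showing the position-set simulation grep uses and the lazily cached DFA transitions egrep uses. All names are mine and this is not the historical grep/egrep code; the subset-of-positions states and lazy construction are the point.

```python
# Tiny anchored matcher: a pattern is a sequence of terms, each a
# character (or '.') optionally followed by '*'. An NFA "state" is a
# position in the term list; a DFA state is a (frozen) set of positions.

def parse(pat):
    """Parse into a list of (ch, starred) terms; '.' matches any char."""
    terms, i = [], 0
    while i < len(pat):
        star = i + 1 < len(pat) and pat[i + 1] == '*'
        terms.append((pat[i], star))
        i += 2 if star else 1
    return terms

def eps_closure(terms, states):
    """Add positions reachable without consuming input: a starred term
    may be skipped, since it can match zero characters."""
    out = set()
    for s in states:
        while True:
            out.add(s)
            if s < len(terms) and terms[s][1]:
                s += 1
            else:
                break
    return frozenset(out)

def step(terms, states, c):
    """One character of simulation: advance every live position."""
    moved = set()
    for s in states:
        if s < len(terms):
            ch, star = terms[s]
            if ch == '.' or ch == c:
                moved.add(s if star else s + 1)   # starred terms can repeat
    return eps_closure(terms, moved)

def nfa_match(terms, text):
    """grep-style: recompute the whole position set per character."""
    states = eps_closure(terms, {0})
    for c in text:
        states = step(terms, states, c)
    return len(terms) in states

def dfa_match(terms, text):
    """egrep-style: cache each (state-set, char) transition lazily,
    so only states actually encountered are ever constructed."""
    cache = {}
    states = eps_closure(terms, {0})
    for c in text:
        key = (states, c)
        if key not in cache:
            cache[key] = step(terms, states, c)
        states = cache[key]
    return len(terms) in states

t = parse("a.*bbbc")
print(nfa_match(t, "abbbbbbc"), dfa_match(t, "abbbbbbc"))  # True True
```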
-------------------------------------------------
Ken Thompson invented the grep algorithm and published it in CACM.
A pointy-headed reviewer for Computing Reviews scoffed: why would
anybody want to use the method when a DFA could do the recognition
in time O(n)? Of course the reviewer overlooked the potentially
exponential costs of constructing a DFA.
Some years later Al Aho wrote the more complicated egrep in the
expectation that bad exponential cases wouldn't happen in everyday life.
But one did. This job took 30 seconds' preprocessing to prepare
for a fraction of a second's parsing. Chagrined, Al conceived
the lazy-evaluation trick to overcome the exponential and
achieved O(n) run time, albeit with a big linear constant.
In regard to the "short history of grep", I have always thought
my request that Ken liberate regular expressions from ed caused
him to write grep. I asked him one afternoon, but I can't remember
whether I asked in person or by email. Anyway, next morning I
got an email message directing me to grep. If Ken took it
from his hip pocket, I was unaware. I'll have to ask him.
Doug
A friend just sent me a pointer to this site, which appears not
to have been mentioned on this list before:
PDP 11/70 Emulator
http://skn.noip.me/pdp11/pdp11.html
The site lists these working guest O/Ses:
RL0 BSD 2.9
RL1 RSX 11M v3.2
RL2 RSTS/E v7.0
RL3 XXDP
RK0 Unix V5
RK1 RT11 v4.0
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
All, I've got the original PDP-7 cat working in my PDP-7 user-mode
simulator called "a7out". The cat.s source code and the a7out program are
in the Github repository in src/cmds and tools, respectively.
The repository is at https://github.com/DoctorWkt/pdp7-unix
The mailing list is at http://minnie.tuhs.org/pipermail/pdp7-unix/
I'm attaching the a.out assembly output as the as7 assembler in the
repository currently is not quite ready for cat.s.
To run the original cat, create a text file, e.g. echo hello > file1
$ ./a7out a.out file1
hello <- the output of the PDP-7 machine code
Cheers, Warren
P.S Thanks again to Norman for getting us the scans.
> Norman Wilson is going to try and get us some higher quality scans which
> will help a great deal in deciphering some of the hard to read characters.
A second scan, high or low quality, is a tremendous help. Diffing them
is a really good way to spot trouble.
Doug
On Thu, Feb 25, 2016 at 01:43:03PM -0500, Robert Swierczek wrote:
> Do you know if anybody has taken up the challenge of transcribing and
> simulating the PDP-7 Unix source code you have uncovered in your
> post http://minnie.tuhs.org/pipermail/tuhs/2016-February/006622.html
> If not, I would love to get started on it as a project.
Hi Robert, yes there is a project underway to type it all in and bring it
up on SimH and hopefully on a real PDP-7. I've set up a mailing list for
the project, so let me know if you would like to join: I'll add you.
The repository is at https://github.com/DoctorWkt/pdp7-unix. I've started
on the S1 section (in scans/), and I've also started work on an assembler
and a user-mode simulator (in tools/)
Norman Wilson is going to try and get us some higher quality scans which
will help a great deal in deciphering some of the hard to read characters.
Cheers, Warren
> From: Random832
> They're 24 bits, aren't they?
Not according to the source:
typedef long daddr_t;
daddr_t s_fsize; /* size in blocks of entire volume */
short s_nfree; /* number of addresses in s_free */
daddr_t s_free[NICFREE];/* free block list */
(from param.h and filsys.h respectively).
> From: Ron Natalie
> The V6 block numbers were 24 bits.
Maybe you're thinking of the byte number within the file? The file length
was stored in a word plus a byte in the inode in V6:
char i_size0;
char *i_size1;
but the block number in the device was a word:
int s_fsize; /* size in blocks of entire volume */
int s_nfree; /* number of in core free blocks (0-100) */
int s_free[100]; /* in core free blocks */
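The practical difference between the 16-bit and 32-bit layouts is easy to quantify (a back-of-the-envelope sketch; the exact limit depends on whether the sign bit of the 16-bit int was usable):

```python
# With 512-byte blocks, a 16-bit block number can address at most
# 2**16 blocks, i.e. 32 MiB per volume (half that if the sign bit is
# off limits). V7's 32-bit daddr_t lifted that ceiling entirely.
BLOCK_SIZE = 512
v6_max_blocks = 2 ** 16
print(v6_max_blocks * BLOCK_SIZE // (1024 * 1024))   # 32 (MiB)
```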
"Use the source, Luke!"
Noel
> From: Will Senn
> Is there a good source of information about the Unix v6 filesystem
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man5/fs.5
> Also, is there a source for the history of the early Unix filesystems
> from v6 onward?
I don't know of one (although there is that article on the 4.2 filesystem),
but would love to hear of one.
I gather that V7 is basically V6 except the block numbers are 32 bits, not 16.
Noel
On Sun, Feb 21, 2016 at 09:21:14PM -0600, Will Senn wrote:
> Thanks for the link. The tools look useful. But, they appear to be extract from tape rather than create tape utils? I am away from a computer but will try them out later to make sure.
No, my bad. I thought they would make tapes, but I read the Readme files
and it doesn't look so. You could modify the mkfs.c tool that I wrote at
https://github.com/DoctorWkt/unix-jun72/blob/master/tools/mkfs.c
to write V6 filesystems. It shouldn't be too hard.
Cheers, Warren
Sometime back before the turn of the century, I remember
writing up a summary of the evolution of the UNIX file
system, starting with the earliest system I could find
information for (possibly the PDP-7 system) and running
through the printed manuals as things changed, up to
the Seventh Edition.
I think I've found it; I'll look it over and try to put
it somewhere on the web in the next day or two.
Norman Wilson
Toronto ON
> From: Will Senn
> Thanks for the link.
Sure. It's worth reading the entire V6 manual if you're going to be doing a
lot with it - lots of goodies hidden in places like that. Also the two BSTJ
Unix issues. (I think they are available online, now.)
> Supposing I created a byte faithful representation of a V6 filesystem
> on my mac, would I then be able to load the file in simh as an RK05 and
> mount and access its files and directories from a V6 instance?
That's really a SIMH question, and I don't use SIMH; I use Ersatz11. That is
certainly how Ersatz11 works; I just FTP'd the RK05 distro images over, set
them up as the files that 'implemented' various RK05 drives, and (modulo a
few teething Ersatz11 configuration issues) away it went.
Noel
> From: Random832
> That's the superblock. Look in ino.h.
Oh, right you are. Thanks for catching my mistake! (I don't have anything
like the same familiarity with V7 as I do with V6; never did any system
hacking on the former.)
Now that you mention it, I do seem to remember this kludge; IIRC, a later
Unix paper described the V7 inode layout. I never looked at the actual code,
though. Now that I do, it looks like iexpand() (in iget.c) is not exactly
portable! On a machine with a different byte order for the bytes within a
long, that ain't gonna work...
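The hazard Noel points out generalizes: the same bytes reinterpreted under different byte orders yield different 32-bit values, so code that assembles a long from bytes by position bakes in one machine's layout. A sketch with Python's struct module (the byte values are arbitrary; the PDP-11 "middle-endian" long is modeled as two little-endian 16-bit words, most significant word first):

```python
# One four-byte sequence, three interpretations. The PDP-11 stored a
# 32-bit long as two 16-bit words with the high-order word first, each
# word little-endian -- the famous "middle-endian" layout.
import struct

raw = bytes([0x01, 0x02, 0x03, 0x04])
little = struct.unpack('<I', raw)[0]   # 0x04030201
big    = struct.unpack('>I', raw)[0]   # 0x01020304
hi, lo = struct.unpack('<2H', raw)     # two little-endian 16-bit words
pdp11  = (hi << 16) | lo               # 0x02010403

print(hex(little), hex(big), hex(pdp11))
```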
Noel
Hi all, Norman Wilson has kindly scanned in some PDP-7 Unix
source code that he has kept hidden away. I've just added
it into the Unix Archive at:
http://www.tuhs.org/Archive/PDP-11/Distributions/research/McIlroy_v0/
I've updated the Readme with the details. The files are 0*.pdf.
I'm not sure if there's enough there to bring up a kernel and
some applications. I'll leave that to someone who knows PDP-7
assembly programming :-)
Many thanks Norman!
Cheers, Warren
Of some possible interest to the denizens here...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
---------- Forwarded message ----------
Date: Mon, 15 Feb 2016 07:11:44 +1100 (EST)
From: Dave Horsfall <dave(a)horsfall.org>
To: Applix List
Subject: APPLIX-L On this day... (Wirth, Feynman)
We gained Niklaus Wirth, otherwise known as Mr ALGOL (and thereby freeing
us from the chains of FORTRAN), back in 1934; you can either call him by
name, or call him by value (non-programmers are not expected to understand
this computer joke).
Upon the other paw, we lost Richard Feynman, back in 1988; he was the
bloke who sorted out those NASA management liars, over that little O-ring
incident... Well, that's what happens when the suits ignore the
engineers.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
John von Neumann halted in 1957; without him, we probably would not have
had computers as we know them (CPU-buss-memory etc).
--
Dave Horsfall
Unit 13, 79 Glennie St
North Gosford NSW 2250
0490 095 371
> There is a Henry Spencer <henry(a)spsystems.net>, who about a year ago or
> so posted to the IETF TLS list and posted to comp.compilers a decade
> ago.
I believe that's The Henry Spencer, all right. SP Systems is what he
called (perhaps still calls) himself when consulting.
I've already dug up and sent Warren another contact address for Henry,
gleaned from a mutual friend.
Norman Wilson*
Toronto ON
(Not to be confused with Norman D. Wilson, civil engineer,
after whom Wilson Avenue in Toronto is named)
One half of Unix, and what more can I say?
Well, I'll bet not many people know that he shares a birthday with Alice
Cooper...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Ken kindly tells me that both stories are right, though clearly
my impression that my query prompted Ken to write grep is wrong:
i dont see any differences between our stories.
you asked and i dug around and found it.
Would we have greps today, had that little incident not occurred?
Doug
All, I've spent some time working on the UTZoo Usenet Archive postings
from https://archive.org/download/utzoo-wiseman-usenet-archive
I've reformatted each group's postings into mbox format so I could run
them through the mailman archive tool. The results are here:
http://www.tuhs.org/Usenet/ You can now browse by group/year/month/thread.
I'll drop an index.html file in there tomorrow with a description of
each newsgroup. There are still some blemishes to fix up, as the archiver
failed to recognise the headers on some articles and they end up "posted"
in February 2016.
Other newsgroup archives are here: https://archive.org/search.php?query=usenet. I might pull out some other Unix-related groups (aus.sources etc.) and
add them. Are there any other Usenet archives around?
Cheers, Warren
P.S If anybody is still trying to recover the old 2.11BSD patches, you may
find some of them lurking in http://www.tuhs.org/Usenet/comp.bugs.2bsd/
Hi,
I successfully made SIMH VAX-11/780 emulator run 32V, 3BSD and 4.0BSD.
Details are on my web site (though rather terse):
http://zazie.tom-yam.or.jp/starunix/
Enjoy!
Naoki Hamada
nao(a)tom-yam.or.jp
Hi all, does anybody know of on-line historical Usenet archives that
I can link to, especially if they have unpacked articles (visible
subject lines would be better)?
What newsgroups are relevant? net.v7bugs, comp.sources.unix,
comp.sources.misc, net.sources, mod.sources, comp.sources.bugs?
What about platform or system-specific newsgroups?
I'll put the links here: http://wiki.tuhs.org/doku.php?id=publications:newsgroups
Thanks, Warren
P.S A good history of the legal side of Unix is here:
http://wiki.tuhs.org/doku.php?id=publications:theses
Does anybody have a working e-mail address for Henry Spencer? I've
tried his "zoo.utoronto..." address but the box is refusing SMTP
connections (from me, at least). Alternatively, could someone e-mail
him and see if he would be interested in joining the TUHS list?
And ditto for any other old Unix users!
Cheers, Warren
Can someone here ID the mystery person?
Embarrassingly, CHM has the person misidentified as well.
-------- Forwarded Message --------
Subject: [SIGCIS-Members] Need help identifying a photo
Date: Wed, 27 Jan 2016 16:46:45 +0000
From: Ceruzzi, Paul <CeruzziP(a)si.edu>
To: members(a)lists.sigcis.org <members(a)lists.sigcis.org>
There is a famous photo on Wikimedia commons, of what purports to be Ken
Thompson & Dennis Ritchie in front of a PDP-11, presumably working on
UNIX. The problem is that the seated person doesn’t look like either of
them. And he is clean-shaven. Could it be Bjarne Stroustrup? Does anyone
recall seeing T&R w/o facial hair? Any help in tracking this down would
be much appreciated! The photo has been reprinted in many places, and
I’d like to track this down before I inadvertently propagate an error.
https://commons.wikimedia.org/wiki/File:Ken_Thompson_(sitting)_and_Dennis_R…
Paul E. Ceruzzi
Curator, Division of Space History
National Air and Space Museum
MRC 311, PO Box 37012
Smithsonian Institution
Washington, DC 20013-7012
www.ceruzzi.com <http://www.ceruzzi.com>
ceruzzip(a)si.edu <mailto:ceruzzip@si.edu>
202-633-2414
[I feel like I'm spamming my own list]
I've tried to make contact with people in the UK that might have
copies of the UKUUG and EUUG newsletters: Peter Collinson,
Sunil Das, Bruce Anderson. No luck with this.
There are newsletters back to 1992 at http://www.ukuug.org/newsletter/
but I'm after the ones in the 1970s and 1980s. The current secretary
doesn't know about the earlier newsletters.
Who else can I contact?
Cheers, Warren
> we can probably substitute part of the db(1) man page from 1st
> Edition Unix for the missing page A7
That would be appropriate--properly documented, of course.
doug
Among the papers of the late Bob Morris I have found a
Unix manual that I don't remember at all--a draft by
Dennis Ritchie, in the style of (but not designated as)
a technical report with numbered sections and subsections.
It does not resemble the familiar layout of the numbered
editions. Besides the usual overview of kernel and shell,
it describes system calls and some commands, in a layout
unrelated to the familiar man-page style. Detailed
reference/tutorial manuals for as, roff, db and ed
are included as appendices.
The famous and well-justified claim that "UNIX contains a number
of features very seldom offered even by larger systems"
appears on page 1.
A little poking around tuhs.org didn't reveal a copy of
this document. Does anybody know of one somewhere else?
Doug
> Dr. Wang invented the core memory at IBM BTW
Wang did make a magnetic-core storage device (a 2-core-per-bit
shift register) but Jay Forrester's core memory, first installed
on MIT's Whirlwind computer in 1953, is the one that actually
saw use and very quickly dominated the market.
Doug
Ok, I got a few questions about PDP-11.
First, I was wondering when Bell Labs got that first PDP-11/20 what
software (if any) came with it? I assume when one bought a PDP-11/20
you would get some type of OS with it.
According to the folks at alt.sys.pdp11 the PDP-11 computer doesn't
have anything equivalent to a PC's BIOS. But I know a bit about what a
PC's BIOS does and that includes RAM Initialization. Wouldn't the DRAM
on the PDP-11/something need to be initialized too? Perhaps an older
PDP-11 doesn't have DRAM but surely the later models did?
Now the last question has to do with what made the PDP-11 architecture
so great. Part of that had to be the relative affordability of the
PDP-11 and of course it was the machine that made Unix possible. It
seems though that there should have been a PDP-11 based desktop and as
far as I can tell that didn't happen. Instead we got a bunch of micros
with 8080, z80 and 6502 cpus, but nothing that could run Unix, at
least not a Unix v7 with source code.
Mark
> First, I was wondering when Bell Labs got that first PDP-11/20 what
> software (if any) came with it?
> I have this bit set that they didn't get anything, they wrote a
> cross-assembler on another machine. I know that when it came, it didn't have a
> disk (wasn't ready yet), so it ran a chess problem (memory only) for quite a
> while until the disk came.
That is exactly right. Unix was up and running as a time-sharing
system with remote access before a primitive DOS emerged from DEC.
The chess problem was enumeration of closed knight tours.
Doug
Noel Chiappa:
I'd lay good money that the vast majority of PDP-11's never ran Unix. And
UNIX might have happened on some other machine - it's not crucially tied to
the PDP-11 - in fact, the ease with which it could be used on other machines
was a huge part of its eventual success.
=======
I have to disagree in part: the PDP-11 is a big part of
what made UNIX so widespread, especially in university
departments, in the latter part of the 1970s.
That wasn't due so much to the PDP-11's technical details
as to its pricing. The PDP-11 was a big sales success
because it was such a powerful machine, with a price that
individual departments could afford. Without a platform
like that, I don't think UNIX would have spread nearly the
way it did, even before it began to appear in a significant
way on other architectures. Save for the VAX, which was
really a PDP-11 in a gorilla suit, that didn't really happen
until the early 1980s anyway, and I'm not convinced it
would have happened had UNIX not already spread so much
on the PDP-11.
It worked both ways, of course. I too suspect that a
majority (though I'm not so sure about `vast') of PDP-11s
never ran UNIX. But I also suspect that a vast majority
of those that did might not have been purchased without
UNIX as a magnet. I don't think those who weren't
around in the latter 1970s and early 1980s can appreciate
the ways in which UNIX captured many programmers and
sysadmins (the two were not so distinct back then!) as
no other competing system could. It felt enormously
more efficient and more pleasant to work on and with
UNIX than with any of the competition, whether from DEC
or elsewhere. At the very least, none of the other
system vendors had anything to match UNIX; and by the
same token, had UNIX not been there, other hardware
vendors' systems would have had better sales.
Sometime around 1981, the university department I worked
at, which already had a VAX-11/780 and a PDP-11/45 running
UNIX, wanted to get another system. Data General tried
very hard to convince us to buy their VAX-competitor.
I remember our visiting their local office to run some
FORTRAN benchmarks. The code needed some tweaking to
work under their OS, which DG claimed was better than
UNIX. Us UNIX people had trouble restraining our chuckles
as we watched the DG guys, who I truly believe were experts
in their own OS, taking 15 or 20 minutes to do things that
would have taken two or three with a few shell loops and
ed commands.
DG did not get the sale. We bought a second-hand VAX.
Blame UNIX.
Norman Wilson
Toronto ON
On 2016-01-25 02:11, John Cowan<cowan(a)mercury.ccil.org> wrote:
> Ronald Natalie scripsit:
>
>> >There were the Dec Professional 325 and 350 desktops which had the
>> >F-11 and the 380 had the J-11 (which should make a pretty snazzy little
>> >retro UNIX system)
> As well as the 310, which was not a desk*top* but a whole desk with a
> PDP/8-A built into it. The first regular job I ever had was with a
> company that sold these along with their accounting software.
The 310 was not called a Professional, though. It was the EDUsystem if I
remember right. There were also PDP-11-based EDUsystems, called 350. Not
the same as the desktop thingy...
Isn't it wonderful how DEC reused different designations sometimes.
There was also a DECstation 88, if I remember right, which was a PDP-8
based thing.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> to help debug the kernel, we even put adb into the core resident port of
> V7 which was tricky - Noel I seem to remember we .. stole that from you
> guys at MIT
Well, I certainly don't remember doing such a thing - but I should point out
that the Unix 'community' at MIT was not at all in good touch with each
other. So perhaps someone else at MIT did it? Or perhaps it was done after
I left for Proteon?
Also, the group I was in - CSR - was, during my time with them, not well
connected to other Unix users outside MIT. So even the things we _did_ do seem
not to have made it to many (any?) people. I'm not sure why this was:
probably, since we were working exclusively on early TCP/IP stuff, we were
mostly in touch with other networking people.
The disconnect to the rest of MIT may have been because, in our case, the
technical community at Tech Square didn't have good contacts with the rest of
campus; we were kind of self-sufficient. The AI Lab people had some contacts
with the Plasma fusion group, and later the EE department on campus, but CSR
(and maybe all of LCS - I'm not sure, the groups in LCS were pretty isolated
from each other) didn't.
Also, Tech Sq was mostly about PDP-10's - initially running ITS, later TWENEX
- and only a couple of smaller groups ran Unix. The DSSR group had an 11/70,
and we were quite close to them, but AFAIK we were the only two groups in Tech
Sq running Unix. I don't think anyone else at MIT had a PDP-10, until the EE
department on campus got a TWENEX machine, so there wasn't really anyone on
campus for most of Tech Sq to interact with.
Noel
On 2016-01-25 02:11, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
>
> > The later M9301 (see disassembly of the contents here:
> >http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/M9301-YA.mac
> > of one variant) didn't clear memory either
>
> OK, so_my_ memory is failing! That code does in fact test the memory.
>
> (Although, looking at it, I can't understand how it works; after writing the
> contents of R3 into the memory section it is asked to test, it complements the
> test value in R3, before comparing it with the memory it just wrote with R3,
> to make sure they are the same. Maybe there's an error in the dis-assembly?)
Read the code again, you missed it. :-)
The code first writes one value into memory (R3), then complements R3,
and for each location checks that the memory is *not* equal to R3, and
then writes R3 and checks that it now matches. Essentially it checks that
each location can be changed to the wanted value. And it does this twice:
first zeroing, then writing ones, and then back to zeroes again. So
yes, the memory will be left containing all zeros, except for whatever
memory isn't tested.
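Johnny's description of the test maps onto a short loop. Here's a hypothetical C sketch of it (the function names and word size are mine, not from the M9301 ROM listing):

```c
#include <assert.h>
#include <stddef.h>

/* One pass of the M9301-style test as described: write a pattern into
   every word, complement it, then for each word check it is NOT already
   the complement, write the complement, and check that it stuck. */
static int test_pass(unsigned short *mem, size_t nwords, unsigned short r3)
{
    for (size_t i = 0; i < nwords; i++)
        mem[i] = r3;                /* write the test value everywhere */
    r3 = ~r3;                       /* complement, as the ROM code does */
    for (size_t i = 0; i < nwords; i++) {
        if (mem[i] == r3)           /* must NOT already equal the complement */
            return 0;
        mem[i] = r3;                /* write the complement ... */
        if (mem[i] != r3)           /* ... and check it can be changed */
            return 0;
    }
    return 1;
}

/* Two passes - zeros, then ones - leave the tested memory all zeros. */
int memtest(unsigned short *mem, size_t nwords)
{
    return test_pass(mem, nwords, 0) && test_pass(mem, nwords, 0xffff);
}
```

Net effect, as Johnny says: the memory goes zeros, then ones, then back to zeros, so a successful test leaves it cleared.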
> Anyway, it should have left the memory mostly containing all 0's.
Indeed.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Norman Wilson
> I have to disagree in part
You make a number of good points. A few comments:
> the PDP-11 is a big part of what made UNIX so widespread, especially in
> university departments
That last part was really a big factor, one not to be understated. That
penetration led to production of a whole generation of people who i) were
familiar with Unix, and ii) liked it, and were not about to put up with the
OS's being turned out by various vendors.
> I too suspect that a majority (though I'm not so sure about `vast') of
> PDP-11s never ran UNIX.
'Embedded systems'. The number of PDP-11's running timesharing was a small
share of the total number, I expect.
> I don't think those who weren't around in the latter 1970s and early
> 1980s can appreciate the ways in which UNIX captured many programmers
> ... as no other competing system could.
Very true. My jaw basically hit the floor when I first saw (ca. '75) what Unix
was like. People who didn't live through that transition can't _really_ grok
it, any more than my kids can really fully grok a world without mobile
phones. It wasn't as powerful as Multics, but I was completely blown away that
anyone could get that much capability into a PDP-11 OS.
Noel
>> It seems though that there should have been a PDP-11 based desktop
> Because DEC were a bunch of losers.
OK, that was kind of harsh. (Trying to send email too fast...) DEC had a lot
of really brilliant people, and they produced some awesome machines.
But when it comes to desktops, I think there is a certain amount of
bottom-line truth in that assessment: there was a huge market there
(obviously), and DEC should have been in a pretty good place to capture it,
but it completely failed to do so.
Why not? I put it down to corporate cultural inertia - ironically, the same
thing that allowed DEC to eat so much of IBM's lunch.
Just as IBM took way too long to understand that there was a very large
ecological niche for smaller machines _with customers who didn't want or need
the whole IBM hand-holding routine_, DEC never (or, at least, until way too
late to catch the wave) could change their mentality from producing really,
really well built computers for people who were all technical, to commodity
computers which needed to be made as absolutely cheaply as possible, and for
people who were non-technical.
The company as a whole just couldn't change its mindset that radically, that
quickly. (And a lot of the blame for that has to go to Ken Olsen, of course.
He just didn't grok how the world was changing.)
> There's some DEC history book which talks about DEC's multiple failures
> (on a variety of platforms, not just PDP-11 based ones) to get into the
> desktop market, if the title comes to me, I'll post it.
The best one on this topic, probably, is "Ultimate Entrepreneur", by Glenn
Rifkin and George Harrar, which gives a lot of detail on DEC's attempts to
build personal computers; also good is "DEC is Dead, Long Live DEC", by Edgar
Schein.
Noel
Clem Cole:
Also by the time DEC did try to build a workstation (after Masscomp,
Apollo, Sun et al had taken many of their engineers) it was too little too
late. The ship had sailed and they never recovered that market.
======
There was a window in the early 1990s when I think they could
have recovered. DEC had some pretty good MIPS-based workstations,
and Alpha was just coming out and was even better. Ultrix was
a good, solid system, and DEC OSF/1 (later Digital UNIX) was
getting there.
In 1994 or so, the group I worked in needed a new workgroup-sized
central server. Our existing stuff was mostly DECstations running
Ultrix (with a few SGI IRIX systems for specialized graphics).
We looked at the price and performance of various options:
everything SGI had was too pricey; Sun's was well behind in
performance (this was before UltraSPARC), and their OS was
primitive and required a lot of retrofitting to be usable
(this was also before Solaris 2 even came out, let alone
became stable; also before Sun grew up enough to ship a
decent X11 as part of the system).
So we bought a third-party system with an Alpha motherboard
in a PC-style case. In burn-in testing I discovered a bug in
the motherboard; the vendor were happy to fix it once they
could reproduce it in their lab (which took some doing, but
that was another story).
We were quite happy with that system, and would have bought
more had our entire department not been shut down in a
mostly-political fuss a couple of years later (that too is
another story).
DEC's desktop MIPS systems were quite good, and the Alpha
followons even better. Had the company's upper management
not by then lost all sense of how to run a company or to
sell anything ... but that was not to be.
Old-fart footnote: when our department shut down, I bought
some of our DECstations cheap from the university. I still
have them on a shelf downstairs; I've never done much with
them.
Norman Wilson
Toronto ON
> The later M9301 (see disassembly of the contents here:
> http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/M9301-YA.mac
> of one variant) didn't clear memory either
OK, so _my_ memory is failing! That code does in fact test the memory.
(Although, looking at it, I can't understand how it works; after writing the
contents of R3 into the memory section it is asked to test, it complements the
test value in R3, before comparing it with the memory it just wrote with R3,
to make sure they are the same. Maybe there's an error in the dis-assembly?)
Anyway, it should have left the memory mostly containing all 0's.
Noel
On 2016-01-24 19:01, Mark Longridge<cubexyz(a)gmail.com> wrote:
> Ok, I got a few questions about PDP-11.
>
> First, I was wondering when Bell Labs got that first PDP-11/20 what
> software (if any) came with it? I assume when one bought a PDP-11/20
> you would get some type of OS with it.
No. You might get diagnostics, but any kind of OS you would have to buy
separately, and there were several to choose from, depending on your needs.
> According to the folks at alt.sys.pdp11 the PDP-11 computer doesn't
> have anything equivalent to a PC's BIOS. But I know a bit about what a
> PC's BIOS does and that includes RAM Initialization. Wouldn't the DRAM
> on the PDP-11/something need to be initialized too? Perhaps an older
> PDP-11 doesn't have DRAM but surely the later models did?
RAM doesn't need to be initialized. Maybe you mean clearing it, so it
doesn't contain random information?
ECC memory, on the other hand, needs to be initialized, but for those
PDP-11s that have it, the initialization is done in hardware.
> Now the last question has to do with what made the PDP-11 architecture
> so great. Part of that had to be the relative affordability of the
> PDP-11 and of course it was the machine that made Unix possible. It
> seems though that there should have been a PDP-11 based desktop and as
> far as I can tell that didn't happen. Instead we got a bunch of micros
> with 8080, z80 and 6502 cpus, but nothing that could run Unix, at
> least not a Unix v7 with source code.
The architecture is very easy to program for, and rather intuitive. You
have general registers and an orthogonal instruction set, and the machine
can be programmed in a stack-based, a register-based, or a plain
memory-to-memory style equally well.
In addition, I/O is pretty simple, as there are no special I/O
instructions. The same instructions used for everything else are used for I/O as well.
Also, the memory model on the PDP-11 is pretty nice, with a proper MMU
which allows you to write reasonable OSes.
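A minimal C sketch of the idiom Johnny describes (not from the thread): since the PDP-11 has no I/O instructions, a device register is just an address you load and store through. The READY bit and register names loosely follow the DL11 console transmitter, and the registers are passed in as pointers here purely so the idiom can be shown without a real bus:

```c
#include <assert.h>
#include <stdint.h>

#define TX_READY 0200  /* "transmitter ready" bit, octal as in DEC manuals */

/* Memory-mapped I/O: ordinary loads poll the device's status register,
   and an ordinary store to the buffer register starts the transfer. */
void putch(volatile uint16_t *xcsr, volatile uint16_t *xbuf, uint16_t c)
{
    while ((*xcsr & TX_READY) == 0)  /* spin until the transmitter is idle */
        ;
    *xbuf = c;                       /* plain store kicks off the output */
}
```

The point is that nothing in the code is I/O-specific: the same MOV and BIT instructions the PDP-11 uses on memory drive the device.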
There were in fact desktop based PDP-11s, but DEC shot themselves in the
foot there. They were afraid of eating into their own business, so they
made the desktop PDP-11 incompatible in some ways with all other
PDP-11s, so you could in general not run much PDP-11 software on the
desktop, but had to develop specific programs for that platform. That,
and the fact that it took DEC too long to enter the market, meant that
the IBM PC had already become the standard by the time DEC came out with the
Professional (the PDP-11 desktop).
DEC also made a couple of other PDP-11 based systems that were sortof
desktop, such as the VT-103, which was a VT100 with a PDP-11 inside.
Interesting idea, but the VT103 didn't have good mass storage, and had a
very slow and limited PDP-11 CPU. The PDT-11 was another attempt, with
similar issues as the VT-103.
If we were to examine prototype things, DEC did a lot more as well,
including a portable PDP-11 with an LCD display. Never became a product.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Mark Longridge
> when Bell Labs got that first PDP-11/20 what software (if any) came
> with it?
I have this bit set that they didn't get anything, they wrote a
cross-assembler on another machine. I know that when it came, it didn't have a
disk (wasn't ready yet), so it ran a chess problem (memory only) for quite a
while until the disk came. I think that's in the ACM paper, or if not, one of
the BSTJ Unix history papers.
> Perhaps an older PDP-11 doesn't have DRAM but surely the later models
> did?
MOS memory came in starting roughly around the time of the 11/04 and /34.
(Well, that's not quite right - there were bipolar and MOS memory options
for the 11/45, the second PDP-11 model, but they were kind of special.)
But the earliest ROM bootstraps were too small to have space for code to
clear memory, or anything like that. The diode-array BM792 ROM certainly
didn't.
The later M9301 (see disassembly of the contents here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/M9301-YA.mac
of one variant) didn't clear memory either, although there was probably room
in the ROMs by that point.
I suspect it didn't because nobody bothered with stuff like that back then -
you just wrote over whatever was already there. Properly written code would
never have referenced a location which had not been loaded or written to; that
way you couldn't get a parity error from random garbage in semiconductor memory
at power up (and of course core always had old data in it).
> Now the last question has to do with what made the PDP-11 architecture
> so great.
Bang/buck (in the metaphorical sense) ratio.
For a machine with a 16-bit word size (i.e. limited instruction size), it had
remarkable programming capability. Data could be in registers, pushed or
popped with a stack, at fixed addresses, PC-relative, indexed into a table,
etc, etc. And _all_ the instructions (basically) had access to _all_ those
modes.
As a result, the code density was probably higher than any similar sized
machine, and back when memory was core (i.e. expensive/limited), code density
was important.
The bus was also extremely flexible, given how simple it was: memory and
devices were all on the same (simple) bus.
> of course it was the machine that made Unix possible
I'd lay good money that the vast majority of PDP-11's never ran Unix. And
UNIX might have happened on some other machine - it's not crucially tied to
the PDP-11 - in fact, the ease with which it could be used on other machines
was a huge part of its eventual success.
> It seems though that there should have been a PDP-11 based desktop and
> as far as I can tell that didn't happen.
Because DEC were a bunch of losers. There's some DEC history book which talks
about DEC's multiple failures (on a variety of platforms, not just PDP-11
based ones) to get into the desktop market, if the title comes to me, I'll
post it.
Noel
---------- Forwarded message ----------
From: Clem Cole <clemc(a)ccc.com>
Date: Sat, Jan 23, 2016 at 3:00 PM
Subject: Re: [TUHS] Missing Documents for use with the Unix Time-Sharing
System, Sixth Edition
To: Will Senn <will.senn(a)gmail.com>
below....
On Sat, Jan 23, 2016 at 12:58 PM, Will Senn <will.senn(a)gmail.com> wrote:
> All,
>
> The Unix Sixth edition programmer's manual and other documents for use
> with Unix time-sharing system are available online, in html and postscript
> form from Wolfgang Helbig's site:
>
> http://wwwlehre.dhbw-stuttgart.de/~helbig/os/v6/doc/index.html
>
> There are some papers missing from the "Documents for use with the Unix
> Time-Sharing System":
>
Hmm - these should be with the v6 distribution - some of them came
with later editions... and except for updates to said system they will be good enough.
That said, you are asking about the versions from v6. I do not seem to
have hardcopies easy to find. I'll keep looking; there is some stuff in my
attic.
> RATFOR - A Preprocessor for Rational Fortran
> NROFF User's Manual
> A Manual for Tmg Compiler-writing Language
>
This is the doc that you might not find in other places, as I think tmg
stopped being distributed at some point. Doug, as one of the authors, may
know the story.
> On the Security of UNIX
> The M6 Macro Processor
>
I think you mean m4 not m6
> A System for Typesetting Mathematics
> DC - An Interactive Desk Calculator
> BC - An Arbitrary Precision Desk-Calculator Language
> The Portable C Library (on UNIX)
> UNIX Summary
>
> Some of these are more interesting to me than others, but I tend towards
> shiny objects, so there is no telling when they will be of critical
> interest in the future. I have done quite a bit of searching for the NROFF
> document and the portable C library document and while I have found related
> works, I haven't come across the originals for sixth edition. Do any of
> y'all know where any or all of these documents are archived in their
> original/reproduced form?
Warren's V6 seems to have many of them in:
http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v6/v6doc.t…
> From: Will Senn
> I have cdb .. How do I exit it. %, CTRL-C, CTRL-D, CTRL-Z, Break,
> CTRL-Break, and so on just result in a ? being displayed.
CTL-D (EOF on input) works for me? Or maybe the version I have (it was a
binary only that came off the Shoppa disks, IIRC) is slightly different from
the one you have, and that only works in this version (which has a number of
extensions).
I don't think I ever found any other way to exit it. Although looking at the
code, it seems like probably the only way is to generate a 'quit with core
dump' interrupt - I forget what character that is in standard V6.
Noel
All,
The Unix Sixth edition programmer's manual and other documents for use
with Unix time-sharing system are available online, in html and
postscript form from Wolfgang Helbig's site:
http://wwwlehre.dhbw-stuttgart.de/~helbig/os/v6/doc/index.html
There are some papers missing from the "Documents for use with the Unix
Time-Sharing System":
RATFOR - A Preprocessor for Rational Fortran
NROFF User's Manual
A Manual for Tmg Compiler-writing Language
On the Security of UNIX
The M6 Macro Processor
A System for Typesetting Mathematics
DC - An Interactive Desk Calculator
BC - An Arbitrary Precision Desk-Calculator Language
The Portable C Library (on UNIX)
UNIX Summary
Some of these are more interesting to me than others, but I tend towards
shiny objects, so there is no telling when they will be of critical
interest in the future. I have done quite a bit of searching for the
NROFF document and the portable C library document and while I have
found related works, I haven't come across the originals for sixth
edition. Do any of y'all know where any or all of these documents are
archived in their original/reproduced form?
Regards,
Will
> From: Will Senn
> How did folks debug assembly routines in Unix v6, back in the day?
There are three different questions here, although you may not realize it:
- How did folks debug assembly routines in user programs in Unix v6
- How did folks debug assembly routines in the kernel in Unix v6
- How did folks debug assembly routines in PDP-11 standalone code created
with Unix v6
I did all three, and I used different methods for each.
For user code, there was no source-level debugger, so debugging C programs
and debugging code written in assembler were the same thing. I used 'adb'
(which is, strictly speaking, slightly post-V6 - our system at MIT was
actually sort of an early PWB clone), but V6 itself provides 'db' (and also,
IIRC, 'cdb'); all three are very similar.
For standalone code (in my case, a packet switch that ran on PDP-11's), I
used a version of DDT that was linked in with the rest of the code. The
original version was one in MACRO-11 which I inherited from Jim Mathis at
SRI, but I eventually re-wrote it in portable C, and it was used on the 68K,
uVax and 29K.
For kernel assembler code... I can't remember what I did! Although I wrote a
fair amount of it (I modified m45.s very extensively, to work with the Able
ENABLE card), so I must have done _something_, but I have no idea what. In
theory I could have linked DDT in with the kernel, but I don't think I ever
did so?
Recently I was debugging some kernel code (the splice() system call we were
discussing here), and I debugged it using... printf()'s! It was written in C,
but I don't really differentiate between debugging C code, and assembler.
> 2. No map file created by ld.
LD normally includes a symbol table in the output file, which 'nm' can dump.
> 3. No debugger that I can find.
See above.
> My workarounds include using OD to view the generated machine code
Use db/cdb/adb if you want to look at compiled code. Also, for 'cc', use the
-S flag.
Noel
All,
I'm finally returning to my study of v6 after digging a bit further into
assembly language and "other" pdp-11 operating systems. I even managed
to get hello, world working in assembly in v6 and interestingly enough,
I actually see how it works... for the most part :). Mini-note here:
http://decuser.blogspot.com/2016/01/hello-world-in-assembly-on-research.html
My question for y'all today is as follows (asked previously with a much
larger gap in understanding):
How did folks debug assembly routines in Unix v6, back in the day?
I realize that most folks didn't do assembly, but some did and I'm
curious what their approach might have been.
After having worked with RT-11 for a bit, I can see how I might develop
using RT-11 and then "port" a program across, but that seems less than
ideal. Here is my short list of missing features as I see them:
1. No listing file/cross reference list created by as.
2. No map file created by ld.
3. No debugger that I can find.
4. This is not a missing feature, but it deserves inclusion in the list,
the command as has possibly the most terse error messages I have ever
seen - B 12? Really? Thankfully, the awesome man command comes to the
rescue with the list of error codes.
My workarounds include using OD to view the generated machine code and
adding mesg calls.
Thoughts?
Will
All, I asked Peter Salus if there is an electronic version of his great
book "A Quarter Century of Unix". There isn't one, so I scanned my book
in. Peter has given TUHS the right to distribute the scanned version.
I've just added a link to the PDF of the scan here:
http://wiki.tuhs.org/doku.php?id=publications:quarter_century_of_unix
But, if you can, buy the paper version as well!
Many thanks to Peter for his generosity.
Cheers, Warren
Hoi.
Yesterday, I came across the file Berkeley_Unix_History.pdf on
my disk. It contains scanned articles of UNIX Review January 1985,
October 1985 and January 1986. Searching the web brought up this
online location for the file:
http://simson.net/ref/free_software/Berkeley_Unix_History.pdf
I read the articles for the first time and had a great time doing
so. Especially the ``Berkeley Underground'' article was pure fun!
Here's an impression:
We modified the kernel to support asynchronous I/O, distributed files,
security traces, "real-time" interrupts for subprocess multitasking,
limited screen editing, and various new system calls. We wrote
compilers, assemblers, linkers, disassemblers, database utilities,
cryptographic utilities, tutorial help systems, games, and
screen-oriented versions of standard utilities. User friendly utilities
for new users that avoided accidental file deletion, libraries to
support common operations on data structures such as lists, strings,
trees, symbol tables, and libraries to perform arbitrary precision
arithmetic and symbolic mathematics were other contributions. We
suggested improvements to many system calls and to most utilities. We
offered to fix the option flags so that the different utilities were
consistent with one another.

To Us, nothing was sacred, and We saw a great deal in UNIX that could
stand improvement. Much of what We implemented, or asked to be allowed
to implement, is now a part of System V and 4.2BSD; others of our
innovations are still missing from all versions of UNIX. Despite these
accomplishments, it seemed that whenever We asked The Powers That Be to
install Our software and make it available to the rest of the system's
users, We were greeted with stony silence.
Unfortunately, the scan is not complete as some pages are
missing. For example, page 43 (the title page of the mentioned
article) is among them.
Does anyone know where to get the full articles?
meillo
> From: Ronald Natalie
> a new GADS (Gateway Architecture and Data Structures) under Dave
> Mills's leadership would form which I attended until they morphed it
> into the IETF
To give a bit more detail, GADS was not producing needed stuff as fast as was
needed, so it was split into InArc and InEng, with Dave running InArc and
Corrigan (initially; Gross later) in charge of InEng.
Noel
It has been updated again:
The origin of the name cron is from the Greek word for time, χρόνος (chronos).[2][3] (Ken Thompson, author of cron, has confirmed this in a private communication with Brian Kernighan.)
So it would appear that if enough people bang on enough keyboards on the Wikipedia site, things can change.
David
All, although I can't contribute to the actual history of Unix, I can at
least document what happened "afterwards". Here's a piece about the
journey to make free Unix source code licenses available:
http://wiki.tuhs.org/doku.php?id=events:free_licenses
Comments welcome. Cheers, Warren
On 2016-01-06 01:54, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Mon, 4 Jan 2016, Ronald Natalie wrote:
>> >Just never figured out how to make good use of the MARK instruction on
>> >the PDP-11.
> RSX-11 probably used it, though, as could've RSTS...
Nope. Nothing in RSX uses it. As others said, probably nothing anywhere
used it.
And, as I pointed out, if you use MARK, then you cannot really use split
I/D-space, since the stack (data) needs to be in instruction space.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2016-01-05 18:43, Ronald Natalie<ron(a)ronnatalie.com> wrote:
>
> Just never figured out how to make good use of the MARK instruction on the PDP-11.
Not surprising. As others noted, few ever did. And apparently none of
those responding actually did either.
It *is* a stupid instruction in many ways. And it's not for multiple
returns either.
It's an odd way of handling stack cleanup without a frame pointer.
It's extremely bad, since it actually requires the stack to be in
instruction space. And yes, you are expected to execute on the stack.
The idea is that the caller pushes arguments on the stack, but the
cleanup of the stack is implicitly done in the subroutine itself, and at
the same time you get an argument pointer.
Example:
Calling:
MOV R5,-(SP) ; Save old R5
MOV <argn>,-(SP)
MOV <argn-1>,-(SP)
.
MOV <arg1>,-(SP)
MOV #MARKN,-(SP) ; where the N in MARK N is the number of arguments you just pushed
MOV SP,R5
JSR PC,SUB
.
.
In the subroutine you then have the arguments available relative to R5.
So that arg1 is available at 2(R5) for example.
SUB: .
.
.
RTS R5
There is a lot going on at this point. The trick is to note that the
code does an RTS R5 to return. If people remember what happens at that
point, PC gets loaded with R5, while R5 gets loaded with the PC that we
actually would like to return to (since that's what is at the top of the
stack).
And R5 is pointing into the stack, at the MARK instruction, so execution
continues with actually performing the MARK.
MARK, in turn, will cause SP <- PC + 2*N, thus restoring the stack
pointer to point to the place where the original R5 was stored.
Next, it does a PC <- R5, so that we now have the PC point to where we
actually want to return.
Next it does R5 <- (SP)+, meaning we have actually restored the original R5.
And so we have cleaned up the stack, preserved R5, and returned to the
caller again.
Also notice that the subroutine could have pushed any amount of data on
the stack before the return, and I suspect the idea was that the process
would not have needed to clean that up either. However, that fails,
since the RTS needs the return address at the top. But you can
essentially solve that by pushing -2(R5) before returning.
Ugly, isn't it? :-)
Johnny
I just re-found a quote about Unix processes that I'd "lost". It's by
Steve Johnson:
Dennis Ritchie encouraged modularity by telling all and sundry that
function calls were really, really cheap in C. Everybody started
writing small functions and modularizing. Years later we found out
that function calls were still expensive on the PDP-11, and VAX code
was often spending 50% of its time in the CALLS instruction. Dennis
had lied to us! But it was too late; we were all hooked...
http://www.catb.org/esr/writings/taoup/html/modularitychapter.html
Steve, can you recollect when you said this, was it just a quote for
Eric's book or did it come from elsewhere?
Does anybody have a measure of the expense of function calls under Unix
on either platform?
Cheers, Warren
The comments on the RP06 walking across the floor remind me of a time when I was installing netnews at a very new company. As the data transferred from the tape to the disk (we didn't have a modem yet; that was to happen in a week or so), the disk got into a walking state. I was standing with my foot against the front of the drive to keep it from moving when a friend walked into the machine room and asked what I was doing. I said "keeping it all together" and he asked about my foot. I moved it, the disk walked out a small amount, and I replaced my foot. He laughed and walked away. About 15 or 20 minutes later it was all back to "normal" and I never had another problem with it.
David
> On Jan 2, 2016, at 1:31 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> several
> RP06 (200MB) removable disks (for a picture and description, see
>
> http://www.columbia.edu/cu/computinghistory/rp06.html
As late as 1990, every UNIX I knew of still used the
expensive calls/ret instructions for subroutine calls.
I vaguely remember a consensus (and I certainly shared
the feeling) that in hindsight it would have been better
to use jsb/rsb, but changing everything would have been
so much work that nobody wanted to do it.
1990 was already past peak VAX in the UNIX world, so
I can't imagine anyone bothering to make such a change
to an existing system after then. Especially a system
that already had many existing installations who would
have to deal with the resulting compatibility problem.
During the latter part of the 1990s, I was actively
supporting a private UNIX system just for myself on
a few MicroVAXes at home. One of the things I did
was to write a VAX code generator for the then-current
version of lcc (the one around which the book was
written), so as to have an ISO-compatible compiler
and convert all of /usr/src (not so big even in those
days) to ISO. It was an interesting exercise and I
learned a lot, but even then, I wasn't brave enough to
adopt an incompatible subroutine-calling convention.
Another big time waste in the original VAX UNIX was
the system-call interface: arguments were left on the
stack (where they had been put before calling the
syscall stub routine in libc); the kernel then had
to do a full-fledged copyin to get them. It occurred
to me more than once to change the convention and have
the syscall stubs copy the arguments into registers
before executing the chmk (syscall) instruction.
That instruction didn't touch the registers; the
kernel saved them early in the chmk trap routine,
in its own address space, so no copying or access
checking would have been required to fetch their
call-time contents.
That would still have been a messy change to make,
because I'd have to be sure every program had been
relinked with the new-style libc before changing the
kernel. (This was a system without shared libraries.)
But on a personal system it would have been doable.
I never did.
It's possible that current UNIX-descended/cloned systems
that have VAX ports, like Linux or Open/Free/NetBSD,
have had a chance to start over and do better on
subroutine calls and system calls. Does anyone know?
Norman Wilson
Toronto ON
> that's 28+13 = 41 memory cycles.
> ...
> purely in overhead (counting putting the args on the stack as overhead).
Oh, I missed an instruction for de-stacking the arguments, which was typically
something like 'add #N, sp', so another two instruction word fetches, or 43
cycles.
Ironically, if N=4, the compiler used to emit a 'cmp (sp)+, (sp)+', which is
more efficient space-wise (one word instead of two), but less time-wise
(3 cycles instead of 2).
Noel
> From: Warren Toomey
> I just re-found a quote about Unix processes
> ..
> Years later we found out that function calls were still expensive
> on the PDP-11
> ..
> Does anybody have a measure of the expense of function calls under Unix
> on either platform?
Procedure calls were not cheap on the PDP-11 with the V6/V7 C compiler (which
admittedly was not the most efficient with small routines, since it always
saved all three non-temporary registers, no matter whether the called routine
used them or not).
This was especially true when compared to the code produced by the compiler
with the optimizer turned on, if the programmer was careful about allocating
'register' variables, which was pretty good.
On most PDP-11's, the speed was basically linearly related to the number of
memory references (both instruction fetch, and data), since most -11 CPU's
were memory-bound for most instructions. So for that compiler, a subroutine
call had a fair amount of overhead:
             inst  data
    call       4     1
               2     0    if any automatic variables
               1     1    minimum per single-word argument
    csv        9     5
    cret       9     5
(In V7, someone managed to bum one cycle out of csv, taking it down to 8+5.)
So assume a pair of arguments which were not register variables (i.e.
automatics, or in structures pointed to by register variables), and some
automatics in the called routine, and that's 4+2 for the arguments, plus 6+1,
a subtotal of 10+3; add in csv and cret, that's 28+13 = 41 memory cycles.
On a typical machine like an 11/40 or 11/23, which had roughly 1 megacycle
memory throughput, that meant 40 usec (on a 1 MIP machine) to do a procedure
call, purely in overhead (counting putting the args on the stack as overhead).
We found that, even with the limited memory on the -11, it made sense to run
the time/space tradeoff the other way for short things like queue
insertion/removal, and do them as macros.
A routine had to be pretty lengthy before it was worth paying the overhead, in
order to amortize the calling cost across a fair amount of work (although of
course, getting access to another three register variables could make the
compiled output for the routine somewhat shorter).
Noel
Folks remember, VAX was not designed with UNIX in mind. It had two primary
influences, assembly programmers (Cutler et al) and FORTRAN compiler
writers. The truth is, the Vax was incredibly successful in both UNIX and
its intended OS (VMS) sites, even if a number of the instructions it has
were ignored by the C compiler writers. The fact that C did not map to it
as well as it would to later architectures is not surprising given
the design constraints - C and UNIX were not part of the design. But it was
good enough (actually pretty darned good for the time) and was very, very
successful - I certainly stopped running a PDP11 when Vaxen were generally
available. I would not stop doing that until the 68000 based workstations
came along.
From my own experience, when Dave (Patterson) was writing the RISC papers
in the early 1980s, a number of us ex-industry grad student types @ UCB
were taking his architecture course, having just come off some very
successful systems such as the Vax, DG Eagle, Pr1me 750, etc. [I'll leave the
names of said parties off to protect the innocent]. But what I will say is
that the four of us used to sit in the back of his class and chuckle. We used
to remind Dave that a lot of the choices that were made on those machines
were not for "CS" style reasons. IMO: Dave really did not "get it" -- all of
those system designers did make architectural choices, but the drivers were
the code base from the customer sites, not how well spell or grep
worked. And those commercial systems generally did map well to what
the designers considered and >>why<< those engineers considered what they
did [years later HBS professor Clay Christensen's book explained why].
I've said this in other forums, but I contend that when we used pure CS to
design the world's greatest pure computer architecture (Alpha), we ultimately
failed in the market. The computer architecture was extremely successful
and many of us miss it. Hey, I now work for a company with one of the worst
instruction sets/ISAs from a Computer Science standpoint - INTEL*64 (C),
and like the Vax, it's easy to throw darts at the architecture from a
purity standpoint. Alpha was great, C and other languages map to it well,
and the designers followed all of the CS knowledge at the time. But as a
>>system<< it could not compete with the disruption caused by the 386 and
later its child, INTEL*64. And like Vaxen, INTEL*64 is ugly, but it
continues to win because of the economics.
At Intel we look at very specific codes and how they map; the choices of
what new things to add, and how the system morphs, are directly defined by
what we see from customers and, in the case of scientific codes, by how well the
FORTRAN compiler can exploit it -- because it is the same places (the
national labs and very large end users like weather, automotive, oil/gas or
life sciences) that have the same Fortran code that still need to run ;-)
This is just what DEC did years ago with the VAX (and Alpha).
As an interesting footnote, the DNA from the old DEC Fortran compiler lives
on in "ifort" (and icc). Some of the same folks are still working on the
code generator, although they are leaving us fairly rapidly as they
approach and pass their 70s. But that's a different story ;-)
So the question is not whether a particular calling sequence or set of
instructions is good; you need to look at the entire economics of the system
- which to me raises the question of whether the smartphone/tablet and ARM
will be the disruptor to INTEL*64. Time will tell.
Clem
On Sun, Jan 3, 2016 at 7:42 PM, <scj(a)yaccman.com
<https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=scj@yaccman.com>> wrote:
> Well, I certainly said this on several occasions, and the fact that it is
> recorded more or less exactly as I remember saying it suggests that I may
> have even written it somewhere, but if so, I don't recall where...
>
> As part of the PCC work, I wrote a technical report on how to design a C
> calling sequence, but that was before the VAX. Early calling sequences
> had both a stack pointer and a frame pointer, but for most machines it
> was possible to get by with just one, so calling sequences got better as
> time went on. Also, RISC machines with many more registers than the
> PDP-11 also led to more efficient calls by putting some arguments in
> registers. Later standardizations like varargs were painful on some
> architectures (especially those which had different registers for pointers
> and integers).
>
> The CALLS instruction was indeed a pig -- a space-time tradeoff in the
> wrong direction! For languages like FORTRAN it might have been justified,
> but for C it was awful. It is my memory too that CALLS was abandoned,
> perhaps first at UCB. But I actually had little hands-on experience with
> the VAX C compiler...
>
> Steve
>
>
>
>
> > I just re-found a quote about Unix processes that I'd "lost". It's by
> > Steve Johnson:
> >
> > Dennis Ritchie encouraged modularity by telling all and sundry that
> > function calls were really, really cheap in C. Everybody started
> > writing small functions and modularizing. Years later we found out
> > that function calls were still expensive on the PDP-11, and VAX code
> > was often spending 50% of its time in the CALLS instruction. Dennis
> > had lied to us! But it was too late; we were all hooked...
> > http://www.catb.org/esr/writings/taoup/html/modularitychapter.html
> >
> > Steve, can you recollect when you said this, was it just a quote for
> > Eric's book or did it come from elsewhere?
> >
> > Does anybody have a measure of the expense of function calls under Unix
> > on either platform?
> >
> > Cheers, Warren
> >
>
>
>
On 2016-01-04 00:53, Tim Bradshaw <tfb(a)tfeb.org> wrote:
>> >On 3 Jan 2016, at 23:35, Warren Toomey<wkt(a)tuhs.org> wrote:
>> >
>> >Does anybody have a measure of the expense of function calls under Unix
>> >on either platform?
>> >
> I don't have the reference to hand, but one of the things Lisp implementations (probably Franz Lisp in particular) did on the VAX was not to use CALLS: they could do this because they didn't need to interoperate with C except at known points (where they would use the C calling conventions). This change made a really significant difference to function call performance and meant that on call-heavy code Lisp was often very competitive with C.
>
> I can look up the reference (or, really, ask someone who remembers).
>
> The VAX architecture and its performance horrors must have killed DEC, I guess.
I don't know that that is a really fair description of the VAX in
general, or of DEC. DEC thrived in the age of the VAX.
However, the CALLS/CALLG and RET instructions were really horrid for
performance. Any clever programmer started using JSB and RSB instead,
as they give you the plain, straightforward call and return semantics
without all the extra stuff that the CALL instructions provide.
But, for assembler programmers, the architecture was nice. For
compilers, it's more difficult to do things optimally, and of course, it
took quite a while before hardware designers had the tools, skill, and
knowledge to implement complex instruction sets fast in hardware.
But nowadays, that is definitely not a problem, and it was more or less
already solved by the time of the NVAX chip as well, which was actually
really fast compared to a lot of stuff when it came out.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Clem cole <clemc(a)ccc.com> writes on Thu, 31 Dec 2015 23:04:04 -0500
about SPICE:
>> ...
>> Anyway SPICE1 was actually started in the late 1960's by dop [Don
>> Pederson]. Ellis Cohen wrote SPICE2 for the CDC 6400 in the mid 70's,
>> added some new device models and created a really novel bit of
>> self-modifying Fortran that compiled the inner loop.
>>
>> You are correct it was really the first widely available FOSS code -
>> an idea that you correctly note dop created.
>> ...
SPICE wasn't the only such package, or even the earliest! Still, I'll
be grateful to list readers for pointers off-list (or on) to early
publications about SPICE that I can add to the bibliography archives.
The EISPACK system, which predated LINPACK, and both of which led to
the current LAPACK, and descendants like CLAPACK and ScaLAPACK, has an
older vintage. It began with Algol routines published in the
German/English journal Numerische Mathematik
http://www.math.utah.edu/pub/tex/bib/nummath.bib
http://www.math.utah.edu/pub/tex/bib/nummath2000.bib
http://www.math.utah.edu/pub/tex/bib/nummath2010.bib
[change .bib to .html for a similar view with live hyperlinks]
The first such routine may have been that in entry Martin:1965:SDPa in
nummath.bib, which appeared in Num. Math. 79(5) 355--361 (October
1965) doi:10.1007/BF01436248. That journal did not then record
"received" dates, so the best that I can do for now is to claim
"October 1965" as the start of published code for free and open source
software in the area of numerical analysis.
Publication of related algorithms continued for 6 years, and then they
were collected in the famous HACLA (Handbook for Automatic
Computation: Linear Algebra) volume in 1971 (ISBN 0-387-05414-6).
Because Algol was little used in the USA, a project was begun in that
country to translate the Algol code to Fortran. That project was
called NATS, which originally stood for the groups at (read their
names vertically)
Northwestern University
Argonne National Laboratory
Texas, University of (at Austin)
Stanford
but as more groups joined in the effort, and EISPACK begat LINPACK,
NATS was changed to mean National Activity to Test Software.
The EISPACK book appeared in two editions in 1976 (ISBN 0-387-06710-8)
and 1977 (0-387-08254-9), volumes 6 and 51, respectively of Springer's
Lecture Notes in Computer Science (now around 9000 published volumes).
The LINPACK book appeared in 1979 (ISBN 0-89871-172-X).
The LAPACK book has three editions, in 1992 (ISBN 0-89871-294-7), 1995
(ISBN 0-89871-345-5), and 1999 (ISBN 0-89871-447-8). In between them,
the ScaLAPACK book appeared in 1997 (ISBN 0-89871-400-1).
There were several other packages described in the 1984 book
Sources and Development of Mathematical Software
ISBN 0-13-823501-5
(entry Cowell:1984:SDM), including FUNPACK, MINPACK, IMSL, SLATEC,
Boeing, AT&T PORT, and NAG. Some are free, and others are commercial.
The Algol code from Numerische Mathematik, like the ACM Collected
Algorithms, the Applied Statistics algorithms, and the Transactions on
Mathematical Software algorithms, was intended to be freely available
to anyone for any purpose, and no license of any kind was claimed for
it. That tradition continues with all of its descendants in the *PACK
family.
I have old archives of source code for EISPACK and LINPACK, but
comment documentation in EISPACK does not include revision dates, just
references to page numbers in the HACLA volume from 1971, and rarely,
to journal articles from 1968, 1970 and 1973. My filesystem dates,
alas, only reflect the copying from distribution tape to disk, and my
oldest file date for EISPACK is 20-Apr-1981.
The LINPACK comments appear to be almost entirely without dates: I found
only one:
snrm2.for:11:C C.L.LAWSON, 1978 JAN 08
The bibliography on the GNU Project at
http://www.math.utah.edu/pub/tex/bib/gnu.bib
records most of the books mentioned above, and it also contains as its
first entry, Galler:1960:LEC, a letter published in the April 1960
issue of Communications of the ACM from Bernie Galler, with this
field:
remark = "From the letter: ``\ldots{} it is clear that what is
being charged for is the development of the program,
and while I am particularly unhappy that it comes from
a university, I believe it is damaging to the whole
profession. There isn't a 704 installation that hasn't
directly benefited from the free exchange of programs
made possible by the distribution facilities of
SHARE. If we start to sell our programs, this will set
very undesirable precedents.''",
That is so far the earliest reference that I have found for the notion
that software should be free, long before Richard Stallman, Eric
Raymond, Linus Torvalds, and others became such well-known proponents
of that idea, and we had large and profitable companies like Red Hat
and SUSE devoted to supporting, for a fee, such software.
I was a graduate student in quantum chemistry at the Quantum Theory
Project (QTP) at the University of Florida in Gainesville in the late
1960s and early 1970s, and we had a general practice of sharing of
code among various university research groups, most notably through
the Quantum Chemistry Program Exchange (QCPE) hosted at the University
of Indiana in Bloomington, IN.
A search through my bibliography archives found my earliest recording,
a 6-Apr-1971 publication (by me), with mention of QCPE. Library
searches found a catalog entry for QCPE Catalog volume 19 (1987), so
perhaps volume 1 appeared in 1968. But no --- I just found in its
home institution's library catalog
http://www.iucat.iu.edu/?utf8=%E2%9C%93&search_field=all_fields&q=QCPE&high…
an entry dated 1963, with details
Publishing history: 1 (Apr. 1963)- Ceased with 71 (Nov. 1980).
Other widely-distributed programs of that time included Enrico
Clementi's IBM Research group's IBMOL (about 1965), and others named
MOLECULE (pre-1975), POLYATOM (1963), and Gaussian (1970).
The POLYATOM year appears to be the earliest of those: see the paper
by Michael Barnett at
http://dx.doi.org/10.1103/RevModPhys.35.571
It appears in a July 1963 journal issue, again without a "received"
date. It begins:
A system of programs is being written by Dr. Malcolm
C. Harrison, Dr. Jules W. Moskowitz, Dr. Brian T. Sutcliffe,
D. E. Ellis, and R. S. Pitzer, to perform nonempirical
calculations for small molecules.
I have met most of those (or been in the same group, in Don Ellis's case),
and it is worth noting their affiliations to emphasize the broad
character of that work:
Malcolm Courant Institute of Mathematical Sciences, New York University, NY
Jules New York University, NY
Brian York University, York, UK
Don University of Florida (later, Northwestern University)
Russ Harvard, Cambridge, MA (later, Ohio State University)
Michael MIT, Cambridge, MA and various UK sites in academia and industry
(see https://en.wikipedia.org/wiki/Michael_P._Barnett)
On the subject of the Gaussian program, developed at Carnegie-Mellon
University, see the two sites
https://en.wikipedia.org/wiki/Gaussian_%28software%29
http://www.bannedbygaussian.org/
The second decries the loss of openness of Gaussian, which remains a
widely-used commercial product.
There is also a book on the subject of mathematics whose use is
encumbered by patents and copyrights:
Ben Klemens
Math you can't use: patents, copyright, and software
ISBN 0-8157-4942-2
(entry Klemens:2006:MYC in http://www.math.utah.edu/pub/tex/bib/master.bib)
----------------------------------------
P.S. A final sad personal note on computing history:
When our DEC-20/60 (Arpanet node UTAH-SCIENCE, later science.utah.edu
and still later, math.utah.edu) was retired on 31-Oct-1990 (its
predecessor, a DEC-20/40 began operating in March 1978) we were faced
with several cabinets full of 9-track tapes (about 25MB each), several
RP06 (200MB) removable disks (for a picture and description, see
http://www.columbia.edu/cu/computinghistory/rp06.html
) and the contents of three washing-machine sized RP07 (600MB) disks,
and were moving to a new machine room in an adjacent building.
We were able to copy over the RP0[67] disk contents, and I still have
them online on my desktop, but the tapes were financially infeasible
for us to copy to disk on the new VAX 8600 server, and we were leaving
9-track tape technology behind. There were probably 500 to 1000 of
those tapes, and all that we could do was fill a dumpster with them,
because we had no place to store the physical volumes at the new site,
and no money for their bits. I have deeply regretted that loss of 25
years of my, and our, early computing history ever since.
Computers were for far too long crippled by too little memory and too
little permanent storage, and only post-2000 has that situation been
alleviated with radical reductions in storage costs per byte of data.
My new desktop 8TB drive is 3.6 million times cheaper per byte than an
RP06 drive was. Had we been able to foresee that dramatic growth in
capacity, we could have archived those tapes in an off-campus
warehouse for later (attempted) data retrieval.
------------------------------------------------------------------------
P.P.S. Besides VAX VMS, our migration path from TOPS-20 was primarily
to Unix, first on the Wollongong distribution of BSD (3.x, I think)
running on VAX 750 machines in the early 1980s, then on Sun 3
MC68000-based workstations in 1988 that ultimately evolved to an
eclectic mixture of CPUs and vendors. My software test lab now has
about 70 flavors of Unix on assorted physical and virtual machines,
with ARM, MIPS, PowerPC, SPARC, x86, and x86-64 processors. Our last
DEC Alpha CPU died with its power supply 16 months ago, and a
colleague still has a runnable MC68020 box (an old NeXT desktop).
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Jacob Goense <dugo(a)xs4all.nl>
> Mills's 1983 RFC889[2] calls the original PING Packet InterNet Groper.
I have a strong suspicion that Packet-etc is a 'backronym' from Dave Mills.
Note that the use of the term "echo" for a packet returned dates back quite a
while before that, see e.g. IEN 104, "Minutes of the Fault Isolation
Meeting", from March 1979:
"ability to echo packets off any gateway"
When ICMP was split from GGP (see IEN-109, RFC-777), the functionality
migrated from GGP to ICMP, and was generalized so that all hosts provided the
capability, not just routers.
Noel
Personally, I lean away from listing the nine billion debunked
names of cron. It's like adding a disclaimer to cat(1) to
explain that cat just copies data to standard output, it doesn't
transform it or compute how long it would take to send the data
over UUCP.
But it probably shows that I have been trying to write a couple
of manual pages lately (for some personal stuff, plus some docs
for work that are not technically manual pages but deserve the
same sort of conciseness).
Maybe Wikipedia-page format should admit an optional BUGS section.
Norman Wilson
Toronto ON
PS: seriously, though I wouldn't bother including the debunking
text myself, save perhaps on the Talk page to encourage editors
to delete any future attempts to revive the un-names, I have no
problem with Grog doing it. More power to him if he has the
energy!
Hello all!
While I was reading the article "A Research UNIX Reader: Annotated Excerpts
from the Programmer's Manual" from Douglas McIlroy, I learnt of a set of
utilities for designing electronic circuits. Here is a brief quote of this
article:
"CDL (v7 pages 60-63)
Although most users do not encounter the UNIX Circuit Design System, it has long
stood as an important application in the lab. Originated by Sandy Fraser and
extended by Steve Bourne, Joe Condon, and Andrew Hume, UCDS handles circuits
expressed in a common design language, cdl. It includes programs to create
descriptions using interactive graphics, to lay out boards automatically, to
check circuits for consistency, to guide wire-wrap machines, to specify
combinational circuits and optimize them for programmed logic arrays (Chesson
and Thompson). Without UCDS, significant inventions like Datakit, the 5620 Blit
terminal, or the Belle chess machine would never have been built. UCDS appeared
in only one manual, v7."
I looked it up in the 7th Edition manual and found no references to
this system. I also searched a v7 system image downloaded from TUHS and got no
results. However, I found some references to this system in USENET archives; in
particular, two newsgroups, net.draw and later net.ucds, were dedicated to it.
Apparently two of the binaries of the system were called "draw" and "wrap". I
also found a manual of a similar system which I suppose is the UCDS descendant
in the 1st Edition of Plan 9. This is the link of the document:
http://doc.cat-v.org/plan_9/1st_edition/cda/
However, that edition of Plan 9 was not publicly released, and I could not
find the system in later editions. But since v7 Unix is available, I hope it
may be possible to get hold of an older release at least.
Does anyone have any information?
Thank you in advance!
--- Michele
I was going through the old AUUG newsletters at
http://www.tuhs.org/Archive/Documentation/AUUGN/
looking for wiki material. They are a mine of information!
I've sent an e-mail off to the UKUUG folk to see if they have any
on-line newsletters. Does anybody know what happened to EUUG, especially
if any of their newsletters have been digitised?
And Usenix ;login, are any of their old newsletters available?
If not, who can I lobby to get this done? There's only 3 1/2 years
left before the 50th anniversary!
Cheers, Warren
> From: Wolfgang Helbig
> The HALT instruction of the real PDP11 only stops the CPU
I have this bit set that on at least some models of the real machine, when
the CPU is halted, it does not do DMA grants? If so, on such machines, the
trick of depositing in the device registers directly would not work; the
device could not do the bus cycles to do the transfer to memory. Anyone know
for sure which models do service DMA requests while halted?
Noel
Something of a tangent:
In my early days with UNIX, one of the systems I helped look
after was an 11/45. Normally we booted it from an SMD disk
with a third-party RP-compatible controller, for which we
had a boot ROM. Occasionally, however, we wanted to boot it
from RK05, usually to run diagnostics, occasionally for some
emergency reason (like the root file system being utterly
scrambled, or the time we ran that system, with UNIX, on a
single RK05 pack, for several days so our secretaries could
keep doing their troff work while the people who had broken
our air-conditioning system got it fixed--all other systems
in our small machine room had to stay shut down).
There was no boot ROM for the RK05, but it didn't matter:
one just did the following from the front-panel switches:
1. Halt/Enable to Halt
2. System reset (also sends a reset to the UNIBUS)
3. Load address 777404
4. Deposit 5.
(watch lights blink for a second or so)
5. Load address 0
6. Halt/Enable to Enable
7. Continue
777404 is the RK11's command register. 5 is a read command.
Resetting the system reset the RK11, which cleared all the
registers; in particular the word count, bus address, and
disk address registers. So when the 5 was deposited (including
the bit 01, the GO bit), the RK11 would read from address 0 on
the disk to address 0 in physical memory, then increment the
word-count register, and keep doing so until the word count
was zero after the increment. Or, in higher-level terms, read
the first 65536 words of the disk into the first 65536 words
of memory.
Then starting at address 0 would start executing whatever code
was at the beginning of memory (read from the beginning of the
disk).
Only the first 256 words (512 bytes) of the disk were really
needed, of course, but it was harmless, faster, and easier to
remember if one just left the word-count at its initial zero,
so that is what we did.
The boot ROM for the SMD disk had a certain charm as well.
It was a quad-high UNIBUS card with a 16x16 array of diodes,
representing 16 words of memory. I forget whether one inserted
or removed a diode to make a bit one rather than zero.
It's too bad people don't get to do this sort of low-level stuff
these days; it gives one rather a good feel for what a bootstrap
does when one issues the command(s) oneself, or physically
programs the boot ROM.
Norman Wilson
Toronto ON
Hi all, thanks for the wiki suggestions so far.
Does anybody have any lists of good Unix websites that I can add in to
the wiki at http://wiki.tuhs.org/doku.php?id=publications:websites
Also, any suggestions on how to organise the page, as I can see we will
end up with hundreds of links!
Cheers, Warren
Someone off-list today asked about an annotated list of Unix papers which
might be good to add to the new wiki.
I've just uploaded my own short list of Unix papers, in BibTeX format, on
the wiki at:
http://wiki.tuhs.org/lib/exe/fetch.php?media=publications:wkt_reflist.bib
If you have your own list of references in whatever format, could you
upload them into the wiki also?
Once you have registered and have write permissions, go to:
Media manager -> publications -> Upload. Select your file and upload.
The dokuwiki can deal with references in BibTeX format, I just don't know how
to do it yet. Once I do, I'll decorate links to papers with a reference.
Cheers & thanks, Warren
P.S. I just uploaded some of the BSTJ papers into
http://www.tuhs.org/Archive/Documentation/Papers/BSTJ/
I decided that, given that we have a few years until the 50th anniversary,
I would set up a wiki for Unix in a similar vein to the Multicians one.
So I've made a start at http://wiki.tuhs.org (if/when the A record propagates).
I'd love to get some other people to help out, but I'll keep adding stuff
and we will see how it goes. Any good anecdotes about Unix are most welcome!
If you want to get edit status, register and then e-mail me so I can
manually mark you as having edit status.
Cheers, Warren
> > Louis wrote a disk loaded program called RUNCOM that read command
> > lines from a file,
> Hence, presumably, the .foorc files of Unix and the rc shell.
Yes, rc files were named for runcom, but did not adhere to runcom's
curious limit of 6 commands.
Doug
I got this a couple of days ago and thought I had sent it on, but
apparently not. Here goes.
Greg
On Saturday, 26 December 2015 at 11:12:59 -0500, Tom Van Vleck wrote:
> Short answer to your question is "depends on what you mean by shell."
> Answer for Unix heads is http://multicians.org/shell.html where Louis Pouzin says he made up the name
> for Multics. We never called the CTSS command processor a shell.
>
> When I used CTSS in 1963, command processing was in (wired) code in A-core.
> A simple program tokenized input lines, looked up the token in A-core tables, and either ran an
> A-core routine or loaded a command file, passing the rest of the arguments as a string array.
> To add a command, you had to recompile CTSS. Look at the module COMC in the source.
>
> This command language is documented in the "candy stripe" CTSS manual.
> http://bitsavers.org/pdf/mit/ctss/CTSS_ProgrammersGuide.pdf
>
> About 1964 or 65, COMC was changed to not list the disk loaded commands; if the table
> lookup failed, COMC looked for a disk file in a system directory, and ran it if it was found.
> System maintainers could add a command by copying a file into the directory.
> Command files were in core image format, already loaded and linked. Conventional practice
> was to make them small and to expand the core image for things like I/O buffers at execution start.
>
> Some disk-loaded commands were listed in COMC and flagged as "privileged" so that they could
> call special supervisor entries to get the supervisor to do things forbidden to regular programs.
> the LOGIN command was an example: it could read the password file, forbidden to regular users,
> and could patch the A-core table of logged in users.
>
> Louis wrote a disk loaded program called RUNCOM that read command lines from a file, substituted
> arguments into the command, and requested the supervisor to run them, and then return control
> to RUNCOM. This is a shell-like function.
>
> Noel Morris and I wrote an author-maintained unprivileged B-core CTSS program in 1965 called
> . SAVED that was also shell-like. It read lines from the terminal, tokenized them, expanded
> abbreviations and iterations, and ran sequences of commands, and then resumed itself. It had other
> features such as inter-user text messages. It allowed power CTSS users to extend the
> system-provided command set with their own set of SAVED files, all treated uniformly.
>
> Noel and I also added a facility to CTSS that allowed the system to run batch jobs. The user
> submitted a RUNCOM file to a queue for later processing, much like Unix CRON.
>
> Revised command processing, RUNCOM, and . SAVED are documented in the second edition CTSS manual.
> http://bitsavers.org/pdf/mit/ctss/CTSS_ProgrammersGuide_Dec69.pdf
>
> Multics had a program known as the shell, which went through a long series of evolutions.
> Users could replace their command shell. A program called the listener read command lines
> and fed them to the shell, which tokenized the command lines and found and called individual commands.
> The argument-substituting run-from-a-file mode of operation of RUNCOM was done by the exec_com command.
> All of these Multics features and design were familiar to Ken and Dennis when they worked on Multics.
>
> You say you have been looking at the CTSS source. You know you can run a simulated
> 7094 running a simulated CTSS, right? http://www.cozx.com/~dpitts/ibm7090.html
>
> regards, tom
--
Sent from my desktop computer.
Finger grog(a)FreeBSD.org for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft MUA reports
problems, please read http://tinyurl.com/broken-mua
John von Neumann was born in 1903; without him, we probably wouldn't have
had computers at all (but we could've had a Wintel version, I suppose,
wherein everything is controlled by BB).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
All,
I just finished some compact analyses on the boot loaders that are
presented in "Setting up Unix - Sixth Edition" by Thompson and Ritchie.
They are a follow-up to the more detailed analysis of the Tape Bootstrap
Loader that I mentioned previously, and the entry is posted here:
http://decuser.blogspot.com/2015/12/pdp-11-bootstrap-loaders-some-analysis.…
What was most interesting about the analyses and programs was how
related they were. At the end of the day, once I understood how the
first one worked, the other two were pretty simple and required only
minor tweaks in the coding to achieve their results.
I am moving on now that I have a pretty good idea of how these bits of
code work and will be starting to program in Macro-11 for a few weeks to
get a handle on things before I return to the deep dive into the source
code of v6 that I temporarily put on hold so I could actually make sense
of the Assembly bits.
I really appreciate everyone's help, tips, suggestions and even critiques.
Thanks,
Will
I must be the only guy here who browses reddit (or I missed someone
else posting this).
This dude ported 5th edition to a game boy. I'll admit my non coolness
by saying I don't know what a game boy is but I'm guessing it's some
video game thingy.
http://www.kernelthread.com/publications/gbaunix/
Dave Horsfall:
John von Neumann was born in 1903; without him, we probably wouldn't have
had computers at all (but we could've had a Wintel version, I suppose,
wherein everything is controlled by BB).
====
On this day in Australia, but not yet in North America or Europe.
Or, as Warren would probably prefer, it was 2015 in England, but still only
1903 in Australia. Such was the great difference in time twixt these two
great nations.
Australia, Australia, we love you from the heart,
the liver, the kidneys, and the giblets,
and every other part!
Norman Wilson
Toronto ON
All,
I'm trying to understand the RK bootloader code that is found in
"Setting up Unix - Sixth Edition". My question is related to RKBA, RK's
bus address buffer. Is the bus address the same as memory address? If
so, the code makes sense, if not, I appreciate y'alls help.
Here's what I have so far:
RK05
01 012700 MOV #177414,R0 ; Move the address of RKDB into R0
02 177414 ; RKDB address
03 005040 CLR -(R0) ; Decrement R0 and clear the contents of RKDA
04 005040 CLR -(R0) ; Decrement R0 and clear the contents of RKBA
05 010040 MOV R0,-(R0) ; Move the contents of R0 (the RKBA address) into decremented R0 (RKWC)
06 012740 MOV #5,-(R0) ; Decrement R0 and move 5 into RKCS
07 000005 ; Read and go
08 105710 WAIT: TSTB (R0) ; Test the lower byte of RKCS
09 002376 BGE WAIT ; When bit 7 becomes 1, the read is done
10 005007 CLR PC ; Set PC to 000000, the start of the bytes read
RKDB - RK data buffer register
This register is a general data register and it only used by the code
above to initialize R0 so that subsequent RK addresses can be found by
simply decrementing R0.
RKDA - RK disk address register
This register determines the starting disk address of the read operation
and is cleared by the code.
RKBA - RK current bus address register
This register contains the bus address to or from which data will be
transferred. Is this the same as memory address?
RKWC - RK word count register
Two's complement of the number of words to be transferred.
RKCS - RK control status register
This is the register that controls the device and provides the device
status to the program
Lines 1-2
The execution of the boot loader code moves the address of RKDB into R0
to initialize the register so that it can be used to obtain the other RK
buffer addresses as they are needed.
Line 3
The RKDA buffer is cleared, setting the disk address to 0.
Line 4
The RKBA buffer is cleared, setting the bus address to 0.
Line 5
The value in R0 is transferred into the RKWC buffer. The address of RKBA,
177410, which is the value left in R0, is a convenient number to use for the
read operation because it is a large enough negative count to cause the
program to read in a block of data. The number is in two's complement and
represents -370 (octal). This tells the disk controller that 370 octal
(248 decimal) words, or 496 bytes, will be transferred.
Lines 6-7
The value 5 is placed into RKCS, this value represents a read operation
and go.
Lines 8-9
The lower byte of RKCS is tested and when it is greater than or equal to
zero (not negative), it loops, waiting until the value is negative, that
is until bit 7 becomes 1, which indicates Control Ready (RDY) and done.
Line 10
PC is set to 00000 and execution of the bytes read from the disk
begins at location 00000.
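The Line 5 arithmetic is easy to double-check (my sketch, using the register addresses quoted above):

```python
# Sketch: the value the bootstrap leaves in RKWC. After the two CLRs,
# R0 holds the address of RKBA (0o177410); MOV R0,-(R0) stores that
# value into RKWC, where the RK11 reads it as a negative word count.

rkwc = 0o177410
words = 0x10000 - rkwc        # magnitude of the two's-complement count
print(oct(words), words, 2 * words)   # 0o370 248 496
```

So the controller transfers 370 octal (248 decimal) words, i.e. 496 bytes, comfortably enough to pull in the bootstrap at the start of the disk.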
> From: John Cowan
> Rather than starting another process, the Multics shell mapped the
> executable program the user requested into its own space .. then jumped
> into it. The equivalent of exit() simply jumped back to the shell code.
This is from memory, so apply the proverbial grain, but ISTR that the
original concept for the Multics shell was just like that for Unix - i.e.
each command would be a separate child process. This was given up when Multics
processes turned out to be so computationally expensive, and they went to
commands being subroutines that were dynamically linked in (very easy with
Multics, where dynamic linking was a fundamental system capability) and
called from the shell.
Noel
So what is the etymology of "shell", then? I see that Multics has a shell. Was the CTSS user interface also called a shell?
Cheers, Warren
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
John Cowan:
Wikipedia is by nature a *summary of the published literature*. If you
want to get some folklore, like what "cron" stands for, into Wikipedia,
then publish a folklore article in a journal, book, or similar reputable
publication. Random uncontrolled mailing lists simply do not count.
======
That sounds fair enough on the surface.
But if you follow the references cited to support the cron
acronyms, you find that random unsupported assertions in
conference papers do count. That's not a lot better.
I'd like to see a published, citable reference for the
true origin of `cron'. Even better, better published
material for a lot of the charming minutiae of the early
days of UNIX. (Anyone feel up to interviewing Doug and
Ken and Brian and whoever else is left, and writing it up
for publication in ;login:?)
But I'd be satisfied if we could somehow stamp out the
use of spurious references to support spurious claims.
If I had the time and energy I'd look into how to challenge
the cron acronyms on those grounds. Any volunteers?
Norman Wilson
Toronto ON
> From: Will Senn
> 000777 HALT
That's actually "BR ."; the difference is important, since the CPU (IIRC)
doesn't honour DMA requests when it is halted, and DMA needs to be working for
the controller to read that first block (a secondary tape bootstrap) into
memory.
> This seems like gobbledegook to me.
:-)
> It moves the MTCMA (Magtape Current Memory Address) into R0, then it
> moves the MTCMA into the MTBRC (Magtape Byte Record Count)
"The address of the MTCMA into", etc. Looking quickly at the programming spec
for the TM11 controllers, it wants a negative of the byte count to transfer in
this register; the address of the MTCMA just happens to also be a large enough
negative number to be usable as the (negative) size of the transfer request.
(The first record is probably shorter than that, but that doesn't matter.)
Note that this code could probably also have been written:
MOV #172524, R0
MOV R0,@R0
and been equally functional.
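The trick of reusing MTCMA's own address as the byte count can be checked numerically (a sketch of mine, not from the TM11 programming spec):

```python
# Sketch: why depositing the MTCMA address into MTBRC works. MTBRC
# wants the two's complement of the byte count, and the register
# address 0o172524 happens to read as a large negative 16-bit value.

mtcma_addr = 0o172524
count = mtcma_addr - 0x10000      # two's-complement reading of the value
print(count)                      # -2732: requests up to 2732 bytes
```

Any first tape record shorter than that is read in full, which is all the bootstrap needs.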
> then it moves 60003 into the MTC (Magtape control register), which
> causes a read operation with 800BPI 9 Channel density.
I'm too lazy to look at the programming spec for the details, but that sounds
right.
> Am I misinterpreting the byte codes or is this some idiosyncratic way to
> get the Magnetic tape to rewind or something (the TM11 has a control
> function to rewind, so it seems unlikely that this is the case
No, it's just the shortest possible program to read the first block off the
tape.
It depends on i) the operator having manually set the tape to the right point
(the start of the tape), so that it's the first block that gets read, and ii)
the fact that the reset performed by hitting the 'Start' key on the CPU clears
the TM11 registers, including the Current Memory Address register, so the
block that's read is read into memory location zero.
Hence the direction to 'once the tape has stopped moving, re-start the CPU at
0'.
Noel
> Thank you for responding so carefully.
The devil is in the details...
> I have been reading the PDP-11/40 handbook, much too much :)
I'm not sure that's possible! :-)
Yes, yes, I know, the architecture is deader than a doornail for serious use,
but I liken it to sailing vessels: nobody uses them for serious cargo hauling
any more, but they are still much beloved (and for good reasons, IMO).
The PDP-11 is an incredibly elegant architecture, perhaps the best ever (IMO),
which remains one of the very best examples ever of how to get 30 pounds into
the proverbial ten-pound sack - like early UNIX (more below).
> this is really elegant code. The guys who thought this up were amazing.
Nah, it's just a clever hack (small-scale). What is really, almost
un-approachably, brilliant about early UNIX is the amount of functionality
they got into such a small machine.
It's hard to really appreciate, now, the impact UNIX had when it first
appeared on the scene: just as it's impossible for people who didn't
themselves actually experience the pre-Internet world to _really_ appreciate
what it was like (even turning off all one's computers for a day only
approximates it). I think only people who lived with prior 'small computer
OS's' could really grasp what a giant leap it was, compared to what came
before.
I remember first being shown it in circa 1975 or so, and just being utterly
blown away: the ability to trivially add arbitrary commands, I/O redirection,
invisibly mountable sections of the directory tree - the list just goes on and
on. Heck, it was better than all but a few 'big machine' OS's!
> Thanks again for your help.
Eh, de nada.
Noel
John Cowan:
Well, of course there are conferences and there are conferences. The
only conference I've ever had a paper published at, Balisage, is as
peer-reviewed as any journal. (And it is gold open access and doesn't
charge for pages -- the storage costs are absorbed as conference overhead.)
====
Have you actually looked up the cited reference?
The trouble is not that it's a conference paper. The trouble is
that that the `authority' being cited is just a random assertion,
not backed up.
It's as if I mentioned your name in a paper about something else,
remarked in passing and without any citation of my own that you have
a wooden leg, and Wikipedia accepted that as proof of your prosthesis.
Norman Wilson
Toronto ON
(Four limbs and eight eyes, thank you very much)
On Dec 22, 2015, at 5:44 PM, Norman Wilson <norman(a)oclsc.org> wrote:
> If that's the quality of reference they accept, there is simply no
> reason to take anything they publish as gospel.
You're mistaking Wikipedia for an information source you can rely on. It's
not. It's more akin to an attempt to prove that an infinite number of
monkeys, with an infinite number of typewriters, and an infinite amount of
time, can produce a reliable encyclopaedia.
(Yes, yes, spare me the surveys that show that Wikipedia's error rates aren't
that bad, when compared with other encyclopaedias, etc.)
Don't get me wrong, Wikipedia is quite useful as a place for an
_introduction_ to any topic, but anyone who really wants to _reliably_ know
anything about a topic needs to look at the references, not the articles.
There was an attempt to do a Wikipedia-like online encyclopaedia that one
could rely on - Citizendium - but alas it doesn't seem to have taken off (or
hadn't when I got distracted from working on it).
And I know whereof I speak; those who wish to be amused may want to read:
http://en.wikipedia.org/wiki/User:Jnc/Astronomer_vs_Amateur
And apologies for continuing the off-topic (this group certainly can't fix
Wikipedia, people have been complaining about this problem for many years
now), but some buttons, you just have to respond when they are pushed...
Noel
On 2015-12-23 17:04, norman(a)oclsc.org (Norman Wilson) wrote:
> John Cowan:
>
> Wikipedia is by nature a*summary of the published literature*. If you
> want to get some folklore, like what "cron" stands for, into Wikipedia,
> then publish a folklore article in a journal, book, or similar reputable
> publication. Random uncontrolled mailing lists simply do not count.
>
> ======
>
> That sounds fair enough on the surface.
>
> But if you follow the references cited to support the cron
> acronyms, you find that random unsupported assertions in
> conference papers do count. That's not a lot better.
I've had similar experiences with Wikipedia in the past. At one point I
was trying to get the PDP-11 article corrected, as it said that the
PDP-11 was an architecture that disappeared in the 80s (paraphrasing). I
pointed out that the last *new* PDP-11 model from DEC itself was
released in 1990, and that others are still making new PDP-11 CPUs.
My corrections were reverted, and I was asked for citations. I went
through a silly loop of requests for sources for my claims, while there
seems to have been no demand for citation for the original claims, more
than the "knowledge" of someone. It wasn't until I dug up the system
manuals and documentation from DEC about the PDP-11/93 and PDP-11/94
(which have actual time of original publishing date printed) that my
claims were (somewhat) accepted.
I've also had numerous fights about the Wikipedia articles about virtual
memory, where the original authors on the article clearly had not
understood the difference between virtual memory and demand paged
memory. The articles are better today, but when I last looked, they
still had some details wrong in there. And getting anything corrected is
hell, as any silly statement that is already in is almost regarded as
gospel, and anything you try to correct is questioned to hell and back
before anyone will accept it. (Hey, according to Wikipedia, a PDP-11 does
not have virtual memory... I wonder what it has then. Fake memory?
Although, the article might now actually accept that a PDP-11 does have
virtual memory, although no OS I know of implements demand paging, but
that could be done as well, if anyone wanted to.)
Nowadays, I use Wikipedia to find information, but just take everything
in there with a large grain of salt when it comes to details. There are
just too many ignorant people writing stuff who seem to get
anything accepted, and it is too much hassle to get anything corrected when
you actually know the subject.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Perhaps Wikipedia would be satisfied if we could get
Ken to write a letter to some current published journal,
saying that he's the one who named cron, he's heard
people are interested in how it got that name, here's
how. We could then cite that as a reference.
On the other hand, this may be an example of the
degree to which one should trust Wikipedia. The
`command run on notice' acronym claim is backed up
by an article from the AUUG (Hi Warren!) Proceedings,
1994, in which the first reference to cron gives
that explanation with no further reference.
If that's the quality of reference they accept, there
is simply no reason to take anything they publish
as gospel. Sorry.
Norman Wilson
Toronto ON
Proud that no one has yet made a spurious Wikipedia
page asserting the etymology of my personal domain
name.
Larry McVoy:
As a guy who has donated money to Wikipedia this whole thread makes
me not want to donate again. Just me being grumpy.
====
Me too.
Perhaps we should start our own online encyclopedia.
In the Ken tradition we could call it pedi.
(pdia sounds better, but pdia.org is already taken.)
Norman Wilson
Toronto ON
I had never doubted that "cron" was a contraction of "chrono-".
Wikipedia, however, offered several folk acronyms on a par
with it. Brian asked Ken, who confirmed,
"cron comes from the prefix (greek?) for time.
it should have been chron, but i never could spell."
I edited Wikipedia to expunge the nonsense. Amusingly that
makes the article less "verifiable" because there had been
literature citations for the nonsense, but there is none
for the fact.
Doug
All,
I am in the process of gaining a deeper understanding of PDP-11 machine
instructions and how the bootstrap loader and its cousins function. As
part of that process, I am analyzing the code. I am concurrently working
through the DEC bootstrap loader and the bootstrap loader that is
described in the v6 documentation. The DEC bootstrap loader, while
fascinating and elegant, is relatively straightforward, given its
enormous range and the fact that it is self-modifying. I wrote up my
preliminary notes here:
http://decuser.blogspot.com/2015/12/analysis-of-pdp-11-bootloader-code.html
The code that is in the v6 documentation on the other hand is not
yielding easily to reasonable interpretation and I was hoping y'all
might be able to shed some light on how it works.
The following is the TU10 (TM11) bootstrap code from "Setting Up Unix -
Sixth Edition":
TU10
012700
172526
010040
012740
060003
000777
The author's notes around the code are:
The tape should move and the CPU loop. (The TU10 code is not the DEC
bulk ROM for tape; it reads block 0, not block 1.)
Halt and restart the CPU at 0. The tape should rewind. The console
should type ‘=’.
Of course, following the instructions results in a successful outcome,
but understanding what is happening is difficult given that this is a
virtual environment and no discernible tape movement can be detected.
My attempt at interpretation is along the following lines; I
manufactured the disassembly based on my reading of the PDP-11/40
handbook and the machine codes:
012700 MOV #172526, R0 ; moves the TM11 Current Memory Address Register (MTCMA) address into R0
172526 ; the immediate operand
010040 MOV R0,-(R0) ; moves the contents of R0, 172526, into memory location 172524, the TM11 Byte Record Counter (MTBRC)
012740 MOV #60003,-(R0) ; moves 60003 into memory location 172522, the TM11 Command Register (MTC)
060003 ; immediate data
000777 HALT
This seems like gobbledegook to me. It moves the MTCMA (Magtape Current
Memory Address) into R0, then it moves the MTCMA into the MTBRC (Magtape
Byte Record Count), then it moves 60003 into the MTC (Magtape control
register), which causes a read operation with 800BPI 9 Channel density.
172526 is -5252 in 2's complement.
Am I misinterpreting the byte codes or is this some idiosyncratic way to
get the Magnetic tape to rewind or something (the TM11 has a control
function to rewind, so it seems unlikely that this is the case, but I'm
mystified)?
I single stepped through the code in the simulator, and the TM11
registers appear to be pretty unobservable (examining these three
registers always displays 0's, but if I change from referencing the TM11
registers to another area of memory, say 100500 I see the values I would
expect to see as they are being moved from the registers into memory).
Thanks,
Will
Hi all, I've been in contact with Steven Schultz and I've set up a
mirror of his 2.11BSD patches at:
http://www.tuhs.org/Archive/PDP-11/Distributions/ucb/2.11BSD/Patches/
I have no "git fu", but it would be nice to have a Git repository
with all the sources fully patched. Oh, and new boot tapes :-)
I should ask Santa for it.
Cheers, Warren
Ok, not sure if anyone will want to do this but I've just compiled
ed.c from Unix v6 on Unix v5.
It's not much bigger than the assembled ed: from 1314 lines of C code,
the compiled executable is only 6518 bytes vs 4292 for the original. I
was looking at the source code and didn't see anything that the v5 cc
couldn't handle. I trimmed the source a bit, there's a function at the
end called getpid() which is commented out.
The comment says:
/* Get process ID routine if system call is unavailable. */
but my version of v5 does have that system call so it's all good.
It's been run a few times and it seems to work just fine. It may even
have a few more features than the v5 ed, I'm not sure yet :)
Mark
> From: Random832
> Not quite. On a stock V6 kernel, system call 30 (smdate/utime) maps to
> nullsys rather than nosys.
Oh, right. (Hadn't checked the number, assumed they used a new one for utime.)
Noel
> From: Random832
> Non-existent syscalls map to nosys, which sets u_error to 100 ... which
> causes the process to be sent a signal SIGSYS.
Oh, right, I'd forgotten that.
So, getting back to v6tar, I'll bet that if you try and use it to _read_ a TAR
file under V6 (i.e. write files into the V6 filesystem), it will bomb out
(because of the call to utime).
Noel
> From: Mark Longridge
> Not too sure about reversing getpid.o, but maybe possible with db?
Well, me, I'd just use 'od' - but then again, I have ucode for disassembling
PDP-11 octal! :-) (OK, OK, so a lot of the less common instructions I have
to look up! :-)
But, seriously, yeah, 'db' is probably the way to go.
FWIW, it's possible to get 'adb' running under V6 without much (any?) work,
too. Although maybe it needs the 'phototypesetter' C compiler? I'd have to
check...
There's also a 'cdb' running around (I found a copy on the 'Shoppa disks'),
which is basically 'db' but augmented with a few useful commands - maybe stack
backtrace, I don't recall the details; the documentation is on my V6, and I
don't feel like spinning it up just to look for that.
Noel
So, speaking of system calls that are missing in earlier versions of Unix,
that tickled a memory:
> From: Will Senn
> ... a special version of tar must be prepared to run on V6.
> The document goes on to describe a reasonable method to make v6tar on
> v7 and copy the binary over to the v6 system.
When I got tar running on my V6, I didn't know about this, and I took
a V7 tar and got it running myself, see here:
http://mercury.lcs.mit.edu/~jnc/tech/ImprovingV6.html#tar
One thing I found out while doing that is that tar uses the 'utime' system
call on V7 to set the file date, but i) V6 doesn't have utime() (although it
has smdate(), albeit commented out in the standard distro), and ii) on now
looking in src/cmd/tar and src/libc/v6 in the V7 distro, I don't see a
replacement version of utime().
As near as I can make out, 'v6tar' must be using the standard V7 version of
utime(), which I assume turns into a call to nosys() on V6 (returns an error);
tar doesn't check the return value, so the call fails (silently). So v6tar
won't correctly set the file date when moving a file _to_ V6.
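The silent-failure pattern described here is easy to illustrate; a hypothetical sketch (modern Python, not the actual v6tar source — the names restore_mtime and kernel_has_utime are mine):

```python
# Hypothetical sketch of the failure mode: tar restores a file's date
# with utime() but never checks the result, so when the kernel lacks a
# working utime (whether it errors or silently no-ops) the file keeps
# its current timestamp and nobody notices.
import os
import tempfile

def restore_mtime(path, mtime, kernel_has_utime=True):
    """Stand-in for an unchecked utime() call."""
    try:
        if not kernel_has_utime:
            raise OSError("utime: system call not implemented")
        os.utime(path, (mtime, mtime))
    except OSError:
        pass   # the return value is ignored, as in the tar described above

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

restore_mtime(path, 0, kernel_has_utime=False)
print(os.stat(path).st_mtime != 0.0)   # True: the old timestamp survives
```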
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> if one looks at /lib/libc.a via 'ar t getpid.o' you can see the object
> file getpid.o
Library, schlibrary! The important question is 'is it in the kernel source'?
(Although now that I think about it, if the library routine tries to use a
non-existent system call, it should return an error. It would be interesting
to disassemble the library routine, and see what it thinks it is doing.)
Noel
On 2015-12-12 06:31, Peter Jeremy <peter(a)rulingia.com> wrote:
> Also, I've seen suggestions that there's a 2.11BSD patch later than
> 447 but I can't find anything "official" and www.2bsd.com is either
> down or inaccessible from all the systems I have access to. Does
> anyone know if 448 or later were released? And given the issues with
> www.2bsd.com would someone be willing to mirror it (assuming we can
> get a copy of it)?
Yes, I did 448. Various bits and pieces that were fixed there, but
unfortunately I haven't managed to reach Steve to get it officially
sanctioned.
I've passed it out a few times, but there is no canonical place for it.
You can find it at ftp://ftp.update.uu.se/pub/pdp11/2.11bsd/
Feel free to pass that information around.
Things fixed in there:
. Added a timeout to boot prompt for automatic boot
. Made console 8-bit clean
. Changed gethnamadr to fall back to /etc/hosts if dns fails.
. Fixed kernel build process to get version and date properly into kernel.
. Fixed raboot to work on non-DEC mscp controllers
. Fixed tmac macro to work correctly after 2009.
. Fixed a couple of documentation errors.
Essentially small issues that bothered me as I was running on an 11/84
with a CMD controller a few years ago. A system on which I also booted
other OSes, which is why the 8-bit clean issue also bothered me.
(Also was really surprised at the ugly Y2K fix someone had done with
tmac, which failed again in 2010).
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Ken Iverson was born in 1920; he is (in)famous for inventing APL. And if
you haven't used APL\360 on a dumb Kleinschmidt[*] terminal, you didn't
miss anything.
[*]
And that's the first time I've seen a spell-checker suggest that it be
replaced with "Consummated".
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Some time ago, someone posted an early Unix image that I recall
running. I know it was pre-groups but don't recall anything else, and
I can't find either the images or mailing-list references, either
locally or on tuhs.org. Does anyone recall the details?
Also, I've seen suggestions that there's a 2.11BSD patch later than
447 but I can't find anything "official" and www.2bsd.com is either
down or inaccessible from all the systems I have access to. Does
anyone know if 448 or later were released? And given the issues with
www.2bsd.com would someone be willing to mirror it (assuming we can
get a copy of it)?
--
Peter Jeremy
> I got it.
Ta muchly! All seems OK now, after TUHS moved to a new ISP (linode,
which, ahem, is known for hosting spammers).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
1 2 3... Is this mic on? Tap tap...
Seriously, my anti-spam defences were having an issue with this list for a
while, so let's see whether it comes back.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Could whoever runs this broken mirror please fix the damned mailer so that
it handles my RFC-compliant banner? I do not appreciate retries every
five seconds or so, because Dovecot cannot seem to handle a multi-line
SMTP banner (a great spam defence); I have since firewalled the IP address
of 45.79.103.53 out of self-defence.
Thank you.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
All,
I'm stuck trying to determine what is going on with v6tar on v6. It
seems to work ok for files, but gets confused with subdirectories. I set
up a test folder structure:
t/dmr/vs.c
t/dmr/vt.c
t/ken/prf.c
then I created a tarball
tar cvf t.tar t
then I tried to extract the tarball. It made a mess:
# tar xvf t.tar
Tar: blocksize = 17
y ?
tar: t/ken/prf.c - cannot create
y ?
y ?
tar: t/dmr/vs.c - cannot create
y ?
y ?
tar: t/dmr/vt.c - cannot create
That was ugly, and that was all of the output. What exactly did I wind up with?
# ls -l
total 19
drwxrwxrwx 2 root 32 Oct 10 12:54 y
-rw-rw-rw- 1 root 8704 Oct 10 12:54 t.tar
Ugh. Probably don't need the y directory...
# rmdir y
y ?
# ls y
y not found
Wow! It appears that I am unable to delete the y directory or list it by
name. That can't be good. Any ideas on how to remove this directory are
welcome.
Not to be deterred by one small failure, I copied the same tarball over
to v7 on the off chance that maybe v6tar isn't really for v6, but more
for moving files (and directories) over to v7 as Haley and Ritchie
describe. Lo and behold, tar on v7 is able to extract both files and
directories from the same tarball without any trouble:
# tar xvf t.tar
Tar: blocksize = 17
x t/ken/prf.c, 2301 bytes, 5 tape blocks
x t/dmr/vs.c, 1543 bytes, 4 tape blocks
x t/dmr/vt.c, 834 bytes, 2 tape blocks
# ls -l
total 18
drwxrwxr-x 4 root 64 Dec 31 19:27 t
-rw-rw-r-- 1 root 8704 Dec 31 19:27 t.tar
# ls t
dmr
ken
# ls t/dmr
vs.c
vt.c
# ls t/ken
prf.c
Interesting. After looking at the tar source, the question marks in the
output appear to be coming from somewhere outside of tar (perhaps mkdir
or chown?). Also, the "cannot create" message comes from the following
snippet of the tar source, which looks reasonable:
	...
	if ((ofile = creat(dblock.dbuf.name, stbuf.st_mode & 07777)) < 0) {
		fprintf(stderr, "tar: %s - cannot create\n",
			dblock.dbuf.name);
	...
I think this error is simply an effect related to the failure to create
the necessary directories properly. The code to do that looks pretty
straightforward:
checkdir(name)
register char *name;
{
	register char *cp;
	int i;

	for (cp = name; *cp; cp++) {
		if (*cp == '/') {
			*cp = '\0';
			if (access(name, 01) < 0) {
				if (fork() == 0) {
					execl("/bin/mkdir", "mkdir", name, 0);
					execl("/usr/bin/mkdir", "mkdir", name, 0);
					fprintf(stderr, "tar: cannot find mkdir!\n");
					done(0);
				}
				while (wait(&i) >= 0);
				chown(name, stbuf.st_uid, stbuf.st_gid);
			}
			*cp = '/';
		}
	}
}
I speculate that chown is causing the "?" to be displayed. Is it safe
enough for me to add printf statements around this code to see what's
going on, or is there a better approach?
Thanks,
Will
/dev/makefile on the V7 distribution tape (or at least the
unpacked image I have that I believe to be same) says:
ht:
/etc/mknod mt0 b 7 64
/etc/mknod mt1 b 7 0
/etc/mknod rmt0 c 15 64
/etc/mknod rmt1 c 15 0
/etc/mknod nrmt0 c 15 192
/etc/mknod nrmt1 c 15 128
chmod go+w mt0 mt1 rmt0 rmt1 nrmt0 nrmt1
According to /usr/sys/dev/ht.c, the minor device
number was used as follows:
minor(dev) & 07 slave unit number
minor(dev) & 070 controller unit number
minor(dev) & 0100 tape density: set == 800 bpi, clear == 1600
minor(dev) & 0200 no-rewind flag
It takes some digging in the source code (and the PDP-11
Peripherals Handbook) to understand all this numerology.
In most of the code, minor(dev) & 077 is just treated as
a unit number (fair enough). The use of 0200 appears only
as a magic number in htopen; that of 0100 only as a magic
number in htstart, and that only implied: the test is
not minor(dev) & 0100, but
unit = minor(bp->b_dev) & 0177;
if(unit > 077)
Not so bad when the whole driver is only 376 lines of code,
but it wouldn't have hurt to make it 400 lines if that
meant fewer magic numbers.
Anyway, what all this means is that /dev/*mt0 and /dev/*mt1
both actually meant slave 0 on TU16 controller 0, but mt0
was 800 bpi and mt1 1600 bpi. Hence, I would guess, tar's
default to mt1.
My first exposure to the insides of UNIX was in the High
Energy Physics group at Caltech. Some of our systems had
multiple tape drives and every drive supported multiple
densities, so we invented for ourselves a system like that
many other sites invented, with names like /dev/rmt3h to
mean the third tape drive at high density. (Hence the
USG naming scheme of /dev/rmt/3h and the like--not that
we taught it to them, just that many places had the same
idea.)
Our world wasn't nearly as exciting as that of our neighbors,
across the building and three floors down, in the Space
Radiation Laboratory. They had a huge room full of racks
of magtapes full of data from satellites, and many locally-
written tools for extracting the data so researchers could
work on it. The hardware was an 11/70 with eight tape drives,
and at any given time at least half of the drives would be
spinning. One of the drives was seven-track rather than
nine-track, because some of the satellite data had been
written in that format.
Fair disclosure: I had a vague memory that the `drive number'
in the device name had been recycled for other purposes,
but couldn't remember whether it was density or something
else. (I'm a little surprised none of the other old-timers
here remembered that, but maybe I worked with tapes more than
them.) But I had to dig into the source code for the details;
I didn't remember all that. And I did have to climb up to the
high shelf in my home office for a Peripherals Handbook to
understand the magic numbers being stuffed into registers!
Norman Wilson
Toronto ON
> I have no memory of why Ken used mt1 not mt0. Doug may know.
I don't know either. Come to think of it, I can't remember ever
using tar without option -f. Direct machine-to-machine transfer,
e.g. by uucp, took a lot of business away from magtape soon
after tar was introduced. Incidentally, I think tar was written
by Chuck Haley or Greg Chesson, not Ken.
Doug
On 2015-12-12 07:16, William Pechter <pechter(a)gmail.com> wrote:
>
> Warren Toomey wrote:
> > On Sat, Dec 12, 2015 at 03:54:16PM +1100, Peter Jeremy wrote:
> > > Also, I've seen suggestions that there's a 2.11BSD patch later than
> > > 447 but I can't find anything "official" and www.2bsd.com is either
> > > down or inaccessible from all the systems I have access to. Does
> > > anyone know if 448 or later were released? And given the issues with
> > > www.2bsd.com would someone be willing to mirror it (assuming we can
> > > get a copy of it)?
> > [ Back to a real keyboard ]. Yes I'd be very happy to mirror 2bsd.com.
> > Does anybody know what's happened to Steven Schultz?
> >
> > Cheers, Warren
> > _______________________________________________
> > TUHS mailing list
> > TUHS(a)minnie.tuhs.org
> > http://minnie.tuhs.org/cgi-bin/mailman/listinfo/tuhs
> Last patch is 447 from June 2012.
Uh. No. 447 is from December 31, 2008.
See /VERSION in the patch set, which holds the patch version and date
for the patch.
And I did an unofficial 448 in 2010, which I have tried to spread, and
which I suspect is the patch referred to above...
> I can get to the site just fine... pasted the patch below if it helps
> anyone.
> I haven't heard anything about him. Haven't worked at the same company
> since the early 1990's...
I used to talk with him a lot in the past, but have not been able to
raise him, and haven't seen anything from him in over 5 years... No idea
what he is up to nowadays...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Random832
> Interestingly, the SysIII sum.c program, which I assume yields the same
> result for this input, appears to go through the whole input
> accumulating the sum of all the bytes into a long, then adds the two
> halves of the long at the end rather than after every byte.
That's the same hack a lot of TCP/IP checksums routines used on machines with
longer words; add the words, then fold the result in the shorter length at the
end. The one I wrote for the 68K back in '84 did that.
> This suggests that the two programs would give different results for
> very large files that overflow a 32-bit value.
No, I don't think so, depending on the exact details of the implementation. As
long as when folding the two halves together, you add any carry into the sum,
you get the same result as doing it into a 16-bit sum. (If my memory of how
this all works is correct - the neurons aren't what they used to be,
especially late in the day... :-)
> Also, if this sign extends, then its behavior on "negative" (high bit
> set) bytes is likely to be very different from the SysIII one, which
> uses getc.
I have this bit set that in C, 'char' is defined to be signed, and
furthermore that when you assign a shorter int to a longer one, the sign is
extended. So if one has a char holding '0200' octal (i.e. -128), assigning it
to a 16-bit int should result in the latter holding '0177600' (i.e. still
-128). So in fact I think they probably act the same.
Noel
> From: Will Senn
> I noticed that the sum utility from v6 reports a different checksum
> than it does using the sum utility from v7 for the same file.
> ... does anyone know what's going on here?
> Why is sum reporting different checksum's between v6 and v7?
The two use different algorithms to accumulate the sum (I have added comments
to the relevant portion of the V6 assembler one, to help understand it):
V6:
mov $buf,r2 / Pointer to buffer in R2
2: movb (r2)+,r4 / Get new byte into R4 (sign extends!)
add r4,r5 / Add to running sum
adc r5 / If overflow, add carry into low end of sum
sob r0,2b / If any bytes left, go around again
Read the description of MOVB in the PDP-11 Processor manual.
V7:
	while ((c = getc(f)) != EOF) {
		nbytes++;
		if (sum&01)
			sum = (sum>>1) + 0x8000;
		else
			sum >>= 1;
		sum += c;
		sum &= 0xFFFF;
	}
I'm not clear on some of that, so I'll leave its exact workings as an
exercise, but I'm pretty sure it's not an equivalent algorithm (as in,
something that would produce the same results); it's certainly not
identical. (The right shift is basically a rotate, so it's not a straight sum;
it's more like the Fletcher checksum used by XNS, if anyone remembers that.)
Among the parts I don't get, for instance, sum is declared as 'unsigned',
presumably 16 bits, so the last line _should_ be a NOP!? Also, with 'c' being
implicitly declared as an 'int', does the assignment sign extend? I have this
vague memory that it does. And does the right shift if the low bit is one
really do what the code seems to indicate it does? I have this bit set that ASR on
the PDP-11 copies the high bit, not shifts in a 0 (check the processor
manual). That is, of course, assuming that the compiler implements the '>>'
with an ASR, not a ROR followed by a clear of the high bit, or something.
Noel
Ok, it definitely sounds like the v6tar source is around somewhere so
if someone could point me in the right direction...
I've only seen the binary, and I can't remember where I got it.
Mark
All,
While working on the latest episode of my saga about moving files
between v6 and v7, I noticed that the sum utility from v6 reports a
different checksum than it does using the sum utility from v7 for the
same file. To confirm, I did the following on both systems:
# echo "Hello, World" > hi.txt
# cat hi.txt
Hello, World
Then on v6:
# sum hi.txt
1106 1
But on v7:
# sum hi.txt
37264 1
There is no man page for the utility on v6, and it's assembler. On v7,
there's a manpage and it's C:
man sum
...
Sum calculates and prints a 16-bit checksum for the named
file, and also prints the number of blocks in the file.
...
A few questions:
1. I'll eventually be able to read assembly and learn what the v6
utility is doing the hard way, but does anyone know what's going on here?
2. Why is sum reporting different checksum's between v6 and v7?
3. Do you know of an alternative to check that the bytes were
transferred exactly? I used od and then compared the text representation
of the bytes on the host using diff (other than differences in output
between v6 and v7 related to duplicate lines, it worked ok but is clunky).
Thanks,
Will
All,
In my exploration of v6, I followed the advice in "Setting up Unix -
Seventh Edition" and copied v6tar from v7 to v6. Life is good. However,
tar is using mt1 and it is hard coded into the source, tar.c:
char magtape[] = "/dev/mt1";
As the subject line suggested, I have two questions for those of you who
might know:
1. Why is it hard coded?
2. Why is it the second device and not the first?
Interestingly, it took me a little while to figure out it was doing this
because I didn't actually move files between v6 and v7 until today.
Before this my tests had been limited to separate tests on v6 and v7
along the lines of:
cd /wherever
tar c .
followed by
tar t
list of files
cd /elsewhere
tar x
files extracted and matching
What it was doing was writing to the non-existent /dev/mt1, which it
then created, tarring up stuff, and exiting. Then when I listed the
contents of the tarfile, or extracted the contents, it was successful.
But, when I went to move the tape between v6 and v7, the tape (mt0) was
blank, of course. It was at this point that I followed Noel's advice
and "Used the source", and figured out that it was hard-coded as you see
above.
Thanks,
Will
That's exactly right. ld performs the same task as LOAD did on BESYS,
except it builds the result in the file system rather than user
space. Over time it became clear that "linker" would be a better
term, but that didn't warrant canning the old name. Gresham's law
then came into play and saddled us with the ponderous and
misleading term, "link editor".
Doug
> My understanding, which predates my contact with Unix, is that the
> original toochains for single-job machines consisted of the assembler
> or compiler, the output of which was loaded directly into core with
> the loader. As things became more complicated (and slow), it made
> sense to store the memory image somewhere on drum, and then load that
> image directly when you wanted to run it. And that in some systems
> the name "loader" stuck, even though it no longer loaded. Something
> like the modern ISP use of the term "modem" to mean "router". But I
> don't have anything to back up this version; comments welcome.
> estabur (who thought these names up, I know 8 characters is limiting,
> but c'mon)
'establish user mode registers'
> the 411 header is read by a loader
Actually, it's read by the exec() system call (in sys1.c).
Noel
> From: Dave Horsfall
> I love those PDP-11 instructions, such as "blos" and "sob" :-)
Yes, but alas, there is no 'jump [on] no carry' instruction! (Yes, yes, I
know about BCC! :-) Although I guess the x86 has one...
Noel
> Yes the V6 kernel runs in split I and D mode, but it doesn't end up
> supporting any more data. I.e. the kernel is still a 407 (or 410) file.
> _etext/_edata/_end are still referencing the same 64K space.
Err, actually, not really.
The thing is that to build the split-I/D kernel, one sets the linker to
produce an output file which still contains the relocation bits. That is then
post-processed by 'sysfix', which does wierd magic (moves the text up to
020000, in terms of address space; and puts the data _below_ the text, in the
actual output file). So while the files concerned may have a '407' in their
header, they definitely aren't what one normally finds in a linked 407 or 410
file.
In particular, data addresses start at 0, and can run up to 0140000 (i.e. up
to 56KB), while text addresses start at 020000 and can run up to 0160000. So,
_etext/_edata/_end are not, in fact, in the same 64K space. And the total of
data (initialized and un-initialized) together with the text can be much
larger than 64KB - up to 112KB or so.
Noel
J.F. Ossanna (jfo) was born in 1928; he helped give us Unix, and developed
the ROFF series (which I still use).
And Ada Lovelace, the world's first computer programmer, was coded in 1815.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Ronald Natalie
> I'm pretty sure the V6 kernel didn't run in split I/D.
Nope. From 'SETTING UP UNIX - Sixth Edition':
"Another difference is that in 11/45 and 11/70 systems the instruction and
data spaces are separated inside UNIX itself."
And if you don't believe that, check out:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
the source! ;-)
> It wasn't too involved of a change to make a split I/D kernel.
> Mike Muuss and his crew at JHU did it.
Maybe you're remembering the process on a pre-V6 system?
> We spent more time getting the bootstrap to work than anything else I
> recall.
It's possible you're remembering that, as distributed, V6 didn't support load
images where the text+initialized-data was larger than 24KW-delta; it would
have been pretty easy to up that to 28KW-delta (change a parameter in the
bootstrap source, and re-assemble), but after that, the V6 bootstrap would
have had to have been extensively re-worked.
And there were _also_ a variety of issues with handling maximal large images
in the startup code. Once operating, the kernel has segments KI1-KI7 available
to hold the system's code; however, it's not clear that all of KI1-7 are
really usable, since the system can't 'see' enough code while in the code
relocation phase in the startup to fill them all. E.g. during code relocation,
KI7 is ripped off to hold a pointer to I/O space (since KD7 is set to point to
low memory just after the memory that KD6 points to).
These might have been issues in systems which were ARPANET-connected (i.e.
ran NCP), as that added a very large amount of code to the kernel.
Noel
> From: Will Senn
> my now handy-dandy PDP11/40 processor handbook
That's good for the instruction set, but for the memory management hardware,
etc you'll really want one of the {/44, /45, /70, /73, etc} group, since only
those models support split I+D.
> the 18 bits holding the word 000407
You mean '16 bits', right? :-)
> This means that branches are to 9th, 10th, 11th and 7th words,
> respectively. It'll be a while before I really understand what the
> ramifications are.
Only the '407' is functional. (IIRC, in early UNIX versions, the OS didn't
strip the header on loading a new program into memory, so the 407 was actually
executed.) The others are just magic numbers, inspired by the '407' - the
code always starts at byte 020 in the file.
> Oh and by the way, jumping between octal and decimal is weird, but
> convenient once you get the hang of it - 512 is 1000, which is nifty
> and makes finding buffer boundaries in an octal dump easy :).
The _real_ reason octal is optimal for PDP-11 is that when looking at core,
most instructions are more easily understood in octal, because the PDP-11 has
8 registers (3 bits), and 3 bits worth of mode modifier, and the fields are
aligned to show up in octal.
I.e. in the double-op instruction '0awxyz', the 'a' octit gives the opcode,
'w' is the mode for the first operand, 'x' is the register for the first
operand, and 'y' and 'z' similarly for the second operand. So '12700' is
'MOV (PC)+, R0' - AKA 'MOV #bbb, R0', where 'bbb' is the contents of the word
after the '12700'.
Noel
> From: Will Senn <will.senn(a)gmail.com>
> The problem is this, when I attempt to execute the v6tar binary on the
> v6 system (it works in v7) it errors out:
> v6tar
> v6tar: too large
That's an error message from the shell; the exec() call on the command
('v6tar') is returning an ENOMEM error. Looking in the kernel, that comes from
estabur() in main.c; there are a number of potential causes, but the most
likely is that 'v6tar' is linked to be split I+D, and your V6 emulation is on
a machine that doesn't have split I+D (e.g. an 11/40). If that's not it,
please dump the a.out header of 'v6tar', so we can work out what's causing the
ENOMEM.
Noel
> From: Will Senn
> Thanks for supplying the logic trail you followed as well!
"Use the source, Luke!" This is particularly true on V6, where it's assumed
that recourse to the source (which is always at hand - long before RMS and
'Free Software', mind) will be an early step.
> when you say dump the a.out header, how do you do that?
On vanilla V6? Hmm. On a system with 'more' (hint, hint ;-), I'd just do 'od
{file} | more', and stop it after the first page. Without 'more', I'd probably
send the 'od' output to a file, and use 'ed' to look at the first couple of
lines.
Back in the day, of course, on a (slow) printing terminal, one could have just
said 'od', and aborted it after the first couple of lines. These days, with
video terminals, 'more' is kind of really necessary. Grab the one off my V6
Unix site, it's V6-ready (should be a compile-and-go).
Noel
> From: Mark Longridge
> I've never been able to transfer any file larger than 64K to Unix V5 or
> V6.
Huh?
# hrd /windows/desktop/TheMachineStops.htm Mach.htm
Xfer complete: 155+38
# l Mach.htm
154 -rw-rw-r-- 1 root 78251 Oct 25 12:13 Mach.htm
#
'more' shows that the contents are all there, and fine. ('hrd' is a command
in my V6 under Ersatz11 that reads an arbitrary file off the host file
system. Guess I need to set the date on the system!)
V6 definitely supports fairly large files; see the code in bmap() in subr.c,
which shows that the basic structure on disk can describe files of 7*256
(1792) + 256*256 (65536) blocks, or 67328 blocks total (34MB).
(In reality, of course, a file can't reach that limit; first, a disk
partition in V6 is limited to 64K blocks, but from that one has to deduct
blocks for the ilist, etc; further, the argument to bmap() is an int, which
limits the 'block number in file' to 16 bits, and in fact the code returns an
error if the high bit in the 'block number in file' is set.)
> I also don't recall seeing any file on V5 or V6 larger than 65536
> bytes
I don't think there is one; the largest are just less than 64KB. I don't
think this is deliberate, other than in the sense that they didn't put any
huge files in the distro so it would fit on a couple of RK packs.
> dd if=/dev/mt0 of=cont.a bs=1 count=90212
> ..
> 24676+0 records in
> 24676+0 records out
> Now, if we take 90212 and subtract 65536 we get 24676 bytes. So there
> definitely seems to be some 64K limit here
Probably 'count' is an 'int' in dd, i.e. limited to 16 bits. No longs in V6 C
(as distributed, although later versions of the C compiler for V6 do support
longs - see my 'bringing up Unix' pages).
Noel
> From: Noel Chiappa
> the most likely is that 'v6tar' is linked to be split I+D, and your V6
> emulation is on a machine that doesn't have split I+D (e.g. an 11/40)
Now that I think about it, the linked systems that are part of the V6 distro
tape are all linked to run on an 11/40. They will boot and run OK on a more
powerful machine (/45 or /70), but they will act like they are on a /40 -
i.e. no split I+D support/use (user or kernel). So to get split I+D support,
you need to build a new Unix binary, with m45.s instead of m40.s. If you
haven't done that, that's probably what the problem is.
Aside: V6 comes in two flavours: no split I+D at all, or split I+D in both
the kernel and user. For some reason that I can't recall, we actually
produced an 'm43.s', BITD at MIT, which ran the kernel in non-split-I-D, but
supported split I-D for the users.
I wish I could remember why we did this - it couldn't have been to save
memory (the machine didn't have a great deal on it when this was done -
although I have this vague memory that that was why we did it), because
running split I+D in the kernel does not, I think, use any more physical
memory (provided you don't fiddle with the parameters like the number of
buffers) than running non-split. Or maybe it does?
One possible reason was that the odd layout of memory with split I+D in the
kernel made debugging kernel code harder (we were doing a lot of kernel
hacking to support early networking work); another was that we were just being
conservative, didn't need to extra space in the kernel that I+D allowed, and
so didn't want to run it.
Noel
All, in the next few days I'm migrating minnie.tuhs.org from one VM to
another, so as to upgrade the OS and clean out the system. I think I've
got the mail subsystem up and running, but as usual there may be bugs.
I'll send out another message when the system is cut over. If things
don't seem to be right, e-mail me at:
wkt at tuhs.org, or
warren.toomey at tafe.qld.edu.au if the tuhs.org one fails.
Cheers all, Warren
On Tue, 8 Dec 2015, Brantley Coile wrote:
> We were indeed lucky that admiral hooper was with us. I know people who
> still cherish their "nano" seconds.
Ah yes, the 1ft piece of wire... Got a photo of it?
> By the way, she wouldn't have said she coined the term "debugging". That
> is at least as old as Thomas Edison. She said she was the first to a
> actually find a real bug!
For those who may be new around here:
https://en.wikipedia.org/wiki/Grace_Hopper#/media/File:H96566k.jpg
Yes, that is a real bug, found inside a real computer.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
All,
According to "Setting Up Unix - Seventh Edition", by Haley and Ritchie:
The best way to convert file systems from 6th edition (V6) to 7th
edition (V7) format is to use tar(1). However, a special version of tar
must be prepared to run on V6.
The document goes on to describe a reasonable method to make v6tar on v7
and copy the binary over to the v6 system. I successfully built the
v6tar binary, which will execute in the v7 environment. I then moved it
over to the v6 system and did a byte compare on the file using od to
dump the octal bytes and then comparing them to the v7 version. The
match was perfect.
The problem is this, when I attempt to execute the v6tar binary on the
v6 system (it works in v7) it errors out:
v6tar
v6tar: too large
on the v7 system, it works:
v6tar
tar: usage tar -{txru}[cvfblm] [tapefile] [blocksize] file1 file2...
I don't think the binary is too large, it is only 18148 bytes.
ls -l v6tar
-rwxrwxrwx 1 root 18148 Oct 10 14:09 v6tar
Help. First, what does too large mean? Second, does this sound familiar
to anyone? etc.
Thanks,
Will
OK, slightly OT...
Rear Admiral Grace ("Amazing") Hopper PhD was given unto us in 1906. She
was famous for coining the term "debugging", whereby a moth was removed
from a relay contact in a *real* computer[*].
However, she must be condemned for giving us COBOL; yes, I know that vile
language, but I carefully leave it off my CV, as it seemed to be designed
for suits (Business Studies of course, but nothing technical) to spy upon
their programmers.
[*]
Defined, of course, where you could open a door and step inside it; I
actually did that once.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> It might not be so much a set of macros as just using a
> subset of raw groff.
Yes, there were no macros back then. If you format the
document using raw groff, the odds are that you will be
speaking the same roff that Dennis did.
> Doug having been there, might know/remember the actually lineage.
Aside from some fuzziness about who wrote what and in what
language, here's what happened:
To port Jerry Saltzer's Runoff (presumably written in MAD)
to Multics, either Dennis or Bob Morris or both together
reimplemented it (presumably in PL/I). To coexist with
Saltzer's version on CTSS, the new program needed a
distinct name, hence roff.
The early Multics PL/I compiler was far from a production
tool. Justifiably, the Bell Labs comp center didn't
support it. To get roff into general use at the Labs,
I undertook yet another implementation in BCPL. I added
functionality (number registers, three-part headings, etc)
and kept the new name. Molly Wagner added hyphenation.
Eventually, I added macros that were usable either as
commands or (when parameterless) embedded in text.
Almost as soon as Unix was up on the PDP-11 one of Ken, Dennis
or Ossanna reimplemented a pre-macro version of roff (presumably
in assembler or B). I'm quite sure roff never ran on the PDP-7.
Ossanna had a grander plan and undertook nroff. When he learned
of the availability of the Graphic Systems CAT phototypesetter,
he promptly generalized nroff to handle it. Joe replaced the
CAT's paper tape reader with a direct wire to the computer.
It all worked swimmingly--nothing like the travails when the
CAT was replaced by the more capable Mergenthaler Linotron.
An interesting question of priority is whether nroff or
BCPL roff was first to have a macro capability. Though
I don't remember for sure, the fact that BCPL roff unified
registers, macros, strings and diversions suggests that
I abstracted from nroff facilities.
Doug
All,
In the same vein as my prior note, I have made a note available on the
process of getting up and running on Unix Seventh Edition in a SimH
PDP-11 environment. The text is located at:
http://decuser.blogspot.com/2015/12/installing-and-using-research-unix.html
I welcome comments, suggestions, and even criticisms.
While I have learned a lot since my last blog entry (many thanks to
Hellwig Geisse, Nelson Beebe, Noel Chiappa, Clement Cole and several
others), I am still learning about these environments. I originally
invested time in getting v7 running so that I could more easily work
with v6, after having gone there, I believe that it was time very well
spent. I know a lot more about special devices, tape formats, and so on
than I did before as a result of taking the fork in the road.
Thanks for everyone's help.
Oh, and by the way, there appears to be quite a bit of active interest
in this topic: the blog post has been viewed several thousand times
since I posted it two weeks ago.
Kind regards,
Will
I have set up v7 following [1] and would like to better understand the
process of adding a disk to the environment. Here is what I know:
The system has one RP06 with two partitions, rp0 and rp3, which
correspond to the two block devices rp0 and rp3 and the two character
devices rrp0 and rrp3. The special files look like this:
brw-r--r-- 1 root 6, 0 Dec 31 19:05 /dev/rp0
brw-r--r-- 1 root 6, 7 Dec 31 19:04 /dev/rp3
crw-r--r-- 1 root 14, 0 Dec 31 19:01 /dev/rrp0
crw-r--r-- 1 root 14, 7 Dec 31 19:01 /dev/rrp3
This meshes with the device switches in c.c.
The block device switch:

	struct bdevsw bdevsw[] =
	{
		...
		nulldev, nulldev, hpstrategy, &hptab,	/* hp = 6 */
		...
	};

The character device switch:

	struct cdevsw cdevsw[] =
	{
		...
		nulldev, nulldev, hpread, hpwrite, nodev, nulldev, 0,	/* hp = 14 */
		...
	};
I would like to add another RP disk to the environment. After I attach
an RP04/05/06 to the system, what should I use as the major/minor device
numbers? To put it differently, it doesn't seem correct to use 6,1 for
the block device or 14,1 for the character device on the new drive,
since it is a completely different disk from rp0 and rp3, which are just
partitions on the first drive (6,0 and 6,7 block; 14,0 and 14,7
character). If each RP can have 8 partitions and there can be 8 drives,
what are the correct major/minor numbers to use with v7 for multiple
drives? c.c lists only one entry each for the hp device (one block entry
where hp = 6, and one character entry where hp = 14).
Thanks,
Will
[1] Haley, C. B. & Ritchie, D. M. (1979). Setting Up Unix – Seventh
Edition (pp. 497-505) in UNIX programmer's manual, Vol. 2, Revised and
Expanded Version. Bell Laboratories: NY.
In exploring v6, I have found some uses for having a running v7 instance...
When I install the RP bootblock during the Version 7 Unix installation
procedure, following the original documentation:
ftp://ftp.uvsq.fr/pub/tuhs/PDP-11/Documentation/v7_setup.html
I am unable to boot from the RP06 disk onto which I installed the boot
block via:
dd if=/usr/mdec/hpuboot of=/dev/rp0 count=1
There is no error; it just hangs. I compared hpuboot to the bootblock
and it matches byte for byte. I also compared it to the hpuboot file in
Henry Spencer's tape image (I am using Keith Bostic's tape) and it
matches that as well.
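For anyone wanting to repeat that byte-for-byte check from the host side
(the filenames here are hypothetical placeholders: disk.img for the SimH
RP container, hpuboot for the bootstrap binary), one way is:

```shell
# Compare block 0 of the disk image against the bootstrap,
# for as many bytes as the bootstrap contains.
sz=$(wc -c < hpuboot | tr -d ' ')
dd if=disk.img bs=1 count="$sz" 2>/dev/null | cmp - hpuboot \
    && echo "bootblock matches"
```

This only confirms the block was written, of course; it says nothing
about whether the bootstrap itself is good for an RP06.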
I am asking this list because I thought y'all might know if there was a
problem with:
1) the hpuboot binary on the tapes
2) v7 using RP06
3) something else helpful :) (maybe it's not supposed to be loaded to
byte 0 on the disk image, although that's how it works with v6?)
I am aware that the system can be booted from tape, but that seems hokey
(obviously it works, since that's how the installation process works in
the first place, but I think it is reasonable to expect to be able to
boot from the RP06). Interestingly, there are RL02 and RK05 v7 images
available that boot from disk, but none for the RP.
I will ask on the SimH list, if y'all don't think it's appropriately
directed.
Thoughts?
Thanks,
Will
All,
I am studying Unix v6 using SimH and documenting the process as I go,
as part of my own learning. I have much to learn about Unix, Unix v6 in
particular, the PDP-11 architecture and its relationship with v6, and
SimH's emulation of the PDP-11, so I am taking notes. I thought I would
share these notes in raw form as occasional blog posts, in the hope that
the knowledge I work to obtain might be made available and useful to
others. I also believe that these forms of communication, as
insignificant as they may seem individually, help in the aggregate to
preserve the knowledge of our computing history. Here is a link to the
first post, a run at an installation walk-through:
http://decuser.blogspot.com/2015/11/installing-and-using-research-unix.html
I am open to feedback and criticism, but please keep in mind that I am a
relative newbie to v6 and PDP land; some of my assumptions are
undoubtedly wrong, but definitely fixable :).
Regards,
Will
> From: Will Senn <will.senn(a)gmail.com>
> a deeper read will require the reader to have knowledge beyond what is
> required of most modern software developers (PDP-11 architecture,
> assembly language, and UNIX are prerequisite).
Well, for pretty much any _operating system_ (as opposed to applications),
one will need to know something about the details of the machine it is
intended to run on; how much depends on which part of the OS one is
looking at. E.g. switching processes probably requires a fair amount,
since one needs to know about internal CPU registers, etc.; whereas to
work on the file system, one probably doesn't need to know very much
about the machine.
> It will also require access to a lab where the ideas covered can be
> experimented with.
Actually, Lions/V6 was used in operating systems courses using simulated
machines; one at MIT, 6.828 "Operating Systems Engineering":
https://pdos.csail.mit.edu/6.828/
used it for a while before the students started complaining about being
forced to learn an obsolete machine. The course staff thereupon wrote a
V6 clone for the x86 architecture, 'xv6' (see the top of that page),
which is apparently now used for similar courses at quite a few other
universities.
> The v6 kernel ... packs in features that were either unavailable in
> larger more established systems or may have been present in some form,
> but were orders of magnitude more lines of code and attendant
> complexity. It was and remains an amazing operating system and worthy
> of contemporary study.
I don't think you will find too many people here who disagree! ;-)
> So, I was thinking that next up, I would write up notes to help the
> modern reader engage with v6 more easily in order to follow works like
> Lyons.
Check around online to see what exists, first; there has been stuff written
since Lions! ;-)
Noel