It would be really appreciated if people replying to messages like this would
take 10 minutes (or so - that's about how long it took me to find the actual
answer to this person's question) to do some research, instead of just
replying off the top of their heads with whatever they happen to think they
know.
> From: Gavin Tersteeg
> I can't seem to get the kernel to actually link together once
> everything is compiled. When the final "ld -X" is executed, I always
> get the following errors:
> "Undefined:
> _end
> _edata
> _decmch"
The first two are automagically defined by the linker after a successful
read-through of the files+libraries, i.e. when there are no undefined
symbols. Ignore them; they will go away when you fix the problem.
The real issue is the missing 'decmch'. That apparently comes from decfd.c,
which I assume is the DEC floppy disk driver. Depending on the setting of the
EIS conditional compilation flag (a different flag for the C files than for
the PDP-11 assembler files, please note - and I haven't worked out what its
definition means: whether, if defined, it means the machine _does_ have the
EIS, or _does not_), it will be called for.
'decmch' is in low.s (rather oddly; that usually holds just interrupt vectors
and interrupt toeholds), conditionally assembled on:
.if DEC
.if EIS-1
The first line presumably adds it if there _is_ a DEC floppy disk; the
second one's sense I don't know _for sure_ (although I'm guessing that
EIS=1 means there _is_ an EIS chip, so this line says 'assemble if there
is _not_ an EIS chip').
Although you'd think that whatever calculation it is doing would be needed
whether there's an EIS chip or not, so with an EIS chip it must calculate it
some other way; you'll have to read decfd.c and see how it works.
Note that you'll have to make sure the EIS flags (note plural) are set
to the same sense, when compiling the C and assembler files...
Let me send this off, and I'll reply to some other points in a
separate message.
Noel
Hello, and greetings!
I guess as this is my first post here, I should give some background on
what I have been working on. Last summer I spent a lot of time getting UNIX
V6 working on my PDP-11/23 system. It took a lot of tinkering with the
kernel and drivers to make it work in the way I wanted to, but in the end I
was left with a system that ran well enough to do some demonstrations at
VCFMW.
This year I want to do more stuff with 1970s era UNIX, but now with even
more technical restrictions. I have had a Heathkit H-11 (consumer grade
PDP-11/03) for a while now, and I have been looking for something
interesting to do with it. From my research, it seems like there were two
different UNIX variants that could run on a system like this. These
variants were LSX and MINI-UNIX. MINI-UNIX seems to require a decent
mass-storage device like a RK05 and some porting to work on an 11/03, while
LSX is designed to work on exactly the hardware specs that I have on hand.
So on to the actual issues I am having at the moment: I have put together a
SIMH instance to get the ball rolling in building a kernel that will boot
on my EIS-less 11/03, but I am having significant difficulty actually
getting the kernel to build. The first issue is that the C compiler will
randomly spit out a "0: Missing temp file" when attempting to compile
something. This is annoying, but circumventable by just running the same
command over and over until it works. The second issue, however, is much
more of a road block for me. I can't seem to get the kernel to actually
link together once everything is compiled. When the final "ld -X" is
executed, I always get the following errors:
"
Undefined:
_end
_edata
_decmch
"
(This is from the build script found in the "shlsx" file)
https://minnie.tuhs.org/cgi-bin/utree.pl?file=LSX/sys/shlsx
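(The run-it-again workaround for the "Missing temp file" error can at
least be scripted. A sketch; the flaky step is simulated here, substitute
the real cc invocation:)

```shell
# Sketch: keep re-running a flaky build step until it succeeds.
# flaky_cc simulates a compiler that fails intermittently; in practice it
# would be the cc command that emits "0: Missing temp file".
flaky_cc() {
    if [ ! -f flaky.marker ]; then
        touch flaky.marker      # fail on the first attempt only
        return 1
    fi
    return 0
}
tries=0
until flaky_cc; do
    tries=$((tries + 1))
done
echo "succeeded after $tries retry(ies)"
```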
I am assuming that this is some sort of issue with the object file
orderings, but I simply do not know enough about V6 ld to properly debug
this issue. I am hoping that somebody here has already run into this issue,
and knows what I am doing wrong.
If I can get this working, I have a long laundry list of modifications and
experiments that I want to run with LSX, but as it stands, this is where I
am stuck.
Thank you,
Gavin
The interpretation of a string of addresses separated by commas and/or
semicolons was already defined in the v1 man page for ed.
Ed was essentially a stripped-down version of Multics qed. The latter
was originally
written by Ken. Unfortunately the "Multics Condensed Guide" online at
multicians.org describes how strings of addresses were interpreted
only by canonical examples for the various editing requests.
I have no specific memory of semicolons in qed. I have a vague
recollection that semicolons originated in ed, however you should put
no trust in this. Maybe Ken remembers.
Doug
All, Matt e-mailed this to me and the TUHS list, but it doesn't seem to
have made it through so I'm punting on a copy ...
Warren
----- Forwarded message from Matt Gilmore -----
Subject: Documents for UNIX Collections
Good afternoon everyone, my name is Matt Gilmore, and I recently worked with some folks here to help facilitate the scanning and release of the "Documents for UNIX" package as well as a few odds and ends pertinent to UNIX/TS 4.0. I've been researching pretty heavily the history of published memoranda and how they ultimately became the formal documents that Western Electric first published with UNIX/TS 5.0 and System V. Think the User's Guide, Graphics Guide, etc.
In my research, I've found that document sets in a similar spirit have been published since at least Research Version 6. I've been able to track down a few that are on the TUHS source archive in original *ROFF format (Links given as path in the tree to avoid hyperlink mangling):
Research V6: V6/usr/doc
Mini-UNIX: Mini-Unix/usr/doc
PWB/UNIX 1.0: PWB1/usr/man/man0/documents
(note, I'm not sure where the actual docs are, this is just a TOC, Operators Manual is in op in the base man folder)
Wollongong 7/32 Version: Interdata732/usr/doc (only 7/32 relevant docs, allegedly)
Research V7: V7/usr/doc
UNIX/32V: 32V/usr/doc
There are probably others, but these are the ones I'm aware of on the archive for Bell-aligned revisions prior to the commercialization of UNIX/TS as System III. On the note of System III, I seem to have an archive that is slightly different from what is on TUHS, namely in that it has this same documents collection. I can't find it in the System III section on the site, so I'm assuming it isn't hosted anywhere presently. One of the projects I'm working on (slowly) is comparing these documents with the 4.0 docs I scanned for Arnold and making edits to the *ROFF sources, with the hope that I could then use them to produce 1:1 clean copies of the 4.0 docs, while providing an easy means for diff'ing the documents as well (to ferret out changes between 3.0 and 4.0). Happy to provide this dump to Warren for comparison with what is currently hosted.
Usenix also published documentation sets for 4.2 and 4.3BSD in the '80s, which served the same purpose for BSD users. There seems to be a 4.4BSD set as well, although I haven't looked at it yet. I've got a random smattering between 4.2 and 4.3 of the comb-bound Usenix manuals, but I assume the 4.4 set is in a similar vein, with reference guides and supplementary documents. Looks like a lot of the same, but with added documents regarding developments at Berkeley.
Now for my reasons for mailing, there are a couple:
1. Is anyone aware of whether similar document sets were compiled for MERT, UNIX/RT, USG Program Generic, or CB-UNIX? Or would users of those systems have simply been referred to the collection most closely matching the version they're forked from?
2. Was there ever any document set in the nature of "Documents for UNIX", consisting of memoranda, published for 5.0/System V? Or did USG immediately begin by providing just the published trade manuals? The implication here is that if USG published no such documents, then Documents for UNIX 4.0 represents the last time USG compiled the memoranda as they were written (of course with version-related edits), with original authorship and references, as a documentation set.
3. Have there been any known efforts to analyze the history and authorship of these documents, explicitly denote errata and revisions, and map out the evolution of the system from a documentation perspective like this?
Thanks for any insight anyone can provide!
- Matt G.
P.S. I'd be interested in doing more preservation work, if anyone else has documents that need preserving, I'll happily coordinate shipment and scanning.
P.P.S. Ccing Warren, I don't know if I'm able to send emails to this list or not, so pardon the extraneous email if not necessary.
----- End forwarded message -----
Hoi,
via a recent message from Chris Pinnock to the list I became aware
of the book ``Ed Mastery'' by Michael W. Lucas. At once I bought
and read it. Although it is not on the mastery level that it claims,
and that I would have liked it to be, it still was fun to read.
This brought me back to my ed interest. I like ed a lot and despite
my young age, I've actually programmed with ed for fun and have
prepared the troff slides for a talk on early Unix tools (like ed)
with ed alone. I use the Heirloom version of ed.
Anyways, I wondered about the possibility of giving multiple
addresses ... more than two, for relative address searches.
For example, to print the context of the first occurrence of `argv'
within the main function, you can use:
/^main(/;/\<argv\>/-2;+4n
For the last occurrence it's even one level more:
/^main(/;/^}/;?\<argv\>?-2;+4n
(The semicolons mean that the next search or relative addressing
starts at the result of the previous one. I.e. in this case: We go
to the `main' function, from there go to the function end, then
backwards to `argv' minus two lines and print (with line numbers)
this line and four lines more.)
The manpage of the 6th Edition mentions the possibility of giving
more than two addresses:
Commands may require zero, one, or two addresses. Commands
which require no addresses regard the presence of an address
as an error. Commands which accept one or two addresses
assume default addresses when insufficient are given. If
more addresses are given than such a command requires, the
last one or two (depending on what is accepted) are used.
http://man.cat-v.org/unix-6th/1/ed
You can see it in the sources as well:
https://www.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s1/ed.c
(Search for ';' to find the line. There's a loop processing the
addresses.)
V5 ed(1) is in assembler, however, which I cannot read. Thus there
must have been a complete rewrite, maybe introducing this feature
at that point. (I don't know where to find the V5 manpage to check
that as well.)
I wonder how using multiple addresses for setting starting points
for relative searches came to be. When was it implemented, and what
use cases drove this feature back in the day? Or was it more an
accident introduced by the implementation, which turned
out to be useful? Or maybe it existed already in earlier versions
of ed, although maybe undocumented.
For reference, POSIX writes:
Commands accept zero, one, or two addresses. If more than the
required number of addresses are provided to a command that
requires zero addresses, it shall be an error. Otherwise, if more
than the required number of addresses are provided to a command,
the addresses specified first shall be evaluated and then discarded
until the maximum number of valid addresses remain, for the
specified command.
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ed.html
Here is more explanation from the rationale section:
Any number of addresses can be provided to commands taking
addresses; for example, "1,2,3,4,5p" prints lines 4 and 5, because
two is the greatest valid number of addresses accepted by the print
command. This, in combination with the <semicolon> delimiter,
permits users to create commands based on ordered patterns in the
file. For example, the command "3;/foo/;+2p" will display the first
line after line 3 that contains the pattern foo, plus the next two
lines. Note that the address "3;" must still be evaluated before
being discarded, because the search origin for the "/foo/" command
depends on this.
As far as I can see, multiple addresses only make sense with the
semicolon separator, because the comma separator does not change
the state, thus previous addresses can have no effect on later
addresses. The implementation just does not forbid them, for
simplicity reasons.
meillo
Hi.
EFL was definitely a part of BSD Unix. But I don't see it in the V7
stuff in the TUHS archives. When did it first appear? Was it part
of 32V and I should look there?
It is definitely in the V8 and V10 stuff.
Did anyone actually use it? I have the feeling that ratfor had already
caught on and spread far, and that it met people's needs, and so
EFL didn't really catch on that much, even though it provided more
features on top of Fortran.
Thanks,
Arnold
> On Sun, Jul 3, 2022 at 1:33 PM Marc Donner wrote:
>
> I've been ruminating on the question of whether networks are different from
> disks (and other devices). Here are a couple of observations:
[...]
From my perspective most of these things are not unique to networks, they happen with disks and/or terminals. Only out-of-order delivery seems new. However, in many early networking contexts (Spider/Arpanet/Datakit/UUCP) this aspect was not visible to the host (and the same holds for a single segment ethernet).
To me, in some ways networks are like tty’s (e.g. completing i/o can take arbitrarily long, doing a seek() does not make sense), in other ways they are like disks (raw devices are organised into byte streams, they have a name space). Uniquely, they have two end-points, only one of which is local (but pipes come close).
Conceptually, a file system does two things: (i) it organises raw blocks into multiple files; these are the i-nodes and (ii) it provides a name space; these are directories and the namei routine. A network stack certainly does the first: a raw network device is organised into multiple pipe-like connections; depending on the network, it optionally offers a naming service.
With the first aspect one could refer to any file by “major device number, minor device number, i-node number”. This is not very different from referring to a network stream by “network number, host number, port number” in tcp/ip (and in fact this is what bind() and connect() in the sockets API do), or “switch / host / channel” in Datakit. For disks, Unix offers a clean way to organise the name spaces of multiple devices into a unified whole. How to do this with networks is not so easy, prior to the invention of the file system switch.
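(The "(device, i-node number)" half of that analogy is still directly
visible on any modern system. A quick sketch, assuming GNU stat(1); BSD
stat spells the format differently:)

```shell
# Sketch: internally, every file is named by (device, i-node number),
# the same shape as (network, host, port) in the comparison above.
f=$(mktemp)
stat -c 'dev=%D inode=%i' "$f"   # GNU coreutils stat format flags
rm -f "$f"
```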
Early on (Arpanet Unix), there was an attempt to incorporate host names into a net directory by name (RFC 681), but this is not scalable. Another way would be to have a virtual directory and include only names for active connections. The simple way would be to use a text version of the numeric name as described above - but that is not much of an improvement. Better to have a network variant of namei that looks up symbolic names in a hosts file or in a network naming service. The latter does not look very performant on the hardware of 40 years ago, but it appears to have worked well on the Alto / PUP network at Xerox PARC.
With the above one could do
open(“/net/inet/org.tuhs.www:80”, O_RDWR | O_STREAM)
to connect to the TUHS web server, and do
open(“/net/inet/any:80”, O_RDWR | O_STREAM | O_CREAT, 0600)
to create a ‘listening’ (rendez-vous) socket.
Paul
On Sun, Jul 03, 2022 at 05:55:15PM +1000, steve jenkin wrote:
>
> > On 3 Jul 2022, at 12:27, Larry McVoy <lm(a)mcvoy.com> wrote:
> >
> > I love the early Unix releases because they were so simple, processors
> > were simple then as well.
>
>
> Bell's Observation on Computer Classes has brought surprises
> - we've had some very popular new devices appear at the bottom end of the market and sell in the billions.
Yes, and they all run Linux or some tiny OS. Has anyone ported v7 to
any of these devices and seen it take off? Of course not, it doesn't
have TCP/IP.