> Message: 7
> Date: Thu, 15 Jul 2021 10:28:04 -0400
> From: "Theodore Y. Ts'o"
> Subject: Re: [TUHS] head/sed/tail (was The Unix shell: a 50-year view)
>
> On Wed, Jul 14, 2021 at 10:38:06PM -0400, Douglas McIlroy wrote:
>> Head might not have been written if tail didn't exist. But, unlike head,
>> tail strayed from the tao of "do one thing well". Tail -r and tail -f are
>> as cringeworthy as cat -v.
>>
>> -f is a strange feature that effectively turns a regular file into a pipe
> with memory by polling for new data. A clean general alternative
>> might be to provide an open(2) mode that makes reads at the current
>> file end block if some process has the file open for writing.
>
> OTOH, this would mean adding more functionality (read: complexity)
> into the kernel, and there has always been a general desire to avoid
> pushing <stuff> into the kernel when it can be done in userspace. Do
> you really think using a blocking read(2) is somehow superior to
> using select(2) to wait for new data to be appended to the file?
>
> And even if we did this using a new open(2) mode, are you saying we
> should have a separate executable in /bin which would then be
> identical to cat, except that it uses a different open(2) mode?
Yes, it would put more complexity into the kernel, but maybe it is conceptually elegant.
Consider the behaviour of read(2) on a classic pipe or a socket: the semantics Doug proposes for a file would bring it in line with those objects. Hence, maybe it should not be a mode, but the standard behaviour.
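To make the contrast concrete, here is a minimal sketch (making no
claim about the real tail source) of the polling that -f implies today;
under Doug's proposed open(2) mode, the sleep-and-retry would collapse
into a plain blocking read():

    #include <unistd.h>

    /* Follow a file a la tail -f: copy new data to stdout, and at
     * end-of-file sleep and poll again. Error handling omitted. */
    void follow(int fd)
    {
        char buf[8192];
        ssize_t n;

        for (;;) {
            while ((n = read(fd, buf, sizeof buf)) > 0)
                write(1, buf, n);   /* pass new data through */
            sleep(1);               /* at EOF: wait, then poll again */
        }
    }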
I often think that around 1981 the Unix community missed an opportunity to really think through how networking should integrate with the foundations of Unix. It seems to me that at that time there was an opportunity to merge files, pipes and sockets into a coherent, simple framework. If the 8th edition file-system-switch had already been introduced in V6 or V7, maybe this would have happened.
On the other hand, the installed base was probably already too large in 1981 to still make breaking changes to core concepts. V7 may have been the last chance saloon for that.
Paul
Below is a response from two of the authors with my response to it
inline. Not very impressed. Hopefully they'll get a clue and up
their game. In any case, enough time spent on it.
Jon
Michael Greenberg writes:
>
> HotOS isn't exactly a conventional venue---you might notice that many
> if not most HotOS papers don't fit the outline you've given.
I'm aware of that, and have participated in many unconventional venues
myself. I wasn't saying that papers must fit that outline, but I do
believe that they should contain that information. There's a big
difference between a discussion transcript and a paper; I believe that
papers, especially those published under the auspices of a prestigious
organization such as the ACM, should adhere to a higher standard.
> I'd definitely appreciate detailed feedback on any semantic errors
> we've made!
Unfortunately I can't help you here; that was feedback from
a reader who doesn't want to spend any more time on this.
> Your summary covers much of what we imagined!
>
> As I understand it, the primary goals of the paper were to (a) help
> other academics think about the shell as a viable area for research, (b)
> highlight some work we're doing on JIT compilation, (c) make the case
> for JIT approaches to the shell in general (as it's well adapted to its
> dynamism), and (d) explore various axes on which the shell could be
> improved. It seems like we've done a good job communicating (b) to you,
> but perhaps not the rest. Sorry again to disappoint.
I certainly hope that you understand the primary goals of your own paper.
Point-by-point:
(a) While this is a valid point I don't understand why the paper didn't
just state it in a straightforward manner. There are several common
forms. One is to list issues in the introduction while explaining
which one(s) will be addressed in the paper. Another is in the
conclusion where authors list work still to be done.
(b) At least for me this goal was not accomplished because there were no
examples. Figure 1 by itself is insufficient given that the code
used to generate the "result" is not shown. It would have been much
more illuminating had the paper not only shown that code but also the
optimized result. Professionals don't blithely accept magic data.
(c) The paper failed to make this case to me for several reasons.
As I understand it, the paper is somewhat about applying JIT
compilation techniques to interconnected processes. While most
shells include syntax to support the construction of such, it's
really independent of the shell. For completeness, I have a vague
memory of shell implementations for non-multitasking systems that
sequentially ran pipelined programs passing intermediate results
via temporary files. The single "result" reminds me of something
that I saw at a science fair when my child was in middle school;
I looked at one team's results and asked "What makes you think that
a sample size of one is significant?" The lack of any benchmarks
or other study results that justified the work also undermined the
case. It reads more like the authors had something that they wanted
to play with rather than doing serious research. The paper does not
examine the percentage of shell scripts that actually benefit from
JIT compilation; for all the reader may know it's such a small number
that hand-optimizing just those scripts might be a better solution.
I suppose that the paper fits into the apparently modern philosophy
of expecting tools to fix up poorly written code so that programmers
don't have to understand what they're doing.
(d) In my opinion the paper didn't do this at all. There was no
analysis of "the shell" showing weaknesses and an explanation
of why one particular path was taken. And there was no discussion
of what was being done with shells to cause whatever problems you
were addressing, or of whether some up-front sanity checking might have
ameliorated them. Again, being a geezer I'm reminded of past events
that repeat themselves. I joined a start-up company decades ago
that was going to speed up circuit simulation 100x by plugging
custom-designed floating-point processing boards into a multiprocessor
machine. I managed to beat that 100x just by cleverly crafting the
database and memory management code. The fact that the company founder
never verified his idea led to a big waste of resources. But, he did
manage to raise venture capital, which is akin to getting DARPA funds.
Nikos Vasilakis writes:
>
> To add to Michael's points, HotOS' "position" papers are often
> intended as provocative, call-to-arms statements targeting primarily
> the research community (academic and industrial research). Our key
> position, which we possibly failed to communicate, is "Hey researchers
> everywhere, let's do more research on the shell! (and here's why)".
While provocative name-calling and false statements seem to have become
the political norm in America I don't think that they're appropriate in
a professional context.
In my experience a call-to-arms isn't productive unless those called
understand the nature of the call. I'm reminded of something that happened
many decades ago; my partner asked me to proof a paper that she was writing
for her master's degree. I read it over with a confused look and asked her
what she was trying to say. She responded, and I told her to write that
down and stop trying to sound like someone else. Turned her into a much
better writer. So if the paper wanted to say "Hey researchers ..." it
should have done so instead of being obtuse.
To continue on this point and Michael's (a) above, I don't see a lot of
value in proclaiming that research can be done. I think that a more
fruitful approach is to cover what has been done, what you're doing,
and what you see but aren't doing.
> For our more conventional research papers related to the shell, which
> might cover your concerns about semantics, correctness, and
> performance please see next. These three papers also provide important
> context around the HotOS paper you read:
> ...
Tracking down your other work was key to understanding this paper. It's
my opinion that my having to do so is illustrative of the problems with
the paper.
> Thank you for taking the time to read our paper and comment on it.
> Could you please share the emails of everyone mentioned at the end of
> your email? We are preparing a longer report on a recent shell
> roundtable, and would love to get everyone's feedback!
While I appreciate the offer to send the longer report, it would only be
of interest if it were substantially more professional. There is no interest
in reviewing work that is not clearly presented, has not been proofread
and edited, includes unsubstantiated, pejorative, or meaningless statements,
includes incorrect examples, or presents statistically insignificant results.
Likewise, there is no interest if the homework hasn't been done to put the
report in context with regard to prior art and other current work.
Jon Steinhart <jsacm(a)fourwinds.com>
Warner Losh <imp(a)bsdimp.com>
John Cowan <cowan(a)ccil.org>
Larry McVoy <lm(a)mcvoy.com>
John Dow <jmd(a)nelefa.org>
Andy Kosela <akosela(a)andykosela.com>
Clem Cole <clemc(a)ccc.com>
Steve Bourne does not want to give out his address
On the subject of tac (concatenate and print files in reverse), I can
report that the tool was written by my late friend Jay Lepreau in the
Department of Computer Science (now the School of Computing) at the
University of Utah. The GNU coreutils distribution's src/tac.c
carries a copyright of 1988-2020.
I searched my TOPS-20 PDP-10 archives, and found no source code for
tac, but I did find an older TOPS-20 executable in Jay's personal
directory with a file date of 17-Mar-1987. There isn't much else in
that directory, so I suspect that he just copied over a needed tool
from his Department of Computer Science TOPS-20 system to ours in the
College of Science.
----------------------------------------
P.S. Jay was the first to get Steve Johnson's Portable C Compiler,
pcc, to run on the 36-bit PDP-10, and once we had pcc, we began the
move from writing utilities in Pascal and PDP-10 assembly language to
doing them in C. The oldest C file for pcc in our PDP-10 archives is
dated 17-Mar-1981, with other pcc files dated to mid-1983, and final
compiler executables dated 12-May-1986. Four system header files are
dated as late as 4-Oct-1986, presumably patched after the compiler was
built.
Later, Kok Chen and Ken Harrenstien's kcc provided another C compiler
that added support for byte datatypes, where a byte could be anything
from 1 to 36 bits. The oldest distribution of kcc in our archives is
labeled "Fifth formal distribution snapshot" and dated 20-Apr-1988.
My info-kcc mailing list archives date from the list beginning, with
an initial post from Ken dated 27-Jul-1986 announcing the availability
of kcc at sri-nic.arpa.
By mid-1987, we had a dozen Sun workstations and an NFS fileserver; they
marked the beginning of our move to a Unix workstation environment,
away from large, expensive, and electricity-gulping PDP-10 and VAX
mainframes.
By the summer of 1991, those mainframes were retired. I recall
speaking to a used-equipment vendor about our VAX 8600, which cost
about US$450K (discounted academic pricing) in 1986, and was told that
its value was depreciating about 20% per month. Although many of us
missed TOPS-20 features, I don't think anyone was sad to say goodbye
to VMS. We always felt that the VMS developers worked in isolation
from the PDP-10 folks, and thus learned nothing from them.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Some comments from someone (me) who tends to be pickier than
most about cramming programs together and endless sets of
options:
I, too, had always thought sed was older than head. I stand
corrected. I have a long-standing habit of typing sed 10q but
don't spend much time fussing about head.
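For anyone who hasn't seen the idiom, it leans on head's default of ten
lines:

    sed 10q file        # print the first 10 lines, then quit
    head file           # same result; head defaults to 10 lines
    sed 3q file         # the general form, equivalent to head -3 file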
When I arrived at Bell Labs in late summer 1984, tail -f was
in /usr/bin and in the manual, readslow was only in /usr/bin.
readslow was like tail -f, except it either printed the entire
file first or (option -e) started at the end of the file.
I was told readslow had come first, and had been invented in a
hurry because people wanted to watch in real time the moves
logged by one of Belle's chess matches. Vague memory says it
was written by pjw; the name and the code style seem consistent
with that.
Personally I feel like tail -r and tail -f both fit reasonably
well within what tail does, since both have to do with the
bottom of the file, though -r's implementation does make for
a special extra code path in tail so maybe a separate program
is better. What I think is a bigger deal is that I have
frequently missed tail -r on Linux systems, and somehow hadn't
spotted tac; thanks to whoever here (was it Ted?) pointed it
out first!
On the other hand, adding data-processing functions to cat has
never made sense to me. It seems to originate from a mistaken
notion that cat's focus is printing data on terminals, rather
than concatenating data from different places. Here is a test:
if cat -v and cat -n and all that make sense, why shouldn't
cat also subsume tr and pr and even grep? What makes converting
control characters and numbering lines so different from swapping
case and adding page headers? I don't see the distinction, and
so I think vis(1) (in later Research) makes more sense than cat -v
and nl(1) (in Linux for a long time) more sense than cat -n.
(I'd also happily argue that given nl, pr shouldn't number lines.
That a program was in V6 or V7 doesn't make it perfect.)
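To make the comparison concrete (nl's default numbers only non-blank
lines, hence the -ba):

    cat -n file         # number all lines
    nl -ba file         # same output from a separate tool
    cat -v file         # make control characters visible
    vis file            # the separate-tool equivalent, on systems
                        # that have vis(1)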
And all those special options to wc that amounted to doing
arithmetic on the output were always just silly. I'm glad
they were retracted.
On the other other hand, why didn't I know about tac? Because
there are so damn many programs in /usr/bin these days. When
I started with UNIX ca. 1980, the manual (even the BSD version)
was still short enough that one could sit down and read it through,
section by section, and keep track of what one had read, and
remember what all the different tools did. That hasn't been
true for decades. This could be an argument for adding to
existing programs (which many people already know about) rather
than adding new programs (which many people will never notice).
The real problem is that the system is just too damn big. On
an Ubuntu 18.04 system I run, ls /usr/bin | wc -l shows 3242
entries. How much of that is redundant? How much is rarely or
never used? Nobody knows, and I suspect few even try to find
out. And because nobody knows, few are brave enough to throw
things away, or even trim out bits of existing things.
One day in the late 1980s, I helped out with an Introduction
to UNIX talk at a DECUS symposium. One of the attendees noticed
the `total' line in the output of ls, and asked why is that there?
doesn't that contradict the principles of tools' output you've
just been talking about? I thought about it, and said yes,
you're right, that's a bit of old history and shouldn't be
there any more. When I got home to New Jersey, I took the
`total' line out of Research ls.
Good luck doing anything like that today.
Norman Wilson
Toronto ON
What was the first machine to run rogue? I understand that it was written
by Glenn Wichman and Michael Toy at UC Santa Cruz ca. 1980, using the
`curses` library (Ken Arnold's original, not Mary Ann's rewrite). I've seen
at least one place that indicates it first ran on 6th Edition, but that
doesn't sound right to me. The first reference I can find in BSD is in 2.79
("rogue.doc"), which also appears to be the first release to ship curses.
Anyone have any info? Thanks!
- Dan C.
In this week's BSDNow.tv podcast, available at
https://www.bsdnow.tv/409
there is a story about a new conference paper on the Unix shell. The
paper is available at
Unix shell programming: the next 50 years
HotOS '21: Workshop on Hot Topics in Operating Systems, Ann
Arbor, Michigan, 1 June, 2021--3 June, 2021
https://doi.org/10.1145/3458336.3465294
The tone is overall negative, though they do say nice things about
Doug McIlroy and Steve Johnson, and they offer ideas about
improvements.
List readers will have their own views of the paper. My own is that,
despite its dark corners, the Bourne shell has served us
extraordinarily well, and I have been writing in it daily for decades
without being particularly bothered by the many issues raised by the
paper's authors. Having dealt with so-called command shells on
numerous other operating systems, I find that at least the Unix shells
rarely get in my way.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Jon Steinhart
> I use UNIX instead of Unix as that's what I believe is the correct form.
Well, Bell documentation uses "UNIX" through V6:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V6/usr/doc/start/start
"Unix" starts to appear with V7:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V7/usr/doc/setup
As mentioned, the trademark is "UNIX".
I don't really have a fixed position on _the_ way to spell it: when I'm
writing about a specific version (e.g. V6) I use the capitalization as of
that version; for general text I'd probably use 'Unix', as that seems to be
general now. But I could easily be convinced otherwise.
Noel
Once again, thanks to everybody who has contributed to making this a
better letter. Many of you have asked to be co-signers. Please
let me know if I've included your name by mistake or if you'd like
your name to be added. And, of course, let me know if any more
edits are required.
BTW, except where I'm quoting the paper I use UNIX instead of Unix
as that's what I believe is the correct form. Please let me know
if that's not correct.
Thanks,
Jon
I read the "Unix Shell Programming: The Next 50 Years" paper
expecting some well thought out wisdom. I was sorely disappointed.
The paper is lacking the generally accepted form of:
o What problem are you trying to solve?
o What have others done?
o What's our approach?
o How does it do?
Some particulars:
o The paper never defines what is meant by the term "Unix shell."
I think that you're using it to mean a command interpreter as
described in the POSIX 1003.2 documents.
o The paper makes liberal use of the term "Unix" such as "... in
every Unix distribution." While systems descended from UNIX
abound, few actual instances of UNIX exist today.
o There is no 50-year-old UNIX shell. I started using UNIX in the
early 1970s, and the command interpreter at the time (Ken Thompson's
shell) was nothing like later shells such as the Bourne shell (sh
since research V7 UNIX), Korn shell (ksh), C shell (csh), and the
Bourne again shell (bash). UNIX mainstreamed the notion of a
command interpreter that was not baked into the system. The paper
lacks any discussion of prior art. In practice, shell implementations
either predate the POSIX standard or were built afterwards and
include non-standard extensions.
o The paper repeatedly claims that the shell has been largely ignored by
academia and industry. Yet, it does not include any references to
support that claim. In fact, the large body of published work on
shells and ongoing work on shells such as zsh shows that claim to be
incorrect.
o The large number of pejorative statements detract from the academic
value of the paper. And, in my opinion, these statements are provably
false. It reads as if the authors are projecting their personal
opinions onto the rest of the world.
o The paper applies universal complaints such as "unmaintainable" to the
shell; it doesn't call out any shell-specific problems. It doesn't
explain whether these complaints are against scripts, implementations,
or both. One of the reasons for the longevity of the family of shells
descended from Bourne's sh is that experienced practitioners have been
able to write easily maintainable code. Scripts written in the 1980s
are still around and working fine.
o The paper seems to complain about the fact that the shell is documented.
This is astonishing. Proper documentation has always been a key
component of being a professional, at least in my decades of experience.
As a matter of fact, my boss telling me that "nobody will give a crap
about your work unless you write a good paper" when I was a teenager
at Bell Labs is what led me to UNIX and roff.
o The paper includes non-sequiturs such as discussions about Docker
and systemd that have nothing to do with the shell.
o The paper has many "no-op" statements such as "arguably improved" that
provide no useful information.
o The example on page 105 doesn't work, as there is no input to "cut".
o The single result in Figure 1 is insufficient evidence that the
approach works on a wide variety of problems.
o The paper gives the appearance that the authors don't actually understand
the Bourne shell semantics. Not just my opinion; Steve Bourne expressed
that in an email to me after he read your paper, and I consider him to be
a subject matter expert.
o The paper confuses the performance of the shell with the performance of
external commands executed by the shell.
o Proofreading should have caught things like "improve performance
performance" on page 107 among others.
I think that the paper is really trying to say:
o Programmable command interpreters such as those found in UNIX based
systems have been around for a long time. For this paper, we're
focusing on the GNU bash implementation of the POSIX P1003.2 shell.
Other command interpreters predate UNIX.
o This implementation is used more often than many other scripting
languages because it is available and often installed as the default
command interpreter on most modern systems (UNIX-based and otherwise).
In particular, it is often the default for Linux systems.
o The shell as defined above is being used in more complex environments
than existed at the time of its creation. This exposes a new set of
performance issues.
o While much work has been done by the bash implementers, it's primarily
been in the area of expanding the functionality, usually in a
backward-compatible manner. Other shells such as the original ksh and
later ash and zsh were implemented with an eye towards the performance
of the internals and user perspectives.
o Performance optimization using modern techniques such as JIT compilation
have been applied to other languages but not to POSIX shell implementations.
This paper looks at doing that. It is unsurprising that techniques that
have worked elsewhere work here too.
It's hard to imagine that the application of this technique is all that's
required for a 50-year life extension. The title of this paper implies
that it's going to be comprehensive but instead concentrates on a couple
of projects. It ignores other active work on shells such as "fish". While
it wouldn't eliminate the issues with the paper, they would not have been
quite so glaring had it had a more modest title such as "Improving POSIX
Shell Performance with JIT Compilation".
Jon Steinhart plus John Cowan, Warner Losh,
John Dow, Steve Bourne, Larry McVoy, and Clem Cole
I not only found this paper offensive, but was more offended that
ACM would publish something like this and give it an award to boot.
I'm planning to send the authors and ACM what's below. Would
appreciate any feedback that you could provide to make it better.
Thanks,
Jon
I read your "Unix Shell Programming: The Next 50 Years" expecting
some well thought out wisdom from learned experiences. I was
sorely disappointed.
o The paper never defines what is meant by the term "Unix shell."
I think that you're using it to mean a command interpreter as
described in the POSIX 1003.2 documents.
o There is no 50-year-old Unix shell. I started using Unix in the
early 1970s, and the command interpreter at the time (Ken Thompson's
shell) was nothing like later shells such as the Bourne shell (sh
since research V7 Unix), Korn shell (ksh), C shell (csh), and the
Bourne again shell (bash). The paper is missing any discussion of
prior art. In practice, shell implementations either predate the
POSIX standard or were built afterwards and include non-standard
extensions.
o The paper repeats a claim that the shell has been largely ignored by
academia and industry. Yet, it does not include any references that
support that claim. My own experience and thus opinion is the
opposite, making the veracity of your claim questionable. As a reader,
such unsubstantiated claims make me treat the entire content as suspect.
o The paper applies universal complaints such as "unmaintainable" to the
shell; it doesn't call out any shell-specific problems. It doesn't
explain whether these complaints are against the scripts, the
implementation, or both. One of the reasons for the longevity of the
sh/bash shells is that experienced practitioners have been able to
write easily maintainable code. Scripts written in the 1980s are
still around and working fine.
o The paper seems to complain that the fact that the shell is documented
is a problem. This is an astonishing statement. In my decades as a
practicing professional, teacher, and author, proper documentation is a key
component of being a professional.
o The paper is full of non-sequiturs such as discussions about Docker
and systemd that have nothing to do with the shell.
o The paper has many "nop" statements such as "arguably improved" that
don't actually say anything.
o Examples, such as the one on page 105, don't work.
o Proofreading should have caught things like "improve performance
performance" on page 107 among others.
o The references contain many more items than the paper actually
references. Did you plagiarize the bibliography and forget to
edit it?
o The single result in Figure 1 is insufficient evidence that the
approach works on a wide variety of problems.
o The paper makes it appear that the authors don't actually understand
the semantics of the original Bourne shell. Not just my opinion; I
received an email from Steve Bourne after he read your paper, and I
consider him to be a subject matter expert.
The paper is lacking the generally accepted form of:
o What problem are you trying to solve?
o What have others done?
o What's our approach?
o How does it do?
Filtering out all of the jargon added for buzzword compliance, I think
that the paper is really trying to say:
o Programmable command interpreters such as those found in Unix-based
systems have been around for a long time. For this paper, we're
focusing on the GNU bash implementation of the POSIX P1003.2 shell.
Other command interpreters predate Unix.
o This implementation is used more often than many other scripting
languages because it is available and often installed as the default
command interpreter on most modern systems (Unix-based or otherwise).
In particular, it is often the default for Linux systems.
o The shell as defined above is being used in ways that are far more
complex than originally contemplated when Bourne created the original
syntax and semantics, much less the new features added by the POSIX
standards committee. The combination of both the POSIX and bash
extensions to the Bourne shell exposes a new set of limitations and
issues such as performance.
o While much work has been done by the bash implementors, it's primarily
been in the area of expanding the functionality, usually in a
backward-compatible manner. Other shells such as the original ksh and
later ash and zsh were implemented with an eye towards the performance
of the internals and user perspectives.
o Performance optimization using modern techniques such as JIT have been
applied to other languages but not to POSIX shell implementations. This
paper looks at doing that. It is unsurprising that techniques that have
worked elsewhere work here too.
o It's hard to imagine that the application of this technique is all that's
required for a 50-year life extension. The title of this paper implies
that it's going to be comprehensive rather than just being a shameless
plug for an author's project.
Of course, this doesn't make much of a paper. I'm guessing that that's
why it was so "bulked up" with irrelevancies.
It appears that all of you are in academia. I can't imagine that a paper
like this would pass muster in front of any thesis committee, much less
get that far. Not only for content, but for lack of proofreading and
editing. The fact that the ACM would publish such a paper eliminates any
regret that I may have had in dropping my ACM membership.
Thanks to everyone who provided me feedback on the first pass, especially
those who suggested "shopt -u flame-off". Here's the second version.
Once again, would appreciate feedback.
Thanks,
Jon
I read your "Unix Shell Programming: The Next 50 Years" expecting
some well thought out wisdom. I was sorely disappointed.
The paper is lacking the generally accepted form of:
o What problem are you trying to solve?
o What have others done?
o What's our approach?
o How does it do?
Some particulars:
o The paper never defines what is meant by the term "Unix shell."
I think that you're using it to mean a command interpreter as
described in the POSIX 1003.2 documents.
o There is no 50-year-old Unix shell. I started using Unix in the
early 1970s, and the command interpreter at the time (Ken Thompson's
shell) was nothing like later shells such as the Bourne shell (sh
since research V7 Unix), Korn shell (ksh), C shell (csh), and the
Bourne again shell (bash). Unix mainstreamed the notion of a
command interpreter that was not baked into the system. The paper
lacks any discussion of prior art. In practice, shell implementations
either predate the POSIX standard or were built afterwards and
include non-standard extensions.
o The paper repeats a claim that the shell has been largely ignored by
academia and industry. Yet, it does not include any references that
support that claim. In fact, the large body of published work on
shells and ongoing work on shells such as zsh shows that claim to be
incorrect.
o The paper applies universal complaints such as "unmaintainable" to the
shell; it doesn't call out any shell-specific problems. It doesn't
explain whether these complaints are against the scripts, the
implementation, or both. One of the reasons for the longevity of the
family of shells descended from Bourne's sh is that experienced
practitioners have been able to write easily maintainable code.
Scripts written in the 1980s are still around and working fine.
o The paper seems to complain that the fact that the shell is documented
is a problem. This is astonishing. Proper documentation has always
been a key component of being a professional in my decades of experience.
As a matter of fact, my boss telling me that "nobody will give a crap
about your work unless you write a good paper" when I was a teenager
at Bell Labs is what led me to UNIX and nroff.
o The paper includes non-sequiturs such as discussions about Docker
and systemd that have nothing to do with the shell.
o The paper has many "nop" statements such as "arguably improved" that
don't actually say anything.
o Examples, such as the one on page 105, don't work, as there is no
input to "cut".
o The single result in Figure 1 is insufficient evidence that the
approach works on a wide variety of problems.
o The paper gives the appearance that the authors don't actually understand
the semantics of the original Bourne shell. Not just my opinion; I
received an email from Steve Bourne after he read your paper, and I
consider him to be a subject matter expert.
o Proofreading should have caught things like "improve performance
performance" on page 107 among others.
I think that the paper is really trying to say:
o Programmable command interpreters such as those found in Unix-based
systems have been around for a long time. For this paper, we're
focusing on the GNU bash implementation of the POSIX P1003.2 shell.
Other command interpreters predate Unix.
o This implementation is used more often than many other scripting
languages because it is available and often installed as the default
command interpreter on most modern systems (Unix-based or otherwise).
In particular, it is often the default for Linux systems.
o The shell as defined above is being used in ways that are far more
complex than originally contemplated when Bourne created the original
syntax and semantics, much less the new features from ksh adopted by
the POSIX standards committee. The combination of both the POSIX and
bash extensions to the Bourne shell exposes a new set of limitations
and issues such as performance.
o While much work has been done by the bash implementors, it's primarily
been in the area of expanding the functionality, usually in a
backward-compatible manner. Other shells such as the original ksh and
later ash and zsh were implemented with an eye towards the performance
of the internals and user perspectives.
o Performance optimization using modern techniques such as JIT compilation
have been applied to other languages but not to POSIX shell implementations.
This paper looks at doing that. It is unsurprising that techniques that
have worked elsewhere work here too.
It's hard to imagine that the application of this technique is all that's
required for a 50-year life extension. The title of this paper implies
that it's going to be comprehensive but instead concentrates on a couple
of projects. It ignores other active work on shells such as "fish". While
the issues with the paper remain, they would not have been quite so glaring
had it had a more modest title such as "Applying JIT Compilation to the
POSIX Shell".
The UNIX history memoir has been published in China (by the Posts and Telecommunications Press) and translated into Chinese. It is also sold there under the name "UNIX legend". I'm going to buy it now.
If you're in the computer industry, it's exciting just to understand how these buzzwords came into being. Even if you don't have a deep technical background,
you can also benefit a lot from these ideas that shine with genius.
I have been looking forward to a Chinese book about the history of Unix development for so many years, and now I can finally see it.
Thanks to bwk and the other guys.
Some of us have, literally for decades, been dealing with
wtmp by rolling it weekly or monthly or quarterly or whatever,
letting cron run something like
cd /usr/adm # that's how long I've been doing this!
umask 022
>wtmp.new
ln wtmp wtmp.prev
mv wtmp.new wtmp
# also so long ago there was no seq(1)
nums=`awk 'BEGIN {for (i = 12; i >= 0; i--) print i; exit}'`
for i in $nums; do
    inext=`expr $i + 1`
    if [ -f wtmp.$i ]; then
        mv wtmp.$i wtmp.$inext
    fi
done
mv wtmp.prev wtmp.0
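With seq(1) available, the awk line and the loop above would nowadays
reduce to something like this (untested sketch):

    for i in `seq 12 -1 0`; do
        inext=`expr $i + 1`
        [ -f wtmp.$i ] && mv wtmp.$i wtmp.$inext
    done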
This really isn't rocket science. It isn't even particularly
interesting UNIX history. Can we move on to something that IS
interesting?
Here are some things I find more interesting:
1. utmp came before wtmp: utmp(V) appears in the First Edition
manual, wtmp(V) only in the Second. Apparently interest in
who else is logged in right now predated interest in who has
logged in recently.
2. Both files started out in /tmp. wtmp is first said to be
in /usr/adm instead in the Fifth Edition manual, utmp in /etc
in the Sixth.
3. The names /tmp/utmp and /tmp/wtmp appear to have been
issued by the Department of Redundancy Department. I think
it quite likely that Ken and Dennis would have been familiar
with that joke once the recording containing it was issued
in mid-1970, but I don't know whether utmp existed in some
form before that. I see no sign of it in the fragments of
PDP-7 source code we have (in particular init doesn't seem
to use it), but what about later PDP-7 or very early PDP-11
code predating the late-1971 First Edition manual?
Norman Wilson
Toronto ON
Not Insane
As long ago as the 7th Edition, several binary log files were maintained:
the file generated by acct(2) (one record per process) and the utmp and
wtmp files (one record per login). Both of these are defined by structs in
.h files, so they are definitely not portable (int sizes, endianness, etc.).
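For illustration, the V7 utmp record was roughly the following (quoted
from memory, so treat the details as approximate); both the fixed-width
character arrays and the long bake in machine assumptions:

    struct utmp {                   /* V7 utmp.h, approximately */
        char    ut_line[8];         /* tty name, e.g. "tty03" */
        char    ut_name[8];         /* login name */
        long    ut_time;            /* login time: size and byte order
                                       are whatever the machine uses */
    };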
The last article of the latest issue of the Communications of the ACM
that appeared electronically earlier today is a brief interview with
this year's ACM Turing Award winners, Al Aho and Jeff Ullman.
The article is
Last byte: Shaping the foundations of programming languages
https://doi.org/10.1145/3460442
Comm. ACM 64(6), 120, 119, June 2021.
and it includes a picture of the two winners sitting on Dennis
Ritchie's couch.
I liked this snippet from Jeff Ullman, praising fellow list member
Steve Johnson's landmark program, yacc:
>> ...
>> At the time of the first Fortran compiler, it took several
>> person-years to write a parser. By the time yacc came around,
>> you could do it in an afternoon.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Wow. This is a terrible paper. It's full of incorrect, unsubstantiated,
and meaningless statements. It's so bad in my opinion that I'm not sorry
that I dropped my ACM membership a while ago. These folks really needed an
editor. The paper annoys me so much that I've decided to write a lengthy
note to the authors which I'll post here once I'm done.
Jon
From what I can gather, the only way to reasonably examine the disassembly
of a program in the early days of Unix was adb. Is this true? Was there a
way to easily produce a full disassembly? I'll confess to being fairly
ignorant of adb use since I always had dbx or the equivalent available.
The first tool I'm aware of to purposefully/primarily produce a full
listing is MIPS dis (ca. 1986?) but there must have been something before
that for other systems, no?
-Henry
On 7/1/21, scj(a)yaccman.com <scj(a)yaccman.com> wrote:
> When PCC came along and started running on 32-bit machines, I started
> thinking about algorithms for optimization. A problem that I had no
> good solution for could be illustrated by a simple piece of code:
>
> x = *p;
>
> y = *q;
>
> q gets changed
>
> *q = z;
>
> The question is, do I need to reload x now because q might have been
> changed to point to the same place as p?
Yes, this is a very well-known problem in scalar optimization in
compiler engineering. It's called pointer disambiguation and is part
of the more general problem of data flow analysis. As you observed,
getting it wrong can lead to very subtle and hard-to-debug correctness
problems. In the worst case, one has to throw out all current data
flow analysis of global and currently active local variables and start
over. In your example, the statement "*q = z" may end up forcing the
compiler to toss out all data flow information on x and z (and maybe p
and q as well). If q could possibly point to x and x is in a
register, the assignment forces x to be reloaded before its next use.
Ambiguous pointers prohibit a lot of important optimizations. This
problem is the source of a lot of bugs in compilers that do aggressive
optimizations.
Fortunately a bit of knowledge of just how "q gets changed" can rescue
the situation. In strongly-typed languages, for example, if x and z
are different data types, we know the assignment of z through q can't
affect x. We also know that the assignment can't affect x if x and z
have disjoint scopes.
The 'restrict' keyword in C pointer declarations was added to help
mitigate this problem.
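A minimal illustration of the hazard and the 'restrict' escape hatch (a
sketch for exposition, not taken from any particular compiler manual):

    /* p and q may alias, so the compiler must reload *p after the
     * store through q; x may no longer equal *p. */
    int f(int *p, int *q, int z)
    {
        int x = *p;
        *q = z;             /* may modify *p */
        return x + *p;      /* second load of *p is required */
    }

    /* restrict promises the compiler that p and q never alias, so it
     * may reuse x instead of reloading *p. */
    int g(int *restrict p, int *restrict q, int z)
    {
        int x = *p;
        *q = z;             /* cannot touch *p, per the contract */
        return x + *p;      /* may be compiled as x + x */
    }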
Some compilers also have a command line option that allows the user to
say, "I solemnly swear that I won't do this sort of thing".
-Paul W.
> From: Paul Riley <paul(a)rileyriot.com>
>> (I wrote a note, BITD, explaining how all this worked; I'll upload it
>> to the CHWiki when I get a chance.)
Now here:
https://gunkies.org/wiki/PDP-11_C_stack_operation
along with simple examples of args and auto variables, which are both
referenced via the FP.
> As a non-C consumer of printf, should I point R5 at some space for a
> stack and call printf in the same manner as the C example I cited?
Not necessary to do anything with R5 (you can leave it blank); the only things
a PDP-11 C routine needs are:
- a good stack
- the arguments, and return point, on the top of the stack
csv will set up the frame pointer, making no assumptions about the old
contents of R5 - see the source:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/lib/csv.s
although it does save the old R5 contents, and restore them on exit.
Noel
> From: Paul Riley
> I want to use printf from an assembly language program, in V6. ... the
> substitutional arguments for printf are pushed onto the stack in reverse
> order, then the address of the string, and then printf is called. After
> this, 6 is added to the stack pointer.
This is all down to the standard C environment / calling sequence on the
PDP-11 (at least, in V6 C; other compilers may do it differently). Calls to
printf() are in no way special.
Very briefly, there's a 'frame pointer' (R5) which points to the current stack
frame; all arguments and automatics are relative to that. A pair of special
routines, csv and cret (I couldn't find the source on TUHS, but it happens to
be here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/lib/csv.s
if you want to see it), set up and tear down the frame on entry/exit to that
routine. The SP (R6) points to a blank location on the top (i.e. lower address;
PDP-11 stacks grow down) of the stack.
To call a subroutine, the arguments are pushed, the routine is called (which
pushes the return PC), and on return (which pops the return PC), the arguments
are discarded by the caller.
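For a hand-written caller (no csv, hence no trash word to reuse), that
sequence comes out to something like this untested sketch; "fmt",
"arg1", and "arg2" are hypothetical labels:

    mov     arg2,-(sp)      / push the arguments, last one first
    mov     arg1,-(sp)
    mov     $fmt,-(sp)      / address of the format string goes on last
    jsr     pc,*$_printf    / call; pushes the return PC
    add     $6,sp           / caller discards the three pushed words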
(I wrote a note, BITD, explaining how all this worked; I'll upload it to the
CHWiki when I get a chance.)
> I assume that the printf routine pops the address of the string off the
> stack, but leaves the other values on the stack
No, all C routines (including printf()) leave the stack more or less alone,
except for CSV/CRET hackery, allocating space for automatic variables on
routine entry (that would be at L1; try looking at the .s for a routine with
automatic variables), and popping the return PC on exit. The exception to this
is the stuff around calling _another_ routine (sketched above).
Another exception is alloca() (source here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/lib/alloca.s
again, couldn't find it in TUHS), which allocated a block of memory on
the stack (automatically discarded when the routine which called alloca()
returns). Note that since all automatic variables and incoming arguments
are relative to the FP, alloca is _really_ simple; it just adjusts the
SP, and it's done.
> What troubles me is that the stack pointer is not decremented before the
> first mov, in the example below. Is this some C convention? I would
> assume that the first push in my example would overwrite the top of the
> stack.
That's right; that's because in the C runtime environment, the top location
on the stack is a trash word (set up by csv).
> I understand db only works on files like a.out or core dumps. If I
> wanted to break the assembly language program to examine values, how can
> I force a termination and core dump elegantly, so I can examine some
> register values?
Use 'cdb':
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V6/usr/man/man1/cdb.1
which can do interactive debugging.
Noel
Hi,
I want to use printf from an assembly language program, in V6. It seems
that the Unix Programmer's Manual doesn't show how to use it from assembly,
so I wrote a short C program and captured the assembler output, for some
clues. Listings below.
In my example, the substitutional arguments for printf are pushed onto the
stack in reverse order, then the address of the string, and then printf is
called. After this, 6 is added to the stack pointer. I assume that the
printf routine pops the address of the string off the stack, but leaves the
other values on the stack, hence the need to add 2x3=6 to the stack after
calling printf in my example.
What troubles me is that the stack pointer is not decremented before the
first mov, in the example below. Is this some C convention? I would assume
that the first push in my example would overwrite the top of the stack.
Perhaps I'm not used to PDP-11 stack conventions.
I understand db only works on files like a.out or core dumps. If I wanted
to break the assembly language program to examine values, how can I force a
termination and core dump elegantly, so I can examine some register values?
Paul
*Paul Riley*
Email: paul(a)rileyriot.com
int a, b, c;
int main(){
printf("printf: start\n");
a = 1;
b = 2;
c = 3;
printf("A = %d, B = %d, C = %d", a, b, c);
printf("printf: end\n");
}
.comm _a,2
.comm _b,2
.comm _c,2
.globl _main
.text
_main:
~~main:
jsr r5,csv
jbr L1
L2:mov $L4,(sp)
jsr pc,*$_printf
mov $1,_a
mov $2,_b
mov $3,_c
mov _c,(sp)
mov _b,-(sp)
mov _a,-(sp)
mov $L5,-(sp)
jsr pc,*$_printf
add $6,sp
mov $L6,(sp)
jsr pc,*$_printf
L3:jmp cret
L1:jbr L2
.globl
.data
L4:.byte 160,162,151,156,164,146,72,40,163,164,141,162,164,12,0
L5:.byte
101,40,75,40,45,144,54,40,102,40,75,40,45,144,54,40,103,40,75,40,45,144,0
L6:.byte 160,162,151,156,164,146,72,40,145,156,144,12,0
#
> The demand paging code for SysVR2 was written by Keith A. Kelleman and Steven J. Buroff, and in contemporary conference talks they were saying that they wanted to combine the best parts of demand-paged 32V and BSD. They may have some additional memories that could help with getting a better understanding of the final version of 32V.
>
> Does anybody have contact details for these two gentlemen?
I’ve managed to contact Keith Kelleman and he had some interesting remarks. The paging code in SVR2 was all new code, with a focus on the 3B dual processor. It does not derive at the code level from 32V and in fact he does not recall working with the 32V paging code. This kills the hope that the SVR2 code had clues about the 32V code. Keith did suggest that I try to contact Tom Raleigh, who might have worked with the later 32V code base. Anybody with suggestions for locating him?
===
Besides functionality, the people that remember paged 32V all recall it being very fast. I wonder what made it fast.
First to consider is “faster than what?”. Maybe Rob P. has a specific memory, but I would assume faster than 4BSD: if the comparison was with the "scatter loading, partial swapping” version of 32V people would have expected the better performance and not remember it as impressive 40 years later. Possibly the comparison is with 8th Edition which would have used the 4BSD paging code by then.
If the comparison is with 4BSD, then the CoW feature in paging 32V would have been mostly matched by the vfork mechanism in 4BSD: it covers 80% of the use and it leaves the code paths simpler. If the comparison is with 8th edition, this may be the difference that people remember.
The next possibility is that paging 32V had a better page-out algorithm. Joy/Babaoglu mention that the cost of the clock process is noticeable. Maybe paged 32V used a VMS-like FIFO/second chance algorithm that did not need a separate kernel process/thread. Arguably this is not enough for a convincing speed difference.
It is also possible that JFR found a more clever way to do LRU approximation. He remembers that his code used ‘strict LRU’, but not the algorithm. On Tenex - his conceptual reference - that was natural to do, because the MMU hardware maintains a table with 4 words of 36 bits for each frame with statistical data. With the VAX hardware it is a challenge. Considering his mathematical prowess it is maybe plausible that JFR found an efficient way. A slightly better page hit rate gives a significant speed improvement.
All speculation of course: only finding the source will truly tell.
> So there you have it. Just a line of B code that wasn't updated to C.
>
> Cheers,
> aap
I love posts like this, thank you! “Sherlock Holmes and the mysterious case of the excessive parameter list"
Greetings,
It has always bugged me that the bsd-family-tree file got 2.8BSD slightly
wrong.
It has the relationship V6 -> 1BSD -> 2BSD -> 2.79BSD -> 2.8BSD
with V7 -> 32V -> 3BSD ... -> 4.1BSD -> 2.8BSD
Now, as far as it goes, that's not terrible. But it's missing something.
First, there was no V6 code in 1BSD or 2BSD through 2.79BSD. It was for V6
at first, then for both V6 and V7, but without V7 code. There weren't even
patches for V6 or new drivers on the early 1BSD and 2BSDs. However,
starting with 2.8BSD, there's a V7 kernel. Or should I say a heavily
patched V7 kernel with a ton of #ifdefs for all the fixes and enhancements
collected by Berkeley and a few minor build system tweaks.
Also, the code from 4.1BSD that's in 2.8 is rather minimal from what I can
tell with my analysis so far, except indirectly: some of the patches to
the V7 kernel appear to also be in 4.1BSD. The biggest thing that's in
2.8BSD from 4.1BSD is the job control (confined to its own directory with
big warnings that basically say you'll need to update the code and even
then the system is unstable). 2.9BSD has much better 4.xBSD integration,
but 2.8 was quite early days for rejoining the two lines. 4.1BSD didn't
have many berkeley rewrites of userland code, and the 2.8 tape has only a
few of them (eg ls). So although it's not as complete as one would hope,
there was a decent amount of code from 4.1BSD flowing into 2.8BSD.
Now, my request. I've created a code review for FreeBSD to fix this.
https://reviews.freebsd.org/D30883 is the review. We use phabricator in the
FreeBSD project. Anybody can view this, but if you don't want to create an
account, please send me email with a comment about the change and/or the
commit message. It just adds an arc from V7 to 2.8BSD.
Thanks for any time and/or insight you might have here. I'm judging the
above entirely on the archived code rather than any historical knowledge...
Warner
On 23/06/21, silas poulson wrote:
> I’m aware that line 2238’s famous “You are not expected to understand
> this.” comment is due to odd PDP-11/45 behaviour.
Actually there are two different takes on what it is exactly that you're
not expected to understand. The "obvious" one is in Lions' book (i.e. the
saved stack being overwritten when swapping, so you have to restore from
the special swap-saved stack), but dmr had a different take on it: that it
had to do with functions really not being happy if you switch the stack
underneath them. You can find dummy variables in the interdata port that
make sure the stack frames of some functions (newproc and swtch?) match.
So not really a hardware thing but a consequence of the compiler.
> Do you know if other sections of the C show remnants of B or the PDP?
> Or is it just those spots?
For the kernel I'm just aware of this one. printf was copied over a
number of times (the c compiler also includes a version) but the kernel
was never written in B, so I wouldn't expect any B-ness in there
otherwise.
aap
... when he declared all the parameters for this procedure"
I'm sure some of you remember this quote expressing John Lions'
puzzlement over the declaration of printf in the UNIX kernel:
printf(fmt,x1,x2,x3,x4,x5,x6,x7,x8,x9,xa,xb,xc)
Note that parameters x2 and following are never referenced by name
in the function body, only by pointer arithmetic from &x1.
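The body then walks the arguments through a pointer, along these lines
(a paraphrase of the V6 kernel printf, not a verbatim quote):

    printf(fmt, x1, x2, x3, x4, x5, x6, x7, x8, x9, xa, xb, xc)
    char *fmt;
    {
        register char *s;
        register int *adx;

        adx = &x1;          /* the args lie contiguously on the stack */
        for (s = fmt; *s != '\0'; s++)
            if (*s == '%' && *(s+1) == 'd') {
                printn(*adx++, 10);     /* consume the next argument */
                s++;
            } else
                putchar(*s);
    }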
The historical reason for this long list of parameters is probably
not well known, so I want to explain it and hope that you will find
it as enjoyable as I did when I came across it some time ago while
studying leftover files of B.
To call a function in B, there's a little dance you have to do:
first push the function address onto the stack, remember the stack
pointer, push all the arguments, then restore the stack pointer and
do the actual call.
In diagrams, this is what happens to the B stack:
push func addr:
+------------+
| | <- sp
+------------+
| func addr |
+------------+
mark (n2):
+------------+
| | <- sp
+------------+
| | <- old sp (saved on native stack)
+------------+
| func addr |
+------------+
push arguments:
+------------+
| | <- sp
+------------+
| arg n |
+------------+
| ... |
+------------+
| arg 0 |
+------------+
| | <- old sp (saved on native stack)
+------------+
| func addr |
+------------+
call (n3):
pop old sp from stack
jump to func addr on stack
set ret addr and old fp
+------------+
| arg n |
+------------+
| ... |
+------------+
| arg 0 |
+------------+
| ret addr |
+------------+
| old fp | <- sp, fp
+------------+
The callee then sets sp, so has to know how many args and automatics!
+------------+
| | <- sp
+------------+
| auto n |
+------------+
| ... |
+------------+
| auto 0 |
+------------+
| arg n |
+------------+
| ... |
+------------+
| arg 0 |
+------------+
| ret addr |
+------------+
| old fp | <- fp
+------------+
So because the arguments to a function were grouped together with
the automatics and not below the ret addr and saved fp, there had
to be space on the stack lest they clash. This of course means that
this printf in B would probably break if called with more arguments
than it expects.
So there you have it. Just a line of B code that wasn't updated to C.
Cheers,
aap
Hello,
I'm currently trying out the rc shell (using Byron Rakitzis'
implementation for Unix). Compared to Bash, which I normally use, this
shell has a rather small feature set (which isn't to say that that's
necessarily a bad thing).
Now, one of the features that Bash has and rc doesn't have is the
ability to perform arithmetic expansion. That's not really a problem
because you can just use `expr` instead. I wonder, though, when
arithmetic expansion as a shell built-in became a thing, especially in
POSIX sh.
POSIX has included that feature since at least 2001, and probably quite
some years earlier, given that Bash already had it in 1995 (going by
the manual page of version 1.14.7, the oldest I could find).
So, maybe someone here can help me find out when this was actually
standardized.
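For reference, the two idioms side by side:

    i=`expr $i + 1`     # external command; works in any Bourne-family shell
    i=$((i + 1))        # POSIX arithmetic expansion, built into the shell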
Thanks.
--
Michael
Even though it was there, in-shell integer arithmetic had a few
issues before ksh86.
Pre-ksh86, these had problems --
A. decimal point in a number
$ integer I
$ I=2.2
would error on the assignment, but with ksh86 $I is truncated to 2
(yes "integer" is an builtin alias to typeset -i and ksh courses at the time
instructed to use that over typset)
B. negative numbers
$ integer I
$ I=-2
here pre-86 $I would be assigned a large value like 2147483646
also ksh was not standard until SVR4 (very late 80s) so it was found
in paths like /usr/add-on/bin/ksh or /usr/exptools/bin/ksh, or not even
there at all, so you could not reliably #! ksh scripts
also with ksh86 the double paren ((...)) notation was exactly the same as
"let ..." and were completely optional if the variable was predefined as
an integer, so
$ I=0
$ ((I=I+1))
and
$ integer I
$ I=0
$ I=I+1
are the same.
All internal integers in ksh were C longs (at least 32-bits)
whereas in the Bourne shell all vars are strings, so you would need to do this --
$ I=`expr $I + 1`
But also at that time expr(1) could NOT deal with negative numbers on input;
they became strings. So
$ expr -9 + 1
is an error with "non-numeric argument", and
$ expr -11 '<' -1
returns 0, a false statement; these become hidden bugs when the operands come from variables.
expr(1) was also 32-bits, as was test (i.e. [ ), which could deal with
negative numbers just fine.
for arbitrarily large numbers you needed to use dc(1) or bc(1). But dc(1)
also has an issue with inputting negative numbers, as it uses an _ and not
a - to denote a negative number on input, but does use the - on output.
$ I=`echo _9 1 - p | dc`
is how you would do ((I=-9 - 1)) in bourne with dc
which is cumbersome if you have a variable with a negative number in it,
and requires a "tr - _" first.
however
$ I=`echo -9 - 1 | bc`
worked just fine in bourne.
-Brian
Chet Ramey wrote:
> On 6/21/21 5:57 AM, arnold at skeeve.com wrote:
> > Arithmetic expansion dates back at least as far as ksh88.
>
> ksh had the `let' builtin from at least 1983. The ((...)) compound command
> was there by ksh-86.
>
> > Bash likely picked it up from there.
>
> Sort of, see below.
>
> > The original was only integer math and Bash remains that way (IIRC,
> > Chet can correct me if I'm wrong). ksh93 added floating point math.
>
> Yes, bash only has integer arithmetic, since it's all POSIX requires.
>
> > POSIX would have picked it up from ksh88.
>
> The $((...)) form of arithmetic expansion is something POSIX picked up
> from ksh-88, eventually. The early drafts of the standard (through 1003.2
> d9, at least), used $[...], but they eventually adopted $((...)) because
> ksh-88 had already implemented it, though it's not documented in Bolsky
> and Korn.
>
> I put $[...] into bash first (it's still there, though deprecated), then
> `let', then $((...)) after POSIX added it, and finally `((' for
> compatibility.
>
> Chet
I’m still researching John Reiser’s 32V with demand paging, copy-on-write and mmap.
Unfortunately, JFR does not have the bits or a listing for this version of 32V.
I’ve read the MSc theses of Leffler and Shannon with interest (https://www.tuhs.org/Archive/Documentation/Theses/). The thesis of Shannon has an interesting discussion of a demand paged version of his Harris/6 Unix (Chapter 5). It is based on the Tenex ideas, just as JFR mentioned for his version. The thesis of Leffler contains a Gantt chart that shows that the demand paged version was written in the first months of 1980 -- concurrently with or slightly after the 32V version.
I’ve also (superficially) read the papers on Tenex memory management. The design is closely tied to the PDP-10 MMU that BBN designed for Tenex. Some of its data structuring is recognisable in Shannon’s version. One defining aspect is that the design for both is for a virtual address space that is smaller than the physical address space; on a 1980 VAX it was the reverse.
If 32V followed the same design ideas (a big if), it most likely limited processes to a capped address space (e.g. 2MB). It might also have contained an in-core flag/data vector with as many entries as there are page frames in swap space. If true, these downsides may have been why it did not go on to become the root for SysVR1 or R2 paging.
The demand paging code for SysVR2 was written by Keith A. Kelleman and Steven J. Buroff, and in contemporary conference talks they were saying that they wanted to combine the best parts of demand-paged 32V and BSD. They may have some additional memories that could help with getting a better understanding of the final version of 32V.
Does anybody have contact details for these two gentlemen?
Not Unix in particular.
At least in Germany it is already the 16th, and my BSD calendar
notifies that "first programming error is encountered at the
U. S. Census Bureau".
As not being hard-to-the-core i may have missed it, but also in
1951, in March, the wonderful Grace Hopper "conceives the first
compiler, called A-O and later released as Math-Matic. Hopper is
also credited with coining the term 'bug' following an incident
involving a moth and a Mark II."
All (hm!) according to COMPUTERWORLD January 18th, 1999 (i was
young!), with assistance of the Computer Museum of Boston.
Like McCartney said in the legendary 1999 concert at the Cavern
club, the first after his wonderful wife Linda died, "See, with
this band, if we don't get it right ... we start again!".
Thank you.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Rob Pike:
Although upon reflection, I think what I did was fix 'adb' and call it
'db'. Haven't had my coffee yet this morning.
====
I don't think so. I did quite a lot of work on adb during my
time at the Labs. I remember clearly that it still used all
the Bournegol macros when I started; I doubt Rob would have
left that there. (It was Rob who translated the shell
back to C, I believe.)
I got into adb because it still used ptrace and everyone else
seemed scared to touch the code to convert it to use /proc.
So I fixed that, and fixed sdb too, and finally removed
the ptrace call from the kernel. I remember celebrating
by expunging ptrace from the UNIX Room copy of the V8
manual. ptrace happened to occupy two facing pages, which
I glued together.
I did a lot more hacking on adb after that, ranging from
little stuff like making # a synonym for 0x in input
(so that adb's output could be picked up from the screen
and re-used as input, a principle established firmly and
correctly by Rob!) to a major restructuring to isolate
machine-dependent pieces like instruction decoding and
word formats, so that it was simpler not only to make
adb work on a new processor architecture but even to make
a sort of cross-adb that could, say, correctly interpret
a PDP-11 core image on a VAX. (This actually mattered;
by the time I arrived Research had no PDP-11s running
general-purpose UNIX, but did have LSI-11s acting as
Datakit controllers and a standalone power-backed-up
LSI-11 that decoded the time signal from WWVB.)
I was never really happy about the restructuring; it did
more or less what I wanted but it wasn't really graceful.
And cross-adb needed a distinct binary for each target.
I had thoughts of trying to make a meta-language to
describe the target's data formats (simple except for
floating point) and how to print its instructions
(messier, but I remember being inspired by the clever
table-driven code in a disassembler Ken wrote for,
I think it was, the NS32000), so that one could load
a table from a file at startup; never had the time or
the courage to carry through on it.
Norman Wilson
Toronto ON
> From: Henry Bent
> From what I can gather the only way to reasonably examine the
> disassembly of a program in the early days of Unix was adb. Is this
> true? Was there a way to easily produce a full disassembly?
'adb' is quite late. We had it on the PWB1 (V6 enhanced, basically) system at
MIT, so its roots lie before V7. (Every time I run across people who think V7
is early, I go into 'get off my lawn' mode.)
The first thing I know of that could disassemble PDP-11 code on Unix was 'db',
which dates back to V1:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V1/man/man1/db.1
It wasn't optimal for doing disassembly, because it was non-trivial to
dump an entire object file as assembler source - but it could be done.
Later (V5 era) there was something called 'cdb', which was 'db' with
extensions to make it useful for debugging code whose source was in C:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V4/man/man1/cdb.1
There were other non-Unix disassemblers (such as DDT), too.
Noel
A new journal issue published today carries this paper:
Diomidis Spinellis and Paris Avgeriou
Evolution of the Unix System Architecture: An Exploratory Case Study
IEEE Transactions on Software Engineering 47(6) 1134--1163 June 2021
https://doi.org/10.1109/TSE.2019.2892149
A preprint is available here:
https://www.researchgate.net/publication/332826685_Evolution_of_the_Unix_Sy…
However, it is dated four years ago, and after removing its cover
page, diffpdf shows numerous changes compared to today's publication.
In the new version, a footnote on the first page says
Manuscript received 19 May 2018; revised 18 Dec. 2018;
accepted 28 Dec. 2018. Date of publication 2 May 2019;
date of current version 14 June 2021.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Sounds very "Deus ex machina" like. Although it's hard to staple a ghost
to your notebook.
-----Original Message-----
From: Bakul Shah
To: Rob Pike
Cc: The Eunuchs Hysterical Society
Sent: 6/16/21 12:13 PM
Subject: Re: [TUHS] 70th anniversary of (official) programming errors
https://spectrum.ieee.org/the-institute/ieee-history/did-you-know-edison-coined-the-term-bug
Like Edison, she (Grace Hopper) was recalling the word’s older origins
in the Welsh bwg, the Scottish bogill or bogle, the German bögge, and
the Middle English bugge: the hobgoblins of pre-modern life, resurrected
in the 19th century as, to paraphrase philosopher Gilbert Ryle, ghosts
in the machine.
Electrical circuits can have "bad connections" so I do wonder if Edison
coined this word based on "ghost like" faults that magically appear and
disappear!
-- Bakul
On Jun 15, 2021, at 8:48 PM, Rob Pike <robpike(a)gmail.com> wrote:
There are citations from Edison in the 19th century using the word, and
a quote somewhere by Maurice Wilkes about the stairwell moment when he
realized much of the rest of his life would be spent finding programming
errors.
That moth was not the first bug, nor the first "bug", it was the first
recorded "actual bug".
-rob
On Wed, Jun 16, 2021 at 9:46 AM Dan Cross <crossd(a)gmail.com> wrote:
On Tue, Jun 15, 2021 at 6:55 PM John Cowan <cowan(a)ccil.org> wrote:
On Tue, Jun 15, 2021 at 6:25 PM Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
As not being hard-to-the-core i may have missed it, but also in
1951, in March, the wonderful Grace Hopper "conceives the first
compiler, called A-O and later released as Math-Matic. Hopper is
also credited with coining the term 'bug' following an incident
involving a moth and a Mark II."
Yes, but wrongly. The label next to the moth is "First actual case of
bug being found", and the word "actual" shows that the slang term
already existed then. Brief unexplained faults on telephony (and before
that telegraphy) lines were "bugs on the line" back in the 19C.
Vibroplex telegraph keys, first sold in 1905, had a picture of a beetle
on the top of the key, and were notorious for creating bugs when
inexperienced operators used them. (Vibroplex is still in business,
still selling its continuous-operation telegraph keys, which ditt as
long as you hold the paddle to the right.)
Indeed, the Vibroplex key is called a "bug". I suspect this has
something to do with its appearance more than anything else, though (it
kinda sorta looks like, er, a bug).
- Dan C.
I understand the preference for a bottom-up entry, with somebody already sympathetic.
If there is no such entry point, top-down may work at Microfocus. I guess the ask is for a statement for SysV similar to the Nokia statement for 8th-10th Edition:
https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%…
According to Companies House, the general counsel is Jane Smithard:
https://find-and-update.company-information.service.gov.uk/company/01504593…
She appears to be 67, to have a long tenure with Microfocus and maybe is just the right person:
"Jane has more than 25 years’ experience as a lawyer in the IT industry and software sector. She has worked with Micro Focus for over 20 years providing a wide range of commercial and corporate legal services, from leading the efforts through the 2005 IPO to driving the legal aspects of the group’s mergers, acquisitions and divestitures strategy including the acquisition of HPE Software and divestiture of SUSE. Jane leads a team of approximately 60 lawyers and other professionals worldwide, the majority of whom are focused directly on supporting the Company’s commercial teams and business.”
https://www.microfocus.com/en-us/about/leadership-about
Her e-mail appears to be jane (dot) smithard (at) microfocus (dot) com
But I guess that Dan Doernberg already knows all that. His book project sounds intriguing, by the way.
Paul
I tried sending a letter a while back asking how much is a commercial SYSV
license anyways, never got a reply. I called their legal and they didn't
know what a Unix was.
I guess, shockingly, all the public-facing Micro Focus presence is about COBOL.
-----Original Message-----
From: Warren Toomey
To: tuhs(a)tuhs.org
Sent: 6/10/21 8:32 AM
Subject: [TUHS] Help Contacting Unix Legal People
So I've seen a number of places that talk about the Unix TS 3.0 -> 4.0 -> 5.0
progression, and how System III and System V were released while
System IV was internal only.
What I've not seen is the "why" part of this. Why was it internal only?
Warner
All, we need help contacting some people so that Dan Doernberg can make
progress on a new Unix book that he's working on. I've attached his
request to the TUHS list below. If you can help out in any way, please
contact Dan directly.
Cheers, Warren
From: Peer-to-Peer Communications LLC <dan(a)peerllc.com>
Hello all, I’m the publisher of Lions' Commentary on UNIX (still
going strong after 25 years!) and I have an “Ancient UNIX” book project
in mind that I need some community help with.
To avoid running into any “who has copyright rights to UNIX” legal
problems, we’re trying to reach out in advance to any
companies/organizations that may have such rights (Micro Focus, Xinuos,
Nokia, are there others?) in advance. To that end, I’m trying to find
staffers there who are:
1. sympathetic to sharing information about “ancient UNIX” with the
operating system/UNIX communities
2. somewhat familiar with the past legal issues and controversies
over UNIX ownership (perhaps someone in the legal department)
If you know any such person(s) at Micro Focus, Xinuos, Nokia, and/or
other relevant organizations that has either quality (or ideally a
unicorn with both!), please pass on their name and email address to me
(even better, add any context about why they might be helpful to me, if
it’s OK to say that you referred me to them, etc.).
Thanks much, all referrals greatly appreciated!
Dan Doernberg
dan(a)peerllc.com
I just noticed that the 32V tape on the TUHS Unix Tree page includes a directory “slowsys”:
https://www.tuhs.org/cgi-bin/utree.pl?file=32V/usr/src/slowsys
This “slowsys” directory appears to contain the 32V kernel with a pure swapping architecture. It is not quite the kernel described in the well known 32V paper, as it seems to have advanced from a fixed 192KB image size mapping to a more variable mapping of up to 1MB — but otherwise the code appears to be as described in the July 1978 paper.
The directory https://www.tuhs.org/cgi-bin/utree.pl?file=32V/usr/src/sys contains the scatter loading, partial swapping version of the 32V kernel.
Paul
Dear TUHS,
would anyone happen to have a copy of:
Felix, Jerry & Hauck, Chris (September 1987). "System Security: A Hacker's Perspective". 1987 Interex Proceedings.
I can only find references to it on the web but no link to an electronic version.
Cheers,
Arrigo
Received wisdom is that 32V used V7 style swapping for memory management. Over the past days I’ve come to the conclusion that this is a very partial truth and only holds true for 32V as it existed in the first half of 1978. In the second half of ’78 it implemented something that is referred to as “scatter loading” which is an interesting halfway house on the road to demand paging. It was also used in the VAX version of Sys III (and presumably in SysV R1 and early R2).
In the 32V report from July 1978 it says:
"Like the UNIX system for the PDP-11, the current implementation for the VAX-11/780 maintains each process in contiguous physical memory and swaps processes to disk when there is not enough physical memory to contain them all. Reducing external memory fragmentation to zero by utilizing the VAX- 11/780 memory mapping hardware for scatter loading is high on the list of things to do in the second implementation pass. To simplify kernel memory allocation, the size of the user-segment memory map is an assembly parameter which currently allows three pages of page table or 192K bytes total for text, data, and stack.” (https://www.bell-labs.com/usr/dmr/www/otherports/32v.pdf)
It turns out that scatter loading was added in the next months, and it was this version that was used as the basis for 3BSD and SysIII.
Babaoglu & Joy write:
"Except for the machine-dependent sections of code, UNIX for the VAX was quite similar to that for the PDP-11 which has a 16-bit address space and no paging hardware. It made no use of the memory-management hardware available on the VAX aside from simulating the PDP-11 segment registers with VAX page table entries. The main-memory management schemes employed by this first version of the system were identical to their PDP-11 counterparts -- processes were allocated contiguous blocks of real memory on a first-fit basis and were swapped in their entirety. A subsequent version of the system was capable of loading processes into noncontiguous real memory locations, called scatter loading, and was able to swap only portions of a process, called partial swapping, as deemed necessary by the memory contention. This would become the basis for the paging system development discussed in this paper.” (https://www.researchgate.net/publication/247924813_Converting_a_swap-based_…)
The 32V code on the TUHS website (e.g. here https://www.tuhs.org/cgi-bin/utree.pl?file=32V) is actually this later scatter loading code, and not the early 1978 code that used V7 style memory management. The 32-bit Sys III code is closely related (see https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/uts/vax)
===
My current understanding of how scatter loading worked (based on a brief code review) is as follows:
(Note that on the VAX pages/frames are 512 bytes and the page list is essentially single level; page lists grow quickly. It is also unusual in the sense that user page table entries point to kernel virtual memory, but kernel page table entries point to physical memory).
- Each process keeps a list of pages in its u-area (a page table prototype, if you will). This list is fixed size and allows up to 512KB per process in 32V and ~2.5MB per process in Sys III (i.e. up to 1024 and 5120 pages respectively).
- The kernel keeps a bitmap of free/used frames in physical memory.
- When a process loads/grows, the bitmap is scanned for free frames, these are marked as in-use, and added to the u-area list. If there are not enough free pages a process is selected for swapping out. Swapping out is gradual, in 8KB chunks in 32V and 32KB chunks in SysIII. When a process shrinks or completes, its pages are added back to the bitmap.
- When a partially swapped out process needs to run, the swapped out part is loaded back similar to the above. Partial swap-outs truncate the process, so everything above the remaining size needs to re-load.
- The user process page table is not swapped, but recreated from the u-area data instead.
- When switching between user processes, the kernel needs to update 16 (32V) or 40 (SysIII) kernel page table entries to update the user memory map.
Scatter loading and partial swapping seem to be a major improvement over V7 style swapping, although it of course falls short of demand paging. So far I have not seen bits of code that suggest ‘lazy loading’ or copy-on-write functionality in 32V or Sys III, but these things would not seem impossible to do in this memory management scheme.
In short, the view that “32V used V7 style swapping” seems to be an oversimplification.
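To make the bullet-list mechanism above concrete, here is a minimal sketch
in C of a free-frame bitmap allocator of the kind described; all names and
sizes are my own illustrative assumptions, not identifiers from the 32V or
SysIII source:

  #include <stdio.h>

  #define NFRAMES 8192                        /* 4 MB of 512-byte frames */
  static unsigned char framemap[NFRAMES / 8]; /* 1 bit per physical frame */

  static int frame_alloc(void) {              /* scan for a free frame */
      for (int f = 0; f < NFRAMES; f++)
          if (!(framemap[f / 8] & (1 << (f % 8)))) {
              framemap[f / 8] |= 1 << (f % 8);
              return f;        /* frame number goes into the u-area list */
          }
      return -1;               /* none free: select a process to swap out */
  }

  static void frame_free(int f) {             /* process shrinks or exits */
      framemap[f / 8] &= ~(1 << (f % 8));
  }

  int main(void) {
      int pages[16];           /* stand-in for the per-process page list */
      for (int i = 0; i < 16; i++)
          pages[i] = frame_alloc();   /* frames need not be contiguous */
      printf("first frame %d, last frame %d\n", pages[0], pages[15]);
      for (int i = 0; i < 16; i++)
          frame_free(pages[i]);
      return 0;
  }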
Following the recent discussion on this list about early paging systems and John Reiser’s work in this area, I succeeded in reaching out to him. It is all very long ago, but John’s recollection is that his paging system work was initially done late in 1979 or maybe 1980. This matches with Rob Pike’s memory of seeing it demoed in early 1981.
John’s recollection confirms that his design implemented mmap and copy-on-write, and was integrated with the file buffer cache (all features that Norman Wilson also remembered).
I have appended John’s message below (with permission). I am not sure I understand - at this time - how John’s code was multiplexing page table entries with kernel data structures beyond what is mentioned, but I think it might be an interesting summer project to see how much of the design can be resurrected using related code, old documents and memories.
Paul
====
> I joined Bell Labs department 1135 ("Interactive Computer Systems Research")
> at Holmdel, NJ in Feb.1977. I soon gained fame by re-writing the C pre-processor,
> which was bug-infested and slow. (" 1.e4 " was recognized as an opportunity
> to expand the macro "e4", instead of as a floating-point constant.)
> cpp used to take 15% to 18% of total CPU compile time. My replacement code
> was 2% to 3%. The average improvement was a factor of 7; the minimum:
> a factor of 5; the best: a factor of 11 or 12.
> During the rest of 1977, I became dissatisfied with our PDP-11/45 (and other's
> PDP-11/70), having spent a few student years at Stanford Univ and its AI Lab,
> where PDP-6 and PDP-10 reigned. So Tom London and I wrote an "internal grant"
> for $250,000 to get a new machine, with a "research goal" of exploring CCD
> (charge coupled device) which was promised to replace spinning harddrive.
> Actual CCD product never appeared until flash memory devices in 1990s
> and SSD (current solid state drive) later.
>
> Our VAX-11/780, the first one delivered to a customer, arrived Feb.12, 1978
> (Lincoln's Birthday). Tom and I had been preparing using PDP-11/45 since
> December, and we achieved "login: " on the console DECwriter by April 15
> (the deadline for US income tax filing). The rest of 1978 was "tuning",
> and preparing for the release of "UNIX-32/V" to UC Berkeley. My annual
> performance review in early 1979 said "Well done; but don't ever do it again"
> because it was not regarded as "Research".
> So what did I do? I did it again; this time, with demand paging.
> I designed and implemented mmap() based on experience with PDP-10/Tenex
> PMAP page mapping system call. I fretted over introducing the first
> UNIX system call with 6 arguments.
>
> The internal design of the paging mechanism "broke the rules" by having a
> non-hierarchical dependency: A1 calls B1 calls A2 calls B2 where {A1, A2}
> are parts of one subsystem, and {B1, B2} are parts of another subsystem.
> (One subsystem was the buffer cache; I forget what was the other subsystem.)
> Our machine started with 512KB of RAM, but within a few months was upgraded
> to 4 MB with boards that used a new generation of RAM chips.
> The hardware page size was 512 bytes, which is quite small. Strict LRU
> on 8,192 pages, plus Copy-On-Write, made the second reference to a page
> "blindingly fast". It was impressive, running sometime in 1979 or 1980,
> I think. But it was not "Research", so I got many fewer accolades.
> My internal design strategy was to use the hardware page table entry
> (4 bytes per page, with "page not present" being one bit which freed
> the other 31 bits for use by software) as the anchor to track everything
> about a page. This resulted in a complicated union of bitfield structs,
> which became a headache. When other departments took over the work,
> the first thing they did was use 8 bytes per page, which meant shuffling
> between the software and the hardware descriptors: its own mess.
> I wasn't interested in maintaining a paging system, so I moved on
> to other things such as design of integrated circuits using
> Carver-Mead methodology.
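As an aside, the 4-byte PTE overlay John describes could be modelled roughly
as below. The field names and widths are my assumptions for illustration
(patterned on the VAX PTE layout; C bitfield ordering is
implementation-defined), not a reconstruction of his actual structs:

  #include <stdint.h>
  #include <stdio.h>

  union pte {
      struct {                  /* hardware view: page is present */
          uint32_t pfn   : 21;  /* page frame number */
          uint32_t soft  : 5;   /* bits ignored by the hardware */
          uint32_t modif : 1;   /* modified bit */
          uint32_t prot  : 4;   /* protection code */
          uint32_t valid : 1;   /* 1 = present */
      } hw;
      struct {                  /* software view: page not present */
          uint32_t where : 31;  /* e.g. swap address or file block */
          uint32_t valid : 1;   /* 0 = software owns the other 31 bits */
      } sw;
  };

  int main(void) {
      union pte p = {0};
      p.sw.valid = 0;           /* a touch will fault to the kernel */
      p.sw.where = 12345;       /* stash where the page really lives */
      printf("entry is %zu bytes, stashed block %u\n",
             sizeof(union pte), (unsigned)p.sw.where);
      return 0;
  }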
Is it okay for me to ask a question about Linux that's from '91~'92?
Does anyone happen to have copies of H.J. Lu's Bootable Root and the
associated Linux Base System disk images from the early '90s?
I've managed to find a copy of 0.98.pl5-31 bootable root disk. But I
can't find any base disks to go along with it.
The files used to be on tsx-11.mit.edu:/pub/linux/GCC in rootdisk and
basedisk subdirectories.
Unfortunately all of the mirrors I'm finding of tsx-11 are newer, have
the basedisk directories, but no image files therein.
--
Grant. . . .
unix || die
More mildewy items from my dödsstäda ("death cleaning") project. Finally found my
March 1974 BTL directory. A while ago I was trying to figure
out who the people with the initials EPR, LIS, and MAS were in
the 516-TSS documents. Best guesses via vgrep on an ancient
single-processor are that EPR is Peter E. Rosenfeld, supervisor
of Computer Graphics Design Group, LIS is Loretta I. Stukas in
Programming Aids Software And Techniques Group. No reasonable
looking matches on MAS.
Nothing else spectacular in this box - it does have my Lions
books and a whole bunch of original UNIX docs such as the C
manual, introduction to ed, and so on, which I think are all
sufficiently preserved that I don't need to scan them in.
Jon
What's the current status of net/2?
I ask because I have a FreeBSD 1.1.5.1 CVS repo that I'd like to make
available. Some of the files in it are encumbered, though, and the
University of California has communicated that fact. But what does that
actually mean now that V7 has been released and that's what the files were
based on? Are they no longer encumbered?
Warner
> From: Clem Cole
> search is your friend:
The number of people who come here asking for help, who apparently don't know
how to do a Web search, is pretty astounding. Get off my lawn.
> The first hit is Noel's great text:
Actually, I didn't write that; another CHWiki contributor imported it from
the Web. (Wouldn't touch SIMH myself... :-)
Most of the other pages at the bottom, in the 'See also' section, I did
write. Some of them might actually be useful. Also, at the bottom of the
'Installing UNIX Sixth Edition on Ersatz-11' page, there are links to a
couple of external pages that have lots of info on how to upgrade a V6
into something semi-usable.
Noel
Hello there, I'm Resun, a teenage programmer who knows nothing about the old days.
Here, Index of /Archive/Distributions/Research/Dennis_v6 (tuhs.org)<https://www.tuhs.org/Archive/Distributions/Research/Dennis_v6/>, there are some compressed files of v6 Unix.
And here's SETTING UP UNIX - Sixth Edition (tuhs.org)<https://www.tuhs.org/Archive/Distributions/Research/Documentation/v6_setup.…>, a document about how to set it up.
But the documentation is not for me; I mean, I don't understand what it's saying. There are terms like "Mount magtape on drive 0 at load point", "Mount formatted disk pack on drive 0.", "Key in and execute at 100000" and lots of other stuff that newbies like me don't understand. Is there any easier documentation, or any documentation that explains what those terms mean?
Please help me.
Thanks.
This video got passed around at my (new!) job, and I think it's very
relevant to this list. It's Bill Joy talking about what he and Sun were
thinking about as the future of workstations and computing in general ca
1987. Some of the predictions were not accurate, but some were.
I'm curious what others think. https://www.youtube.com/watch?v=k8Pd6xYaHGU
- Dan C.
Hello there, I'm Resun, a teenage programmer, and I love C and UNIX.
I want to use the 5th edition UNIX operating system. From here, Index of /Archive/Distributions/Research/Dennis_v5 (tuhs.org)<https://www.tuhs.org/Archive/Distributions/Research/Dennis_v5/>, I've got a compressed file of the 5th edition UNIX. I want to run it with the SimH emulator, but there's no guide on how to install or use it. I'm using SimH on Windows 10.
Can someone please help me to use this system?
Thanks.
I never saw his 32V work, but I reimplemented his additive random number generator in my own work.
Not too many people can write a 35 page PhD thesis.
Fewer can do it for Knuth.
-Larry
As an offshoot of looking more closely at 32V, SysIII and 8th Edition I got interested in how each managed memory.
I’ve not deep-dived into the code yet, but from cursory inspection and searching past posts on this list, I get to the following summary:
- As has been documented widely, 32V essentially retained the V7 swapping architecture, merely increasing the maximum process size a bit versus the PDP-11 version.
- SysIII appears to have retained this, just cleaning up the source code a bit. I assume that all the V7/SysIII derivatives of the early 80’s were swapping systems.
- John Reiser further developed 32V memory management into a (reportedly elegant) demand paging system in 1980-1981, but this code appears lost.
- 3BSD/4BSD/4.1BSD developed 32V memory management into a full demand paging setup as well. This code base was dominant in the 1980-1985 era.
- 8th Edition pulled in the BSD VM code and is essentially identical to that in 4.1BSD. This choice was made because it was not a research interest and using a maintained code base freed up scarce time.
- SysV R1 appears to have retained the SysIII memory system.
- SysV R2 (floating about on the net, e.g. here https://github.com/ryanwoodsmall/oldsysv) seems to have used a new implementation.
Questions:
Is that about correct, or am I missing major elements?
Several places mention that there was also a setup that was still swapping in nature, but did not require allocations in core to be contiguous (“scatter paging”). Did this get used much in the era?
At first glance, the SysV R2 code seems shorter and cleaner than the early BSD code (~2000 vs. ~3000 sloc). Is this implementation perhaps a derivative of John Reiser’s work?
For clarity and ease of reference:
- The “Tour of the portable C compiler” paper is for instance here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3512
- A machine description for the VAX that matches with that paper is for instance in the SysIII source: https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd/cc/vax/pcc/ta…
- The new style description in 8th edition is here: https://www.tuhs.org/cgi-bin/utree.pl?file=V8/usr/src/cmd/ccom/vax/stin
- The program that translates the “stin” file to a “table.c” file is here: https://www.tuhs.org/cgi-bin/utree.pl?file=V8/usr/src/cmd/ccom/common/sty.y
====
Sometimes one thing leads to another.
Following the recent mention of some retro-brew 68K single board systems, I decided to build a CB030 board (in progress). I figure it is a rough proxy for a 1980 VAX and would allow for some experimentation with the 32V / SysIII / 8th edition code.
My first thought was to use the M68K compiler that is included with the Blit sources (see TUHS Archive for this), as I had used that before to explore some of the Blit source. That compiler is LP32, not ILP32 - which may be a source of trouble. Just changing the SZINT parameter yielded some issues, so I started looking at the PCC source.
This source does not have a “table.c” in the well known format as described in the “A tour of the portable C compiler” paper. Instead it uses a file “stin” which appears to be in a more compact format and is translated into a “table.c” file by a new pre-processor ("sty.y”). Then looking at the VAX compilers for 8th and 10th edition, these too use this “stin” file.
All the other m68K compilers (based on pcc) that I found appear to derive from the V7/32V/SysIII lineage, not from the 8th edition lineage.
A quick google did not yield much background or documentation on the STY format.
Anybody on this list that can shed some light on the history of the STY table and on how to use it? Any surviving reports or memos that would be useful?
Many thanks in advance
Paul
Does anyone here have an archive of SunOS patches? I'm looking for one
specific one, 100332-08, for Fortran 1.4. Feel free to reply off-list.
Thanks!
-Henry
Thanks to Emanuel Stiebler, I am now in possession of a VAXStation 4000
VLC. I've got OpenVMS installed, but, well, the SCSI2SD gives me two more
2GB disks (the fourth partition is the OpenVMS install CD).
I'd like to put Quasijarus on it.
Problem is, the VLC only supports, as far as I know, SCSI devices. I'm
quite happy to install Quasijarus under simh from an emulated SCSI tape to
an rz device and then just dd the resulting disk image over to the SD
card...but I can't work out how to do it.
This (as my simh ini file) works fine for getting to the emulated console:
set rz0 rzu
att rz0 quas.dsk
set rz4 tz30
att rz4 quas.tap
boot cpu
Problem is, quas.tap doesn't actually work; neither the prepackaged
4.3BSD-Quasijarus0c.tap nor one I make with mkdisttap.pl and the input
stand/miniroot/etc files.
I get this:
adam@m1-wired:~/Documents/src/quasi$ ./vaxstation4000vlc install.ini
VAXstation 4000-VLC (KA48) simulator V4.0-0 Current git commit id:
9bf37d3d
/Users/adam/Documents/src/quasi/install.ini-4> att rz4 quas.tap
RZ4: Tape Image 'quas.tap' scanned as SIMH format
/Users/adam/Documents/src/quasi/install.ini-5> boot cpu
Loading boot code from internal ka48a.bin
KA48-A V1.2-32B-V4.0
08-00-2B-B2-35-2C
16MB
?? 010 2 LCG 0086
?? 001 3 DZ 0032
?? 001 4 CACHE 0512
?? 001 7 IT 8706
?? 001 8 SYS 0128
?? 001 9 NI 0024
>>> show dev
VMS/VMB     ADDR    DEVTYPE  NUMBYTES  RM/FX  WP  DEVNAM             REV
-------     ----    -------  --------  -----  --  ------             ---
ESA0                                              08-00-2B-B2-35-2C
DKA0        A/0/0   DISK     2.14GB    FX         RZ23               0A18
DKA100      A/1/0   DISK     ......    FX         RZ23               0A18
DKA200      A/2/0   DISK     ......    FX         RZ23               0A18
DKA300      A/3/0   DISK     ......    FX         RZ23               0A18
MKA400      A/4/0   TAPE               RM         TZK50              1.1A
DKA500      A/5/0   DISK     ......    FX         RZ23               0A18
..HostID..  A/6     INITR
DKA700      A/7/0   DISK     ......    FX         RZ23               0A18
>>> boot mka400:
-MKA400
?48 ENDOFFILE
HALT instruction, PC: 00000B15 (MOVL (R11),SP)
Sooooo....
How do I make a bootable SCSI tape image from Quasijarus? Or,
alternatively, how can I create a bootable ISO image from the Quasijarus
installation files (and then either install under simh, or just dd to an SD
partition and boot from there, or even burn to an actual CD and install
from a SCSI CD-ROM drive)?
Adam
Adam Thornton:
I sat in on an undergrad course from [Dave Hanson] my first year of
grad school (94-95) and he taught it with lcc. I asked `why not
gcc' and he said, `gcc is 100,000 lines and I don't know what 90%
of them are doing; lcc is 10,000'.
===
My copy is indeed about 10K lines, not counting the code-generator
modules. Those are C files generated by a utility program lburg
from a template file. The three architectures supplied in the
distribution, for MIPS, SPARC, and X86, have template files of
about 900, 1200, and 700 lines respectively.
The template file for the VAX is about 2800 lines, but includes
some metalanguage of my own, interpreted by an awk script, to
generate extra rules for all the direct-store type-to-type
instructions. The C output from lburg for the other architectures
is 5000-6000 lines; for the VAX, after expansion by my awk
program and then by lburg, is nearly 20K.
Did someone say Complex Instruction Set?
Norman Wilson
Toronto ON
Dan Cross:
I seem to recall that LCC was also used, at least on 10th Ed. Am I
imagining things, or was that real?
===
Some of the earliest work on lcc was done in 1127; Chris
Fraser worked for the Labs for some years, Dave Hanson
collaborated from his appointment at Princeton. I believe
there was a /usr/bin/lcc. Some programs used it, either
because they needed some part of the ISO syntax (pcc2 was
pre-ISO) or just because.
I don't think that version of lcc used Reiser's c2 optimizer;
it generated reasonably good code by itself, including
emitting auto-increment/decrement instructions. Later
versions of lcc (such as that I later adopted as cc in
my personal V10 world) couldn't do that any more, so I
had to keep c2, and in fact to modify it to turn
addl3 a,b,(p)
mova 4(p),p
into
addl3 a,b,(p)+
(or maybe it was addl2 $4,p, I forget)
But that's another story which I'll tell only if asked,
and nothing to do with the original question.
I was waiting to see whether Steve Johnson would speak
up, because I'm not much of an expert; but yes, the VAX
C compiler in V8/V9/V10 is pcc2.
I think there are a few Research-specific hacks to add
additional stab info for pi(9.1) and on request insert
basic-block profiling for lcomp(1), but nothing major.
Maybe we did some hacking on c2 as well. I know I did
a lot of c2 cleanup later in my personal hacking in
Toronto, but I don't think I did much if any in New
Jersey. But that's independent of the compiler (modulo,
I think, some of my later fixes discovered by using c2
with a different compiler).
Norman Wilson
Toronto ON
Hello all again,
With a heavy heart I need to find a new home for the following beautiful
hardware:
- AlphaServer DS15 server
- Sun SPARC Enterprise T5140 1U rack server
- Sun Blade 10 mini tower
- HP Proliant DL380 G7 2U rack server
- DEC VT220 with screen, keyboard, and various adapter cables
Please note that the Sun T5140 and HP DL380 are deep (700mm for purposes
of installation in a rack).
I'm starting a new job next week and intend to focus on that and my
family. I've stopped working on various projects and I am vacating my
studio workshop, so I have a lot of things to give away or sell.
The above items are all FREE FOR COLLECTION ONLY (a car will be fine to
transport the above items).
I am located in London, UK. Post code is N15 4QL (Seven Sisters and
Tottenham Hale) in Haringey, London.
Kind regards,
Andrew
Hello,
Which was the first C compiler written outside Bell Labs?
I have a candidate in mind. Alan Snyder interned at Bell Labs in 1973.
Later at MIT, he wrote a C compiler for the PDP-10. This would have
been 1974-1975.
> On 09/04/2021 11:12, emanuel stiebler wrote:
> You're comparing a z80 SBC running CP/M? Or are you thinking of 68000 SBCs?
Z80 CP/M machines were still competitive in 1981-1983 (Osborne, Kaypro)
> I've never seen a 68k SBC. Have I missed out something along the way?
> Is there a community for 68k SBC's? Kind regards, Andrew
Well, Rob Pike designed one: http://doc.cat-v.org/bell_labs/blit/
I guess the original hacker scene for the 68K was around Hal Hardenberg’s newsletter: https://en.wikipedia.org/wiki/DTACK_Grounded
The ready-made 68K SBC’s only arrived 1984-1985:
https://en.wikipedia.org/wiki/Sinclair_QL (I think Linus Torvalds owned one)
https://en.wikipedia.org/wiki/Atari_ST
https://en.wikipedia.org/wiki/Macintosh_128K
https://en.wikipedia.org/wiki/Amiga_1000
All these machines are rather similar at the hardware level - 68K processor, RAM shared between CPU and display. Only the Amiga had a (simple) hardware GPU.
What set the SUN-1 apart was its MMU, which none of the above have.
What influenced the timing was probably that Motorola made the 68K more affordable by the mid-80’s.
Paul
Hello there! I want to use the Unix operating system, but I use Windows, and from TUHS I got to know that Apout can be installed on FreeBSD 2.x and 3.x, and on RedHat Linux 2.2. Can I use it on Windows 10?
Thank you.
... or the proceedings that it's in.
The paper is by Chris Torek entitled "A New Framework for Device Support in
Berkeley Unix" from Proceedings of the UKUUG, London, Summer 1990.
The Google hits I'm getting in the proceedings suggest I'd like a copy of
the full thing.
Closest I've found is from 2005 or 2006 on archive.org... Nothing in the
TUHS archives I was able to find....
This paper is referenced in Chris Torek's "Device Configuration in 4.4BSD"
which only ever seemed to circulate in draft form. That I have a pdf of
which I converted from a ps that was on NetBSD.org...
Any chance I can get a copy of it? Or will I need to figure out
inter-library loan again for the first time in almost 2 decades...
Warner
I know this is a strange place to ask, but it was suggested to me that some people who may know may follow this list...
Anyone on here used IBM's XLC in very old versions?
Anyone know what the argument -qdebug=austlib does?
I can't seem to find any documentation that says... It would have been an argument for the compiler shipping with AIX 3.2.5, I believe.
Thanks in advance!
Nemo Nusquam:
In this informal survey, I side with Dave, though I prefer to read in my
comfy well-lit chair with tea/coffee/cocoa. (A very similar thread was
aired on MO last year.)
=====
I should point out that, having at various times spilled hot
chocolate on a tablet and on a paper book, it is much simpler
to recover when it's a tablet.
And a cat can flip pages for you with either technology.
Norman Wilson
Toronto ON
(Curled up on the couch with my laptop, cat just left)
> On Fri, Apr 9, 2021 at 11:34 PM Ed Bradford <egbegb2 at gmail.com> wrote:
>
> > Why did a Ph.D., an academic, and a computer scientist not know about UNIX
> > in 1974 or so? 1976? In 1976, some (many?) universities had source code.
> >
>
> Some knowns/givens at the time ...
> 1.) He was a language/compiler type person -- he had created PL/M and that
> was really what he was originally trying to show off. As I understand it
> and has been reported in other interviews, originally CP/M was an attempt
> to show off what you could do with PL/M.
> 2.) The 8080/Z80 S-100 style machines were quite limited; they had very
> little memory, no MMU, and extremely limited storage in the 8" floppies
> 3.) He was familiar with RT/11 and DOS-11, many Universities had it on
> smaller PDP-11s as they ran on an 11/20 without an MMU also with limited
> memory, and often used simple (primarily tape) storage (DECtape and
> Cassette's) as the default 'laboratory' system, replacing the earlier PDP-8
> for the same job which primarily ran DOS-8 in those settings.
> 4.) Fifth and Sixth Edition of Unix was $150 for universities, but to run it
> took at least a larger 11/40 or 11/45, with a minimum of 64Kbytes to boot,
> and really needed the full 256Kbytes to run acceptably, and the cost of a 2.5M
> byte RK05 disk was much greater per byte than tape -- thus the base system
> it took to run it was at least $60K (in 1975 dollars) and typically cost
> about two to four times that in practice. Remember the cost of
> acquisition of the HW dominated many (most) choices.
>
> I'll take a guess, but it is only that. I suspect he saw the S-100
> system as closer to a PDP-11/20 'lab' system than as a small
> timesharing machine. He set out with CP/M to duplicate the functionality
> of RT/11. Even the naming of the commands was the same as what DEC
> used (e.g. PIP) and used the basic DEC style command syntax and parsing
> rules.
That is about it. CP/M predates the Altair / S-100 bus, and was designed for a heavily hacked Intellec-8 system.
CP/M was developed on a PDP-10 based 8080 simulator in 1974. It was developed for the dual purposes of creating a “native” PL/M compiler and to create the “astrology machine”.
The first versions of CP/M were written (mostly) in PL/M. To some extent, in 1974 both Unix and CP/M were research systems, with a kernel coded in a portable language — but aimed at very different levels of hardware capability.
In 1975 customers started to show up and paid serious money for CP/M (Omron, IMSAI) - from that point on the course for Kildall / DRI was set.
The story is here: https://computerhistory.org/blog/in-his-own-words-gary-kildall/?key=in-his-…
> I wonder. IBM introduced the IBM PC in August of 1981.
> That was years after a non-memory managed version of
> Unix was created by Heinze Lycklama, LSX. Is anyone
> on this list familiar with Bell Labs management thoughts
> on selling IBM on LSX rather than "dos"?
IBM famously failed to buy the well-established CP/M in
1980. (CP/M had been introduced in 1974, before the
advent of the LSI-11 on which LSX ran.) By then IBM had
settled on Basic and Intel. I do not believe they ever
considered Unix and DEC, nor that AT&T considered
selling to IBM. (AT&T had--fortunately--long since been
rebuffed in an attempt to sell to DEC.)
Doug
I'd totally subscribe to your newsletter :P
that's cool, there is a tape dump of the old stuff on bitsavers... the
UniSoft port I think was the original stuff before Bill showed up?
http://bitsavers.trailing-edge.com/bits/Sun/UniSoft_1.3/
along with some ROM images
http://bitsavers.trailing-edge.com/bits/Sun/sun1/
but more pictures and whatnot are always interesting!
-----Original Message-----
From: Earl Baugh
To: Clem Cole
Cc: tuhs(a)minnie.tuhs.org
Sent: 4/10/21 4:02 AM
Subject: Re: [TUHS] SUN (Stanford University Network) was PC Unix
I’ve done a fair amount of research on Sun 1’s since I have one ( and it
has one of the original 68k motherboards with the original proms ).
It’s on my list to create a Sun 1 registry along the lines of the Apple
1 registry. (https://www.apple1registry.com/)
Right now, I can positively identify 24 machines that still exist. Odd
serial numbering makes it very hard to know exactly how many they made.
Cisco was sued by Stanford over the Sun 1. From what I read, they made
off with some Stanford property ( SW and HW ). Wikipedia mentions this (
and I have some supporting documents as well ). They ended up licensing
from Stanford as part of the settlement. From what I’ve gathered, VLSI
licensed the design from Stanford, not Andy directly. However, they only
produced a few machines and Andy wasn’t all that happy with that. That
was one of the impetuses for getting Sun formed and licensing the same
design. I also believe another company (or 2) licensed the design but
either didn’t produce any, or very very few, machines.
You can tell a difference between VLSI boards and the Sun Microsystems
boards because the SUN is all capitalized on the VLSI boards ( and is
Sun on the others ). At least on the few I’ve seen pictures of.
The design was also licensed to SGI — I’ve seen a prototype SGI board
that’s the same thing with a larger PCB to allow some extensions.
And the original CPU boards didn’t have an MMU. They could only run Sun
OS up to version 0.9, I believe. When Bill Joy got there, again
from what I’ve gathered, he wanted to bring more of the BSD code over
and they had to change the system board. This is why you see the Sun
1/150 model number ( as an upgrade to the original Sun 1/100 designation
). The rack mounted Sun 1/120 was changed to the 1/170. The same
upgraded CPU board was used in the Sun 2/120 at least initially.
The original Sun OS wasn’t BSD based. It was a V32 variant I believe.
And the original CPU boards were returned to Sun, I believe as part of
the upgrade from the 1/100 to the 1/150. (Given people had just paid
$10,000 for a machine, having to replace the entire machine would’ve been
bad from a customer perspective.) Sun did board upgrade trade-ups after
this ( I worked at a company and we purchased an upgrade to upgrade a
Sun 3/140 to a Sun 3/110 — the upgrade consisted of a CPU board swap and
a different badge for the box )
Sun then, from what I can tell, sold the original CPU boards to a German
company that was producing a V32 system. They changed out the PROMs but
you can see the Sun logo and part numbers on the boards
I could go on and on about this topic.
A Sun 1 was a “bucket list” machine for me - and I am still happy that
some friends were willing to take a 17 hour road trip from Atlanta to
Minnesota to pick mine up.
After unparking the drive heads it booted up, first try (I was only
willing to try that without a bunch of testing work because I have some
spare power supplies and a couple plastic tubs of Multibus boards that
came with it).
Earl
Sent from my iPhone
On Apr 9, 2021, at 11:13 AM, Clem Cole <clemc(a)ccc.com> wrote:
On Fri, Apr 9, 2021 at 10:10 AM Tom Lyon <pugs(a)ieee.org> wrote:
Prior to Sun, Andy had a company called VLSI Technology, Inc. which
licensed SUN designs to 5-10 companies, including Forward Technology and
CoData, IIRC. The SUN IPR effectively belonged to Andy, but I don't
know what kind of legal arrangement he had with Stanford. But the
design was not generally public, and relied on CAD tools only extant on
the Stanford PDP-10. Cisco did start with the SUN-1 processor, though
whether they got it from Andy or direct from Stanford is not known to
me. When Cisco started (1984), the Sun-1 was long dead already at Sun.
Bits passing in the night -- this very much is what I remember,
experienced.
Is there any solid info on the Stanford SUN boards? I just know the SUN-1
was based around them, but they aren't the same thing? And apparently cisco
used them as well but 'borrowed' someone's RTOS design as the basis for IOS?
There was some lawsuit and Stanford got cisco network gear for years for
free but they couldn't take stock for some reason?
I see more and more of these CP/M SBC's on ebay/online and it seems odd that
there are no 'DIY' SUN boards... Or were they not all that open, hence why
they kind of disappeared?
-----Original Message-----
From: Jon Steinhart
To: tuhs(a)minnie.tuhs.org
Sent: 4/8/21 7:04 AM
Subject: Re: [TUHS] PC Unix
Larry McVoy writes:
> On Thu, Apr 08, 2021 at 12:18:04AM +0200, Thomas Paulsen wrote:
> > >From: John Gilmore <gnu(a)toad.com>
> > >Sun was making 68000-based systems in 1981, before the IBM PC was
created.
> >
> > Sun was founded on February 24, 1982. The Sun-1 was launched in May
1982.
> >
> > https://en.wikipedia.org/wiki/Sun_Microsystems
> > https://en.wikipedia.org/wiki/Sun-1
>
> John may be sort of right, I bet avb was building 68k machines at
> Stanford before SUN was founded. Sun stood for Stanford University
> Network I believe.
>
> --lm
Larry is correct. I remember visiting a friend of mine, Gary Newman,
who was working at Lucasfilm in '81. He showed me a bunch of stuff
that they were doing on Stanford University Network boards.
Full disclosure, it was Gary and Paul Rubinfeld who ended up at DEC
and I believe was the architect for the microVax who told me about
the explorer scout post at BTL which is how I met Heinz.
Jon
> From: Jason Stevens
> apparently cisco used them as well but 'borrowed' someone's RTOS design
> as the basis for IOS? There was some lawsuit and Stanford got cisco
> network gear for years for free but they couldn't take stock for some
> reason?
I don't know the whole story, but there was some kind of scandal; I vaguely
recall stories about 'missing' tapes being 'found' under the machine room
raised floor...
The base software for the Cisco multi-protocol router was code done by William
(Bill) Yeager at Stanford (it handled IP and PUP); I have a vague memory that
his initially ran on PDP-11's, like mine. (I think their use of that code was
part of the scandal, but I've forgotten the details.)
> From: Tom Lyon
> the design ... relied on CAD tools only extant on the Stanford PDP-10.
Sounds like SUDS?
Noel
> I developed LSX at Bell Labs in Murray Hill NJ in the 1974-1975
> timeframe.
> An existing C compiler made it possible without too much effort. The
> UNIX
> source was available to Universities by then. I also developed Mini-UNIX
> for the PDP11/10 (also no memory protection) in the 1976 timeframe.
> This source code was also made available to Universities, but the source
> code for LSX was not.
>
> Peter Weiner founded INTERACTIVE Systems Corp. (ISC) in June 1977,
> the first commercial company to license UNIX source from Western
> Electric for $20,000. Binary licenses were available at the same time.
> I joined ISC in May of 1978 when ISC was the first company to offer
> UNIX support services to third parties. There was never any talk about
> licensing UNIX source code from Western Electric (WE) from the founding
> of ISC to when the Intel 8086 micro became available in 1981.
> DEC never really targeted the PC market with the LSI-11 micro,
> and WE never made it easy to license binary copies of the UNIX
> source code, So LSX never really caught on in the commercial market.
> ISC was in the business of porting the UNIX source code to other
> computers, micro to mainframe, as new computer architectures
> were developed.
>
> Heinz
The Wikipedia page for ISC has the following paragraphs:
"Although observers in the early 1980s expected that IBM would choose Microsoft Xenix or a version from AT&T Corporation as the Unix for its microcomputer, PC/IX was the first Unix implementation for the IBM PC XT available directly from IBM. According to Bob Blake, the PC/IX product manager for IBM, their "primary objective was to make a credible Unix system - [...] not try to 'IBM-ize' the product. PC-IX is System III Unix." PC/IX was not, however, the first Unix port to the XT: Venix/86 preceded PC/IX by about a year, although it was based on the older Version 7 Unix.
The main addition to PC/IX was the INed screen editor from ISC. INed offered multiple windows and context-sensitive help, paragraph justification and margin changes, although it was not a fully fledged word processor. PC/IX omitted the System III FORTRAN compiler and the tar file archiver, and did not add BSD tools like vi or the C shell. One reason for not porting these was that in PC/IX, individual applications were limited to a single segment of 64 kB of RAM.
To achieve good filesystem performance, PC/IX addressed the XT hard drive directly, rather than doing this through the BIOS, which gave it a significant speed advantage compared to MS-DOS. Because of the lack of true memory protection in the 8088 chips, IBM only sold single-user licenses for PC/IX.
The PC/IX distribution came on 19 floppy disks and was accompanied by a 1,800-page manual. Installed, PC/IX took approximately 4.5 MB of disk space. An editorial by Bill Machrone in PC Magazine at the time of PC/IX's launch flagged the $900 price as a show stopper given its lack of compatibility with MS-DOS applications. PC/IX was not a commercial success although BYTE in August 1984 described it as "a complete, usable single-user implementation that does what can be done with the 8088", noting that PC/IX on the PC outperformed Venix on the PDP-11/23.”
It seems like Venix/86 came out in Spring 1983 and PC/IX in Spring 1984. I guess by then RAM had become cheap enough that running in 64KB of core was no longer a requirement and LSX and MX did not make sense anymore. Does that sound right?
I heard a while back, that the reason that Microsoft has avoided *ix so
meticulously, was that back when they sold Xenix to SCO, as part of the
deal Microsoft signed a noncompete agreement that prevented them from
selling anything at all similar to *ix.
True?
I first encountered the fuzz-testing work of Barton Miller (Computer
Sciences Department, University of Wisconsin in Madison) and his
students and colleagues in their original paper on the subject
An empirical study of the reliability of UNIX utilities
Comm. ACM 33(12) 32--44 (December 1990)
https://doi.org/10.1145/96267.96279
which was followed by
Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services
University of Wisconsin CS TR 1268 (18 February 1995)
ftp://ftp.cs.wisc.edu/pub/techreports/1995/TR1268.pdf
and
An Empirical Study of the Robustness of MacOS Applications Using Random Testing
ACM SIGOPS Operating Systems Review 41(1) 78--86 (January 2007)
https://doi.org/10.1145/1228291.1228308
I have used their techniques and tools many times in testing my own,
and other, software.
By chance, I found today in Web searches on another subject that
Miller's group has a new paper in press in the journal IEEE
Transactions on Software Engineering:
The Relevance of Classic Fuzz Testing: Have We Solved This One?
https://doi.org/10.1109/TSE.2020.3047766
https://arxiv.org/abs/2008.06537
https://ieeexplore.ieee.org/document/9309406
I track that journal at
http://www.math.utah.edu/pub/tex/bib/ieeetranssoftwengYYYY.{bib,html}
[YYYY = 1970 to 2020, by decade], but the new paper has not yet been
assigned a journal issue, so I had not seen it before today.
The Miller group's work over 33 years has examined the reliability of
common Unix tools in the face of unexpected input, and in the original
work that began in 1988, they were able to demonstrate a significant
failure rate in common, and widely used, Unix-family utilities.
Despite wide publicity of their first paper, things have not gotten much
better, even with the reimplementation of software tools in `safe'
languages, such as Rust.
In each paper, they analyze the reasons for the exposed bugs, and
sadly, much the same reasons still exist in their latest study, and in
several cases, have been introduced into code since their first work.
The latest paper also contains mention of Plan 9, which moved
bug-prone input line editing into the window system, and of bugs in
pdftex (they say latex, but I suspect they mean pdflatex, not latex
itself: pdflatex is a macro-package enhanced layer over the pdftex
engine, which is a modified TeX engine). The latter are significant
to me and my friends and colleagues in the TeX community, and for the
TeX Live 2021 production team
http://www.math.utah.edu/pub/texlive-utah/
especially because this year, Don Knuth revisited TeX and Metafont,
produced new bug-fixed versions of both, plus updated anniversary
editions of his five-volume Computers & Typesetting book series. His
recent work is described in a new paper announced this morning:
The \TeX{} tuneup of 2021
TUGboat 42(1) ??--?? February 2021
https://tug.org/TUGboat/tb42-1/tb130knuth-tuneup21.pdf
Perhaps one or more list members might enjoy the exercise of applying
the Miller group's fuzz tests (all of which are available from a Web
site
ftp://ftp.cs.wisc.edu/paradyn/fuzz/fuzz-2020/
as discussed in their paper) to 1970s and 1980s vintage Unix systems
that they run on software-simulated CPUs (or rarely, on real vintage
hardware).
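The heart of the classic technique is simple enough to sketch in a
few lines of Bourne shell. This is only an illustration, not the
Miller group's actual harness (theirs also drives interactive and
windowed programs); "prog" stands for whatever utility is under
test, and on a vintage system one would substitute a small
random-byte generator for /dev/urandom:

  # Feed random input to a utility; in sh, an exit status above
  # 128 means the child was killed by a signal, i.e., it crashed.
  for try in 1 2 3 4 5
  do
      dd if=/dev/urandom of=fuzz.in bs=1024 count=100 2>/dev/null
      prog < fuzz.in > /dev/null 2>&1
      if test $? -gt 128
      then
          echo "prog died on try $try; input saved in fuzz.in"
          break
      fi
  done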
The Unix tools of those decades were generally much smaller (in lines
of code), and most were written by the expert Unix pioneers at Bell
Labs. It would be of interest to compare the tool failure rates in
vintage Unix with tool versions offered by commercial distributions,
the GNU Project, and the FreeBSD community, all of which are treated
in the 2021 paper.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I'm not sure why people, even in a group devoted to history like
ours, focus so much on whether a journal is issued in print or
only electronically. The latter has become more and more common.
On one hand, I too find that if something is available only
electronically I'm more likely to put off reading it, probably
because back issues don't pile up as visibly.
On the other, in recent years I've been getting behind in my
reading of periodicals of all sorts, and so far as I can tell
that has nothing to do with whether a given periodical arrives
on paper. If anything, electronic access makes it more likely
I'll be able to catch up, because it's easier to carry a bunch
of back issues around on a USB stick or loaded into a tablet or
the like than to lug around lots of hardcopy. The biggest
burden has been that imposed by PDF files, which are often
carefully constructed to be appallingly cumbersome to read
unless viewed on a letter-paper/A4-sized screen (or printed
out). HTML used to be better, though the ninnies who design
web pages to look like magazine ads have spoiled a lot of
that over the years.
Since I often want to read PDF files when travelling (e.g.
conference proceedings while at the conference) I finally
invested in a large-screened tablet.
Even so, I have a big pile of back issues of ;login:, CACM
(until ACM's policies, having little to do with the journal,
recently drove me away), Rail Passenger Association News,
and Consumer Reports waiting to be read. And sometimes I'm
months behind on this list.
My advice to those who find electronic-only publications
cumbersome is to invest in either a good tablet or a good
printer. I have and use both. There's no substitute for
a large, high-quality screen, and sometimes there's no
substitute for paper that I can flip back and forth, but
I'm fine with supplying those myself.
I'm still looking for a nice brass-bound leather tablet case,
though.
Norman Wilson
Toronto ON
I'm having a bit of trouble with a couple of RD52 drives, and I suspect
that I need a newer formatting program. The formatter floppy in my XXDP
kit includes ZRQC-C-0 (ZRQCC0.BIC), and I understand that revision C is
really old, and I should be running at least F, preferably H.
Does anyone have (or can make) an image of a newer version of the RX50
formatter floppy? I've got an RX50 drive in my 11/83 with 2.11BSD, so
it would be a simple matter to make a bootable floppy there if I just
had the bits to write... :)
-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science. --Alan Kay
> IBM famously failed to buy the well-established CP/M in
> 1980. (CP/M had been introduced in 1974, before the
> advent of the LSI-11 on which LSX ran.) By then IBM had
> settled on Basic and Intel. I do not believe they ever
> considered Unix and DEC, nor that AT&T considered
> selling to IBM. (AT&T had--fortunately--long since been
> rebuffed in an attempt to sell to DEC.)
>
> Doug
Besides all the truth or legend around flying and signing NDAs, I think there were clear economic reasons for ending up with Microsoft’s DOS, and for the precursor to that: picking the 8088.
[1] By 1980 there were an estimated 8,000 software packages for CP/M available, many aimed at small business. IBM was targeting that. The availability of source level converters for 8080 code to 8088 code made porting economically feasible for the (cottage) ISV’s. This must have been a strong argument in favour of picking the 8088 for the original PC.
[2] In line with their respective tried and tested business models, Digital Research offered CP/M-86 with a per-copy license structure. Microsoft offered QDOS with a one-off license structure. The latter was economically more attractive to IBM. I don’t think either side expected clones to happen the way they did, although they did probably factor in the appearance of non-compatible work-alikes.
Although some sources suggest that going with the 68000 and/or Unix were considered, it would have left the new machine without an instant base of affordable small business applications. Speed to market was a leading paradigm for the PC's design team.
> There is some information and demos of the early 8086/80286 Xenix,
> including the IBM rebranded PC Xenix 1.0 on pcjs.org
>
> https://www.pcjs.org/software/pcx86/sys/unix/ibm/xenix/1.0/
>
> And if you have a modern enough browser you can run them from the browser as
> well!
>
> It's amazing that CPU's are fast enough to run interpreted emulation that is
> faster than the old machines of the day.
That is a cool link. At the bottom of the page are two images of floppy disks. These show an ISC copyright notice. Maybe this is because the floppies contained “extensions” rather than Xenix itself.
===
Note that "IBM Xenix 1.0" is actually the same as MS Xenix 3.0, and arrived after MS Xenix had been available for 4 years (initially for the PDP-11 and shortly after for other CPU's):
http://seefigure1.com/2014/04/15/xenixtime.html
Rob Ferguson writes:
"From 1986 to 1989, I worked in the Xenix group at Microsoft. It was my first job out of school, and I was the most junior person on the team. I was hopelessly naive, inexperienced, generally clueless, and borderline incompetent, but my coworkers were kind, supportive and enormously forgiving – just a lovely bunch of folks.
Microsoft decided to exit the Xenix business in 1989, but before the group was dispersed to the winds, we held a wake. Many of the old hands at MS had worked on Xenix at some point, so the party was filled with much of the senior development staff from across the company. There was cake, beer, and nostalgia; stories were told, most of which I can’t repeat. Some of the longer-serving folks dug through their files to find particularly amusing Xenix-related documents, and they were copied and distributed to the attendees.
If memory serves, it was a co-operative effort between a number of the senior developers to produce this timeline detailing all the major releases of Xenix.
I have no personal knowledge of the OEM relationships before 1986, and I do know that there were additional minor ports and OEMs that aren’t listed on the timeline (e.g. NS32016, IBM PS/2 MCA-bus, Onyx, Spectrix), but to the best of my understanding this hits the major points.
Since we’re on the topic, I should say that I’ve encountered a surprising amount of confusion about the history of Xenix. So, here are some things I know:
Xenix was a version of AT&T UNIX, ported and packaged by Microsoft. It was first offered for sale to the public in the August 25, 1980 issue of Computerworld.
It was originally priced between $2000 and $9000 per copy, depending on the number of users.
MS owned the Xenix trademark and had a master UNIX license with AT&T, which allowed them to sub-licence Xenix to other vendors.
Xenix was licensed by a variety of OEMs, and then either bundled with their hardware or sold as an optional extra. Ports were available for a variety of different architectures, including the Z-8000, Motorola 68000, NS16032, and various Intel processors.
In 1983, IBM contracted with Microsoft to port Xenix to their forthcoming 80286-based machines (codenamed “Salmon”); the result was “IBM Personal Computer XENIX” for the PC/AT.
By this time, there was growing retail demand for Xenix on IBM-compatible personal computer hardware, but Microsoft made the strategic decision not to sell Xenix in the consumer market; instead, they entered into an agreement with a company called the Santa Cruz Operation to package, sell and support Xenix for those customers.
Even with outsourcing retail development to SCO, Microsoft was still putting significant effort into Xenix:
• Ports to new architectures, the large majority of the core kernel and driver work, and extensive custom tool development were all done by Microsoft. By the time of the Intel releases, there was significant kernel divergence from the original AT&T code.
• The main Microsoft development products (C compiler, assembler, linker, debugger) were included with the Intel-based releases of Xenix, and there were custom internally-developed toolchains for other architectures. Often, the latest version of the tools appeared on Xenix well before they were available on DOS.
• The character-oriented versions of Microsoft Word and Multiplan were both ported to Xenix.
• MS had a dedicated Xenix documentation team, which produced custom manuals and tutorials.
As late as the beginning of 1985, there was some debate inside of Microsoft whether Xenix should be the 16-bit “successor” to DOS; for a variety of reasons – mostly having to do with licensing, royalties, and ownership of the code, but also involving a certain amount of ego and politics – MS and IBM decided to pursue OS/2 instead. That marked the end of any further Xenix investment at Microsoft, and the group was left to slowly atrophy.
The final Xenix work at Microsoft was an effort with AT&T to integrate Xenix support into the main System V.3 source code, producing what we unimaginatively called the “Merged Product” (noted by the official name of “UNIX System V, r3.2” in the timeline above).
Once that effort was completed, all Intel-based releases of UNIX from AT&T incorporated Xenix support; in return, Microsoft received royalties for every copy of Intel UNIX that AT&T subsequently licensed.
It will suffice, perhaps, to simply note that this was a good deal for Microsoft.”
It would be so cool if these early (1980-1984) Xenix versions were available for historical examination and study.
I read the news, and I could not believe it.
It's April 1st, ain't it?
But then, this looks like it is dated March 31. So it could be for real.
Behold: https://www.theregister.com/2021/03/31/ibm_redhat_xinuos/
The PDF also is dated March 31: https://regmedia.co.uk/2021/03/31/xinuos_complaint.pdf
It's hard to believe someone would go to the trouble of writing 57 pages of
legalese just to make a damn joke.
"
Xinuos, formed around SCO Group assets a decade ago under the name
UnXis and at the time disavowing any interest in continuing SCO's
long-running Linux litigation, today sued IBM and Red Hat for
alleged copyright and antitrust law violations.
"First, IBM stole Xinuos' intellectual property and used that stolen
property to build and sell a product to compete with Xinuos itself,"
the US Virgin Islands-based software biz claims in its complaint
[PDF]. "Second, stolen property in IBM's hand, IBM and Red Hat
illegally agreed to divide the relevant market and use their growing
market powers to victimize consumers, innovative competitors, and
innovation itself."
The complaint further contends that after the two companies
conspired to divide the market, IBM then acquired Red Hat to
solidify its position.
SCO Group in 2003 made a similar intellectual property claim. It
argued that SCO Group owned the rights to AT&T's Unix and UnixWare
operating system source code, that Linux 2.4.x and 2.5.x were
unauthorized derivatives of Unix, and that IBM violated its
contractual obligations by distributing Linux code.
That case dragged on for years, and drew a fair amount of attention
when SCO Group said it would sue individual Linux users for
infringement. Though SCO filed for bankruptcy in 2007 and some of
the claims have been dismissed, its case against IBM remains
unresolved.
There was a status report filed on February 16, 2018, detailing
remaining claims and counterclaims. And in May last year, Magistrate
Judge Paul Warner was no longer assigned to oversee settlement
discussions. But SCO Group v. IBM is still open.
"
Either way, someone is fooling us hard.
PS: OK, it seems it's for real: https://www.xinuos.com/xinuos-sues-ibm-and-red-hat/
I need to check my stock of popcorn, then...
My take: it's obvious they want to be a nuisance so that IBM settles the
case, so they then can go back home with some fresh cash. I hope IBM goes
ballistic on them to the bitter end, and finally sends the zombie back to
its grave. But then, IBM now has its new RedHat business to protect, so it
can get interesting.
--
Josh Good
> I had been debating leaving Usenix for several years already;
> the move to soft copy ;login: clinched it for me.
I have been a loyal nonmember of ACM ever since the CACM was
converted from a journal to a magazine. Usenix didn't strike quite
such a decisive blow when it abandoned Computing Systems.
;login: remains as a Cheshire grin. It remains to be seen whether
I'll continue to scan it in its non-tactile form.
Doug
There is some information and demos of the early 8086/80286 Xenix,
including the IBM rebranded PC Xenix 1.0 on pcjs.org
https://www.pcjs.org/software/pcx86/sys/unix/ibm/xenix/1.0/
And if you have a modern enough browser you can run them from the browser as
well!
It's amazing that CPU's are fast enough to run interpreted emulation that is
faster than the old machines of the day.
-----Original Message-----
From: Clem Cole
To: M Douglas McIlroy
Cc: TUHS main list
Sent: 4/7/21 1:09 AM
Subject: Re: [TUHS] PC Unix (had been How to Kill a Technical Conference
Doug -- IIRC IBM private-labeled a version of Xenix that Microsoft put
out, although I think it required a PC/AT (286)?
On Tue, Apr 6, 2021 at 11:36 AM M Douglas McIlroy <
m.douglas.mcilroy(a)dartmouth.edu <mailto:m.douglas.mcilroy@dartmouth.edu>
> wrote:
> I wonder. IBM introduced the IBM PC in August of 1981.
> That was years after a non-memory managed version of
> Unix was created by Heinz Lycklama, LSX. Is anyone
> on this list familiar with Bell Labs management thoughts
> on selling IBM on LSX rather than "dos"?
IBM famously failed to buy the well-established CP/M in
1980. (CP/M had been introduced in 1974, before the
advent of the LSI-11 on which LSX ran.) By then IBM had
settled on Basic and Intel. I do not believe they ever
considered Unix and DEC, nor that AT&T considered
selling to IBM. (AT&T had--fortunately--long since been
rebuffed in an attempt to sell to DEC.)
Doug
Rich Salz:
> > Honeyman: "Pathalias, or the care and feeding of relative addresses"
Dave Horsfall:
> Are you sure that peter honeyman wrote "Pathalias" and not "pathalias"?
> He seemed to have an aversion to using his shift key.
Dan Cross:
He actually wrote it as, "PATHALIAS _or_ The Care and Feeding of Relative
Addresses". Plenty of shift to go around. :-)
====
Peter probably had a graduate student hold the caps key for him.
Norman Wilson
Toronto ON
Used to honey bitching
Arnold:
But for several years now I have been increasingly dissatisfied with the
research nature of most of the articles. Very few of them are actually
useful (or even interesting) to me in a day-to-day sense.
===
I guess it depends on your interests, and also on what you look at.
I've got way behind in reading ;login:, but have been regularly
attending conferences: the Annual Technical Conference (ATC) and
some workshops (HotStorage, HotCloud) that are usually co-located;
LISA. I still find plenty to interest me, both in talks and in
the hallway tracks, though LISA has been drying up over the years
(and it's clear that USENIX know that too and are working on
whether it should just be subsumed into the already-burgeoning
SREcons).
As I say, interests differ, but I've learned plenty of new things
about OS and networking design and implementation tradeoffs,
security at many levels, file systems, and storage devices.
Thanks to COVID, USENIX-sponsored conferences have all been
online for the past year and are expected to stay so through
the end of 2021. For obvious reasons that greatly reduces
the expenses of the conferences, so the registration fees are
about 10% of normal. Thanks to that, I've been able to sample
conferences I've never had time or money to travel to, like Security
and FAST (file systems and storage). It's been well worth my
time and money even though the money comes out of my own pocket.
UNIX history is not part of the mainstream USENIX world these
days, alas--I was disappointed that there was no official 50th-
birthday party two years ago in Seattle (though the not-officially-
sponsored one at LCM organized by Clem and others was a fine time,
and USENIX had no objection to hosting announcements of it).
I should point out that the only time I've met Our Esteemed
Leader and Listrunner in person was at a USENIX conference, where
he held a session to show off his reconstructed very-early PDP-11
UNIX from the tape Dennis found under the floor of the UNIX Room.
I too would like to see the organization harbour some less-formal
meetings or publications. The way to make that happen would
be to run for the Board and to actively sponsor such stuff (with
care about who is selected for the real work to avoid the problems
Ted describes). Maybe that's a good idea, or maybe it's better
to let the Linux and BSD worlds do their own thing. Either way
I think what USENIX does is worthwhile. I've been a member for
40 years this year, and although it's not the same organization
as it was in the early 1980s, neither is it the same world it
lives in. I still think they do worthwhile work and I am proud
to continue to support them, even though I'm not a published
academic researcher, just an old-style systems hack and sysadmin
from the ancient days when those were inseparable.
Norman Wilson
Toronto ON
All of the great discussion on this list about editors has made me curious about the data structures used in the various Unix editors.
I found a great discussion of this for sam in Rob Pike’s publication “The Text Editor sam.”
I’d like to read similar discussions of the data structures for ed, em, ex/vi. If anyone has suggestions of references, they would be very welcome!
Similarly, if there are any pointers to references on some other data structures in editors like TECO, QED and E, I’d welcome them as well.
All the best,
David
...........
David C. Brock
Director and Curator
Software History Center
Computer History Museum
computerhistory.org/softwarehistory<http://computerhistory.org/softwarehistory>
Email: dbrock(a)computerhistory.org
Twitter: @dcbrock
Skype: dcbrock
1401 N. Shoreline Blvd.
Mountain View, CA 94943
(650) 810-1010 main
(650) 810-1886 direct
Pronouns: he, him, his
From spaf(a)cs.purdue.EDU Thu Apr 4 23:11:22 1991
Path: ai-lab!mintaka!mit-eddie!wuarchive!usc!apple!amdahl!walldrug!moscvax!perdue!spaf
From: spaf(a)cs.purdue.EDU (Gene Spafford)
Newsgroups: news.announce.important,news.admin
Subject: Warning: April Fools Time again (forged messages on the loose!)
Message-ID: <4-1-1991(a)medusa.cs.purdue.edu>
Date: 1 Apr 91 00:00:00 GMT
Expires: 1 May 91 00:00:00 GMT
Followup-To: news.admin
Organization: Dept. of Computer Sciences, Purdue Univ.
Lines: 25
Approved: spaf(a)cs.purdue.EDU
Xref: ai-lab news.announce.important:19 news.admin:8235
Warning: April 1 is rapidly approaching, and with it comes a USENET
tradition. On April Fools day comes a series of forged, tongue-in-cheek
messages, either from non-existent sites or using the name of a Well Known
USENET person. In general, these messages are harmless and meant as a joke,
and people who respond to these messages without thinking, either by flaming
or otherwise responding, generally end up looking rather silly when the
forgery is exposed.
So, for the next few weeks, if you see a message that seems completely out
of line or is otherwise unusual, think twice before posting a followup
or responding to it; it's very likely a forgery.
There are a few ways of checking to see if a message is a forgery. These
aren't foolproof, but since most forgery posters want people to figure it
out, they will allow you to track down the vast majority of forgeries:
o Russian computers. For historic reasons most forged messages have
as part of their Path: a non-existent (we think!) russian
computer, either kremvax or moscvax. Other possibilities are
nsacyber or wobegon. Please note, however, that walldrug is a real
site and isn't a forgery.
o Posted dates. Almost invariably, the date of the posting is forged
to be April 1.
o Funky Message-ID. Subtle hints are often lodged into the
Message-Id, as that field is more or less an unparsed text string
and can contain random information. Common values include pi,
the phone number of the red phone in the white house, and the
name of the forger's parrot.
o subtle mispellings. Look for subtle misspellings of the host names
in the Path: field when a message is forged in the name of a Big
Name USENET person. This is done so that the person being forged
actually gets a chance to see the message and wonder when he
actually posted it.
Forged messages, of course, are not to be condoned. But they happen, and
it's important for people on the net not to over-react. They happen at this
time every year, and the forger generally gets their kick from watching the
novice users take the posting seriously and try to flame their tails off. If
we can keep a level head and not react to these postings, they'll taper off
rather quickly and we can return to the normal state of affairs: chaos.
Thanks for your support.
Gene Spafford, Net.God (and probably tired of seeing this message)
> From: David C. Brock
> I'd like to read similar discussions of the data structures for ed, em,
> ex/vi. ... Similarly, if there are any pointers to references on some
> other data structures in editors like TECO, QED and E, I'd welcome them
> as well.
I don't have any discussions I can point you at, but I do have source - for
two things which are somewhat older than most of the ones you mention
(ex/vi/etc).
The first is a TECO from the fourth floor V6 machine (DSSR/RTS) at Tech Sq at
MIT:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/teco
There's some rudimentary documentation in there, in teco.doc, but don't expect
too much. You'll have to rely on the source, which is in MACRO-11 - but it
seems to be reasonably well commented. This actually predates V6; it was
originally written for an MIT OS called DELPHI, which ran on an -11/45 which
was the main EECS undergrad machine. At some point (probably post the Unix
port), it was modified to have '^R mode', which was a WYSIWYG display mode a
lot like the one in the ITS TECO in which EMACS was first written.
I have also put up the Montgomery Emacs for Unix:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/emacs
This is the version we were running on the 5th floor MIT V6 machine (CSR),
which by that point had absorbed a few V7isms (e.g. some ioctl() stuff). So
don't expect to be able to compile and run it, without a fair amount of
work. (I vaguely recall that it needs I+D space, so maybe not on a /23 at
all.) But at least the source is in C, so you can read it. I don't think
there's an un-modified version online (i.e. the original Montgomery source),
alas.
Noel
This just came in:
ACM has named Alfred Vaino Aho and Jeffrey David Ullman recipients of
the 2020 ACM A. M. Turing Award
https://awards.acm.org/about/2020-turing
for fundamental algorithms and theory underlying programming language
implementation and for synthesizing these results and those of others
in their highly influential books, which educated generations of
computer scientists.
Most of us probably have several of their books on the shelf; I certainly do!
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I missed the fact that Posix and Linux ed support s/foo/bar/3 (as opposed
to s3/foo/bar); ex does not, unfortunately.
We need a Great Unification of Line Editors.
John Cowan:
We need a Great Unification of Line Editors.
====
A standard for the standard editor?
I thought the nice thing about standards was that there
were so many of them.
Norman Wilson
Toronto ON
I had *.clients.your-server.de crawling mcvoy.com in violation of my
robots.txt. For whatever reason, the tty settings (or something)
made vi not work, I dunno what the deal is, stty -tabs didn't help.
So I had to resort to ed to write and debug the little program below.
It was surprisingly pleasant; it's probably the first time I've used ed
for anything real in at least a decade. My fingers still know it.
+1 for ed. It's how many decades old and still useful?
#!/usr/libexec/bitkeeper/bk tclsh
int
main(void)
{
FILE log = popen("/var/log/apache2/dns.l", "r");
string buf, ip;
string dropped{string};
fconfigure(log, buffering: "line");
while (buf = <log>) {
unless (buf =~ /([^ ]+\.your-server\.de\.) /) continue;
ip = $1;
if (defined(dropped{ip})) continue;
dropped{ip} = "yes";
warn("DROP ${ip}\n");
system("/sbin/iptables -I INPUT -s ${ip} -j DROP");
}
}
Andy Kosela <akosela(a)andykosela.com> wrote:
> On 3/29/21, arnold(a)skeeve.com <arnold(a)skeeve.com> wrote:
> > Andy Kosela <akosela(a)andykosela.com> wrote:
> >
> >> If ed(1) had cursor positioning and full screen capabilities along
> >> with line oriented editing (similar to Atari 8-bit default editor) it
> >> would be perfect. I still love it though and use it pretty often.
> >
> > Try out the 'se' editor, see www.se-editor.org.
>
> Thanks. It is a nice editor, but it actually resembles ex(1) when
> using visual mode. Maybe I am missing something but it appears you
> cannot actually use cursor keys to visually edit lines in the upper
> area of the screen in se -- you can only edit cmd line.
>
> As far as I know there is no editor in the Unix land which gives you
> the ability to work in the ed(1) line oriented mode BUT also allowing
> to freely move cursor keys in all directions. I gave example of the
> Atari editor[1] which does exactly that. I believe to accomplish it
> on Unix one would need to hack ed(1) and add ncurses(3) cursor
> positioning.
>
> --Andy
>
> [1] https://youtu.be/c9o92l5gupI
> the hammer fired to make an impression of the ribbon on the paper, which
> caused the noise people associated with computer printers.
GE outdid the printer with a fantastically fast pneumatic card reader. The make
and break of the suction on each card repeated at aural frequency and so loud
that I hied off to the instrument stockroom to borrow a noise meter. It was 90 dB
at the operator's position.
ed is the standard editor, they say.
The b command (stands for browse) came from late-1970s
U of T; rob probably brought it to 1127. There were a
handful of other syntactic conveniences, like being
allowed to leave off the final delimiter of an s command,
and declaring that a missing address means 1 before the
comma or semicolon and $ after, so
3,s/fish/&head
works over all lines from 3 to the last, and , standing
alone addresses the whole buffer.
Also the idea that s followed by a digit N means start
with the Nth instance of the pattern:
s3/fish/&head/
affects only the third fish, and
s3/fish/&head/g
every fish after the second.
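(On a modern POSIX ed, which spells the instance count as a
trailing number rather than the Toronto prefix, the same effect
can be had from the shell. A minimal sketch, assuming a GNU or
BSD ed:

  printf 'fish fish fish fish\n' > t
  ed -s t <<'EOF'
  s/fish/&head/3
  ,p
  Q
  EOF

which prints "fish fish fishhead fish".)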
I have all those tweaks, plus a few others, embedded in
my fingers from the qed produced by the same Toronto
hacks. I contracted it from the copy rob left behind
at Caltech, which means it has been my editor of choice
for 40 years now (with sam as an alternate favourite
since its inception 35 years or so ago). That qed
has a lot of cryptic programming stuff that I have
mostly forgotten because it was never that useful, but
what really hooked me was
a. Multiple buffers, with the ability to move and
copy text between them reasonably smoothly (both with
the m and t commands and with an interpolate-into-input
magic character);
b. The > < | commands: > sends the addressed lines
(default ,) to a shell command; < replaces the addressed
lines with, or appends after the single addressed line,
the standard output of the shell command (default .);
and | filters the addressed lines (default ,) through the
shell command, replacing them with its standard output.
The last operators make qed into a kind of workbench,
both for massaging data and for constructing a list
of commands to send to the shell.
I gather current Linux/BSD eds have > and <, spelled
r ! and w !, but without | it just ain't the same,
rather like the way | revolutionized the shell.
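(For concreteness, here is a sketch of the surviving spellings,
against a made-up five-line file called notes: > and < map onto
w ! and r !, while the in-place filter survives only in ex/vi.

  printf 'e\nd\nc\nb\na\n' > notes
  ed -s notes <<'EOF'
  1,5w !sort
  $r !date
  w
  q
  EOF

1,5w !sort sends lines 1-5 to sort's standard input, and $r !date
appends date's output after the last line. For qed's |, ex and vi
offer the filter command, e.g. :1,5!sort.)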
I believe the credit for U of T ed and qed goes mainly
to Rob Pike, Tom Duff, Hugh Redelmeier, and the (alas
now late) David Tilbrook. David remained an avid
user of qed, continuing to add stuff to it.
Norman Wilson
Toronto ON
PS: this message, as most of my e-mail, composed by
typing it into qed, editing as needed, then running
>mail tuhs(a)tuhs.org
> When hyphenation is disabled, soft (discretionary) hyphens are
> interpreted.
In pre-Unix roff hyphenation mode 0 turned off all breaking of words.
The original troff, however, behaved as described above, and also
broke genuinely hyphenated words in mode 0. If you really want
to break words one day, you may use
Noel Hunt:
Who in the Unix world today
writes, or would even be able to write, a manual entry like that?
====
Doug McIlroy is still around, though (alas) he doesn't write
many manual entries these days.
Norman Wilson
Toronto ON
As the example came through in my mail reader--in a different,
proportionally spaced font--the effect of .ll in the examples was hard
to figure out. Which of the two line lengths in the new case is
actually operative? Why are the inch lengths in the old and new
examples so different? The new example is ticklish, since it depends
on the peculiar AI that identifies sentence endings. Suppose reference
1 is naively broken after "Soc."
I prefer the old example because it's clean to read, isn't mixed up
with AI, and incidentally illustrates a nontrivial use for .nop.
Doug
> The example itself originally read:
>
> .ll 4.5i
> 1.\ This is the first footnote.\c
> .ss 48
> .nop
> .ss 12
> 2.\ This is the second footnote.
>
> RESULT:
>
> 1. This is the first footnote. 2. This
> is the second footnote.
>
> The new version of this example is:
>
> .ie n .ll 50n
> .el .ll 2.75i
> .ss 12 48
> 1. J. Fict. Ch. Soc. 6 (2020), 3\[en]14.
> 2. Better known for other work.
>
> RESULT:
>
> 1. J. Fict. Ch. Soc. 6 (2020), 3-14. 2. Better
> known for other work.
FYI, folks.
Arnold
> Date: Tue, 23 Mar 2021 06:06:49 -0700
> From: a(a)9srv.net
> To: 9fans(a)9fans.net
> Subject: [9fans] Transfer of Plan 9 to the Plan 9 Foundation
>
> We are thrilled to announce that Nokia has transferred the copyright of
> Plan 9 to the Plan 9 Foundation. This transfer applies to all of the
> Plan 9 from Bell Labs code, from the earliest days through their final
> release.
>
> The most exciting immediate effect of this is that the Plan 9 Foundation
> is making the historical 1st through 4th editions of Plan 9 available
> under the terms of the MIT license. These are the releases as they
> existed at the time, with minimal changes to reflect the above.
>
> 1st and 2nd edition were never released as open source software, and
> both (but especially 1st edition) were only available to a very small
> number of people. 3rd and 4th were previously available as open source,
> but under a license which was problematic for some people (especially
> the 3rd edition). We think making these available under the MIT license
> is something that's going to be a significant benefit for all projects
> using Plan 9 code. While this doesn't automatically change the license
> on any downstream projects, and you're welcome to keep using the LPL if
> you like, you now have the option of switching to MIT, which we think
> most everyone will find preferable.
>
> Obviously, for folks in the Plan 9 community, there isn't a way to say
> "thank you" to Bell Labs, and its various parent organizations, that's
> really adequate. None of us would be talking about any of this if it
> weren't for the work done there for decades. All of us here at the Plan
> 9 Foundation express our sincerest thanks to the team at Nokia who made
> this possible, the Plan 9 alumni who supported the effort, and the Plan
> 9 community who have kept kernels booting and the userland useful.
>
> The historical releases are available right now at:
> https://p9f.org/dl/
>
> You can read Nokia's press release on the transfer here:
> https://www.bell-labs.com/institute/blog/plan-9-bell-labs-cyberspace/
>
> Thank you for your time,
> Anthony Sorace
> Plan 9 Foundation
>
> ------------------------------------------
> 9fans: 9fans
> Permalink: https://9fans.topicbox.com/groups/9fans/Tf20bce89ef96d4b6-M63f81768e4ffdfa4…
> Delivery options: https://9fans.topicbox.com/groups/9fans/subscription
Micnet would seem to fall within my scope of interest, Unix networking 1975-1985. I’ve seen it mentioned before, but I don’t have a clear picture of what it was.
There is some sysadmin material on bitsavers about it, but no info on how it worked on the inside or how it relates to other early networking.
At first glance it seems to be conceptually related to Berknet.
Does anybody here know the backstory to Micnet and/or how it worked?
Paul
Hello UNIX veterans.
So I stumbled online upon a copy of the book "SCO Xenix System V Operating
System User's Guide", from 1988, advertised as having 395 pages, and the
asking price was 2.50 euros. I bought it, expecting--well, I don't know
exactly what I was expecting; something quaint and interesting, I suppose.
I've received the book, and it is not a treasure trove, to say the least. I
am in fact surprised at how sparse was UNIX System V of this age, almost
spartan.
The chapter titles are:
1. Introduction
2. vi: A Text Editor
3. ed
4. mail
5. Communicating with Other Sites
6. bc: A Calculator
7. The Shell
8. The C-Shell
9. Using the Visual Shell
And that's it. The communications part deals only with Micnet (a serial-port
based local networking scheme) and UUCP. No mention at all of the words
"Internet" or "TCP/IP", not even in the Index.
Granted, this Xenix version is derived from System V Release 2, and I think
it was for the Intel 286 (not yet ported to the i386), but hey it's 1988
already and the Internet is supposed to be thriving on UNIX on the Pacific
Coast, or so the lore says. I see now that it probably was only in the
Berkeley family that the Internet was going on...
In truth, I fail to see what the appeal of such a system was, for mere
users, when on the same PC you could run rich DOS-based applications like
WordPerfect, Lotus 1-2-3, Ventura Publisher and all the PC software from
those years.
I mean, mail without Internet is pretty useless, although I understand it
could be useful for inter-company communications. And yes, it had vi and the
Bourne Shell. But still, it feels very very limited, this Xenix version,
from a user's point of view.
I'm probably spoiled from Linux having repositories full of packaged free
software, where the user just has to worry about "which is the best of":
email program, text editor, browser, image manipulation program, video
player, etc. I understand now pretty well how spoiled we are these
days.
--
Josh Good
> When you're the phone company, calls are free
Not so. But the culture prioritized phone use in a way
that's been completely forgotten. High execs would
answer their own phones when they were at their
desks. "Your call is very important to us. Please wait
for the first available representative" would have been
anathema.
One of my few managerial decrees in the Unix lab was
to give a year's notice that "research" would stop
forwarding Usenet traffic, not because of phones, but
because uucp was becoming a burden on our computer.
Doug
Connectivity evolved rapidly in the early 1980s. In 1980 I served on the
board of CSNet, which connected have-not CS departments (including Bell
Labs) via dialup and X.25 links onto the periphery of the magic circle
of Arpanet.
By 1982 it was not extraordinary that I could via international email arrange
all aspects of a trip to visit lively universities of the AUUG.
OK. So, I've been trying to decide (for the last time, I swear) whether
to use tabs or spaces in my code... I did a quick pulse-check on the
state of the argument and it appears to be alive and well in 2021. My
question for y'all is, was there a preference in the very early days or
not? I saw an article talking about the 20 year feud, but that's not my
recollection. In 1994, nobody agreed on this, but I'm sure it predates
my entree into the field. I'm thinking the history of entab and detab
are somehow related, but I've been wrong on these sorts of thoughts
before. What say you?
Will
Amazing coincidences. A week prior I was researching Topper Toys,
looking for their old factory ("largest toy factory in the world").
As there was little on its location, it led me to find out that
in 1961 it took over the old Singer factory in Elizabeth, NJ.
So looking up the Singer factory led me to "Elizabeth,
New Jersey, Then and Now" by Robert J. Baptista
https://ia801304.us.archive.org/11/items/ElizabethNewJerseyThenAndNowSecond…
which had no information on Topper, but had this paragraph in its Singer
section on page 28 --
Boys earned money "rushing the growler" at lunchtime at the Singer plant.
German workers lowered their covered beer pails, called growlers, on ropes
to the boys waiting below. They earned a nickel by filling them with beer
at Grampp's saloon on Trumbull St. One of these boys was Thomas Dunn who
later became a long term Mayor. In the early 1920s Frederick Grampp went
into the hardware business at the corner of Elizabeth Ave. and Reid St.
When I read it I thought it funny, as I know the name Fred Grampp, but I
believed it was just a coincidental same name. After reading the biography
post, I went back to the book; it turns out that Fred Grampp is your Fred
Grampp's grandfather. You can find more about his family, the hardware
store, and Grampp himself on pages 163-164 and 212.
-Brian
Wow this is nothing short of GREAT!
I always wanted to tackle this but it was out of my reach as I barely got
anything from this lineage to build to anything.
Most excellent!
-----Original Message-----
From: MOCHIDA Shuji
To: tuhs(a)minnie.tuhs.org
Sent: 3/6/21 10:42 AM
Subject: [TUHS] 4.4BSD sparc, pmax binary recently compiled
I compiled 4.4BSD to get pmax and sparc binary, from CSRG Archive
CD-ROM #4
source code.
http://www.netside.co.jp/~mochid/comp/bsd44-build/
pmax:
- Works on GXemul DECstation (PMAX) emulation.
- I used binutils 2.6 and gcc 2.7.2.3 taken from the GNU ftp site,
as the 4.4BSD src does not contain the pmax support parts in as, ld,
gcc and gdb.
- Lack of GDB. I got rid of the compile errors in gdb 4.16, but it
does not work yet.
- The included gcc cannot handle C++ static constructors, so
contrib/groff cannot be compiled. Instead, it uses
old/{nroff,troff,eqn,tbl..}.
sparc:
- Works on sun4c. I use it on a SPARCstation 2, real hardware.
TME sun4c emulation can boot to single user, but it locks up in the
middle of /etc/rc.
CSRG Archive CD-ROM #4's source code (just after the Lite2 release) seems
to have differences from CSRG's two earlier binary distributions;
e.g., the mount system call is not compatible.
I used NetBSD 1.0/sparc, NetBSD 1.1/pmax for 1st (slightly) cross
compiling. NetBSD 1.0/sparc boots and works well on TME emulator.
SunOS 4.1.4 and Solaris 7 work there too, but this 4.4BSD binary doesn't.
-mochid
As I remember it, the Facilities folks were so upset about
someone painting stuff on Their Water Tower that a complaint
went to Vic Vyssotsky, then Executive Director of Division
112 (one step up from Sandy Fraser, who was Director of 1127).
The story was that Vic and/or Sandy told them that there were
60 people in the research centre and no way to tell who did it.
Word was then quietly passed to certain people--Vic and Sandy
in fact knew exactly who--that things were getting out of hand,
please lay off the Peter-face pranking for a while.
I tried to start a rumour that Vic did the painting, but it
never took off. I hope Vic at least heard it. He'd have
enjoyed the rumour, surely laughed at the prank while knowing
he'd have to calm things down, and 20 years earlier might well
have been involved in something like that.
It was Vic who, on learning I was a cyclist, urged me to try
cycling on the newly-constructed but not yet open segment of
interstate highway that ran behind the Labs. He apparently
had done so and found it lots of fun. Alas, I never did.
Norman Wilson
Toronto ON
BBN’s TCP implementation contained something akin to the hosts file, called hostmap there:
https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/doc
I have not looked at the code for a while, but if I remember correctly the BBN kernel code also read in this file (pre-processed into a binary form) to build its internal routing table.
I do not recall having seen an equivalent file with UoI's NCP Unix in any of the surviving docs or sources - but that does not exclude a library having existed to do lookups in a local copy of the SRI-NIC host file. In fact there is some evidence for that in the 2.9 BSD source.
The only surviving copy of the 4.1a (network) source code that I know of is in the back-port of this code to 2.8/2.9 BSD. This code includes #ifdef’ed code for accessing the SRI-NIC online host table via NCP:
https://www.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/net/local
This source also contains tools to convert the SRI-NIC data into - inter alia - a hosts file:
https://www.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/net/man/man8/htable.8
https://www.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/net/man/man8/gettable…
It would seem that the modern host.txt on Unix evolved late ’81 (BBN code) to early ’82 (4.1a BSD). Possibly NCP Unix has prior work.
Paul
Hi,
I'm not sure where this message best fits; TUHS, COFF, or Internet
History, so please forgive me if this list is not the best location.
I'm discussing the hosts file with someone and was wondering if there's
any historical documentation around its format and what should and
should not be entered in the file.
I've read the current man page on Gentoo Linux, but suspect that it's
far from authoritative. I'm hoping that someone can point me to
something more authoritative to the hosts file's format, guidelines
around entering data, and how it's supposed to function.
A couple of sticking points in the other discussion revolve around how
many entries a host is supposed to have in the hosts file and any
ramifications for having a host appear as an alias on multiple lines /
entries. To wit, how correct / incorrect is the following:
192.0.2.1 host.example.net host
127.0.0.1 localhost host.example.net host
--
Grant. . . .
unix || die
In the Seventh Edition manual, a joke was added to
the entry for kill(1). It appears in every following
Research manual, but seems to have been discarded
by all modern descendants.
I guess the prejudice against humour in the manual
is extreme these days.
Norman Wilson
Toronto ON
In all that's been written about the Research Unix players,
Fred Grampp has gotten far less coverage than he deserves.
I hope to rectify that with this post, most of which was
written soon after his death.
Doug
During Fred's long career at Bell Laboratories, his coworkers
were delighted to work with him, primarily because of his
innovative and often surprising ways of attacking problems.
Fred's unique approach was by no means limited to work-related
matters. Fred arranged an annual canoe-camping trip on the
Delaware River replete with nearly professional grade fireworks.
He also arranged a number of trips to New York City (referred
to as culture nights) which included, among other things,
trips to the planetarium and visits to various tea rooms.
To his friends at Bell Labs, Fred Grampp was a true original. He
knew the urban community of small, scrabbling business
as well as the pampered life of industrial research in the
country's greatest industrial research lab. And he brought
the best of each to his approach to work.
In his father's hardware store, Fred learned on the front line
what "customer-oriented" meant--a far cry from the hypothetical
nonsense on the subject put forth by flacks in a modern PR
department, or by CEO Bob Allen thinking big thoughts on the
golf course.
Fred ran the computing facilities for the Computer Science
Research Center. He had his finger on the pulse of the machinery
at all hours of day and night. He and his colleague Ed Sitar
rose early to pat the hardware and assure that everything was
in order just as had been done at the hardware store. The rest
of us, who kept more nerdish hours, could count on everything
running.
Packed with equipment, the machine room depended on
air conditioning. Fred saw this as a threat to dependable
service. As a backup, he had big galvanized barn fans installed
in several windows--incongruous, but utterly practical. And
they saw actual use on at least one occasion.
Fred cooked up ingenious software to sniff the computers'
health and sound alarms in his office and even by his bed when
something was amiss. When a user found something wrong and
popped into Fred's office to report the trouble, more often
than not he'd find Fred already working on it.
With his street smarts, Fred was ahead of the game when
computer intrusion began to become a problem in the 1970s.
He was a real white-hat marshal, who could read the bad
guys' minds and head them off at the pass. With Bob Morris,
Fred wrote a paper to alert system administrators to the kinds
of lapse of vigilance that leave them open to attack; the paper
is still read as a classic. Other sage advice was put forth
by Fred in collaboration with G. R. Emlin, who would become an
important adjunct member of the lab, as several TUHS posts attest.
Quietly he developed a suite of programs that probed a
computer's defenses--fortunately before the bad guys really
got geared up to do the same. That work led to the creation
of a whole department that used Fred's methods to assess and
repair the security of thousands of computers around Bell Labs.
Fred's avocations of flying and lock-picking lent spice to
life in the Labs. He was a central figure of the "computer
science airforce" that organized forays to see fall colors,
or to witness an eclipse. He joined Ken Thompson, who also
flew in the department air force, on a trip to Russia to fly
a MIG-29. Ken tells the story at cs.bell-labs.com/ken/mig.html.
Fred's passion for opera was communicated to many. It was
he who put the Met schedule on line for us colleagues long
before the Met discovered the World Wide Web. He'd press new
recordings on us to whet our appetites. He'd recount, or take
us to, rehearsals and backstage visits, and furnish us with
librettos. When CDs appeared on the scene, Fred undertook to
build a systematic collection of opera recordings, which grew
to over two hundred works. They regularly played quietly in the
background of his office. To Fred the opera was an essential
part of life, not just an expensive night on the town.
Fred's down-to-earth approach lightened life at Bell Labs. When
workmen were boarding up windows to protect them from some major
construction--and incidentally to prevent us from enjoying the
spectacle of ironworkers outside--Fred posted a little sign
in his window to the effect that if the plywood happened to
get left off, a case of Bud might appear on the sill. For the
next year, we had a close-up view of the action.
Fred, a graduate of Stevens Institute, began his career in
the computer center, under the leadership of George Baldwin,
perhaps the most affable and civic-minded mathematician I have
ever met. At the end of one trying day, George wandered into
Fred's office, leaned back in the visitor chair, and said,
"I sure could use a cold one about now." Fred opened his window
and retrieved a Bud that was cooling on the sill.
Fred lived his whole life in Elizabeth, New Jersey. At one
point he decided that for exercise he could get to the Labs by
train to Scotch Plains and bike from there up to Bell Labs--no
mean feat, for the labs sat atop the second range of the
Watchung Mountains, two steep climbs away from Scotch Plains.
He invested in a folding bike for the purpose. Some days
into the new routine a conductor called him out for bringing
a bicycle onto the train. Fred had looked forward to this
moment. He reached into his pocket, pulled out a timetable
and pointed to the fine print: bicycles were prohibited with
the exception of folding bikes.
Originally dated October 25, 2000. Lightly edited and three
paragraphs added February 22, 2021.
Rob Pike:
I don't believe the water tower was a one-person job.
====
I agree. Even if GR Emlin helped, I bet two live people
were involved in painting.
I'm quite sure more than that participated in making the
stencil.
Norman Wilson
Toronto ON
PS: I have never been on a water tower.
On Mar 11, 2021, at 10:08 AM, Warner Losh <imp(a)bsdimp.com> wrote:
>
> On Thu, Mar 11, 2021 at 10:40 AM Bakul Shah <bakul(a)iitbombay.org> wrote:
>> From https://www.freebsd.org/cgi/man.cgi?hosts(5)
>> For each host a single line should be present with the following information:
>> Internet address
>> official host name
>> aliases
>> HISTORY
>> The hosts file format appeared in 4.2BSD.
>
> While this is true wrt the history of FreeBSD/Unix, I'm almost positive that BSD didn't invent it. I'm pretty sure it was picked up from the existing host file that was published by sri-nic.arpa before DNS.
A different and more verbose format. See RFCs 810 & 952. Possibly because it had to serve more purposes?
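For illustration, entries in the NIC table looked roughly like this
(adapted from the examples in RFC 952 itself; see the RFC for the
authoritative grammar):

  NET : 10.0.0.0 : ARPANET :
  HOST : 26.0.0.73, 10.0.0.51 : SRI-NIC.ARPA,SRI-NIC,NIC : DEC-2060 : TOPS20 : TCP/TELNET,TCP/SMTP,TCP/FTP :

The colon-separated hardware, OS, and protocol fields are what the
BSD hosts file dropped; htable(8), mentioned earlier in this thread,
converted this format into hosts, networks, and gateways files.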
> Warner
>
>>> On Mar 11, 2021, at 9:14 AM, Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org> wrote:
>>> Hi,
>>>
>>> I'm not sure where this message best fits; TUHS, COFF, or Internet History, so please forgive me if this list is not the best location.
>>>
>>> I'm discussing the hosts file with someone and was wondering if there's any historical documentation around its format and what should and should not be entered in the file.
>>>
>>> I've read the current man page on Gentoo Linux, but suspect that it's far from authoritative. I'm hoping that someone can point me to something more authoritative to the hosts file's format, guidelines around entering data, and how it's supposed to function.
>>>
>>> A couple of sticking points in the other discussion revolve around how many entries a host is supposed to have in the hosts file and any ramifications for having a host appear as an alias on multiple lines / entries. To wit, how correct / incorrect is the following:
>>>
>>> 192.0.2.1 host.example.net host
>>> 127.0.0.1 localhost host.example.net host
>>>
>>>
>>>
>>> --
>>> Grant. . . .
>>> unix || die
>> _______________________________________________
>> COFF mailing list
>> COFF(a)minnie.tuhs.org
>> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
In 1972, while in high school, I went to an Intel seminar on the 8008.
There I met a Bell Labs scientist who gave me a sample 8008 and invited
me for a visit at some NJ Bell Labs facility. That group had a
timesharing system of some kind, but it was not Unix. I was also given a
Bell Labs speech synthesis kit after meeting one of the speech
scientists who happened to be in on the same Saturday. I have searched
my attic but can't find further details. Would any of you alumni recall
what this other timesharing system might have been?
Dan
Hello everyone,
I'm Wojciech Adam Koszek and I'm a new member here. After a short stint with Red Hat 6.0 and Slackware Linux around 2000-2001 (I think it was Slackware 7.0 or 7.1) my journey with UNIX started with FreeBSD 4.5. I fell in love with BSD and through Warner Losh, Robert Watson, and folks from a Polish UNIX scene, I became hooked. I ended up working with FreeBSD for the following 15 years or so.
Anyway: UNIX literature was scarce in Poland back then, yet through a small bookstore and a friendly salesman I got myself a "UNIX Network Programming Volume 1" at a huge discount, and read it back-to-back.
Looking back, his books had a huge impact on my life (I had all his books, and read everything line by line, with a slight exception of TCP/IP Illustrated Vol. 2, which I used as a reference), and while Stevens's website sheds some light on what he did, I often wonder what the story is behind how his books came to be. It doesn't help that he appeared to be a very private person--never have I seen a photo of him anywhere.
What was the reception of his books in the US?
Did you know him? Do you know any more details about what he did after 1990?
Thanks and take care,
Wojciech Adam Koszek
Following my success in getting 6th Edition UNIX running on a KDF11-B,
with support for the MSCP disk controller, I was looking for ways to get
as new a tool chain as possible onto it, with full source code (as I'd
been using the tool chain from UNSW, for which the source is missing).
Well, it turns out that there's an even newer one in PWB, and there are
complete source and binary PWB distributions in the TUHS archive!
I now have PWB/UNIX 1.0 running, and completely rebuilt from its own
sources, on one of my physical /23+ boxes (and, of course, in simh).
It's connected to my main (NetBSD) system using UUCP over a serial line.
Oh, and it runs the University Ingres RDBMS. :)
The write-up (and download) is at https://www.hamartun.priv.no/pwb.html
-tih
--
Most people who graduate with CS degrees don't understand the significance
of Lisp. Lisp is the most important idea in computer science. --Alan Kay
Here is a link to the 1897 bill of the Indiana State Legislature
that legislated a new value for $\pi$:
https://journals.iupui.edu/index.php/ias/article/view/4753/4589
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> The reason to use tab was file size for one
This is urban legend. The percentage of 512-byte blocks that
tabs would save was never significant.
(I agree that tabs and--especially--newlines can significantly
compress fixed-field formats from punched-card tradition, but
on the tiny Unix systems where tab conventions were
established, big tabular files were very rare.)
Tabs were a convenience for typists. Of course the tty driver
could have replaced them with spaces, but that would have
foreclosed important usage such as tab-separated fields and
run-time-adjustable tab stops.
(I have run into latter-day trouble with selecting a space-substituted tab
from a screen, only to discover that I was copying or searching for spaces
instead of the tab. That's not an intrinsic problem, though. Editors like sam
handle it without fuss.)
Doug
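(The tab-separated-fields usage is easy to illustrate; cut(1) treats tab
as its default field delimiter. A minimal sketch with a hypothetical file:)

    printf 'dmr\t11273\nken\t11274\n' > users.tsv
    cut -f2 users.tsv    # prints 11273 and 11274; tab is cut's default delimiter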
I compiled 4.4BSD from the CSRG Archive CD-ROM #4 source code to get pmax
and sparc binaries.
http://www.netside.co.jp/~mochid/comp/bsd44-build/
pmax:
- Works on GXemul DECstation (PMAX) emulation.
- I used binutils 2.6 and gcc 2.7.2.3 taken from the GNU ftp site,
as the 4.4BSD source does not contain the pmax support parts of as, ld,
gcc and gdb.
- Lack of GDB. I got rid of the compile errors in gdb 4.16, but it
does not work yet.
- The included gcc cannot handle C++ static constructors, so contrib/groff
cannot be compiled. Instead, old/{nroff,troff,eqn,tbl..} are used.
sparc:
- Works on sun4c. I use it on a SPARCstation 2, real hardware.
TME sun4c emulation can boot to single user, but it locks up in the
middle of /etc/rc.
The CSRG Archive CD-ROM #4 source code (just after the Lite2 release) seems
to have differences from CSRG's two earlier binary distributions;
e.g. the mount system call is not compatible.
I used NetBSD 1.0/sparc and NetBSD 1.1/pmax for the first (slight) cross
compile. NetBSD 1.0/sparc boots and works well on the TME emulator.
SunOS 4.1.4 and Solaris 7 work too, but this 4.4BSD binary doesn't.
-mochid
> From: John Floren
> Can anyone on the list point me to either an existing archive where
> these exist
The canonical repository for historic documentation online is BitSavers.
It has an almost-complete set of DEC stuff (both manuals and prints). QBUS
devices are at:
http://www.bitsavers.org/pdf/dec/qbus/
QBUS CPU's will be in the relevant model directory, e.g.:
http://www.bitsavers.org/pdf/dec/pdp11/1123/
and disk drives are in:
http://www.bitsavers.org/pdf/dec/disc/
I haven't checked your list, but I suspect most of them are there; I think the
ADV11-A prints are missing, though. You can either send the originals to Al
Kossow, or scan them for him; but check with him first, to make sure he doesn't
already have them, just hasn't got around to posting them yet.
There's another site which indexes DEC online documentation:
https://manx-docs.org/
There are a very few things which aren't in Bitsavers, and can be found there.
> KFD11-A cpu
I assume that's a typo for 'KDF11-A'?
Noel
I've been hauling around a pile of DEC Field Maintenance Print Sets
for PDP-11 components for over a decade now, intending to see if
they're worth having scanned or if there are digital versions out
there already. Can anyone on the list point me to either an existing
archive where these exist, or an archivist who would be interested in
scanning them? They're full of exploded diagrams, schematics, and
assembly listings.
Here's the list of what I have:
Field Maintenance Print Set (17" wide, 11" high):
RLV11 disk controller
RL01-AK disk drive
ADV-11A (??)
Field Maintenance Print Set (14" wide, 8.5" high):
RL01 disk drive
DLV11-J serial line controller
RLV11 disk controller
KFD11-A cpu
KEF11-A floating point processor
PDP11/23
PDP11/03-L
Thanks,
John Floren
I could chip in with my own strong opinions about code formatting,
but I think others have already posted plenty of such boring off-topic
fluff.
A straight answer to Will's original question might be interesting,
though:
The oldest extant UNIX code samples I know are those in the TUHS archive,
in Distributions/Research/Dennis_v3/nsys.tar.gz; they're a very old
kernel source tree. There are plenty of tabs there.
This matches my memories of the V7-era source code, and of what I saw
people like ken and dmr and rob and bwk and pjw typing in the not-
so-early days of the 1980s when I worked with them.
Tabs were generally eight spaces apart. In code, nobody worried about
the effects on long lines, because the coding style was spare and
didn't run to many deeply-nested constructs, so lines didn't get that
long. (Maybe it was considered a feature rather than a bug that
deep nesting and deep indentation looked messy, because it wasn't
usually a good idea anyway.)
I can't speak to original motivations, but I suspect my own reasons
for using tabs aren't too different:
-- It's quicker to type than multiple spaces
-- When not at the start of the line, tabs more often preserve
alignment when earlier parts of the line are edited
-- Back when terminals were connected at speeds like 110 or 300 bps
(I am old enough to have experienced that, especially when working
from home), letting the system send a tab and the local terminal
expand it was a lot faster, especially when reading code (more likely
to have lots of indentation than prose). Not every device supported
tabs, but enough did that it made a real difference.
UNIX didn't originate any of this. I used tabs when writing in FORTRAN
and ALGOL/SIMULA and MACRO-10 on the TOPS-10 system I used before I
encountered UNIX. So did all the other hackers I knew in the terminal
room where we all hung out.
I don't know the history of entab/detab. Neither appears to have
been around in the Research systems; they're not in V7, and they're
not in V10 either, though V10 does have expand.
As an aside, the V10 manual has a single manual page for col, [23456],
mc, fold, and expand. It's a wonderful example of how gracefully
Doug assembled collections of related small programs onto a single
page to keep the manual size down. Also of his gift for concise
prose: the first sentence is
These programs rearrange files for appearance's sake.
which is a spot-on but non-stodgy summary. I wish I could write
half as well as Doug can.
And as an almost-joke, it's a wonder those programs haven't all been
made into options to cat in modern systems.
Norman Wilson
Toronto ON
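(For reference, the same job lives on in POSIX as expand(1) and
unexpand(1); a minimal usage sketch:)

    expand -t 8 tabs.c > spaces.c    # tabs -> spaces, with stops every 8 columns
    unexpand -a spaces.c > tabs.c    # runs of spaces -> tabs where possible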
For sure, I've seen at least two interesting changes:
- market forces have pushed fast iteration and fast prototyping into the
mainstream in the form of Silicon Valley "fail fast" culture and the
"agile" culture. This, over the disastrous "waterfall" style, has led to a
momentous improvement in overall productivity.
- As coders get pulled away from the machine and performance is less and
less in coders hands, engineers aren't sucked into (premature) optimization
as much.
Tyler
On Sat, Jan 30, 2021 at 6:10 AM M Douglas McIlroy <
m.douglas.mcilroy(a)dartmouth.edu> wrote:
> Have you spotted an evolutionary trend toward better, more productive
> programmers? Or has programmer productivity risen across the board due to
> better tools? Arguably what's happened is that principle has been
> self-obsoleting, for we have cut back on the demand for unskilled (i.e.
> less capable) programmers. A broad moral principle may be in play:
> programmers should work to put themselves out of business, i.e. it is wrong
> to be doing the same kind of work (or working in the same way) tomorrow as
> yesterday.
>
> Doug
>
>
> On Tue, Jan 26, 2021 at 5:23 AM Tyler Adams <coppero1237(a)gmail.com> wrote:
>
>> Looking at the 1978 list, the last one really stands out:
>>
>> "Use tools in preference to unskilled help to lighten a programming task"
>> -- The concept of unskilled help for a programming task...doesn't really
>> exist in 2020. The only special case is doing unskilled labor yourself.
>> What unskilled tasks did people used to do back in the day?
>>
>> Tyler
>>
>>
>> On Tue, Jan 26, 2021 at 4:07 AM M Douglas McIlroy <
>> m.douglas.mcilroy(a)dartmouth.edu> wrote:
>>
>>> It might be interesting to compare your final list with the two lists in
>>> the 1978 special issue of the BSTJ--one in the Foreword, the other in the
>>> revised version of the Ritchie/Thompson article from the CACM. How have
>>> perceptions or values changed over time?
>>>
>>> Doug
>>>
>>>
>>> On Mon, Jan 25, 2021 at 7:32 AM Steve Nickolas <usotsuki(a)buric.co>
>>> wrote:
>>>
>>>> On Mon, 25 Jan 2021, Tyler Adams wrote:
>>>>
>>>> > I'm writing about my 5 favorite unix design principles on my blog this
>>>> > week, and it got me wondering what others' favorite unix design
>>>> principles
>>>> > are? For reference, mine are:
>>>> >
>>>> > - Rule of Separation (from TAOUP <
>>>> http://catb.org/~esr/writings/taoup/html/>
>>>> > )
>>>> > - Let the Machine Do the Dirty Work (from Elements of Programming
>>>> Style)
>>>> > - Rule of Silence (from TAOUP <
>>>> http://catb.org/~esr/writings/taoup/html/>)
>>>> > - Data Dominates (Rob Pike #5)
>>>> > - The SPOT (Single Point of Truth) Rule (from TAOUP
>>>> > <http://catb.org/~esr/writings/taoup/html/>)
>>>> >
>>>> > Tyler
>>>> >
>>>>
>>>> 1. Pipes
>>>> 2. Text as the preferred format for input and output
>>>> 3. 'Most everything as a file
>>>> 4. The idea of simple tools that are optimized for a single task
>>>> 5. A powerful scripting language built into the system that, combined
>>>> with
>>>> 1-4, makes writing new tools heaps easier.
>>>>
>>>> -uso.
>>>>
>>>
> - separation of code and data using read-only and read/write file systems
I'll bite. How do you install code in a read-only file system? And
where does a.out go?
My guess is that /bin is in a file system of its own. Executables from
/etc and /lib are probably there too. On the other hand, I guess
users' personal code is still read/write.
I agree that such an arrangement is prudent. I don't see a way,
though, to update bin without disrupting most running programs.
Doug
All,
I was introduced to Unix in the mid 1990's through my wife's VMS account
at UT Arlington, where they had a portal to the WWW. I was able to
download Slackware with the 0.9 kernel on 11 floppies including X11. I
installed this on my system at the time - either a DEC Rainbow 100B? or
a hand-me-down generic PC. A few years later at Western Illinois
University - they had some Sun Workstations there and I loved working
with them. It would be several years later, though, that I would
actually use unix in a work setting - 1998. I don't even remember what
brand of unix, but I think it was again, sun, though no gui, so not as
much love. Still, I was able to use rcs, and when my Windows-bound
buddies lost a week's work because of some snafu with their backups, I
didn't lose anything - jackflash was the name of the server - good
memories :). However, after this it was all DOS and Windows until, 2005.
I'd been eyeing Macs for some time. I like the visual aesthetics and
obvious design considerations. But, in 2005, I finally had a bonus big
enough to actually buy one. I bought a G5 24" iMac and fell in love with
Mac. Next, it was a 15" G4 Powerbook. I loved those Macs until Intel
came around and then it was game over, no more PC's in my life (not
really, but emotionally, this was how I felt). With Mac going intel, I
could dual boot into Windows, Triple boot into Linux, and Quadruple boot
into FreeBSD, and I could ditch Fink and finally manage my unix tools
properly (arguable, I know) with Homebrew or MacPorts (lately, I've gone
back to MacPorts due to Homebrew's lack of support for older OS
versions, and for MacPorts seeming rationality).
Anyhow, I have thoroughly enjoyed the Mac ride, but with Catalina, the
ride got really bumpy (too much phone home, no more 32 bit programs and
since Adobe Acrobat X, which I own, outright, isn't 64 bit, among other
apps, this just in not an option for me), and with Big Sur, it's gotten
worse, potholes, sinkholes, and suchlike, and the interface is downright
patronizing (remember Microsoft Bob?). So, here I am, Mr.
Run-Any-Cutting-Edge-OS anytime guy, hanging on tooth and nail to Mac OS
Mojave where I still have a modicum of control over my environment.
My thought for the day and question for the group is... It seems that
the options for a free operating system (free as in freedom) are
becoming ever more limited - Microsoft, this week, announced that their
Edge update will remove Edge Legacy and IE while doing the update -
nuts; Mac's desktop is turning into IOS - ew, ick; and Linux is wild
west meets dictatorship and major corporations are moving in to set
their direction (Microsoft, Oracle, IBM, etc.). FreeBSD we've beat to
death over the last couple of weeks, so I'll leave it out of the mix for
now. What in our unix past speaks to the current circumstance and what
do those of you who lived those events see as possibilities for the next
revolution - and, will unix be part of it?
And a bonus question, why, oh why, can't we have a contained kernel that
provides minimal functionality (dare I say microkernel), that is
securable, and layers above it that other stuff (everything else) can
run on with auditing and suchlike for traceability?
Hi,
As I find myself starting yet another project that wants to use
ANSI control sequences for colorization of text, I find myself -- yet
again -- wondering if there is a better way to generate the output from
the code in a way that respects TERMinal capabilities.
Is there a better / different control sequence that I can ~> should use
for colorizing / stylizing output that will account for the differences
in capabilities between a VT100 and XTerm?
Can I wrap things that I output so that I don't send color control
sequences to a TERMinal that doesn't support them?
--
Grant. . . .
unix || die
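(One conventional answer, sketched here rather than asserted: query the
terminfo database through tput(1) instead of hard-coding ANSI sequences,
so a VT100 and an XTerm each get only what they advertise. A minimal
POSIX-shell sketch, assuming a curses-style tput:)

    #!/bin/sh
    # Emit color only if stdout is a terminal reporting at least 8 colors.
    colors=$(tput colors 2>/dev/null)
    if [ -t 1 ] && [ "${colors:-0}" -ge 8 ]; then
        red=$(tput setaf 1) bold=$(tput bold) reset=$(tput sgr0)
    else
        red='' bold='' reset=''
    fi
    printf '%swarning:%s check your TERM setting\n' "$bold$red" "$reset"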
The recent discussions on the TUHS list of whether /bin and /usr/bin
are different, or symlinked, brought to mind the limited disk and tape
sizes of the 1970s and 1980s. Especially the lower-cost tape
technologies had issues with correct recognition of an end-of-tape
condition, making it hard to span a dump across tape volumes, and
strongly suggesting that directory tree sizes be limited to what could
fit on a single tape.
I made an experiment today across a broad range of operating systems
(many with multiple versions in our test farm), and produced these two
tables, where version numbers are included only if the O/S changed
practices:
------------------------------------------------------------------------
Systems with /bin a symlink to /usr/bin (or both to yet another common
directory) [42 major variants]:
ArchLinux Kali RedHat 8
Arco Kubuntu 19, 20 Q4OS
Bitrig Lite ScientificLinux 7
CentOS 7, 8 Lubuntu 19 Septor
ClearLinux Mabox Solaris 10, 11
Debian 10, 11 Magiea Solydk
Deepin Manjaro Sparky
DilOS Mint 20 Springdale
Dyson MXLinux 19 Ubuntu 19, 20, 21
Fedora Neptune UCS
Gnuinos Netrunner Ultimate
Gobolinux Oracle Linux Unleashed
Hefftor Parrot 4.7 Void
IRIX PureOS Xubuntu 19, 20
------------------------------------------------------------------------
Systems with separate /bin and /usr/bin [60 major variants]:
Alpine Hipster OS108
AltLinux KaOS Ovios
Antix KFreeBSD PacBSD
Bitrig Kubuntu 18 Parrot 4.5
Bodhi LibertyBSD PCBSD
CentOS 5, 6 LMDE PCLinuxOS
ClonOS Lubuntu 17 Peppermint
Debian 7--10 LXLE Salix
DesktopBSD macOS ScientificLinux 6
Devuan MidnightBSD SlackEX
DragonFlyBSD Mint 18--20 Slackware
ElementaryOS MirBSD Solus
FreeBSD 9--13 MXLinux 17, 18 T2
FuryBSD NetBSD 6--10 Trident
Gecko NomadBSD Trisquel
Gentoo OmniOS TrueOS
GhostBSD OmniTribblix Ubuntu 14--18
GNU/Hurd OpenBSD Xubuntu 18
HardenedBSD OpenMandriva Zenwalk
Helium openSUSE Zorinos
------------------------------------------------------------------------
Some names appear in both tables, indicating a transition from
separate directories to symlinked directories in more recent O/S
releases.
Many of these system names are spelled in mixed lettercase, and if
I've botched some of them, I extend my apologies to their authors.
Some of those systems run on multiple CPU architectures, and our test
farm exploits that; however, I found no instance of the CPU type
changing the separation or symbolic linking of /bin and /usr/bin.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
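(A sketch of the per-system probe such a survey can script; the POSIX
test -h detects the symlink case, and ls -ld shows its target:)

    #!/bin/sh
    # Classify /bin: a symbolic link (and to where), or a real directory.
    if [ -h /bin ]; then
        echo "/bin is a symlink -> $(ls -ld /bin | sed 's/.* -> //')"
    else
        echo "/bin is a real directory, separate from /usr/bin"
    fi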
To fill out the historical record, the earliest doctype I know of
was a shell (not rc) script. From my basement heater that happens
to run 10/e:
b$ man doctype | uniq
DOCTYPE(1) DOCTYPE(1)
NAME
doctype - guess command line for formatting a document
SYNOPSIS
doctype [ option ... ] [ file ]
DESCRIPTION
Doctype guesses and prints on the standard output the com-
mand line for printing a document that uses troff(1),
related preprocessors like eqn(1), and the ms(6) and mm
macro packages.
Option -n invokes nroff instead of troff. Other options are
passed to troff.
EXAMPLES
eval `doctype chapter.?` | apsend
Typeset files named chapter.0, chapter.1, ...
SEE ALSO
troff(1), eqn(1), tbl(1), refer(1), prefer(1), pic(1),
ideal(1), grap(1), ped(9.1), mcs(6), ms(6), man(6)
BUGS
It's pretty dumb about guessing the proper macro package.
Page 1 Tenth Edition (printed 2/24/2021)
doctype(1) is in the 8/e manual, so it existed in early 1985;
I bet it's actually older than that. The manual page is on
the V8 tape, but, oddly, not the program; neither is it in
the V10 pseudo-tape I cobbled together for Warren long ago.
I'm not sure why not.
The version in rc is, of course, a B-movie remake of the
original.
Norman Wilson
Toronto ON
Lately, I've been playing around in v6 unix and mini-unix with a goal of
better understanding how things work and maybe doing a little hacking.
As my fooling around progressed, it became clear that moving files into
and out of the v6 unix world was a bit tedious. So it occurred to me
that having a way to mount a v6 filesystem under linux or another modern
unix would be kind of ideal. At the same time it also occurred to me
that writing such a tool would be a great way to sink my teeth into the
details of old Unix code.
I am aware of Amit Singh's ancientfs tool for osxfuse, which implements
a user-space v6 filesystem (among other things) for MacOS. However,
being read-only, it's not particularly useful for my problem. So I set
out to create my own FUSE-based filesystem capable of both reading and
writing v6 disk images. The result is a project I call retro-fuse,
which is now up on github for anyone to enjoy
(https://github.com/jaylogue/retro-fuse)
A novel (or perhaps just peculiar) feature of retro-fuse is that, rather
than being a wholesale re-implementation of the v6 filesystem, it
incorporates the actual v6 kernel code itself, "lightly" modernized to
work with current compilers, and reconfigured to run as a Unix process.
Most of the file-handling code of the kernel is there, down to a trivial
block device driver that reflects I/O into the host OS. There's also a
filesystem initialization feature that incorporates code from the
original mkfs tool.
Currently, retro-fuse only works on linux. But once I get access to my
mac again in a couple weeks, I'll port it to MacOS as well. I also hope
to expand it to support other filesystems as well, such as v7 or the
early BSDs, but we'll see when that happens.
As I expected, this was a fun and very educational project to work on.
It forced me to really understand what was going on in the kernel (and to
really pay attention to what Lions was saying). It also gave me a
little view into what it was like to work on Unix back in the day.
Hopefully someone else will find my little self-education project useful
as well.
--Jay
Some additions:
Systems with /bin a symlink to /usr/bin
Digital UNIX 4.0
Tru64 UNIX 5.0 to 5.1B
HP-UX 11i 11.23 and 11.31
Systems with separate /bin and /usr/bin
SCO UNIX 3.2 V4.0 to V4.2
--
The more I learn the better I understand I know nothing.
> I can imagine a simple perl (or python or whatever) script that would run
> through groff input [and] determine which preprocessors are actually
> needed ...
Brian imagined such and implemented it way back when. Though I used
it, I've forgotten its name. One probably could have fooled it by
tricks like calling pic only in a .so file and perhaps renaming .so.
But I never heard of it failing in real life. It does impose an extra
pass over the input, but may well save a pass compared to the
defensive groff -pet that I often use or to the rerun necessary when I
forget to mention some or all of the filters.
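(For concreteness, a sketch of such a detector in the spirit of groff's
later grog(1) -- not Brian's original, whose name is forgotten above. It
scans for the standard begin-macros; as noted, calling pic only from a
.so file would fool it:)

    #!/bin/sh
    # Guess which preprocessors a troff document needs from its begin-macros.
    f=${1:?usage: guesspre file}
    opts=
    grep -q '^\.PS' "$f" && opts="${opts}p"   # pic
    grep -q '^\.EQ' "$f" && opts="${opts}e"   # eqn
    grep -q '^\.TS' "$f" && opts="${opts}t"   # tbl
    grep -q '^\.R1' "$f" && opts="${opts}R"   # refer
    echo "groff ${opts:+-$opts }$f"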
All,
So, we've been talking low-level design for a while. I thought I would
ask a fundamental question. In days of old, we built small
single-purpose utilities and used pipes to pipeline the data and
transformations. Even back in the day, it seemed that there was tension
to add yet another option to every utility. Today, as I was marveling at
groff's abilities with regard to printing my man pages directly to my
printer in 2021, I read the groff(1) page:
example here: https://linux.die.net/man/1/groff
What struck me (the wrong way) was the second paragraph of the description:
The groff program allows to control the whole groff system by command
line options. This is a great simplification in comparison to the
classical case (which uses pipes only).
Here is the current plethora of options:
groff [-abcegilpstzCEGNRSUVXZ] [-d cs] [-f fam] [-F dir] [-I dir] [-L
arg] [-m name] [-M dir] [-n num] [-o list] [-P arg] [-r cn] [-T dev] [-w
name] [-W name] [file ...]
Now, I appreciate groff, don't get me wrong, but my sensibilities were
offended by the idea that a kazillion options was in any way simpler
than pipelining single-purpose utilities. What say you? Is this the
perfected logical extension of the unix pioneers' work, or have we gone
horribly off the trail?
Regards,
Will
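(For readers who have not seen it, the "classical case (which uses pipes
only)" looks roughly like this -- a sketch, assuming an -ms paper and the
preprocessors installed under their traditional names:)

    pic paper.ms | tbl | eqn | groff -ms -Tps > paper.ps   # preprocessors as pipes
    groff -p -t -e -ms -Tps paper.ms > paper.ps            # the same, via options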
Will Senn wrote,
> join seems like part of an aborted (aka never fully realized) attempt at a text based rdb to me
As the original author of join, I can attest that there was no thought
of parlaying join into a database system. It was inspired by
databases, but liberated from them, much as grep was liberated from an
editor.
Doug
Hi:
I've been following the discussion on abstractions and the recent
messages have been talking about a ei200 batch driver (ei.c:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/sys/dmr/ei.c) I
have access to DtCyber (CDC Cyber emulator) that runs all/most of the
cdc operating system. I'm toying with the idea of getting ei200 running.
In looking at things, I ran across the following in
https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM/READ_ME
> The UNSW batch system has not been provided with this
> distribution, because of its limited appeal.
> If you are unfortunate enough to have a CYBER to talk to,
> please contact us and we will forward it to you.
Does anyone happen to know if the batch system is still around?
thanks
-ron
To quote from Jon’s post:
> There have been heated discussions on this list about kernel API bloat. In my
> opinion, these discussions have mainly been people grumbling about what they
> don't like. I'd like to flip the discussion around to what we would like.
> Ken and Dennis did a great job with initial abstractions. Some on this list
> have claimed that these abstractions weren't sufficient for modern times.
> Now that we have new information from modern use cases, how would we rethink
> the basic abstractions?
I’d like to add the constraint of things that would have been implementable
on the hardware of the late 1970’s, let’s say a PDP11/70 with Datakit or
3Mbps Ethernet or Arpanet; maybe also Apple 2 class bitmap graphics.
And quote some other posts:
> Because it's easy pickings, I would claim that the socket system call is out
> of line with the UNIX abstractions; it exists because of practical political
> considerations, not because it's needed. I think that it would have fit
> better folded into the open system call.
>>
>> Somebody once suggested a filesystem interface (it certainly fits the Unix
>> philosophy); I don't recall the exact details.
>
> And it was done, over 30 years ago; see Plan 9 from Bell Labs....
I would argue that quite a bit of that was implementable as early as 6th
Edition. I was researching that very topic last Spring [1] and back ported
Peter Weinberger’s File System Switch (FSS) from 8th to 6th Edition; the
switch itself bloats the kernel by about half a kilobyte. I think it may be
one of the few imaginable extensions that do not dilute the incredible
bang/buck ratio of the V6 kernel.
With that change in place a lot of other things become possible:
- a Kilian style procfs
- a Weinberger style network FS
- a text/file based ioctl
- a clean approach to named pipes
- a different starting point to sockets
Each of these would add to kernel size of course, hence I’m thinking about
a split I/D kernel.
To some extent it is surprising that the FSS did not happen around 1975, as
many ideas around it were 'in the air' at the time (Heinz Lycklama’s peripheral
Unix, the Spider network Filestore, Rand ports, Arpanet Unix, etc). With the
benefit of hindsight, it isn’t a great code leap from the cdev switch to the
FSS - but probably the ex ante conceptual leap was just too big at the time.
Paul
[1] Code diffs here:
https://1587660.websites.xs4all.nl/cgi-bin/9995/vdiff?from=fab15b88a6a0f36b…
The last group I was on before I left the Labs in 1992 was the
POST team.
pq stood for "post query," but POST consisted of -
- mailx: (from SVR3.1) as the mail user agent
- UPAS: (from research UNIX) as the mail delivery agent
- pq: the program to query the database
- EV: (pronounced like the biblical name) the database (and the
genesis program to create indices)
- post: program to combine all the above to read email and to send mail via queries
pq by default would look up people
pq lastname: find all people with lastname, same as pq last=lastname
pq first.last: find all people with first last, same as pq first=first/last=last
pq first.m.last: find all people with first m last, same as pq first=first/middle=m/last=last
this is how email to dennis.m.ritchie @ att.com worked to send it on to research!dmr
you could send mail to a whole department via /org=45267 or the whole division
via /org=45 or a whole location via /loc=mh or just the two people in a specific
office via /loc=mh/room=2f-164
these are "AND"s; an "OR" is just another query after it on the same line
There were some special extensions -
- prefix, e.g. pq mackin* got all mackin, mackintosh, mackinson, etc
- soundex, e.g. pq mackin~ got all with a last name sounding like mackin,
so names such as mackin, mckinney, mckinnie, mickin, mikami, etc
(mackintosh and mackinson did not match the soundex, and therefore were not included)
The EV database was general and fairly simple. It was a directory with
files called "Data" and "Proto" in it.
"Data" was plain text: pipe-delimited fields, newline-separated records -
123456|ritchie|dennis|m||r320|research!dmr|11273|mh|2c-517|908|582|3770
(using data preserved at https://www.bell-labs.com/usr/dmr/www/)
"Proto" defined the fields in a record (I didn't remember exact syntax anymore) -
id n i
last a i
first a i
middle a -
suffix a -
soundex a i
email a i
org n i
loc a i
room a i
area n i
exch n i
ext n i
"n" means a number so 00001 was the same as 1, and "a" means alpha, the "i" or "-"
told genesis if an index should be generated or not. I think is had more but
that has faded with the years.
If indices were generated, they would point to the block number in Data, so an lseek(2)
could get to the record quickly. I believe there were two levels of block-pointing indices
(sort of like inode block pointers have direct and indirect blocks).
So every time you added records to Data you had to regenerate all the indices, which was
very time consuming.
The nice thing about the text Data was that grep(1) worked just fine, as did cut -d'|' and awk -F'|',
but pq was much faster with a large number of records.
-Brian
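(To make the field positions concrete: in the Proto above, last name is
field 2, first name field 3, and email field 7, so an ad-hoc query against
Data could look like this sketch:)

    awk -F'|' '$2 == "ritchie" && $3 == "dennis" { print $7 }' Data
    # prints: research!dmr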
Dan Cross <crossd at gmail.com> wrote:
> It seems that Andrew has addressed Daytona, but there was a small database
> package called `pq` that shipped with plan9 at one point that I believe
> started life on Unix. It was based on "flat" text files as the underlying
> data source, and one would describe relations internally using some
> mechanism (almost certainly another special file). An interesting feature
> was that it was "implicitly relational": you specified the data you wanted
> and it constructed and executed a query internally: no need to "JOIN"
> tables on attributes and so forth. I believe it supported indices that were
> created via a special command. I think it was used as the data source for
> the AT&T internal "POST" system. A big downside was that you could not add
> records to the database in real time.
>
> It was taken to Cibernet Inc (they did billing reconciliation for wireless
> carriers. That is, you have an AT&T phone but make a call that's picked up
> by T-Mobile's tower: T-Mobile lets you make the call but AT&T has to pay
> them for the service. I contracted for them for a short time when I got out
> of the Marine Corps---the first time) and enhanced and renamed "Eteron" and
> the record append issue was, I believe, solved. Sadly, I think that
> technology was lost when Cibernet was acquired. It was kind of cool.
>
> - Dan C.
>
Hello All.
Many of you may remember the AT&T UNIX PC and 3B1. These systems
were built by Convergent Technologies and sold by AT&T. They had an
MC 68010 processor, up to 4 Meg Ram and up to 67 Meg disk. The OS
was System V Release 2 vintage. There was a built-in 1200 baud modem,
and a primitive windowing system with mouse.
I had a 3B1 as my first personal system and spent many happy hours writing
code and documentation on it.
There is an emulator for it that recently became pretty stable. The original
software floppy images are available as well. You can bring up a fairly
functional system without much difficulty.
The emulator is at https://github.com/philpem/freebee. You can install up
to two 175 Meg hard drives - a lot of space for the time.
The emulator's README.md there has links to lots of other interesting
3B1 bits, both installable software and Linux tools for exporting the
file system from disk image so it can be mounted under Linux and
importing it back. Included is an updated 'sysv' Linux kernel module
that can handle the byte-swapped file system.
I have made a pre-installed disk image available with a fair amount
of software, see https://www.skeeve.com/3b1/.
The emulator runs great under Linux; not so sure about MacOS or Windows. :-)
So, anyone wishing to journey back to 1987, have fun!
Arnold
FYI, interesting.
---------- Forwarded message ---------
From: Tom Van Vleck <thvv(a)multicians.org>
Date: Sun, Feb 14, 2021, 12:35 PM
Subject: Re: [multicians] History of C (with Multics reference)
To: <multicians(a)groups.io>
Remember the story that Ken Thompson had written a language called "Bon"
which was one of the forerunners of "B" which then led to "new B" and then
to "C"?
I just found Ken Thompson's "Bon Users Manual" dated Feb 1, 1969, as told
to M. D. McIlroy and R. Morris
in Jerry Saltzer's files online at MIT.
http://people.csail.mit.edu/saltzer/Multics/MHP-Saltzer-060508/filedrawers/…
Was thinking about our recent discussion about system call bloat and such.
Seemed to me that there was some argument that it was needed in order to
support modern needs. As I tried to say, I think that a good part of the
bloat stemmed from we-need-to-add-this-to-support-that thinking instead
of what's-the-best-way-to-extend-the-system-to-support-this-need thinking.
So if y'all are up for it, I'd like to have a discussion on what abstractions
would be appropriate in order to meet modern needs. Any takers?
Jon
I was lucky enough to actually have a chance to use wm at Carnegie Mellon before it was fully retired in favor of X11 on the systems in public clusters; it made a monochrome DECstation 3100 with 8MB much more livable.
When it was retired, it was still usable for a while because the CMU Computer Club maintained an enhanced version (wmc) that everyone had access to, and Club members got access to its sources.
Did anyone happen to preserve the wm or wmc codebase? There's some documentation in the papers that were published about the wm and Andrew API but no code.
-- Chris
I've been writing about unix design principles recently and tried
explaining "The Rule of Silence" by imagining unix as a restaurant
<http://codefaster.substack.com/p/rule-of-silence>. Do you agree with how I
presented it? Would you do it differently?
Tyler
All,
I'm tooling along during our newfangled rolling blackouts and frigid
temperatures (in Texas!) and reading some good old unix books. I keep
coming across the commands cut and paste and join and suchlike. I use
cut all the time for stuff like:
ls -l | tr -s ' '| cut -f1,4,9 -d \
...
-rw-r--r-- staff main.rs
and
who | grep wsenn | cut -c 1-8,10-17
wsenn console
wsenn ttys000
but that's just cuz it's convenient and useful.
To my knowledge, I've never used paste or join outside of initially
coming across them. But, they seem to 'fit' with cut. My question for
y'all is, was there a subset of related utilities that these were part
of that served some common purpose? On a related note, join seems like
part of an aborted (aka never fully realized) attempt at a text based
rdb to me...
What say you?
Will
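(A sketch of how the three compose, using hypothetical colon-separated
files emp.txt and dept.txt keyed on an id in field 1:)

    sort -t: -k1,1 emp.txt  > emp.srt     # join(1) wants its inputs sorted
    sort -t: -k1,1 dept.txt > dept.srt
    join -t: emp.srt dept.srt             # relational join on field 1
    cut -d: -f2 emp.srt | paste -s -      # one column, folded onto one line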
Rich Morin <rdm(a)cfcl.com> wrote:
> PTF was inspired, in large part, by the volunteer work that produced the
> Sun User Group (SUG) tapes. Because most of the original volunteers had
> other fish to fry, I decided to broaden the focus and attempt a
> (somewhat) commercial venture. PTF, for better or worse, was the
> result.
>
> So, I should also relate some stories about running for and serving on
> the SUG board, hassling with AT&T and Sun's lawyers, assembling
> SUGtapes, etc. My copies of the SUGtapes are (probably) long gone, but
> John Gilmore (if nobody else :-) probably has the tapes and/or their
> included bits.
While I was involved, the Sun User Group made three tapes of freely
available software, in 1985, 1987, and 1989. The 1989 tape includes
both of the earlier ones, as well as new material.
A copy of both the 1987 tape and the 1989 tape are here:
http://www.toad.com/SunUserGroupTape-Rel-1987.1.0.tar.gz
http://www.toad.com/SunUserGroupTape-Rel-1989.tar
http://www.toad.com/
I'll have to do a bit more digging to turn up more than vague memories
about our dealings with the lawyers...
John
Has anyone written down the story of Prime Time Freeware or archived the various distributions? Is there even a complete listing of what they distributed?
I’ve imaged my own stuff (PTF AI 1-1, PTF SDK for UnixWare 1-1, PTF Tools & Toys for UnixWare 1-1) but I’d really like to find the original PTF 1-1 and things like it.
— Chris
Thank you for banner! I used the data, albeit modified, 40 years ago
in 1981, for a banner program as well, on an IBM 1130 (manufactured 1972)
so it could print on an 1132 line printer. The floor would vibrate
when it printed those banners. I used "X" as the printed char as the
1132 did not have the # char. But those banners looked great!
I wrote it in FORTRAN IV. On punched cards. I did this because
from 1980-1982 I only had access to UNIX on Monday evenings from
7PM-9PM, using a DEC LA120 terminal, it was slow and never had
enough ink on the ribbon.
I had only 8K of core memory with only EBCDIC uppercase, so there
were lots of compromises and cleverness needed -
- read in a 16-bit integer as two packed 8-bit numbers
- limit the banner output to only A-Za-z0-9 !?#@'*+,-.=
- unpack the char data into a buffer and then process it
- fix the "U" character data
- find the run-length encodings that could be consolidated to save space
(seeing those made me think it had to have been generated data)
The program still survives here - http://ibm1130.cuzuco.com/
(with sample output runs)
Also, since I had to type all those numbers onto punch cards
with a 029 keypunch, to speed things up I coded my own free-form
atoi() equivalent in FORTRAN: reading cards, packing two numbers into
an integer, then punching out those numbers along with card ID numbers in columns
73-80 on the 1442. This was many weeks of keypunching, checking,
fixing and re-keypunching.
That code is here http://ibm1130.cuzuco.com/ipack.html
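(The packing trick itself is plain arithmetic -- two 8-bit values share one
16-bit word, and a divide and remainder split them back out. A sketch with
hypothetical values, in shell arithmetic:)

    hi=37 lo=200
    word=$(( hi * 256 + lo ))                  # pack: 37,200 -> 9672
    echo $(( word / 256 )) $(( word % 256 ))   # unpack: prints 37 200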
When done the deck was around 8" or so. It took well over a
minute to read in the data cards, after compilation.
Again thanks! Many hundreds of banners for many people were printed
by this, around 2 to 3 a week, until July 1982, when that IBM
was replaced by a Prime system. I still have many fond memories of
that 1130.
-Brian
Mary Ann Horton (mah at mhorton.net) wrote:
> We had vtroff at Berkeley around 1980, on the big Versatec wet plotter,
> 4 pages wide. We got really good at cutting up the pages on the output.
>
> It used the Hershey font. It was horrible. Mangled somehow, lots of
> parts of glyphs missing. I called it the "Horse Shit" font.
>
> I took it as my mission to clean it up. I wrote "fed" to edit it, dot by
> dot, on the graphical HP 2648 terminal at Berkeley. I got all the fonts
> reasonably cleaned up, but it was laborious.
>
> I still hated Hershey. It was my dream to get real C/A/T output at the
> largest 36 point size, and scan it in to create a decent set of Times
> fonts. I finally got the C/A/T output years later at Bell Labs, but
> there were no scanners available to me at the time. Then True Type came
> along and it was moot.
>
> I did stumble onto one nice rendition of Times Roman in one point size,
> from Stanford, I think. I used it to write banner(6).
At some point I thought NeWS source was released. Is it just another
Lost Source or is it out there somewhere?
Do I remember right that it was a Gosling effort?
Apparently they are getting 68040 levels of performance with a Pi... and
that's interpreted. Going with JIT it's way higher.
-----Original Message-----
From: Gregg Levine [SMTP:gregg.drwho8@gmail.com]
Sent: Saturday, February 13, 2021 10:30 AM
To: Jason Stevens; The Eunuchs Hysterical Society
Subject: Re: [TUHS] 68k prototypes & microcode
An amazing idea.
-----
Gregg C Levine gregg.drwho8(a)gmail.com
"This signature fought the Time Wars, time and again."
You might find this interesting
https://twitter.com/i/status/1320767372853190659
<https://twitter.com/i/status/1320767372853190659>
It's a pi (arm) running Musashi a 68000 core, but using voltage buffers it's
plugged into the 68000 socket of an Amiga!
You can find more info on their github:
https://github.com/captain-amygdala/pistorm
<https://github.com/captain-amygdala/pistorm>
Maybe we are at the point where numerous cheap CPU's can eliminate FPGA's?
-----Original Message-----
From: Michael Parson [SMTP:mparson@bl.org]
Sent: Friday, February 05, 2021 10:43 PM
To: The Eunuchs Hysterical Society
Subject: Re: [TUHS] 68k prototypes & microcode
On 2021-02-04 16:47, Henry Bent wrote:
> On Thu, Feb 4, 2021, 17:40 Adam Thornton <athornton(a)gmail.com> wrote:
>
>> I'm probably Stockholm Syndrommed about 6502. It's what I grew up on, and
>> I still like it a great deal. Admittedly register-starved (well, unless
>> you consider the zero page a whole page of registers), but...simple, easy
>> to fit in your head, kinda wonderful.
>>
>> I'd love a 64-bit 6502-alike (but I'd probably give it more than three
>> registers). I mean given how little silicon (or how few FPGA gates) a
>> reasonable version of that would take, might as well include 65C02 and
>> 65816 cores in there too with some sort of mode-switching instruction.
>> Wouldn't a 6502ish with 64-bit wordsize and a 64-bit address bus be fun?
>> Throw in an onboard MMU and FPU too, I suppose, and then you could have a
>> real system on it.
>>
> Sounds like a perfect project for an FPGA. If there's already a 6502
> implementation out there, converting to 64 bit should be fairly easy.
There are FPGA implementations of the 6502 out there. If you've not seen
it, check out the MiSTer[0] project, FPGA implementations of a LOT of
computers, going back as far as the EDSAC, PDP-1, a LOT of 8, 16, and 32
bit systems from the 70s and 80s along with gaming consoles from the 70s
and 80s.
Keeping this semi-TUHS related, one guy[1] has even implemented a
Sparc 32m[2] (I think maybe an SS10), which boots SunOS 4, 5, Linux,
NetBSD, and even the Sparc version of NeXTSTEP, but it's not part of the
"official" MiSTer bits (yet?).
--
Michael Parson
Pflugerville, TX
KF5LGQ
[0] https://github.com/MiSTer-devel/Main_MiSTer/wiki
[1] https://temlib.org/site/
[2] https://temlib.org/pub/mister/SS/
Apologies if this has already been linked here.
"The UNIX Command Languageis the first-ever paper published on the Unix
shell. It was written by Ken Thompson in 1976."
https://github.com/susam/tucl
Joachim
Recent discussions on this list are about the problem getting fonts
for typesetting before there was an industry to provide them. Noted
font designer Chuck Bigelow has written about the subject here:
Notes on typeface protection
TUGboat 7(3) 146--151 October 1986
https://tug.org/TUGboat/tb07-3/tb16bigelow.pdf
Other TUGboat papers by him and his design partner, Kris Holmes, might
be of reader interest:
Lucida and {\TeX}: lessons of logic and history
https://tug.org/TUGboat/tb15-3/tb44bigelow.pdf
About the DK versions of Lucida
https://tug.org/TUGboat/tb36-3/tb114bigelow.pdf
A short history of the Lucida math fonts
https://tug.org/TUGboat/tb37-2/tb116bigelow-lucidamath.pdf
Science and history behind the design of Lucida
https://tug.org/TUGboat/tb39-3/tb123bigelow-lucida.pdf
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Do they *really* want something which is just V7 Unix, with nothing else?
> No TCP/IP, no hot-plug USB support? No web browsing?
> Oh, you wanted more than that? Feature bloat! Feature bloat!
> Feature bloat! Shame! Shame! Shame!
% ls /usr/share/man/man2|wc
495 495 7230
% ls /bin|wc
2809 2809 30468
How many of roughly 500 system calls (to say nothing of uncounted
ioctl's) do you think are necessary for writing those few crucial
capabilities that distinguish Linux from v7? There is
undeniably bloat, but only a sliver of it contributes to the
distinctive utility of today's systems.
Or consider this. Unix grew by about 39 system calls in its first
decade, but an average of 40
per decade ever since. Is this accelerated growth more symptomatic of
maturity or of cancer?
Doug
There's so much experience here, I thought someone might know:
"Our goal is to develop an emulator for the Burroughs B6700 system. We
need help to find a complete release of MCP software for the Burroughs
B6700.
If you have old magnetic tapes (magtapes) in any format, or computer
printer listings of software or micro-fiche, micro-film, punched-card
decks for any Burroughs B6000 or Burroughs B7000 systems we would like
to hear from you.
Email nw(a)retroComputingTasmania.com"
Hi all,
On a completely different note... I’ve been delving into typing tutor programs of late. Quite a mishmash of approaches out there. Not at all like what I remember from junior high - The quick brown fox jumps over the lazy dog, kinda stuff. Best of breed may be Mavis Beacon Teaches Typing on the gui front, and I hate to admit it, gnu typist, on the console front.
I’m wondering if there are some well considered unix programs, historically, for learning typing? Or did everyone spring into the unix world accomplished typists straight outta school? I did see mention a while back about a TOPS-10 typing tutor, not unix, but in the spirit - surely there's some unix history around typing tutors.
Thanks,
Will
> Or consider this. Unix grew by about 39 system calls in its first
> decade, but an average of 40
> per decade ever since. Is this accelerated growth more symptomatic of
> maturity or of cancer?
Looks like I need a typing tutor. 39 should be 30. And a math tutor, too. 40
should be 100.
Doug
$ k-2.9t
K 2.9t 2001-02-14 Copyright (C) 1993-2001 Kx Systems
Evaluation. Not for commercial use.
\ for help. \\ to exit.
This is a *linux* x86 binary from almost exactly 20 years ago running on FreeBSD built from last Wednesday’s sources.
$ uname -rom
FreeBSD 13.0-ALPHA3 amd64
Generally compatibility support for previous versions of FreeBSDs has been decent when I have tried. Though the future for x86 support doesn’t look bright.
> On Feb 8, 2021, at 10:56 PM, John Gilmore <gnu(a)toad.com> wrote:
>
> (I'm not up on what the BSD releases are doing.)
This topic is evocative, even though I really have nothing to say about it.
Mike Lesk started, and I believe Brian contributed to, "learn", a program
for interactive tutorials about Unix. It was never pushed very far--almost
certainly not into typing.
But the mention of typing brings to mind the inimitable Fred Grampp--he
who pioneered massive white-hat computer cracking. Fred's exploits justified
the opening sentence I wrote for Bell Labs' first computer-security task
force report, "It is easy and not very risky to pilfer data from Bell
Laboratories computers." Among Fred's many distinctive and endearing
quirks was the fact that he was a confirmed two-finger typist--proof that
typing technique is an insignificant factor in programmer productivity.
I thought this would be an excuse to tell another ftg story, but I
don't want to repeat myself and a search for "Grampp" in the tuhs archives
misses many that have already been told. Have the entries been lost or
is the index defective?
Doug
I would like to revive Lorinda Cherry's "parts".
Implicit in "revival" is dispelling the hundreds
of warnings from gcc -Wpedantic -Wall -Wextra.
Has anybody done this already?
Doug
> Does anyone know why the computer industry wound up standardising on
8-bit bytes?
I give the credit to the IBM Stretch, aka 7030, and the Harvest attachment
they made for NSA. For autocorrelation on bit streams--a fundamental need
in codebreaking--the hardware was bit-addressable. But that was overkill
for other supercomputing needs, so there was coarse-grained addressability
too. Address conversion among various operand sizes made power of two a
natural, lest address conversion entail division. The Stretch project also
coined the felicitous word "byte" for the operand size suitable for
character sets of the era.
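(The division point can be made concrete: with power-of-two operand sizes,
converting a bit address to a byte or word address is a shift, while an
odd byte width would need a true division. A sketch in shell arithmetic:)

    bits=123456
    echo $(( bits >> 3 ))   # byte address: divide by 8 via a shift
    echo $(( bits >> 6 ))   # 64-bit-word address: divide by 64 via a shift
    echo $(( bits / 6 ))    # a 6-bit byte would need real division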
With the 360 series, IBM fully committed to multiple operand sizes. DEC
followed suit and C naturalized the idea into programmers' working
vocabulary.
The power-of-2 word length had the side effect of making the smallest
reasonable size for floating-point be 32 bits. Someone on the
Apollo project once noted that the 36-bit word on previous IBM
equipment was just adequate for planning moon orbits; they'd
have had to use double-precision if the 700-series machines had
been 32-bit. And double-precision took 10 times as long. That
observation turned out to be prescient: double has become the
norm.
Doug
The topic of GBACA (Get Back At Corporate America), the video game for
the BLIT/5620, has come up on a Facebook group.
Does anyone happen to have any details about it, source code, author,
screen shots, ...?
Thanks,
Mary Ann
I will ask Warren's indulgence here - as this probably should be continued
in COFF, which I have CC'ed - but since it was asked in TUHS I will answer.
On Wed, Feb 3, 2021 at 6:28 AM Peter Jeremy via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
>
Well, 'standardizing' is a little strong. Check out my QUORA answer: How
many bits are there in a byte
<https://www.quora.com/How-many-bits-are-there-in-a-byte/answer/Clem-Cole>
and What is a bit? Why are 8 bits considered as 1 byte? Why not 7 bit or 9
bit?
<https://www.quora.com/What-is-a-bit-Why-are-8-bits-considered-as-1-byte-Why…>
for my details but the 8-bit part of the tail is here (cribbed from those
posts):
The industry followed IBM with the S/360. The story of why a byte is 8 bits
for the S/360 is one of my favorites, since the number of bits in a byte is
defined for each computer architecture. Simply put, Fred Brooks (who led
the IBM System 360 project) overruled the chief hardware designer, Gene
Amdahl, and told him to make things power of two to make it easier on the
SW writers. Amdahl famously thought it was a waste of hardware, but Brooks
had the final authority.
My friend Russ Robeleon, who was the lead HW guy on the 360/50 and later
the ASP (*a.k.a.* project X) who was in the room as it were, tells his yarn
this way: You need to remember that the 360 was designed to be IBM's
first *ASCII
machine* (not EBCDIC as it ended up - a different story)[1]. Amdahl was
planning for a word size of 24 bits and a byte size of 7 bits for
cost reasons. Fred kept throwing him out of his office and told him not to
come back “until a byte and word are powers of two, as we just don’t know
how to program it otherwise.”
Brooks would eventually relent: the original pointer on the System 360
became 24 bits, as long as it was stored in a 32-bit “word”.[2] As a
result (and to answer your original question), a byte first widely became
8 bits with IBM’s System 360.
It should be noted that it still took some time before the 8-bit byte
occurred more widely and in almost all systems as we see it today. Many
systems like the DEC PDP-6/10 used five 7-bit bytes packed into a
36-bit word (with a single bit left over) for a long time. I believe that
the real widespread use of the 8-bit byte did not really occur until the
rise of the minis such as the PDP-11 and the DG Nova in the late
1960s/early 1970s and eventually the mid-1970s’ microprocessors such as
8080/Z80/6502.
Clem
[1] While IBM did lead the effort to create ASCII, and the System 360 actually
supported ASCII in hardware, because the software was so late IBM
marketing decided not to switch from BCD and instead used EBCDIC (their
own code). Most IBM software was released using that code for the System
360/370 over the years. It was not until IBM released their Series/1
<https://en.wikipedia.org/wiki/IBM_Series/1> minicomputer in the late 1970s
that IBM finally supported an ASCII-based system as the natural code for
the software, although it had a lot of support for EBCDIC as they were
selling them to interface to their ‘Mainframe’ products.
[2] Gordon Bell would later observe that those two choices (32-bit word and
8-bit byte) were what made the IBM System 360 architecture last in the
market, as neither would have been ‘fixable’ later.
> From: Greg A. Woods
> There's a "v6net" directory in this repository.
> ...
> I wonder if it is from either of the two ports you mention.
No; the NOSC system is an NCP system, not TCP; and this one has mbufs (which
the BBN v6 one did not have), so it's _probably_ a Berkleyism of some sort
(or did the BBN VAX code have mbuf's too; I don't recall - yes, it did:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-Vax-TCP
see bbnnet/mbuf.c). It might also be totally new code which just chose to
re-use that meme. I don't have time to look closely to see if I see any
obvious descent.
> Too many broken half-baked MUAs seem to still be widely used.
I'm one of the offenders! Hey, this is a vintage computing list, so what's
the problem with vintage mail readers? :-)
Noel
PS: I'm just about done collecting up the MIT PWB1 TCP system; I only have
the Server FTP left to go. (Alas, it was a joint project between a student
and a staffer, who left just at the end, so half the source is in one's
personal area, and the other half is in the other's. So I have to find all
the pieces, and put them in the system's source area.) Once that's done,
I'll get it to WKT to add to the repository. (Getting it to _actually run_
will take a while, and will happen later: I have to write a device driver
for it, as the code uses a rare, long-extinct board.)
> V6, as distributed, had no networking at all. There are two V6 systems with
> networking in TUHS:
>
> https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC
> https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-V6
>
> The first is an 'NCP' Unix (useless unless you have an ARPANet); the second is
> a fairly early TCP/IP from BBN (ditto, out of the box; although one could write
> an Ethernet driver for it).
I’ve also done a port of the BBN VAX stack to V6 (running on a TI990 clone), using a serial
PPP interface to connect. Experimental, but it may be of interest to the OP:
https://www.jslite.net/cgi-bin/9995/dir?ci=tip
> There's also a fairly nice Internet-capable V6 (well, PWB1, actually) from MIT
> which I keep meaning to upload; it includes SMTP, FTP, etc, etc. I also have
> visions of porting an ARP I wrote to it, and bringing up an Ethernet driver
> for the DEQNA/DELQA, but I've yet to get to any of that.
I’d love to have a look at that and compare and contrast the approaches.
I’m finding that BBN’s original design, with a separate kernel thread for the network stack,
is elegant but difficult to tune: too much priority and it crowds out user processes, too little
and the slow PPP line is not kept busy.
I think I’m beginning to understand why CSRG (and later also BBN) moved to
the interrupt driven structure of 4.2BSD: perhaps it was also difficult to tune for a
VAX with ethernet.
> From: Paul Riley
> In the bootable images archive, there's the "Unknown V6" RL02
> image. I've tried that on SimH configured as an 11/23+ with 256kB of RAM
> and it seems to work fine.
Sorry, where's this archive? Somewhere in:
https://www.tuhs.org/Archive/Distributions/Research/
I assume? From the description, that might be from the 'Shoppa disks'; I
didn't realize there was a /23 system on those.
> I would assume that Ethernet boards are available, but not supported on
> V6.
V6, as distributed, had no networking at all. There are two V6 systems with
networking in TUHS:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=SRI-NOSC
https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-V6
The first is an 'NCP' Unix (useless unless you have an ARPANet); the second is
a fairly early TCP/IP from BBN (ditto, out of the box; although one could write
an Ethernet driver for it).
There's also a fairly nice Internet-capable V6 (well, PWB1, actually) from MIT
which I keep meaning to upload; it includes SMTP, FTP, etc, etc. I also have
visions of porting an ARP I wrote to it, and bringing up an Ethernet driver
for the DEQNA/DELQA, but I've yet to get to any of that.
> it's hard to glean that wisdom from reading the manual.
Yeah, DEC manuals went through a phase-change around about the time of the
/23. Old DEC manuals are wonderful; stuffed to the gills with deep technical
details. Suitable for engineers...
Later, they turned into manuals for 'ordinary people' - 'plug cable C1 into
plug P1'. Semi-useless; although one can often glean a few useful morsels
by trawling through the entire thing.
That's why I've been doing PDP-11 pages on the CHWiki which attempt to cover
a lot of technical detail, with a high ratio of technical content to size.
If you need something that's not there, let me know, and I'll get to adding it.
Noel
I've done some research for a friend about when the reboot() system call
was added, and how it related to the sync, sync, sync dance.
https://bsdimp.blogspot.com/2020/07/when-unix-learned-to-reboot2.html
may be of interest. Please do let me know if I've gotten something wrong...
Warner
Hello All.
I have updated various READMEs in the QED archive I set up a while
back: https://github.com/arnoldrobbins/qed-archive. Now included
is a link to Leah's blog, mention that the SDS files came from Al Kossow,
and Doug's link to the Multics QED cheat sheet.
Thanks,
Arnold
> fairly early in PDP-11 development ed gained three features: & in the
> rhs of substitutions plus k and t commands. (I'm not sure about & ....
Oh, and backreferencing, which took regular expressions way up the
complexity hierarchy--into NP-complete territory were it not for the limit
of 9 backreferenced substrings. (Proof hint: reduce the knapsack problem to
an ed regex.)
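A modern demonstration of the power a single backreference adds (using
POSIX regcomp(3) rather than 1970s ed, and testing compositeness rather
than knapsack): the pattern below matches a string of n identical
characters exactly when n factors as k*m with k, m >= 2.

#include <regex.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* BRE: a group of two or more characters, repeated at least
       once more -- so a total length of k*m with k, m >= 2. */
    regex_t re;
    char buf[32];

    if (regcomp(&re, "^\\(...*\\)\\1\\1*$", 0) != 0)
        return 1;
    for (int n = 2; n <= 20; n++) {
        memset(buf, 'x', n);
        buf[n] = '\0';
        printf("%2d %s\n", n,
            regexec(&re, buf, 0, NULL, 0) == 0 ? "composite" : "prime");
    }
    regfree(&re);
    return 0;
}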
Also g and s were generalized to allow escaped newlines.
I was indeed wrong about &. It was in v1.
Doug
Another stack of old notebooks. I can scan these in if anyone is interested
and if they're not available elsewhere. In addition to what's below, I have
a fat notebook with the BRL CAD package docs.
These are V/W papers from Stanford - Lantz/Cheriton et al.
Multi-process Structuring of User Interface Software
Reference Models, Window Systems, and Concurrency
An Experiment in Integrated Multimedia Conferencing
An Architecture for Configurable User Interfaces
An Empirical Study of Distributed Application Performance
Third Generation Graphics for Distributed Systems
Virtual Terminal Management In A Multiple Process environment
Distributed Process Groups in the V Kernel
The Distributed V Kernel and its Performance for Diskless Workstations
Effective Use of Large RAM Memories on Diskless Workstations with the V Virtual Memory System
Evaluating Hardware Support for Superconcurrency with the V Kernel
Fault-tolerant Transaction Management in a Workstation Cluster
File Access Performance of Diskless Workstations
An Introduction to the V System
The Multi-Satellite Star: Structuring Parallel Computations for a Workstation Cluster
UIO: A Uniform I/O System Interface for Distributed Systems
The V Kernel: A Software Base for Distributed Systems
Other random stuff
Bitmap Graphics (SIGGRAPH '84 Course Notes, Pike et al.)
A Window Manager with a Modular User Interface (Whitechapel?)
IRIS-4D Superworkstation and Visual Computing
IRIS GT Graphics Architecture
Position Paper on the Importance and Application of Video Mixing Display Architectures (Jack Grimes)
A Data-Flow Manager for an Interactive Programming Environment (Paul Haeberli)
Multiple Programs in One UNIX Process (Don Libes - from ;login:)
Lightweight Processes for UNIX Implementation and Applications (Jonathan Kepecs)
A Capability Based Hierarchic Architecture for UNIX Window Management (R. D. Trammell)
MEX - A Window Manager for the IRIS (Rocky Rhodes et al.)
Windows for UNIX at Lucasfilm (Hawley, Leffler)
Next-Generation Hardware for Windowed Displays (McGeady)
Problems Implementing Window Systems in UNIX (Gettys)
Mach: A New Kernel Foundation For UNIX Development (Accetta et al.)
Uwm: A User Interface for X Windows (Gancarz)
Programming with Windows on the Major Workstations or Through a Glass Darkly (Daniel, Rogers)
PIX, the latest NeWS (Leler)
Ace: a syntax-driven C preprocessor Overview (Gosling)
Attribute Considerations in Raster Graphics (Bresenham)
Ten Years of Window Systems - A Retrospective View (Teitelman)
W User's Manual (Asente)
The WA: Beyond Traditional Window Systems (An Overview of the Workstation Agent) (Lantz et al., draft, marked not to be redistributed)
Performance Measurements of the WA (Islam)
STDWIN: A Standard Window System Interface (Rossum)
Summary of Current Research (Lantz et al. at Olivetti)
User Interfaces in Window Systems: Architecture and Implementation (Farrell, Schwartz; SIGCHI)
Introduction to the GMW Window System (Hagiya)
UNIX Window Management Systems Client-Server Interface Specification (Williams et al., Rutherford Appleton Laboratory)
Curves Made Trivial (Gosling)
Smart Code, Stupid Memory: A Fast X Server for a Dumb Color Frame Buffer (McCormack)
Jon
I used Ken's qed in pre-Unix days. I understand its big departure from the
original was regular expressions. Unix ed was the same, with
multi-file capability dropped. Evidently the lost function was not much
missed, for it didn't come back when machines got bigger. I remember
that fairly early in PDP-11 development ed gained three features: & in the
rhs of substitutions plus k and t commands. (I'm not sure about &--that was
50 years ago.)
With hindsight it's surprising that a "minimalist" design had m but not t,
for m can be built from t but not vice versa.
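(Concretely: in ed, 5,8t20 followed by 5,8d has the same effect as 5,8m20
when the destination follows the source--with an earlier destination the
second address pair must be adjusted--while no sequence of m commands can
duplicate a line.)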
A cheat sheet for multics qed is at
http://www.bitsavers.org/pdf/honeywell/multics/swenson/6906.multics-condensed-gui….
It had two commands I don't remember: sort(!) and transform, which I assume
is like y in sed.
Doug
Hi all,
So. On a lighter note. I was tooling around the web and came across a discussion of QED, the editor. It’s been resurrected in no small part based on discussions on this list (and members like Rob Pike). Anyhow, there’s a version that compiles on modern systems and that handles wide characters. My question for the group is this: how different is QED from ed? I’ve read Dennis’ paper on the history of QED and it’s fascinating, but all I really got out of the discussion related to ed was that QED was a precursor. I’m curious about functional parity or lack thereof, more than technical differences. In full disclosure, and at the risk of drawing fire from lovers of other editors, I have to confess a love of the original ed (and its descendant eds and vi).
Cheers,
Will
Sent from my iPhone
> From: Jim Geist
> When did mmap(2) come about?
Pretty sure it's a Berserkleyism. I think it came in with the VM stuff that
DARPA mandated for VAX Unix (for the research project they funded).
Noel
One of the things that I've noticed in my explorations into the H.J. Lu
bootable root disks is that some of them predate the /sbin split in Linux.
One of them has exactly one file in /sbin and other commands spread
across /bin, /usr/bin, and /etc. The single file in /sbin is sln.
To me, this makes it fairly self evident that /sbin was originally for
statically linked binaries. At least in Linux.
Does anyone have any history of /sbin from other traditional Unixes?
I'd be quite interested in learning more.
I also noticed that (at least) one of the early versions of the H.J. Lu
disks had root's home directory in /usr/root.
I seem to recall that one version used an atypical /users instead of /usr.
Which as I understand it, goes back to the original / vs /usr split in
Unix, before /home became a thing.
--
Grant. . . .
unix || die
Nice archaeology. Blinded by my distaste for Basic, I never bothered to
try bs--and should have. Dave has highlighted features that deserve respect.
One telling example suggests this should be legalized in C:
printf("%s\n", {"true", "false"}[1]);
Doug
All,
So... I've moved on from v7 to 2.11bsd - shucks, vi and tar and co. just
work there and everything else seems to be similar enough for what I'm
interested in anyway. So yay, I won't be pestering y'all about vi
anymore :). On the other hand, now I'm interested in printing the docs.
2.11bsd comes with docs in, of all places, /usr/doc. In there are
makefiles for making the docs - ok, make nroff will make ascii docs, and
make troff will (I think) make troff docs using Ossanna's 'original' troff.
So, after adding -t to it so it didn't complain about 'typesetter busy', I
got no errors. I mounted a tape, tar'ed my .out file and untar'ed it on my
macbook (did it for the nroff and troff output). Then I hit the first snag:
groff -Tps -ms troff.out > whatever.ps resulted in 'cannot adjust line' and
'cannot break line' errors, and groff -Tps -ms nroff.out > whatever.ps
resulted in a bunch of double vision. I seem to recall doing this in v6
and it working ok (at least for nroff).
My questions:
1. Is there a troff to postscript conversion utility present in a stock
2.11 system (or even patch level 4xx system)?
2. Is there a way to build postscript directly on the system?
3. Is there an alternative modern way to get to ps or pdf output from
the nroff/troff that 2.11 has?
I'm still digging into the nroff stuff as that may be just minor diffs
between ancient nroff macros and "modern" macros or even just errors
(.sp -2 rather than .sp or .sp -1, .in -2 instead of .in +2), etc.
Although, the files display ok in 2.11bsd using nroff -ms nroff.out...
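I suspect part of the trouble is that Ossanna's troff with -t writes C/A/T
typesetter codes rather than troff source, so groff -Tps -ms is being asked
to typeset device codes as input; running groff -ms on the original source
files from /usr/doc on the macbook is probably the cleaner route. And the
nroff double vision smells like the backspace-overstrike bolding in nroff
output, which col -b (or ul) should strip.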
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Norman Wilson
> You get a good deal and support a worthwhile small business (not just
> ABE but the individual selling shop) at the same time.
ABE isn't a small business (any more); Amazon bought them a couple of years
ago. Biblio (https://www.biblio.com/) is the same basic thing ("more than 6500
independent book stores"), but independent. There's also Alibris
(https://www.alibris.com/) but I like Biblio's site better; YMMV.
Noel
Nemo Nusquam:
Borenstein wrote a book ("Programming as if people mattered: Friendly
Programs, Software Engineering, and Other Noble Delusions") in which he
mused about W and X and Andrew. (A very nice read but horribly
expensive -- fortunately I bought it when PUP had reasonably priced
paperbacks.)
======
abebooks.com is your friend here. I just bought a used paperback copy
for about USD 15 including shipping to Canada. There are others of
similar price. Shipping to the US is probably a little cheaper.
There's at least one copy available from a seller in the UK as
well (and doubtless some from other countries if you dig further
in the listings).
For those who don't know, ABE is a central place for independent
booksellers, including used-book shops, to sell online. You get
a good deal and support a worthwhile small business (not just ABE
but the individual selling shop) at the same time.
Norman Wilson
Toronto ON
Since nobody seems to have mentioned his passing yet, I thought I might.
David Tilbrook died (from complications of COVID-19) in the early hours
of January 15, 2021.
He had been in long term care in Toronto for just over a year.
His web site remains up and running for now at http://qef.com/ though I
don't know for how long that may last.
--
Greg A. Woods <gwoods(a)acm.org>
Kelowna, BC +1 250 762-7675 RoboHack <woods(a)robohack.ca>
Planix, Inc. <woods(a)planix.com> Avoncote Farms <woods(a)avoncote.ca>
All,
I came across this note on vermaden's valuable news blog and thought
y'all might enjoy it - it's not pure unix, but it's got a lot of
crossover. The history is interesting and to us relative newbs,
informative. I can't confirm its accuracy on the history side of things,
but I'm sure you can :).
http://unixsheikh.com/articles/the-terminal-the-console-and-the-shell-what-…
Later,
Will
SIMH has 3b2 emulation...
Much of the work was documented here:
https://loomcom.com/3b2/emulator.html
-----Original Message-----
From: Henry Bent [SMTP:henry.r.bent@gmail.com]
Sent: Wednesday, January 27, 2021 12:05 AM
To: Arnold Robbins
Cc: The Eunuchs Hysterical Society
Subject: Re: [TUHS] System V Release 2, adding swap?
On Mon, 25 Jan 2021 at 11:02, Arnold Robbins <arnold(a)skeeve.com> wrote:
Hi.
Does anyone know how to add swap space on a System V Release 2 system?
In particular, on an emulated AT&T 3B1. The kernel is S5R1 or S5R2 vintage.
I don't see any commands with 'swap' in their names.
A little bit of Google Groups trawling turned up this:
https://groups.google.com/g/comp.sys.att/c/8XLILI3K8-Y/m/VxVMJNdt9NQJ
But I don't have one of those systems, so I have no way to verify.
-Henry
At 03:46 PM 1/23/2021, Dave Horsfall wrote:
>Sent to me from a fellow weirdo...
>
>At 19:25:36 AEDT (08:25:36 UTC), Unix time reached 0x60000000. We're three quarters of the way to 2038...
That was January 14, 2021, right?
https://www.epochconverter.com/hex
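For anyone who'd rather check locally than trust a web page, a quick C
sanity check:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 0x60000000 seconds after the Unix epoch, rendered in UTC. */
    time_t t = 0x60000000;

    printf("%s", asctime(gmtime(&t)));  /* Thu Jan 14 08:25:36 2021 */
    return 0;
}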
- John
Hi.
Does anyone know how to add swap space on a System V Release 2 system?
In particular, on an emulated AT&T 3B1. The kernel is S5R1 or S5R2
vintage.
I don't see any commands with 'swap' in their names.
Thanks,
Arnold
I'm writing about my 5 favorite unix design principles on my blog this
week, and it got me wondering what others' favorite unix design principles
are? For reference, mine are:
- Rule of Separation (from TAOUP <http://catb.org/~esr/writings/taoup/html/>)
- Let the Machine Do the Dirty Work (from Elements of Programming Style)
- Rule of Silence (from TAOUP <http://catb.org/~esr/writings/taoup/html/>)
- Data Dominates (Rob Pike #5)
- The SPOT (Single Point of Truth) Rule (from TAOUP
<http://catb.org/~esr/writings/taoup/html/>)
Tyler
> From: Paul Riley
> Is LSX the only option on the 11/03, or could I run V6 or Mini-Unix with
> more RAM?
All PDP-11 Unix versions from V4 on require the MMU, so the -11/03 is out for
them. We don't have the code for V2-V4, though. So V1 (mostly all assembler,
no C :-), LSX and Mini-Unix are the only options for it.
V6 can be run on an -11/23 (I've done it), but not straight out of the box;
it requires a few minor tweaks first:
http://gunkies.org/wiki/Running_UNIX_V6_on_an_-11/23
Noel
On 1/24/21, Jon Steinhart <jon(a)fourwinds.com> wrote:
> So I never liked Apollos much. What I was referring to was Apollo's claim
> that their token-ring network performed better for large numbers of nodes.
> And they were correct. However, they didn't consider the eventual
> invention of switches that solved the problem.
A problem that shouldn't have ever been there in the first place. When
I was at EDS, we did a lot of benchmarks against token-ring vs.
CSMA-CD. Token-ring was slower than CSMA-CD until the traffic got to
be more than about 10% of capacity - then the collision detection
exponential backoff algorithm would clobber the network. The argument
that "well, we will never get above that anyway, so we want the
fastest we can get" sort of short-sightedness won the day. It wasn't
until switches and virtual LANs came into existence that (as you said)
the problem was solved.
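For reference, the mechanism being blamed is classic Ethernet's truncated
binary exponential backoff. A sketch of the standard algorithm (not code
from any particular driver): after the nth successive collision a station
waits a random number of 512-bit slot times drawn from [0, 2^min(n,10) - 1],
giving up after 16 attempts; under heavy load many stations keep colliding
and re-arming this delay, which is how throughput collapses.

#include <stdlib.h>

/* Returns how many slot times to wait after the nth successive
   collision, or -1 to signal excessive collisions (give up). */
int backoff_slots(int collisions)
{
    int k = collisions < 10 ? collisions : 10;

    if (collisions > 16)
        return -1;
    return rand() % (1 << k);
}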
Sent to me from a fellow weirdo...
At 19:25:36 AEDT (08:25:36 UTC), Unix time reached 0x60000000. We're
three quarters of the way to 2038...
Stock up on food, load dem guns, and batten down the hatches :-)
-- Dave
Hi folks,
In case you're interested:
I've published a couple videos on these ancient Unix tools, sharing
some language details and showing them in action on v7 and
System III, respectively:
Ken Thompson's bas(1): https://youtu.be/LZUMNZTUJos
Dick Haight's bs(1): https://youtu.be/ELICIa3L22o
Thanks much for the help from TUHS, Mashey, Kernighan, McIlroy, and
others cited therein.
Peace,
Dave
--
dave(a)plonka.us http://www.cs.wisc.edu/~plonka/
Dave Horsfall:
At one place I worked, every Unix bod sported facial fungus; it must be a
Unix thing...
====
Not really. I've seen bare faces and beards and operating systems
over the decades that would scare the bugs out of any of the above
beards and operating systems. There really isn't a lot of consistency.
I've known plenty of bare-faced UNIX hacks, and plenty of RSX and
VMS and Windows and IBM programmers who hide their embarrassment
behind beards.
The ultimate reference is, of course, the inhabitants of the UNIX
Room. During my time there in mid-to-late 1980s, some people wore
beards, some didn't. I have never seen Ken or Dennis or Brian
clean-shaven, but I have never seen Doug or Rob or Tom Duff or
Lorinda with a beard.
And despite a certain remark attributed to the late Vic Vyssotsky
(who I've never seen with a beard either), I am quite sure that
my appearance had nothing to do with my being recruited by the group.
Norman `too lazy to shave' Wilson
Toronto ON
Andrew Hume (dammit andrew):
i have probed recently about the origins of the “EGREG” (it’s all greg chesson’s fault) error in Research Unix.
alas, i recall nothing about this, and can't recall ever getting the message.
===
Your memory fails you, which is not unreasonable for stuff you
probably haven't thought about in more than 30 years:
/*
SCSI Pass-Thru driver for the TD Systems UD? -- Andrew Hume
Ninth Edition Unix
*/
[...]
scsiwrite(dev)
dev_t dev;
{
register count;
register struct scsi *p = &scsi[minor(dev)];
register struct mscmd *cmd = &p->junk->cmd.msg;
unsigned char flag, bus_id;
if(p->flag&NEXTWR)
p->flag &= ~NEXTWR;
else {
u.u_error = EGREG;
return;
}
As I remember it, EGREG went into errno.h and libc out
of a desire to have some never-normally-used error to
be returned when debugging. I forget just who was in
the UNIX Room conversation that created it; almost certainly
I was. I thought andrew was as well; very likely one or
more of andrew td presotto.
I do remember being a bit annoyed at Andrew for putting it
in permanent use in the raw-SCSI driver (which was at the
time of interest mainly to Andrew for controlling an
early optical-disc jukebox, used by the original File
Motel backup system).
As to the origin of `It's all Greg's fault' as a meme,
that was already around and established when I arrived at
the Labs in mid-1984, though Greg himself had already
moved west. Maybe Doug or Ken remembers how that started.
Andrew himself was responsible for, or blamed for, more than
one meme of the day. The scsi driver spawned one, in fact:
the first attempt used a SCSI interface from Emulex, which
never worked quite right, and despite repeated phone calls
to Emulex Andrew could never get it figured out. He tried
and tried, though, and his attempts spawned the catch
phrase `Time to call Emulex again!'
Norman Wilson
Toronto ON
an urgent request: can someone please send me sam leffler’s email address?
its needed for a funeral (dave tilbrook just died).
also, rob pike, can you send me your email please?
(these are for tilbrook’s son.)
i have probed recently about the origins of the “EGREG” (it’s all greg chesson’s fault) error in Research Unix.
alas, i recall nothing about this, and can’t recall ever getting the message.
however, courtesy of Dave Presotto (i am fairly sure), there was an equivalent error
in Plan 9, where more or less randomly, if your user id was ‘andrew’, system calls would fail.
and yes, i did feel special; this was one of my lesser contributions to Plan 9.
this stopped after a while (several to many months, maybe).
andrew hume
fyi
-------- Original Message --------
From: joel(a)sirjofri.de
Sent: January 13, 2021 11:30:31 AM EST
To: sl(a)stanleylieber.com
Subject: Writer's Workbench on Plan 9/9front
Hello TUHS,
I don't know if that mail arrives since I'm not subscribed to the tuhs
mailing list. I just thought this might be interesting to some of
you, especially since I noticed there are some threads about the
writer's workbench.
Some weeks ago I started porting V10 wwb tools to 9front (which is a
fork of Plan 9). I have still many things to do and currently only
limited time, but the greater tools (style, diction and suggest) work.
Most code worked fine, btw, only minor changes needed. Especially
implicit C declarations and missing #includes. Comparison with the
original code in the archive is possible.
I also tried porting (or rewriting) the shell scripts in rc, and made
mkfiles that better fit the Plan 9 build systems. I also included
acme commands, they also translate the locations into the plumber
format for the files (filename:line).
Here's a link to the git repository (yes, we have a native git
implementation now): https://git.sr.ht/~sirjofri/wwb9
sirjofri
As the new year is about to kick in (down-under anyway), it got me to
thinking (always dangerous): how many here will be around for it to pick
up the pieces that are no doubt still lying around?
I'll be about the ripe old age of 85, so I may be around to see the
Imminent Death of the Internet (Film at 11).
2100? Forget it... Too bad, as "Revolt in 2100 (?)" is one of my
favourite Heinlein books.
Others?
-- Dave
We all know and love the UNIX beard, but I can't find anything on how the
beards started other than an old photo of Ken and Dennis with majestic
manes.
And, to make the question a bit more meta, what's the history of the joke
of the "UNIX beard"?
Tyler
From output of 'what' on /bin/sh in SCO UNIX 3.2V4.2
- spname.c 23.2 91/02/21
Cheers,
uncle rubl
>Date: Sat, 09 Jan 2021 03:39:16 -05
>From: Norman Wilson <norman(a)oclsc.org>
>To: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Question
>Message-ID: <1610181560.23999.for-standards-violators(a)oclsc.org>
>
>Rob Pike, re the spelling corrector in V8 and later Research
>versions of sh:
>
> That was done by Tom Duff, I believe before he came to Bell Labs. I might
> have brought the idea with me from Toronto.
>
>Very likely, since you left it behind at Caltech as well; it was
>in sh on cithep (a hostname meaningless to many but rob will remember)
>when I arrived in 1980.
>
>It was in the version of p you left behind there as well.
>
>I can confirm that spname remained in the shell through V10
>(it's still in my copy), but it seems to have disappeared from p.
>
>Norman Wilson
>Toronto ON