Hi
I am looking for any write-up or recollection about the debate mentioned here:
https://pages.cs.wisc.edu/~reps/popl00/cfd00.html
And also mentioned in an interview with Fran Allen (Coders at Work).
Many thanks
Regards
Dibyendu
Please excuse the wide distribution, but I suspect this will have general
interest in all of these communities due to the loss of the LCM+Labs.
The good folks from SDF.org are trying to create the Interim Computer
Museum:
https://icm.museum/join.html
As Lars pointed out in an earlier message to COFF there is a 1hr
presentation on the plans for the ICM.
https://toobnix.org/w/ozjGgBQ28iYsLTNbrczPVo
FYI: The yearly (Bootstrap) subscription is $36
They need the money to try to keep some of these systems online and
available. The good news is that it looks like many of the assets, such as
Miss Piggy, the Multics work, the Toads, and others, from the old LCM are
going to be headed to a new home.
Just sharing a copy of the Roff Manual that I had forgotten I scanned a little while back:
https://archive.org/details/roff_manual
This appears to be the UNIX complement to the S/360 version of the paper backed up by Doug here: https://www.cs.dartmouth.edu/~doug/roff71/roff71.pdf
From the best I could tell, this predates both 1973's V3 and the 1971 S/360 version of the paper, putting it somewhere prior to 1971. For instance, it is missing the .ar, .hx, .ig, .ix, .ni, .nx, .ro, .ta, and .tc requests found in V3. The .ar, .ro, and .ta requests pop up in the S/360 paper; the rest are in the V3 manpage (prior manpages don't list the request summary).
If anyone has some authoritative date information I can update the archive description accordingly.
Finally, this very well could be missing the last page: the Page offset, Merge patterns, and Envoi sections of Doug's paper are not reflected here, although at the very least, the .mg request is not in this paper, so the Merge patterns section probably wasn't there anyway.
- Matt G.
I had meant to copy TUHS on this.
On Wed, Jul 17, 2024 at 2:41 PM Tom Lyon <pugs78(a)gmail.com> wrote:
> I got excited by your mention of a S/360 version, but Doug's link talks
> about the GECOS version (GE/Honeywell hardware).
>
> Princeton had a S/360 version at about that time, it was a re-write of a
> version for the IBM 7094 done by Kernighan after spending a summer at MIT
> with CTSS and RUNOFF. I'm very curious whether the Princeton S/360 version
> spread to other locations. Found this article in the Daily Princetonian
> about the joy and history of ROFF.
> https://photos.app.goo.gl/zMWV1GRLZdNBUuP36
>
> On Wed, Jul 17, 2024 at 1:51 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>
>> Just sharing a copy of the Roff Manual that I had forgotten I scanned a
>> little while back:
>>
>> https://archive.org/details/roff_manual
>>
>> [...]
>>
>> - Matt G.
>>
>
> Yeah, but if you do that you have to treat the places
> acquired in the Louisiana Purchase differently because
> they switched in 1582. And Puerto Rico. Bleh.
Then there are all the German city states. And the
shifting borders of Poland. (cal -s country) is a mighty
low-res "solution" to the Julian/Gregorian problem.
Doug
The manpage for "cal" used to have the comment "Try September 1752" (and
yes, I know why); it's no longer there, so when did it disappear? The
SysV fun police?
I remember it in Ed5 and Ed6, but can't remember when I last saw it.
Thanks.
-- Dave
A few folks on the PIDP mailing lists asked me to put scans of the cards I
have on-line. I included TUHS as some of the folks new to the PDP-11 might
find these interesting also.
I also included a few others from other folks. Note my scans are in 3
formats (JPG, TIFF, PDF) as each has advantages. Pick which one you
prefer. I tried to scan in as high a resolution as I could in case someone
wants to try to print them later.
I may try adding some of my other cards, such as my microprocessor and IBM
collections, in the future.
https://drive.google.com/open?id=13dPAlRMQEwNvPwLXwlOC5Q_ZrQp4IpkJ&usp=driv…
I subscribe to the TUHS mailing list, delivered in digest form. I do not
remember having subscribed to COFF, and am not aware of how to do so. Yet
COFF messages come in my TUHS digest. How does COFF differ from TUHS and how
does one subscribe to it (if at all)?
Doug
Just had a quick look at 'man cal' on Unixes I've got at hand. Just a 'cut
and paste' of the relevant parts.
SCO UNIX 3.2V4.2
Limitations
Note that ``cal 84'' refers to the year 84, not 1984.
The calendar produced is the Gregorian calendar from September 14 1752
onward. Dates up to and including September 2 1752 use the Julian
calendar. (England and her colonies switched from the Julian to the
Gregorian calendar in September 1752, at which time eleven days were
excised from the year. To see the result of this switch, try cal 9 1752.)
Digital UNIX 4.0g
DESCRIPTION
The cal command writes to standard output a Gregorian calendar for the
specified year or month.
For historical reasons, the cal command's Gregorian calendar is
discontinuous. The display for September 1752 (cal 9 1752) jumps from
Wednesday the 2nd to Thursday the 14th.
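For reference, the discontinuity both excerpts describe renders like this
(output of cal 9 1752 on a modern system, which matches the historical layout):

       September 1752
    Su Mo Tu We Th Fr Sa
           1  2 14 15 16
    17 18 19 20 21 22 23
    24 25 26 27 28 29 30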
--
The more I learn the better I understand I know nothing.
All ...
----- Forwarded message from Poul-Henning Kamp -----
Subject: DKUUG, EUUG and 586 issues (3700+ pages) of Unigram-X 1988…1996
(Please forward to the main TUHS list if you think it is warranted)
A brief intro: Datamuseum.dk is a volunteer-run effort to collect,
preserve and present "The Danish IT-history".
UNIX is part of that history, but our interest is seen through the
dual prisms of "Danish IT-History" and the computers in our collection.
My own personal UNIX interest is of course much deeper and broader,
which is why I'm sending this email.
Recently we helped clean out the basement under the Danish Unix
User's Group (DKUUG), which is winding down, and we hauled off a lot
of stuff, which includes much EUUG (European Unix Users Group)
material.
As I feed it through the scanner, the EUUG-newsletters will appear here:
https://datamuseum.dk/wiki/Bits:Keyword/PERIODICALS/EUUG-NEWSLETTER
And proceedings from EUUG conferences (etc.) will appear here:
https://datamuseum.dk/wiki/Bits:Keyword/DKUUG/EUUG
I also found four boxes full of "Unigram-X" newsletters.
Unigram-X was a newsletter, published weekly out of London. A
typical issue was two yellow A3 sheets folded, or if news was
slight, a folded A3 with an A4 insert.
… and that is just about all I know about it.
But whoever wrote it, they clearly had an amazing Rolodex.
In total there are a tad more than 3700 pages of real-time news and
gossip about the UNIX world from 1986 to 1996.
It's not exactly core material for datamuseum.dk, but it is a
goldmine for UNIX history, so I have spent two full days scanning
and getting all the pages sorted, flipped and split into
one-year-per-PDF files.
I should warn that neither the raw material nor the scan is perfect,
but this is it, unless somebody else feels like going through it again.
(The paper stays in our collection, no rush.)
I need to go through and check for pages being upside down or out
of order, before I ingest the PDFs into the Datamuseum.dk bitarchive,
but here is a preview:
https://phk.freebsd.dk/misc/unigram_x_1986_0034_0058.pdf
https://phk.freebsd.dk/misc/unigram_x_1987_0059_0108.pdf
https://phk.freebsd.dk/misc/unigram_x_1988_0109_0159.pdf
https://phk.freebsd.dk/misc/unigram_x_1989_0160_0211.pdf
https://phk.freebsd.dk/misc/unigram_x_1990_0212_0262.pdf
https://phk.freebsd.dk/misc/unigram_x_1991_0263_0313.pdf
https://phk.freebsd.dk/misc/unigram_x_1992_0314_0365.pdf
https://phk.freebsd.dk/misc/unigram_x_1993_0366_0416.pdf
https://phk.freebsd.dk/misc/unigram_x_1994_0417_0467.pdf
https://phk.freebsd.dk/misc/unigram_x_1995_0468_0518.pdf
https://phk.freebsd.dk/misc/unigram_x_1996_0519_0616.pdf
My ulterior motives for this preview are several:
If you find any out-of-order or rotated pages, please let me know.
It's not a complete collection; the following issues are missing:
1…33 35 39…49 86…87 105 138 229 321 400 405…406 496 498
507 520 523…524 527…528 548 613 615 617…
It would be nice to fill the holes.
As a matter of principle, we do not store OCR'ed PDF's in the
datamuseum.dk bitarchive[1], and what with me still suffering from
a job etc, I do not have the time to OCR 3700+ pages under any
circumstances.
But even the most crude and buggy OCR reading would be a great
resource to grep(1), so I'm hoping somebody else might find
the time and inclination?
And a "best-of-unigram-x" page on the TUHS wiki may be warranted,
because there are some seriously great nuggets in there :-)
Enjoy,
Poul-Henning
[1] I'm not entertaining any arguments about this: We're trying
to align with best practice in historical collection world.
The argument goes: Unless the OCR is perfect, people will do
a text-search, not find stuff, and conclude it is not there.
Such interpretations of artifacts belong in peer-reviewed papers,
so there is a name of who to blame or praise, and so that they
can be debated & revised etc.
[2] The PDF's are archive-quality, you can extract the raw images
from them, for instance with XPDF's "pdfimages" program.
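For example, a minimal invocation against one of the preview files
(assuming xpdf or poppler is installed) would be:

    pdfimages -j unigram_x_1986_0034_0058.pdf unigram_1986

which writes each page image out as unigram_1986-000.jpg and so on
(or .ppm/.pbm where an embedded image is not JPEG-encoded).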
----- End forwarded message -----
> From: Dan Cross
> These techniques are rather old, and I think go back much further than
> we're suggesting. Knuth mentions nested translations in TAOCP ..
> suggesting the technique was well-known as early as the mid-1960s.
I'm not sure what exactly you're referring to with "[t]hese techniques"; I
gather you are talking about the low-level mechanisms used to implement
'system calls'? If so, please permit me to ramble for a while, and revise
your time-line somewhat.
There are two reasons one needs 'system calls'; low-level 'getting there from
here' (which I'll explain below), and 'switching operating/protection
domains' (roughly, from 'user' to 'kernel').
In a batch OS which only runs in a single mode, one _could_ just use regular
subroutine calls to call the 'operating system', to 'get there from here'.
The low-level reason not to do this is that one would need the addresses of
all the routines in the OS (which could change over time). If one instead
used permanent numbers to identify system calls, and had some sort of 'system
call' mechanism (an instruction - although it could be a subroutine call to a
fixed location in the OS), one wouldn't need the addresses. But this is just
low level mechanistic stuff. (I have yet to research to see if any early OS's
did use subroutine calls as their interface.)
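To make the 'permanent numbers' idea concrete, here is a minimal C sketch of
the dispatch that sits behind such a mechanism (all names invented for
illustration):

    /* The user side quotes only a stable call number; the OS indexes a
       table, so the routines behind it can move between releases
       without breaking user programs. */
    typedef int (*syscall_fn)(int, int, int);

    static int sys_read(int a, int b, int c)  { (void)a; (void)b; (void)c; return 0; }
    static int sys_write(int a, int b, int c) { (void)a; (void)b; (void)c; return 0; }

    static const syscall_fn syscall_table[] = {
        sys_read,    /* call number 0 */
        sys_write,   /* call number 1 */
    };

    /* Entered from the trap taken by the 'system call' instruction. */
    int syscall_dispatch(int num, int a1, int a2, int a3)
    {
        int ncalls = (int)(sizeof syscall_table / sizeof syscall_table[0]);
        if (num < 0 || num >= ncalls)
            return -1;                      /* unknown call number */
        return syscall_table[num](a1, a2, a3);
    }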
The 'switching operating/protection domains' is more fundamental - and
unavoidable. Obviously, it wouldn't have been needed before there were
machines/OS's that operated in multiple modes. I don't have time to research
which was the _first_ machine/OS to need this, but clearly CTSS was an early
one. I happen to have a CTSS manual (two, actually :-), and in CTSS a system
call was:
TSX NAMEI, 4
..
..
NAMEI: TIA =HNAME
where 'NAME' "is the BCD name of a legitimate supervisor entry point", and
the 'TIA' instruction may be "usefully (but inexactly) read as Trap Into A
core" (remember that in CTSS, the OS lived in the A core). (Don't ask me what
HNAME is, or what a TSX instruction does! :-)
So that would have been 1963, or a little earlier. By 1965 (see the 1965 Fall
Joint Computer Conference papers:
https://multicians.org/history.html
for more), MIT had already moved on to the idea of using subroutine calls
that could cross protection domain boundaries for 'system calls', for
Multics. The advantage of doing that is that if the machine has a standard
way of passing arguments to subroutines, you natively/naturally get arguments
to system calls.
Noel
All, just a friendly reminder to use the TUHS mailing list for topics
related to Unix, and to switch over to the COFF mailing list when the
topic drifts away from Unix. I think a couple of the current threads
ought to move over to the COFF list.
Thanks!
Warren
> In order to port VMS to new architectures, DEC/HP/VSI ...
> turned the VAX MACRO assembly language (in which
> some of the VMS operating system was written) into a
> portable implementation language by 'compiling' the
> high-level CISC VAX instructions (and addressing modes)
> into sequences of RISC instructions.
Clem Pease did the same thing to port TMG from IBM 7000-series machines to
the GE 600 series for Multics, circa 1967. Although both architectures had
36-bit words, it was a challenge to adequately emulate IBM's accumulator,
which supported 38-bit sign-magnitude addition, 37-bit twos-complement and
36-bit ones-complement.
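For a flavor of the bookkeeping such an emulation entails, here is a sketch
(mine, not Pease's) of just the 36-bit ones'-complement case in C:

    #include <stdint.h>

    #define MASK36 0xFFFFFFFFFULL          /* low 36 bits */

    /* 36-bit ones'-complement addition on a wider host: a carry out of
       bit 35 wraps around and is added back in at bit 0 (the
       "end-around carry"). */
    uint64_t add36_ones_complement(uint64_t a, uint64_t b)
    {
        uint64_t sum = (a & MASK36) + (b & MASK36);
        if (sum >> 36)
            sum = (sum + 1) & MASK36;
        return sum;
    }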
Doug
I’ve never heard of a Computer Science or Software Engineering program
that included a ‘case study’ component, especially for Software Development & Projects.
MBA programs feature an emphasis on real-world ‘case studies’, to learn from successes & failures,
to give students the possibility of not falling into the same traps.
Creating Unix V6, because it profoundly changed computing & development,
would seem an obvious Case Study for many aspects of Software, Coding and Projects.
There have been many descriptive treatments of Initial Unix,
but I’ve never seen a Case Study,
with explicit lessons drawn, possibly leading to metrics to evaluate Project progress & the coding process.
Developers of Initial Unix arguably were 10x-100x more productive than those of IBM OS/360, a ‘best practice’ development at the time,
so what CSRC did differently is worth close examination.
I’ve not seen examined the role of the ‘capability’ of individual contributors, the collaborative, collegial work environment,
and the ‘context’: a well-funded organisation not dictating deadlines or product specifications for researchers.
USG, then USL, worked under ’normal commercial’ management pressure for deadlines, features and specifications.
The CSRC/1127 group did have an explicit approach & principles for what they did and how they worked,
publishing a number of books & papers on them - nothing they thought or did is secret or unavailable for study.
Unix & Unix tools were deliberately built with explicit principles, such as “Less is More”.
Plan 9 was also built on explicit Design principles.
The two most relevant lessons I draw from Initial Unix are:
- the same as Royce's original “Software Waterfall” paper,
“build one to throw away” [ albeit, many, many iterations of the kernel & other code ]
- Writing Software is part Research, part Creative ‘Art’:
It’s Done when it's Done, invention & creation can’t be timetabled.
For the most reliable, widely used Open Source projects,
the “Done when it’s Done” principle is universally demonstrated.
I’ve never seen a large Open Source project succeed when attempting to use modern “Project Management” techniques.
These Initial Unix lessons, if correct and substantiated, should cause a revolution in the teaching & practice
of Professional Programming, i.e. Programming In the Large, for both CS & SW.
There are inherent contradictions within the currently taught Software Project Management Methodologies:
- Individual capability & ability is irrelevant
The assumption is ‘programmers’ are fungible/ identical units - all equally able to solve any problem.
Clearly incorrect: course evaluations / tests demonstrate at least a 100x variance in ability in every software dimension.
- Team environment, rewards & penalties and corporate context are irrelevant,
Perverse incentives are widely evident, the cause of many, or all, “Death Marches”.
- The “Discovery & Research Phases” of a project are timetabled, an impossibility.
Any suggestions for Case Studies gratefully accepted.
===========
Professions & Professionals must learn over time:
there’s a negative aspect (don’t do this) and positive aspect (do this) for this deliberate learning & improvement.
Negatives are “Don’t Repeat, or Allow, Known Errors, Faults & Failures”
plus in the Time Dimension, “Avoid Delays, Omissions and Inaction”.
Positives are what’s taught in Case Studies in MBA courses:
use techniques & approaches known to work.
Early Unix, from inception to CACM papers, 1969 to 1974, took probably 30 man-years,
and produced a robust, performant and usable system for its design target, “Software Development”.
Compare this directly with Fred Brooks’ IBM OS/360 effort around 5 years earlier, which consumed 3,000-4,000 man-years,
was known for bugs and poor & inconsistent code quality, needed large resources to run and was, politely, non-performant.
This was a commercial O/S, built by a capable, experienced engineering organisation, betting their company on it,
who assigned their very best to the hardware & software projects. It was “Best of Breed” then, possibly also now.
MULTICS had multiple business partners, without the same, single focus or commercial imperative.
I don’t believe it’s comparable to either system.
Initial Unix wasn’t just edit, compile & run, but filesystems, libraries, debugging & profiling tools, language & compiler construction tools, ‘man’ pages, document prep (nroff/troff) and 'a thousand' general tools leveraging shell / pipe.
This led directly to modern toolchains, config, make & build systems, Version Control, packaging systems, and more.
Nothing of note is built without using descendants or derivatives of these early toolchains.
All this is wrapped around by many Standards, necessary for portable systems, even based on the same platform, kernel and base system.
The “Tower of Babel” problem is still significant & insurmountable at times, even in C-to-C & Linux-to-Linux migrations,
but without POSIX/IEEE standards the “Software Tarpit” and "Desert of Despair” would’ve been unsolvable.
The early Unix system proved adaptable and extensible to many other environments, well beyond “Software Development”.
===========
[ waterfall model ]
Managing the development of large software systems: concepts and techniques
W. W. Royce, 1970 [ free access ]
<https://dl.acm.org/doi/10.5555/41765.41801>
STEP3: DO IT TWICE, pg 334
After documentation, the second most important criterion for success revolves around whether the product is totally original.
If the computer program in question is being developed for the first time,
arrange matters so that the version finally delivered to the customer for operational deployment
is actually the second version insofar as critical design/operations areas are concerned.
===========
Plan 9, Design
<https://9p.io/sys/doc/9.html>
The view of the system is built upon three principles.
First, resources are named and accessed like files in a hierarchical file system.
Second, there is a standard protocol, called 9P, for accessing these resources.
Third, the disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.
The unusual properties of Plan 9 stem from the consistent, aggressive application of these principles.
===========
Escaping the software tar pit: model clashes and how to avoid them
Barry Boehm, 1999 [ free access ]
<https://dl.acm.org/doi/abs/10.1145/308769.308775#>
===========
Mythical Man-Month, The: Essays on Software Engineering,
Anniversary Edition, 2nd Edition
Fred Brooks
Chapter 1. The Tar Pit
Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it.
===========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I suppose this question is best directed at Rob or Doug, but just might as well ask at large: Given AT&T's ownership of Teletype and the involvement (AFAIK) of Teletype with other WECo CRT terminals (e.g. Dataspeed 40 models), was there any direct involvement of folks from the Teletype side of things in the R&D on the Jerq/Blit/DMD? Or was this terminal pure Bell Labs?
- Matt G.
Good afternoon, I was wondering if anyone currently on list is in possession of a copy of the UNIX Programmer's Manual for Program Generic Issue 3 from March of 1977. This is the version nestled between Issue 2 which Al Kossow has preserved here:
http://www.bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_1…
and the MERT 0 release provided by I believe Heinz Lycklama here:
https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/
If I might make a request of anyone having such a copy, could I trouble you for at the very least scans of the lines(V), getty(VIII), and init(VIII) pages? I can't 100% confirm the presence of the first page, but the instructions for replacing PG3 pages with MERT0 pages above indicate a page called lines was present in section V of the PG3 manual, and there is a fragment of a lines(5) page in the CB-UNIX 2.3 manual here:
https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/man/man5/lines.5.pdf
In short, lines there appears to be a predecessor to inittab(5) and, if the references in CB and USG PG3 are the same, points to the earliest appearance in the wild of System V-style init in PG3 all the way back in 1977. Granted we don't have earlier CB-UNIX literature to wholly confirm whether this started in PG3 or some pre-'77 issue of CB-UNIX, but I'm quite interested in seeing how these relate.
Thanks for any help!
- Matt G.
> From: Steve Jenkin
> C wasn't the first standardised coding language, FORTRAN & COBOL at
> least were before it
There were a ton; Algol-60 is the most important one I can think of.
(I was thinking that Algol-60 was probably an important precursor to BCPL,
which was the predecessor to C, but Richards' first BCPL paper:
https://dl.acm.org/doi/10.1145/1476793.1476880
"BCPL: A tool for compiler writing and system programming" doesn't call it
out, only CPL. However, CPL does admit its debt to Algol-60: "CPL is to a
large extent based on ALGOL 60".)
Noel
Howdy,
I now have this pictured 3B21D in my facility
http://kev009.com/wp/2024/07/Lucent-5ESS-Rescue/
It will be a moment before I can start work on documentation of the
3B21D and DMERT/UNIX-RTR but wanted to share the news.
Regards,
Kevin
So I'm doing a little bit of the footwork in prep for analyzing manual differences between Research, Program Generic, and PWB/UNIX, and found something interesting.
The LIL Programming Language[1] was briefly available as a user-supported section 6 command on Fifth Edition (1974) UNIX, appearing as a page but not even making it into the TOC. It was gone as quickly as it appeared in the Research stream, not surviving into V6.
However, with Al Kossow's provided Program Generic Issue 2 (1976) manual[2] as well as notes in the MERT Issue 0 (1977) manual [3], it appears that LIL was quite supported in the USG Program Generic line, making it into section 1 of Issue 2 and staying there through to Issue 3. lc(1) happens to be one of the pages excised in the transformation from PG Issue 3 to MERT Issue 0.
This had me curious, so I went looking around the extant V5 sources and couldn't find anything resembling the LIL compiler. Does anyone know if this has survived in any other fashion? Additionally, does anyone have any recollection of whether LIL was in significant use at USG-supported UNIX sites, or whether it simply made it into section 1 and spread around because it was still in use in Research at the time USG sampled the userland.
Finally, one little tidbit from P.J. Plauger's paper[1] stuck out to me: "...the resulting language was used to help write a small PDP-11/10 operating system at Bell Labs." Does anyone have any information about this operating system, whether it was a LIL experiment or something purpose-driven and used in its own right after creation?
[1] - http://www.ultimate.com/phil/lil/
[2] - http://bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_19760…
[3] - https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Pgs%2001-…'s%20Manual%20for%20MERT.pdf
> Peter J Denning in 2008 wrote about reforming CACM in 1982/83. [ extract
at end ]
> <https://cacm.acm.org/opinion/dja-vu-all-over-again/>
That "accomplishment" drove me away from the ACM. I hope the following
explanation does not set off yet another long tangential discussion.
The CACM had been the prime benefit of ACM membership. It carried generally
accessible technical content across the whole spectrum of computing. The
"Journal for all Members" (JAM) reform resulted in such content being
thinly spread over several journals. To get the perspective that CACM had
offered, one would have had to subscribe to and winnow a mountain of
specialist literature--assuming the editors of these journals would accept
some ACM-style articles.
I had been an active member of ACM, having served as associate editor of
three journals, member of the publications planning committee, national
lecturer, and Turing-Award chairman. When the JAM reform cut off my window
into the field at large, I quit the whole organization.
With the advent of WWW, the ACM Digital Library overcame the need to
subscribe to multiple journals for wide coverage. Fortunately I had
institutional access to that. I rejoined ACM only after its decision to
allow free access to all 20th century content in the DL. This
public-spirited action more than atoned for the damage of the JAM reform
and warranted my support.
I have been happy to find that the current CACM carries one important
technical article in each issue and a couple of interesting columnists
among the generally insipid JAM content. And I am pleased by the news that
ACM will soon also give open access to most 21st-century content.
Doug
Found these two.
Anyone seen others?
I bought this book soon after it was published.
It’s a detailed study of some major IT projects, but doesn’t draw “lessons” & rules like I’d expect of an MBA Case Study.
Why information systems fail: a case study approach
February 1993
<https://dl.acm.org/doi/book/10.5555/174553>
> On 5 Jul 2024, at 09:31, Lawrence Stewart <stewart(a)serissa.com> wrote:
>
> A quick search also shows a number of software engineering case study books.
================
Case Study Research in Software Engineering: Guidelines and Examples
April 2012
<https://dl.acm.org/doi/book/10.5555/2361717>
Based on their own experiences of in-depth case studies of software projects in international corporations,
in this book the authors present detailed practical guidelines on
the preparation, conduct, design and reporting of case studies of software engineering.
This is the first software engineering specific book on the case study research method.
================
Case studies for software engineers
May 2006
<https://dl.acm.org/doi/10.1145/1134285.1134497>
The topic of this full-day tutorial was the correct use and interpretation of case studies as an empirical research method.
Using an equal blend of lecture and discussion, it gave attendees
a foundation for conducting, reviewing, and reading case studies.
There were lessons for software engineers as researchers who
conduct and report case studies, reviewers who evaluate papers,
and practitioners who are attempting to apply results from papers.
The main resource for the course was the book
Case Study Research: Design and Methods by Robert K. Yin.
This text was supplemented with positive and negative examples from the literature.
================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Links for those who’ve not read these articles. Open access, downloadable PDF’s.
================
Peter J Denning in 2008 wrote about reforming CACM in 1982/83. [ extract at end ]
<https://cacm.acm.org/opinion/dja-vu-all-over-again/>
The space shuttle primary computer system
Sept 1984
<https://dl.acm.org/doi/10.1145/358234.358246>
The TWA reservation system
July 1984
<https://dl.acm.org/doi/abs/10.1145/358105.358192>
================
After his term as Editor-in-Chief, in 1993 Denning established "The Center for the New Engineer" (CNE)
<http://www.denninginstitute.com/cne/cne-aug93.pdf>
Great Principles of Computing, paper
<https://denninginstitute.com/pjd/PUBS/ENC/gp08.pdf>
Website
<https://denninginstitute.com/pjd/GP/GP-site/welcome.html>
================
Denning, 2008
Another major success was the case studies conducted by Alfred Spector and David Gifford of MIT,
who visited project managers and engineers at major companies and interviewed them about their projects,
producing no-holds-barred pieces.
This section was wildly popular among the readers.
Unfortunately, the labor-intensive demands of the post got the best of them after three years, and we were not able to replace them.
Also by that time, companies were getting more circumspect about discussing failures and lessons learned in public forums.
================
> On 5 Jul 2024, at 09:31, Lawrence Stewart <stewart(a)serissa.com> wrote:
>
> Alright, apologies for being late.
>
> Back in 1984, David Gifford and Al Spector started a series of case studies for CACM.
> I think only two were published, on the TWA reservation system and on the Space Shuttle primary computer.
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Steve Jenkin:
I've never heard of a Computer Science or Software Engineering program
that included a `case study' component, especially for Software
Development & Projects.
[...]
Creating Unix V6, because it profoundly changed computing & development,
would seem an obvious Case Study for many aspects of Software, Coding
and Projects.
====
How about the course for which John Lions wrote his famous
exegesis of the 6/e kernel?
Norman Wilson
Toronto ON
David Rosenthal reflects on his involvement with the development
of the X Window System:
> Although most of my time was spent developing NeWS, I rapidly
> ported X version 10 to the Sun/1, likely the second port to
> non-DEC hardware. It worked, but I had to kludge several areas
> that depended on DEC-specific hardware. The worst was the
> completely DEC-specific keyboard support.
>
> Because it was clear that a major redesign of X was needed to
> make it portable and in particular to make it work well on Sun
> hardware, Gosling and I worked with the teams at DEC SRC and WRL
> on the design of X version 11. Gosling provided significant
> input on the imaging model, and I designed the keyboard
> support. As the implementation evolved I maintained the Sun port
> and did a lot of testing and bug fixing. All of which led to my
> trip to Boston to pull all-nighters at MIT finalizing the
> release.
>
> My involvement continued after the first release. I was the
> editor and main author of the X Inter-Client Communications
> Conventions Manual (ICCCM) that forms Part III of Robert
> Scheifler and Jim Gettys' X Window System.
-- https://blog.dshr.org/2024/07/x-window-system-at-40.html
Alexis.
https://www.geekwire.com/2024/seattles-living-computers-museum-logs-off-for…
These folks hosted the UNIX 50th Celebration and had a physical PDP-7 that
was used to bring up UNIX V0 (after first getting it running on SIMH). The
latter was not easy because the original PDP-7s (like the one Ken had access
to) did not have disk storage. BTL had paid DEC's Custom Special Systems
(CSS) to splice in a Burroughs disk that DEC was selling for the PDP-15 and
the PDP-9. The work started with reverse engineering that code to build a
simulation of that disk into SIMH, so we could ensure that UNIX ran—finally,
that hardware was modeled with a custom microprocessor-based board carrying
an SD card, with a functional replica of the PDP-7 I/O interface on one
side obeying the device registers and operations that UNIX expected to see.
The LCM-L folks were incredibly gracious and generous. I am so sad to see
their collection go away. In particular, I hope the PDP-7s and the CDC-6500
find new homes.
Clem
Hi
Out of curiosity, what would be considered the most direct descendant of Unix available today? Yes, there are many descendants, but they've all gone down their own evolutionary paths.
Is it FreeBSD or NetBSD? Something else? I don't think it would be Minix or Linux because I remember when they came along, and it was well after various Unix versions were around.
Does such a thing even exist anymore? I remember using AT&T Unix System V and various BSD variants back in college in the 1980's. System V was the "new thing" back then but was eventually sold and seems to have faded. Maybe it is only available commercially, but it does not seem as prominent as it once was.
Any thoughts?
Thanks, Andrew Lynch
> The lack of a monospaced font is, I suspect, due either to
> physical limitations of the C/A/T phototypesetter[1] or fiscal
> limitations--no budget in that department to buy photographic
> plates for Courier.
Since the C/A/T held only four fonts, there was no room for
Courier. But when we moved beyond that typesetter, inertia
kept the old ways. Finally, in v9, I introduced the fixed-width
"literal font", L, in -man and said goodbye to boldface in
synopses. By then, though, Research Unix was merely a
local branch of the Unix evolutionary tree, so the literal-font
gene never spread.
Doug
All, I've decided to bring the ANSI C/POSIX thread to a close. While
initially the thread was informative, it's recently become host to
comments which are inappropriate and certainly well out of scope for
a mailing list about UNIX and its history.
If someone wants to resurrect a thread about standards, feel free to
use the COFF list. But please, keep the conversation on-topic.
Thanks, Warren
I have a directory, t:
ronsexcllentmbp:t rminnich$ ls -li
total 0
23801442 -rw-r--r-- 1 rminnich wheel 0 Jun 26 20:21 a
23801443 -rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 b
23801443 -rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 c
note that b and c are the same inode.
let's make a cpio.
ronsexcllentmbp:t rminnich$ cpio -o >../t.cpio
a
b
c
^D
1 block
what's in it?
ronsexcllentmbp:t rminnich$ cpio -ivt < ../t.cpio
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:21 a
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 b
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 c link to b
"c link to b"? wtf? Who thought that was a good idea? because ...
ronsexcllentmbp:t rminnich$ touch 'c link to b'
ronsexcllentmbp:t rminnich$ ls -l
total 0
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:21 a
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 b
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 c
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:22 c link to b
and
ronsexcllentmbp:t rminnich$ cpio -o >../t.cpio
a
b
c
c link to b
^D
ronsexcllentmbp:t rminnich$ cpio -ivt < ../t.cpio
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:21 a
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 b
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 c link to b
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:22 c link to b
so ... it looks like a file is there twice, because somebody thought it was
a good idea to confuse a file name and file metadata. And, anyway, it's
just as accurate to have it say
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:21 a
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 b link to c
-rw-r--r-- 2 rminnich wheel 0 Jun 26 20:21 c link to b
-rw-r--r-- 1 rminnich wheel 0 Jun 26 20:22 c link to b
Right? :-)
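For context, the annotation comes from the archiver, not the archive: a
cpio-style writer typically remembers each (st_dev, st_ino) pair it has
emitted and tags any later name that resolves to the same pair. Roughly, as
an illustrative C sketch (not any particular cpio's source):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Remember every (device, inode) pair already written out; a repeat
       means "another name for a file we already archived".  Whichever
       name happened to be seen first becomes the "link to" target --
       which is why b-vs-c above is arbitrary. */
    struct seen { dev_t dev; ino_t ino; char name[256]; };
    static struct seen tab[1024];
    static int nseen;

    /* Returns the earlier-seen name this path is a link to, or NULL. */
    const char *linked_to(const char *path)
    {
        struct stat st;
        if (stat(path, &st) < 0 || st.st_nlink < 2)
            return NULL;
        for (int i = 0; i < nseen; i++)
            if (tab[i].dev == st.st_dev && tab[i].ino == st.st_ino)
                return tab[i].name;
        if (nseen < (int)(sizeof tab / sizeof tab[0])) {
            tab[nseen].dev = st.st_dev;
            tab[nseen].ino = st.st_ino;
            snprintf(tab[nseen].name, sizeof tab[nseen].name, "%s", path);
            nseen++;
        }
        return NULL;
    }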
From the same people who brought you this:
ronsexcllentmbp:t rminnich$ bc
>>>
Somebody needs to get the osx folks a unix manual set :-)
> From: Aron <aki(a)insinga.com>
> Now if only his family would take those wishes of his into account and
> tell the lawyers to finish the job.
I suspect that was his real mistake; he trusted that his family would do what
he wanted (perhaps so that instead of putting the final bow on this, he could
pay attention to something else that was more important to him) after he was
gone - and they decided not to.
My suspicion is that there is something else that is more important to his
sister (probably political), and she decided that the pittance she'd get from
flushing the LCM would be better put towards her project(s).
I've just been re-reading Thucydides' extraordinarily outstanding meditation
on the revolution on Corcyra (which is really a meditation on the death of
the Greek poleis - and is a pre-epitaph on the death of many democracies
since), and if my supposition is correct, it applies to this too.
I wonder what'll happen to all the less-valuable stuff that was given to the
LCM? A PDP-10 will fetch 10K's of $, but a lot of the rest is worth pennies.
I hope they don't scrap it for pennies, or throw it in the trash. I'm sure
that someone who would ignore her brother's wishes about important pieces
of history that meant a lot to him would have no qualms about doing either.
Noel
Good morning, I was wondering if anyone has the scoop on the rationale behind the selection of standards bodies for the publication of UNIX and UNIX-adjacent standards. C was published via the ANSI route as X3.159, whereas POSIX was instead published by the IEEE route as 1003.1. Was there ever any consideration of C through IEEE or POSIX through ANSI instead? Is there an appreciable difference suggested by the difference in publishers? In any case, both saw subsequent adoption by ISO/IEC, so the track to an international standard seems to lead to the same organizations.
- Matt G.
This announcement just arrived on the ACM Bulletins list:
>> ...
>> Andrew S. Tanenbaum, Vrije Universiteit, receives the ACM Software
>> System Award (http://awards.acm.org/software-system) for MINIX, which
>> influenced the teaching of Operating Systems principles to multiple
>> generations of students and contributed to the design of widely used
>> operating systems, including Linux.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Or like rms claimed Linux as part of GNU, he claimed POSIX and some people
believed him. -jsq
On Thu, Jun 27, 2024, 9:38 AM Warner Losh <imp(a)bsdimp.com> wrote:
>
>
> On Thu, Jun 27, 2024, 6:15 AM Dan Cross <crossd(a)gmail.com> wrote:
>
>> On Thu, Jun 27, 2024 at 8:12 AM John S Quarterman <jsqmobile(a)gmail.com>
>> wrote:
>> > I don't recall rms being involved, certainly not in the name. -jsq
>>
>> quip: Like childbirth, perhaps the unpleasant memory was simply blocked
>> out?
>>
>
> Is it possible that RMS suggested it as maybe an obvious quip to a
> committee member who later credited him with that since that conversation
> happened before that person heard it from others on the committee? Tricky
> to know from this distance in time.
>
> Warner
>
> - Dan C.
>>
>
> <clemc(a)ccc.com>
> In particular, I hope the PDP-7s and the CDC-6500 find new homes.
Also, their collection of PDP-10's, which is absolutely unrivalled; they
had a KA10 and a KI10; also the MIT-MC KL10.
Also a Multics front panel; AFAIK, the only one in the world other than the
CHM's. Any idea where it's all being sold? I might enquire about the Multics
panel.
Noel
FYI, Tom Van Vleck just passed this on the Multicians list; DMR's
recollections of the end of Multics at BTL.
I can't resist asking about the nugget buried in here about Ken
writing a small kernel for the 645. Is that in the archives anywhere?
- Dan C.
---------- Forwarded message ---------
From: Tom Van Vleck via groups.io <thvv=multicians.org(a)groups.io>
Date: Mon, Jun 24, 2024 at 10:38 AM
Subject: [multicians] Dennis Ritchie's 1993 Usenet posting "BTL Leaves Multics"
To: <multicians(a)groups.io>
in "alt.os.multics"
about Unix, CTSS, Multics, BTL, qed, and mail
https://groups.google.com/g/alt.os.multics/c/1iHfrDJkyyE
Comments by DMR, me, RMF, PAG, PAK, BSG, PWB, JJL, AE, MAP, EHR, DMW
Covers many issues.
(I feel like we should save this thread somehow; hard to trust Google any more.
The posting ends with a heading of a response by JWG but no content.)
All, recently I saw on Bruce Schneier "Cryptogram" blog that he has had
to change the moderation policy due to toxic comments:
https://www.schneier.com/blog/archives/2024/06/new-blog-moderation-policy.h…
So I want to take this opportunity to thank you all for your civility
and respect for others on the TUHS and COFF lists. The recent systemd
and make discussions have highlighted significant differences between
people's experiences and opinions. Nonetheless, apart from a few pointed
comments, the discussions have been polite and informative.
These lists have been in use for decades now and, thankfully, I've
only had to unsubscribe a handful of people for offensive behaviour.
That's a testament to the calibre of people who are on the lists.
Cheers and thank you again,
Warren
P.S. I'm a happy Devuan (non-systemd) user for many years now.
my personal frustration with autotools was trying to port code to plan9.
i wish autotools had an intermediate file format which described the package's requirements, that way i could have written my own backend to create my config.h and makefiles (or mkfiles)
in the end i wrote my own tool which crudely parses a directory of C or F77 sourcecode and uses heuristics to create a config.h and a plan9 mkfile, it was named mkmk(1)
it was almost _never_ completely correct, but usually got close enough that the files only needed a little manual hacking.
it also took great pains to generate mkfiles that looked hand written; if you are going to auto generate files, make them look nice.
-Steve
> This is The Way if you really care about portability. Autoconf,
> once you get your head around what, why, and when it was created,
> makes for nice Makefiles and projects that are easy to include in
> the 100 Linux distributions with their own take on packaging the
> world.
This is outright claptrap and nonsense. In the latter half of the
90s I was responsible for writing installers and generating
platform-native packages for about a dozen different commercial
UNIX platforms (AIX, Solaris, Irix, HP/UX, OSF, BSD/OS, ...). Each
of these package systems was as different as could be from the
others. (HP/UX didn't even have one.)
That entire process was driven by not very many lines of make
recipes, with the assistance of some awk glue that read a template
file from which it generated the native packages. And these were
not trivial software distributions. We were shipping complex IMAP,
X.400 and X.500 servers, along with a couple of MTAs. Our installers
didn't just dump the files onto the system and point you at a README;
we coded a lot of the site setup into the installers, so the end
user mostly just had to edit a single config file to finish up.
--lyndon
FYI, for people like me that care about 80s 68K Unix systems:
There is a pretty serious multi-purpose preservation effort that started a few weeks ago
around Plexus systems as a result of a series of YouTube videos
https://youtu.be/iltZYXg5hZw
https://github.com/misterblack1/plexus-p20/
Every so often I want to compare files on remote machines, but all I can
do is to fetch them first (usually into /tmp); I'd like to do something
like:
rdiff host1:file1 host2:file2
Breathes there such a beast? I see that Penguin/OS has already taken
"rdiff" which doesn't seem to do what I want.
Think of it as an extension to the Unix philosophy of "Everything looks
like a file"...
-- Dave
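A minimal approximation with stock tools, assuming ssh access to both hosts
and a shell with process substitution, would be:

    diff <(ssh host1 cat file1) <(ssh host2 cat file2)

though this still fetches both files; it merely spares you the detour
through /tmp.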
> From: Warner Losh
> 2.11BSD used a mode between kernel and user for the TCP stack to get
> more effective address space...
Is there a document for 2.11 which explains in detail why they did that? I
suspect it's actually a little more complicated than just "more address
space".
The thing is that PDP-11 Unix had been using overlays in the kernel for quite
a while to provide more address space. I forget where they first came in (I
suspect there were a number of local hacks, before everyone started using the
BSD approach), but by 2.9 BSD they were a standard part of the system. (See:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/src/sys/conf/Ovmak…
for some clues about how this works. There is unfortunately no documentation
that I know of which explains clearly how it works; if anyone knows of any,
can you please let me know? Otherwise you'll have to read the sources.)
I can think of two possible reasons they started using supervisor mode: i)
There were a limited number of the 2.9-type overlays, and they were not
large; trying to support all the networking code with the existing overlay
system may have been too hard. ii) I think this one is unlikely, but I'll
list it as a possibility. Switching overlays took a certain amount of
overhead (since mapping registers had to be re-loaded); if all the networking
code ran in supervisor mode, the supervisor mode mapping registers could be
loaded with the right thing and just left.
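To illustrate the per-call cost, a conceptual C rendering of the overlaid
call path might look like this (names and register choice hypothetical; the
real 2.9BSD code is in assembler):

    /* The overlay window: one kernel I-space mapping register (PAR 5
       here, purely for illustration) re-pointed on each cross-overlay
       call. */
    #define KISA5 (*(volatile unsigned *)0172352)

    static unsigned curov;              /* overlay currently mapped in */
    extern unsigned ov_base[];          /* physical base of each overlay */

    /* Entry stubs for overlaid routines call this before jumping in;
       the register reload is the per-call overhead in question. */
    void ovchange(unsigned ov)
    {
        if (ov != curov) {
            KISA5 = ov_base[ov];
            curov = ov;
        }
    }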
Noel
> From: Clem Cole
> Frankly my major complaint with much of the modern world is that when we
> ignore the past
"There are two kinds of fools. One says, 'This is old, therefore it is good';
the other says, 'This is new, therefore it is better.'" -- Dean Inge, quoted
by John Brunner in "The Shockwave Rider".
Noel
> this month marks the fiftieth anniversary of the release of what
> would become a seminal, and is arguably the single most important,
> piece of social software ever created.
I'm flattered, but must point out that diff was just one of
a sequence of more capable and robust versions of
proof(1), which Mike Lesk contributed to Unix v3. It, in
turn, copied a program written by Steve Johnson before
Unix and general consciousness of software tools. Credit
must also go to several people who studied and created
algorithms for the "longest common subsequence"
problem: Harold Stone (who invented the diff algorithm
at a blackboard during a one-day visit to Bell Labs), Dan
Hirschberg, Tom Szymanski, Al Aho, and Jeff Ullman.
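For readers who haven't met it, the core problem those names attacked fits
in a few lines; this is the textbook dynamic-programming formulation of LCS
length in C (illustrative only -- the algorithm diff actually shipped with
is cleverer about lines that occur rarely):

    #include <string.h>

    /* Length of the longest common subsequence of a and b: the classic
       O(m*n) dynamic program, kept to two rolling rows.  Assumes
       strlen(b) <= MAXN for the sake of the sketch; diff applies the
       same idea to lines rather than characters. */
    enum { MAXN = 4096 };

    int lcs_length(const char *a, const char *b)
    {
        static int row[2][MAXN + 1];
        int m = strlen(a), n = strlen(b);
        memset(row, 0, sizeof row);
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++) {
                if (a[i - 1] == b[j - 1])
                    row[i & 1][j] = row[(i - 1) & 1][j - 1] + 1;
                else {
                    int up = row[(i - 1) & 1][j];
                    int left = row[i & 1][j - 1];
                    row[i & 1][j] = up > left ? up : left;
                }
            }
        return row[m & 1][n];
    }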
For a legal case in which I served as an expert witness,
I found several examples of diff-type programs
developed in the late 1960s specifically for preparing
critical editions of ancient documents. However, Steve
Johnson's unpublished program from the same era
appears to be the first that was inspired as a general
tool, and thus as "social software".
Doug
Good evening, while I'm still waiting on the full uploads to progress (it's like there's a rule any >100MB upload to archive.org for me has to fail like 5 times before it'll finally go...) I decided to scrape out the UNIX RTR manual from a recent trove of 5ESS materials I received and tossed it up in a separate upload:
https://archive.org/details/5ess-switch-unix-rtr-operating-system-reference…
This time around I've got Issue 10 from December 2001. The last issue of this particular manual I found on another 5ESS disc is Issue 7 from 1998 which I shared previously (https://ia601200.us.archive.org/view_archive.php?archive=%2F12%2Fitems%2F5e…)
The manual is in "DynaText" format on the CD in question, unlike Issue 7 which was already a PDF on its respective CD. I used print-to-PDF to generate the above linked copy. Given that the CD itself is from 2007, this may point to UNIX RTR having no significant user-visible changes from 2001 to 2007 that would've necessitated manual revisions.
In any case, I intend to upload bin/cue images of all 7 of the CDs I've received which span from 1999 to 2007, and mostly concern the 5ESS-2000 switch from the administrative and maintenance points of view. Once I get archive.org to choke these files down I also intend to go back around to the discs I've already archived and reupload them as proper bin/cue rips. I was in a hurry the last time around and simply zipped the contents from the discs, but aside from just being good archive practice, I think bin/cue is necessary for the other discs as they seem to have control information in the disc header that is required by the interactive documentation viewers therein.
All that to say, the first pass will result in bin/cues which aren't easily readable through archive.org's interface, but I intend to also swing back around on these new discs and provide zips of the contents as well to ensure the archives are both correct (bin/cue) and easily navigable (zip).
As always, if you have any such documentation or leads on where any may be awaiting archival, I'm happy to take on the work!
- Matt G.
Doug McIlroy kindly sent me contact information for John Chambers,
co-author of the cited book about the S system. I have just heard
back from John, who offered a link to his summary paper from the
2020 HOPL conference proceedings
S, R, and data science
https://doi.org/10.1145/3386334
and reported that S was licensed to AT&T Unix customers only in binary
form, and that the original source code may no longer exist.
That is a definitive answer, even if not the one that I was hoping to
find.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Eloquently put. Amen!
doug
> Unix brought automation to the forefront of possibilities. Using Unix,
> anyone could do it - even that kid in Jurassic Park. Today, everything
> is GUI and nothing can be automated easily or, most of the time,
> not at all.
> Unix is an ever shrinking oasis in a desert of non-automation and
> complexity.
> It is the loss of automation possibilities that frustrates me the most
Hi folks,
Reiser and London's paper documenting their preparation of UNIX/32V, a
port of Seventh Edition Unix to the VAX-11/780, is an important
milestone in Unix development--as much, I think, for its frank critique of
C as "portable assembly" as for the status of the system documented: the
last common ancestor of the BSD and System V branches of development.
Because the only version I've ever seen of this paper is a scan of,
possibly, a photocopy several generations removed from the original, I
thought I'd throw an OCR tool at it and see about reconstructing it, not
just for posterity but to put the groff implementation of mm to the
test. So even if someone has a beautiful scan of this document
elsewhere, this exercise remains worthwhile to me for what it has
shown me about Documenter's Workbench mm and groff's mostly compatible
reimplementation thereof.
Please find attached my first draft of the reconstruction as an mm
document as well as a PDF rendered by bleeding edge groff.
I did not attempt to fix _any_ typos, solecisms, or non-idiomatic *roff
usage (like the employment of hyphens as arithmetic signs or operators)
in this document. I may have introduced errors, however, due to human
fallibility or incorrect inferences about what lay beneath scanning
artifacts. Hence its draft status. I welcome review.
Assuming this reconstruction survives peer scrutiny, I aim to put it up
on GitHub as I did Kernighan & Cherry's "Typesetting Mathematics"
paper.[1]
For the casual reader, I extract my documentary annotations below.
For groff list subscribers, I will add, because people are accustomed to
me venturing radical suggestions for reforms of macro packages, I
suggest that we can get rid of groff mm's "MOVE" and "PGFORM"
extensions. They're buggy (as the man page has long conceded), and I
don't think anyone ever mastered them, not even their author. I rewrote
"0.MT", essential to rendering of this document, without requiring them
at all. I _tried_ to use them, but "MOVE" in particular introduced
baffling errors in vertical spacing. When I threw it aside to attack
head-on the layout problems facing me, things got easier. Further,
simple caching and restoration of `.i` and `.l` register values (when
multiple changes were being made to them within a macro) obviated
`PGFORM`. I'm not sure that it is tractable to idiot-proof
manipulations of basic layout parameters like these, as these macros
seem to have tried to do. If a document author wants to seize control
of page layout from a full-service macro package and reach deep into the
guts of the formatter, they should glove up and put things back where
they found them. My opinion.
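As a concrete illustration, the caching idiom described above amounts to
something like this (register names invented; the trailing u on the
restores prevents re-scaling):

    .\" save the current indentation and line length
    .nr saved-in \n(.i
    .nr saved-ll \n(.l
    .\" ...temporarily take over the layout...
    .in 0
    .ll 6.5i
    .\" ...and put things back where we found them
    .in \n[saved-in]u
    .ll \n[saved-ll]u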
.\" London & Reiser's UNIX/32V porting paper
.\"
.\" Reconstruction in groff mm (but DWB 3.3 mm compatible)
.\" from scanned/OCRed text by G. Branden Robinson, June 2024
.\"
.\" The original scan shows no evidence of superscript usage, except on
.\" the cover sheet where "TM" superscripts "UNIX".
.\"
.\" Some differences may arise due to changes in the mm macro package
.\" itself from its PWB incarnation (ca. 1978) and DWB 3.3 (July 1992).
.\" Thanks to Dan Plassche for the history.
.\" https://www.tuhs.org/pipermail/tuhs/2022-March/025545.html
.\"
.\" The groff reimplementation of mm was undertaken mostly from
.\" 1991-1999 (by Juergen Haegg), based on the DWB documentation. It
.\" added features but also parameterized many aspects of package
.\" behavior, for example to facilitate easy localization. Later,
.\" Werner Lemberg and G. Branden Robinson contributed enhancements, bug
.\" fixes, and improvements to the groff_mm(7) man page.
.\"
.\" I anticipate adding further parameters to groff mm to better
.\" emulate the old version of mm used by this paper. (For example, the
.\" format of the caption applied to the reference page differs between
.\" PWB mm and DWB 3.3.) Where this document exercises such extensions,
.\" they should be prefixed with a `do` request so that AT&T troff will
.\" ignore them.
.\" Override: "By default, ... bold stand-alone headings are printed
.\" in a size one point smaller than the body."
.\" XXX: The cover "page" (more like a header block) is a mess when
.\" typeset with groff mm, and outright horrific in nroff mode. GBR has
.\" fixes for these pending for push to GNU Savannah's Git repository.
.\"
.\" XXX: Original scan capitalizes "Subject:"; DWB 3.3 renders it in
.\" full lowercase.
.\"
.\" XXX: Original scan bears a "TM:" heading for the technical
.\" memorandum number(s). DWB 3.3 lacks this.
.\"
.\" Memorandum captions may have changed from PWB to DWB 3.3 mm. groff
.\" mm has changed in Git (June 2024) to use the captions documented in
.\" the DWB 3.3 manual. We override the default for authenticity.
.\" XXX: Original scan sets reference marks as a typewriter might, at
.\" normal size on the baseline between square brackets. DWB 3.3
.\" converts them to superscripts but keeps the brackets(!). groff mm
.\" should add a "Rfstyle" register to control this.
.\" 0 = auto (nroff/troff); 1 = bracket; 2 = superscript; 3 = both. (?)
\" straight quotes in original
.ns \" XXX: Hack to keep `VL` from adding pre-list vertical space.
\" recte: *(\-\-p+i)
\" bad ellipsis spacing in original
\" - missing; error in text or scanner fubar?
\" recte: \-1
\" sic
.\" Either `AL` worked differently in 1978 mm, or didn't exist, or
.\" somebody wanted this list _just so_.
.\"AL "" 5
.\" XXX: Scan has signatures set farther to the right, not centered as
.\" DWB 3.3 mm sets them. groff mm follows DWB here.
.\"
.\" XXX: PWB and DWB 3.3 put the signature names in bold; groff mm sets
.\" them at normal weight. Bug.
.\"
.\" XXX: Scan has a couple of vees between the signature line and the
.\" flush left secretarial annotation. groff mm sets the annotation on
.\" the same line as the last author but also puts its information in
.\" the cover page header as DWB 3.3 does, described next. DWB 3.3: (1)
.\" omits the secretarial annotation altogether, putting it up in the
.\" cover page header under the authors' names; (2) does not use author
.\" initials (in the cover header) for this memorandum type; (3) puts
.\" the department number after "Org." on the line under the author
.\" name; (4) puts the abbreviated AT&T site name below that. Should we
.\" consider a `Sgstyle` register for groff mm?
.\"
.\" XXX: groff mm organizes the department and site name differently
.\" from DWB 3.3 in the cover head, and I don't see any reason for it
.\" to. Fix this.
.\" XXX: Scan only breaks between notations; DWB 3.3 and groff put 1v
.\" between them. Should we consider an `Nss` register for groff mm?
.\" XXX: Scan has references caption set flush left, in mixed case and
.\" bold (just like `HU`). DWB 3.3 and groff center it and set it in
.\" full caps in italics (at normal weight). If there were a way to
.\" dump the accumulated reference list independently of rendering the
.\" caption, that would give the author much more flexibility.
.\"
.\" XXX: The numbered reference list does not look like one produced
.\" with `RL` nor with `AL`. The numeric tag is left-aligned within the
.\" paragraph indentation. groff mm aligns it to the right.
.\"
.\" DWB 3.3 and Heirloom mm don't seem to honor `.RP 2` as the DWB
.\" manual documents. They start the table immediately after the
.\" reference list and go haywire boxing the table. Bug.
Regards,
Branden
[1] https://github.com/g-branden-robinson/retypesetting-mathematics
All,
There's an interesting dive into PID 0 linked to from osnews:
https://blog.dave.tf/post/linux-pid0/
In the article, the author delves into the history of the scheduler a
bit - going back to Unix v4 (his assembly language skills don't go to
PDP variants).
I like the article for two reasons - 1) its clarity and 2) it points out
the self-reinforcing nature of our search ecosystem.
I'm left with the question - how did scheduling work in v0-v4? - and the
observation that search really sucks these days.
Later,
Will
The book ``S: An Interactive Environment for Data Analysis and
Graphics'' (Wadsworth, 1984), by Richard A. Becker and John
M. Chambers, and an earlier Bell Labs report in 1981, introduced the
S statistical software system that later evolved into the commercial
S-Plus system (now defunct, I think), and the vibrant and active R
system (https://cran.r-project.org/) that we use at Utah in our
statistics courses.
Almost 21,000 open-source packages for R are available, and they
appear to be the dominant form of statistical software package
publication, based on extensive evidence in our bibliography archives
that completely cover numerous journals in probability and statistics.
I'm interested in looking into the early S source code, if possible,
to see how some statistical software that I am freshly implementing
for high-precision computation was handled at Bell Labs more than four
decades ago.
Does anyone on this list know whether the original S system was ever
distributed in source code to commercial sites, and academic sites,
that licensed Unix from Bell Labs in the 1980s? Does that code still
exist (and is openly accessible), or has it been lost?
As with the B, C, D, and R programming languages, it is rather hard
for Web search engines to find things that are known only by a single
letter of the alphabet.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Well, I really dove into using vi by way of nvi this past week and I
have to say... the water's fine. It turns out that vi is a much simpler
editor to understand and use (but no less powerful) than its
great-grandchild, vim. To be fair, vim is an interesting editor and having
used it off and on since the mid '90s, it's very familiar... but its
powers? difficult to attain.
vim stands as an excellent example of just how far you can take a
product that works, keeping its core, but expanding it in all
directions. No matter how much I tried to grasp its essence, it eluded
me. The online help files seemed inscrutable to me, mixing environment
settings, ex commands, ex mode commands, vi motions, and insert mode
commands in ways that I found quite confusing. I know that I'm
probably among a select few vim users that love it without really having
a clue as to how it works. My best resource has been the web and search,
but of late I've been wanting more. That's what drove me on this quest
to really dig in to how things used to work and nvi is the best
surrogate of the old ways that I could find (well, excluding heirloom
vi, the traditional vi, which I've confirmed works pretty much the same
way as nvi, with lisp support and without a few nice-to-haves nvi has).
Anyway, here's something I worked out, after much travail - vi appears
to be all about modes, movement, counts, operators, registers, and
screens (which I found very weird at first, but simple in retrospect)...
with these fundamentals down, it seems like you can do anything and I
mean anything... and all of the other functions are just bolted on for
the purpose of making their specific tasks work.
Getting this out of the existing documentation was a mess. Thankfully
the nvi docs (based on the 4.4 docs) are much slimmer and better
organized. Even so, they make assumptions of the reader that I don't
seem to fit. Take motions as probably the most glaring example - all of
the docs I've ever seen organize these by logical units of text (words,
paras, etc.); personally, and apparently persistently, I think of motion
as directed, so it took me a lot of experimentation, head scratching,
and writing things out several times in several different ways to
realize I could represent the idea on a single notecard as (some
commands appear in multiple lines):
Leftward motions - [[, {, (, 0, ^|_, B, b, h|^H
Rightward motions - l|SP, e, E, w, W, $, ), }, ]]
Upward motions - 1G, ^B, H, ^U, -, k | ^P
Downward motions - G, ^F, L, ^D, ^M | +, j | ^J | ^N
Absolute - |, G
Relative - %, H, M, L
Marks - m, ', `, '', ``
Keeping in mind that movements left-to-right are - section, para,
sentence, line, text, word and endword (big, and small), and letter. And
up and down are - file, screen, in screen (HML), half-screen,
chars-in-line, and line. For me, this inversion from units of motion to
direction of motion put forty some-odd commands in much closer
reach. Looking back at the vim documentation, I see how its sheer volume
and the way it is organized got in the way of my seeing the forest.
Thankfully, in nvi, there are two incredibly useful ex commands to help
- exu[sage] and viu[sage]. I simply printed these out and worked with
them making the experimental assumption that they would serve as a
baseline that represented the full capabilities of vi... and sure
enough, after working and working with them, I am pretty confident they
are sufficient for any editing task. Wow, who knew? I loved vi, but
now, I'm starting to really appreciate its simplicity?! I can't believe
those words are coming out of my mouth. I never thought of it as
simple... those movement commands were far too numerous as I understood
them.
Are there things I miss from vim? Sure, I miss command line completion
in ex mode, I want my help text to appear in a window where I can
search, I would like better window control. But, I think I'll stick with
nvi a while until I really nail it down. Then all of the cool stuff that
vim offers, or neovim, will seem like icing on the cake that is vi.
Thanks to Ken Thompson for writing a work of art that serves as the true
core of this editor, to Bill Joy for his great work in extending it,
again to Bill Joy for bringing vi to life, and to Mary Ann for the
macros and making it accessible to the rest of us, and others who
contributed. It's 2024 and I still can't find a better terminal editor
than vi... as it existed in the late '80s or as it exists today as
nvi/vim/neovim. Amazing piece of software.
Off to figure out tags!! Arg, seems like it oughtta be really useful in
my work with source code, why can't I figure it out?! Sheesh.
Will
A small reflection on the marvels of ancient writing...
Today, I went to the local Unix user group to see what that was like. I
was pleasantly surprised to find it quite rewarding. Learned some new
stuff... and won the door prize, a copy of a book entitled "Introducing
the UNIX System" by Henry McGilton and Rachel Morgan. I accepted the
prize, but said I'd just read it and recycle it for some other deserving
unix-phile. As it turns out, I'm not giving it back, I'll contribute
another Unix book. I thought it was just some intro unix text and
figured I might learn a thing or two and let someone else who needs it
more have it after I read it, but it's a V7 book! I haven't seen many of
those around and so, I started digging into it and do I ever wish I'd
had it when I was first trying to figure stuff out! Great book, never
heard of it, or its authors, but hey, I've only read a few thousand tech
books.
What was really fun, was where I went from there - the authors mentioned
some bit about permuted indexes and the programmer's manual... So, I
went and grabbed my copy off the shelf and lo and behold, my copy either
doesn't have a permuted index or I'm not finding it; I was crushed. But,
while I was digging around the manual, I came across Section 9 - Quick
UNIX Reference! Are you kidding me?!! How many years has it taken me to
gain what knowledge I have? and here, in 20 pages is the most concise
reference manual I've ever seen.
Just the SH, TROFF and NROFF sections are worth the effort of digging up
this 40 year old text.
Anyhow, following on the heels of a recent dive into v7 and Ritchie's
setting up unix v7 documentation, I was yet again reminded of the golden
age of well written technical documents. Oh and I guess my recent
perusal of more modern "heavy weight" texts (heavy by weight, not
content, and many hundreds of pages long) might have made me more
appreciative of concision - I long for the days of 300 page and shorter
technical books :). In case you think I overstate - just got through a
pair of TCL/TK books together clocking in at 1565 pages.
Thank you Henry McGilton, Rachel Morgan, and Dennis Ritchie and Steve
Bourne and other folks of the '70s and '80s for keeping it concise. As a
late to the party unix enthusiast, I greatly value your work and am
really thankful you didn't write like they do now...
Later,
Will
> was there ever any crossover regarding UNIX folks applying their
developments to other non-UNIX AT&T systems
Besides Sandy Fraser's long-term effort to advance digital communication
(as distinct from digital transmission), there was TPC; see TUHS
https://www.tuhs.org/pipermail/tuhs/2020-April/020802.html and other
mentions of TPC in the TUHS archives.
Ken Thompson did considerable handholding for early adopters of Unix for
applications within the Bell System, notably tracking automatic trouble
reports from switching systems and managing the workflow of craftspeople in
a wire center.
Bob Morris's intimate participation in a submarine signal-processing
project that Bell Labs contracted to produce for the US Navy set him on a
career path that led to becoming chief scientist at NSA's National Computer
Security Center.
Gerard Holtzmann collaborated to instill model-checking in switching and
transmission projects.
Andrew Hume spent much time with AT&T's call records.
Lorinda Cherry single-handedly automated the analysis of call centers'
notes on customer contacts. This enabled detection of significant
human-engineering and public-relations problems.
An important part of my role as a department head was to maintain contacts
with development labs so that R and D were mutually aware of each other's
problems and expertise. This encouraged consulting visits, internships, and
occasionally extended collaboration or specific research projects as
recounted above.
Doug
Today, as I was digging more into nroff/troff and such, and bemoaning
the lack of brevity of modern text. I got to thinking about the old days
and what might have gone wrong with book production that got us where we
are today.
First, I wanna ask, tongue in cheek, sort of... As the inventors and
early pioneers in the area of moving from typesetters to print on
demand... do you feel a bit like the Manhattan project - did you maybe
put too much power into the hands of folks who probably shouldn't have
that power?
But seriously, I know the period of time where we went from hot metal
typesetting to the digital era was an eyeblink in history but do y'all
recall how it went down? Were you surprised when folks settled on word
processors in favor of markup? Do you think we've progressed in the area
of ease of creating documentation, printing it, and making it viewable and
accurate since 1980?
I didn't specifically mention unix, but unix history is forever bound to
the evolution of documents and printing, so I figure it's fair game for
TUHS and isn't yet COFF :).
Later,
Will
> Were you surprised when folks settled on word processors in favor of
markup?
I'm not sure what you're asking. "Word processor" was a term coming into
prominence when Unix was in its infancy. Unix itself was sold to management
partly on the promise of using it to make a word processor. All word
processors used typewriters and were markup-based. Screens, which
eventually enabled WYSIWYG, were not affordable for widespread use.
Perhaps the question you meant to ask was whether we were surprised when
WYSIWYG took over word-processing for the masses. No, we weren't, but we
weren't attracted to it either, because it sacrificed markup's potential
for expressing the logical structure of documents and thus fostering
portability of text among distinct physical forms, e.g. man pages on
terminals and in book form or technical papers as TMs and as journal
articles. WYSIWYG was also unsuitable for typesetting math. (Microsoft Word
clumsily diverts to a separate markup pane for math.)
Moreover, WYSIWYG was out of sympathy with Unix philosophy, as it kept
documents in a form difficult for other tools to process for unanticipated
purposes. In this regard, I still regret that Luca Cardelli and Mark
Manasse moved on from Bell Labs before they finished their dream of Blue, a
WYSIWYG editor for markup documents. I don't know yet whether that blue-sky
goal is achievable. (.docx may be seen as a ponderous latter-day attempt.
Does anyone know whether it has fostered tool use?)
Doug
I'm reading about the Automatic Intercept System as discussed in BSTJ Vol. 53 No. 1 this evening. It is a stored program control call handling system designed to respond to calls with potential forwarding or disconnection messages. Reading through the description of the operating system for AIS got me wondering:
What with the growing experience in the CSRC regarding kernel technologies and systems programming, was there ever any crossover regarding UNIX folks applying their developments to other non-UNIX AT&T systems projects or vice versa, perhaps folks who worked primarily on switching and support software bringing things over to the UNIX development camp? In other words, was there personnel cross-pollination between Bell System UNIX programmers and the folks working on stuff like AIS, ESS switching software, etc.? Or were the aims and implementation of such projects so different that the resources were relatively siloed?
I would imagine some of these projects were at least developed using UNIX given the popularity and demands of PWB. That's just my hunch though, some BSTJs also describe software development and maintenance taking place on S/360 and S/370 machines and various PDPs. Indeed the development process for AIS mentioned above, as of late 1971, involved assembly via S/360 software and then system maintenance and debugging via an attached PDP-9.
- Matt G.
Good day everyone, I just wanted to share that I've put up a bit of info as well as some book covers concerning UNIX standards that were published from the 80s til now:
https://wiki.tuhs.org/doku.php?id=publications:standards
I did my best to put down a bit of information about the /usr/group, POSIX, SVID, and SUS/Open Group standards, although there's certainly more to each story than what I put down there. Still, hopefully it serves to lay out a bit of the history of the actual standards produced over time.
I'm kicking myself because one of the things I could've produced a picture of but didn't save at the time is the cover of IEEE 1003.2, a copy of this popped up on eBay some time in the past year and for reasons I can't recall I didn't order it, nor did I save the picture from the auction at the time. In any case, if anyone has any published standards that are not visually represented in this article, I'm happy to add any photos or scans you can provide to the page.
Also pardon if the bit on spec 1170/SUS may be shorter than the others. Admittedly even having most of this on the desk in front of me right now, I'm fuzzy on the lines between POSIX, the Single UNIX Specification, the "Open Group Specification", spec 1170, etc. or if these are all names that ultimately just refer to different generations of the same thing. Part of getting this information put down is hoping someone will be along to correct inaccuracies :)
Anywho, that's all for now. Feel free to suggest any corrections or additions!
- Matt G.
FYI, this just got passed along by Vint Cerf. Very sad news.
---------- Forwarded message ---------
From: vinton cerf via Internet-history <internet-history(a)elists.isoc.org>
Date: Tue, Jun 4, 2024 at 3:18 PM
Subject: [ih] Mike Karels has died
To: internet-history <internet-history(a)elists.isoc.org>
Mike Karels died on Sunday. I don’t have any details other than:
https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/
https://www.gearty-delmore.com/obituaries/michael-mike-karels
Mike was deeply involved in the Berkeley BSD releases as I recall, after he
inherited the TCP/IP implementation for Unix from Bill Joy (am I
remembering that correctly?).
RIP
v
--
Internet-history mailing list
Internet-history(a)elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history
Today after trying to decipher the online help for vim and neovim, I
decided I'd had enough and I opted for nvi - the bug-for-bug vi
compatible editor that I've used for so long on FreeBSD. It handles cursor
keys these days (my biggest gripe back when; now I'm not so sure it's
an improvement). Its in-app help pages are about 300 lines long, the
docs are just four of the 4.4 docs: An Introduction to Display Editing
with VI, Edit: A tutorial, EX Reference Manual, and VI-EX Reference
Manual - all very well written and understandable. It does everything I
really need it to do without the million and one extensions and
"enhancements" the others offer.
In doing the docs research, I found many, many references to a "Vi
Quick Reference card" in the various manpages and docs. I googled and
googled some more and of course got thousands of hits (really many
thousands), but I can't seem to find the actual card referenced. I'm
pretty sure what I want to find is a scanned image or pdf of the card
for 4.4bsd.
Do y'all happen to know of where I might find the golden quick ref card
for vi from back in the 4.4bsd days or did it even really exist?
Will
I keep Lomuto and Lomuto, "A Unix Primer", Prentice-Hall (1983) on my
shelf, not as a reference, but because I like to savor the presentation.
The Lomutos manage to impart the Unix ethos while maintaining focus on the
title in a friendly style that is nevertheless succinct and accurate.
Doug
A few years ago, someone -- and I've forgotten who, forgive me -- kindly gave me a copy of the source code for a UNIX for the AT&T PC6300 called IN/ix, developed by INTERACTIVE Systems. I have found precious little about this system online. Apparently the PC/ix UNIX for the IBM PC XT is fairly well preserved, but I can't find much about IN/ix.
For what it's worth, the login herald in the source code reads:
"IN/ix Office System (c) Copyright INTERACTIVE Systems Corp. 1983, 1988"
Presumably this was PC/ix, but targeting the AT&T 6300? Does anyone have any more knowledge of IN/ix?
If you're interested in digging into it yourself, I've dropped the source here:
https://archives.loomcom.com/pc6300/
(N.B.: All the files inside the zip are compressed, that's just how I got it)
-Seth
--
Seth Morabito * Poulsbo, WA * https://loomcom.com/
> Does anyone here have any source material they can point me to
> documenting the existence of a port of BSD curses to Unix Version 7?
Curses appears in the v8 manual but not v7. Of course a
conclusion that it was not ported to v7 turns on dates. Does
v7 refer to a point in time or an interval that extended until we
undertook to prepare the v8 manual? Obviously curses was
ported during or before that interval. If curses was available
when the v7 manual was prepared, I (who edited both editions)
evidently was unaware of any dependence on it then.
Doug
So I've been doing a bit of reading on 1A and 4ESS technologies lately, getting
a feel for the state of things just prior to 3B and 5ESS popping onto the scene,
and came across some BSTJ references to the programming environments involved
in the 4ESS and TSPS No. 1 systems.
The general assembly system targeting the 1A machine language was known as
SPC-SWAP (SWitching Assembly Program)[1](p. 206) and ran under OS/360/370, with
editing typically performed in QED. This then gave way to the EPL (ESS
Programming Language) and ultimately EPLX (EPL eXtra)[2](p. 1)[3](p. 8)
languages which, among other things, were used for later 4ESS work with cross-
compilers for at least TSS/360 by the sounds of it.
Are there any recollections of attempts by the Bell System to rebase any of
these 1A-targeting environments into UNIX, or by the time UNIX was being
considered more broadly for Bell System projects, was 3B/5ESS technology well on
the way, rendering attempting to move entrenched IBM-based environments for the
older switching computation systems moot?
For the record, in addition to the evolution of ESS to the 5ESS generation, a
revision of TSPS, 1B, was also introduced which was rebased on the 3B20D
processor and utilized the same 3B cross-compilation SGS under UNIX as other 3B-
targeted applications[4]. Interestingly, the paper on software development
in [4](p. 109) still makes reference to Programmer's Workbench as of 1982,
implying that nomenclature may have still been the norm at some Bell Labs sites
such as Naperville, Illinois, although I can't tell if they're referring to
PWB as in the branch of UNIX or the environment of make, sccs, etc.
Additionally, is anyone aware of surviving accessible specimens of SWAP
assembly, EPL, or EPLX code or literature beyond the BSTJ references and paper
referenced in the IEEE library below? Thanks for any insights!
- Matt G.
[1] - https://bitsavers.org/magazines/Bell_System_Technical_Journal/BSTJ_V58N06_1…
[2] - https://ieeexplore.ieee.org/document/810323
[3] - https://bitsavers.org/magazines/Bell_System_Technical_Journal/BSTJ_V60N06_1…
[4] - https://bitsavers.org/magazines/Bell_System_Technical_Journal/BSTJ_V62N03_1…
> Doug McIlroy was generating random regular expressions
Actually not. I exhaustively (within limits) tested an RE recognizer
without knowingly generating any RE either mechanically or by hand.
The trick: From recursive equations (easily derived from the grammar of
REs), I counted how many REs exist up to various limits on token counts.
Then I generated all strings that satisfied those limits, turned the
recognizer loose on them and counted how many it accepted. Any disagreement
of counts revealed the existence (but not any symptom) of bugs.
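A minimal C sketch of the counting idea - substituting balanced
parentheses for REs so the recursive count (here the Catalan numbers)
fits in a few lines; the recognizer is just a stand-in for the one
under test, not Doug's:

    #include <stdio.h>

    /* Recognizer under test: accepts balanced strings of parentheses. */
    static int accept_parens(const char *s)
    {
        int depth = 0;
        for (; *s; s++) {
            depth += (*s == '(') ? 1 : -1;
            if (depth < 0)
                return 0;
        }
        return depth == 0;
    }

    /* Count from the recursive equation C(0)=1,
     * C(k) = sum_{i<k} C(i)*C(k-1-i) - the Catalan numbers. */
    static unsigned long expected(int n)
    {
        unsigned long c[16] = { 1 };
        for (int k = 1; k <= n; k++)
            for (int i = 0; i < k; i++)
                c[k] += c[i] * c[k - 1 - i];
        return c[n];
    }

    int main(void)
    {
        for (int n = 1; n <= 8; n++) {
            int len = 2 * n;
            unsigned long accepted = 0;
            /* Enumerate every string over {'(',')'} of this length. */
            for (unsigned long bits = 0; bits < (1UL << len); bits++) {
                char s[24];
                for (int i = 0; i < len; i++)
                    s[i] = (bits >> i & 1) ? '(' : ')';
                s[len] = '\0';
                accepted += accept_parens(s);
            }
            /* A count mismatch proves a bug exists without exhibiting one. */
            printf("n=%d: accepted %lu, expected %lu%s\n", n, accepted,
                   expected(n), accepted == expected(n) ? "" : "  <-- BUG");
        }
        return 0;
    }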
Unlike most diagnostic techniques, this scheme produces a certificate of
(very high odds on) correctness over a representative subdomain. The scheme
also agnostically checks behavior on bad inputs as well as good. It does
not, however, provide a stress test of a recognizer's capacity limits. And
its exponential nature limits its applicability to rather small domains.
(REs have only 5 distinct kinds of token.)
Doug
Hi folks,
I'm finding it difficult to find any direct sources on the question in
the subject line.
Does anyone here have any source material they can point me to
documenting the existence of a port of BSD curses to Unix Version 7?
I know that curses made it into 2.9BSD for the PDP-11, but that's not
quite the same thing.
There are comments in System V Release 2's curses.h file[1][2] (very
different from 4BSD's[3]) that suggest some effort to accommodate
Version 7's terminal driver. So I would _presume_ that curses got
ported to Version 7. But that's System V, right when it started
diverging from BSD curses, and moreover, presumption is not evidence.
Even personal accounts/anecdotes would be helpful. Maybe some of you
_wrote_ curses applications for Version 7 machines.
Regards,
Branden
[1] System III apparently did not have curses at all. Both it and 4BSD
were released in 1980. System V Release 1 doesn't seem to, either.
[2] https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/include/cur…
[3] https://minnie.tuhs.org/cgi-bin/utree.pl?file=4BSD/usr/include/curses.h
with my pedantic head on…
The “7th Edition” was the name of the Perkin Elmer port (nee Interdata), derived from Richard Miller’s work.
This was Unix Version 7 from the labs, with a v6 C compiler, with vi, csh, and curses from 2.4BSD (though we were never 100% sure about this version).
You never forget your first Unix :-)
-Steve
All,
I can't believe it's been 9 years since I wrote up my original notes on
getting Research Unix v7 running in SIMH. Crazy how time flies. Well,
this past week Clem found a bug in my scripts that create tape images.
It seems they were missing a tape mark at the end. Not a showstopper
by any means, but we like to keep a clean house. So, I applied his fixes
and updated the scripts along with the resultant tape image and Warren
has updated them in the archive:
https://www.tuhs.org/Archive/Distributions/Research/Keith_Bostic_v7/
I've also updated the note to address the fixes, to use the latest
version of Open-SIMH on Linux Mint 21.3 "Virginia" (my host of choice
these days), and to bring the transcripts up to date:
https://decuser.github.io/unix/research-unix/v7/2024/05/23/research-unix-v7…
Later,
Will
Well this is obviously a hot button topic. AFAIK I was nearby when fuzz-testing for software was invented. I was the main advocate for hiring Andy Payne into the Digital Cambridge Research Lab. One of his little projects was a thing that generated random but correct C programs and fed them to different compilers or compilers with different switches to see if they crashed or generated incorrect results. Overnight, his tester filed 300 or so bug reports against the Digital C compiler. This was met with substantial pushback, but that was mostly because many of the reports traced to the same underlying bugs.
Bill McKeeman expanded the technique and published "Differential Testing for Software" https://www.cs.swarthmore.edu/~bylvisa1/cs97/f13/Papers/DifferentialTesting…
Andy had encountered the underlying idea while working as an intern on the Alpha processor development team. Among many other testers, they used an architectural tester called REX to generate more or less random sequences of instructions, which were then run through different simulation chains (functional, RTL, cycle-accurate) to see if they did the same thing. Finding user-accessible bugs in hardware seems like a good thing.
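A toy illustration of the differential idea in C - none of the real
testers, just their shape: run the same random inputs through two
implementations and flag any disagreement. Here a deliberately naive
parser (it wraps on overflow) is run against strtol() as the reference:

    #include <stdio.h>
    #include <stdlib.h>

    static long my_parse(const char *s)
    {
        unsigned long v = 0;
        while (*s >= '0' && *s <= '9')
            v = v * 10 + (unsigned long)(*s++ - '0');
        return (long)v;       /* wraps on overflow - the planted bug */
    }

    int main(void)
    {
        int shown = 0;
        srand(1);                     /* fixed seed: failures reproduce */
        for (int t = 0; t < 100000 && shown < 5; t++) {
            char buf[32];
            int n = 1 + rand() % 20;  /* random digit string, 1-20 chars */
            for (int i = 0; i < n; i++)
                buf[i] = '0' + rand() % 10;
            buf[n] = '\0';

            long a = my_parse(buf);
            long b = strtol(buf, NULL, 10);  /* reference implementation */
            if (a != b) {         /* the two implementations disagree */
                printf("mismatch on \"%s\": %ld vs %ld\n", buf, a, b);
                shown++;
            }
        }
        return 0;
    }

The short, reproducible failing inputs it prints are exactly the
maintainer-friendly test cases described below.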
The point of generating correct programs (mentioned under the term LangSec here) goes a long way to avoid irritating the maintainers. Making the test cases short is also maintainer-friendly. The test generator is also in a position to annotate the source with exactly what it is supposed to do, which is also helpful.
-L
I'm surprised by nonchalance about bad inputs evoking bad program behavior.
That attitude may have been excusable 50 years ago. By now, though, we have
seen so much malicious exploitation of open avenues of "undefined behavior"
that we can no longer ignore bugs that "can't happen when using the tool
correctly". Mature software should not brook incorrect usage.
"Bailing out near line 1" is a sign of defensive precautions. Crashes and
unjustified output betray their absence.
I commend attention to the LangSec movement, which advocates for rigorously
enforced separation between legal and illegal inputs.
Doug
>> Another non-descriptive style of error message that I admired was that
>> of Berkeley Pascal's syntax diagnostics. When the LR parser could not
>> proceed, it reported where, and automatically provided a sample token
>> that would allow the parsing to progress. I found this uniform
>> convention to be at least as informative as distinct hand-crafted
>> messages, which almost by definition can't foresee every contingency.
>> Alas, this elegant scheme seems not to have inspired imitators.
> The hazard with this approach is that the suggested syntactic correction
> might simply lead the user farther into the weeds
I don't think there's enough experience to justify this claim. Before I
experienced the Berkeley compiler, I would have thought such bad outcomes
were inevitable in any language. Although the compilers' suggestions often
bore little or no relationship to the real correction, I always found them
informative. In particular, the utterly consistent style assured there was
never an issue of ambiguity or of technical jargon.
The compiler taught me Pascal in an evening. I had scanned the Pascal
Report a couple of years before but had never written a Pascal program.
With no manual at hand, I looked at one program to find out what
mumbo-jumbo had to come first and how to print integers, then wrote the
rest by trial and error. Within a couple of hours I had a working program
good enough to pass muster in an ACM journal.
An example arose that one might think would lead "into the weeds". The
parser balked before 'or' in a compound Boolean expression like 'a=b and
c=d or x=y'. It couldn't suggest a right paren because no left paren had
been seen. Whatever suggestion it did make (perhaps 'then') was enough to
lead me to insert a remote left paren and teach me that parens are required
around Boolean-valued subexpressions. (I will agree that this lesson might
be less clear to a programming novice, but so might be many conventional
diagnostics, e.g. "no effect".)
Doug
I just revisited this ironic echo of Mies van der Rohe's aphorism, "Less is
more".
% less --help | wc
298
Last time I looked, the line count was about 220. Bloat is self-catalyzing.
What prompted me to look was another disheartening discovery. The "small
special tool" Gnu diff has a 95-page manual! And it doesn't cover the
option I was looking up (-h). To be fair, the manual includes related
programs like diff3(1), sdiff(1) and patch(1), but the original manual for
each fit on one page.
Doug
> was ‘usage: ...’ adopted from an earlier system?
"Usage" was one of those lovely ideas, one exposure to which flips its
status from unknown to eternal truth. I am sure my first exposure was on
Unix, but I don't remember when. Perhaps because it radically departs from
Ken's "?" in qed/ed, I have subconsciously attributed it to Dennis.
The genius of "usage" and "?" is that they don't attempt to tell one what's
wrong. Most diagnostics cite a rule or hidden limit that's been violated or
describe the mistake (e.g. "missing semicolon"), sometimes raising more
questions than they answer.
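For anyone who hasn't internalized it, the convention is a one-liner in
C; the command and its operands below are only an illustration:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            /* Say how to use the tool; don't diagnose what went wrong. */
            fprintf(stderr, "usage: %s source target\n", argv[0]);
            exit(2);
        }
        /* ... the actual work ... */
        return 0;
    }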
Another non-descriptive style of error message that I admired was that of
Berkeley Pascal's syntax diagnostics. When the LR parser could not proceed,
it reported where, and automatically provided a sample token that would
allow the parsing to progress. I found this uniform convention to be at
least as informative as distinct hand-crafted messages, which almost by
definition can't foresee every contingency. Alas, this elegant scheme seems
not to have inspired imitators.
Doug
So fork() is a significant nuisance. How about the far more ubiquitous
problem of IO buffering?
On Sun, May 12, 2024 at 12:34:20PM -0700, Adam Thornton wrote:
> But it does come down to the same argument as
>
https://www.microsoft.com/en-us/research/uploads/prod/2019/04/fork-hotos19.…
The Microsoft manifesto says that fork() is an evil hack. One of the cited
evils is that one must remember to flush output buffers before forking, for
fear it will be emitted twice. But buffering is the culprit, not the
victim. Output buffers must be flushed for many other reasons: to avoid
deadlock; to force prompt delivery of urgent output; to keep output from
being lost in case of a subsequent failure. Input buffers can also steal
data by reading ahead into stuff that should go to another consumer. In all
these cases buffering can break compositionality. Yet the manifesto blames
an instance of the hazard on fork()!
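A minimal C sketch of that hazard, assuming stdout is a pipe or file so
stdio fully buffers (run it as ./a.out | cat):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        printf("hello ");        /* still sitting in the stdio buffer */
        /* fflush(stdout);          uncomment to print "hello " once  */

        if (fork() == 0) {       /* child inherits the unflushed buffer */
            printf("child\n");
            return 0;            /* child's exit flushes "hello child" */
        }
        wait(NULL);
        printf("parent\n");
        return 0;                /* parent's exit flushes "hello parent" */
    }

Piped through cat, "hello " comes out twice; with the fflush, once.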
To assure compositionality, one must flush output buffers at every possible
point where an unknown downstream consumer might correctly act on the
received data with observable results. And input buffering must never
ingest data that the program will not eventually use. These are tough
criteria to meet in general without sacrificing buffering.
The advent of pipes vividly exposed the non-compositionality of output
buffering. Interactive pipelines froze when users could not provide input
that would force stuff to be flushed until the input was informed by that
very stuff. This phenomenon motivated cat -u, and stdio's convention of
line buffering for stdout. The premier example of input buffering eating
other programs' data was mitigated by "here documents" in the Bourne shell.
These precautions are mere fig leaves that conceal important special cases.
The underlying evil of buffered IO still lurks. The justification is that
it's necessary to match the characteristics of IO devices and to minimize
system-call overhead. The former necessity requires the attention of
hardware designers, but the latter is in the hands of programmers. What can
be done to mitigate the pain of border-crossing into the kernel? L4 and its
ilk have taken a whack. An even more radical approach might flow from the
"whitepaper" at www.codevalley.com.
In any event, the abolition of buffering is a grand challenge.
Doug
On Sat, May 11, 2024 at 2:35 PM Theodore Ts'o <tytso(a)mit.edu> wrote:
>
> I bet most of the young'uns would not be trying to do this as a shell
> script, but using the Cloud SDK with perl or python or Go, which is
> *way* more bloaty than using /bin/sh.
>
> So while some of us old farts might be bemoaning the death of the Unix
> philosophy, perhaps part of the reality is that the Unix philosophy
> were ideal for a simpler time, but might not be as good of a fit
> today
I'm finding myself in agreement. I might well do this with jq, but as you
point out, you're using the jq DSL pretty extensively to pull out the
fields. On the other hand, I don't think that's very different than piping
stuff through awk, and I don't think anyone feels like _that_ would be
cheating. And jq -L is pretty much equivalent to awk -F, which is how I
would do this in practice, rather than trying to inline the whole jq bit.
But it does come down to the same argument as
https://www.microsoft.com/en-us/research/uploads/prod/2019/04/fork-hotos19.…
And it is true that while fork() is a great model for single-threaded
pipeline-looking tasks, it's not really what you want for an interactive
multithreaded application on your phone's GUI.
Oddly, I'd have a slightly different reason for reaching for Python (which
is probably how I'd do this anyway), and that's the batteries-included
bit. If I write in Python, I've got the gcloud api available as a Python
module, and I've got a JSON parser also available as a Python module (but I
bet all the JSON unmarshalling is already handled in the gcloud library),
and I don't have to context-switch to the same degree that I would if I
were stringing it together in the shell. Instead of "make an HTTP request
to get JSON text back, then parse that with repeated calls to jq", I'd just
get an object back from the instance fetch request, pick out the fields I
wanted, and I'd be done.
I'm afraid only old farts write anything in Perl anymore. The kids just
mutter "OK, Boomer" when you try to tell them how much better CPAN was than
PyPi. And it sure feels like all the cool kids have abandoned Go for Rust,
although Go would be a perfectly reasonable choice for this task as well
(and would look a lot like Python: get an object back, pick off the useful
fields).
Adam
There was nothing unique about the size or the object code of Dennis's C
compiler. In the 1960s, Digitek had a thriving business of making Fortran
compilers for all manner of machines. To optimize space usage, the
compilers' internal memory model comprised variable-size movable tables,
called "rolls". To exploit this non-native architecture, the compilers
themselves were interpreted, although they generated native code. Bob
McClure tells me he used one on an SDS910 that had 8K 16-bit words.
Dennis was one-up on Digitek in having a self-maintaining compiler. Thus,
when he implemented an optimization, the source would grow, but the
compiler binary might even shrink thanks to self-application.
Doug
nl(1) uses the notable character sequences “\:\:\:”, “\:\:”, and “\:” to delimit header, body, and trailer sections within its input.
I wondered if anyone was able to shed light on the reason those were adopted as the defaults?
I would have expected perhaps something compatible with *roff (like, .\” something).
FreeBSD claims nl first appeared in System III (although it previously claimed SVR2), but I haven’t dug into the implementation any further.
Thanks in advance,
d
While the idea of small tools that do one job well is the core tenet of
what I think of as the UNIX philosophy, this goes a bit beyond UNIX, so I
have moved this discussion to COFF and BCCing TUHS for now.
The key is that not all "bloat" is the same (really)—or maybe one person's
bloat is another person's preference. That said, NIH leads to pure bloat
with little to recommend it, while multiple offerings are a choice. Maybe
the difference between the two is one person's view over another's.
On Fri, May 10, 2024 at 6:08 AM Rob Pike <robpike(a)gmail.com> wrote:
> Didn't recognize the command, looked it up. Sigh.
>
Like Rob -- this was a new one for me, too.
I looked, and it is on the SYS3 tape; see:
https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/man/man1/nl.1
> pr -tn <file>
>
> seems sufficient for me, but then that raises the question of your
> question.
>
Agreed, that has been burned into the ROMs in my fingers since the
mid-1970s 😀
BTW: SYS3 has pr(1) with both switches too (more in a minute)
> I've been developing a theory about how the existence of something leads
> to things being added to it that you didn't need at all and only thought of
> when the original thing was created.
>
That is a good point, and I generally agree with you.
> Bloat by example, if you will. I suspect it will not be a popular theory,
> however accurately it may describe the technological world.
>
Of course, sometimes the new features >>are<< easier (more natural *for
some people*). And herein lies the core problem. The bloat is often
repetitive, and I suggest that it is often implemented in the wrong place -
and usually for the wrong reasons.
Bloat comes about because somebody thinks they need some feature and
probably doesn't understand that it is already there or how they can use
it. But if they do know about it, their tool must be set up to exploit it - so
they do not need to reinvent it. GUI-based tools are notorious for this
failure. Everyone seems to have a built-in (unique) editor, or a private
way to set up configuration options et al. But ... that walled garden is
comfortable for many users and >>can be<< useful sometimes.
Long ago, UNIX programmers learned that looking for $EDITOR in the
environment was way better than creating one. Configuration was as ASCII
text, stored in /etc for system-wide and dot files in the home for users.
But it also means the >>output<< of each tool needs to be usable by each
other (*i.e.*, docx or xlsx files are a no-no).
For example, for many things on my Mac, I do use the GUI-based tools --
there is no doubt they are better integrated with the core Mac system >>for
some tasks.<< But only if I obey a set of rules Apple decrees. For
instance, this email reader is easier much of the time than MH (or the HM
front end, for that matter), which I used for probably 25-30 years. But on
my Mac, I always have 4 or 5 iterm2(1) open running zsh(1) these days. And,
much of my typing (and everything I do as a programmer) is done in the shell
(including a simple text editor, not an 'IDE'). People who love IDEs swear
by them -- I'm just not impressed - there is nothing they do for me that
makes it easier, and I have learned yet another scheme.
That said, sadly, Apple is forcing me to learn yet another debugger since
none of the traditional UNIX-based ones still work on the M1-based systems.
But at least LLDB is in the same key as sdb/dbx/gdb *et al*., so it is a
PITA but not a huge thing as, in the end, LLDB is still based on the UNIX
idea of a single well-designed, task-specific tool to do each
job, one that can work with the others.
FWIW: I was recently a tad gob-smacked by the core idea of UNIX and its
tools, which I have taken for a fact since the 1970s.
It turns out that I've been helping with the PiDP-10 users (all of the
PiDPs are cool, BTW). Before I saw UNIX, I was paid to program a PDP-10. In
fact, my first UNIX job was helping move programs from the 10 to the UNIX.
Thus ... I had been thinking that doing a little PDP-10 hacking shouldn't
be too hard to dust off some of that old knowledge. While some of it has,
of course, come back, daily I am discovering small things that are so
natural with a few simple tools on UNIX but can be hard on those systems.
I am realizing (rediscovering) that the "build it into my tool" was the
norm in those days. So instead of a pr(1) command, there was a tool that
created output to the lineprinter. You give it a file, and it is its job to
figure out what to do with it, so it has its set of features (switches) -
so "bloat" is that each tool (like many current GUI tools) has private ways
of doing things. If the maker of tool X decided to support some idea, there
was no guarantee they would do it like tool Y. The problem, of course, was that tools X and Y
had to 'know about' each type of file (in IBM terms, use its "access
method"). Yes, the engineers at DEC, in their wisdom, tried to
"standardize" those access methods/switches/features >>if you implemented
them<< -- but they are not all there.
This leads me back to the question Rob raises. Years ago, I got into an
argument with Dave Cutler RE: UNIX *vs.* VMS. Dave's #1 complaint about
UNIX in those days was that it was not "standardized." Every program was
different, and more to Dave's point, there was no attempt to make switches
or errors the same (getopt(3) had been introduced but was not being used by
most applications). He hated that tar/tp used "keys" and tools like cpio
used switches. Dave hated that I/O was so simple - in his world all user
programs should use his RMS access method of course [1]. VMS, TOPS, *etc.*,
tried to maintain a system-wide error scheme, and users could look things
like errors up in a system DB by error number, *etc*. Simply put, VMS is
very "top-down."
My point with Dave was that by being "bottom-up," the best ideas in UNIX
were able to rise. And yes, it did mean some rough edges and repeated
implementations of the same idea. But UNIX offered a choice, and while Rob
and I like and find: pr -tn perfectly acceptable thank you, clearly someone
else desired the features that nl provides. The folks that put together
System 3 offer both solutions and let the user choose.
This, of course, comes across as bloat, but maybe that is a type of bloat that is not so bad?
My own thinking is this - get things down to the basics and simplest
primitives and then build back up. It's okay to offer choices, as long as
the foundation is simple and clean. To me, bloat becomes an issue when you
do the same thing over and over again, particularly because you can not
utilize what is there already; the worst example is NIH - which happens way
more than it should.
I think the kind of bloat that GUI tools and TOPS et al. created forces
recreation, not reuse. But offering choice at the expense of multiple
tools that do the same things strikes me as reasonable/probably a good
thing.
1.] BTW: One of my favorite DEC stories WRT VMS engineering has to do
with the RMS I/O system. Supporting C on VMS was a bit of a PITA.
Eventually, the VMS engineers added Stream I/O - which simplified the C
runtime, but it was also made available for all technical languages.
Fairly soon after it was released, the DEC Marketing folks discovered
almost all new programs, regardless of language, had started to use Stream
I/O and many older programs were being rewritten by customers to use it. In
fact, inside of DEC itself, the languages group eventually rewrote things
like the FTN runtime to use streams, making it much smaller/easier to
maintain. My line in the old days: "It's not so bad that every I/O has to
offer 1000 options, it's that Dave has to check each one for every I/O. It's a
classic example of how you can easily build RMS I/O out of stream-based
I/O, but the other way around is much harder." My point here is to *use
the right primitives*. RMS may have made it easier to build RDB, but it
impeded everything else.
> On Wed, 8 May 2024 14:12:15 -0400,Clem Cole <clemc(a)ccc.com <mailto:clemc@ccc.com>> wrote:
>
> FWIW: The DEC Mod-II and Mod-III
> were new implementations from DEC WRL or SRC (I forget). They targeted
> Alpha and I, maybe Vax. I'd have to ask someone like Larry Stewart or Jeff
> Mogul who might know/remember, but I thought that the font end to the DEC
> MOD2 compiler might have been partly based on Wirths but rewritten and by
> the time of the MOD3 FE was a new one originally written using the previous
> MOD2 compiler -- but I don't remember that detail.
Michael Powell at DEC WRL wrote a Modula 2 compiler that generated VAX code. Here’s an extract from announcement.d accompanying a 1992 release of the compiler from gatekeeper.dec.com <http://gatekeeper.dec.com/>:
The compiler was designed and built by Michael L. Powell, and originally
released in 1984. Joel McCormack sped the compiler up, fixed lots of bugs, and
swiped/wrote a User's Manual. Len Lattanzi ported the compiler to the MIPS.
Later, Paul Rovner and others at DEC SRC designed Modula-2+ (a language extension with exceptions, threads, garbage collection, and runtime type dispatch). The Modula-2+ compiler was originally based on Powell’s compiler. Modula-2+ ran on the VAX.
Here’s a DEC SRC research report on Modula-2+:
http://www.bitsavers.org/pdf/dec/tech_reports/SRC-RR-3.pdf
Modula-3 was designed at DEC SRC and Olivetti Labs. It had a portable implementation (using the GCC back end) and ran on a number of machines including Alpha.
Paul
Sorry for the dual list post, I don’t know who monitors COFF, the proper place for this.
There may be a good timeline of the early decades of Computer Science and its evolution at Universities in some countries, but I’m missing it.
Doug McIlroy lived through all this; I hope he can fill in important gaps in my little timeline.
It seems from the 1967 letter, defining the field was part of the zeitgeist leading up to the NATO conference.
1949 ACM founded
1958 First ‘freshman’ computer course in USA, Perlis @ CMU
1960 IBM 1400 - affordable & ‘reliable’ transistorised computers arrived
1965 MIT / Bell / General Electric begin Multics project.
CMU establishes Computer Sciences Dept.
1967 “What is Computer Science” letter by Newell, Perlis, Simon
1968 “Software Crisis” and 1st NATO Conference
1969 Bell Labs withdraws from Multics
1970 GE's sells computer business, including Multics, to Honeywell
1970 PDP-11/20 released
1974 Unix issue of CACM
=========
The arrival of transistorised computers - cheaper, more reliable, smaller & faster - was a trigger for the accelerated uptake of computers.
The IBM 1400-series was offered for sale in 1960, becoming the first (large?) computer to sell 10,000 units - a marker of both effective marketing & sales and attractive pricing.
The 360-series, IBM’s “bet the company” machine, was in full development when the 1400 was released.
=========
Attached is a text file, a reformatted version of a 1967 letter to ’Science’ by Allen Newell, Alan J. Perlis, and Herbert A. Simon:
"What is computer science?”
<https://www.cs.cmu.edu/~choset/whatiscs.html>
=========
A 1978 masters thesis on Early Australian Computers (back to 1950’s, mainly 1960’s) cites a 17 June 1960 CSIRO report estimating
1,000 computers in the US and 100 in the UK, with no estimate mentioned for Western Europe.
The thesis has a long discussion of what to count as a (digital) ‘computer’ -
sources used different definitions, resulting in very different numbers,
making it difficult to reconcile early estimates, especially across continents & countries.
Reverse estimating to 1960 from the “10,000” NATO estimate of 1968, with a 1- or 2-year doubling time,
gives a range of 200-1,000, including the “100” in the UK.
Licklider and later directors of ARPA’s IPTO threw millions into Computing research in the 1960’s, funding research and University groups directly.
[ UCB had many projects/groups funded, including the CSRG creating BSD & TCP/IP stack & tools ]
Obviously there was more to the “Both sides of the Atlantic” argument of E.W. Dijkstra and Alan Kay - funding and numbers of installations was very different.
The USA had a substantially larger installed base of computers, even per person,
and with more university graduates trained in programming, a higher take-up in private sector, not just the public sector and defence, was possible.
=========
<https://www.acm.org/about-acm/acm-history>
In September 1949, a constitution was instituted by membership approval.
————
<https://web.archive.org/web/20160317070519/https://www.cs.cmu.edu/link/inst…>
In 1958, Perlis began teaching the first freshman-level computer programming course in the United States at Carnegie Tech.
In 1965, Carnegie Tech established its Computer Science Department with a $5 million grant from the R.K. Mellon Foundation. Perlis was the first department head.
=========
From the 1968 NATO report [pg 9 of pdf ]
<http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF>
Helms:
In Europe alone there are about 10,000 installed computers — this number is increasing at a rate of anywhere from 25 per cent to 50 per cent per year.
The quality of software provided for these computers will soon affect more than a quarter of a million analysts and programmers.
d’Agapeyeff:
In 1958 a European general purpose computer manufacturer often had less than 50 software programmers,
now they probably number 1,000-2,000 people; what will be needed in 1978?
_Yet this growth rate was viewed with more alarm than pride._ (comment)
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Where did chunix (which contains chaos.c) and several other branches of the
v8 /usr/sys tree on TUHS come from? This stuff does not appear in the v8
manual. I don't recall a Lisp machine anywhere near the Unix room, nor any
collaborations that involved a Lisp machine.
Doug
I wonder if anyone can shed any light on the timing and rationale for
the introduction of “word erase” functionality to the kernel terminal
driver. My surface skim earlier leads me to believe it came to Unix
with 4BSD, but it was not reincorporated into 8th Edition or later,
nor did it make it to Plan 9 (which did incorporate ^U for the "line
kill" command). TOPS-20 supports it via the familiar ^W, but I'm not
sure about other PDP-10 OSes (Lars?). Multics does not support it.
VMS does not support it.
What was the proximal inspiration? The early terminal drivers seem to
use the Multics command editing suite (`#` for erase/backspace, `@`
for line kill), though at some point that changed, one presumes as
TTYs fell out of favor and display terminals came to the fore.
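For what it's worth, the mechanism survives today as the termios
VWERASE slot - a BSD-lineage extension, not POSIX, honored in canonical
mode when IEXTEN is set. A small C sketch to inspect it on a current
Linux or BSD system:

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) < 0) {
            perror("tcgetattr");   /* stdin is not a terminal? */
            return 1;
        }
        /* Control characters print as ^X; ^W is the usual default. */
        printf("werase = ^%c, IEXTEN %s\n", t.c_cc[VWERASE] + '@',
               (t.c_lflag & IEXTEN) ? "on" : "off");
        return 0;
    }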
- Dan C.
I've been doing some research on Lisp machines and came across an
interesting tidbit: there was Chaosnet support in Unix v8, e.g.
https://www.tuhs.org/cgi-bin/utree.pl?file=V8/usr/sys/chunix/chaos.c
Does anyone remember why that went in? My first guess would be for
interoperability with the Symbolics users at Bell Labs (see Bromley's
"Lisp Lore", 1986), but that's just speculation.
john
Wikipedia has a brief page on cscope, which has a link to
https://cscope.sourceforge.net/history.html
written by Harold Bamford, in which he talks about the
early days of cscope at Bell Labs and its inventor Joe Steffen.
I wondered if anyone can add any interesting information about using
cscope on their projects or anything about its development.
-Marcus.
> You can always read Josh Fisher's book on the "Bulldog" compiler, I
> believe he did this work at Yale.
Are you thinking of John Ellis’s thesis:
Bulldog: A Compiler for VLIW Architectures
John R. Ellis
February 1985
http://www.cs.yale.edu/publications/techreports/tr364.pdf
Fisher was Ellis’s advisor. The thesis was also published in ACM’s Doctoral Dissertation Award:
https://mitpress.mit.edu/9780262050340/bulldog/
I believe Ellis still has a tape with his thesis software on it, but I don’t know if he’s been able to read it.
Hello Everyone
A Polish academic institution was getting rid of old IT-related stuff
and they were kind enough to give me all the Solaris-related stuff, including
lots (and I mean lots) of installation CD-ROMs, documentation, manuals,
and some Solaris software, mostly compilers and scientific stuff.
If anyone would be interested, feel free to contact me and I'd be happy to
share - almost everything is in more than a few copies and I have no
intention of keeping everything for myself.
Currently all of the stuff is located in Warsaw, Poland.
Best regards,
mjb
--
Maciej Jan Broniarz
> [TeX's] oversetting of lines caused by the periodic failure of the
> paragraph-justification algorithms drove me nuts.
Amen. If TeX can't do what it thinks is a good job, it throws a fit
and violates the margin, hoping to force a rewrite. Fortunately,
one can shut off the line-break algorithm and simply fill greedily.
The command to do so is \sloppy--an ironic descriptor of text
that looks better, albeit not up to TeX's discriminating standard.
Further irony: when obliged to write in TeX, I have resorted to
turning \sloppy mode on globally.
Apologies for airing an off-topic pet peeve,
Doug
I happened upon
https://old.gustavobarbieri.com.br/trabalhos-universidade/mc722/lowney92mul…
and I am curious as to whether any of the original Multiflow compilers
survive. I had never heard of them before now, but the fact that they were
licensed to so many influential companies makes me think that there might
be folks on this list who know of its history.
-Henry
ACPI has 4-byte identifiers (guess why!), but I just wondered, writing some
assembly:
is it globl, not global, or glbl, because globl would be a one-word
constant on the PDP-10 (five 7-bit bytes fit in a 36-bit word)?
Not entirely off track: NetBSD at some point ran (still does?) on the
PDP-10.
> "BI" fonts can, it seems, largely be traced to the impact
> of PostScript
There was no room for BI on the C/A/T. It appeared in
troff upon the taming of the Linotron 202, just after v7
and five years before PostScript.
> Seventh Edition Unix shipped a tc(1) command to help
> you preview your troff output with that device before you
> spent precious departmental money sending it to the
> actual typesetter.
Slight exaggeration. It wasn't money; it was time and messing
with film cartridges, chemicals, and wet prints. You could buy a
lot of typesetter film and developer for the price of a 4014.
Doug
Yeah, that was the one that I'd first mentioned.
Although I was more interested in when/where the 386 PCC came from.
It seems at best all those sources are locked away.
____
| From: Angus Robinson
| To: Jason Stevens
| Cc: TUHS main list
| Sent: March 25, 2024 09:17 AM
| Subject: Re: [TUHS] 386 PCC
|
|
| Is this it ?
|
| https://web.archive.org/web/20071017025542/http://pcc.ludd.ltu.se/
|
| Kind Regards,
| Angus Robinson
|
|
| On Sun, Mar 24, 2024 at 2:13 AM Jason Stevens <
| jsteve(a)superglobalmegacorp.com> wrote:
|
|
| I'd been on this whole rabbithole exploration thing of
| those MIT PCC 8086
| uploads that have been on the site & on bitsavers, it
| had me wondering is
| there any version of PCC that targeted the 386?
|
| While rebuilding all the 8086 port stuff, and MIT
| PC/IP was fun, it'd be
| kind of interesting to see if anything that ancient
| could be forced to work
| with a DOS Extender..
|
| I know there was the Anders Magnusson one in 2007,
| although the site is now
| offline. But surely there must have been another one
| between 1988/2007?
|
| Thanks!
|
|
|
|
I'd been on this whole rabbithole exploration thing of those MIT PCC 8086
uploads that have been on the site & on bitsavers, it had me wondering is
there any version of PCC that targeted the 386?
While rebuilding all the 8086 port stuff, and MIT PC/IP was fun, it'd be
kind of interesting to see if anything that ancient could be forced to work
with a DOS Extender..
I know there was the Anders Magnusson one in 2007, although the site is now
offline. But surely there must have been another one between 1988/2007?
Thanks!
Not that I'm looking for drama but any idea what happened?
Such a shame it just evaporated.
____
| From: arnold(a)skeeve.com
| To: tuhs@tuhs.org;jsteve@superglobalmegacorp.com
| Cc:
| Sent: March 25, 2024 08:46 AM
| Subject: Re: [TUHS] 386 PCC
|
|
| Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
|
| > I know there was the Anders Magnusson one in 2007,
| although the site is now
| > offline.
|
| A mirror of that work is available at
| https://github.com/arnoldrobbins/pcc-revived.
| It's current as of the last time the main site was
| still online,
| back in the fall of 2023.
|
| Magnusson has more than once said he's working to get
| things back
| online, but nothing has happened yet. I check weekly.
|
| FWIW,
|
| Arnold
|
Hi Everyone,
I’m cleaning the office and I have the following free books available first-come, first-served (just pay shipping).
“Solaris Internals.” Richard McDougall and Jim Mauro. 2007 Second Edition. 1020pp hardbound. (2 copies)
“Sun Performance and Tuning - Java and the Internet.” Adrian Cockcroft and Richard Pettit. 1998 Second Edition. 587pp softbound.
“DTrace - Dynamic Tracing in Oracle Solaris, MacOSX, and FreeBSD.” Brendan Gregg and Jim Mauro. 2011. 1115 pp softbound. (2 copies)
“Oracle Database 11g Release 2 High Availability.” Scott Jesse, Bill Burton, & Bryan Vongray. 2011 Second Edition. 515pp softbound.
“Oracle Solaris 11 System Administration - The Complete Reference.” Michael Jang, Harry Foxwell, Christine Tran, & Alan Formy-Duval. 2013. 582pp softbound. (12 copies). NOTE: this is an older edition, not the one covering 11.2.
“Strategies for Real-Time System Specification.” Derek Hatley & Imtiaz Pirbhai. 1988. 386pp hardbound.
“Mathematica.” Stephen Wolfram. 1991 Second Edition. 961pp hardbound. (Anyone want to save this from the landfill?)
Please send me mail off-list with your name and address and I’ll let you know shipping cost.
I expect to have additional books later this year.
Regards,
Stephen
> From: Rich Salz <rich.salz(a)gmail.com>
>> Don't forget the Imagen's
>>
>
> What, no Dover "call key operator"? :) (It was a Xerox product based on
> their 9700 copier.)
Actually, it was based on a Xerox 7000:
"The Dover is strip-down [sic] Xerox 7000 Reduction Duplicator. All optical system, electronics, contact relays, top harness, control console and related components are eliminated from the Xerox 7000. The paper feeder, paper transports, engines, solenoid, paper path sensing switches and related components are not disturbed. …"
http://www.bitsavers.org/pdf/xerox/dover/dover.pdf
Evenin' all...
I have a vague recollection that /dev/tty8 was the console in Edition 5
(we only used it briefly until Ed 6 appeared), but cannot find a reference
to it; lots of stuff about Penguin/OS though...
Something to do with 0-7 being the mux, so "8" was left (remember that
/dev/tty and /dev/console didn't exist back then), mayhaps?
Thanks.
-- Dave
> There was lawyerly concern about the code being stolen.
Not always misplaced. There was a guy in Boston who sold Unix look-alike
programs. A quick look at the binary revealed perfect correlation with our
C source. Coincidentally, DEC had hired this person as a consultant in
connection with cross-licensing negotiations with AT&T. Socializing at
the end of a day's negotiations, our lawyer somehow managed to turn the
conversation to software piracy. He discussed a case he was working on,
and happened to have some documents about it in his briefcase. He pulled
out a page of disassembled binary and a page of source code and showed them to
the consultant.
After a little study, the consultant confidently opined that the binary was
obviously compiled from that source. "Would it surprise you," the lawyer
asked, "if I told you that this is yours and that is ours?" The consultant
did not attend the following day's meeting.
Doug
In another thread there's been some discussion of Coherent. I just came
across this very detailed history, just posted last month. There's much
more to it than I knew.
https://www.abortretry.fail/p/the-mark-williams-company
Marc
VP/ix ran on both System III and UNIX System V/386 Release 3.2.
I do still have a copy of the VP/ix Environment documentation
and the diskettes for the software. I have the "Introduction to the
VP/ix Environment" for further reference for interested folks.
Also found some information about VP/ix on these web pages:
1.
https://virtuallyfun.com/2020/11/29/fun-with-vp-ix-under-interactive-unix-s…
2.
https://techmonitor.ai/technology/interactive_systems_is_adding_to_vpix_wit…
3.
https://manualzz.com/doc/7267897/interactive-unix-system-v-386-r3.2-v4.1---…
It's been a long time since I looked at this.
Heinz
On 3/13/2024 8:53 AM, Clem Cole wrote:
> Thanks. Fair enough. You mentioned PC/IX as /ISC's System III/
>
> I'm not sure I ever ran ISC's System III port—only the V.3 port -
> which was the basis for their ATT, Intel, and IBM work and later sold
> directly. I'm fairly sure ISC also called that port PC/IX, but they
> might have added something to say with 386 in the name—I've forgotten.
> [Heinz probably can clarify here]. Anyway, this is likely the source
> of my thinking. FWIW: The copy of PC/IX for the 386 (which I still
> have on a system I have not booted in ages) definitely has VPIX.
>
> On Wed, Mar 13, 2024 at 11:28 AM Marc Rochkind <mrochkind(a)gmail.com>
> wrote:
>
> @Clem Cole <mailto:clemc@ccc.com>,
>
> I don't remember what it was. But, the XT had an 8088, so
> certainly no 386 technology was involved.
>
> Marc
>
> On Wed, Mar 13, 2024 at 8:38 AM Clem Cole <clemc(a)ccc.com> wrote:
>
> @Marc
>
> On Tue, Mar 12, 2024 at 1:18 PM Marc Rochkind
> <mrochkind(a)gmail.com> wrote:
>
> At a trade show, I bought a utility that allowed me to run
> PC-DOS under PC/IX. I'm sure it wasn't a virtual machine.
> Rather, it just swapped back and forth. (Guessing a bit
> there.)
>
> Hmm ... you sure it was not either VPIX or DOS/Merge -- ISC
> built VPIX in cooperation with the Phoenix Tech folks for
> PC/IX. I always bought a copy with it, but it may have been an
> option. LCC did DOS/Merge originally as part of the AIX work
> for IBM and would become a core part of OS/2 Warp IIRC. Both
> Merge and VPIX had some rough edges but certainly worked fine
> for DOS 3.3 programs. The issue tended to be Win and DOS
> graphics-based programs/games that played fast and loose,
> bypassing the DOS OS interface and accessing the HW directly.
> For instance, I never got the flight simulator (Air War over
> Germany) for Dad's WWII plane (P-47 Thunderbolt) to run under
> either (i.e., only under DOS directly on the HW. FWIW: In that
> mode, Dad said the simulator flew a lot like how he remembered
> it).
>
> Both Merge and VPIX used the 386 VM support and a bunch of
> work in the core OS. Heinz would have to fill us in here.
> The version of the 386 port ISC delivered to AT&T and Intel
> only had the kernel changes to allow the VM support for VPIX
> to be linked in, but it was not there. IIRC (and I'm not
> sure I am) is that Merge could run on PC/IX also, but you had
> to replace a couple of kernel modules. It certainly would
> work on the AT&T and Intel versions.
>
>
>
> --
> /My new email address is mrochkind(a)gmail.com/
>
Did some reading today, curious on the current state of things with AT&T's UNIX copyright genealogy. The series of events as I understand it are:
AT&T partners with Novell for the Univel initiative.
Novell then acquires System V and USL from AT&T.
Novell sells UNIX System V's source to SCO, but as the courts have ruled, not the copyright.
Novell gets purchased by Micro Focus.
Micro Focus gets purchased by OpenText Corporation.
Does this make OpenText the current copyright holder of the commercial UNIX line from AT&T?
What got me looking a bit closer into this is curiosity regarding how the opening of Solaris and the CDDL may impact publication of UNIX code between System III and SVR4. I then felt the need to refresh on who might be the current copyright holder, and this is where the trail has led me.
My understanding too is that Sun's release under the CDDL set the precedent that other sub-licensees of System V codebases are also at liberty to relicense their codebases, but this may be reading too far into it. There's also the concern that the ghost of SCO will continue to punish anyone else who tries with costly-but-doomed-to-fail litigation. Have there been any happenings lately with regard to getting AT&T UNIX post-PDP-11 opened up more in the world? Reading up a bit on OpenText's business, they don't seem invested in the OS world; their primary sector appears to be content management. Granted, there's certainly under-the-radar trading of bits and pieces, but it would be nice to have some more certainty about what can happen out in the open.
- Matt G.
Hi all,
I've been working quite a bit recently with SunOS 4 on a SPARCstation 5,
seeing what I can coax out of it in terms of building and supporting a
modern computing environment. I know that TUHS isn't really the right
place for this, but can someone point me to somewhere that is? I've made
significant progress in some areas and spent a lot of cycles to get there -
for instance, I have GCC 3.4.6 up and running - so I'd like to contribute
to a community if one exists. Is there a modern equivalent of sun-managers?
-Henry
Hi all (and TUHS),
The Third Edition rand(III) page [1] ends with
WARNING The author of this routine has been writing
random-number generators for many years and has
never been known to write one that worked.
My understanding is that Ken wrote the rand implementation.
But I'm curious about the origin of this warning.
I had assumed that Ken wrote it as a combination warning+joke,
but Rob suggested that to him it didn't sound like Ken and
perhaps Doug or Dennis had written it. Does anyone remember?
Separately, I am trying to find out what the very first
Unix rand implementation was. In the TUHS archives,
the incomplete V2 sources contain a reference to srand
in cmd/bas0.s [2], but there is no definition in the tree.
The V3 man pages list it, but as far as I can tell full
library sources do not appear in the TUHS archives
until the V6 snapshot. The V6 rand [3] is:
rand:
mov r1,-(sp)
mov ranx,r1
mpy $13077.,r1
add $6925.,r1
mov r1,r0
mov r0,ranx
bic $100000,r0
mov (sp)+,r1
rts pc
Perhaps this is the original rand as well? It is hard to imagine
a much simpler one, other than perhaps removing the addition,
but doing so would create a sequence of only odd numbers.
From the man page description it sounds like this has to be the
original generator, perhaps with different constants.
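For anyone who doesn't read PDP-11 assembly, here is a minimal C transliteration of the routine above (my rendering, not historical source; v6_rand is an invented name to avoid clashing with libc):

```c
#include <stdio.h>

static unsigned short ranx;             /* the 16-bit seed word */

int v6_rand(void)
{
    /* mpy $13077.; add $6925. -- arithmetic wraps mod 2^16, and the
       odd increment keeps the sequence from being stuck all-odd */
    ranx = (unsigned short)(ranx * 13077 + 6925);
    return ranx & 077777;               /* bic $100000: clear sign bit */
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        printf("%d\n", v6_rand());      /* 6925, 27166, ... */
    return 0;
}
```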
Thanks!
Best,
Russ
[1]
https://github.com/dspinellis/unix-history-repo/blob/Research-V3/man/man3/r…
[2]
https://github.com/dspinellis/unix-history-repo/blob/Research-V2/cmd/bas0.s
[3]
https://github.com/dspinellis/unix-history-repo/blob/Research-V6/usr/source…
I don't know about the PiSCSI in particular. For the SCSI2SD, if you have
the drive properly defined in the controller you can just use dd to write
the image to the SD card at the offset where the drive is defined. If the
drive is the first thing on the card, dd if=image of=drive conv=notrunc
will do what you want.
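For the curious, the same operation spelled out in C (a hedged sketch, equivalent to dd with seek=; the image path, card device, and OFFSET are all placeholders you must adapt):

```c
/* Copy a drive image onto an SD card at the byte offset where the
 * SCSI2SD configuration places the drive (0 if it is first). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const off_t OFFSET = 0;                 /* placeholder offset */
    int in  = open("image", O_RDONLY);      /* placeholder image path */
    int out = open("/dev/sdX", O_WRONLY);   /* placeholder card device */
    char buf[65536];
    ssize_t n;

    if (in < 0 || out < 0) { perror("open"); return 1; }
    if (lseek(out, OFFSET, SEEK_SET) < 0) { perror("lseek"); return 1; }
    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, n) != n) { perror("write"); return 1; }
    close(in);
    close(out);
    return 0;
}
```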
-Henry
On Wed, 13 Mar 2024 at 18:12, <earl(a)baugh.org> wrote:
> I’ll have to see about pulling stuff out this weekend and maybe move
> forward.
>
> Still am missing one part — how to get an external SCSI emulator to the
> point where I can get a disk image to it.
>
> Is there a way to move the disk created in TME onto an emulator?? (BTW,
> I’ll probably be using the PiSCSI for this, since I want to have multiple
> images out there, as well as a SD drive so I don’t chance losing stuff
> after getting it all set up.
>
> Earl
>
> On Mar 13, 2024, at 6:09 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>
> The emulation of proper tape drive records is present in TME - see this
> fragment from the setup file that I have to install SunOS 2:
>
> ## power up the machine:
> ##
> # uncomment this line to automatically power up the machine when
> # tmesh starts:
> #
> command tape0 load sunos-2.0-sun2/tape1/01 sunos-2.0-sun2/tape1/02
> sunos-2.0-sun2/tape1/03 sunos-2.0-sun2/tape1/04 sunos-2.0-sun2/tape1/05
> sunos-2.0-sun2/tape1/06 sunos-2.0-sun2/tape1/07 sunos-2.0-sun2/tape1/08
> sunos-2.0-sun2/tape1/09 sunos-2.0-sun2/tape1/10
> command mainbus0 power up
>
> Let me know if you need more of a walkthrough, I'd have to get NetBSD
> running in a VM as I haven't worked with this in a long time, but I'm sure
> it still works.
>
> -Henry
>
> On Wed, 13 Mar 2024 at 18:04, <earl(a)baugh.org> wrote:
>
>> I had old instructions to do this but getting TME running was a bit
>> quirky. And the package had lost most of its support.
>> (I did just go out and find that some folks have somewhat resurrected
>> it…)
>>
>> I have the install manual for 3.5 (
>> http://www.bitsavers.org/pdf/sun/sunos/3.5/800-2089-10A_Release_3.5_Manual_…
>> )
>> And did find this about TME Now ( https://pkgsrc.se/wip/tme )
>> And these instructions (which from the link before this page indicated as
>> of 2019 they still worked
>> http://people.csail.mit.edu/fredette/tme/sun3-150-nbsd.html )
>>
>> That would get me “close” if I could somehow write to an emulated SCSI
>> device.. or the SD card that supported it… etc. Blue SCSI, Green SCSI, Pi
>> SCSI, etc. I don’t care which (would prefer something that would let me use
>> a “real” drive… SSD or similar is fine… rather than SD card). I do have an
>> image that gets me “somewhat” booting with a SCSI2SD but the additional
>> drive mounts are wrong in the fstab/mtab so I can’t get it fully to boot….
>>
>> If I can figure out the process, I’ll make images and share them (for all
>> the early Sun OS’s) and write up a web page and post it to archive.org so
>> nobody has to go thru this again :-)
>>
>> Earl
>>
>> On Mar 13, 2024, at 5:56 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>>
>> TME - most recently https://osdn.net/projects/nme/ - in theory does what
>> you want. Its setup and use is a bit idiosyncratic, and I have found that
>> it is unhappy running on OSs other than NetBSD, but if you get it running
>> it just works. I've used it to set up installations of SunOS 3 and 4 on
>> sun2, sun3, and sun4 architectures.
>>
>> -Henry
>>
>> On Wed, 13 Mar 2024 at 17:49, <earl(a)baugh.org> wrote:
>>
>>> I’m looking for a “Sun OS 3.5” emulation running where I can attach a
>>> SCSI emulator to it and get the full OS installed.
>>> I’ve got tape images but I haven’t found the process to emulate how it
>>> used to work.
>>>
>>> From the initial boot prompt, you extracted them to the “swap partition”
>>> and then started the install and it would prompt you for the next tape when
>>> needed.
>>> So, I guess we’d need an emulated tape or something, etc. I have all
>>> the tar’s (all the way back to Sun OS 1 or so) but have been frustrated
>>> trying to make some progress.
>>>
>>> Earl
>>>
>>>
>>> On Mar 13, 2024, at 5:31 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>>>
>>> On Wed, 13 Mar 2024 at 17:27, Will Senn <will.senn(a)gmail.com> wrote:
>>>
>>>> On 3/13/24 3:12 PM, Henry Bent wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I've been working quite a bit recently with SunOS 4 on a SPARCstation
>>>> 5, seeing what I can coax out of it in terms of building and supporting a
>>>> modern computing environment. I know that TUHS isn't really the right
>>>> place for this, but can someone point me to somewhere that is? I've made
>>>> significant progress in some areas and spent a lot of cycles to get there -
>>>> for instance, I have GCC 3.4.6 up and running - so I'd like to contribute
>>>> to a community if one exists. Is there a modern equivalent of sun-managers?
>>>>
>>>> -Henry
>>>>
>>>> Not an answer to the question, but on a tangent...
>>>>
>>>> I recently saw that Solaris 11.4 SRU66 was released and had a yearning
>>>> to see how things in Solaris land were doing (can't stand Gnome so
>>>> OpenIndiana's a bust)... but with Oracle's Solaris, it's a mess at least
>>>> for hobbyists (only get release patches, so I'm guessing the most up to
>>>> date 'release' was 11.4 in 2018). So, when I saw your post on SunOS 4, I
>>>> thought I'd tool around and see if it was easy to get rolling as a VM,
>>>> turns out things have come a long way on that front:
>>>>
>>>> https://defcon.no/sysadm/playing-with-sunos-4-1-4-on-qemu/
>>>>
>>>> OpenWindows 3... wow... works great on my Mint instance. Now, if I
>>>> could just remember how commands work on SunOS :).
>>>>
>>>
>>> Thanks Will! You may also be interested in
>>> https://john-millikin.com/running-sunos-4-in-qemu-sparc as another
>>> resource about running SunOS 4 in QEMU. I have considered moving my setup
>>> to QEMU, especially as it would be very easy to create a hard drive image
>>> since I am using a SCSI2SD board, but there is something about running
>>> these things on the original hardware that is difficult to leave behind.
>>>
>>> -Henry
>>>
>>>
>>>
>>
>
The emulation of proper tape drive records is present in TME - see this
fragment from the setup file that I have to install SunOS 2:
## power up the machine:
##
# uncomment this line to automatically power up the machine when
# tmesh starts:
#
command tape0 load sunos-2.0-sun2/tape1/01 sunos-2.0-sun2/tape1/02
sunos-2.0-sun2/tape1/03 sunos-2.0-sun2/tape1/04 sunos-2.0-sun2/tape1/05
sunos-2.0-sun2/tape1/06 sunos-2.0-sun2/tape1/07 sunos-2.0-sun2/tape1/08
sunos-2.0-sun2/tape1/09 sunos-2.0-sun2/tape1/10
command mainbus0 power up
Let me know if you need more of a walkthrough, I'd have to get NetBSD
running in a VM as I haven't worked with this in a long time, but I'm sure
it still works.
-Henry
On Wed, 13 Mar 2024 at 18:04, <earl(a)baugh.org> wrote:
> I had old instructions to do this but getting TME running was a bit
> quirky. And the package had lost most of its support.
> (I did just go out and find that some folks have somewhat resurrected it…)
>
> I have the install manual for 3.5 (
> http://www.bitsavers.org/pdf/sun/sunos/3.5/800-2089-10A_Release_3.5_Manual_…
> )
> And did find this about TME Now ( https://pkgsrc.se/wip/tme )
> And these instructions (which from the link before this page indicated as
> of 2019 they still worked
> http://people.csail.mit.edu/fredette/tme/sun3-150-nbsd.html )
>
> That would get me “close” if I could somehow write to an emulated SCSI
> device.. or the SD card that supported it… etc. Blue SCSI, Green SCSI, Pi
> SCSI, etc. I don’t care which (would prefer something that would let me use
> a “real” drive… SSD or similar is fine… rather than SD card). I do have an
> image that gets me “somewhat” booting with a SCSI2SD but the additional
> drive mounts are wrong in the fstab/mtab so I can’t get it fully to boot….
>
> If I can figure out the process, I’ll make images and share them (for all
> the early Sun OS’s) and write up a web page and post it to archive.org so
> nobody has to go thru this again :-)
>
> Earl
>
> On Mar 13, 2024, at 5:56 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>
> TME - most recently https://osdn.net/projects/nme/ - in theory does what
> you want. Its setup and use is a bit idiosyncratic, and I have found that
> it is unhappy running on OSs other than NetBSD, but if you get it running
> it just works. I've used it to set up installations of SunOS 3 and 4 on
> sun2, sun3, and sun4 architectures.
>
> -Henry
>
> On Wed, 13 Mar 2024 at 17:49, <earl(a)baugh.org> wrote:
>
>> I’m looking for a “Sun OS 3.5” emulation running where I can attach a
>> SCSI emulator to it and get the full OS installed.
>> I’ve got tape images but I haven’t found the process to emulate how it
>> used to work.
>>
>> From the initial boot prompt, you extracted them to the “swap partition”
>> and then started the install and it would prompt you for the next tape when
>> needed.
>> So, I guess we’d need an emulated tape or something, etc. I have all
>> the tar’s (all the way back to Sun OS 1 or so) but have been frustrated
>> trying to make some progress.
>>
>> Earl
>>
>>
>> On Mar 13, 2024, at 5:31 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>>
>> On Wed, 13 Mar 2024 at 17:27, Will Senn <will.senn(a)gmail.com> wrote:
>>
>>> On 3/13/24 3:12 PM, Henry Bent wrote:
>>>
>>> Hi all,
>>>
>>> I've been working quite a bit recently with SunOS 4 on a SPARCstation 5,
>>> seeing what I can coax out of it in terms of building and supporting a
>>> modern computing environment. I know that TUHS isn't really the right
>>> place for this, but can someone point me to somewhere that is? I've made
>>> significant progress in some areas and spent a lot of cycles to get there -
>>> for instance, I have GCC 3.4.6 up and running - so I'd like to contribute
>>> to a community if one exists. Is there a modern equivalent of sun-managers?
>>>
>>> -Henry
>>>
>>> Not an answer to the question, but on a tangent...
>>>
>>> I recently saw that Solaris 11.4 SRU66 was released and had a yearning
>>> to see how things in Solaris land were doing (can't stand Gnome so
>>> OpenIndiana's a bust)... but with Oracle's Solaris, it's a mess at least
>>> for hobbyists (only get release patches, so I'm guessing the most up to
>>> date 'release' was 11.4 in 2018). So, when I saw your post on SunOS 4, I
>>> thought I'd tool around and see if it was easy to get rolling as a VM,
>>> turns out things have come a long way on that front:
>>>
>>> https://defcon.no/sysadm/playing-with-sunos-4-1-4-on-qemu/
>>>
>>> OpenWindows 3... wow... works great on my Mint instance. Now, if I could
>>> just remember how commands work on SunOS :).
>>>
>>
>> Thanks Will! You may also be interested in
>> https://john-millikin.com/running-sunos-4-in-qemu-sparc as another
>> resource about running SunOS 4 in QEMU. I have considered moving my setup
>> to QEMU, especially as it would be very easy to create a hard drive image
>> since I am using a SCSI2SD board, but there is something about running
>> these things on the original hardware that is difficult to leave behind.
>>
>> -Henry
>>
>>
>>
>
TME - most recently https://osdn.net/projects/nme/ - in theory does what
you want. Its setup and use is a bit idiosyncratic, and I have found that
it is unhappy running on OSs other than NetBSD, but if you get it running
it just works. I've used it to set up installations of SunOS 3 and 4 on
sun2, sun3, and sun4 architectures.
-Henry
On Wed, 13 Mar 2024 at 17:49, <earl(a)baugh.org> wrote:
> I’m looking for a “Sun OS 3.5” emulation running where I can attach a SCSI
> emulator to it and get the full OS installed.
> I’ve got tape images but I haven’t found the process to emulate how it
> used to work.
>
> From the initial boot prompt, you extracted them to the “swap partition”
> and then started the install and it would prompt you for the next tape when
> needed.
> So, I guess we’d need an emulated tape or something, etc. I have all
> the tar’s (all the way back to Sun OS 1 or so) but have been frustrated
> trying to make some progress.
>
> Earl
>
>
> On Mar 13, 2024, at 5:31 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>
> On Wed, 13 Mar 2024 at 17:27, Will Senn <will.senn(a)gmail.com> wrote:
>
>> On 3/13/24 3:12 PM, Henry Bent wrote:
>>
>> Hi all,
>>
>> I've been working quite a bit recently with SunOS 4 on a SPARCstation 5,
>> seeing what I can coax out of it in terms of building and supporting a
>> modern computing environment. I know that TUHS isn't really the right
>> place for this, but can someone point me to somewhere that is? I've made
>> significant progress in some areas and spent a lot of cycles to get there -
>> for instance, I have GCC 3.4.6 up and running - so I'd like to contribute
>> to a community if one exists. Is there a modern equivalent of sun-managers?
>>
>> -Henry
>>
>> Not an answer to the question, but on a tangent...
>>
>> I recently saw that Solaris 11.4 SRU66 was released and had a yearning to
>> see how things in Solaris land were doing (can't stand Gnome so
>> OpenIndiana's a bust)... but with Oracle's Solaris, it's a mess at least
>> for hobbyists (only get release patches, so I'm guessing the most up to
>> date 'release' was 11.4 in 2018). So, when I saw your post on SunOS 4, I
>> thought I'd tool around and see if it was easy to get rolling as a VM,
>> turns out things have come a long way on that front:
>>
>> https://defcon.no/sysadm/playing-with-sunos-4-1-4-on-qemu/
>>
>> OpenWindows 3... wow... works great on my Mint instance. Now, if I could
>> just remember how commands work on SunOS :).
>>
>
> Thanks Will! You may also be interested in
> https://john-millikin.com/running-sunos-4-in-qemu-sparc as another
> resource about running SunOS 4 in QEMU. I have considered moving my setup
> to QEMU, especially as it would be very easy to create a hard drive image
> since I am using a SCSI2SD board, but there is something about running
> these things on the original hardware that is difficult to leave behind.
>
> -Henry
>
>
>
On Thu, Mar 7, 2024, 4:14 PM Tom Lyon <pugs78 at gmail.com> wrote:
> For no good reason, I've been wondering about the early history of C
> compilers that were not derived from Ritchie, Johnson, and Snyder at Bell.
> Especially for x86. Anyone have tales?
> Were any of those compilers ever used to port UNIX?
An unusual one would be the “revenue bomb” compiler that Charles Simonyi and Richard Brodie did at Microsoft in 1981.
This compiler was intended to provide a uniform environment for the menagerie of 8 and 16-bit computers of the era. It compiled to a byte code which executed through a small interpreter. This by itself was hardly new of course, but it had some unique features. It generated code in overlays, so that it could run a code base larger than 64KB (but it defined only one data segment). It also defined a small set of “system” commands that allowed for uniform I/O. I still have the implementation spec for that interpreter somewhere.
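The general scheme is easy to picture; here is a wholly invented sketch of such a byte-code dispatch loop (generic illustration only, not the Simonyi/Brodie engine; all opcodes are made up):

```c
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };  /* invented opcodes */

static void run(const unsigned char *code)
{
    int stack[64], *sp = stack;                /* tiny operand stack */

    for (;;) {
        switch (*code++) {
        case OP_PUSH:  *sp++ = *code++;        break;
        case OP_ADD:   sp--; sp[-1] += sp[0];  break;
        case OP_PRINT: printf("%d\n", *--sp);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    const unsigned char prog[] =
        { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);                                 /* prints 5 */
    return 0;
}
```

One interpreter like this per target machine, plus uniform "system" commands for I/O, is what would make a single compiled byte-code image portable across the era's incompatible micros.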
This compiler was used for the first versions of Multiplan and Word, and my understanding is that the byte code engine was later re-used in Visual Basic. I think the compiler also had a Xenix port, maybe it even was Xenix native (and at this time, Xenix would still essentially have been V7).
I am not sure to what extent this compiler was independent of the Bell compilers. It could well be that it was based on PCC; Microsoft was a Unix licensee after all, and at the time busy doing ports. On the other hand, Charles Simonyi would certainly have been capable of creating his own from scratch. I do know that this compiler preceded Lattice C, the latter of which was distributed by Microsoft as Microsoft C 1.0.
Maybe others know more about this Simonyi/Brodie compiler?
Paul
Notes:
http://www.memecentral.com/mylife.htm
https://web.archive.org/web/20080905231519/http://www.computerworld.com/sof…
http://seefigure1.com/images/xenix/xenix-timeline.jpg
> The author of this routine has been writing
> random-number generators for many years and has
> never been known to write one that worked.
It sounds like Ken to me. Although everybody had his
own favorite congruential random number generator,
some worse than others, I believe it was Ken who put
one in the math library.
The very fact that rand existed, regardless of its quality,
enabled a lovely exploit. When Ken pioneered password
cracking by trying every word in word lists at hand, one
of the password files he found plenty of hits in came from
Berkeley. He told them and they responded by assigning
random passwords to everybody. That was a memorable
error. Guessing that the passwords were generated by
a simple encoding of the output of rand, Ken promptly
broke 100% of the newly "hardened" password file.
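To make the exploit concrete, here is a hedged sketch (not Ken's actual method; the eight-lowercase-letter encoding, seed range, and names are all invented) of why deterministic "random" passwords fall to seed enumeration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical encoding: eight lowercase letters from rand(). */
static void make_pw(unsigned seed, char pw[9])
{
    srand(seed);
    for (int i = 0; i < 8; i++)
        pw[i] = 'a' + rand() % 26;
    pw[8] = '\0';
}

int main(void)
{
    char target[9], guess[9];

    make_pw(12345, target);        /* stand-in for a victim's password */
    for (unsigned seed = 0; seed < 65536; seed++) {   /* tiny seed space */
        make_pw(seed, guess);
        if (strcmp(guess, target) == 0) {
            printf("seed %u reproduces \"%s\"\n", seed, target);
            return 0;
        }
    }
    return 1;
}
```

With so few possible seeds, every "hardened" password can be regenerated faster than a dictionary attack would run.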
Doug
The Zorland C compiler from Zortech, an x86 PC compiler from a small UK company.
I used it to write my final year project at college in 1988. Sadly I couldn't use the Interdata running V7, as I was doing image processing and needed to access an ISA framestore card.
I built a motion-compensated video standards converter, and thanks to the 80287 I managed something like 6 hours per frame.
I think Zortech claimed they wrote one of the first true C++ compilers (rather than a translator to C).
-Steve
For no good reason, I've been wondering about the early history of C
compilers that were not derived from Ritchie, Johnson, and Snyder at Bell.
Especially for x86. Anyone have tales?
Were any of those compilers ever used to port UNIX?
Perspective from a friend...
Warner
---------- Forwarded message ---------
From: Poul-Henning Kamp <phk(a)phk.freebsd.dk>
Date: Mon, Mar 11, 2024 at 2:24 AM
Subject: non-Bell C compiler
To: <imp(a)freebsd.org>
I noticed the "non-bell C" thread on TUHS and can add a data point
from datamuseum.dk:
The Danish Company "Christian Rovsing A/S" evidently had a C-compiler
for their CR80 mini computer, and my guess is that they created it
in order to qualify for DoD contracts in the POSIX regime.
Example C source:
https://datamuseum.dk/aa//cr80/80/802c73092.html
Listing file from the compiler:
https://datamuseum.dk/aa//cr80/ef/ef65339dc.html
Listing file from the assembler:
http://datamuseum.dk/aa/cr80/32/32ef5456f.html
Listing from the linker:
https://datamuseum.dk/aa//cr80/17/170304129.html
So far we have not spotted the actual compiler anywhere
in the media we have read.
Mention of C being used for project delivery:
http://datamuseum.dk/aa/cr80/1c/1c0b47f0e.html
And btw: That one is from a CDC disc-pack which a father+son
team has read by building an SMD-USB converter.
That project may be interesting in the TUHS domain as well:
https://github.com/Datamuseum-DK/pico-smd-controller
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk(a)FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
I've been using some variant of Linux (currently Debian 12) as my
primary OS for daily activities (email, web, programming, photo
editing, etc.) for the past twenty years or so. Prior to that it was
FreeBSD for nearly ten years after short stints with Minix and Linux
when they first came out. At the time (early/mid 90's), I was working
for Bell Labs and had a ready supply of SCSI drives salvaged from
retired equipment. I bought a Seagate ST-01A ISA SCSI controller for
whatever 386/486 I owned at the time and installed Slackware floppy by
floppy.
When I upgraded to a Pentium PC for home, Micron P90 I think, I
installed a PCI SCSI controller (Tekram DC-390 equipped with an
NCR53c8xx chip) to make use of my stash of drives. Under Linux it was
never entirely stable. I asked on Usenet and someone suggested trying
the other SCSI driver. This was the ncr driver that had been ported
from FreeBSD. My stability problems went away and I decided to take a
closer look at FreeBSD. It reminded me of SunOS from the good old pre-
System V era along with the version of Unix I had used in grad school
in the late 70's/early 80's so I switched.
I eventually went back to Linux because it was clear that the user
community was getting much larger, I was using it professionally at
work and there was just a larger range of applications available.
Lately, I find myself getting tired of the bloat and how big and messy
and complicated it has all gotten. Thinking of looking for something
simpler and was just wondering what do other old timers use for their
primary home computing needs?
Jeff
Because I sometimes use ArcMap, I run Windows. Cygwin plus the sam editor
make me feel at home. The main signs of Microsoft are the desktop, Bing,
File Explorer and Task Manager.
Hello everyone, I reach out in my time of need regarding a potential source of DMERT materials. I've recently come into possession of a hard disk unit from a 5ESS switch, presumably the 5ESS-2000 variant, part UN 375G:
https://i.imgur.com/yQzY5Hs.jpeg
The disk itself appears to be an Ultra320 SCSI disk, which I unfortunately do not have the tools to do anything with myself. After looking into various solutions, I'm not getting the warm fuzzies about finding the necessary hardware on my first shot; these sorts of hardware specifics are not my strong suit. The story I got is that it came from a working system, so it could possibly hold artifacts, but having already sunk a little over $1,000 into getting this, I'm hesitant to drop more on hardware I'm not 100% confident is correct for the job.
Does anyone have any recommendations, whether a service, hardware, anything, that I could use to try and get at what is on this disk? Even if it's just sending it off to someone along with enough storage for them to make me a dd image of the thing, I just feel so close yet so far on finally figuring out if I've managed to land a copy of DMERT.
Thanks in advance for any advice; I'm really hoping the end of this story is that I find DMERT artifacts to get archived and preserved, which would be such a satisfying conclusion to all this 3B20/5ESS study as of late. I wish I had the resources to see the rest through myself, but this is getting into an area I regard with quite a bit of trepidation. What I don't want to do is inadvertently damage something by getting it wrong.
- Matt G.
[image: unnamed.png]
CCA EMACS? That's a name I have not heard in a long time ...
I forgot if I'm not allowed to load images, sorry if I just made a mistake.
Normally I wouldn't cross the beams like this, but a comment thread John
Nagle posted on this HN story is well written and was, for me, a great read.
https://news.ycombinator.com/item?id=39630457
On Wednesday, March 6th, 2024 at 3:55 PM, Ken Thompson <kenbob(a)gmail.com> wrote:
> On Wed, Mar 6, 2024 at 1:45 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>
> > On Wednesday, March 6th, 2024 at 11:53 AM, Douglas McIlroy <douglas.mcilroy(a)dartmouth.edu> wrote:
> >
> > > After Multics, I ran interference to keep our once-burned higher management from frowning too much on further operating-system research.
> > >
> > > Doug
> >
> > This alone is an all-too-valuable skill that contributes to the cultural success of countless projects. Great ideas can too often die on the vine when the upper echelons have quite different opinions of where time and effort should be placed, and I am glad that in my own career I likewise work with understanding immediate supervisors and business analysts who go to bat for our needs and concerns. The importance of a supportive workplace culture in which work is genuinely valued and defended cannot be overstated.
> >
> > - Matt G.
>
> unix was written in c, c was written in b, b was written in tmg, and doug wrote tmg. it is all his fault.
>
>
Ken, your modesty is showing :)
I feel the same way about big things I'm working on in my day job. No matter how much folks try to laud me as our architect, nothing I did would exist without what my supervisor years and years ago handed me to start with before he moved on to greener pastures. Invention will always be a group effort, I'm just so glad this particular group effort (re: UNIX) has and continues to have the impact that it does.
A former manager (and respected colleague) would often say "I'm rubber, you're glue, what you bounce off me sticks to you." and it took me a little bit to appreciate what I thought he meant, but even longer to realize that saying encompassed the good as well.
- Matt G.
P.S. Hey Dave, I Bcc'd you, discussions with folks here often remind me of your good advice and management. Hope you're well, would love to hear from you if you see this!
Just to bring it full circle, after a bit of discussion it looks like what Henry is working with is the initial System V release for PDP-11/70, not some fabled PDP-11 SVR2, so the documentation I linked as well as some material on squoze.net concerning System V in SimH all apply directly. Subject adjusted accordingly.
- Matt G.
On Wednesday, March 6th, 2024 at 1:55 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
> On Wed, 6 Mar 2024 at 16:51, segaloco <segaloco(a)protonmail.com> wrote:
>
>> On Wednesday, March 6th, 2024 at 1:16 PM, Henry Bent <henry.r.bent(a)gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> I have a distribution of SVR2 on the PDP-11 that I have managed to get booting into the initial root dump, but it is not clear to me how to proceed from there to format a /usr filesystem and setup for multi-user.
>>>
>>> ...
>>>
>>> I haven't managed to find any installation manuals or the like on Bitsavers, and I can't even manage to find a listing in the source of the expected disk partitions/sizes. I feel very much like I am stumbling in the dark here and would appreciate any pointers to how to proceed. Thanks!
>>>
>>> -Henry
>>
>> First off I didn't know SVR2 made it to the PDP-11, I thought they cut it off after the initial System V release, is what you have AT&T or some derivative version?
>>
>> Second, this is the setup instructions for DEC processors for the initial release of System V which included the PDP-11/70: https://archive.org/details/unix-system-administrators-guide-5-0/04%20Setti…
>>
>> Additionally, here is the Operator's Guide which details bootstrapping the system among other things: https://archive.org/details/unix-system-operators-guide-release-5-0/mode/2up
>>
>> While not SVR2, hopefully the differences are minimal enough that you can use those. Good luck!
>>
>> Also regarding finding more documentation: sadly, AT&T stripped out the /usr/doc materials with System V, so these critical pieces of documentation actually can't be found in a typical system distribution; rather, you had to get the paper copies. I'm not aware of any discovery of TROFF sources for any of this stuff past System III. I do have it on my long-term list to eventually synthesize copies of said documents from available scans so they can be more easily diff'd, but my current focus is much, much earlier.
>
> Thank you, this is a wonderful starting point. I often forget that sometimes archive.org will have documentation that is not duplicated in other sources, so this is a welcome reminder. I'll read through all of this and report back.
>
> -Henry
Hello all,
I have a distribution of SVR2 on the PDP-11 that I have managed to get
booting into the initial root dump, but it is not clear to me how to
proceed from there to format a /usr filesystem and setup for multi-user.
The root dump boots on a simulated 11/70 with an RP06:
--
sim> boot rp
#0=unixgdtm
UNIX/sysV: unixgdtm
real mem = 3145728 bytes
avail mem = 3068864 bytes
INIT: SINGLE USER MODE
--
I'm mostly a BSD person but I'm familiar enough with some later SysV
systems. That being said, the initialization procedure here is completely
foreign to me. I have cpio files for the entire system and I know in
theory how to extract them, but I'm stuck at the basics of creating /usr,
setting up /etc and the like. I have a fully extracted filesystem from the
cpio files that I can browse but I can't find enough information in the
manpages. I haven't managed to find any installation manuals or the like
on Bitsavers, and I can't even manage to find a listing in the source of
the expected disk partitions/sizes. I feel very much like I am stumbling
in the dark here and would appreciate any pointers to how to proceed.
Thanks!
-Henry
> When Rudd, Doug, Ken, Dennis, *et al* start to develop UNIX
Although I jumped into Unix as soon as it was born, I was not one of those
who "start[ed] to develop it".
Doug
> From: Douglas McIlroy <douglas.mcilroy(a)dartmouth.edu>
> Although I jumped into Unix as soon as it was born, I was not one of
> those who "start[ed] to develop it".
http://doc.cat-v.org/unix/pipes/
Dennis wrote that "UNIX is a very conservative system. Only a handful of its
ideas are genuinely new." (And quite right he was, too!) Among the ones that
are new, pipes, although less important now than they used to be, were a major
part of the constellation of things that drove its adoption, early on. And I
can't see how pushing pipes was not "developing UNIX"! I'm afraid you'll just
have to live with it! :-)
Noel
Hi All,
I was wondering, what were the best early sources of information for
regexes and why did folks need to know them to use unix? In my recent
explorations, I have needed to have a better understanding of them, so
I'm digging in... awk's my most recent thing and it's deeply associated
with them, so here we are. I went to the bookshelf to find something
appropriate and as usual, I've traced to primary sources to some extent.
I started with Mastering Regular Expressions by Friedl, and I won't
knock it (it's one of the bestsellers in our field), but it's much to
long for my personal taste and it's not quite as systematic as I would
like (the author himself notes that his interests are less technical
than authors preceding him on the subject). So, back to the shelves...
Bourne's The Unix Environment and Kernighan & Pike's The Unix
Programming Environment both talk about them in the context of grep, ed,
sed, and awk. Going further back, the Unix Programmer's Manual v7 - ed,
grep, sed, awk...
After digging around it seems like folks needed regexes for ed, grep,
sed and awk... and any other utility that leveraged the wonderful nature
of these handy expressions. Fine. Where did folks go to learn them? Was
there a particularly good (succinct and accurate) source of information
that folks kept handy? I'm imagining (based on what I've seen) that
someone might cut out the ed discussion or the grep pages of the manual
and tape them to their monitors, but maybe I'm stooopid and they didn't
need no stinkin' memory device for regexes - surely they're intuitive
enough that even a simpleton could pick them up after seeing a few
examples... but if that were really the case, Friedl's book would have
been a flop and it wasn't :). So seriously, if you remember that far
back - what was the definitive source of your regex knowledge and what
were the first motivators for learning them?
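For readers retracing those manual pages today, the notation survives nearly unchanged; a small C illustration (mine, using the POSIX regcomp/regexec interface rather than the V7-era routines) of the kind of pattern the ed and grep pages taught:

```c
#include <sys/types.h>
#include <regex.h>
#include <stdio.h>

int main(void)
{
    regex_t re;
    const char *pat = "^[A-Z][a-z]*$";   /* a capitalized word, alone on a line */
    const char *tests[] = { "Unix", "unix", "UNIX" };

    if (regcomp(&re, pat, 0) != 0)       /* cflags 0: basic REs, as in ed/grep */
        return 1;
    for (int i = 0; i < 3; i++)
        printf("%-5s %s\n", tests[i],
               regexec(&re, tests[i], 0, NULL, 0) == 0 ? "matches" : "no match");
    regfree(&re);
    return 0;
}
```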
Thanks,
Will
I hope everyone's having a lovely tail end of whichever season is gracing your
hemisphere. Had some surprise snow this morning up in the NW corner of the US,
hoping it bodes well for a mild summer.
I'm curious, is anyone aware of any attempts to revise John Lions's UNIX
Commentary for versions beyond the Sixth Edition? Having finished my
disassembly of the classic video game Dragon Quest this past year, I'm now doing
some planning for a similar work and have considered practicing the art a little
by doing some diffing of V6 and V7 and feeling out the process by putting down
some revisions, that way I've got some of the flow and kinks worked out before I
start on my own "Commentary on Dragon Quest" manuscript. I'd hate to double up
on something someone else has already done though, so if V7 for instance has
gotten this treatment, then perhaps focusing on PWB or the CB-UNIX kernel would
minimize the potential rehashing.
Also if anyone has any thoughts, suggestions, etc. from their own experiences
working up very detailed source-level documentation of a large software product,
I'd certainly be interested, as I know this is going to turn into quite the
sprawling undertaking once I really get going, especially if I try and work in
the differences between the Japanese and U.S. releases (of which there are many.)
- Matt G.
> From: Bakul Shah
> Use of "flag" for this purpose seems strange. "option" makes more sense.
People on this list seem to forget that there were computers before UNIX.
The _syntax_ of "-f" probably predates any UNIX; Multics used it extensively.
See the "Introduction to Multics", MAC-TR-123, January 1974 (a little after
UNIX V1, but I expect I could probably track it back further in time, if I
cared to put in the effort); pg. 3-24.
Interestingly, I looked though the CTSS manual, and CTSS did not seem to use
this syntax for flag arguments: see, e.g., the SAVE command (section AH.3.03).
The _name_ "flag" came in early on UNIX. (Multics called them "arguments";
see above, pg. 3-27, top line.) We can see this happen - see:
http://squoze.net/UNIX/v1man/man1/du
which calls the "-a" and "-s" "arguments"; but in:
http://squoze.net/UNIX/v1man/man1/ld
"-s", "-u", etc are called "flag arguments".
Long enough ago that certainty about the etymology/rationale is probably now
lost.
Noel
Hello,
A great article on the floppy disk.
https://www.abortretry.fail/p/the-floppy-disk
--
Boyd Gerber <gerberb(a)zenez.com> 801 849-0213
ZENEZ 1042 East Fort Union #135, Midvale Utah 84047
> why did AT&T refer to "flags" as "keyletters" in its SysV documentation?
Bureaucracies beget bureaucratese--polysyllabic obfuscation, witness
APPLICATION USAGE in place of BUGS.
One might argue that replacing "flag" by "option", thus doubling the number
of syllables, was a small step in that direction. In fact it was a
deliberate attempt to discard jargon in favor of normal English usage.
Doug
> Al Kossow wrote:
>
> > there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it.
>
> URL?
>
> I've never heard of a surviving copy
Your best start is here: http://www.powertrancortex.com
The UK Powertran Cortex was quite close to the Marinchip M9900 in capabilities and John’s software was ported to it. The website has an emulator and disk images for most of the user land and “MDEX” -- a simple executive that John wrote to bootstrap his software stack. This material survived in the hands of a few Powertran Cortex enthusiasts. They also had disks for NOS/MT (binaries + sysgen), but those were found after that website was made 10+ years ago.
The Powertran Cortex design was also used to build an industrial control computer, the PP95. The UK company behind that did most of the porting work and had a complete M9900 system to do the work on. In 2018 the inventory of that company was found in a garage, including that M9900 system, along with more disk images and manuals, among them the NOS/MT User Manual. The disks included a few that contained reconstituted (partial) source code for NOS/MT. With a little help from John, I was able to reconstitute the remainder of the source code. None of this is online yet.
I will contact you off list to see how this can be best preserved.
Paul
Earlier this year two well known computer scientists passed away.
On New Year’s Day it was Niklaus Wirth, aged 90. A month later it was John Walker, aged 75. Both have some indirect links to Unix.
For Wirth the link is that a few sources claim that Plan 9 and the Go language are in part influenced by the design ideas of Oberon, the language and the OS. Maybe others on this list know more about those influences.
For Walker, the link is via the company that he was running as a side-business before he got underway with AutoCAD: https://www.fourmilab.ch/documents/marinchip/
In that business he was selling a 16-bit system for the S-100 bus, based around the TI9900 CPU (which from a programmer perspective is quite similar to a PDP11). For that system he wrote a Unix-like operating system around 1978-1980, called NOS/MT. He had never worked with Unix, but had pored over the BSTJ issues about it. It was fully written in assembler.
The design was rather unique, maybe inspired by Heinz Lycklama’s “Satellite Processor” paper in BSTJ 57-6. It has a central microkernel that handles message exchange, process scheduling and memory management. Each system call is a message. However, the system call message is then passed on to a privileged “fat kernel” process that handles it. The idea was to provide multiprocessor and network transparency: the microkernel could decide to run processes on other boards in the same rack or on remote systems over a network. Also the kernel processes could be remote. Hence its name “Network Operating System / Multitasking” or “NOS/MT”.
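Purely to make the message-per-system-call idea concrete, an invented C sketch (not John's code; every name, field, and opcode is hypothetical) of a syscall travelling from user process to microkernel to a "fat kernel" process:

```c
#include <stdio.h>
#include <stdint.h>

enum { MSG_OPEN = 1, MSG_READ, MSG_WRITE };   /* invented opcodes */

struct syscall_msg {
    uint16_t sender;        /* requesting process id */
    uint16_t opcode;        /* which system call this message encodes */
    uint16_t args[4];       /* call-specific arguments */
    int16_t  result;        /* filled in by the kernel process */
};

/* Stand-in for the privileged "fat kernel" process. */
static void kernel_process(struct syscall_msg *m)
{
    switch (m->opcode) {
    case MSG_OPEN: m->result = 3;  break;     /* pretend file handle */
    default:       m->result = -1; break;
    }
}

/* Microkernel: pure message routing, no syscall semantics of its own.
 * In NOS/MT the target could be a kernel process on another board in
 * the rack, or on a remote system over a network. */
static void dispatch(struct syscall_msg *m)
{
    kernel_process(m);                        /* here: always local */
}

int main(void)
{
    struct syscall_msg m = { .sender = 42, .opcode = MSG_OPEN };
    dispatch(&m);
    printf("open -> %d\n", m.result);         /* prints: open -> 3 */
    return 0;
}
```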
The system calls are pretty similar to Unix. The file system is implemented very similar to Unix (with i-nodes etc.), with some notable differences (there are file locking primitives and formatting a disk is a system call). File handles are not shareable, so special treatment for stdin/out/err is hardcoded. Scheduling and memory management are totally different -- unsurprising as in both cases it reflects the underlying hardware.
Just as NOS/MT was getting into a usable state, John decided to pivot to packaged software including a precursor of what would become the AutoCAD package. What was there worked and saw some use in the UK and Denmark in the 1980’s -- there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it. I guess it will remain one of those many “what if” things in computer history.
Just recollecting old memories: why did AT&T refer to "flags" as
"keyletters" in its SysV documentation? Some sort of denial of
Ed5/6/7/BSD's very existence?
The one good thing they did was the TTY driver...
-- Dave, who used to work for a SysVile shop
All, while I'm reminiscing about Minnie's history, I just noticed that we
have hit 30,000 postings on the combined TUHS/PUPS mailing list.
===
In the spirit of early Unix, does that mean the message number
has wrapped back to low numbers, skipping those occupied by
messages still running, like process IDs?
Norman Wilson
(temporarily thousands of km from)
Toronto ON
On Wed Feb 23 16:33, 1994, I turned on the web service on my machine
"minnie", originally minnie.cs.adfa.edu.au, now minnie.tuhs.org (aka
www.tuhs.org) The web service has been running continuously for thirty
years, except for occasional downtimes and hardware/software upgrades.
I think this makes minnie one of the longest running web services
still in existence :-)
For your enjoyment, I've restored a snapshot of the web site from
around mid-1994. It is visible at https://minnie.tuhs.org/94Web/
Some hyperlinks are broken.
## Web Logs
The web logs show me testing the service locally on Feb 23 1994,
with the first international web fetches on Feb 26:
```
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:13 1994] GET / HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:18 1994] GET /BSD.html HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:20 1994] GET /Images/demon1.gif HTTP/1.0
...
estcs1.estec.esa.nl [Sat Feb 26 01:48:21 1994] GET /BSD-info/BSD.html HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:48:30 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:49:46 1994] GET /BSD-info/cdrom.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:20 1994] GET /BSD-info/BSD.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:24 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:04 1994] GET /BSD-info/BSD.html HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:10 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
```
## Minnie to This Point
Minnie originally started life in May 1991 as an FTP server running KA9Q NOS
on an IBM XT with a 30M RLL disk, see https://minnie.tuhs.org/minannounce.txt
By February 1994 Minnie was running FreeBSD 1.0e on a 386DX25 with 500M
of disk space, 8M of RAM and a 10Base2 network connection. I'd received a copy
of the BSDisc Vol.1 No.1 in December 1993. According to the date on the file
`RELNOTES.FreeBSD` on the CD, FreeBSD 1.0e was released on Oct 28 1993.
## The Web Server
I'd gone to a summer conference in Canberra in mid-February 1994 (see
pg. 29 of https://www.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V15.1.pdf
and https://minnie.tuhs.org/94Web/Canberra-AUUG/cauugs94.html, 10am)
and I'd seen the Mosaic web browser in action. With FreeBSD running on
minnie, it seemed like a good idea to set up a web server on her.
NCSA HTTPd server v1.1 had been released at the end of Jan 1994, see
http://1997.webhistory.org/www.lists/www-talk.1994q1/0282.html
It was the obvious choice to be the web server on minnie.
## Minnie from Then to Now
You can read more about minnie's history and her hardware/software
evolution here: https://minnie.tuhs.org/minnie.html
I obtained the "tuhs.org" domain in May 2000 and switched minnie's
domain name from "minnie.cs.adfa.edu.au" to "minnie.tuhs.org".
Cheers!
Warren
P.S. I couldn't wait until Friday to post this :-)
To expand on Branden's observation that translating from one member of the
roff family to another is hard, I note that the final output usually
presents a text in a shape that has been fine-tuned for appearance. In
grammatical terms it might best be described transformationally, a la
Chomsky: a basic text with a fairly simple grammar tweaked by
pretty-printing transforms.
Translation involves parsing input into an AST according to one grammar and
unparsing to generate output according to another. Chomsky's work uses
transformational grammars primarily for generation. I'm not aware of any
implementation of the inverse: parsing according to a transformational
grammar. Certainly no practical tools exist for doing so.
Unfortunately, one doesn't consciously write roff according to the model I
have outlined. This means that parsing it is more like parsing a natural
language than a strictly defined programming language. So, the absence of
formal tools is exacerbated. Roff scripts, like everyday English, are
written according to an intuitive--and occasionally ad hoc--grammar that
varies both with authors and with time. And seventy years of hard work has
not yet fully automated the parsing of English.
Doug
Doug McIlroy is still around and contributing…
With the same insight & wry sense of humour :)
===========
<https://www.tuhs.org/mailman3/hyperkitty/list/tuhs@tuhs.org/message/X5P6FYM…>
> Apologies for posting the above title to TUHS. It's not the first time that
> I've crossed signals between groff and TUHS, but hey, I've got 10 years on Biden.
>
> Doug
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Apologies for posting the above title to TUHS. It's not the first time that
I've crossed signals between groff and TUHS, but hey, I've got 10 years on
Biden.
Doug
Hello everyone, I'm currently laying the groundwork for a restart of my mandiff project, expanding it to encompass not just the manual-proper, but also the documents leading to the "Documents for UNIX" collections as well. Thus far I'm about halfway done on a ROFF restoration of the earliest surviving draft of Dennis Ritchie's The UNIX Time-Sharing System paper[1], reconstructed from existing, later NROFF text and ROFF conventions from the Third Edition manual[2].
Thus far, the additional documents I've found explicitly referenced in the earlier days are:
User's Reference Manual to B - K. Thompson[3]
C Reference Manual - D. M. Ritchie[4 - see note]
M6 Manual - A. D. Hall[5]
ROFF Manual - J. F. Ossanna[6 - see note]
A Manual for the TMG Compiler-writing Language - M. D. McIlroy[7]
UNIX Assembler Manual - D. M. Ritchie[8 - see note]
NROFF Users' Manual - J. F. Ossanna[9 - see note]
YACC Manual - S. C. Johnson[10 - see note]
Aside from these references, there are two other B papers, one a tutorial[11] by B. W. Kernighan and the other an MH-TSS reference by S. C. Johnson[12]. I don't think I saw either referenced in the manual-proper. The latter then makes further reference to a "Bell Laboratories BCPL" by R. H. Canaday and D. M. Ritchie, although I suspect this is lost, as I can't find it.
Anywho, my plan is to take any known ROFF/NROFF sources for the above documents and reconstruct the earliest versions possible and then add them to my revamped repository in the timeframes that they first start showing up as references in the manual to derive a more holistic view of the creation of manuals and guides in the early days. A few matters prompted me to start over:
1. Noticing that there is direct lineage between some of the text in the UnixEditionZero paper and later manual pages like as(I), I want to capture the base text as far back as possible, which in this case would mean ensuring a commit in the chain captures the transfer of the text from the UnixEditionZero paper to as(I) to give a more complete history.
2. Al Kossow has now scanned and preserved a UNIX Program Generic II manual, meaning I no longer have to make as many assumptions about what changed and what didn't in the USG/Research split. Thus far, assumptions about the Program Generic line have been based on the extant MERT manual (which in turn is described as deriving from the Program Generic III manual.)
3. The picture of PWB/2.0 is becoming a bit clearer as time goes on, but is still murky, and that has implications for the changes between the Sixth Edition (where my current mandiff repo[13] ends) and the Seventh Edition. Rather than having to go back and redo a bunch of work, I think the first pass can stand on its own as a source of guidance on redoing this.
4. The cleanliness of the repository history is not to my liking, there are several instances of multiple commits across pages related to some larger, holistic change that would really be easier to study if they were in one. Starting over, I now have a much clearer picture of V1->V6 that I can use to produce a tighter history.
Anywho, to summarize what I'm looking for feedback on, first, are there any major documents I'm omitting from this investigation? Any particular technical memoranda that are crucial to the big picture? Additionally, is anyone aware whether USG Program Generic I (or earlier?) had a formal edition of the Programmer's Manual or if they would've just referred folks to the research manual prior to PG II? With the latter question I'm trying to determine if USG manual history starts with the PG II manual Al Kossow has scanned or if I should be considering a hole in the record where a PG I manual goes.
Thanks for following along, hopefully getting this groundwork in place will ensure the next go at this project is even more fruitful than the last!
- Matt G.
--- References ---
1 - https://www.tuhs.org/Archive/Distributions/Research/McIlroy_v0/UnixEditionZ…
2 - https://www.tuhs.org/cgi-bin/utree.pl?file=V3/man
3 - https://www.bell-labs.com/usr/dmr/www/kbman.html
4 - I may have a copy of the earliest version of this I can identify. The earliest version I can find online is dated January 15th, 1974 (https://www.bell-labs.com/usr/dmr/www/cman74.pdf) and contains the text "C is also available on the HIS 6070 computer at Murray Hill and on the IBM System/370 at Holmdel" whereas this particular copy of the paper states "C is also available on the HIS 6070 computer at Murray Hill, using a compiler written by A. Snyder and currently maintained by S. C. Johnson. A compiler for the IBM System/360/370 series is under construction." The manual is a TROFF printout and isn't formatted as a memorandum like the link included here. References to the C Reference Manual begin to show up as early as the Second Edition manual, although these imply the C manual is still being written. Does anyone know if the C Reference Manual started in ROFF and then moved to NROFF some time before the earliest copies we're aware of? In any case, I intend to scan this copy, it just hasn't bubbled up in my project list yet.
5 - https://tuhs.pdp-11.org.ru/Documentation/TechReports/Bell_Labs/CSTRs/2.pdf
6 - I have a copy that differs from the one I could find here: https://www.cs.dartmouth.edu/~doug/roff71/roff71.pdf It is not in technical memorandum format and also may be missing a few pages (in mine, the tutorial ends with the "Translation" section, but the linked document contains a couple more paragraphs on page offset (.po), merge patterns, and an envoi, or conclusion). The most striking difference is that the linked paper is Doug's version for TSS, but the paper I've got lists the invocation in the UNIX style (roff +N -M name1 name2 ...) and is likely representative of the UNIX version with Joe Ossanna's work. Doug, if you catch this and believe the attribution on this page (https://wiki.tuhs.org/doku.php?id=systems:2nd_edition) should carry your by-line, or both yours and jfo's, I'm happy to make the edit. The text of the UNIX version I have does seem to descend from your original paper. By the way, an even earlier version of this paper for runoff is available here (https://manpages.bsd.lv/history/runoff69.low.pdf).
7 - https://www.tuhs.org/Archive/Distributions/Research/1972_stuff/tmg.pdf
8 - This is first referenced in the Third Edition manual. Some of the text may derive from the second Appendix of the "UnixEditionZero" paper linked above, the manpage certainly has influence from that document. Not sure if any of that implies the manual may have started in ROFF, but in any case, constitutes an early reference.
9 - This reference first appears, verifiably, in the Third Edition. However, the Second Edition manual does list nroff(I) in the TOC, but this page is not actually included in the extant PDF in the archive. In any case, the earliest version of the NROFF Users' Manual I'm aware of is the Second Edition, dated 9/11/74. Is any such First Edition extant on the public record?
10 - The earliest reference to this manual I can find is in the Third Edition. Not sure if there are any earlier specimens than the text in the Sixth Edition sources.
11 - https://www.bell-labs.com/usr/dmr/www/btut.html
12 - https://www.bell-labs.com/usr/dmr/www/bref.html
13 - https://gitlab.com/segaloco/mandiff
Accidentally ran into this today.
I’ve never seen this put together and thought it worth adding to the TUHS archives.
Hadn’t realised that both the authors of “Ball & Brown” (1968) were Aussies and UNSW alumni.
Studying a little accounting, this paper was mentioned as ’the most cited’ paper in the field.
The Big New Idea in 1968 was to use computers to analyse stock market data & show correlations.
I hadn’t known either had come back to Australia (QLD or WA then UNSW/AGSM),
then founded AGSM, with a focus on digital analysis of data.
Ian Johnstone, from CSE, went to AGSM to run their computers.
He recommended DEC + Unix and was backed by Brown, the director.
[ Andy Hume was recruited by Ian J, before leaving for a job at Bell Labs in the Computing Research Centre. ]
The AGSM license caused conniptions with the AT&T lawyers.
While AGSM fell into the near free “University & Education” license, they weren’t using Unix just for ‘education’.
AGSM became the first commercial licensee of Unix, or so I was told at the time.
Ian Johnstone was AUUGN editor while at AGSM, before scooting off to the USA and rising to heights there.
While Ball & Brown studied in Faculty of Commerce, they obviously had enough of a grounding
in ‘computing’ and data collection / handling / analysis to set the stage for their 1968 paper.
In 1971, Fortran IV was taught to first year students in Science, using John M Blatt’s (of UNSW) textbook.
It’s not unreasonable that Finance & Accounting had courses or training in Computing 5 years before that.
Within 10 years, they were both back at UNSW, running AGSM, teaching & using Digital research methods,
based solidly on Unix…
cheers
steve
===============
<https://www.agsm.edu.au/bobm/editorials/0206edit.html>
Looking back, I realise it must have been a fortuitous convergence for me:
thanks to Philip Brown and Ian Johnstone, the AGSM had been running Unix machines since 1976;
thanks to Bob Wood, I read of Bob Axelrod's work with GAs in examining the Repeated Prisoner's Dilemma before it was published
(and Axelrod was also at Michigan);
thanks to my innate curiosity, I had been reading and contributing to the Usenet news groups on the Internet since 1986.
Sydney was not so far from Ann Arbor, finally.
===============
Philip Brown
<https://fbe.unimelb.edu.au/accounting/caip/aahof/ceremonies/philip_brown>
Philip Brown holds an important and unique place within the annals of Australian accounting.
As co-author of the research paper that redefined the course of academic accounting research in the last forty years
he inadvertently set the research agendas and directions for a legion of academics that followed.
Philip started school at Riverstone in western Sydney, with a short stint at Summer Hill in his final two years of primary education,
proceeding to Canterbury Boys High School, where he scored an average pass in his Leaving Certificate.
He then worked as a junior clerk in the accounting department of British Motor Corporation at Zetland.
Advised to seek tertiary qualifications, he thought he should enrol for a commerce degree.
Philip enrolled as a part-time student in the Faculty of Commerce at the University of New South Wales, gaining the highest pass in the course.
This level of achievement was maintained throughout his degree leading inevitably to an honours year,
graduating with First Class Honours and taking a University Medal.
After graduation Philip tutored at the University of New South Wales,
then received a Fulbright Scholarship to study in the USA, heading to the University of Chicago Graduate School of Business.
He completed his MBA in 1963, finishing top of the class.
During this period [2 years after the MBA] he met Ray Ball, with whom he wrote a seminal paper that defined the course of accounting research for the next forty years.
Rather than pursue a career in the United States, Philip returned to Australia as a Reader in Accounting at the University of Western Australia (July, 1968 – June, 1970).
In 1974, Philip moved to Sydney to help establish the Australian Graduate School of Management (AGSM).
As inaugural Foundation Director he introduced world-class MBA and MPA (public administration) programs
to develop the skills of Australia's future leaders.
During his AGSM days Philip championed the development of Australian data in financial accounting research.
He saw the need for Australian share price data to be systematically collected and made available to researchers
spending a great deal of time personally collecting data and providing programming support for these databases.
The existence of these databases as a high quality resource for researchers is often taken for granted today
but it was scholars with foresight like Philip who saw the need and acted accordingly.
===============
Ray Ball
<https://fbe.unimelb.edu.au/accounting/caip/aahof/ceremonies/ray-ball>
Raymond John Ball is one of the most influential contemporary accounting scholars,
having held professorial positions in Australia at UNSW and Queensland,
and in the United States at Rochester and Chicago.
With a first-class honours degree and the University Medal from UNSW,
Ray moved to the University of Chicago where he earned an MBA and PhD.
In 1968 Ray Ball co-authored the seminal paper
‘An Empirical Evaluation of Accounting Income Numbers’
that revolutionised financial accounting research.
Drawing on the developing financial economics literature and linking accounting information and share prices in a novel manner,
the paper provided the foundation for modern capital markets-based research.
When the paper became the inaugural recipient of the American Accounting Association’s Seminal Contributions to the Accounting Literature Award in 1986,
it was observed that
‘no other paper … has played so important a role in the development of accounting research during the past thirty years’.
It remains the most highly cited accounting research paper.
Ray Ball has also had a major influence on accounting education in Australia,
having been Professor of Accounting at the University of Queensland (1972-1976),
and foundation professor at the Australian Graduate School of Management (UNSW) (1976-1986),
where he was instrumental in the development of the first US-style PhD program in Accounting and Finance in Australia.
During his time at Queensland and UNSW he was instrumental in developing rigorous empirical research in Australian capital markets,
addressing issues such as the risk/return trade-off, dividend policy and taxation mechanisms.
===============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Hello fellow lovers of old UNIX,
Would anyone happen to have a raster scan (not OCR) of the original
printing of UNIX Programmer's Manual, 7th edition? Does such a thing
exist? Given that Brian S. Walden produced and published a PDF reprint
of this manual (presumably done with some "modern" version of troff)
back in 1998, I reason that there probably wasn't much interest in
preserving the original print by painstaking scanning (and the files
from such a scan would have been ginormous by 1998 standards), hence I
am not certain if such a scanned version exists - but I thought I
would ask nonetheless.
I was however very pleased to discover that some very kind soul named
Erica Fischer did scan and upload the complete set of Usenix printed
books for 4.2BSD and 4.3BSD - here is the 4.2BSD version:
https://archive.org/details/uum-ref-4.2bsd
https://archive.org/details/uum-supplement-4.2bsd
https://archive.org/details/upm-ref-4.2bsd
https://archive.org/details/upm-supplement-4.2bsd
https://archive.org/details/smm-4.2bsd
and here is 4.3BSD:
https://archive.org/details/uum-ref-4.3bsd
https://archive.org/details/uum-supplement-4.3bsd
https://archive.org/details/upm-ref-4.3bsd
https://archive.org/details/upm-sup1-4.3bsd
https://archive.org/details/upm-sup2-4.3bsd
https://archive.org/details/smm-4.3bsd
https://archive.org/details/uum-index-4.3bsd
It is my understanding that all supplementary docs (the papers that
were originally in volumes 2a and 2b in the V7 manual) were retroffed
by UCB/Usenix for 4.3BSD edition, but the earlier 4.2BSD Usenix print
seems to be different - it looks like for 4.2BSD they only did a new
troff run for all man pages and for new (Berkeley-added) supplementary
docs, but in the case of docs which originally appeared in V7 vol 2,
it appears that Usenix did some kind of analogue mass reproduction
from a historical V7 master, *without* doing a new troff run on those
docs. *If* this hypothesis is correct, then Erica's uploaded scan of
4.2BSD manuals can serve as a practical substitute for the presumably-
missing scan of the original printing of V7 manual - but I would like
to double-check my hypothesis with others who are presumably more
knowledgeable about this ancient history (some of you actually lived
through that history, unlike me!), hence the reason for this post.
I would appreciate either confirmation or correction of the guesses
and conjectures I expressed above.
M~
Hello TUHS,
I recently have been working on the Plan 9 fs/v6fs and fs/v32fs programs,
another member of the community had noticed bugs within them and I wanted
to verify that the new code is working as expected. I haven't had an issue
verifying v6fs using files from the TUHS archive but v32fs has proved to
be a bit more tricky. After a little bit of work we were able to get the 'file2'
located at https://www.tuhs.org/Archive/Distributions/USDL/32V/ to mount and read
files. But given that all the files here are binaries it was a bit hard to make sure
we're getting the correct information. I attempted to cross reference the files I get
against the file2.tar also located within that spot in the archive but I am getting tar
errors when extracting this file, so it's not exactly obvious whether what I am checking against
is correct.
So I would like to ask if someone here knows exactly what the sha1sums of these files are
supposed to be and/or has another image with known contents I could test against. I will
preface this with the fact that I am not very well versed in old UNIX filesystems so
please let me know if I've missed anything.
Thank you,
Jacob Moody
Hi
I am interested in reconstructing the Public Domain 32000 (PD32), which appeared in a 1986 issue of Micro Cornucopia.
It claimed to run Unix System V on a PC 8-bit ISA board using the NS32016 chip set. Does anyone remember this system and/or have any interest in it?
Here is a link from Hackaday more fully describing the effort:
ISA bus slave NS32016 processor board | Hackaday.io
Thanks, Andrew Lynch
> From: Paul Ruizendaal
>> the ambiguous phrase "had the first implementation of FTP", which
>> has been flagged as needing clarification
> From RFC 354 ... and from RFC 414
Those are NCP FTP, a slightly different protocol (and implementation) from TCP
FTP. (The code from the NCP one was sometimes recycled into the TCP one; see
e.g.:
https://github.com/PDP-10/its-vault/blob/master/files/sysnet/ftpu.161
which has both in one program.)
These RFC's you listed are obviously pre-TCP; the first TCP RFC is
RFC-675. (The first RFC that even mentions TCP seems to be RFC-661.) RFC's
are all NCP-related until around #700 or so, when the mix starts to change.
Maybe the "needing clarification" refers to these two different FTP's? Without
an explicit classifier, does that text refer to NCP FTP or TCP FTP?
Noel
> From: Bakul Shah
> He was part of NSFNet, so could have got first FTP on NSFnet or a
> later version of FTP.
You all are talking about _two separate FTP's_ (as I pointed out
previously). If you all would stop confusing yourselves, you'd be able to sort
out the bogons.
In this particular case, the NSFnet appeared at a _much_ later stage of the
growth of the Internet (yes, it is spelled with a capital 'I'; the morons at
the AP were not aware that 'internet' was a pre-existing word with a
_different meaning_) than when Dave was working with the Fuzzball, and by that
point there were _many_ TCP FTP's (e.g. the ITS one I previously sent the URL
to the source for), so the 'first FTP on NSFnet' is a non-concept.
The best bet for accurate data is to look at the TCP meeting minutes from the
IEN series:
https://www.rfc-editor.org/ien/ien-index.html
Looking quickly, the first one that Dave appears in might be IEN-160,
"Internet Meeting Notes -- 7-8-9 October 1980". (He wasn't in the "Attendees"
lists of any of the earlier ones I looked at.) Look in the "Status Reports"
sections to see if he says anything about where he's at. The one for this one
says:
"Dave described the configuration of equipment at COMSAT which consists of a
number of small hosts, mainly LSI-11s. ... COMSAT has also used NIFTP to
transmit files between their hosts and ISIE. The NIFTP software was provided
by UCL. ... COMSAT plans to .. arrange a permanent connection to the ARPANET."
I have no idea what a "NIFTP" might be. Also, there is a reason that serious
historians prefer contemporary written records, not people's memories.
Noel
> I see that the wording on his Wikipedia page has the ambiguous phrase "had
> the first implementation of FTP", which has been flagged as needing
> clarification, so I intend to provide it.
>
> In both this interview:
>
> https://conservancy.umn.edu/bitstream/handle/11299/113899/oh403dlm.pdf
>
> ... and this video recording of Mills himself giving a lecture at UDel:
>
> https://youtu.be/08jBmCvxkv4?t=428
>
> ... it's quite clear that it's literally true - he authored, compiled,
> installed, implemented, and tested the very first (and apparently second)
> FTP server.
It may be impossible to provide hard evidence. From RFC 354 it seems to me that the protocol took on a recognisable shape around July 1972 and from RFC 414 it seems to me that there were a number of implementations by November 1972, and unfortunately Dave Mills is not mentioned. His recollection may well be correct, but finding proof he was the first in a 4-month time slot 50+ years ago may be too ambitious.
https://www.ietf.org/rfc/rfc354.txt
https://www.ietf.org/rfc/rfc414.txt
Maybe the internet history list can shed some more light on the matter:
https://elists.isoc.org/mailman/listinfo/internet-history
Dave Mills, of fuzzball and ntp fame, one time U Delaware died on the 17th
of January.
He was an interesting, entertaining, prolific and rather idiosyncratic
emailer. Witty and informative.
G
What is the best public, unambiguous, non-YouTube reference I can cite for
the late David Mills' initial FTP work?
I see that the wording on his Wikipedia page has the ambiguous phrase "had
the first implementation of FTP", which has been flagged as needing
clarification, so I intend to provide it.
In both this interview:
https://conservancy.umn.edu/bitstream/handle/11299/113899/oh403dlm.pdf
... and this video recording of Mills himself giving a lecture at UDel:
https://youtu.be/08jBmCvxkv4?t=428
... it's quite clear that it's literally true - he authored, compiled,
installed, implemented, and tested the very first (and apparently second)
FTP server. But Wikipedia's guidelines discourage YouTube-only citations,
and the text in the interview seems insufficiently detailed to have
citation value.
What is the best reference I can cite?
Thanks!
--
Royce
Hi Lennart,
At 2024-01-18T15:45:55+0000, Lennart Jablonka wrote:
> Quoth John Gardner:
> > Thanks for reminding me, Branden. :) I've yet to get V7 Unix working with
> > the latest release of SimH, so that's kind of stalled my ability to develop
> > something in K&R-friendly C.
>
> I went ahead and wrote a little C/A/T-to-later-troff-output converter in
> v7-friendly and C89-conforming C:
>
> https://git.sr.ht/~humm/catdit
This is an exciting prospect but I can't actually see anything there.
I get an error.
"401 Unauthorized
You don't have the necessary permissions to access this page. Index"
> I’m not confident in having got the details of spacing right (Is that
> 55-unit offset when switching font sizes correct?)
I've never heard of this C/A/T feature/wart before. Huh.
> and the character codes emitted are still those of the C/A/T,
> resulting in the wrong glyphs being used.
The codes should probably be remapped by default, with a command-line
option to restore the original ones. I would of course recommend
writing out 'C' commands with groff special character names.
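To sketch what I mean in C (this is not code from catdit, and the table
entries are hypothetical placeholders rather than real C/A/T code
assignments, which would have to come from the troff font tables):

    #include <stdio.h>

    struct remap {
        int cat_code;      /* glyph code as sent to the C/A/T */
        const char *name;  /* groff special character name */
    };

    /* Hypothetical entries only, for illustration. */
    static const struct remap table[] = {
        { 0115, "em" },    /* em dash */
        { 0116, "bu" },    /* bullet */
        { 0117, "*a" },    /* Greek alpha */
    };

    /* Emit one glyph: a named 'C' command when we know the mapping,
       else fall back to the raw single-letter 'c' command. */
    void emit_glyph(int code)
    {
        size_t i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (table[i].cat_code == code) {
                printf("C%s\n", table[i].name);
                return;
            }
        }
        printf("c%c\n", code);
    }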
> I created the attached document like this:
>
> v7$ troff -t /usr/man/man0/title >title.cat
> host$ catdit <title.cat | dpost -F. -Tcat >title.ps
>
> (Where do the two blank pages at the end come from?)
Good question; we may need to rouse a C/A/T expert.
> PS: Branden, for rougher results, if you happen to have a Tektronix
> 4014 at hand (like the one emulated by XTerm), you can use that to
> look at v7 troff’s output. Tell your SIMH to pass control bytes
> through and run troff -t | tc.
I'd love to, just please make your repo available to the public. :)
Regards,
Branden
John Gardner wrote:
> I'm a professional graphic designer with access to commercial typeface
> authoring software. Send me the highest-quality and most comprehensive
> scans of a C/A/T-printed document, and I'll get to work.
Are you offering to donate your labor in terms of typeface design, or
will it be a type of deal where the community will need to collectively
pitch in money to cover the cost of you doing it professionally?
In either case, the "C/A/T-printed document" of most value to this
project would be the same one G. Branden Robinson is referring to:
> If you don't have my scan of CSTR #54 (1976), which helpfully dumps all
> of the glyphs in the faces used by the Bell Labs CSRC C/A/T-4, let me
> know and I'll send it along. I won't vouch for its high quality but it
> should be comprehensive with respect to coverage.
The paper in question is Nroff/Troff User's Manual by Joseph F. Ossanna,
dated 1976-10-11, which was indeed also CSTR #54. The document is 33
pages long in its original form, and page 31 out of the 33 is the most
interesting one for the purpose of font recreation: it is the page that
exhibits all 4 fonts of 102 characters each. Here are the few published
scans I am aware of:
1) Page 245 of:
http://bitsavers.org/pdf/att/unix/7th_Edition/UNIX_Programmers_Manual_Seven…
2) Page 235 of:
http://bitsavers.org/pdf/att/unix/7th_Edition/UNIX_Programmers_Manual_Seven…
3) Page 239 of:
http://bitsavers.org/pdf/att/unix/7th_Edition/VA-004A_UNIX_Programmers_Manu…
4) Page 499 of:
https://archive.org/details/uum-supplement-4.2bsd
Question to Branden: the scan you are referring to as "my scan", how
does it compare to the 4 I just linked above? If your scan has better
quality than all 4 versions I linked above, can you please make it
public?
M~
> All, I got this e-mail from Holger a while back. Somehow it went into
> a folder and has lurked unseen for way too long.
>
> Does anybody know any more about PCS Unix and, if so, where should
> I place the code that Holger has donated into the Unix Archive?
I don’t know much about PCS Unix, but I did come across many references to Newcastle Connection (and Unix United) when researching early networking and the various approaches to giving early Unix a networking API. I think there is no other set of surviving sources for this. Maybe Holger disagrees, but I would say that PCS Unix is best placed in the “Early networking” section.
By the way, for those interested, here is a start to read up on Unix United: https://en.wikipedia.org/wiki/Newcastle_Connection
To some extent, it is similar to the “RIDE” software developed at Bell Labs Naperville by Priscilla Lu and to S/F Unix developed by GWR Luderer at Murray Hill. As far as I know, the sources for both of those have been lost to time.
Hi again John,
> I only meant "professional" insofar as aptitude with graphics is concerned.
> I won't accept money; I'm offering my labour out of love for typography,
> computer history and its preservation, and of course, the technology that
> got Unix the funding it needed to revolutionise computing. In any case,
> there's no actual "design" work involved: it's literally just tracing
> existing shapes to recreate an existing design. I do stuff like this
> <https://github.com/file-icons/icons#why-request-an-icon-cant-i-submit-a-pr>
> for *fun*, for crying out loud.
Sounds great! If you are indeed serious about trying to recreate the
ancient C/A/T character set in PostScript fonts (or some other font
format that can be converted into a form that can be downloaded into a
genuine PostScript printer), I'll try to find some time to produce the
following:
1) A set of C/A/T binary files corresponding to that CSTR #54 manual,
as well as BWK's troff tutorial which usually follows right after in
book compilations. This step is simply a matter of running the original
troff executable (with -t option) on the original source files for
these docs - but since I actually run an OS that still includes that
original version of troff and you said you don't, it would probably be
easier for me to produce and publish these files.
2) A converter tool from C/A/T binary codes to PostScript, using
whatever fonts you give it. I'll test it initially using the set of
fonts which I developed for my 4.3BSD-Quasijarus pstroff - all
characters will be there, all positioning will come from original
troff, but it won't look pretty because most PS native font characters
don't match those of C/A/T. Then as you progress with your font
drawing project, you should be able to substitute your fonts instead
of mine, and see how the output improves.
> Nice! The more material I have, the merrier. As for the scan that Branden
> and I were referring to, I've uploaded a copy to Dropbox
Using pdfimages utility with -list option, I compared the image format
and resolution in various scans I described in my previous mail, plus
this new one you just shared, and concluded that the best quality is
contained in these two:
http://bitsavers.org/pdf/att/unix/7th_Edition/UNIX_Programmers_Manual_Seven…
http://bitsavers.org/pdf/att/unix/7th_Edition/VA-004A_UNIX_Programmers_Manu…
Here are extracted PNG images of just the relevant page from both PDFs:
https://www.freecalypso.org/members/falcon/troff/cstr54-fontpage-sri.png
https://www.freecalypso.org/members/falcon/troff/cstr54-fontpage-ucb.png
Each PNG is a lossless extract from the corresponding PDF, made with
pdfimages utility. Each image is described as being 600x600 DPI in
PDF metadata, and the print is said to be in 12 point - numbers for
converting from pixels to .001m units in font reconstruction...
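To spell out that conversion (assuming ".001m" here means thousandths
of an em, i.e. the usual 1000-units-per-em design grid):

    1 em = 12 pt = 12/72 in = 1/6 in
    at 600 DPI: 600 px/in * 1/6 in = 100 px per em
    so 1 px = 1000/100 = 10 design units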
M~
Hi John,
At 2024-01-18T00:43:41+1100, John Gardner wrote:
> I'm a professional graphic designer with access to commercial typeface
> authoring software. Send me the highest-quality and most comprehensive
> scans of a C/A/T-printed document, and I'll get to work.
If you don't have my scan of CSTR #54 (1976), which helpfully dumps all
of the glyphs in the faces used by the Bell Labs CSRC C/A/T-4, let me
know and I'll send it along. I won't vouch for its high quality but it
should be comprehensive with respect to coverage.
> Thanks for reminding me, Branden. :) I've yet to get V7 Unix working
> with the latest release of SimH,
Let me know in private mail where you got stuck. Maybe I can help.
> I'm still up for this, assuming you've not already started.
No, I haven't--perhaps because I am an Ada fanboy, the prospect of
coding in pre-standard C and its mission to turn anything that can be
lexically analyzed into _some_ sequence of machine instructions has not
stoked my excitement.
(Which isn't to say that one _can't_ write safe code using K&R C; my
fear is that having to remember all of the things the compiler won't do
for you would overwhelm the task at hand. Too bad Unix V7 didn't have
Perl, since this is basically a text transformation problem.)
Regards,
Branden
All, I got this e-mail from Holger a while back. Somehow it went into
a folder and has lurked unseen for way too long.
Does anybody know any more about PCS Unix and, if so, where should
I place the code that Holger has donated into the Unix Archive?
Many thanks, Warren
----- Forwarded message from hveit01(a)web.de -----
From: hveit01(a)web.de
To: wkt(a)tuhs.org
Subject: PCS kernel sources
Hi Warren,
Some time ago I subscribed to the tuhs mailing list because of my
interests in Unix.
I have been working on regenerating the ancient PCS unix (see more
details in the README file in the attached archive).
Now it is in a state to publish the results. You may decide to put this
up on the TUHS website for reference.
PCS UNIX (dubbed MINUX) is special in the way that it is derived from
an SVR3.2 UNIX with the Newcastle connection integrated.
The Newcastle connection is an early attempt for multicomputer
networking; it provides a shared file system over the network, similar
to the later NFS.
To my knowledge, it is the first time that sources for it are described
(beyond some publicly available research papers); I haven't yet managed
to find the original sources of this.
Regards
Holger Veit
----- End forwarded message -----
No idea what COFF is, but in the early 1980s, two non-troff options on
the software side were -
1) TeX. From Donald Knuth; the name means tau epsilon chi, pronounced tech,
not tex. The urban legend is that upon seeing an initial copy of one of his
books sometime in the 1970s, he yelled "blech!" and decided that if you
wanted your documents to look right, you needed to be able to do it
yourself, and TeX rhymes with blech.
2) Scribe. From Brian Reid, of Carnegie-Mellon
See http://www.columbia.edu/cu/computinghistory/scribe.pdf
-Brian
Clem Cole clemc at ccc.com wrote:
> Not really UNIX -- so I'm BCC TUHS and moving to COFF
>
> On Tue, Jan 9, 2024 at 12:19 PM segaloco via TUHS <tuhs at tuhs.org> wrote:
>
> > On the subject of troff origins, in a world where troff didn't exist, and
> > one purchases a C/A/T, what was the general approach to actually using the
> > thing? Was there some sort of datasheet the vendor supplied that the end
> > user would have to program a driver around, or was there any sort of
> > example code or other materials provided to give folks a leg up on using
> > their new, expensive instrument? Did they have any "packaged bundles" for
> > users of prominent systems such as 360/370 OSs or say one of the DEC OSs?
> >
> Basically, the phototypesetter part was turnkey with a built-in
> minicomputer with a paper tape unit, later a micro and a floppy disk as a
> cost reduction. The preparation for the typesetter was often done
> independently, but often the vendor offered some system to prepare the PPT
> or Floppy. Different typesetter vendors targeted different parts of the
> market, from small local independent newspapers (such as the one my sister
> and her husband owned and ran in North Andover MA for many years), to
> systems that the Globe or the Times might use. Similarly, books and magazines
> might have different systems (IIRC the APS-5 was originally targeted for
> large book publishers). This was all referred to as the 'pre-press'
> industry and there were lots of players in different parts.
>
> Large firms that produced documentation, such as DEC, AT&T *et al*., and
> even some universities, might own their own gear, or they might send it out
> to be set.
>
> The software varied greatly, depending on the target customer. For
> instance, by the early 80s, the Boston Globe's input system was still
> terrible - even though the computers had gotten better. I had a couple of
> friends working there, and they used to b*tch about it. But big newspapers
> (and I expect many other large publishers) were often heavy union shops on
> the back end (layout and presses), so the editors just wanted to set strips
> of "column wide" text as the layout was manual. I've forgotten the name of
> the vendor of the typesetter they used, but it was one of the larger firms
> -- IIRC, it had a DG Nova in it. My sister used CompuGraphic Gear, which
> was based on 8085's. She had two custom editing stations and the
> typesetter itself (it sucked). The whole system was under $35K in
> late-1970s money - but targeted to small newspapers like hers. In the
> mid-1980s, I got her a Masscomp at a reduced price and put 6 Wyse-75
> terminals on it, so she could have her folks edit their stories with vi,
> run spell, and some of the other UNIX tools. I then reverse-engineered the
> floppy enough to split out the format she wanted for her stories -- she
> used a manual layout scheme. She still has to use the custom stuff for
> headlines and some other parts, but it was a load faster and more parallel
> (for instance, we wrote an awk script to generate the School Lunch menus,
> which they published each week).
>
Hi All.
V10 had a program called "monk" which was a "document compiler".
It produced troff and knew how to run eqn, tbl, and pic, and I'm
guessing also refer. It seems to have been inspired by Scribe.
The V10 files from Dan Cross have the device independent troff output
for the paper that describes monk.
G. Branden Robinson was kind enough to turn it into PostScript for me;
his story is below, forwarded by permission. I'm also attaching
the PostScript file.
I'm curious, did this see a lot of use inside Research or outside of it?
At first glance, it looks like the kind of thing that might have
caught on, especially for people who weren't already used to troff.
Thanks,
Arnold
> Date: Wed, 10 Jan 2024 12:25:53 -0600
> From: "G. Branden Robinson" <g.branden.robinson(a)gmail.com>
> To: Aharon Robbins <arnold(a)skeeve.com>
> Subject: Re: v10 ditroff output file
>
> Hi Arnold,
>
> At 2024-01-09T08:50:28+0200, Aharon Robbins wrote:
> > Hi.
> >
> > The file of interest is attached. It's from vol2/monk in Dan Cross's
> > V10 sources in the Distributions/Research directory from TUHS.
> >
> > If you can get postscript out of it somehow,
>
> Bad news and good news.
>
> ...unfortunately there was too much impedance mismatch with groff/grops.
>
> Some font names differ but that's not a big deal. (Also, today I
> learned: the troff that generated this reloaded all the fonts at each
> new page.) The troff(1) that generated this also attempted vertical
> motion before starting the first page. That also wasn't a big deal. I
> thought I was going to be able to text-edit my way to a solution...but
> then...
>
> grops really wants the device resolution to be 72,000 dpi, not 720, and
> we'd have to write a rescaling feature to support that. Just editing
> the output file won't do because the file uses Kernighan's optimized,
> anonymous output command pervasively.
>
> groff_out(5):
> ddc Move right dd (exactly two decimal digits) basic units u,
> then print glyph with single‐letter name c.
>
> In groff, arbitrary syntactical space around and within this
> command is allowed to be added. Only when a preceding
> command on the same line ends with an argument of variable
> length a separating space is obligatory. In classical
> troff, large clusters of these and other commands were used,
> mostly without spaces; this made such output almost
> unreadable.
>
> So all these two-digit motions would need to become five-digit motions
> or all the glyphs would pile up on each other. (And that's exactly what
> happened after I did a couple of fixups and threw gxditview(1) at
> it.)
>
> Out of curiosity, I tried DWB 3.3 troff.
>
> It did well, until the fourth page, when it fell over and produced
> PostScript that made Ghostscript very angry.
>
> So I tried Heirloom Doctools troff.
>
> 20 pages of goodness.
>
> But be advised: some sort of extension was used to embed other
> PostScript files:
>
> ./bin/dpost: can't open picture file samples/tailor.ps (line 1493) (page 2)
> ./bin/dpost: can't open picture file samples/memo.ps (line 1749) (page 3)
> ./bin/dpost: can't open picture file samples/tmbody.ps (line 2151) (page 4)
> ./bin/dpost: can't open picture file samples/tmcs.ps (line 2694) (page 5)
>
> So I went to minnie.tuhs.org to see if they were there...
>
> ...and they were.
>
> So here you go. Renders without errors (though Heirloom is nowhere near
> as validation-happy as groff), and the output seems plausible.
>
> > I'll really appreciate it. :-)
>
> Enjoy!
>
> Regards,
> Branden
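For the curious, here is a rough, untested sketch in C of the rescaling
Branden describes - multiplying every motion in the 720-units-per-inch
V10 output by 100 to reach grops's 72,000 - and of unpacking the
anonymous 'ddc' clusters. It handles only the common output commands;
a real filter would need the full output grammar, and the 'x res' line
and 'D' drawing coordinates would still need attention:

    #include <stdio.h>
    #include <ctype.h>

    #define SCALE 100                /* 72000 / 720 */

    static long num(void)            /* read one decimal argument */
    {
        long n = 0, sign = 1;
        int c = getchar();
        if (c == '-') { sign = -1; c = getchar(); }
        while (c != EOF && isdigit(c)) { n = n * 10 + (c - '0'); c = getchar(); }
        if (c != EOF) ungetc(c, stdin);
        return sign * n;
    }

    static void copyline(int c)      /* copy the rest of a line verbatim */
    {
        putchar(c);
        while ((c = getchar()) != EOF && c != '\n')
            putchar(c);
        if (c == '\n') putchar(c);
    }

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF) {
            if (isdigit(c)) {        /* packed 'ddc': 2-digit motion, glyph */
                int d2 = getchar(), g = getchar();
                long n = (c - '0') * 10 + (d2 - '0');
                printf("h%ld\nc%c\n", n * SCALE, g);   /* unpack it */
            } else if (c == 'h' || c == 'v' || c == 'H' || c == 'V') {
                printf("%c%ld", c, num() * SCALE);     /* scale motions */
            } else if (c == 'n') {   /* line break: two vertical amounts */
                long a = num();
                getchar();           /* the separating space */
                printf("n%ld %ld", a * SCALE, num() * SCALE);
            } else if (c == 's' || c == 'f' || c == 'p') {
                printf("%c%ld", c, num());  /* sizes, fonts, pages: copy */
            } else if (c == 'c') {   /* single-letter glyph */
                putchar(c);
                if ((c = getchar()) != EOF) putchar(c);
            } else if (c == 'C') {   /* named glyph: copy to whitespace */
                putchar(c);
                while ((c = getchar()) != EOF && !isspace(c))
                    putchar(c);
                if (c != EOF) putchar(c);
            } else if (c == 'x' || c == 'D') {
                copyline(c);         /* device control and drawing lines */
            } else {
                putchar(c);          /* 'w', spaces, newlines, ... */
            }
        }
        return 0;
    }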
> The <C>omputerphile Youtube channel did a video about 10 years ago about
> "The Great 202 jailbreak:"
https://www.youtube.com/watch?v=CVxeuwlvf8w
It may be superfluous in this forum, but one should note that the video's
characterization of Brian Kernighan as the father of typesetting at Bell
Labs does great disservice to Joe Ossanna, who single-handedly
brought the first phototypesetter to the labs, subjected it to computer
control, and wrote troff (which lives on 50 years later) to drive it.
In passing, the video denigrates the C/A/T because it had a fixed font
repertoire and no general graphic capability. But without the antecedent
of C/A/T and troff, the famous Linotron summer-vacation project would
never have been undertaken.
Doug
SunRPC, among other protocols, needs transaction IDs (XIDs) to distinguish
RPCs. For SunRPC, it's important that XIDs not be reused (this isn't true of
all protocols; 9p has no such requirement). Stateless protocols like NFS
can get messy with reused XIDs.
There is a vague, 30 year old memory, I have, that at some point SPARC got
a time register, or some other register, that always provided a different
answer each time it was read, even if read back to back, in part to enable
creation of non-reused XIDs. Note that things like the TSC or RISC-V MTIME
register make no such guarantee.
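For comparison, the usual software fallback when no such register exists
is an atomic counter seeded once from the clock. A minimal sketch - the
names here are mine, not from any SunRPC implementation:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <time.h>

    static atomic_uint_fast64_t xid_counter;

    void xid_init(void)
    {
        /* seed from the clock so XIDs differ across reboots */
        atomic_store(&xid_counter, (uint_fast64_t)time(NULL) << 20);
    }

    uint32_t xid_next(void)
    {
        /* fetch_add guarantees two back-to-back callers never see the
           same value -- the property that hardware register would give
           you for free.  Truncated to SunRPC's 32-bit XID, so it only
           holds for 2^32 calls. */
        return (uint32_t)atomic_fetch_add(&xid_counter, 1);
    }

That gives uniqueness within one host and boot epoch, which is weaker
than a register guaranteed never to repeat.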
I am pretty sure someone here can fill me in, or tell me I'm wrong, about
my SPARC memory.
thanks
Mychaela Falconia falcon at freecalypso.org wrote:
.
.
.
> > It was made under Solaris 2.6, on an Ultra 2 ("Pulsar"), using the troff, tbl,
> > eqn, pic, refer and macros as supplied by Sun at that time, and NOT any GNU
> > ones. Why? These were the versions written by AT&T that Sun got directly from
> > them during their SVR4 collaboration. I used the PostScript output option to
> > troff (which obviously did not exist in 1979).
>
> You did the right thing: the version you used certainly feels much more
> "right" than anything from GNU.
I was just trying to use the tool that would give the path of least
resistance for that troff source. Even between flavors of UNIX
in the 1980s, there were issues getting correctly formatted output
between Documenter's Workbench (DWB) and UCB.
> > That code to produce PostScript
> > output had a high probability of being written by the graphics group run by
> > Nils-Peter Nelson in Russ Archer's Murray Hill Computer Center (department
> > 45268).
>
> So it is a different ditroff-to-PS chain than psdit from Adobe
> Transcript? I am not too familiar with the latter, as I ended up
> writing my own troff (derived from V7 version, just published) that
> emits PS directly, but it is my understanding that Back In The Day
> most people used psdit for this type of workflow.
The DWB way of troff to PostScript is --
$ pic file | tbl | eqn | troff -mm -Tpost | dpost >file.ps
$ # if you want to print it near the "bird cage" printer, near a famous staircase in MH
$ i10send -dbirdie -lpost file.ps
$ # which would eventually call postio for you
$ postio -l /dev/tty?? file.ps
As this is pre-ethernet time, QMS printers are connected via RS-232
serial lines and postio does the communication to the printer.
You can find dpost at https://www.tuhs.org/cgi-bin/utree.pl?file=OpenSolaris_b135/cmd/lp/filter/p…
(or at https://github.com/n-t-roff/DWB3.3/blob/master/postscript/dpost/dpost.c )
Looking at the last few lines of https://www.tuhs.org/cgi-bin/utree.pl?file=OpenSolaris_b135/cmd/lp/filter/p… it is signed,
Richard Drechsler
MH 2F-241 x7442
mhuxa!drexler
Which is that group I mentioned. Rich wrote dpost for sure, also if you
look at the last person thanked in the Preface of The C programming
Language, Second Edition (1988) --
Rich Drechsler helped greatly with typesetting.
On a sad side note, Carmela L'Hommedieu, who also worked in that group, has
passed (I was going to say "recently," but it's been almost four years now):
https://www.tributearchive.com/obituaries/10822663/Carmela-Scrocca-LHommedi…
>
> > I did have a volume 2A that also had the correct 7th Edition C Reference
> > Manual
> > in it. The one you get in my 1988 PDF is from the 6th Edition, notice it is
> > the old =+ syntax and not the += one. Dennis said that not even Lucent could
> > provide that as a free PDF, as it was a published book by Prentice-Hall. I
> > was asked to destroy all PDFs that had that version in it.
>
> Ouch, until you pointed it out in this ML post, I hadn't even noticed
> that the C Reference Manual doc is "wrong" in your PDF version! But
> here comes the really important question: if you once had a PDF reprint
> with the "right" version of this doc, where did you get the troff
> source for it? This is the source that was actually censored from the
> V7 tape:
>
> https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/doc/cman
Yes that is the missing C Reference Manual. I was gifted the troff source
for it, and unfortunately I do not have that gifted copy anymore.
>
> I don't have this problem for my 4.3BSD reprint: the source for 4.3BSD
> version of this doc is included on the tape; the corresponding SCCS
> log begins with "document received from AT&T", checked in on 86/05/14,
> and then revised by BSD people into what they wanted printed in their
> version of the manual. But if someone wishes to do a *proper* reprint
> of the V7 manual (or 4.2BSD, where this doc and many others were
> literally unchanged duplications from V7 master at the plate level),
> we need the troff source for the V7 version of this doc.
>
> If this source is totally lost, we as in community can probably do an
> OCR from a surviving scan (for example, the one in 4.2BSD PSD book)
> and then painstakingly produce a new troff source that would format
> into an exact replica - but if there is a leaked copy of the original
> source somewhere, it would certainly make our job way easier.
>
> > Larry McVoy asked me for my modified files to make the PDFs too, in 1999 or
> > 2000, for bitkeeper or bitsavers. But since I was not allowed to share them
> > and I had moved companies, I had lost them. I thought I had saved a copy but
> > I could no longer find it. I asked Dennis if he still had them, he did not.
> > This work is truly lost.
>
> Aside from the unresolved issue of "cman" document, we as in community
> can produce an even better work if we so wish. I am deferring a more
> detailed discussion until I put out my 4.3BSD PS reprint, so I can
> point to it as a reference for how I like to do things, and maybe by
> then we'll have some clarity on what happened to V7 "cman" troff source.
You will need to check on the legality of that. It is missing because
it was published as Appendix A of the first edition of The C Programming
Language in 1978 by Prentice-Hall, which means they (not Bell Labs, nor
successor companies AT&T, Lucent, Alcatel, or Nokia) contractually own the
rights to it for some period of time. If you read Dennis' old home page at
https://www.bell-labs.com/usr/dmr/www/ you'll see this verbiage --
"The version of the C Reference Manual Postscript (250KB) or PDF, (79K) that came with 6th Edition Unix (May 1975), in the second volume entitled ``Documents for Use With the Unix Time-sharing System''. For completeness, there are also versions of Kernighan's tutorial on C, in Postscript or PDF format.
There is also a slightly earlier (January 1974) version of the C manual, in the form of an uninterpreted PDF scan of a Bell Labs Technical Memorandum, visible here, if you can accommodate 1.9MB.
No updated version of this manual was distributed with most machine readable versions of the 7th Edition, since the first edition of the `white book' K&R was published about the same time. The tutorial was greatly expanded into the bulk of the book, and the manual became the book's Appendix A.
However, it turns out that the paper copies of the 7th Edition manual that we printed locally include not only what became Appendix A of K&R 1, but also a page entitled "Recent Changes to C", and I retyped this. I haven't been able to track down the contemporary machine-readable version (it's possible that some tapes were produced that included it). This is available in PostScript or PDF format."
As we know from the recent public domaining of Mickey Mouse, copyright
is retained 70 years past the date of death of the (last surviving)
author. So if Brian Kernighan lives to the ripe old age of 101, this
work cannot be used without permission until 2113, unless the rights
holders place it into the public domain beforehand. Since the 1st
edition is out of print, its rights *may* have reverted back,
but to which companies? Probably Nokia and AT&T jointly. But there
is no way to know if you can use it, without an official notice of such.
-Brian
Arnold,
Thank you, it's nice to have one's work appreciated. And I know, you were doing exactly what I
was doing, trying to make it more accessible to more people. And Dennis, being who he was, always
gave credit where credit was due. There's nothing else he could or would have done. And like I said,
it was long ago and has not bothered me in a very long time.
Thanks for your continued dedication to gawk. Awk still just flows out of my fingers without even
needing to think much or at all. Professionally, I have programmed in python for years, and have
never gotten to the same level of intimacy I have with awk.
-Brian
arnold at skeeve.com wrote:
> Thanks for this history Brian.
>
> It was a long time ago, but I think all I did was figure out how
> to turn the PDF back into postscript, since I had a postscript printer
> at the time and it was easier for me to print postscript.
>
> I sent the files to Dennis _only_ with the thought that they might be
> useful to other people, and certainly with no intent to steal any credit.
>
> Your files were great; I printed out hardcopy at the time and
> still have them on a shelf in my basement.
>
> Thanks!
>
> Arnold
Hello fellow lovers of old UNIX,
After almost 20 years of intermittent development (started in the fall of
2004), I just made the first official release of my version of troff:
https://www.freecalypso.org/pub/UNIX/components/troff/qjtroff-r1.tar.Z
https://www.freecalypso.org/pub/UNIX/components/troff/qjtroff-r1.tar.gz
(The .Z is the native format; the .gz is for greater accessibility.)
The README file inside the tarball gives the full story, but basically
it is my own derivative from classic V7 troff (not derived from
ditroff, and certainly not groff) that runs under 4.3BSD and emits
PostScript. Only PS output is supported, no non-PS targets of ditroff.
I started it in 2004, but I still use it to this day (on a real
MicroVAX running my "organically grown" 4.3BSD variant) to write
various TPS reports and technical manuals etc, for my other projects
that don't have much of anything to do with Ancient UNIX.
For anyone who loves intricacies of troff and/or PostScript, you might
find the source code quite interesting to study. :)
Some Time Soon I am hoping to put out my PostScript reprint of the
first 3 books of the 4.3BSD manual set (namely, URM, USD and PRM books)
made with this troff. The actual book reformatting job is already done
(for these 3 books, not for the other 3 yet), but I need to write new
colophons to be appended (with pstmerge, a tool from my troff suite)
at the end of each book. (The colophons I wrote for URM and PRM back
in 2012 are in need of corrections and updates, and I didn't have the
USD book done in 2012.)
I will also be responding to BSW's detailed account of V7 PDF reprint
in the other thread shortly - but I wanted to get this troff release
out first, so I won't be in a position of saying "please look at my
creation" when that creation is not publicly accessible.
M~
This isn't directly UNIX related, and yes, the thread is 3 years old. But the enormous fire in Elizabeth, NJ made national news last night, probably due to its proximity to Newark Airport, and I recognized the building in the local news as the old Singer factory. That factory was the catalyst that led me to find out more about Fred Grampp and his ancestry.
Here's a non-paywalled link that also mentions it is indeed the old Singer factory: https://newjersey.news12.com/elizabeth-nj-fire-industrial-building
On Tue, Mar 16, 2021 at 11:12 AM M Douglas McIlroy <m.douglas.mcilroy at dartmouth.edu> wrote:
>
> Serendipitous find! I hadn't realized that Fred had been the third
> generation in the hardware store.
> His father ("Pops") retired to Drayton Island in the St Johns River
> about 60 miles south of Jacksonville.
> Fred often visited him, driving the 19-hour trip in one stint.
>
> Doug
>
> On Mon, Mar 15, 2021 at 6:47 PM Brian Walden <tuhs at cuzuco.com> wrote:
> >
> > Amazing coincidences. A week prior I was researching Topper Toys
> > looking for their old factory ("largest toy factory in the world")
> > As there was little on its location, it led me to find out
> > that in 1961 it took over the old Singer Factory in Elizabeth, NJ.
> > So looking up the Singer factory led me to "Elizabeth,
> > New Jersey, Then and Now" by Robert J. Baptista
> >
> > https://ia801304.us.archive.org/11/items/ElizabethNewJerseyThenAndNowSecond…
> >
> > Which had no information on Topper, but had this paragraph in its Singer
> > section on page 28 --
> >
> > Boys earned money "rushing the growler" at lunchtime at the Singer plant.
> > German workers lowered their covered beer pails, called growlers, on ropes
> > to the boys waiting below. They earned a nickel by filling them with beer
> > at Grampp's saloon on Trumbull St. One of these boys was Thomas Dunn who
> > later became a long term Mayor. In the early 1920s Frederick Grampp went
> > into the hardware business at the corner of Elizabeth Ave. and Reid St.
> >
> >
> > When I read it I thought it funny, as I know the name Fred Grampp, but believed
> > it was just a coincidental same name. After reading the biography post, I went back
> > to the book, and it turns out that this Fred Grampp is your Fred Grampp's
> > grandfather. You can find more about his family and the hardware store and
> > Grampp himself on pages 163-164, and 212.
> >
> > -Brian
> >
>
Rather than increase subject drift on a thread I started
"UNIX on (not quite bare) System/370", here's a new thread.
Since the TUHS archive seems to now include documentation,
I decided to take a look and see if the earliest UNIX manual I have
is in the archive:
It was given to me by a friend at Stevens Tech in Hoboken NJ (c. 1980)
who had graduated, and worked for AT&T.
It's hole punched for a four ring binder
(I found an unused Bell System Project Telstar binder to put it in).
The cover page has:
Upper left corner:
Bell Telephone Laboratories Incorporated
PROGRAM APPLICATION INSTRUCTION
Upper right corner:
PA-1C300-01
Section 1
Issue 1, January 1976
AT&TCo SPCS
Center:
UNIX PROGRAMMER'S MANUAL
Program Generic PG-1C300 Issue 2
Published by the UNIX Support Group
January, 1976
The preface starts with:
This document is published as part of the UNIX Operating System Program Generic,
PG-1C300 Issue 2. The development of the Program Generic is the result of the
efforts of the members of the UNIX Support Group, supervised by J.F. Maranzano
and composed of: R. B. Brant, J. Feder, C. D. Perez, T. M. Raleigh, R. E. Swift,
G. C. Vogel and I. A. Winheim.
and ends with
For corrections and comments please contact C. D. Perez, MH 2C-423, Extension
6041.
Not knowing who else I could ask, I brought it to a Boston Usenix (in
the 90's or oughts), and asked DMR if he could identify it. He said
it was an early supported UNIX, and he signed the preface page for me.
The manual has sections I through VIII; all manual pages start with page -1-
I found https://www.tuhs.org/Archive/Distributions/USDL/unix_program_description_ja…
with cover page:
UNIX PROGRAM DESCRIPTION
Program Generic PG-1C300 Issue 2
Published by the UNIX Support Group
January 1976
contents:
NUMBER ISSUE TITLE
PD-1C301-01 1 Operating System
PD-1C302-01 1 Device Drivers Section 1
PD-1C303-01 1 Device Drivers Section 2
And consists of descriptions of kernel functions.
So it seems likely that my manual is a companion to that.
I have a Brother printer/scanner, but the paper is fragile, so unless
it's of immediate and burning value to someone, it's unlikely to rise
to the top of my ever-static list of documents to scan....
But if someone has specific questions I can look up, let me know....
>> What was the physical form of this book? Was it a "perfect bound"
>> book?
> The HRW copies I have are perfect bound. But I can't remember if they
> were 3-hole punched as well.
The Holt Rinehart edition was 3-hole punched. The original V7
(and its predecessors) were prepared for AT&T standard 4-hole binders, but
distributed in Accopress binders that used only 2 of the 4.
4-hole paper was punched 2" and 3 3/8" from top and bottom of 11" paper.
This reduced the stress concentration that makes the isolated end holes in
3-hole paper vulnerable to tearing out. It was a let-down when AT&T
eventually acceded to a sort of loose-leaf Gresham's law and switched to 3
holes.
Doug
The Plan 9 C compiler must predate Plan 9 and therefore it must
have been created on Research Unix.
The v10 manual doesn't mention these compilers (fair enough, it documents
Unix and not Plan 9), but it does say that rc(1) is the Plan 9
shell...
Research Unix of the time ran on VAX. A natural question arises,
was VAX the original target of the Plan 9 compilers? Where is it?
Why isn't it mentioned anywhere?
If VAX was never a target, then what was the original purpose of
these compilers and how were they tested on a target that Research
Unix never ran on?
One might think they might have been used for the Jerq/Blit/DMD-5620,
but no, the Unix manual documents a different compiler used for
these (which is distinct from the main C compiler).
The Plan 9 compilers seem to have appeared out of thin air, but
this certainly can't be the case.
--
Aram Hăvărneanu
Wanted to share this in case anyone is in the market for one. Someone's posted a 3B2/400 to eBay along with many documents and some peripherals and such. Kicker is it's $2,000 altogether...
https://www.ebay.com/itm/186237940947
Way outside what I'm willing to sink into one, although a 3B2 would be very nice to have around. Anywho, figured I'd spread the word in case someone in the far flung UNIX-verse is seeking one and has the funds to spare.
- Matt G.
Got some exciting stuff in the mail today, and for once it isn't going to amount to sitting in front of a scanner for hours:
https://archive.org/details/5ess-switch-dk5e-cd-1999-05
After the link is the April/May 1999 issue of the 5ESS-Switch DK5E-CD, a collection of documents and schematics concerning the 5ESS-2000 variant of the 5ESS switch, as supported at the time by Lucent-Bell Labs. Of particular UNIX interest is the following:
https://ia601200.us.archive.org/view_archive.php?archive=/12/items/5ess-swi…
This is the November 1998 issue of the 5ESS-2000 Switch UNIX RTR Operating System Reference Manual (235-700-200, Issue 7.00). From the text it appears to be a descendant of the standard UNIX literature, although it only contains the intro, basic info, and section 1, plus a section on administration and an EMACS paper. It alludes to a more complete manual, although I have not located that in this document collection (granted, I'm busy on a work thing right now, just taking the time to upload and spread the word ATM).
There's probably plenty of other relevant stuff in there, plus plenty of content regarding the 5ESS and 3B20D generally. These CDs were included with a paper binder of installation and identification info. The binder appears to be largely for training programs and I have yet to verify whether its contents are included in these CDs or the two supplement each other. Either way, this should present plenty of leads on more potential sources of 5ESS, 3B20D, and maybe UNIX RTR stuff. Unfortunately the discs only seem to contain documents, there wasn't the holy grail of a snapshot of UNIX RTR in there that I was kinda hoping might be bumping around. Thus the hunt for 3B20 UNIX continues...
- Matt G.
P.S. This is a bit more modern than what I've been dealing with generally; hopefully, given the current state of 5ESS and Nokia-Bell Labs seemingly winding things down, putting this up isn't a problem. I just urge caution on any use of this stuff that even remotely smells of commercial activity, but I probably don't have to tell anyone that. Just covering my bases.
[TUHS bcc, moved to COFF]
On Thursday, January 4th, 2024 at 10:26 AM, Kevin Bowling <kevin.bowling(a)kev009.com> wrote:
> For whatever reason, intel makes it difficult to impossible to remove
> the ME in later generations.
Part of me wonders if the general computing industry is starting to cheat off of the smartphone sector's homework: this phenomenon where whole critical components of a hardware device you literally own are still heavily controlled and provisioned by the vendor unless you do a whole bunch of tinkering to break through their stuff and "root" your device. That I can fully pay for and own a "computer" yet not be granted full root control over that device is one of the key things that keeps "smart" devices besides my work-issued mobile at arm's length.
For me this smells of the same stuff: they've gotten outside the lane of *essential to function* design decisions and have instead put in a "feature" that you are only guaranteed to opt out of by purchasing an entirely different product. In other words, the only guaranteed recourse if a CPU has something like this going on is to not use that CPU, rather than the device owner having leeway to do what they want. It depends on the vendor really, some give more control than others, but IMO there is only one level of control you give to someone who has bought and paid for a complete device: unlimited. Anything else suggests they do not own the device; it is a permanently leased product that just stops requiring payments after a while. If I don't get the keys, I don't consider myself to own it, I'm just borrowing it, kinda like how the Bell System used to own your telephone no matter how many decades it had been sitting on your desk.
My two cents: much of this can also be said of BIOS, UEFI, and anything else that gets between you and the CPU's reset vector. Is it a nice option to have some vendor-provided blob to do your DRAM training, possibly transition out of real mode, enumerate devices, whatever? Absolutely, but it's nice as an *option* that can be turned off should I want to study and commit to doing those things myself. I fear we are approaching an age where the only way you get the reset vector is by breadboarding your own thing. I get wanting to protect users from, say, bricking the most basic firmware on a board, but if I want to risk that, I should be completely free to do so on a device I've fully paid for. For me the key point of contention is choice and consent. I'm fine having this as a selectable option. I'm not fine with it becoming an endemic "requirement." Are we there yet? Can't say, I don't run anything serious on x86-family stuff, not that ARM and RISC-V don't also have weird stuff like this going on. SBI and all that are their own wonderful kettle of fish.
BTW sorry that's pretty rambly, the lack of intimate user control over especially smart devices these days is one of the pillars of my gripes with modern tech. Only time will tell how this plays out. Unfortunately the general public just isn't educated enough (by design, not their own fault) on their rights to really get a big push on a societal scale to change this. People just want I push button I get Netflix, they'll happily throw all their rights in the garbage over bread and circuses....but that ain't new...
- Matt G.
Hi,
I've found myself wondering about partitions inside of BSD disk labels.
Specifically, when and where was the convention established that "a" is
root, "b" is swap, etc.?
I also understand the "c" partition to be the entire disk, unless it
isn't, at which point it's the entire slice (BIOS / MBR partition)
containing the BSD disklabel and "d" is the entire disk.
I also found something last night that indicated that OpenBSD uses disk
labels somewhat differently than FreeBSD.
Aside: this is one of the dangers of wondering how and why something
curious came to be while working on 10-15 year old FreeBSD
systems.
--
Grant. . . .
[TUHS as Bcc]
I just saw sad news from Bertrand Meyer. Apparently, Niklaus Wirth
passed away on the 1st. :-(
I think it's fair to say that it is nearly impossible to overstate his
influence on modern programming.
- Dan C.
Disk sections (I don't think anyone in Research called them
partitions--certainly the Research manuals didn't) were
originally defined in the device driver, not by data on the
disk. In those days, system management included recompiling
stuff, including the OS kernel, and it was not unusual for
sites to edit hp.c or whatnot to adjust things to local
preference.
There was nothing magic about the mapping between device
names and minor device numbers either; the system came with
certain conventions on the original tape, but it was not
at all uncommon to change them.
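To make that concrete, here is a minimal sketch in the style of those
drivers (the block counts and macro names are illustrative, not from
any particular hp.c):

    /* illustrative only: each entry maps one section letter
       to a length and a starting cylinder on the pack */
    struct size {
            daddr_t nblocks;        /* length of section in blocks */
            int     cyloff;         /* starting cylinder */
    } hp_sizes[8] = {
            {   9614,  0 },         /* hp?a: root */
            {   8778, 23 },         /* hp?b: swap */
            { 171798,  0 },         /* hp?c: the whole pack */
            /* remaining sections left zero in this sketch */
    };

    /* the minor device number encoded both drive and section */
    #define hpunit(dev)     (minor(dev) >> 3)
    #define hpsect(dev)     (minor(dev) & 07)

Changing a section's bounds was a matter of editing a table like that
and rebuilding the kernel.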
By the time I arrived at the first Unix site I ever helped
run, in a physics group at Caltech, we already used a different
naming convention: a BSD-like ddNs, where dd was a driver
name, N the physical drive unit number, s a section letter.
I don't know whether that was borrowed from BSD (it must have
started during the 3BSD era, since I started there in mid-1980
and 4BSD appears to have been released late in that year).
Looking at my archival copy of that much-locally-hacked
source tree, I see that we later moved the definitions of
all the disk-section tables to a single file compiled at
system-configuration time (we used a USG-like scheme that
compiled most of the system into libraries, rather than
compiling every file separately for each target system a
la V7 and BSD). That simplified handling our somewhat-
complicated disk topology: all but system disks were connected
through System Industries 9400 disk controllers, which were
a neat design (each controller could interface to as many as
four hosts and four disks) but in practice were not always
reliable. On one hand, we arranged for one disk to be used
in parts by our main time-sharing VAX and a subsidiary PDP-11/45,
making the 11/45 cheaper to keep around; on the other, the
main VAX had two paths to each of its disks, through different
SI controllers, so when an SI controller conked out we could
run without it until the service guys fixed it. (Each disk
was dual-ported, as was common in the SMD world, hence
connected to two controllers.)
Reliability took rather more work in those days.
A different data point: by the time I moved from California
to New Jersey and joined 1127, Research was also using a
different naming scheme for disk sections. By then the
internal naming convention was e.g. ra17 for physical unit
1, section 7; by further convention section 7 (the highest-numbered)
covered the whole drive. At some point a little later we added an ioctl
to set the starting block and size of a particular section
on a particular drive, but we never went to having the OS
itself try to find a label and trust its contents (something
that still makes the 1980s part of me feel a little creepy,
though 21st century me has come to terms with it).
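(Something like this, to sketch the shape of the thing; the names and
fields here are invented, not the real ones:

    struct section {
            long    s_start;        /* first block of the section */
            long    s_size;         /* length in blocks */
    };
    /* hypothetical: set the bounds of one section on one drive */
    ioctl(fd, DKSETSECT, &sec);

The point being that the kernel only remembered what it was told; it
never went looking for the information on the disk itself.)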
Norman Wilson
Certified old fart
Toronto ON
In S.S. Pirzada's 1988 paper[1], page 35, section 3.3.2, he writes:
"Some operating telephone companies and the switching control center system (SCCS) group in Holmdel, NJ decided to use UNIX to collect maintenance data from their switches and for administration purposes. Other departments also started building applications on top of UNIX, some part of turnkey systems licensed by Western Electric (WECo)."
This is describing the situation before the establishment of USG in September 1973. I'm curious, does anyone recall what some of these pre-USG WECo "turnkey systems" were?
The things that come to mind when I think of that phrase don't come about for several years, such as the 5ESS and other work surrounding Bellmac stuff. The SCCS UNIX connection describes what becomes CB-UNIX if I understand the situation correctly, but that stays a bit afield from the more conventional pool that is dipped into for WECo needs. Switching and UNIX all kinda come back together with DMERT on 1A/3B-API and 5ESS, but again that's late 70s R&D, early 80s deployment, not this time period, leaving me terribly curious what WECo would've been bundling UNIX with and shipping out to telcos. The famous early use of UNIX in the Bell System is typography, and WECo did have involvement with Teletype equipment, so perhaps something along those lines?
If it helps set the scene, a binder I recently picked up from ~1972 describing Western Electric test sets distributed to telcos describes the following additional classes of such documents:
Shop Services - Special non-standard products
Public Telephones - System standard public telephone equipment
Data Communications - Teletypewriter and Data Sets
Subscriber Products - System standard PBX's, station equipment and special services
Non-Subscriber Products - Microwave, cable, power equipment, etc.
Non-Bell Equipment Index - Non-Bell System manufactured communication equipment
Unfortunately I haven't seen any of the other binders yet, but I've been keeping an eye out; one or another might describe something WECo was shipping around that had some UNIX up in it. Nothing in this binder seems computer-y enough to run an operating system, just lots of gauges, dials, and probes. Luckily it is quite clear which data test sets are designed for 103-type data set maintenance, so I have fodder for seeking Dataphone tools...
Anywho, happy soon to be new year folks, I'm excited to see what turns up next year!
- Matt G.
[1] - https://spiral.imperial.ac.uk/bitstream/10044/1/7942/1/Shamim_Sharfuddin_Pi…
Hi. I am trying to compile cron for the 3b2-400 and 3b2-700
and am apparently missing required libraries. The reason is
that on the 3b2-400, after boot-up, it complains there is corruption
in the crontab for every user: lp, sysadm, root, and so on.
# make cron
cc -O cron.c -o cron
undefined first referenced
symbol in file
el_add cron.o
el_delete cron.o
el_empty cron.o
el_first cron.o
el_init cron.o
xmalloc cron.o
el_remove cron.o
num cron.o
days_in_mon cron.o
days_btwn cron.o
ld fatal: Symbol referencing errors. No output written to cron
*** Error code 13
Stop.
Does anyone have these libraries? Thanks.
--
WWL 📚
As I messed with making a custom cron in the late 80s, I remembered
that the el_ functions are for event-list processing.
You didn't specify the source of your source, but given the hardware
it must be of System V heritage, and a lot of that source is hard
to officially come by due to licensing. A good way to view old SVR4 code is
to look at the OpenIndiana, Illumos, or OpenSolaris code bases that you can
find. You're in luck, as TUHS has a copy, see -- https://www.tuhs.org/cgi-bin/utree.pl?file=OpenSolaris_b135/cmd/cron/elm.c
Look at the Makefile for the other files you need to compile into it too.
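Assuming elm.c sits beside cron.c as it does in that OpenSolaris tree,
something along these lines should clear the el_ symbols:

    cc -O cron.c elm.c -o cron

The remaining symbols (xmalloc, num, days_in_mon, days_btwn) likely
live in other helper files the Makefile lists.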
-Brian
KenUnix wrote:
> Hi. I am trying to compile cron for the 3b2-400 and 3b2-700
> and am apparently missing required libraries. The reason is
> on the 3b2-400 after boot up it complains there is corruption
> in the crontab for every user lp, sysadm, root and so on.
>
> # make cron
> cc -O cron.c -o cron
> undefined first referenced
> symbol in file
> el_add cron.o
> el_delete cron.o
> el_empty cron.o
> el_first cron.o
> el_init cron.o
> xmalloc cron.o
> el_remove cron.o
> num cron.o
> days_in_mon cron.o
> days_btwn cron.o
> ld fatal: Symbol referencing errors. No output written to cron
> *** Error code 13
>
> Stop.
Good time of day folks, I often ponder on people's attachments to pixels on the screen that come about by clicking *this* icon and typing in a box surrounded by blue and with an icon in position <xyz> vs pixels on the screen that come about instead by opening that application that is a black border with a little paper airplane button in the bottom right vs....etc.
To make it more clear, I find myself often confused at people treating email differently from SMS, differently from social media DMs, differently from forum posts, differently from some other mechanism that, like all the others, is pixels arranged into glyphs on a screen conveying an approximation of human speech. The difference among these ways of sending said pixels to people has eluded me all my life despite working with technology since I was a tot.
What this has me curious about is whether, in the early days of UNIX, there were attempts at suggesting which provided communication mechanisms were appropriate for what. For instance, something that smells of:
It is appropriate to use mail(1) to send a review of a piece of work vs it is appropriate to use write(1) to ask Jim if he wants to take a lunch break before the big meeting. Did this matter to people back then like it seems to now? To me it's just pixels on a screen that are there when I look at them and aren't when I don't.
Truth be told I am hoping to learn something from this because I only do a couple email lists and web forums; my social life generally does not involve SMS, phone calls, nor social media. Where it has become tedious is when someone I meet who seems to want to communicate over pixels on screens is then put off when I provide them an email address, usually asking instead if I have a Facebook or whatever the kids are calling Twitters today. The few times I've tried to explain that email will be me transmitting you communication as pixel glyphs on a screen, just like anything else would be, this doesn't defuse their concerns any; they then just think there is something wrong with me for comparing words as pixels on a screen to words as pixels on a screen. Granted, I've probably avoided plenty of vapid people this way, but it feels like it's becoming more and more expected that "these pixels on the screen in *this* program are only for this and those pixels on the screen in *that* program are only for that".
Is this a recent phenomenon? Has communication over electronic means always had these arbitrary limitations foisted on it by the humans that use it? Or did people not give a hoot what you sent over what program, and actually care more about the words you're saying than the word you typed at a terminal to then be able to transmit words? I doubt what I learn is going to royally change my approach to allowing technology in my irl social life, but it would be nice to at least have more mental ammo when someone asks to be friends online and then gives me mad side-eye when I go "sure, here's my email address!"
- Matt G.
Are there any documented or remembered instances of PDP-7 or Interdata 8/32 UNIX being installed on any such machines for use in the Bell System aside from their original hosts? Along similar lines, was the mention of PDP-7 UNIX also supporting the PDP-9 based solely on similarities between the architectures, or did this early version of UNIX actually get bootstrapped on a real PDP-9 at some point?
My understanding of the pre-3B-and-VAX landscape of UNIX in the Bell System is predominantly PDP-11 systems, but there was also work in the late 70s regarding 8086 hosts as evidenced in some BSTJ and other publications, and there is the System/370 work (Holmdel?) which I don't know enough about to say whether it technically starts before or after UNIX touches the VAX.
Thanks for any info!
- Matt G.
Hi All.
This is a bit off-topic, but people here are likely to know the answer.
V7 had a timezone function:
char *timezone(int zone, int dst);
that returned a timezone name. POSIX has a timezone variable which is
the offset in seconds from UTC.
The man pages for all of {Net,Free,Open}BSD seem to indicate that both
are available on those systems.
My question is, how? The declarations for both are given as being in <time.h>.
But don't the symbols in libc.a conflict with each other? How does a programmer
on *BSD choose which version of timezone they will get?
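To make the clash concrete, here are the two incompatible forms side by
side (the exact spellings vary by system):

    #include <time.h>

    /* V7 style: a function returning a name like "EST" */
    char *timezone(int zone, int dst);

    /* POSIX style: a variable set by tzset(3),
       holding seconds west of UTC */
    extern long timezone;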
Feel free to reply privately.
Thanks,
Arnold
> Paul -- you left out the other "feature" -- the noise, which was still
> deafening even with a model N1 and its cover.
It was indeed loud, but GE out-roared them with a blindingly fast card
reader. The machine had a supposedly gentle touch; it grabbed cards with
vacuum rather than tongs. But the make-and-break pneumatic explosions
sounded like a machine gun. A noise meter I borrowed from the Labs' tool
crib read 90 dB six feet away.
Doug
I have been working with a simulated VAX-11/780 running
UNIX System V R2 and am having strange
issues.
TERM is defined as vt100.
When firing up vi, at times the cursor is positioned
in the wrong place, or when inserting text it
overwrites areas of the screen.
I have tried vt100, vt100-am, vt100-nam and none
work as expected.
Any ideas? Thanks
-Ken
Happy holidays
--
WWL 📚
Thanks Ken! I hadn't even considered the PDP-15. Looks like SimH supports it, that could make for an interesting project...
- Matt G.
On Monday, December 18th, 2023 at 6:12 PM, Ken Thompson <kenbob(a)gmail.com> wrote:
> the pdp-7 was run on the almost compatible
> pdp-9 and pdp-15 computers.
> i dont think that version of unix ever made
> it out of the research department.
>
> On Mon, Dec 18, 2023 at 5:54 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>
>> Are there any documented or remembered instances of PDP-7 or Interdata 8/32 UNIX being installed on any such machines for use in the Bell System aside from their original hosts? Along similar lines, was the mention of PDP-7 UNIX also supporting the PDP-9 based solely on similarities between the architectures, or did this early version of UNIX actually get bootstrapped on a real PDP-9 at some point?
>>
>> My understanding of the pre-3B-and-VAX landscape of UNIX in the Bell System is predominantly PDP-11 systems, but there was also work in the late 70s regarding 8086 hosts as evidenced in some BSTJ and other publications, and there is the System/370 work (Holmdel?) which I don't know enough about to say whether it technically starts before or after UNIX touches the VAX.
>>
>> Thanks for any info!
>>
>> - Matt G.
> On 17 Dec 2023, at 13:02, KenUnix <ken.unix.guy(a)gmail.com> wrote:
-8<—
> I have tried vt100, vt100-am, vt100-nam and none
> work as expected.
I have a long-ago recollection that using vt100 had rendering issues with emacs, but vt102 was fine. Maybe worth a shot?
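E.g., in the Bourne shell, before restarting vi:

    TERM=vt102; export TERM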
d
Somewhere between UNIX Release 3.0 and Release 4.1, a portion of the User's Manual was split off into a separate Administrator's Manual, leading to a reordering of the sections among other things. In the directories, these pieces would be placed in u_man and a_man respectively.
There may be some evidence of the manual being intact as of 4.0, or at least not completely separated. I've found consistently that references to manpages in the Documents for UNIX Release 4.0 collection follow their pre-split numbering and all refer to the User's Manual. The catch is that all references are to the UNIX User's Manual Release 3.0, so this may not point conclusively to the state of /usr/man on disk at the time. The Release 4.1 Administrator's Manual hasn't turned up yet, but the User's Manual reflects the renumbering and omits the a_man pages. To complete the circle, the various Release 5.0 revisions of the documents do refer to the Administrator's Manual where appropriate.
Was the splitting of the manual any great shock, or was it to be expected as the software grew? It would come to happen again between SysV and SVR2 with p_man. Out of curiosity I checked how my own manpage set is organized; it seems to follow the Research ordering, with special files in section 4 rather than section 7, for instance. I've never studied how far-reaching the different orderings are.
- Matt G.
In a BSTJ article[1] it is said "The availability of a simulated UNIX operating system in DMERT allows UNIX programs from other processors to execute on the 3B20D Processor." Does this just mean C programs that are recompiled, or is there some implication that DMERT's particular UNIX environment featured some sort of emulation facility? I may be reading too much into it...
- Matt G.
[1] - https://archive.org/details/bstj62-1-303/page/n11/mode/2up
P.S. I learned I may walk past DMERT and a 3B20D most days; there's a long-operational telephone CO on my usual walk that, through referencing public records, I've discovered has a WECo 5ESS up in it somewhere. That's all the listing said, dunno if WECo is given as meaning an early model or just a generic name. Either way...so close yet so far, makes me ever so curious what dusty old bookshelves in that building might hold.
> From: Ken Thompson
> someone rewired someones desk lamp. i dont know how that worked out.
Sometimes electrical 'jokes' don't pan out - in a big way.
I was hacking the light switch in Jerry Saltzer's office (I don't recall
exactly what I was planning; IIRC, something mundane and lame like flipping
it upside down), and as I took it out of the box, the hot terminal touched
the side of the box (which was, properly, well grounded).
The entire 5th floor powered down.
What had happened was that the breaker for Jerry's office probably hadn't
been tripped in decades (maybe since it was put in), and it was apparently a
little sticky. Also, the floor had originally been wired back when all that
most people had in their offices, in the way of electrical load, was an
incandescent desk lamp or so. Now, most offices had not just a couple of
terminals but also an Alto - a greatly increased overall load. The total
draw for the whole floor was now very close to the rating of the main breaker
for the whole floor - and my slip of the hand had put it over. And that one
_wasn't_ sticky.
The worst part was that when we looked in the 5th floor electrical closet, we
couldn't find anything wrong. An electrician was summoned (luckily, or
unluckily, it was daytime; not having access to a 5th floor master, we'd gone
in while everything was unlocked - daytime), and he finally located the
breaker responsible - in an electrical closet on the 9th floor.
I got carpeted by Jerry, when he got back; I escaped without major
punishment, in part, IIRC, because I pointed out that I'd exposed a
previously-unsuspected issue. (I have this vague memory that the wiring on
the 5th floor was upgraded not long after.)
That wasn't the only historic CS building that has been abandoned. 545
Technology Square, one-time home of the Multics project, the MIT AI Lab,
and much else (including the above story) was exited by MIT some years
ago.
There, too, some history was abandoned - including the hack that allowed
people to call the elevators to their floor from their terminals. (Some
hackers had run some carefully disguised wires up into the elevator
controller - ran them along the back of structural members, carefully hidden
- and thence to the TV-11 that ran all the Knight TV bit-mapped displays
attached to the AI ITS time-sharing machine. So from a Knight TV console, if
you typed 'Escape E', it called the elevator to your floor - the code:
https://github.com/PDP-10/its/blob/master/src/system/tv.147
even has a table - at ELETAB: - giving which floor each console was on, so it
got called to the correct floor. I wonder what happened to that when the
Knight TV system was ditched? Did it get moved to another machine? Actually,
I have a dim memory that the elevator people found it, and it was removed.)
Noel
ken.unix.guy(a)gmail.com:
A company I used to work for was vacating a building. I asked, has anyone
checked under the raised floor tiles?
The answer was no. Well, I did, and found a lot of history down there: from
component parts of long-forgotten
systems to water-cooling lines for long-gone IBM heavy metal, and a ground
window.
===
I bet you didn't find a bowling ball.
Norman Wilson
Toronto ON
After much saying I would and never getting around to it, I've finally started filling out a bit of documentation on the various UNIX manuals I've been tracking down, fleshing out history around, tracing bibliographic references though, etc etc.
https://wiki.tuhs.org/doku.php?id=publications:manuals
Thus far I've got the research and CB pages filled out from available information, and PWB/commercial up through about '85-'86, give or take some things. I apologize in advance if I've omitted your favorite piece of trivia or got something wrong; please suggest corrections in any areas needing them. Or even better: a Wiki is a communal resource, so with Warren's OK, I'm sure you can also make contributions.
Most of the pictures are from my own library, although I've added a few others from things around the net. There are links to various documents covered, TUHS content where most appropriate, and a few archive.org and bitsavers links here and there. I don't intend to include links to any documents after System V's initial 1983 run, just pictures of covers for ease of identification.
I've already mentioned a few times but I highly encourage contributions. I intend to do another round at this sometime soon and round out at least the BSD stuff and later System V. If anyone else has photographs or documents they think should be in these articles and you don't want to do the Wiki part yourself, feel free to send me stuff and I'll make sure it gets put up there.
Finally, some reflection on the path here. "What was UNIX System IV" was one of those questions that plagued my mind for a long time, much before I knew much else about the history of UNIX. Not a crucial question by any means, but it was one of those little mysteries I always wanted to know more about, which is what then led me to trying to find Release 4.0 documents and all that. Of course, that then led to the rabbit hole of continuing to turn stuff up; I never imagined I'd actually be successful in trying to turn up more info on that version, let alone then continuing to find little pieces of history and slot in missing parts of stories. Along the way I've learned more about this darn operating system than I ever intended on learning and now feel a net gain in several areas of my study. Plus, all this Bell System proximity is largely responsible for my interest in telephony as of late, and may come full circle in the gear I got for telephone experiments helping me resurrect this poor UNIX PC I've got sitting on the floor right now. I don't know what I would've been doing with so much of my free time the past few years otherwise, especially these colder months.
Hope folks enjoy the commentary!
- Matt G.
P.S. Combing over things for this, I've found a few more pieces of the UNIX/TS puzzle. The details are in the Release 3.0 section of the PWB/Commercial page linked above. The short of it is there are some interesting "leaks" of the name "UNIX/TS" into Release 3.0 documentation, inconsistently between the sources on the UNIX tree and the physical document I recently obtained.
Really sobering is the estimate that it will bring 1000 jobs to New
Brunswick. That's a small fraction of the capacity of Murray Hill. On the
upside is proximity to Rutgers.
Contrary to what the article said, Murray Hill does not date from the Labs'
foundation in 1925. The Labs was in the meat-packing district on West
Street in New York in a building now called Westbeth, said to be the
world's largest artist community. The High Line runs right through it. I
worked there one summer in the penthouse with a fine view of ship traffic
on the Hudson. Murray Hill opened in 1941 and West Street closed in about
1967.
> Goodbye, Unix Room!
The Unix Room was dismantled some time ago, but its quirky contents were
grabbed by the Labs archivist, who had them on display at the Unix50
celebration--pink flamingo, G. R. Emlin, CCW clock and all. I wonder
whether these relics will make the move.
Doug