"I'd go to the local University that teaches Fortran and ask around."
Aye, there's the rub.
SIUE (Southern Illinois University at Edwardsville) still had a COBOL
curriculum a decade ago, and they might still. They were fairly forthright
in training people to go work at a lot of the stodgier St. Louis
enterprises that still had a large COBOL footprint (AB, Enterprise
Rent-A-Car, Caterpillar, et al). By 2010, though, Express Scripts was
trying hard to move away from its ANCHOR (COBOL) system and
whatever-it-was-they-had running on VMS, and it sure felt like in the early
2010s STL was mostly Java EE.
I would think that FORTRAN is likelier to be passed along as folk wisdom,
with ancient PIs (uh, Principal Investigators, not the detective kind)
thrusting a dog-eared FORTRAN IV manual at their new grad students and
snarling "RTFM!", than taught in actual college courses.
That said, if you want to learn FORTRAN and don't mind working from modern
FORTRAN back, I really was impressed by https://lfortran.org/ , and the
ability to run it in a JupyterLab playground environment is fantastic for
quick-turnaround experimentation. Plus Ondřej Čertík
<https://ondrejcertik.com/> was fun to talk to and hang out with.
On Mon, Feb 24, 2020 at 8:19 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> On Mon, Feb 24, 2020 at 10:40:10AM +0100, Sijmen J. Mulder wrote:
> > Larry McVoy <lm(a)mcvoy.com> wrote:
> > > Fortran programmers are formally trained (at least I
> > > was, there was a whole semester devoted to this) in accumulated errors.
> > > You did a deep dive into how to code stuff so that the error was
> reduced
> > > each time instead of increased. It has a lot to do with how floating
> > > point works, it's not exact like integers are.
> >
> > I was unaware that there's formal training to be had around this but
> > it's something I'd like to learn more about. Any recommendations on
> > materials? I don't mind diving into Fortran itself either.
>
> My training was 35 years ago, I have no idea where to go look for this
> stuff now. I googled and didn't find much. I'd go to the local
> University that teaches Fortran and ask around.
> _______________________________________________
> COFF mailing list
> COFF(a)minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
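Larry's point about coding so that the error shrinks instead of grows can be made concrete with compensated (Kahan) summation. The following is my own illustrative sketch in Python, not anything from the course material he describes:

```python
# Naive vs. compensated (Kahan) summation: the classic example of coding
# a floating-point accumulation so rounding error stays bounded instead
# of growing with the number of operations.

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x          # each add can lose low-order bits of x
    return s

def kahan_sum(xs):
    s = 0.0
    c = 0.0             # running compensation for lost low-order bits
    for x in xs:
        y = x - c       # fold in the error carried from the last step
        t = s + y       # big + small: low-order bits of y may be lost...
        c = (t - s) - y # ...but are recovered algebraically here
        s = t
    return s

# 0.1 is not exactly representable in binary, so summing a million
# copies shows the difference: the naive loop drifts away from the true
# value, while the compensated loop stays within an ulp or two of the
# best possible answer.
xs = [0.1] * 1_000_000
print(abs(naive_sum(xs) - 100_000.0))  # visibly larger error
print(abs(kahan_sum(xs) - 100_000.0))  # far smaller error
```

The same trick works in any language with fixed-precision floats, which is why it shows up in numerical analysis courses of the kind Larry mentions.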
> From: Warren Toomey
> Heinz Lycklama has shared a binder full of old technical memos with
> Clem Cole, who has scanned them in. Thanks to both of them for
> preserving these documents.
A big thank you to Heinz and Clem for their roles in making this happen!
Very interesting material. I live in hope that someday the source will turn
up - even a listing would be enough.
Noel
All, I received this interesting e-mail from Michael Thompson:
-----
Date: Fri, 21 Feb 2020 12:50:12 -0500
From: Michael Thompson <michael.99.thompson(a)gmail.com>
To: Warren Toomey <wkt(a)tuhs.org>
Subject: Unix V0 on SIMH PDP-9
I modified the PDP-7 .simh file so it will run on a SIMH PDP-9.
(attached)
We have a running PDP-9 at the RICM. If I added EAE, (we likely have
the necessary parts) and made a disk emulator like the one at the LCM,
we could also run UNIX V0 on it. It would be nice to have the disk
emulator emulate an RB disk, but that would also require emulating a
DMA adapter.
I am considering making an FPGA to emulate the memory controller and
32KW of memory. If I did that, I could put the RB and DMA emulation in
the same device.
--
Michael Thompson
----- End forwarded message -----
set cpu 8k
set cpu eae
set cpu history=100
show cpu
# set up SIMH devices:
# UNIX character translations (CR to NL, ESC to ALTMODE):
set tti unix
# RB09 fixed head disk:
set rb ena
att rb image.fs
# enable TELNET in GRAPHICS-2 keyboard/display(!!)
#set g2in ena
#att -U g2in 12345
# disable hardware UNIX-7 doesn't know about:
set lpt disa
set drm disa
set dt disa
set mt disa
set rf disa
set ttix disa
set ttox disa
# show device settings:
show dev
# load and run the paper tape bootstrap
# (loads system from disk)
load boot.rim 010000
go
Cheers, Warren
All, Heinz Lycklama has shared a binder full of old technical memos
with Clem Cole, who has scanned them in. Thanks to both of them for
preserving these documents. I've just put them at:
https://www.tuhs.org/Archive/Documentation/TechReports/Heinz_Tech_Memos/
Here's a list of the documents:
A_Minicomputer_Satellite_Processor_System.pdf
A_Virtual_Memory_Mini-Computer_System_516-TSS.pdf
MM-71-1383-3_Performance_Simulation_and_Measurement_of_a_Virtual_Memory_Multi-progamming_System_for_a_Small_Computer_19710120.pdf
MM-72-1353-16_Bus_Interface_in_a_Single_Bus_Multi-processor_Environment_19720920.pdf
Office_Communication_Research_in_Lab_135_19770208.pdf
TM-74-1352-1_Implementstion_of_Large_Contiguous_Files_and_Asynchronous_IO_in_UNIX_19740104.pdf
TM-74-1352-7_Plotting_Facilities_for_Mini-Computer_Systems_19740614.pdf
TM-75-1352-2_Emulation_of_UNIX_on_Peripheral_Processors_19750109.pdf
TM-75-1352-3_GLANCE_Terminals_on_UNIX_Time-Sharing_19750303.pdf
TM-75-1352-4_A_Structured_Operating_System_for_a_PDP-11.45_19750506.pdf
TM-75-1352-7_MERT_A_Multi-Environment_Real-Time_Operating_System_19751118.pdf
TM-77-1352-1_The_MINI-UNIX_19770103.pdf
TM-78-3114-1_UNIX_on_a_Microprocessor_19780322.pdf
TM-78-3114-2_A_Minicomputer_Satellite_Processor_System_19780322.pdf
TM-78-3114-3_The_MERT_Operating_System_19780422.pdf
TM-78-3114-4_The_MERT-UNIX_Supervisor_19780420.pdf
TM-78-3114-5_File_System_Structures_for_Real-Time_Applications_19780420.pdf
The_MERT_Operating_System.pdf
UNIX_on_a_Microprocessor_19780322.pdf
Cheers, Warren
> one of the things I wanted to do in my retirement was convert
> all the stuff that is in debian back from info to man(7)
*all* the stuff? Please don't do that literally. The garrulity
quotient of info pages dwarfs even that of the most excessive
modern man pages. But I applaud the intent to assure man
pages are complete.
Doug
I was more interested in the "Mach" kernel itself as I've only recently been able to get it to boot up from sources for the i386.
I hadn't looked into the other aos/vrm stuff. But that is interesting, a 4.3 with the vfs.
In hindsight, maybe Mach wasn't so bad with its messaging and threads, along with multiprocessor support... It's what we all were eventually desiring anyway.
One thing is for sure, multiple GHz machines sure make it a lot easier to use, these days.
I'd gotten lucky with Mach as the platform code is really modular and even a monkey like me banging on a keyboard of an existing Mach 386 machine was able to get the latter source running under the older platform code. Shame Mach 3 seems to have broken all the fun stuff or requires real effort and understanding... Things I lack.
But I was really surprised about the coprocessor cards.. I wonder what other interesting things are in there. Or how hard it is to hammer 386 BSD into aos "sort of a 4.3 Tahoe ++"
From: Kevin Bowling <kevin.bowling(a)kev009.com>
Sent: Tuesday, February 18, 2020, 9:02 p.m.
To: Jason Stevens
Cc: Charles H Sauer; TUHS
Subject: Re: [TUHS] Bitsavers' RT/PC, AIX, AOS, etc. recent additions
Thanks for clarifying. I will reassert that the three pieces of systems software I mentioned (VRM, AIX2, AOS) are not Mach in any way I know about. AOS may have some generic cross pollination, it’d be whatever was going on at CSRG also for non-RT (4.2-4.3?) BSD platforms at the time of checkout. Kirk or Warner may be able to elucidate if provided the date and some reference material from AOS or I can do some original research.
Most distinctly and importantly: VRM is not in any way Mach; it was its own bespoke microkernel. The microkernel would have been the most “Mach” part of Mach research, so this makes the VRM concept even more unique and enjoyable to me, being so different and ambitious. Therefore I don’t think it is particularly correct to say any of the VRM, AIX, AOS software is Mach without its ukernel.
What you linked is a very late port (late 1990s) of a hybrid of 4.3 and 4.4 BSD (late meaning in the time when Net, Free, and Open had long taken over from CSRG BSD). I will quote a Twitter communication I had with Miod Vallat in the past: “Also it's not really 4.4. It's a mix of 4.3BSD-Reno plus the 4.4 VFS layer and new system calls. It still uses the 4.3, pre-Mach, VM system, hence no mmap(2).”
What Miod means by “pre-Mach” above: 4.4 BSD adopted the kernel memory subsystem of Mach into the existing BSD monolithic kernel. Not any of the ukernel or things like Mach IPC.
Not trying to be overly pedantic with you just trying to keep the records straight since these machines are one of my keen interests and I welcome new information on them.
Regards,
Kevin
On Tue, Feb 18, 2020 at 5:30 AM Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
Oh sure!
I'm having to use my phone...
It's the combined sources here: http://bitsavers.trailing-edge.com/bits/IBM/RT/rt_bsd44/
doc mk
jsteve@localhost:~/rt_bsd4/src/sys/.local/mach2.4$ pwd
/home/jsteve/rt_bsd4/src/sys/.local/mach2.4
jsteve@localhost:~/rt_bsd4/src/sys/.local/mach2.4/mk/conf$ cat vers*
6951X
So 5.1x edit 69.
jsteve@localhost:~/rt_bsd4/src/sys/.local/mach2.4/mk$ more CHANGELOG
HISTORY
17-May-88  David Golub (dbg) at Carnegie-Mellon University
    XM21: David Black completely rewrote the accurate timing code
    (which is now implemented on all machines) and the priority and
    scheduling algorithms. The system now correctly reports cpu_usage
    per thread.
The all file has this before i386 was added.
So it's an older v2 than what is on the CSRG CD, but not as old as the VAX '86 stuff.
It seems to be March 11 1989, although that could be when this was either archived or ported.. I guess they didn't exactly sync to a public kernel tree all that often.
On Tue, Feb 18, 2020 at 4:05 PM +0800, "Kevin Bowling" <kevin.bowling(a)kev009.com> wrote:
I’m asking exactly where the Mach is in the linked archive. VRM, AIX or AOS? Can you support this with a reference for my own documentation?
On Tue, Feb 18, 2020 at 1:02 AM Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
It's the CMU micro kernel. The hybrid "2.6" lived on in NeXTSTEP, and OPENSTEP, with various upgrades to bring it up to OS X.
The RT as I understand it was a research machine, hence the BSD ports, and Mach port.
What is interesting, the more I dig around, is that there were ROMP coprocessor cards, and an OS/2 and DOS monitor program to let you boot BSD on the card. Peripheral IO was done on the x86 side.
If RT's are rare, I can't imagine how impossible it would be to get one of those cards!
The BSD assembler and linker source is in the archives too; no doubt it'll help someone make an RT emulator.
On Tue, Feb 18, 2020 at 12:54 PM +0800, "Kevin Bowling" <kevin.bowling(a)kev009.com> wrote:
Can you clarify what is Mach in this archive if I have a gap in my knowledge? I didn’t know the VRM had any direct relationship to Mach
Regards,
Kevin
On Mon, Feb 17, 2020 at 9:43 PM Jason Stevens <jsteve(a)superglobalmegacorp.com> wrote:
Interesting stuff! And another version of Mach is buried in there.
So the 4-CD CSRG set may have updates to the ROMP support, as this is an older version of the 5.1 kernel from '89... Not that I think there are any Mach ROMP users.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Charles H Sauer <sauer(a)technologists.com>
Sent: Tuesday, February 18, 2020, 5:51 a.m.
To: TUHS
Subject: [TUHS] Bitsavers' RT/PC, AIX, AOS, etc. recent additions
The Bitsavers' RSS feed
(http://user.xmission.com/~legalize/vintage/bitsavers-bits.xml) seemed
to me to be dominated by RT, AIX, AOS (BSD for RT), etc. stuff in the
last week or so. I've only sampled a few items, but discovered a few
things that I should have known (or knew and forgot?) while I was at IBM.
http://www.bitsavers.org/pdf/ibm/pc/rt/
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer
I've noticed that some guy named Dr. Shiva Ayyadurai is all over
Twitter, claiming that he is the inventor of email. He doesn't look
like he's nearly old enough. I thought it was Ray Tomlinson. Looks
like he's trying to create some press for his Senate run.
Anyone older that me here that can either confirm or deny? Thanks!
With Doug’s permission, I’d like to bring the group’s attention to a recent oral history with him by the Computer History Museum.
You can find the records for the two interviews here, and in them the links to the PDF transcripts:
https://www.computerhistory.org/collections/oralhistories/?s=mcilroy
As I am sure you can imagine, it was a great pleasure to interview Doug. I learned a tremendous amount.
Best wishes,
David
..............
David C. Brock
Director and Curator
Software History Center
Computer History Museum
computerhistory.org/softwarehistory<http://computerhistory.org/softwarehistory>
Email: dbrock(a)computerhistory.org
Twitter: @dcbrock
Skype: dcbrock
1401 N. Shoreline Blvd.
Mountain View, CA 94943
(650) 810-1010 main
(650) 810-1886 direct
Pronouns: he, him, his
Apropos of Knuth and me, may I immodestly point to
https://comic.browserling.com/tag/knuth
The second likeness of Don is quite good. And
the screen almost justifies posting to tuhs.
Doug
Arnold wrote:
> Well said. The markup language was clearly inspired by Scribe, which
> was quite popular in Academia (at least) at the time.
>
> As a *markup language*, I personally find it superior to anything
> else currently in use, but that's a whole different discussion that
> on TUHS inevitably degenerates into the current spate of ranting,
> so I won't start on it.
So in other words, you mean:
@Flame(Off)
-Don
While messing around with the '87 release of GCC, I was going through the steps of setting up TME, and I stumbled across this derived emulator that is incredibly simple to set up and run, unlike TME:
https://github.com/lisper/emulator-sun-2
Additional patches adding a BPF backend Ethernet adapter are here:
https://github.com/sigurbjornl
The program itself is only slightly C++, with a few variables declared inline; it was trivial to move them to the start of their sections to get it to compile with a picky C compiler (Microsoft C). The IO is SDL based, so making an x86/ARM Win32 build was really trivial.
Anyway for all the SunOS enthusiasts I figured that you would love to give this one a shot!
For Windows users, or anyone wanting to just run it on some unsuspecting normies I put Win32 x86 binaries here:
https://sourceforge.net/projects/bsd42/files/4BSD%20under%20Windows/v0.4/SU…
I have to wonder how impossible it would be to integrate it into SIMH...
> From: Jon Steinhart
> When you're looking for the documentation for pdf2svg, for example, and
> there is no man page, how long does it take to figure out that there is
> no documentation at all?
I am _sooo_ tempted to say 'What do you think source is for?' :-)
Noel
> maybe of interest, i have a copy of an article by sandy fraser, “early experiments with time division networks” from ieee networks, jan 1993, pp12-26.
>
> this is a high level paper and describes spider, datakit, incon.
>
> it may have little new but i felt it had a lot of good background and a useful references list.
>
> i am wary of scanning it as its the ieee...
>
> -Steve
Many thanks for the suggestion! Just the other day I bought another Fraser paper on IEEE, "Towards a Universal Data Transport System” from 1983, which is also a high level descriptive overview.
A few people have responded off list and will be looking through their archives for relevant papers.
https://fosdem.org/2020/schedule/event/early_unix/
The video of Warner Losh's FOSDEM presentation "The Hidden Early History of Unix" is now available.
Cheers, Warren
maybe of interest, i have a copy of an article by sandy fraser, “early experiments with time division networks” from ieee networks, jan 1993, pp12-26.
this is a high level paper and describes spider, datakit, incon.
it may have little new but i felt it had a lot of good background and a useful references list.
i am wary of scanning it as its the ieee...
-Steve
I’m looking for the end-to-end datakit network protocol as it existed in 7th Edition.
Context is as follows:
- The Spider network guaranteed reliable, in-order delivery of packets at the TIU interface. There does not seem to have been a standard host end-to-end protocol, although applications did of course contain sanity checks (see for instance the ‘nfs’ source here: http://chiselapp.com/user/pnr/repository/Spider/tree?ci=tip)
- Datakit dropped the reliable delivery part (although it did retain the in-order guarantee) and moved this responsibility to the host. It is the (early) evolution of the related protocol that I’m trying to dig up.
- 7th Edition appears to have had a (serial line based) Datakit connection. Datakit drivers are not in the distributed files, but its tty.h file has defines for several Datakit related constants. Also, as the first Datakit switches became operational at Murray Hill in ’78 or ’79, it seems a reasonable assumption that the Research code base included drivers & protocols for it around that time.
- After that the trail continues with the 8th edition which has a stream filter (dkp.c) for the “New Datakit Protocol”: http://chiselapp.com/user/pnr/repository/v8unix/artifact/01b4f6f05733aba5 This suggests that there was an “Old Datakit Protocol” as well - if so, this may have been the protocol in use at the time of 7th Edition.
The “New Datakit Protocol” appears to be (more or less) the same as what was later called URP (Universal Receiver Protocol). At the time of Plan9 its IL/IP protocol appears to have been designed as an equivalent for URP/Datakit. The early protocols were apparently (co-)designed by Greg Chesson and maybe also stood at the base of his later XTP protocol work.
Any recollections about the early history and evolution of this Datakit protocol are much appreciated. Also, if the source to the 7th Edition Datakit network stack survived I’d love to hear.
Paul
All, I've also set this up to try out for the video chats:
https://meet.tuhs.org/COFF
Password to join is "unix" at the moment.
I just want to test it to confirm that it works; I'll be heading
out the door to go to the shops soon.
Cheers, Warren
I rather enjoyed having the “unix50.org” website around: very handy to test out bits and pieces of Unix history.
It seems to have been taken down. Would it make sense to have this resource available permanently?
> What i like is the autocorrect feature in v8:
>
> $ cd /usr/blot
> /usr/blit
> $ pwd
> /usr/blit
Here I am, editor of the v8 manual and unaware of the feature.
We now know that silent correction is a terrible idea.
Postel's principle: "be conservative in what you do, be liberal
in what you accept from others" was doctrine in early HTML
specs, and led to disastrous disagreement among browsers'
interpretation of web pages. Sadly, the "principle" lives on
despite its having been expunged from the HTML spec.
Today's "langsec" movement grew out of bitter experience
with malicious inputs exploiting "liberal" interpretation of
nonconforming data.
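The langsec argument is easy to demonstrate with a toy: two parsers for the same trivial key=value line format, one strict and one "liberal". This is my own illustrative sketch (the grammar and names are invented), not anything from the thread:

```python
# A strict parser rejects nonconforming input outright; a "liberal"
# (Postel-style) parser silently guesses a reading. The guess is where
# disagreement between implementations -- and exploitable ambiguity --
# creeps in.

import re

LINE = re.compile(r"^([A-Za-z_]+)=([^=\s]*)$")  # invented toy grammar

def parse_strict(line):
    """Accept only lines matching the grammar exactly; raise otherwise."""
    m = LINE.match(line)
    if m is None:
        raise ValueError(f"malformed line: {line!r}")
    return m.group(1), m.group(2)

def parse_liberal(line):
    """'Helpfully' accept almost anything, silently repairing it."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

print(parse_strict("user=doug"))            # well-formed: both agree
print(parse_liberal(" user = doug extra"))  # liberal invents a reading
try:
    parse_strict(" user = doug extra")      # strict refuses to guess
except ValueError as err:
    print(err)
```

Two liberal parsers facing the same malformed line can "repair" it differently, which is exactly the browser-disagreement problem described above.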
Today's NYT has an article about fake knockoffs of George Orwell
for sale on Amazon. It cites an edition of "Animal Farm"
apparently pirated by low-grade OCR, autocorrected and never
proofread. One of the many gaffes is that every instance of
"iv" became "ChapterIV", as in "prChapterIVacy".
I didn't like some Lisp systems' DWIM (do what I mean) when I
first heard about the feature, and I like it even less 40-some
years on. I would probably have remonstrated with Rob had I
realized the shell was doing it.
Doug
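For the curious, the kind of correction being described ("blot" silently becoming "blit") can be sketched like this. This is a hypothetical Python reconstruction for illustration only; the actual v8 shell logic is not reproduced here:

```python
# Toy model of a cd "autocorrect": if the named directory doesn't exist,
# accept a unique candidate differing by a single character, as in the
# /usr/blot -> /usr/blit example. Hypothetical reconstruction, not the
# real v8 shell code.

def one_substitution_away(a, b):
    """True if a and b have equal length and differ in exactly one spot."""
    if len(a) != len(b):
        return False
    return sum(1 for x, y in zip(a, b) if x != y) == 1

def correct(name, existing):
    """Return name if it exists, else a unique near-miss, else None."""
    if name in existing:
        return name
    candidates = [e for e in existing if one_substitution_away(name, e)]
    return candidates[0] if len(candidates) == 1 else None

dirs = ["blit", "bin", "lib"]
print(correct("blot", dirs))  # corrected to "blit"
print(correct("xyz", dirs))   # no unique near-miss: None
```

Doug's objection stands regardless of mechanism: the hazard is not the matching but the silence with which the substitution is made.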
>What's funny is that in doing the work to get 'se' running on Georgia
>Tech's Vax, I had to learn vi. By the time I was done, vi had become
>my main editor and had burned itself into my finger's ROMs.
I use ed/se occasionally for simple tasks, vim frequently because it loads fast, and emacs for all bigger projects, besides LiteIDE for golang.
I have always suspected that the brevity of the Unix command names was strongly
influenced by the clunky keyboards on the teletypes that were being used. Can
anyone confirm, deny, and/or comment on this?
-r
On 1/17/20, Brantley Coile <brantley(a)coraid.com> wrote:
> what he said.
>
>> On Jan 17, 2020, at 6:20 PM, Rob Pike <robpike(a)gmail.com> wrote:
>>
>> Plan 9 is not a "single-system-image cluster".
>>
>> -rob
>>
>
>
I guess SSI isn't the right term for Plan 9 clustering since not
everything is shared, although I would still say it has some aspects
of SSI. I was talking about systems that try to make a cluster look
like a single machine in some way, even if they don't share everything.
(I'm not sure if there's a better term for such systems besides the
rather vague "distributed", which could mean anything from full SSI to
systems that allow transparent access to services/devices on other
machines without trying to make the cluster look like a single system.)
[x-posting to COFF]
Idea: anybody interested in a regular video chat? I was thinking of
one that progresses(*) through three different timezones (Asia/Aus/NZ,
then the Americas, then Europe/Africa) so that everybody should be
able to get to two of the three timezones.
(* like a progressive dinner)
30-60 minutes each one, general old computing. Perhaps a guest speaker
now and then with a short presentation. Perhaps a theme now and then.
Perhaps just chew the fat, shoot the breeze as well.
Platform: Zoom or I'd be happy to set up a private Jitsi instance.
Something else?
How often: perhaps weekly or fortnightly through the three timezones,
so it would cycle back every three or six weeks.
Comments, suggestions?!
Cheers, Warren
> From: Dave Horsfall <dave(a)horsfall.org>
> [ Getting into COFF territory, I think ]
I'm sending this reply to TUHS since the message I'm replying to has some
errors, and I'd like for the corrections to be in the record close by.
> On Thu, 30 Jan 2020, Clem Cole wrote:
>> The way they tried to control it was to license the bus interface chips
>> (made privately by Western Digital for them IIRC but were not available
>> on the open market).
Although DEC did have some custom chips for QBUS interfacing, they didn't
always use them (below). And for the UNIBUS, the chips were always, AFAIK,
open market (and the earliest ones may have predated the UNIBUS).
E.g. the M105 Address Selector, a single-width FLIP CHIP, used in the earliest
PDP-11's when devices such as the RK11-C, RP11 and TM11 were made out of a
mass of small FLIP CHIPS, used SP380A's for its bus receivers and 8881's for
transmitters.
On the QBUS, the KDF11-A and KDJ11-A CPU cards used AMD 2908's as bus
transceivers, even though DEC had its own custom chips. The KDF11-A also
used DS8640's and DS8641's (transmitters and receivers), and also an 8881!
(The UNIBUS and QBUS were effectively identical at the analog level, which is
why a chip that old was still in use.)
>> If I recall it was the analog characteristics that were tricky with
>> something like the BUS acquisition for DMA and Memory timing, but I
>> admit I've forgotten the details.
One _possibility_ for what he was talking about was that it took DEC a while
to get a race/metastability issue with daisy-chained bus grant lines under
control. (The issue is explained in some detail here:
https://gunkies.org/wiki/Bus_Arbitration_on_the_Unibus_and_QBUS
and linked pages.) This can been seen in the myriad of etch revisions for the
M782 and related 'bus grant' FLIP CHIPs:
https://gunkies.org/wiki/M782_Interrupt_Control
By comparison, the M105 had only 3 through its whole life!
It wasn't until the M7821 etch D revision, which came out in 1977, almost a
decade after the first PDP-11's appeared, that they seemed to have absorbed
that the only 'solution' to the race/metastability issue involved adding
delays.
In all fairness, the entire field didn't really appreciate the metastability
issue until the LINC guys at WUSTL did a big investigation of it, and then
started a big campaign to educate everyone about it - it wasn't DEC being
particularly clueless.
> Hey, if the DEC marketoids didn't want 3rd-party UNIBUS implementations
> then why was it published?
Well, exactly - but it's useful to remember the differing situation for DEC
from 1970 (first PDP-11's) and later.
In 1970 DEC was mostly selling to scientists/engineers, who wanted to hook up
to some lab equipment they'd built, and OEM's, who often wanted to use a mini
to control some value-added gear of their own devising. An open bus was really
necessary for those markets. Which is why the 1970 PDP-11/20 manual goes into
a lot of detail on how to interface to the PDP-11's UNIBUS.
Later, of course, they were in a different business model.
Noel
Talking of editors...
Once I learned WordStar on old CP/M (before that it was mostly line
editing), and soon after, other editors that supported the WordStar key
combinations, I got hooked on those. Joe is, to date, one of my
favorites.
On ancient UNIX, my editor of choice was 's' from Software Tools, its
main advantage being that it didn't require curses. Then we got VMS and
'eve' and that took over for a while (though I never took advantage of
all its power), mostly until I ported 's' and 'joe'.
Then came X, and when nedit was released, I was hooked on it. It has
been for decades almost the only one that could do block selection 'a
la' RAND editor.
I have been struggling to continue using it despite its lack of support
for UTF, trying various projects spun off nedit, until I recently
discovered xnedit, which is an update available on GitHub and is again
all I need, with support for UTF8, some minor UI improvements and
support for modern fonts.
Now, I still use 's' for ancient Unix emulators, 'joe' for the
command line and 'xnedit' for X.
j
--
Scientific Computing Service
Solving all your computer needs for Scientific
Research.
http://bioportal.cnb.csic.es
I’ve seen the archives of Atari System V Release 4 for the TT030, and the scanned user and developer manuals. Has anything else been preserved, e.g. the installation tapes and any other manuals?
Is there even a full accounting of what was in the box and what shipped afterwards (patches etc.)?
-- Chris
> Does anybody have or know of a list of system calls that describes
> when and what version of UNIX (and descendents) they were added?
Hardly a week goes by in which I don't refer to the attached
condensed listing of all the man pages in v1-v9, taken from
my "Research Unix Reader". It casts a much narrower net than
Diomidis Spinellis's repository, but it takes no clicking to
look things up--just a quick grep.
Doug
[ Getting into COFF territory, I think ]
On Thu, 30 Jan 2020, Clem Cole wrote:
> BTW: Dave story is fun, but I think a tad apocryphal. He's right that
> DEC marketing was not happy about people using it, but it was well
> spec'ed if you had CPU schematics. They way they tried to control it
> was to license the bus interface chips (made privately by Western
> Digital for them IIRC but were not available on the open market). IIRC
> if you did not use DEC's chips, you could have issues if you >>similar<<
> function chips from National Semi. I remember Ken O'Munhundro giving a
> talk at a USENIX (while he was CEO of Able) talking about 'be careful
> with foreign UNIBUS implementations.' If I recall it was the analog
> characteristics that were tricky with something like the BUS acquisition
> for DMA and Memory timing, but I admit I've forgotten the details.
Ah; the chips could explain it. I can't remember where I heard the story,
but it was likely in ";login:" or some place. Hey, if the DEC marketoids
didn't want 3rd-party UNIBUS implementations then why was it published?
> I think you are confusing VAX's SBI with UNIBUS. With the Vax, unlike
> PDP-11, the systems did not come with complete schematics for all
> boards. So to design for the SBI you had to reverse engineer the CPU
> and Memory boards. DEC having successfully won the CalData suit, went
> after Systems Industries who was the first to build SBI controllers.
> DEC lost, but the truth was that because their work had been reverse
> engineering, SI was close but not 100% right and they had a number of
> issues when the boards first hit the street, particularly with UNIX
> which did a better job of overlapped I/O than VMS did. At UCB we had a
> logic analyzer in one of the 780s at all times, and the phone number of
> the SI engineers. We eventually helped them put out a couple ECO's
> that made the original boards work in practice much better.
No; it was definitely UNIBUS (I wasn't aware of the SBI at the time).
As for overlapped seeks, when they were implemented in Unix it broke the
RK-11 controller, and DEC pointed the finger at Unix (of course) since
their own gear worked. To cut a long story short, they were forced to use
some fancy diagnostic (DECEX?) which hammered everything at the same time,
and the problem showed up. Turned out that their simpler diagnostics did
not test for overlapped seeks, because they knew that it didn't work; out
came the FE to modify the controller...
> BTW: My friend Dave Cane lead the BI at DEC after finishing up the
> VAX/750 project (he had designed the SBI for 780 before that). In
> fact, the BI was >>supposed<< to be 'open' like Multibus and VME and all
> chips were supposed to be from the merchant market. But at the last
> minute, DEC marketing refused and locked down the specs/stopped shipping
> schematics with the new systems destined to use BI. Dave was so pissed,
> he left DEC to found Masscomp and design the MC500 (using the
> Multibus).
Yet another reason why DEC went under, I guess...
-- Dave
Greetings,
Is this issue online? I may have a copy buried in my boxes of books, and am
on the road. I'd like to read the article on portability and/or the one on
performance. One of those has a table of internal vs external release names
/ dates. archive.org and elsewhere only have copies through 1983. I discovered I
might have it this morning 20 minutes before I had to leave for the airport
for another talk. :(
Thanks for any help you can provide....
Warner
> From: Warner Losh
> this predates everything except Whirlwind which I can't find a paper for.
Given the 'Whirlwind is a ringer' comment, I assume this:
https://en.wikipedia.org/wiki/Whirlwind_I
is what they mean.
Pretty interesting machine, if you study its instruction set, BTW; with no
stack, subroutines are 'interesting'.
Noel
> From: Clem Cole
> So WD designs and builds a few LSI-11 as a sales demo of what you could
> do
> ...
> he put it on the QBUS which DEC could not lock up because they did not
> create it as WD had.
Wow! WD created the QBUS? Fascinating. I wonder if DEC made any changes to the
QBUS between the original demo WD boards and the first DEC ones? Are there any
documents about the WD original still extant, do you know?
(FWIW, it seems that whoever did the QBUS interrupt cycle had heard about the
metastability issues when using a flop to do the grant-passing arbitrations;
see here for more:
https://gunkies.org/wiki/Bus_Arbitration_on_the_Unibus_and_QBUS#QBUS_Interr…
DEC had previously bent themselves into knots trying to solve it on the UNIBUS:
https://gunkies.org/wiki/M782_Interrupt_Control#Revisions
so it would be interesting to know if it was WD or DEC who did the DIN thing to
get rid of it on the QBUS.)
Noel
> Always use '\&' (a non-printing, zero width character) to
> make it clear to the software, that the _function_ of the
> character next to it, is neither a sentence-terminating nor
> a control one.
It is unfortunate that such advice has to be given. One should
not have to defend against stupid AI. This is one of only two
really unfortunate design choices (in my opinion) in original
[nt]roff. (The other is beginning a new page when the vertical
position reaches--as distinct from definitively passing--the
bottom of a page.)
If AI is used, it should be optional. I happen not to like
double-width intersentence space, but it keeps getting foisted
on me essentially at random. Instead of fattening the manual
with annoying duties like that quoted above, I suggest fattening
it with a new request, "turn on/off doubling of spaces between
apparent sentences", or "put at least the specified space
between apparent sentences". One can still use \&, but then
it's for a chosen purpose, not just defense against gremlins.
Incidentally, "next to" in the quoted advice must be read with
care. Sometimes it means before, sometimes after.
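For concreteness, the two readings of "next to" look like this (illustrative fragments of my own, not from the manual):

```roff
.\" \& before a leading dot stops the line being read as a control line:
\&.profile is read by the shell at login.
.\" \& after an abbreviation's period suppresses end-of-sentence
.\" detection, so no double-width intersentence space is inserted:
See Vol.\& 2 of the manual for details.
```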
------------------------------------------------------------
In this old AI-induced trouble I see a cautionary lesson for
paragraph-based line breaking. fmt(1) is an existing program
that tries to do this. On unjustified text (i.e. all text
handled by fmt) it produces paragraphs of different "optimal"
widths, which can be even more distracting than unusually
ragged right margins.
Doug
All, I was asked by Max to pass this query on to the TUHS list. Can
you e-mail back to Max directly. Thanks, Warren
----- Forwarded message from Maximilian Lorlacks <maxlorlax(a)protonmail.com> -----
Date: Sun, 26 Jan 2020 19:46:38 +0000
From: Maximilian Lorlacks <maxlorlax(a)protonmail.com>
To: "wkt(a)tuhs.org" <wkt(a)tuhs.org>
Subject: Fwd request: Text of Caldera's free licenses for UnixWare/OpenServer
Hi Warren,
Could you please forward this to the TUHS list? I'm not a subscriber
to the list, but perhaps someone there might know something about
this.
In 2001 and early 2002 (I can't believe it's already been almost two
decades), Caldera Systems, Inc. offered non-commercial licenses at no
cost for OpenServer 5.0.6, UnixWare 7.1(.1?) and Open UNIX 8. However,
the web archive could not capture the actual agreement hidden behind
the entrypoint form. I failed to get a license during that time since I
wasn't really interested in UNIX at that point, but in the interest of
historical preservation, I'm interested if anyone got those licenses
from back then and if so, if they've saved the actual license agreement
text. I'm interested in what it says. I'm also curious about whether
the license keys from back then still work with Xinuos's new
registration platform, but it's probably too much to ask for people to
test that.
Please note that I am *not* trying to revive the trainwreck that is
the issue of the validity and scope of the Ancient UNIX license. The
only way to properly resolve that would be a letter signed from Micro
Focus's legal department, but they've made it exceedingly clear that
they will persistently ignore any and all attempts to elicit any kind
of response regarding Ancient UNIX.
Cheers,
Max
----- End forwarded message -----
> It might be worth mentioning that the Cambridge Ring (in the UK) used a very
> similar idea: a head end circulated empty frames which stations could fill in.
I'm quite sure the similarity is not accidental. Fraser began the Spider
project almost immediately upon moving from Cambridge to Bell Labs.
Doug
> > On Jan 26, 2020, at 11:28 AM, arnold at skeeve.com wrote:
> >
> > "Jose R. Valverde via TUHS" <tuhs at minnie.tuhs.org> wrote:
> >
> >> Talking of editors...
> >>
> >> On ancient UNIX, my editor of choice was 's' from Software Tools, its
> >> main advantage being that it didn't require curses.
> >
> > That editor was from "A Software Tools Sampler" by Webb Miller, not
> > "Software Tools" by Kernighan and Plauger.
> Well, that would explain why I couldn’t find it. Do you have softcopy of the editor source? I’d really like a screen editor for v7…. Adam
So do I.
Editor source seems to be here:
https://github.com/udo-munk/s
If you are doing a build for V7, I’d be interested in hearing the results.
I noted with much pleasure that the main bitsavers site is back up, and that at some point it has added a full set of scans of “Datamation”. The Feb 1975 issue contains an article from Dr. Fraser about Spider and the network setup in Murray Hill early in 1975:
http://bitsavers.org/pdf/datamation/197502.pdf
For ease of reference I have also temporarily put the relevant 4 pages of the issue here:
https://gitlab.com/pnru/spider/blob/master/spider.pdf
I find the graphic that shows how Spider connected machines and departments the most interesting, as it helps understand how the pros and cons of Arpa Unix might have been perceived at that time.
The more I read, the more confused I become whether the “Pierce loop” was a precursor to “Spider” or a parallel effort.
The facts appear to be that John Pierce (https://en.wikipedia.org/wiki/John_R._Pierce) submitted his paper to BSTJ in December 1970, essentially describing a loop network with fixed size short datagrams, suggesting T1 frames. It is quite generic. In February 1971 W.J. Kropfl submits a paper that describes an implementation of the ideas in the Pierce paper with actual line protocols and a TIU. In October 1971 C.H. Coker describes in a 3rd paper how to interact with this TIU from a H516 programming perspective.
Several Spider papers mention that the project was started in 1969 and that the first Spider link was operational in 1972. The team appears to be entirely different: the h/w is credited to Condon and Weller, and the s/w to Fraser, Jensen and Plaugher. The Spider TIU is much more complex (200 TTL chips vs. 50 in the Kropfl TIU). The main reason for that - at first glance - appears to be that in the Spider network the TIU handled guaranteed in-order delivery (i.e. managed time-outs and retransmissions), whereas in the Kropfl implementation this was left to the hosts.
It would seem logical that the latter was an evolution of the former, having been developed at the same site at the same time. A 1981 book seems to take that view as well: “Local Computer Network Technologies” by Carl Tropper includes the text "Spider Spider is an experimental data communications network which was built at the Bell Telephone Laboratories (Murray Hill, New Jersey) under the direction of A. G. Fraser. A detailed description of the network is given by Fraser [FRAS74]. This network was built with the notion of investigating Pierce's idea of ...” The chapter is titled “The Pierce loop and its derivatives”. This is a much as Google will give me - if somebody has the book please let me know.
On the other hand, the Spider papers do not mention the Kropfl network or Pierce’s paper at all. The graphic in Datamation appears to show two Kropfl loops as part of the network setup. Yet, this is described in the accompanying text as "4. Honeywell 516: Supports research into communications techniques and systems. The machine has a serial loop I/O bus threaded through several labs at Murray Hill. Equipment under test is connected either directly to the bus or to a minicomputer which is then connected to the bus. Also available are graphics display terminals and a device that can write read-only memory chips.” Maybe this is a different bus, but if it is the same as the Kropfl loop, to call it a “serial loop I/O bus” suggests it was a parallel effort unrelated to Spider.
Does anybody on the list recall whether Spider was a parallel effort or a continuation of the earlier work?
The anecdote below came from Nils-Peter Nelson, who as a
manager in the computer center bought and installed the
Labs' biggest Unix machine, a Cray 2. He also originated
the string.h package.
Doug
Dennis told me he was going to a class reunion at Harvard.
Me: "I guess you're the most famous member of your class."
dmr: "No, the Unabomber is."
> From: Paul Ruizendaal
> a loop network with fixed size short datagrams
It might be worth mentioning that the Cambridge Ring (in the UK) used a very
similar idea: a head end circulated empty frames which stations could fill in.
I think it started slightly later, though. Material about it is available
online.
Noel
> Ugh. Memory lane has a lot of potholes. This was a really long time ago.
Many thanks for that post - really interesting!
I had to look up "Pierce Network", and found it described in the Bell Journal:
https://ia801903.us.archive.org/31/items/bstj51-6-1133/bstj51-6-1133_text.p…
In my reading the Spider network is a type of Pierce network.
However, the network that you remember is indeed most likely different from Spider:
- it was coax based, whereas the Spider line was a twisted pair
- there was more than one, whereas Spider only ever had one (operational) loop
Condon and Weller are acknowledged in the report about Spider as having done many of its hardware details. The report discusses learnings from the project and having to tune repeaters is not among them (but another operational issue with its 'line access modules’ is discussed).
All in all, maybe these coax loops were pre-cursors to the Spider network, without a switch on the loop (“C” nodes in the Pierce paper). It makes sense to first try out the electrical and line data protocol before starting work on higher level functions.
I have no idea what a GLANCE G is...
The first edition ran on pdp-11, not pdp-7.
Tukey buttered parsnips at the labs, but Brits did
so several centuries before.
Contrary to urban legend, patent was not invoked to
justify the Unix pdp-11; word-processing was. The
quiz does not make this mistake.
The phototypesetter did not smell. The chemicals
for (externally) developing photo paper did.
Shahpazian is Dick Shahpazian; Maranzano is Joe Maranzano.
cagbef addresses out of bounds.
I appreciate Rob's discretion about the Waterloo theft.
Doug
Hi folks,
I've been adding a history subsection to the groff_man(7) page for the
next groff release (date TBD) and thanks to the TUHS archives I've been
able to answer almost all the questions I had about the origins of the
man(7) language's macros and registers (number and string).
I'm inlining my findings in rendered and source form below, but there's
one feature I haven't been able to sort out--where did .SB (small bold)
come from? The oldest groff release I can find online is 1.02 (June
1991), and .SB is already there, but I can't find it anywhere else. Is
it a GNUism? Did it perhaps appear in a proprietary Unix first?
I'm aware of Kristaps Dzonsons's history of Unix man pages[1], but
unfortunately for me that is more of a history of the *roff system(s),
and does not have much detail about the evolution of the man(7) macro
language itself.
If you can shed any light on this, I'd appreciate it!
History
Version 7 Unix (1979) supported all of the macros described in this
page not listed as extensions, except .P, .SB, and the deprecated .AT
and .UC. The only string registers defined were R and S; no number
registers were documented. .UC appeared in 3BSD (1980) and .P in AT&T
Unix System III (1980). 4BSD (1980) added lq and rq string registers.
4.3BSD (1986) added .AT and AT&T's .P. DEC Ultrix 11 (1988) added the
Tm string register.
.\" ====================================================================
.SS History
.\" ====================================================================
.
Version\~7 Unix (1979) supported all of the macros described in this
page not listed as extensions,
except
.BR .P ,
.BR .SB ,
.\" .SS was implemented in tmac.an but not documented in man(7).
and the deprecated
.BR .AT
and
.BR .UC .
.
The only string registers defined were
.B R
and
.BR S ;
no number registers were documented.
.
.B .UC
appeared in 3BSD (1980) and
.B .P
in AT&T Unix System\~III (1980).
.
4BSD (1980) added
.\" undocumented .VS and .VE macros to mark regions with 12-point box
.\" rules (\[br]) as margin characters, as well as...
.B lq
and
.B rq
string registers.
.
4.3BSD (1986) added
.\" undocumented .DS and .DE macros for "displays", which are .RS/.RE
.\" wrappers with filling disabled and vertical space of 1v before and
.\" .5v after, as well as...
.B .AT
and
AT&T's
.BR .P .
.
DEC Ultrix\~11 (1988) added the
.B Tm
string register.
.
.\" TODO: Determine provenance of .SB.
Regards,
Branden
[1] https://manpages.bsd.lv/history.html
On 1/22/20, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
> Pretty interesting machine, if you study its instruction set, BTW; with no
> stack, subroutines are 'interesting'.
Another machine family like that was the CDC 6x00 and 7x00 machines of
the late 1960s and early 1970s.
I worked on a CDC 6400 for a few years. A call was done by storing
the return address in the first word of the called routine, and
jumping to its second word. The return was done with an indirect jump
through the first word.
That was fine for Fortran, which at the time had no concept of
recursion. However, Urs Ammann implemented a compiler for Niklaus
Wirth's Pascal language on a CDC 6400 (or 6600) in Zurich, and he had
to simulate a stack. See
On Code Generation in a PASCAL Compiler
Software --- Practice and Experience 7(3) 391--423 May/June 1977
https://doi.org/10.1002/spe.4380070311
I have read that article in the past, but don't have download access
from our academic library to get a copy to refresh my memory.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Kind of scary what's in my basement. For those of you building UNIX
workstations in the early days, I have a big fat notebook full of
Weitek floating point chip specs, many of which are marked as preliminary.
Also a set of CORBA specs. Again, low-hanging fruit that's getting
recycled unless anyone has a use for them.
In the not completely sure that I want to part with them yet for some
strange reason, I have a set of SunOS manuals.
Also, if anyone collects old hardware I have a SparcStation 20 with a
slightly modified SunOS sitting around and an Ultra 60 Solaris box.
Jon
There is more in that issue of BSTJ, and indeed it seems this was a precursor.
https://ia801905.us.archive.org/25/items/bstj51-6-1147/bstj51-6-1147_text.p…
https://ia801603.us.archive.org/0/items/bstj51-6-1167/bstj51-6-1167_text.pdf
The first paper makes mention of repeaters starting to self oscillate, and a redesign being underway.
There is a possibility that a Unix PDP11 was connected to this earlier network prior to Spider existing, in which case the accepted quiz answer would be wrong.
>> Ugh. Memory lane has a lot of potholes. This was a really long time ago.
>
> Many thanks for that post - really interesting!
>
> I had to look up "Pierce Network", and found it described in the Bell Journal:
> https://ia801903.us.archive.org/31/items/bstj51-6-1133/bstj51-6-1133_text.p…
>
> In my reading the Spider network is a type of Pierce network.
>
> However, the network that you remember is indeed most likely different from Spider:
> - it was coax based, whereas the Spider line was a twisted pair
> - there was more than one, whereas Spider only ever had one (operational) loop
>
> Condon and Weller are acknowledged in the report about Spider as having done many of its hardware details. The report discusses learnings from the project and having to tune repeaters is not among them (but another operational issue with its 'line access modules’ is discussed).
>
> All in all, maybe these coax loops were pre-cursors to the Spider network, without a switch on the loop (“C” nodes in the Pierce paper). It makes sense to first try out the electrical and line data protocol before starting work on higher level functions.
>
> I have no idea what a GLANCE G is...
Was looking for my DomainOS manuals and came across a fat notebook
containing the DECNET Phase III spec. Anyone want it? Not anything
that I need to keep and low-hanging fruit on the decluttering list.
Jon
> I have vague memories here that maybe Heinz can help with if his are any better.
> I believe that Sandy played a part in "the loop" or "the ring" or whatever it
> was called that we had connecting our Honeywell 516 to peripherals. I do
> remember the 74S00 repeaters because of the amount of time that Dave Weller
> spent tuning them when the error rate got high. Also, being a loop, Joe
> Condon used to pull his connectors out of the wall whenever people weren't
> showing up to a meeting on time. I don't know whether our network was a
> forerunner to the spider network.
It most likely was Spider - it became operational in 1972. The visit report that I linked to earlier also says:
"The current system contains just one loop with the switching computer (TEMPO I),
four PDP-11/45 computers, two Honeywell 516 computers, two DDP 224 computers,
and one each of Honeywell 6070, PDP-8 and PDP-11/20. In fact many of these are
connected in turn to other items of digital equipment.”
It would be interesting to know more about the H516’s and Spider, any other recollections?
I can answer some of the below, as I was looking into that a few years ago.
> 81. Q: What was the first Unix network?
> A: spider
> You thought it was Datakit, didn't you? But Sandy Fraser had an earlier
> project.
>
> When did Alexander G Fraser's spider cell network happen? For that matter,
> when did Datakit happen? I can't find references to either start date on
> line (nor anything on spider except for references to it in Dr Fraser's
> bio). I can find references to Datakit in 1978 or so.
Spider was designed between 1969 and 1974 - the final lab report (#23) dates from December 1974. It was based around a serial loop running at T1 signalling speed (~1.5 MHz). Here is a video recorded by Dr. Fraser about it: https://www.youtube.com/watch?v=ojRtJ1U6Qzw (first half is about Spider, second half about Datakit).
It connected to its hosts via a (discrete TTL-based) microcontroller or “TIU” and seems to have been connected almost immediately to Unix systems: the oldest driver I have been able to locate is in the V4 tree (https://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys/dmr/tdir/tiu.c) It used a DMA-based parallel interface into the PDP11. As such, it seems to have been much faster than the typical Datakit connection later - but I know too little about Datakit to be sure.
There is an interesting visit report from 1975 that discusses some of the stuff that was done with Spider here: https://stacks.stanford.edu/file/druid:rq704hx4375/rq704hx4375.pdf
Beyond those experiments I think Spider usage was limited to file serving (’nfs’ and ‘ufs’) and printing (’npr’). It would seem logical that it was used for remote login, but I have not found any traces of such usage. Same for email usage.
From what little I know, I think that Datakit became operational in a test network in 1979 and as a product in 1982.
> I thought the answer was "ARPANET" since we had a NCP on 4th edition Unix
> in late 1974 or early 1975 from the University of Illinois dating from that
> time (the code in TUHS appears to be based on V6 + a number of patches).
“Network Unix” (https://www.rfc-editor.org/rfc/rfc681.html) was written by Steve Holmgren, Gary Grossman and Steve Bunch in the last 3 months of 1974. To my best knowledge they used V5 and migrated to V6 as it came along. I think they were getting regular update tapes, and they implemented their system as a device driver (plus userland support) to be able to keep up with the steady flow of updates. Greg Chesson was also involved with this Arpanet Unix.
As far as I can tell, Arpanet Unix saw fairly wide deployment within the Arpanet research community, also as a front end processor for other systems.
A few years back I asked on this list why “Network Unix” was not more enthusiastically received by the core Unix development team and (conceptually) integrated into the main code base. I understood the replies as that (i) people were very satisfied with Spider; and (ii) being part of Bell they wanted a networking system that was more compatible with the Bell network, i.e. Datakit.
==
In my opinion both “Spider Unix” and “Arpanet Unix” threw a very long conceptual shadow. From Spider onwards, the Research systems viewed the network as a device (Spider), that could be multiplexed (V8 streams) or even mounted (Plan9). The Arpa lineage saw the network as a long distance bidirectional pipe, with the actual I/O device hidden from view; this view persists all the way to 4.2BSD and beyond.
I often wonder if it was (is?) possible to come up with a design with the conceptual clarity of Plan9, but organised around the “network as a pipe” view instead.
> Because we can't ask Greg, sadly, I think Holmgren is the last around that would know definitively and I've personally lost track of him.
Steve Holmgren and the Arpanet Unix team are still around (at least they were 3 years ago). I just remembered that I put some of my notes & findings in a draft wiki that I wanted to develop for TUHS - but I never finished it:
http://chiselapp.com/user/pnr/repository/TUHS_wiki/wiki?name=early_networki…
The recent find of CSRG report 3 and 4 may be the incentive I needed to complete my notes about 4.1a, 4.1c and 4.2BSD. However, still looking for the actual source tape to 4.1a - the closest I have is its derivative in 2.9BSD (https://minnie.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/net)
Apologies that this isn't specifically a Unix question, but I
was wondering if anyone had insight into running domain/OS and its
relationship to Plan 9 (assuming there is any).
One of my early mentors was a former product person at Apollo in Mass.
and was nice enough to tell me all sorts of war stories working there.
I had known about Plan9 at the time, and from what he described to me
about domain/OS it sounded like there was lots of overlap between the
two from a high level design perspective at the least. I've always been
keen to understand if domain/OS grew out of former Bell Labs folks, or
how it got started.
As an aside, he gifted me a whole bunch of marketing collateral from
Apollo (from before the HP acquisition) that I'd be happy to share if
there is any historical value in that. At the time I was a
video/special effects engineer and was amazed at how beneficial having
something like domain/OS or Plan9 would have been for us, it felt we
were basically trying to accomplish a lot of the same goals by duct
taping a bunch of Irix and Linux systems together.
Cheers,
-pete
--
Pete Wright
pete(a)nomadlogic.org
@nomadlogicLA
My memory failed me: the part numbers were Z8001/Z8002 for the original and Z8003/Z8004 for the revised chips (segmented/unsegmented).
Hence it is unlikely that the Onyx had any form of demand paging (other than extending the stack in PDP11-like fashion).
——
A somewhat comparable machine to the Onyx was the Zilog S8000. It ran “Zeus”, which was also a Unix version:
https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/zilog/s8000/
Instead of the MMU described below it used the Zilog segmented MMU chips, 3 of them. These could be used to give a plain 16 bit address space divided in 3 segments, or could be used with the segmented addresses of the Z8001. The approach used by Onyx seems much cleaner to me, and reminiscent of the MMU on a DG Eclipse.
I think the original chips were the Z8000 (unsegmented) and the Z8001 (segmented). These could not abort/restart instructions and were replaced by the Z8002 (unsegmented) and Z8003 (segmented). On these chips one could effectively assert reset during a fault and this would leave the registers in a state where a software routine could roll back the faulted instruction.
If the sources to the Onyx Unix survived, it would be interesting to see if it used this capability of the Z8002 and implemented a form of demand paging.
Last but not least, the Xenix overview I linked earlier (http://seefigure1.com/images/xenix/xenix-timeline.jpg) shows Xenix ports to 4 other Z8000 machines: Paradyne, Compucorp, Bleasedale and Kontron; maybe all of these never got to production.
> Message: 7
> Date: Tue, 21 Jan 2020 21:32:51 +0000
> From: Derek Fawcus <dfawcus+lists-tuhs(a)employees.org>
> To: The Unix Heritage Society mailing list <tuhs(a)tuhs.org>
> Subject: [TUHS] Onyx (was Re: Unix on Zilog Z8000?)
> Message-ID: <20200121213251.GA25322(a)clarinet.employees.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Jan 21, 2020 at 01:28:14PM -0500, Clem Cole wrote:
>> The Onyx box predated all the 68K and later Intel or other systems.
>
> That was a fun bit of grubbing around courtesy of a bitsavers mirror
> (https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/)
>
> It seems they started with a board based upon the non-segmented Z8002
> and only later switched to using the segmented Z8001. In the initial
> board, they created their own MMU:
>
> Page 6 of: https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/Onyx_C…
>
> Memory Management Controller:
>
> The Memory Management Controller (MMC) enables the C8002 to perform
> address translation, memory block protection, and separation of
> instruction and data spaces. Sixteen independent map sets are
> implemented, with each map set consisting of an instruction map and
> a data map. Within each map there are 32 page registers. Each page
> register relocates and validates a 2K byte page. The MMC generates
> a 20 bit address allowing the C8002 to access up to one Mbyte of
> physical memory.
>
> So I'd guess the MMC was actually programmed through I/O instructions
> to io space, and hence preserved the necessary protection domains.
>
> Cute. I've had a background interest in the Z8000 (triggered by reading
> a Z80000 datasheet around 87/88), and always thought about using
> the segmented rather than unsegmented device.
>
> The following has a bit more info about the version of System III
> ported to their boxes:
>
> https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/UNIX_3…
>
> DF
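The arithmetic in that quoted description works out neatly: 2K-byte pages give an 11-bit offset, 32 pages per map a 5-bit index, and a 20-bit physical address leaves 9 bits for the frame number (1 MB / 2 KB = 512 frames). A sketch of the translation (the register layout and names here are my assumptions, purely to show the bit arithmetic, not the Onyx manual's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the Onyx C8002 MMC arithmetic quoted above:
 * 32 page registers per map, 2K-byte pages, 20-bit physical space. */
#define PAGE_SHIFT    11                /* 2K pages -> 11-bit offset */
#define PAGES_PER_MAP 32                /* 5-bit page index          */

struct mmc_map {
    uint16_t page_reg[PAGES_PER_MAP];   /* 9-bit frame number each   */
};

static uint32_t translate(const struct mmc_map *m, uint16_t vaddr)
{
    unsigned page = vaddr >> PAGE_SHIFT;              /* top 5 bits  */
    unsigned off  = vaddr & ((1u << PAGE_SHIFT) - 1); /* low 11 bits */
    /* 20-bit physical = 9-bit frame number | 11-bit page offset */
    return ((uint32_t)(m->page_reg[page] & 0x1FF) << PAGE_SHIFT) | off;
}
```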
A somewhat comparable machine to the Onyx was the Zilog S8000. It ran “Zeus”, which was also a Unix version:
https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/zilog/s8000/
Instead of the MMU described below it used the Zilog segmented MMU chips, 3 of them. These could be used to give a plain 16 bit address space divided in 3 segments, or could be used with the segmented addresses of the Z8001. The approach used by Onyx seems much cleaner to me, and reminiscent of the MMU on a DG Eclipse.
I think the original chips were the Z8000 (unsegmented) and the the Z8001 (segmented). These could not abort/restart instructions and were replaced by the Z8002 (unsegmented) and Z8003 (segmented). On these chips one could effectively assert reset during a fault and this would leave the registers in a state where a software routine could roll back the faulted instruction.
If the sources to the Onyx Unix survived, it would be interesting to see if it used this capability of the Z8002 and implemented a form demand paging.
Last but not least, the Xenix overview I linked earlier (http://seefigure1.com/images/xenix/xenix-timeline.jpg) shows Xenix ports to 4 other Z800 machines: Paradyne, Compucorp, Bleasedale and Kontron; maybe all of these never got to production.
> Message: 7
> Date: Tue, 21 Jan 2020 21:32:51 +0000
> From: Derek Fawcus <dfawcus+lists-tuhs(a)employees.org>
> To: The Unix Heritage Society mailing list <tuhs(a)tuhs.org>
> Subject: [TUHS] Onyx (was Re: Unix on Zilog Z8000?)
> Message-ID: <20200121213251.GA25322(a)clarinet.employees.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Jan 21, 2020 at 01:28:14PM -0500, Clem Cole wrote:
>> The Onyx box redated all the 68K and later Intel or other systems.
>
> That was a fun bit of grubbing around courtesy of a bitsavers mirror
> (https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/)
>
> It seems they started with a board based upon the non-segmented Z8002
> and only later switched to using the segmented Z8001. In the initial
> board, they created their own MMU:
>
> Page 6 of: https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/Onyx_C…
>
> Memory Management Controller:
>
> The Memory Management Controller (MMC) enables the C8002 to perform
> address translation, memory block protection, and separation of
> instruction and data spaces. Sixteen independent map sets are
> implemented, with each map set consisting of an instruction map and
> a data map. Within each map there are 32 page registers. Each page
> register relocates and validates a 2K byte page. The MMC generates
> a 20 bit address allowing the C8002 to access up to one Mbyte of
> physical memory.
>
> So I'd guess the MMC was actually programmed through I/O instructions
> to io space, and hence preserved the necessary protection domains.
>
> Cute. I've had a background interest in the Z8000 (triggered by reading
> a Z80000 datasheet around 87/88), and always thought about using
> the segmented rather than unsegmented device.
>
> The following has a bit more info about the version of System III
> ported to their boxes:
>
> https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/UNIX_3…
>
> DF
[Resending as this got squashed a few days ago. Jon, sorry for the
duplicate. Again.]
On Sun, Jan 12, 2020 at 4:38 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
> [snip]
> So I think that the point that you're trying to make, correct me if I'm
> wrong,
> is that if lists just knew how long they were you could just ask and that
> it
> would be more efficient.
>
What I understood was that, by translating into a lowest-common-denominator
format like text, one loses much of the semantic information implicit in a
richer representation. In particular, much of the internal knowledge (like
type information...) is lost during translation and presentation. Put
another way, with text as usually used by the standard suite of Unix tools,
type information is implicit, rather than explicit. I took this to be less
an issue of efficiency and more of expressiveness.
It is, perhaps, important to remember that Unix works so well because of
heavy use of convention: to take Doug's example, the total number of
commands might be easy to find with `wc` because one assumes each command
is presented on a separate line, with no gaudy header or footer information
or extraneous explanatory text.
This sort of convention, where each logical "record" is a line by itself,
is pervasive on Unix systems, but is not guaranteed. In some sense, those
representations are fragile: a change in output might break something else
downstream in the pipeline, whereas a representation that captures more
semantic meaning is more robust in the face of change but, as in Doug's
example, often harder to use. The Lisp Machine had all sorts of cool
information in the image and a good Lisp hacker familiar with the machine's
structures could write programs to extract and present that information.
But doing so wasn't trivial in the way that '| wc -l' in response to a
casual query is.
> While that may be true, it sort of assumes that this is something so common
> that
> the extra overhead for line counting should be part of every list. And it
> doesn't
> address the issue that while maybe you want a line count I may want a
> character
> count or a count of all lines that begin with the letter A. Limiting this
> example
> to just line numbers ignores the fact that different people might want
> different
> information that can't all be predicted in advance and built into every
> program.
>
This I think illustrates an important point: Unix conventions worked well
enough in practice that many interesting tasks were not just tractable, but
easy and in some cases trivial. Combining programs was easy via pipelines.
Harder stuff involving more elaborate data formats was possible, but, well,
harder and required more involved programming. By contrast, the Lisp
machine could do the hard stuff, but the simple stuff also required
non-trivial programming.
The SQL database point was similarly interesting: having written programs
to talk to relational databases, yes, one can do powerful things: but the
amount of programming required is significant at a minimum and often
substantial.
> It also seems to me that the root problem here is that the data in the
> original
> example was in an emacs-specific format instead of the default UNIX text
> file
> format.
>
> The beauty of UNIX is that with a common file format one can create tools
> that
> process data in different ways that then operate on all data. Yes, it's
> not as
> efficient as creating a custom tool for a particular purpose, but is much
> better
> for casual use. One can always create a special purpose tool if a
> particular
> use becomes so prevalent that the extra efficiency is worthwhile. If
> you're not
> familiar with it, find a copy of the Communications of the ACM issue where
> Knuth
> presented a clever search algorithm (if I remember correctly) and McIlroy
> did a
> critique. One of the things that Doug pointed out was that while Don's
> code was
> more efficient, by creating a new pile of special-purpose code he
> introduced bugs.
>
The flip side is that one often loses information in the conversion to
text: yes, there are structured data formats with text serializations that
can preserve the lost information, but consuming and processing those with
the standard Unix tools can be messy. Seemingly trivial changes in text,
like reversing the order of two fields, can break programs that consume
that data. Data must be suitable for pipelining (e.g., perhaps free-form
text must be free of newlines or something). These are all limitations.
Where I think the argument went awry is in not recognizing that very often
those problems, while real, are at least tractable.
Many people have claimed, incorrectly in my opinion, that this model fails
> in the
> modern era because it only works on text data. They change the subject
> when I
> point out that ImageMagick works on binary data. And, there are now stream
> processing utilities for JSON data and such that show that the UNIX model
> still
> works IF you understand it and know how to use it.
>
Certainly. I think you hit the nail on the head with the proviso that one
must _understand_ the Unix model and how to use it. If one does so, it's
very powerful indeed, and it really is applicable more often than not. But
it is not a panacea (not that anyone suggested it is). As an example, how
do I apply an unmodified `grep` to arbitrary JSON data (which may span more
than one line)? Perhaps there is a way (I can imagine a 'record2line'
program that consumes a single JSON object and emits it as a syntactically
valid one-liner...) but I can also imagine all sorts of ways that might go
wrong.
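For what it's worth, the imagined 'record2line' exists today in the form of jq (a much later tool, not part of the classic kit): its -c flag re-emits each JSON value on a single line, after which the ordinary line-oriented tools apply. A small sketch:

```shell
# Flatten a multi-line JSON object to one line, then grep it as a record.
# jq is assumed to be installed; the sample data is invented.
printf '{\n  "cmd": "wc",\n  "args": ["-l"]\n}\n' | jq -c . | grep -c '"wc"'
```

Of course this only pushes the problem around: the grep still matches text, not structure, so a field reordering remains invisible to it.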
- Dan C.
[I originally asked the following on Twitter which was probably not the smartest idea]
I was recently wondering about the origins of Linux, i.e. Linus Torvalds doing his MSc and deciding to write Linux (the kernel) for the i386 because Minix did not support the i386 properly. While this is perfectly understandable, I was trying to understand why, as he was in academia, he did not decide to write a “free X” for a different X. The example I picked was Plan 9, simply because I always liked it, but X could be any number of other operating systems which he would have been exposed to in academia. This all started in my mind because I was thinking about my friends who were CompSci university students with me at the time. They were into all sorts of esoteric stuff like Miranda-based operating systems, building a complete interface builder for X11 on SunOS including sparkly mouse pointers, etc. (I guess you could define it as “the usual frivolous MSc projects”), and I was comparing their choices with Linus’.
The answers I got varied from “the world needed a free Unix and BSD was embroiled in the AT&T lawsuit at the time” to “Plan 9 also had a restrictive license” (to the latter my response was that “so did Unix and that’s why Linus built Linux!”) but I don’t feel any of the answers addressed my underlying question as to what was wrong in the exposure to other operating systems which made Unix the choice?
Personally I feel that if we had a distributed OS now instead of Linux we’d be better off with the current architecture of the world so I am sad that "Linux is not Plan 9" which is what prompted the question.
Obviously I am most grateful for being able to boot the Mathematics department’s MS-DOS i486 machines with Linux 0.12 floppy disks and not having to code Fortran 77 in Notepad, followed by eventually taking over the department with Linux-based X terminals connected to the departmental servers (Sun, DEC Alpha, IBM RS/6000s). Before Linux they had been running eXeed (sp?) on Windows 3.11! In this respect Linux definitely filled a huge gap.
Arrigo
Hi,
Have you ever used shell level, $SHLVL, in your weekly ~> daily use of Unix?
I had largely dismissed it until a recent conversation in a newsgroup.
I learned that shelling out of programs also increments the shell level.
I.e. :shell or :!/bin/sh in vim.
Someone also mentioned quickly starting a new sub-shell from the current
shell for quick transient tasks, i.e. dc / bc, mount / cp / unmount,
{,r,s}cp, etc., in an existing terminal window to avoid cluttering that
first terminal's history with the transient commands.
That got me to wondering if there were other uses for shell level
($SHLVL). Hence my question.
This is more about using (contemporary) shells on Unix, than it is about
Unix history. But I suspect that TUHS is one of the best places to find
the most people that are likely to know about shell level. Feel free to
reply to COFF if it would be better there.
--
Grant. . . .
unix || die
I thought Benno Rice’s argument a bit disorganized and ultimately unconvincing, but I think the underlying point that we should from time to time step back a bit and review fundamentals has some merit. Unfortunately he does not distinguish much between a poor concept and a poor implementation.
For example, what does “everything is a file” mean in Unix?
- Devices and files are accessed through the same small API?
- All I/O is through unstructured byte streams?
- I/O is accessed via a single unified name space? etc.
Once that is clear, how can the concept then best be applied to USB devices?
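The first reading, at least, is easy to make concrete: the same byte-stream operations work on a device node as on a regular file. A small sketch (file names invented):

```shell
# Read 16 bytes from a device through the ordinary file API,
# then count them with an ordinary file tool.
head -c 16 /dev/urandom > sample.bin
wc -c < sample.bin
```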
Or: is there a fundamental difference between windows-style completion ports and completion signals?
Many of the underlying questions have been considered in the past, with carefully laid out arguments in various papers. In my view it is worthwhile to go back to these papers and see how the arguments pro and contra various approaches were weighed then and considering if the same still holds true today.
Interestingly, several points that Benno touches upon in his talk were also the topic of debate when Unix was transitioning to a 32 bits address space and incorporating networking in the early 80’s, as the TR/4 and TR/3 papers show. Of course, the system that CSRG delivered is different from the ambitions expressed in these papers and for sure opinions on the best choices differed as much back then as they will now - and that makes for an interesting discussion.
Rich was kind enough to look through the Joyce papers to see if it contained "CSRG Tech Report 4: Proposals for Unix on the VAX”. It did.
As list regulars will know I’ve been looking for that paper for years as it documents the early ideas for networking and IPC in what was to become 4.2BSD.
It is an intriguing paper that discusses a network API that is imo fairly different from what ended up being in 4.1a and 4.2BSD. It confirms Kirk McKusick’s recollection that the select statement was modelled after the Ada select statement. It also confirms Clem Cole’s recollection that the initial ideas for 4.2BSD were significantly influenced by the ideas of Richard Rashid (Aleph/Accent/Mach).
Besides IPC and networking, it also discusses file systems and a wide array of potential improvements in various other areas.
> If you search for "Jolitz"
Oh, I meant in the DDJ search box, not a general Web search.
> One of the items listed in WP, "Copyright, Copyleft, and Competitive
> Advantage" (Apr/1991) wasn't in the search results .. Since it's not in
> the 'releases' page, it might not really be part of the series?
Also, the last article in the series ("The Final Step") says the series was 17
articles long, not the 18 you get if you include "Copyright".
Noel
>Date: Tue, 07 Jan 2020 14:57:40 -0500.
>From: Doug McIlroy <>
>To: tuhs(a)tuhs.org, thomas.paulsen(a)firemail.de
>Subject: Re: [TUHS] screen editors
>Message-ID: <202001071957.007JveQu169574(a)coolidge.cs.dartmouth.edu>
>Content-Type: text/plain; charset=us-ascii
.. snip ..
>% wc -c /bin/vi bin/sam bin/samterm
>1706152 /bin/vi
> 112208 bin/sam
> 153624 bin/samterm
>These numbers are from Red Hat Linux.
>The 6:1 discrepancy is understated because
>vi is stripped and the sam files are not.
>All are 64-bit, dynamically linked.
That's a real big vi in RHL. Looking at a few (commercial) unixes I get
SCO UNIX 3.2V4.2 132898 Aug 22 1996 /usr/bin/vi
- /usr/bin/vi: iAPX 386 executable
Tru64 V5.1B-5 331552 Aug 21 2010 /usr/bin/vi
- /usr/bin/vi: COFF format alpha dynamically linked, demand paged
sticky executable or object module stripped - version 3.13-14
HP-UX 11.31 748996 Aug 28 2009 /bin/vi
-- /bin/vi: ELF-32 executable object file - IA64
I'm trying to grab some stuff from bitsavers.org. It seems to be failing to
look up name records. I'd send mail directly to Al, but the only address I
have for him is at bitsavers.org :(
Anybody have a better contact or good back-channel to Al?
Warner
I would imagine that the user land changes made its way into 386 Mach. Although I haven't seen anything I can recall off the top of my head about 386 commits in user land until much later.
Maybe one day more of that Mt Xinu stuff will surface, although I'm still amazed I got the kernel to build.
Internet legend is that the rift was massive.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Larry McVoy <lm(a)mcvoy.com>
Sent: Sunday, January 19, 2020, 12:26 a.m.
To: Greg 'groggy' Lehey
Cc: UNIX Heritage Society
Subject: Re: [TUHS] Early Linux and BSD (was: On the origins of Linux - "an academic question")
On Sat, Jan 18, 2020 at 03:19:13PM +1100, Greg 'groggy' Lehey wrote:
> On Friday, 17 January 2020 at 22:50:51 -0500, Theodore Y. Ts'o wrote:
> >
> > In the super-early days (late 1991, early 1992), those of us who
> > worked on it just wanted a "something Unix-like" that we could run at
> > home (my first computer was a 40 MHz 386 with 16 MB of memory). This
> > was before the AT&T/BSD Lawsuit (which was in 1992) and while Jolitz
> > may have been demonstrating 386BSD in private, I was certainly never
> > aware of it
>
> At the start of this time, Bill was working for BSDI, who were
> preparing a commercial product that (in March 1992) became BSD/386.
Wikipedia says he was working on 386BSD as early as 1989 and that
clicks with me (Jolitz worked for me around 1992 or 3). I don't
remember him mentioning working at BSDI, are you sure about that
part? Those guys did not like each other at all.
Ted Ts'o mentioned Bruce Evans in a reply to "On the origins of
Linux". I'm really sorry to have to announce that he died last month.
His family is holding a "small farewell gathering" in Sydney in late
February. To quote his sister Julie Saravanos:
We would be pleased if you, or any other BSD/computer friend, came
There's no date yet, and I don't think it's appropriate to broadcast
details. If anybody is interested, please contact Warren or me.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
> but…damn, even ex/vi 3.x is huge
It was so excessive right from the start that I refused to use it.
Sam was the first screen editor that I deemed worthwhile, and I
still use it today.
Doug
If all those CVS repositories, geocities sites and yahoo groups are any indicator, it's going to be up to people to put the past onto plastic and get it out there.
If anything right now the utzoo archives along with people posting source and patches to usenet survived...
Not to mention all those shovel ware CD-ROMs from the 90s that ironically preserved so much early free software and other gems of the pre Linux/NT world.
Github will eventually be shuttered like anything else and all that will remain is dead links.. It really needs to be distributed by nature, but then you have people using Github as cloud storage of all things.
I don't think the CSRG CD's were hot sellers, and I couldn't imagine getting utzoo or TUHS pressed... Although maybe it's something to look at.
It might be interesting.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Lars Brinkhoff <lars(a)nocrew.org>
Sent: Friday, January 17, 2020, 2:47 p.m.
To: Warren Toomey
Cc: tuhs(a)tuhs.org
Subject: Re: [TUHS] History of TUHS
Warren Toomey wrote:
> Heh, I hadn't thought that TUHS itself should now be considered
> historical
I often imagine future historians 100 years from now poring over
mailing list archives and bitrotted GitHub repositories, including those
that contain historical research. Metahistory maybe?
Hello people in the future! How's the singularity treating you?
Sorry about the climate.
Is there a history of TUHS page I've missed?
When was it formed? Was it an outgrowth of PUPS? etc.
Again, I'm working on a talk and would like to include some of this
information and it made me think that the history of the historians should
be documented too.
Warner
TL; DR. I'm trying to find the best possible home for some dead trees.
I have about a foot-high stack of manila folders containing "early Unix papers". They have been boxed up for a few decades, but appear to be in perfect condition. I inherited this collection from Jim Joyce, who taught the first Unix course at UC Berkeley and went on to run a series of ventures in Unix-related bookselling, instruction, publishing, etc.
I don't think the collection has much financial value, but I suspect that some of the papers may have historical significance. Indeed, some of them may not be available in any other form, so they definitely should be scanned in and republished.
I also have a variety of newer materials, including full sets of BSD manuals, SunExpert and Unix Review issues, along with a lot of books and course handouts and maybe a SUGtape or two. I'd like to donate these materials to an institution that will take care of them, make them available to interested parties, etc. Here are some suggested recipients:
- The Computer History Museum (Mountain View, CA, USA)
- The Internet Archive (San Francisco, CA, USA)
- The Living Computers Museum (Seattle, WA, USA)
- The UC Berkeley Library (Berkeley, CA, USA)
- The Unix Heritage Society (Australia?)
- The USENIX Association (Berkeley, CA, USA)
According to Warren Toomey, TUHS probably isn't the best possibility. The Good News about most of the others is that I can get materials to them in the back of my car. However, I may be overlooking some better possibility, so I am following Warren's suggestion and asking here. I'm open to any suggestions that have a convincing rationale.
Now, open for suggestions (ducks)...
-r
I just found out about TUHS today; I plan to skim the archives RSN to get some context. Meanwhile, this note is a somewhat long-winded introduction, followed by a (non-monetary) sales pitch. I think some of the introduction may be interesting and/or relevant to the pitch, but YMMV...
Introduction
In 1970, I was introduced to programming by a cabal of social science professors at SF State College. They had set up a lab space with a few IBM 2741 (I/O Selectric) terminals, connected by dedicated lines to Stanford's Wylbur system. I managed to wangle a spot as a student assistant and never looked back. I also played a tiny bit with a PDP-12 in a bio lab and ran one (1) program on SFSC's "production system", an IBM 1620 Mark II (yep; it's a computer...).
While a student, I actually got paid to work with a CDC 3150, a DEC PDP-15, and (once) on an IBM 360/30. After that, I had some Real Jobs: assembler on a Varian 620i and a PDP-11, COBOL on an IBM mainframe, Fortran on assorted CDC and assorted DEC machines, etc.
By the late 80's, my personal computers were a pair of aging LSI-11's, running RT-11. At work (Naval Research Lab, in DC), I was mostly using TOPS-10 and Vax/VMS. I wanted to upgrade my home system and knew that I wanted all the cool stuff: a bit-mapped screen, multiprocessing, virtual memory, etc.
There was no way I could afford to buy this sort of setup from DEC, but my friend Jim Joyce had been telling me about Unix for a few years, so I attended the Boston USENIX in 1982 (sharing a cheap hotel room with Dick Karpinski :-) and wandered around looking at the workstation offerings. I made a bet on Sun (buying stock would have been far more lucrative, but also more risky and less fun) and ended up buying Sun #285 from John Gage.
At one point, John was wandering around Sun, asking for a slogan that Sun could use on a conference button to indicate how they differed from the competition. I suggested "The Joy of Unix", which he immediately adopted. This decision wasn't totally appreciated by some USENIX attendees from Murray Hill, who printed up (using troff, one presumes) and wore individualized paper badges proclaiming themselves as "The <whatever> of Unix". Imitation is the sincerest form of flattery... (bows)
IIRC, I received my Sun-1 late in a week (of course :-), but managed to set it up with fairly little pain. I got some help on the weekend from someone named Bill, who happened to be in the office on the weekend ... seemed quite competent ... I ran for a position on the Sun User Group board, saying that I would try to protect the interests of the "smaller" users. I think I was able to do some good in that position, not least because I was able to get John Gilmore and the Sun lawyers to agree on a legal notice, edit some SUGtapes, etc.
Later on, I morphed this effort into Prime Time Freeware, which produced book/CD collections of what is now called Open Source software. Back when there were trade magazines, I also wrote a few hundred articles for Unix Review, SunExpert, etc. Of course, I continue to play (happily) with computers...
Perkify
If you waded through all of that introduction, you'll have figured out that I'm a big fan of making libre software more available, usable, etc. This actually leads into Perkify, one of my current projects. Perkify is (at heart) a blind-friendly virtual machine, based on Ubuntu, Vagrant, and VirtualBox. As you might expect, it has a strong emphasis on text-based programs, which Unix (and Linux) have in large quantities.
However, Perkify's charter has expanded quite a bit. At some point, I realized that (within limits) there was very little point to worrying about how big the Vagrant "box" became. After all, a couple of dozen GB of storage is no longer an issue, and having a big VM on the disk (or even running) doesn't slow anything down. So, the current distro weighs in at about 10 GB and 4,000 or so APT packages (mostly brought in as dependencies or recommendations). Think of it as "a well-equipped workshop, just down the hall". For details, see:
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Intro/main.toml
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Index/main.toml
Sales Pitch
I note that assorted folks on this list are trying to dig up copies of Ken's Space Travel program. Amusingly, I was making the same search just the other day. However, finding software that can be made to run on Ubuntu is only part of the challenge I face; I also need to come up with APT (or whatever) packages that Just Work when I add them to the distribution.
So, here's the pitch. Help me (and others) to create packages for use in Perkify and other Debian-derived distros. The result will be software that has reliable repos, distribution, etc. It may also help the code to live on after you and I are no longer able (or simply interested enough) to keep it going.
-r
Greetings,
I've so far been unable to locate a copy of munix. This is John Hawley's
dual PDP-11/50 version of Unix he wrote for his PhD thesis in June 1975 at
the Naval Postgraduate School in Monterey, CA.
I don't suppose that any known copies of this exist? To date, my searches
have turned up goose-eggs.
Hawley's paper can be found here https://calhoun.nps.edu/handle/10945/20959
Warner
P.S. I'm doing another early history talk at FOSDEM in a couple of weeks.
So if you're in the audience, no spoilers please :)
Hello,
https://www.bell-labs.com/usr/dmr/www/spacetravel.html says:
> Later we fixed Space Travel so it would run under (PDP-7) Unix instead
> of standalone, and did also a very faithful copy of the Spacewar game
I have a file with ".TITLE PDP-9/GRAPHIC II VERSION OF SPACEWAR". (I
hope it will go public soon.) It seems to be a standalone program, and
it's written in something close to MACRO-9 syntax. I'm guessing the
Bell Labs version would have been written using the Unix assembler.
Best regards,
Lars Brinkhoff
The Executable and Linkable Format (ELF) is the modern standard for
object files in Unix and Unix-like OSes (e.g., Linux), and even for
OpenVMS. Linux, AIX and probably other implementations of ELF have a
feature in the runtime loader called symbol preemption. When loading
a shared library, the runtime loader examines the library's symbol
table. If there is a global symbol with default visibility, and a
value for that symbol has already been loaded, all references to the
symbol in the library being loaded are rebound to the existing
definition. The existing value thus preempts the definition in the
library.
I'm curious about the history of symbol preemption. It does not exist
in other implementations of shared libraries, such as IBM OS/370 and
its descendants, OpenVMS, and Microsoft Windows NT. ELF apparently
was designed in the mid-1990s. I have found a copy of the System V
Application Binary Interface from April 2001 that describes symbol
preemption in the section on the ELF symbol table.
When was symbol preemption when loading shared objects first
implemented in Unix? Are there versions of Unix that don't do symbol
preemption?
-Paul W.
Random832 <random832 at fastmail.com> writes:
>markus schnalke <meillo at marmaro.de> writes:
>> [2015-11-09 08:58] Doug McIlroy <doug at cs.dartmouth.edu>
>>> things like "cut" and "paste", whose exact provenance
>>> I can't recall.
>>
>> Thanks for reminding me that I wanted to share my portrait of
>> cut(1) with you. (I sent some questions to this list, a few
>> months ago, remember?) Now, here it is:
>>
>> http://marmaro.de/docs/freiesmagazin/cut/cut.en.pdf
>
>Did you happen to find out what GWRL stands for, in the comments at
>the top of early versions of cut.c and paste.c?
>
>/* cut : cut and paste columns of a table (projection of a relation) (GWRL) */
>/* Release 1.5; handles single backspaces as produced by nroff */
>/* paste: concatenate corresponding lines of each file in parallel. Release 1.4 (GWRL) */
>/* (-s option: serial concatenation like old (127's) paste command */
>
>For that matter, what's the "old (127's) paste command" it refers to?
I know this thread is almost 5 years old, I came across it searching for
something else But as no one could answer these questions back then, I can.
GWRL stands for Gottfried W. R. Luderer, the author of cut(1) and paste(1),
probably around 1978. Those came either from PWB or USG, as he worked with,
or for, Berkley Tague. Thus they made their way into AT&T commercial UNIX,
first into System III and then into System V, and that's why they are missing
from early BSD releases, as they didn't get into Research UNIX until the
8th Edition. Also, "127" was the internal department number for the Computer
Science Research group at Bell Labs where UNIX originated.
Dr. Luderer co-authored this paper in the original 1978 BSTJ on UNIX --
https://www.tuhs.org/Archive/Documentation/Papers/BSTJ/bstj57-6-2201.pdf
I knew Dr. Luderer and he was even kind enough to arrange for me stay with his
relatives for a few days in Braunschweig, West Germany (the correct country name
for the time) on my first trip to Europe many decades ago. But I haven't had contact, nor
even thought of him forever until I saw his initials. I also briefly worked for Berk
when he was the department head for 45263 in Whippany Bell Labs before moving to
Murray Hill.
And doing a quick search for him, it looks like he wrote an autobiography, which I
am now going to have to purchase:
http://www.lulu.com/shop/gottfried-luderer/go-west-young-german/paperback/p…
-Brian
Hi All:
I'm looking for the source code to the Maitre'd load balancer. It is
used to run jobs on lightly used machines. It was developed by Brian
Bershad at Berkeley's Computer Systems Support Group. I have the
technical report for it (dated 17-dec-1985). But haven't run across the
tarball.
thanks
-ron
All, I've had a few subscribers argue that the type checking
thread was still Unix-related, so feel free to keep posting
here in TUHS. But if it does drift away to non-Unix areas,
please pass it over to COFF.
Thanks & apologies for being too trigger-happy!
Cheers, Warren
>> After scrolling through the command list, I wondered how
>> long it was and asked to have it counted. Easy, I thought,
>> just pass it to a wc-like program. But "just pass it" and
>> "wc-like" were not givens as they are in Unix culture.
>> It took several minutes for the gurus to do it--without
>> leaving emacs, if I remember right.
> This is kind of illustrative of the '60s acid trip that
> perpetuates in programming "Everything's a string maaaaan".
> The output is seen as truth because the representation is
> for some reason too hard to get at or too hard to cascade
> through the system.
How did strings get into the discussion? Warner showed how
emacs could be expected to do the job--and more efficiently
than the Unix way, at that: (list-length (command-list-fn)).
The surprise was that this wasn't readily available.
Back then, in fact, you couldn't ask sh for its command
list. help|wc couldn't be done because help wasn't there.
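(For what it's worth, in a modern bash, as opposed to the sh of that era, the casual query finally is a one-liner; compgen is a bash builtin:)

```shell
# Count the shell's builtin commands, Unix style: list them, pipe to wc.
bash -c 'compgen -A builtin' | wc -l
```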
Emacs had a different problem. It had a universal internal
interface--lists rather than strings--yet did not have
a way to cause this particular list to "cascade through
the system". (print(command-list-fn)) was provided, while
(command-list-fn) was hidden.
Doug
Mention of elevators at Tech Square reminds me of visiting there
to see the Lisp machine. I was struck by cultural differences.
At the time we were using Jerqs, where multiple windows ran
like multiple time-sharing sessions. To me that behavior was a
no-brainer. Surprisingly, Lisp-machine windows didn't work that
way; only the user-selected active window got processor time.
The biggest difference was emacs, which no one used at Bell
Labs. Emacs, of course was native to the Lisp machine and
provided a powerful and smoothly extensible environment. For
example, its reflective ability made it easy to display a
list of its commands. "Call elevator" stood out among mundane
programming actions like cut, paste and run.
After scrolling through the command list, I wondered how long
it was and asked to have it counted. Easy, I thought, just
pass it to a wc-like program. But "just pass it" and "wc-like"
were not givens as they are in Unix culture. It took several
minutes for the gurus to do it--without leaving emacs, if I
remember right.
Doug
This question comes from a colleague, who works on compilers.
Given the definition `int x;` (without an initializer) in a source file the
corresponding object contains `x` in a "common" section. What this means is
that, at link time, if some object file explicitly allocates an 'x' (e.g.,
by specifying an initializer, so that 'x' appears in the data section for
that object file), use that; otherwise, allocate space for it at link time,
possibly in the BSS. If several source files contain such a declaration,
the linker allocates exactly one 'x' (or whatever identifier) as
appropriate. We've verified that this behavior was present as early as 6th
edition.
The question is, what is the origin of this concept and nomenclature?
FORTRAN, of course, has "common blocks": was that an inspiration for the
name? Where did the idea for the implicit behavior come from (FORTRAN
common blocks are explicit)?
My colleague was particularly surprised that this seemed required: even at
this early stage, the `extern` keyword was present, so why bother with this
behavior? Why not, instead, make it a link-time error? Please note that if
two source files have initializers for these variables, then one gets a
multiple-definition link error. The 1989 ANSI standard made this an error
(or at least undefined behavior) but the functionality persists; GCC is
changing its default to prohibit it (my colleague works on clang).
Doug? Ken? Steve?
- Dan C.
Jon Steinhart:
One amusing thing that Steve told me which I think I can share is why the
symmetry of case-esac and if-fi was broken with do-done; it was because
the od command existed, so do-od wouldn't work!
=====
As I heard the story in the UNIX room decades ago (and at least five
years after the event), Steve tried and tried to convince Ken to
rename od so that he could have the symmetry he wanted. Ken was
unmoved.
Norman Wilson
Toronto ON
> From: Clem Cole
> when she found out the elevators were hacked and controlled by the
> student's different computers, she stopped using them and would take
> the stairs
It wasn't quite as major as this makes it sound! A couple of inconspicuous
wires were run from the 'TV 11' on the MIT-AI KA10 machine (the -11 that ran
the Knight displays) into the elevator controller, and run onto the terminals
where the wires from the 'down' call buttons on the 8th and 9th floors went.
So it wasn't anything major, and there was really no need for her to take the
stairs (especially 8 flights up :-).
The code is still extant, in 'SYSTEM; TV >'. It only worked (I think) from
Knight TV keyboards; typing 'ESC E' called the elevator to the floor
that keyboard was on (there's a table, 'ELETAB', which gives the physical
floor for each keyboard).
The machine could also open the locked 9th floor door to the machine room
(with an 'ESC D'), and there were some other less major things, e.g. print screen
hardcopy. I'm not sure what the hardware in the TV-11 was (this was all run
out of the 'keyboard multiplexor'); it may have been something the AI Lab
built from scratch.
Noel
> When Bernie Greenberg did EMACS for Multics, he had a similar issue. I
> recall reading a document with an extensive discussion of how they dealt
> with this ... If anyone's really interested in this, and can't find it
> themselves, I can try looking for it.
I got a request for this; a Web search turned up:
https://www.multicians.org/mepap.html
which covers the points I mentioned (and more besides, such as why LISP was
chosen). I don't think this is the thing I remembered (which was, IIRC, an
informal note), but it does seem to be a later version of that.
Noel
> From: Otto Moerbeek <otto(a)drijf.net>
> I believe it was not only vi itself that was causing the load, it was
> also running many terminals in raw mode that killed performance.
I'm not familiar with the tty driver in late versions of Unix like 4.1 (sic),
but I'm very familiar with the one in V6, and it's not the raw mode _itself_
that caused the load (the code paths in the kernel for cooked and raw aren't
that different), but rather the need to wake up and run a process on every
character.
When Bernie Greenberg did EMACS for Multics, he had a similar issue. I recall
reading a document with an extensive discussion of how they dealt with this,
especially when using the system over the ARPANET. IIRC, normal printing
characters were echoed without waking up the process, even remotely, when
using the network. If anyone's really interested in this, and can't find it themselves,
I can try looking for it.
Noel
> From: Clem Cole <clemc(a)ccc.com>
> So, unless anyone else can illuminate, I'm not sure where the first cpp
> that some of us using v6 had originated.
I recall a prior extensive discussion about 'cpp'. I looked, and found it
(March 30, 2017) but it was a private discussion, not on TUHS (although you
were part of it :-). Here are clips of what I wrote (I don't want to re-post
what others wrote), which tell most of the story:
There were a series of changes to C before V7 came out, resulting in the
so-called 'phototypesetter C compiler' (previously discussed on TUHS), and they
included the preprocessor. There's that series of short notes describing
changes to C (and the compiler), and they include mention of the preprocessor.
[Available here: http://gunkies.org/wiki/Typesetter_C for those who want to see
them.]
The MIT 'V6' Unix (which was, AFAICT, an augmented version of an early version
of PWB Unix) had that C compiler; and if you look at the PWB1 tree online, it
does have the C with 'cpp':
http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/c/c
I did a diff of that 'cpp' with the MIT one, and they are basically identical.
----
I went looking for the C manual in the V6 distro, to see if it mentioned the
pre-processor. And it does:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/doc/c/c5
(Section 12, "Compiler control lines", about half way down.) So, I'm like,
'WTF? I just looked at cc.c and no mention of cpp!'
So I looked a little harder, and if you look at the cc.c in the distro (URL
above), you see this:
insym(&defloc, "define");
insym(&incloc, "include");
insym(&eifloc, "endif");
insym(&ifdloc, "ifdef");
insym(&ifnloc, "ifndef");
insym(&unxloc, "unix");
The pre-processor is integrated into 'cc' in the initial V6. So we do have a very
early version of it, after all...
----
So, 'cc' in V5 also included pre-processor support (just #define and #include,
though):
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s1/cc.c
Although we don't have the source to 'cc' to show it, V4 also appears to have
had it, per the man page:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/man/man1/cc.1
"If the -p flag is used, only the macro prepass is run on all files whose name
ends in .c"; and the V4 system source:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys
also has .h files.
No sign of it in the man page for cc.1 in V3, though.
This all makes sense. .h files aren't any use with[out] #include, and without
#include, you have to have the structure definition, etc in multiple source
files. So #include would have gotten added very early on.
In V3, the system was apparently still in assembler, so no need.
-----
Also, there's an error in:
https://ewe2.ninja/computers/cno/
when it says "V6 was a very different beast for programming to V7. No c
preprocessor. The practical upshot of this is no #includes." that's
clearly incorrect (see above). Also, if you look at Lions (which is pure
early V6), in the source section, all the .c files have #include's.
Noel
Do we really need another boring old editor war? The topic
is not specific to UNIX in the least; nor, alas, is it historic.
Norman Wilson
Toronto ON
(typing this in qed)
Date: Wed, 8 Jan 2020 17:40:10 -0800
> From: Bakul Shah <bakul(a)bitblocks.com>
> To: Larry McVoy <lm(a)mcvoy.com>
> Cc: Warner Losh <imp(a)bsdimp.com>, The Eunuchs Hysterical Society
> <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] screen editors
> Message-ID: <D192F5A5-2A67-413C-8F5C-FCF195151E4F(a)bitblocks.com>
> Content-Type: text/plain; charset=utf-8
>
> On Jan 8, 2020, at 5:28 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> >
> > On Wed, Jan 08, 2020 at 05:08:59PM -0700, Warner Losh wrote:
> >> On Wed, Jan 8, 2020, 4:22 PM Dave Horsfall <dave(a)horsfall.org> wrote:
> >>
> >>> On Wed, 8 Jan 2020, Chet Ramey wrote:
> >>>
> >>>>> That's a real big vi in RHL.
> >>>>
> >>>> It's vim.
> >>>
> >>> It's also VIM on the Mac.
> >>>
> >>
> >> Nvi is also interesting and 1/10th the size of vim. It's also the
> FreeBSD
> >> default for vi.
> >
> > I was gonna stay out of this thread (it has the feel of old folks
> somehow)
> > but 2 comments:
> >
> > Keith did nvi (I can't remember why? licensing or something) and he did
> > a pretty faithful bug for bug compatible job. I've always wondered why.
> > I like Keith but it seemed like a waste. There were other people taking
> > vi forward, elvis, xvi (I hacked the crap out of that one, made it mmap
> > the file and had a whole string library that treated \n like NULL) and
> > I think vim was coming along. So doing a compat vi felt like a step
> > backward for me.
> >
> > For all the vim haters, come on. Vim is awesome, it gave me the one
> > thing that I wanted from emacs, multiple windows. I use that all the
> > time. It's got piles of stuff that I don't use, probably should, but
> > it is every bit as good of a vi as the original and then it added more.
> > I'm super grateful that vim came along.
>
> The first thing I do on a new machine is to install nvi. Very grateful to
> Keith Bostic for implementing it. I do use multiple windows — only
> horizontal splits but that is good enough for me as all my terminal
> windows are 80 chars wide. Not a vim hater but never saw the need.
>
Not sure if you’re saying horizontal splits are all you need, or all you’re
aware of, but nvi “:E somefile” will split to a top/bottom arrangement and
“:vsplit somefile” will do a left/right arrangement, as well as being able
to “:fg”, “:bg” screens. I too am a (NetBSD) nvi appreciator.
-bch
Working on a new project that's unfortunately going to require some changes
to the linux kernel. Lived a lot of my life in the embedded world, haven't
touched a *NIX kernel since 4.3BSD. Am writing a travelogue as I find my way
around the code. Wasn't planning another book but this might end up being
one. Anyway, a few questions...
Was looking at the filesystem super_block structure. A large number of the
members of the structure (but not all) begin with a s_ prefix, and some of
the member names are in the 20 character long range. I recall that using
prefixes was necessary before structures and unions had their own independent
namespaces. But I also seem to recall that that was fixed before long
identifier names happened. Does anybody remember the ordering for these two
events?
Also, anybody know where the term superblock originated? With what filesystem?
Jon
below... -- warning: veering a little from pure UNIX history, but trying to
clarify what I can and then moving to COFF for follow up.
On Wed, Jan 8, 2020 at 12:23 AM Brian Walden <tuhs(a)cuzuco.com> wrote:
> ....
>
> - CMU's ALGOL68S from 1978 lists all these ways --
> co comment
> comment comment
> pr pragmat
> pragmat pragmat
> # (comment symbol) comment
> :: (pragmat symbol) pragmat
> (it's for UNIX v6 or v7, so not surprising # is a comment)
> http://www.softwarepreservation.org/projects/ALGOL/manual/a68s.txt/view
Be careful of overthinking here. The comment in that note says it was
for PDP-11's, and lists V6 and V7 as a possible target, but it did not
say it was. Also, the Speech and Vision PDP-11/40e based systems ran a
very hacked v6 (with a special C compiler that supported CMU's csv/cret
instructions in the microcode), which would have been the target systems.
[1]
To my knowledge/memory, the CMU Algol68 compiler never ran anywhere but
Hydra (and also used custom microcode). IIRC there was some talk to move
it to *OS (Star OS for CM*) I've sent a note to dvk to see if he remembers
it otherwise. I also asked Liebensperger what he remembers; he was hacking on
*OS in those days. Again, IIRC Prof. Peter Hibbard was the mastermind
behind the CMU Algol68 system. He was a Brit from Cambridge (and taught
the parallel computing course which I took from him at the time).
FWIW: I also don't think the CMU Algol68 compiler was ever completely
self-hosting, and like BLISS, required the PDP-10 to support it. As to why
it was not moved to the Vax: I was leaving/had left by that time, but I
suspect the students involved graduated, and by then the Perqs had become
the hot machine for language types and ADA would start being what the gvt
would give research $s to.
>
>
> ...
>
> But look! The very first line of that file! It is a single # sitting all
> by itself. Why? you ask. Well this is a hold over from when the C
> preprocessor was new. C originally did not have it; it was added later.
> PL/I had a %INCLUDE so Ritchie eventually made a #include -- but pre 7th
> Edition the C preprocessor would not be invoked unless the very first
> character of the C source file was a #
>
That was true of V7 and Typesetter C too. It was a separate program
(/lib/cpp) that the cc command called if needed.
> Since v7 the preprocessor has always run on it. The first C preprocessor was
> Ritchie's work, with no nested includes and no macros. v7's was by John
> Reiser, which added those parts.
>
Right, this is what I was referring to last night in reference to Sean's
comments. As I said, the /bin/cc command was a shell script and it peeked
at the first character to see if it was #. I still find myself starting
all C programs with a # on a line by itself ;-)
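That dispatch is trivial to sketch in modern shell (the file name and messages here are made up; per the above, the real /bin/cc script did the equivalent peek before deciding whether to run the preprocessor):

```shell
cd "$(mktemp -d)"
# a C source file whose very first character is '#'
printf '#include <stdio.h>\nint main(void){return 0;}\n' > prog.c
# peek at the first character, the way the old cc wrapper did
if [ "$(head -c 1 prog.c)" = '#' ]; then
    echo "first char is #: run the preprocessor pass"
else
    echo "no #: hand the file straight to the compiler"
fi
```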
Note that the Ritchie cpp was influenced by Brian's Ratfor work, so using #
is not surprising.
This leads to a question/thought for this group, although I think needs to
move to COFF (which I have CC'ed for follow up).
I have often contended that one of the reasons why C, Fortran, and PL/I
were so popular as commercial production languages was that they could
be preprocessed. For a commercial shop where lots of different targets are
possible, that was hugely important. Pascal, for instance, has semantics
that makes writing a preprocessor like cpp or Ratfor difficult (which was
one of the things Brian talks about in his "*Why Pascal is not my favorite
Programming Language <http://www.lysator.liu.se/c/bwk-on-pascal.html>*"
paper). [2]
So, if you went to commercial ISVs and looked at what they wrote in, it
was usually some sort of preprocessed language. Some used Ratfor, like a
number of commercial HPC apps vendors; Tektronix wrote PLOT10 in MORTRAN.
I believe it was Morgan Stanley that had a front-end for PL/I, whose name
I can not recall. But you get the point ... if you had to target different
runtime environments, it was best for your base code to not be specific.
However ... as C became the system programming language, the preprocessor
was important. In fact, it even gave birth to other tools like autoconf
to help control it. Simply, the idiom:
#ifdef SYSTEMX
#define SOME_VAR (1)
... do something specific
#endif /* SYSTEMX */
While loathsome to read, it actually worked well in practice.
The fact is I hate the preprocessor in many ways but love it for the
freedom it actually gave us to move code. Having programmed since
the 1960s, I remember how hard it was to move things, even if the language
was the same.
Today, modern languages try to forego the preprocessor. C++'s solution is
to throw the kitchen sink into the language and have 'frameworks', none of
which work together. Java and its family try to control it with the
JVM. Go is a little too new to see if it's going to work (I don't see a lot
of production ISV code in it yet).
Note: A difference between then and now is that 1) we have fewer target
architectures, 2) we have fewer target operating environments, and 3) ISVs
don't like multiple different versions of their SW; they much prefer very
few for maintenance reasons, so they like #1 and #2 [i.e. Cole's law of
economics in operation here].
So ... my question, particularly for those like Doug who have programmed
at least as long as I have: what do you think? You lived the same
time I did and know the difficulties we faced. Is the loss of a
preprocessor good or bad?
Clem
[1] Historical footnote about CMU. I was the person that brought V7 into
CMU and I never updated the Speech or Vision systems and I don't think
anyone did after I left. We ran a CMU V7 variant mostly on the 11/34s (and
later on a couple of 11/44s I believe) that had started to pop up.
Although later if it was a DEC system, CS was moving to Vaxen when they
could get the $s (but the Alto's and Perq's had become popular with the CMU
SPICE proposal). Departments like bio-engineering and mech ee ran the
cheaper systems on-site and then networked over to the Computer Center's
Vaxen and PDP-20's when they needed address space.
[2] Note: Knuth wrote "Web" to handle a number of the issues Kernighan
talks about - but he had to use an extended Pascal superset and his program
was notable for not being portable (he wrote it for the PDP-10
Pascal). [BTW: Ward Cunningham, TW Cook and I once counted over 8
different 'Tek Pascal' variants and 14 different 'HP Basics'].
U'll Be King of the Stars wrote in <68b3d6df-94f6-625d-39bf-6149b4c177c9\
@andrewnesbit.org>:
|On 08/01/2020 15:15, Steffen Nurpmeso wrote:
|> (But i think emacs is better here, i see one markable
|> emacs developer taking care on the Unicode list, regarding real
|> BiDi support, for example.)
|
|I have been following the emacs-devel mailing list out of interest for
|many years. From this, I think the person you are referring to in your
|comment, is Eli Zaretskii. Is that right?
Yep. Then again i have to say it was a bit of a mistake, because
the OSS man i referred to, who is known for questions deep
in the material, is indeed Karl Williamson of, i think, Perl.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
This web page has some details about XENIX prior to 1985:
http://seefigure1.com/2014/04/15/xenixtime.html
In particular this chart is intriguing:
http://seefigure1.com/images/xenix/xenix-timeline.jpg
I’d love to have XENIX from the 1980-1985 era in the TUHS archive, as it documents the tail end of the Unix on 16 bits era. It would have been great if MS had released that as part of the Unix-at-50 events.
Paul
Hi,
Does the wump.c source exist for v6? The game's in the distribution and
so is the man page, but I can't find the source. I see it's in v7, but I
don't know the provenance of the game source, hence the question.
I find the following interesting... in the v7 source it says:
/*
* wumpus
* stolen from PCC Vol 2 No 1
*/
But it's actually from PCC Vol 2 No 2 (Nov 1973):
https://archive.computerhistory.org/resources/access/text/2017/09/102661095…
and the basic source is given in the games issue:
https://archive.computerhistory.org/resources/access/text/2017/09/102661095…
The correct volume is noted in the v6 manpage:
This program is based on one described in 2 (November 1973).
It will never replace Space War.
and in the v7 manpage:
This program is based on one described in People's Computer
Company, 2, 2 (November 1973).
BUGS
It will never replace Space War.
I'm curious if it was ported to C for v6, or if it was BASIC?
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
Dave Horsfall wrote:
>On Tue, 7 Jan 2020, Bakul Shah wrote:
>
>> In Algol68 # ... # is one of the forms for block comments!
>
>Interesting... All we had at university though was ALGOL W (as far as I
>know; there were several languages that mere students could not use, such
>as FORTRAN H).
Yes, but when was it implemented? Kernighan is first if it is not
before 1974. So I decided to look, and it took me down a rabbit hole of
ALGOL that leads back to the Bourne shell and then right back to # (but in C).
Reading the ALGOL 68 wiki page, the language seems to have had a
character set problem since day one, and if you didn't have the
cent sign you were to use PR (for pragmat) for comments. And since it
had problems it was continually extended. I just can't find when # was defined.
I looked at various old implementations (none pre-1974 lists #) --
- CDC's ALGOL 68 compiler from 1975: you could only use PR .. PR
(both # and CO were not defined)
http://www.bitsavers.org/pdf/cdc/Tom_Hunter_Scans/Algol_68_version_1_Refere…
- The official revised ALGOL 68 spec from 1978 lists all these ways to enter
them (bottom of page 112), in this order --
brief comment symbol: cent-sign
bold comment symbol: comment
style 1 comment symbol: co
style 2 comment symbol: #
bold pragmat symbol: pragmat
style 1 pragmat symbol: pr
seeing # is "style 2", it looks like a later extension to me
http://www.softwarepreservation.org/projects/ALGOL/report/Algol68_revised_r…
- ALGOL68/19 from 1975 lists these 4 symbols as comments: # % co pr
http://www.softwarepreservation.org/projects/ALGOL/manual/Gennart_Louis-Alg… 68_19_Reference_Manual.pdf
- DEC's ALGOL (1976 printing, but first released in 1971) for the system10 uses
a ! for a comment, as # means "not equal" --
http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/DEC-10-LALMA-B-D… decsystem10%20ALGOL%20Programmer's%20Reference%20Manual.pdf
- CMU's ALGOL68S from 1978 lists all these ways --
co comment
comment comment
pr pragmat
pragmat pragmat
# (comment symbol) comment
:: (pragmat symbol) pragmat
(it's for UNIX v6 or v7, so not surprising # is a comment)
http://www.softwarepreservation.org/projects/ALGOL/manual/a68s.txt/view
- Rutgers' ALGOL 68 interpreter from 1987 for UNIX does not implement
PR nor PRAGMAT and says comments are # CO or COMMENT
https://www.renyi.hu/~csirmaz/algol-68/linux/manual
I could not find a freely accessible manual for ALGOL68R (the very first one)
nor Cambridge's ALGOL68C. What's interesting here is that Stephen Bourne was
on the team that made ALGOL68C before he moved to Bell Labs. It'd be pretty
funny if he implemented a language that had 7 or 8 ways to enter a comment
(cent, co, comment, pr, pragmat, #, ::, %) yet there were zero ways
to enter a comment in the Bourne shell.
Also, the style of "COMMENT put a note here COMMENT" is very un-ALGOL-like;
(with DO .. OD, IF .. FI) shouldn't it be like this?
COMMENT put a note here TNEMMOC
CO put a note here OC
PRAGMAT directive here TAMGARP
PR directive here RP
So then I remembered Bourne used the C preprocessor to make C look like
ALGOL when he wrote the shell. If you've never seen it, his C looks like this --
	case TSW:
		BEGIN
		   REG STRING r = mactrim(t->swarg);
		   t=t->swlst;
		   WHILE t
		   DO ARGPTR rex=t->regptr;
		      WHILE rex
		      DO REG STRING s;
		         IF gmatch(r,s=macro(rex->argval)) ORF (trim(s), eq(r,s))
		         THEN execute(t->regcom,0);
		              t=0; break;
		         ELSE rex=rex->argnxt;
		         FI
		      OD
		      IF t THEN t=t->regnxt FI
		   OD
		END
		break;
	ENDSW
So I wanted to see if he remapped C comments /* */.
I am not even sure you could do that with the C preprocessor,
but I took a look anyway, and in
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c
It's first lines are this --
#
/*
* UNIX shell
*
* S. R. Bourne
* Bell Telephone Laboratories
*
*/
#include "defs.h"
#include "sym.h"
So nope, just regular C comments (which came from PL/I btw, which was
what Multics was programmed in).
But look! The very first line of that file! It is
a single # sitting all by itself. Why? you ask. Well, this is a hold
over from when the C preprocessor was new. C originally did not
have it; it was added later. PL/I had a %INCLUDE so Ritchie eventually
made a #include -- but pre 7th Edition the C preprocessor would not be
invoked unless the very first character of the C source file was a #.
Since v7 the preprocessor has always run on it. The first C preprocessor
was Ritchie's work, with no nested includes and no macros. v7's was by
John Reiser, which added those parts.
that 1st line with a single # sitting by itself reminds me of the
csh construct as well.
-Brian
A bit more on this.
csh(1) was written around 1978, and yes, # as a comment was only for
scripts; the thinking was, why would you need to comment interactively?
And the addition of # as a comment in the Bourne shell had to be around 1980,
as that is when Dennis Ritchie added #! to exec(2) in the kernel. From this
point on this forced all UNIX scripting languages to use # as a comment as
it just exec'd the first string after the #! with the name of the current
file being exec'd as the single argument. So things like perl(1) and python(1)
had to use # if they wanted the #! mechanism to work for them too.
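The mechanism itself is easy to demonstrate in any modern shell (a minimal sketch; the script name `hello` is arbitrary):

```shell
cd "$(mktemp -d)"
# first line is #!, so the kernel execs /bin/sh with the script's
# path as its argument -- exactly the mechanism described above
printf '#!/bin/sh\necho "interpreted by sh as $0"\n' > hello
chmod +x hello
./hello        # prints: interpreted by sh as ./hello
```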
So this worked great for shell scripts, but it didn't work for awk(1), sed(1),
nor s(1) (that is R(1) now) scripts (all needed a -f for input from a file),
nor dc(1) scripts, as dc had no comment character.
While Research UNIX got #! in 1980 (this was after the 7th Edition release,
and the 8th Edition wasn't released until 1985), BSD got it around 1982-83,
and System V didn't implement it until 1988. Eventually #! was extended
so #!/usr/bin/awk -f would work.
Also Bill Joy was the first to use # as a comment character in an /etc config
file for his /etc/ttycap (which became /etc/termcap) for vi(1). Most configs
did not have a comment character at all at that time, while /etc/master used
a * as a comment (SCCS used * as a comment too, btw).
Also, before you say "wait! ALGOL uses # as a comment and is older than
Kernighan's ratfor(1)": that is a later addition. The original used the EBCDIC
cent sign character to start and another cent sign to end the comment
(i.e. the programmer's two cents). If you were on an ASCII system this became
"co" (for comment), as the original ASCII does not have a cent sign.
-Brian
McIlroy:
> [vi] was so excesssive right from the start that I refused to use it.
> Sam was the first screen editor that I deemed worthwhile, and I
> still use it today.
Paulsen:
> my sam build is more than 2 times bigger than Gunnar Ritter's vi
> (or Steve Kirkendall's elvis) and even bigger than Bram Moolenaar's vim.
% wc -c /bin/vi bin/sam bin/samterm
1706152 /bin/vi
112208 bin/sam
153624 bin/samterm
These numbers are from Red Hat Linux.
The 6:1 discrepancy is understated because
vi is stripped and the sam files are not.
All are 64-bit, dynamically linked.
Clem Cole wrote:
>A heretic!! Believers all know '*Bourne to Program, Type with Joy' *and*
>'One true bracing style' *are the two most important commandments of UNIX
>programmer!
>
>Seriously, I still write my scripts as v7 and use (t)csh as my login shell
>on all my UNIX boxes ;-)
>
>Clem
You know what's amazing? The Bill Joy code to launch either
csh or the Bourne shell based on the first character of the file is
still in the tcsh codebase today. It even has #! support, just in case
your kernel does not. However, this code never gets run, as
who writes scripts without #! anymore ... but here's a little test ---
$ tcsh
You have 2 mail messages.
> cat x1.sh
PATH=/bin
echo $SHELL
> ./x1.sh
/bin/sh
> cat x2.csh
#
setenv path /bin
echo $shell
> ./x2.csh
/usr/local/bin/tcsh
> exit
you can see it in https://github.com/tcsh-org/tcsh/blob/master/sh.exec.c
-Brian
Doug McIlroy wrote:
>Brian Walden's discussion of sh #, etc, is right on.
>However, his etymology for unary * in C can be
>pushed back at least to 1959. * was used for
>indirect addressing in SAP, the assembler for
>the IBM 7090.
Thank you for both the confirmation and also that history update.
-Brian
>
>> From: Warner Losh <imp(a)bsdimp.com>
>
>> There's no wumpus source before V7.
>
> If you look at Clem's original message:
>
>>> From: Clem Cole <clemc(a)ccc.com>
>>> Date: Mon, 6 Jan 2020 16:08:50 -0500
>
>>> You got my curiosity up and found the V5 and V6 source code
>
> (the one Will was replying to), Clem's talking about the source to crt0.s,
> etc.
>
> Noel
>
Sorry. I could have been clearer. I thought Clem was saying that he found the Wumpus code in v5/v6. Now, I see that he was just talking about the crt files. When I said I couldn’t find the source prior to v7, I meant the wumpus source :).
On another note, porting the v7 code to MacOS is tricky, lots of minor differences, but I’m giving it a go. Prolly easier to just figure out what it’s supposed to do and do it with modern idioms, but it’s a fun puzzle to try to replicate the same functionality with only minor adjustments.
Will
> From: Warner Losh <imp(a)bsdimp.com>
> There's no wumpus source before V7.
If you look at Clem's original message:
>> From: Clem Cole <clemc(a)ccc.com>
>> Date: Mon, 6 Jan 2020 16:08:50 -0500
>> You got my curiosity up and found the V5 and V6 source code
(the one Will was replying to), Clem's talking about the source to crt0.s,
etc.
Noel
> I'm interested in the possible motivations for a redirection to be
> a simple command.
I use it to truncate a file to zero length.
Or to create (an empty) file.
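Both uses are easy to check in a POSIX shell (a sketch; the file names are arbitrary):

```shell
cd "$(mktemp -d)"
echo hello > f
> f                # a redirection alone as a "simple command": truncates f
wc -c < f          # now 0 bytes
> g                # and it creates g, empty, if it didn't exist
ls g
```

Note this is sh/bash behavior; csh rejects a bare redirection.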
Doug
> From: Will Senn
> On another note,You said you looked in v5 and v6 source code? I looked
> at tuhs and didn't see anything earlier than v7. Where did you find
> them?
Huh? https://www.tuhs.org/cgi-bin/utree.pl
Noel
i started on the 7th edition on a perkin elmer (née interdata) - this was v7 with some 2.1bsd sprinkled on top.
i remember the continual annoyance of unpacking shar files starting with hash comments : only on ed7. in the end i wrote a trivial sed to remove them called unshar.
i haven't thought of that for decades...
-Steve
Mike Haertel:
That's amusing, considering that the 5620 stuff was in /usr/jerq on
Research systems! Apparently the accident became institutionalized.
=====
I remember the name Jerq being tossed around to mean 5620
when I was at 1127. That doesn't mean it was historically
accurate, but it is consistent with the directory names, and
the latter are probably where I got my mistaken idea of the
history.
Thanks to Rob, who certainly should know, for clearing it up.
Norman Wilson
Toronto ON
Brian Walden's discussion of sh #, etc, is right on.
However, his etymology for unary * in C can be
pushed back at least to 1959. * was used for
indirect addressing in SAP, the assembler for
the IBM 7090.
Richard Salz wrote:
>> not the kernel. This had traditionally been done after the exec() failed
>> then shell ould run "sh argv[0]", but with two shells this was now a
>> problem.
>>
>
>It seems the kernel did that; http://man.cat-v.org/unix_7th/2/exec since
>argv[-1] was altered.
As a user of these systems: the official 7th Edition kernel most certainly
could not execute a script, only binaries. It happened after the 1979
release and took time to make its way out, which it did via BSD before 8th Ed
was finalized in 1985.
The usenet announcement of this new functionality from Dennis is from
Jan 10, 1980; it is listed here: https://en.wikipedia.org/wiki/Shebang_(Unix)
Dennis stated the idea was not his; it came up during conversations at
a conference.
-Brian
More than you ever wanted to know about #
The first shell to use it as a comment was csh(1), Bill Joy did this.
This was also pre #! in the kernel so the shell had to exec scripts,
not the kernel. This had traditionally been done after the exec() failed:
the shell would run "sh argv[0]", but with two shells this was now a problem.
So csh would look at the first line of the script, and if it was a #\n
it would exec csh on it; if not, it would exec sh(1) on it. This check
was also placed into BSD's (not v7's nor AT&T's) Bourne shell so it could
run csh scripts as well.
However this was not the first use of # as a comment character. That award
goes to Brian Kernighan's ratfor(1) (rational fortran) compiler in 1974-75.
Then Feldman used in make(1) in 1976, followed by Kernighan's m4(1), learn(1)
and most famously awk(1) in 1977
The Bourne shell, written around 1976, eventually picked this up, but after
the initial v7 release. And as some noted, the : was kind of a comment; it
was a command that did an exit(0), originally for labels for the Thompson
shell's goto command. The : command was eventually hard linked to the
true(1) command.
Remember # was hard to type on teletypes as that was the erase character, so
to enter it, you needed to type \#
(# as erase and @ as line kill came from multics btw)
It was so hard to type that in the original assembler, based on DEC's PAL-11R,
the addressing syntax changed @ to * and # to $.
In DEC it would be--
MOV @X, R0;
In UNIX asm it became --
mov *x, r0
So this is also why C pointers use * notation.
-Brian
> From: Dave Horsfall dave at horsfall.org
>
>On Sat, 4 Jan 2020, Chet Ramey wrote:
>
>>> Which reminds me: which Shell introduced "#" as a true comment?
>>
>> Define "true comment." The v7 shell had `#' as the comment character, but
>> it only worked when in non-interactive shells. I think it was the Sys III
>> shell that made it work when the shell was interactive.
>
>Yes, that's what I meant.
>
>> This is, incidentally, why bash has the `interactive_comments' option,
>> which I saw in another message. BSD, which most of the GNU developers were
>> using at the (pre-POSIX) time, used the v7 shell and didn't have
>> interactive comments. When a sufficiently-advanced POSIX draft required
>> them, we added it.
>
>I never did catch up with all the options on the various shells; I just
>stick with the defaults in general. Eg:
>
> aneurin% man bash | wc -l
> 5947
>
>Life's too short...
>
>-- Dave
Hoi,
in a computer forum I came across a very long command line,
including `xargs' and `sh -c'. Anyways, throughout the thread
it was modified several times, when accidentally a pipe symbol
appeared between the command and the output redirection. The
command line did nothing, yet it ran successfully. I was confused,
because I expected to see a syntax error in case of
``cmd|>file''. This made me wonder ...
With the help of Sven Mascheck, I was able to clear up my understanding.
The POSIX shell grammar provided the answer:
pipeline : pipe_sequence
...
pipe_sequence : command
| pipe_sequence '|' linebreak command
;
command : simple_command
...
simple_command : cmd_prefix cmd_word cmd_suffix
| cmd_prefix cmd_word
| cmd_prefix <--- HERE!
| cmd_name cmd_suffix
| cmd_name
;
cmd_prefix : io_redirect
...
io_redirect : io_file
...
io_file : '<' filename
| LESSAND filename
| '>' filename
...
A redirection is a (full) simple_command ... and because
``simple_command | simple_command'' is allowed, so is
``io_file | io_file''. This can lead to such strange (but
valid) command lines like:
<a | >b
>b | <a
Sven liked this one:
:|>:
Here some further fun variants:
:|:>:
<:|:>:
They would provide nice puzzles. ;-)
My understanding was helped most by detaching from the
semantics and focussing on syntax. This one is obviously
valid, even though it has no effect:
:|:|:
From there it was easier to grasp:
>a | >a | >a
Which is valid, because ``>a'' is a (complete) simple_command.
Thus, no bug, but consistent grammar. ;-)
If one had wanted to forbid such a corner case,
additional special case handling would have been necessary
... which is in contrast to the Unix way.
Sven checked the syntax against various shells with these
results:
- Syntax ok in these shells:
SVR2 sh (Ultrix), SVR4 sh (Heirloom)
ksh93
bash-1.05, bash (current)
pdksh-5.2.14
ash-0.4.26, dash-0.5.6.1
posh-0.3.7, posh-0.12.3
mksh-R24, mksh-R52b
yash-2.29
zsh-3.0.8, zsh-4.3.17
- Exception to the rule:
7thEd sh:
# pwd|>>file
# echo $?
141
On first sight ok, but with a silent error ... SIGPIPE (128+13).
I'd be interested in any stories and information around this
topic.
What about 7thEd sh?
meillo
> I was always sad that the development of C that became Alef never got off
> the ground.
It eventuated in Go, which is definitely aloft, and responds
to Mike Bianchi's specific desires. Go also has a library
ecosystem, which C does not.
With its clean parallelism, Go may be suitable for handling
the complexity of whole-paragraph typesetting in the face
of unexpected traps, line-length changes, etc.
Doug
I'm having a party on Saturday January 11 (and if any of you are in Tucson,
or want to come to Tucson for it, you're invited; email me for the address
and time).
Although the party is Elvis-themed, it's really about boardgaming and
classic videogaming.
So I kind of wanted to put a general-purpose Z-machine interpreter on my
PiDP-11, so that people could play Infocom (and community) games on a real
terminal.
Turns out there wasn't really one, so I ported the venerable ZIP (which I
have renamed "zterp" for obvious reasons) to 2.11BSD on the PDP-11, and I
also wrote a little utility I call "tmenu" to take a directory (and an
optional command applying to files in the directory) and make a numbered
menu, so that my guests who are not familiar with Actual Bourne Shell can
play games too.
These things are at:
https://github.com/athornton/pdp11-zterp
and
https://github.com/athornton/pdp11-tmenu/
Both are K&R C, and compile with the 2.11BSD system C compiler.
My biggest disappointment is that the memory map of Trinity, my favorite
Infocom game, is weird and even though it's only a V5 game, I can't
allocate enough memory to start it. Other than that, V5 and below seem to
work mostly fine; V8 is in theory supported but no game that I've tried has
little enough low memory that I can malloc() it using C on 2.11BSD.
Adam
I have always marveled at folks who can maintain multiple
versions of software, but Larry's dispatch from the
trenches reveals hurdles I hadn't imagined. Kudos for
keeping groff alive.
Speaking of which, many thanks to all who pitched in
on the %% nit that I reported. The instant response
compares rather favorably to an open case I've been
following in gcc, which was originally filed in 2002.
Doug
The use of %% to designate a literal % in printf is not
a recent convention. It was defined in K&R, first edition.
Doug
Ralph Corderoy wrote:
Though that may seem odd to our modern C-standardised eyes, it's
understandable in that if it isn't a valid %f, etc., format specifier
then it's a literal percent sign.
According to K&R the behavior of % followed by something
unexpected is undefined. So the behavior of Ralph's example
is officially an accident. (It's uncharacteristic of Dennis
to have defined printf so that there was no guaranteed way
to get a literal % into a format.)
Doug
------------------------------------------------
Ralph Corderoy wrote:
$ printf '%s\n' \
.PS 'print sprintf("%.17g %.0f% % %%", 3.14, 42, 99)' .PE |
> pic >/dev/null
3.1400000000000001 42% % %%
Though that may seem odd to our modern C-standardised eyes, it's
understandable in that if it isn't a valid %f, etc., format specifier
then it's a literal percent sign.
The Linux kernel never implemented support for a few features of obsolete
terminals. I find myself wanting to use Raspberry Pi-style linux machines
with old hardware, so this became quite frustrating.
So, I've put together a patch to the n_tty line discipline that adds some
things needed for using a Teletype model 33 or similar natively:
- XCASE, escaping uppercase (and a few special characters) for input and
display,
- CRDLY, delay to allow time for the carriage-return function;
- NLDLY, delay to allow time for the newline function.
With XCASE and ICANON, the terminal outputs a backslash before uppercase
characters; and accepts a backslash escape to set input to uppercase. The
usual way to use this is `stty lcase`, which also down-cases all input by
default. The special character escapes are:
\^ to ~
\! to |
\( to {
\) to }
\' to `
With CRDLY there are three options, CR0 through CR2; and with NLDLY there
are options NL0 (no delay) and NL1 (one delay). This patch uses fill
characters for delay, not timing, so these flags only take effect when
OFILL is also set.
Note: this doesn't change `agetty`, which I don't think implements
uppercase login detection right now. I have a Teletype running with
auto-login; and then `stty 110 icanon lcase ofill cr1 nl1`.
Code changes and some brief build instructions are here:
https://github.com/hughpyle/ASR33/tree/master/rpi/kernel
Compare with the raspberrypi tree, here,
https://github.com/raspberrypi/linux/compare/rpi-4.19.y...hughpyle:teletype
Not yet submitted upstream - the changes are in quite a high-traffic code
path, and also I just don't know how :) Feedback is very welcome!
-Hugh
All, I got a new printer with a better duplex scanner. I've just scanned
all the Unix Review magazines that I've got (1984-85 period) and uploaded
them to www.archive.org:
https://archive.org/search.php?query=title%3A%28Unix%20Review%29%20AND%20me…
Merry festive-season-of-your-choice,
Warren
P.S. I have a bunch of Unix/World magazines, just waiting for a stronger
guillotine to arrive.
Computer History Museum curator Dag Spicer passed along a question from former CHM curator Alex Bochannek that I thought someone on this list might be able to answer. The paper "The M4 Macro Processor" by Kernighan and Ritchie says:
> The M4 macro processor is an extension of a macro processor called M3 which was written by D. M. Ritchie for the AP-3 minicomputer; M3 was in turn based on a macro processor implemented for [B. W. Kernighan and P. J. Plauger, Software Tools, Addison-Wesley, Inc., 1976].
Alex and Dag would like to learn more about this AP-3 minicomputer — can anyone help?
I sense a hint of confusion in some of the messages
here. To lay that to rest if necessary (and maybe
others are interested in the history anyway):
As I understand it, the Blit was the original terminal,
hardware done by Bart Locanthi (et al?), software by
Rob Pike (et al?). It used an MC68000 CPU. Western
Electric made a small production run of these terminals
for use within AT&T. I don't think it was sold to the
general public.
By the time I arrived at Bell Labs in late 1984, the
Standard Terminal of 1127 was the AT&T 5620, locally
called the Jerq. This was a makeover with hardware
redesigned by a product group to use a Bellmac 32 CPU,
and software heavily reworked by a product group.
This is the terminal that was manufactured for general
sale.
I'm not sure, but I think the Blit's ROM was very basic,
just enough to be some sort of simple glass-tty or
perhaps smartass-terminal* plus an escape sequence to
let you load in new code. The Jerq had a fancier ROM,
which was a somewhat-flaky ANSI-ish terminal by default,
but an escape sequence put it into graphics-window-manager
mode, more or less like what had run a few years earlier
on the Blit.
By then the code used in Research had evolved considerably,
in particular allowing the tty driver to be exported to
the terminal (those familiar with 9term should know what
I mean). In 1127 we used a different escape sequence to
download a standalone program into the terminal and
replace the ROM window manager entirely, so we could run
our newer and (to my taste anyway) appreciably better code.
The downloaded code lived in RAM; you had to reload it
whenever the terminal was power-cycled or lost its connection
or whatnot. (It took a minute or so at 9600bps, rather
longer at 1200. This is not the only reason we jumped at
the chance to upgrade our home-computing scheme to use
9600bps over leased lines, but it was an important one.)
The V8 tape was made in late 1984 (I know that for sure
because I helped make it). It is unlikely to have anything
for the MC68000 Blit, only stuff for the Mac-32 Jerq.
Likewise for the not-really-a-release snapshots from the
9/e and 10/e eras. The 5620 ROM code is very unlikely to
be there anywhere, but the replacement stuff we used should
be somewhere.
Norman Wilson
Toronto ON
> If 5620s were called Jerqs, it was an accident. All the software with that
> name would be for the original, Locanthi-built and -designed 68K machines.
>
> The sequence is thus Jerq, Blit, DMD-5620
Maybe the “Jerq” name had a revival. If the processor switch came with some upheaval it is not hard to see how that revival could have happened.
The Dan Cross tar archive with the source code has two top level directories, one named “blit" with the 68K based source and another one named “jerq" with the Bellmac based source. The tar archive seems to have been made in the summer of 1985, or at least those dates are on the top level directories.
I am of course not disputing that the original name was Jerq. There are many clues in the source supporting that, among which this funny comment in mcc.c:
int jflag, mflag=1; /* Used for jerq. Rob Pike (read comment as you will) */
Bit hard to classify this one; separate posts since COFF was created?
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron), was
born on this day in 1815; arguably the world's first computer programmer and a
highly independent woman, she saw the potential in Charles Babbage's
new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer, he
not only had a hand in developing Unix but also gave us the ROFF series.
Who'd've thought that two computer greats would share the same birthday?
-- Dave
Moo and hunt-the-wumpus got quite a lot of play
both in the lab and at home. Wump was an instant
hit with my son who was 4 or 5 years old at the
time.
Amusingly, I speculated on how to generate degree-3
graphs for wump, but obviously not very deeply. It
was only much later that I realized the graph
always had the same topology--a dodecahedron.
Doug
We lost Dr. John Lions on this day in 1998; he was one of my Comp Sci
lecturers (yes, I helped him write The Book, and yes, you'll find my name
in the back).
-- Dave
I’m looking for the origins of SLIP and PPP on Unix. Both seem to have been developed long before their RFC’s appeared.
As far as I can tell, SLIP originally appeared in 3COM’s UNET for the PDP11, around 1980. From the TUHS Unix tree, first appearance in BSD seems to be 4.3 (1986).
Not sure when PPP first appeared, but the linux man page for pppd has a credit that goes back to Carnegie Mellon 1984. First appearance in BSD seems to be FreeBSD 5.3 (2004), which seems improbably late (same source).
Paul
Hello All.
Anyone who pulled the code for v10spell that I made available a few
months ago should 'make clean', 'git pull', and 'make'.
A critical bug has been fixed for 64 bit systems, and the code has
had some additional cleanups and the doc updated some as well.
The repo is at git://github.com/arnoldrobbins/v10spell.
Enjoy,
Arnold
> From: Paul Ruizendaal
> I'm looking for the origins of SLIP and PPP on Unix. Both seem to have
> been developed long before their RFC's appeared.
You're dealing with an epoch when the IETF motto - "rough consensus and
running code" - really meant something. Formal RFC's way lagged protocol
development; they're the last step in the process, pretty much.
If you want to study the history, you'd need to look at Internet Drafts (if
they're still online). Failing that, look at the IETF Proceedings; I think
all the ones from this period have been scanned in. They won't have the
detail that the I-D's would have, but they should give the rough outlines
of the history.
Noel
I've been fixing and enhancing James Youngman's git-sccsimport to use
with some of my SCCS archives, and I thought it might be the ultimate
stress test of it to convert the CSRG BSD SCCS archives.
The conversion takes about an hour to run on my old-ish Dell server.
This conversion is unlike others -- there is some mechanical compression
of related deltas into a single Git commit.
https://github.com/robohack/ucb-csrg-bsd
https://github.com/robohack/git-sccsimport
--
Greg A. Woods <gwoods(a)acm.org>
Kelowna, BC +1 250 762-7675 RoboHack <woods(a)robohack.ca>
Planix, Inc. <woods(a)planix.com> Avoncote Farms <woods(a)avoncote.ca>
We lost J.F. Ossanna on this day in 1977; he had a hand in developing Unix, and
was responsible for "roff" and its descendants. Remember him, the next time
you see "jfo" in Unix documentation.
He also accomplished a lot more, too much to summarise here.
-- Dave
We retired gets from Research UNIX back in 1984 or perhaps
earlier, with no serious pain because replacing it wasn't
hard and everybody agreed with the reason.
I'm glad to hear some part of the rest of the world is
catching up.
We also decided to retire the old Enigma-derived crypt(1),
except we didn't want to throw it out entirely in case
someone had an old encrypted file and wanted the contents
back. So it was removed from the manual and the binary
moved to /usr/games.
Norman Wilson
Toronto ON
Seen in the FreeBSD Quarterly Report:
gets(3) retirement
Contact: Ed Maste <emaste(a)FreeBSD.org>
gets is an obsolete C library routine for reading a string from
standard input. It was removed from the C standard as of C11 because
there was no way to use it safely. Prompted by a comment during Paul
Vixie's talk at vBSDCon 2017 I started investigating what it would take
to remove gets from libc.
The patch was posted to Phabricator and refined several times, and the
portmgr team performed several exp-runs to identify ports broken by the
removal. Symbol versioning is used to preserve binary compatibility for
existing software that uses gets.
The change was committed in September, and will be in FreeBSD 13.0.
This project was sponsored by The FreeBSD Foundation.
And the world is a slightly safer place...
-- Dave
I'm looking for a reference to any Unix ports where the kernel ran in
a non-paged address space and user mode was paged. I could swear this
was done at some point, and memory says it was on a soft-TLB system
like the MIPS, to avoid TLB pollution and TLB fault overhead.
But maybe I'm nuts. I am happy to hear either answer.
I had a hand-held degausser, but lent it to someone years ago
and never got it back.
It was actually Exabyte that made me buy it. I bought a new
8505 through a reseller to supersede the 8200 I was using for
home backups. It turned out the 8505's firmware refused to
overwrite a tape already written at any but the highest density,
so I couldn't reuse any of my existing backup tapes. Exabyte
insisted it was a feature, not a bug. So I gave up and bought
a degausser so I could turn a used tape into a blank tape so
the damn tape drive would write on it.
For further vintage-computing amusement: I decided to buy at
that time because the reseller had arranged a deal with Exabyte:
trade in any old tape drive, working or not, and get a couple of
hundred bucks off on a brand-new 8505. So I gave the reseller
an old, broken TK05 I had lying around. My sales contact for
the reseller was a former service tech at the same company, so
I figured (correctly) he'd get the joke.
Norman Wilson
Toronto ON
Snotty remarks aside, I have a couple of Exabyte drives in my
home world. They haven't been used for a long time, but when
they were (for some years I used them as a regular backup device)
they worked just fine.
I've pinged the guy.
Norman Wilson
Toronto ON
I keep a copy of the utzoo files.
And then I hacked the AltaVista desktop search to search the files, using Apache to filter content inline.
https://altavista.superglobalmegacorp.com/altavista
I know I'd love to feed it more data, the utzoo stuff is massive for 1991, but it's really trivial for 2019. It's around 10GB decompressed.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Larry McVoy <lm(a)mcvoy.com>
Sent: Thursday, November 21, 2019, 11:53 AM
To: Bakul Shah
Cc: tuhs(a)tuhs.org
Subject: Re: [TUHS] Steve Bellovin recounts the history of USENET
On Wed, Nov 20, 2019 at 07:50:53PM -0800, Bakul Shah wrote:
> On Wed, 20 Nov 2019 19:14:23 -0800 Larry McVoy wrote:
> > Yeah, I'd be super happy if he joined the list. I enjoyed reading
> > those, wished he had gone into more detail.
> >
> > On the Usenet topic, does anyone remember dejanews? Searchable
> > archive of all the posts to Usenet. Google bought them and then,
> > so far as I know, the searchable part went away.
> >
> > If someone knows how to search back to the beginnings of Usenet,
> > my early tech life is all there, I'd love to be able to show my kids
> > that. Big arguing with Mash on comp.arch, following Guy Harris on
> > comp.unix-wizards, etc.
>
> I have occasionally downloaded some mbox.zip files from
> https://archive.org/details/usenet
> But there are too many files there. Would be nice if there
> was a collaborative effort to organize them in a more usable,
> searchable state. Pretty much all of it (minus binaries
> groups) can be stored locally (or using some global
> namespace.
So is that all of Usenet?
--
---
Larry McVoy                lm at mcvoy.com                http://www.mcvoy.com/lm
Dear All:
I was wondering if anyone had any first-hand information about the early decisions at Western Electric to make an education license for Unix that was both royalty-free and with an extremely modest “service charge”/delivery fee, or if anyone knows the names of key people who made these decisions.
Best wishes,
David
..............
David C. Brock
Director and Curator
Software History Center
Computer History Museum
computerhistory.org/softwarehistory<http://computerhistory.org/softwarehistory>
Email: dbrock(a)computerhistory.org
Twitter: @dcbrock
Skype: dcbrock
1401 N. Shoreline Blvd.
Mountain View, CA 94943
(650) 810-1010 main
(650) 810-1886 direct
Pronouns: he, him, his
> From: Arnold Robbins
> The Bell Labs guys in some ways were too.
And there's the famous? story about the Multics error messages in Latin,
courtesy of Bernie Greenberg. One actually appeared at a customer site once,
whereupon hilarity ensued.
Noel
Clem Cole:
Al Arms
wrote and administered the license, BTW.
====
Aside for entertainment purposes: at one point, the root
password for the UNIX systems I ran in the Caltech High
Energy Physics group was derived from Al's name, but through
a level of punning indirection. I believe Mark Bartelt
came up with it.
Later we decided to change it. I believe I chose the
successor, which continued the UNIX-licensing scheme, but
in a different direction:
*UiaTMoBL
The systems that had either of these passwords are long-
since turned off.
Norman Wilson
Toronto ON
This stuff is extremely poorly preserved. No time like the present to
fix that. I was reading Tom's blog
https://akapugs.blog/2018/05/12/370unixpart3/ and have been aware of
Amdahl UTS and a couple of the other ports for a while.
I've got an HP 88780 quad density 9-track and access to a SCSI IBM
3490. Can fit them in air cargo and bring a laptop with a SCSI card.
Tell me where to go.
Regards,
Kevin
Tangential, but interesting:
https://blog.edx.org/congratulations-edx-prize-2019-winners/
Where would you expect a MOOC about C to originate? Not, it
turns out, in a computer-science department. Professor
Bonfert-Taylor is a mathematician in the school of
engineering at Dartmouth.
Doug
On Tue, 19 Nov 2019, Tony Finch wrote:
> Amusingly POSIX says the C standard takes precedence wrt the details of
> gets() (and other library functions) and C18 abolished gets(). I'm
> slightly surprised that the POSIX committee didn't see that coming and
> include the change in the 2018 edition...
Didn't know that gets() had finally been abolished; it's possibly the most
unsafe function (OK, macro) on the planet. I've long been tempted to
remove gets() and see what breaks...
-- Dave
Bakul Shah:
Unfortunately strcpy & other buffer overflow friendly
functions are still present in the C standard (I am looking at
n2434.pdf, draft of Sept 25, 2019). Is C really not fixable?
====
If you mean `can C be made proof against careless programmers,'
no. You could try but the result wouldn't be C. And Flon's
Dictum applies anyway, as always.
It's perfectly possible to program in C without overflowing
fixed buffers, just as it's perfectly possible to program in
C without dereferencing a NULL or garbage pointer. I don't
claim to be perfect, but before the rtm worm rubbed my nose
in such problems, I was often sloppy about them, and afterward
I was very much aware of them and paid attention.
That's all I ask: we need to pay attention. It's not about
tools, it's about brains and craftsmanship and caring more
about quality than about feature count or shiny surfaces
or pushing the product out the door.
Which is a good bit of what was attractive about UNIX in
the first place--that both its ideas and its implementation
were straightforward and comprehensible and made with some
care. (Never mind that it wasn't perfect either.)
Too bad software in general and UNIX descendants in particular
seem to have left all that behind.
Norman Wilson
Toronto ON
PS: if you find this depressing, cheer yourself up by watching
the LCM video showing off UNICS on the PDP-7. I just did, and
it did.
I hadn't seen this yet - apologies if it's not news:
https://livingcomputers.org/Blog/Restoring-UNIX-v0-on-a-PDP-7-A-look-behind…
Quoting:
"I recently sat down with Fred Yearian, a former Boeing engineer, and
Jeff Kaylin, an engineer at Living Computers, to talk about their
restoration work on Yearian's PDP-7 at Living Computers: Museum +
Labs."
[...]
Up until the discovery of Yearian’s machine, LCM+L’s PDP-7 was
believed to be the only operational PDP-7 left in the world. Chuckling
to himself, Yearian recalls hearing this history presented during his
first visit to LCM+L.
“I walked in the computer museum, and someone said ‘Oh, this is the
only [PDP-7] that’s still working’.
And I said, ‘Well actually, I got one in my basement!’”
[end quote]
Fun story - and worthy work. Nicely done.
--
Royce
Hi.
Doug McIlroy is probably the best person to answer this.
Looking at the V3 and V4 manuals, there is a reference to the m6 macro
processor. The man page thereof refers to
A. D. Hall, The M6 Macroprocessor, Bell Telephone Laboratories, 1969
1. Is this memo available, even in hardcopy that could be scanned?
2. What's the history of m6, was it written in assembler? C?
3. When and why was it replaced with m4 (written by DMR IIRC)?
More generally, what's the history of m6 prior to Unix?
IIRC, the macro processor in Software Tools was inspired by m4,
and in particular its immediate evaluation of its arguments during
definition.
I guess I'll also ask, how widespread was the use of macro processors
in high level languages? They were big for assembler, and PL/1 had
a macro language, but I don't know of any other contemporary languages
that had them. Were the general purpose macro processors used a lot?
E.g. with Fortran or Cobol or ...
I'm just curious. :-)
Thanks,
Arnold
I think I recall an explicit statement somewhere from an
interview with Robert that the worm was inspired partly
by Shockwave Rider.
I confess my immediate reaction to the worm was uncontrollable
laughter. I was out of town when it happened, so I first
heard of it from a newspaper article (and wasn't caught up in
fighting it or I'd have laughed a lot less, of course); and
it seemed to me hilarious when I read that Robert was behind
it. He had interned with 1127 for a few summers while I was
there, so I knew him as very bright but often a bit careless
about details; that seemed an exact match for the worm.
My longer-term reaction was to completely drop my sloppy
old habit (common in those days not just in my code but in
that of many others) of ignoring possible buffer overflows.
I find it mind-boggling that people still make that mistake;
it has been literal decades since the lesson was rubbed in
our community's collective noses. I am very disappointed
that programming education seems not to care enough about
this sort of thing, even today.
Norman Wilson
Toronto ON
> That was the trouble; had he bothered to test it on a private network (as
> if a true professional would even consider carrying out such an act)[*] he
> would've noticed that his probability calculations were arse-backwards
Morris's failure to foresee the results of even slow exponential
growth is matched by the failure of the critique above to realize
that Morris wouldn't have seen the trouble in a small network test.
The worm assured that no more than one copy (and occasionally one clone)
would run on a machine at a time. This limits the number of attacks
that any one machine experiences at a time to roughly the
number of machines in the network. For a small network, this will
not be a major load.
The worm became a denial-of-service attack only because a huge
number of machines were involved.
I do not remember whether the worm left tracks to prevent its
being run more than once on a machine, though I rather think
it did. This would mean that a small network test would not
only behave innocuously; it would terminate almost instantly.
Doug
FYI.
----- Forwarded message from Linus Torvalds <torvalds(a)linux-foundation.org> -----
Date: Wed, 13 Nov 2019 12:37:50 -0800
From: Linus Torvalds <torvalds(a)linux-foundation.org>
To: Larry McVoy <lm(a)mcvoy.com>
Subject: Re: enum style?
On Wed, Nov 13, 2019 at 10:28 AM Larry McVoy <lm(a)mcvoy.com> wrote:
>
> and asked what was the point of the #defines. I couldn't answer, the only
> thing I can think of is so you can say
>
> int flags = MS_RDONLY;
>
> Which is cool, but why bother with the enum?
For the kernel we actually have this special "type-safe enum" checker
thing, which warns about assigning one enum type to another.
It's not really C, but it's close. It's the same tool we use for some
other kernel-specific type checking (user pointers vs kernel pointers
etc): 'sparse'.
http://man7.org/linux/man-pages/man1/sparse.1.html
and in particular the "-Wenum-mismatch" flag to enable that warning
when you assign an enum to another enum.
It's quite useful for verifying that you pass the right kind of enum
to functions etc - which is a really easy mistake to make in C, since
they all just devolve into 'int' when they are used.
However, we don't use it for the MS_xyz flag: those are just plain
#define's in the kernel. But maybe somebody at some point wanted to do
something similar for the ones you point at?
The only other reason I can think of is that somebody really wanted to
use an enum for namespace reasons, and then noticed that other people
had used a #define and used "#ifdef XYZ" to check whether it was
available, and then instead of changing the enums to #defines, they
just added the self-defines.
In the kernel we do that "use #define for discoverability" quite a lot,
particularly with architecture-specific helper functions. So you might
have
static inline some_arch_fn(...) ...
#define some_arch_fn some_arch_fn
in an architecture file, and then in generic code we have
#ifndef some_arch_fn
static inline some_arch_fn(...) /* generic implementation goes here */
#endif
as a way to avoid extra configuration variables for the "do I have a
special version X?"
There's no way to ask "is the C symbol X available in this scope", so
using the pre-processor for that is as close as you can get.
Linus
----- End forwarded message -----
--
---
Larry McVoy                lm at mcvoy.com                http://www.mcvoy.com/lm
Most of this post is off topic; the conclusion is not.
On the afternoon of Martin Luther King Day, 1990, AT&T's
backbone network slowed to a crawl. The cause: a patch intended
to save time when a switch that had taken itself off line (a
rare, but routine and almost imperceptible event) rejoined the
network. The patch was flawed; a lock should have been taken
one instruction sooner.
Bell Labs had tested the daylights out of the patch by
subjecting a real switch in the lab to tortuously heavy, but
necessarily artificial loads. It may also have been tested on
a switch in the wild before the patch was deployed throughout
the network, but that would not have helped.
The trouble was that a certain sequence of events happening
within milliseconds on calls both ways between two heavily
loaded switches could evoke a ping-pong of the switches leaving
and rejoining the network.
The phenomenon was contagious because of the enhanced odds of a
third switch experiencing the bad sequence with a switch that
was repeatedly taking itself off line. The basic problem (and
a fortiori the contagion) had not been seen in the lab because
the lab had only one of the multimillion-dollar switches to
play with.
The meltdown was embarrassing, to say the least. Yet nobody
ever accused AT&T of idiocy for not first testing on a private
network this feature that was inadvertently "designed to
compromise" switches.
Doug
M6 originated as a porting tool for the Fortran source code
for Stan Brown's Altran language for algebraic computation. M6
itself was originally written in highly portable Fortran.
Arnold asked, "How widespread was the use of macro processors
in high level languages? They were big for assembler, and
PL/1 had a macro language, but I don't know of any other
contemporary languages that had them."
Understanding "contemporary" to mean pre-C, I agree. Cpp,
a particularly trivial macroprocessor, has been heavily used
ever since--even for other languages, e.g. Haskell.
The rumor that Bob Morris invented macros is off the
mark. Macros were in regular use by the time he joined Bell
Labs. He did conceive an interesting "form-letter generator",
called "form", and an accompanying editor "fed". A sort of
cross between macros and Vannevar Bush's hypothetical memex
repository, these were among the earliest Unix programs and
appeared in the manual from v1 through v6.
Off-topic warning: pre-Unix stories follow.
Contrary to an assertion on cat-v.org, I did not invent macros
either. In 1959 Doug Eastwood and I, at the suggestion of
George Mealy, created the macro facility for SAP (SHARE assembly
program) for the IBM 704. However, the idea was in the air at
the time. In particular, we knew that GE already had macros,
though we knew no details about their syntax or semantics.
There were various attempts in the 1960s to build languages by
macro extension. The approach turned out to entail considerable
social cost: communication barriers arise when everyone
can easily create his own dialect. A case in point: I once
had a bright young mathematician summer employee who wrote
wonderfully concise code by heaping up macro definitions. The
result was inscrutable.
Macros caught on in a big way in the ESS labs at Indian Hill.
With a macro-defined switching language, code builds were
slow. One manager there boasted that his lab made more
thoroughgoing use of computers than other departments and
cited enormous consumption of machine time as evidence.
Steve Johnson recalls correctly that there was a set of macros
that turned the assembler into a Lisp compiler. I wrote it
and used it for a theorem-proving project spurred by Martin
Davis. (The project was blown away when Robinson published
the resolution principle.) The compiler did some cute local
optimization, taking account of facts such as Bob Morris's
astute observation that the 704 instruction TNZ (transfer on
nonzero) sets the accumulator to zero.
Doug
Dave Horsfall:
And for those who slagged me off for calling him an idiot, try this quick
quiz: on a scale from utter moron to sheer genius, what do you call
someone who deliberately releases untested software designed to compromise
machines that are not under his administrative control in order to make
some sort of a point?
=====
I'd call that careless and irresponsible. Calling it stupid or
idiotic is, well, a stupid, idiotic simplification that succeeds
in being nasty without showing any understanding of the real problem.
Carelessness and irresponsibility are characteristic of people
in their late teens and early 20s, i.e. Robert's age at the time.
Many of us are overly impressed with our own brilliance at that
age, and even when we take some care (as I think Robert did) we
don't always take enough (as he certainly didn't).
Anyone who claims not to have been at least a bit irresponsible
and careless when young is, in my opinion, not being honest. Some
of my former colleagues at Bell Labs weren't always as careful and
responsible as they should have been, even to the point of causing harm
to others. But to their credit, when they screwed up that way they
owned up to having done so, tried to make amends, and tried to do
better in future, just as Robert did. It was just Robert's bad
luck that he screwed up in such a public way and did harm to so
many people.
I save my scorn for those who are long past that age and still
behave irresponsibly and harmfully, like certain high politicians
and certain high-tech executives.
Probably future discussion of this should move to COFF unless it
relates directly to the culture and doings in 1127 or other historic
UNIX places.
Norman Wilson
Toronto ON
Sent to me by someone not on this list; I have no idea whether it's been
mentioned here before.
-- Dave
---------- Forwarded message ----------
To: Dave Horsfall <dave(a)horsfall.org>
Subject: Unix Programmer's Manual, 3rd edition (1973)
Hi Dave,
Some nostalgic soul has shared a PDF on the interwebz:
> MIT CSAIL (@MIT_CSAIL) tweeted at 3:12 am on Mon, Nov 04, 2019:
> #otd in 1971 Bell Labs released the first Unix Programmers Manual.
>
> Download the free PDF here: https://t.co/BYh3dAhaJU
I wonder what became of the first and second editions?
> From: Nemo Nusquam
> One comment .. stated that (s)he worked at The Bell and they wrote it
> "unix" (lower-case) to distinguish it from MULTICS. Anyone care to
> comment on this?
All the original Multics hardcopy documentation I have (both from GE and MIT,
as well as later material from Honeywell) spells it 'Multics'. Conversely, an
original V6 UPM spells it 'UNIX'; I think it switched to 'Unix' around the
time of V7. (I don't know about _really_ early, like on the PDP-7.)
The bit about case to differentiate _might_ be right.
Noel
I may still have AOS 4.3 tape images around somewhere. I will have
to search and see if I still have them. Though even if I do, I'm
not sure if the license would permit me to make them available - if I
recall correctly, this wasn't an actual LPP, but there may be some IBM
license on this over and above the Berkeley license. Yes, it did come on
tape cartridges.
--Pat.
Another possible source of inspiration — including the name “worm” — were the publications by John Shoch and Jon Hupp on programs they wrote at Xerox PARC around 1979-1980 and published in 1980 and 1982:
John F. Shoch and Jon Hupp:
The “Worm” Programs — Early Experience with a Distributed Computation.
Xerox SSL-80-3 and IEN 159. May 1980, revised September 1980
http://www.postel.org/ien/pdf/ien159.pdf
John F. Shoch and Jon Hupp:
The “Worm” Programs — Early Experience with a Distributed Computation.
CACM V25 N3 (March 1982)
http://www.eecs.harvard.edu/~margo/cs261/background/shoch.pdf
> On Nov 3, 2019, Paul Winalski <paul.winalski(a)gmail.com> wrote:
>
> On 11/2/19, Warner Losh <imp(a)bsdimp.com <mailto:imp@bsdimp.com>> wrote:
>>
>> the notion of a self propagating thing
>> was quite novel (even if it had been theoretically discussed in many places
>> prior to the worm, and even though others had proven it via slower moving
>> vectors of BBS).
>
> Novel to the Internet community, perhaps, but an idea that dates back
> to the 1960s in IBM mainframe circles. Self-submitting OS/360 JCL
> jobs, which eventually caused a crash by filling the queue files with
> jobs, were well-known in the raised-floor world.
>
>> In hindsight people like to point at it and what a terrible thing it was,
>> but Robert just got there first.
>
> Again, first on the Internet. Back in 1980 I accidentally took down
> DEC's internal engineering network (about 100 nodes, mostly VAX/VMS,
> at the time) with a worm. ...
>
> Robert Morris worked as an intern one summer in DEC's compiler group.
> The Fortran project leader told Morris about my 1980 worm incident.
> So he certainly had heard of the concept before he fashioned his
> UNIX/Internet-based worm a few years later.
>
> -Paul W.
All, the second Unix artifact that I've been waiting to announce has
arrived. This time the LCM+L is announcing it. It's not the booting PDP-7.
So, cast your eyes on https://www.tuhs.org/Archive/Distributions/IBM/370/
Cheers, Warren
P.S Thanks to Stephen Jones for this as well.
Full disclosure: I served as a character witness at Robert Morris's trial.
Before the trial, the judge was quite incredulous that the prosecutor
was pursuing a felony charge and refused to let the trial go forward
without confirmation from the prosecutor's superiors in Washington.
> I'm sure that Bob was proud of his son's accomplishments -- but not
that one.
As Bob put it, "It [being the father] is not a great career move."
Robert confessed to Bob as soon as he realized the folly of loosing
an exponential, even with a tiny growth rate per generation. I
believe that what brought computers to their knees was the
overwhelming number of attacks, not the cost of decryption. The
worm did assure that only one copy would be allowed to proceed
at a time.
During high school, Robert worked as a summer employee for Fred
Grampp. He got high marks for finding and correcting an exploit.
> making use of known vulnerabilities
Buffer overflows were known to cause misbehavior, but few people
at the time were conscious that the misbehavior could be controlled.
I do not know whether Berkeley agonized before distributing the
"debug" feature that allowed remote super-user access via sendmail.
But they certainly messed up by not documenting it.
Doug
Hey I'm at the hackers conference (having a blast, I thought I was too
retired and burned out and I'm apparently still somewhat OK with that
crowd, much to my surprise. Super fun bunch of nerds).
Steve Bourne is here and I mentioned this list and he didn't know
about it. His interest perked up a bit when I said Doug and Rob and
Ken are here, I think his comment was something like "if Ken is there
it must be something, Ken likes to do stuff more than talk about stuff".
Probably have that not quite right but it was something like that.
I'd love to have all of the Bell Labs alumni here, hearing history from
them is awesome.
So Warren, it's your list, Steve is srb(a)acm.org, you want to do an invite?
I can do it if you prefer that but I thought I'd ask first.
Cheers,
--lm
Perhaps someone can help me locate a very humorous short
essay from Dick Haight of PWB (I believe Dick was John Mashey's
boss at the time) work in Piscataway. I had a paper copy that Dick
gave me that has long since disappeared in many office moves over
almost 50 years.
*John "Jack" Lossin Adams*
*LinkedIn CV <http://lnkd.in/_Q_w7p>* and on *Facebook
<http://facebook.com/John.Lossin.Adams>*
*If God is your Co-Pilot, you're sitting in the wrong seat!*
Veritas per Scientiam - NJIT motto
*We live at a time when emotions and feelings **count more than **truth,
and there *
*is a vast ignorance of science. - James Lovelock*
*Technology is dominated by two types of people: those who understand what *
*they **do not manage, and those who manage what they do not *
*understand. - Archibald Putt*
*We live in a society exquisitely dependent on science and technology, in
which hardly *
*anyone knows anything about science and technology. - Carl Sagan*
*Only two things are infinite, the universe and human stupidity, *
*and I'm not sure about the former. - Albert Einstein*
-------- Original Message --------
From: Stephen Jones <StephenJo(a)livingcomputers.org>
Sent: 3 November 2019 3:05:31 am AEST
Subject: Re: UNIX-7 boots on sn 129
A couple of videos of the action this week:
https://m.youtube.com/watch?v=pvaPaWyiuLA&t=18s
https://m.youtube.com/watch?v=L5MKwp2uj2k&t=119s
The JK09 turns out not to be an emulator but the newest storage device and driver for the pdp-7 and unix v0!
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
The infamous Morris Worm was released in 1988; making use of known
vulnerabilities in Sendmail/finger/RSH (and weak passwords), it took out a
metric shitload of SUN-3s and 4BSD Vaxen (the author claimed that it was
accidental, but the idiot hadn't tested it on an isolated network first). A
temporary "condom" was discovered by Rich Kulawiec with "mkdir /tmp/sh".
Another fix was to move the C compiler elsewhere.
-- Dave
ISBN 9781695978553, for anyone who wants to know that.
I see it for sale on amazon.com and amazon.ca, paperback, `Independently
published.' Does anyone know if it is likely to appear in bricks-and-mortar
bookshops any time soon?
Norman Wilson
Toronto ON
Robert Clausecker <fuz(a)fuz.su> wrote:
> > I've tried Microport SystemV /386 (SysV R3.2). It uses COFF
> Nice find! It seems to use lcall to selector 7 for system calls. A
> similar choice was made in 386BSD all the way through FreeBSD 2.2.8
> where it was replaced with int $0x80 as in Linux.
Technically speaking
lcall $0x07,$0
uses selector 0 with RPL=3 (bit0 and bit1==1) and LDT (bit2==1)
It seems to be the oldest way to call the kernel from userspace on the
x86 architecture. AT&T's programmers used this syscall convention for SysVR3 and
SysVR4 on i386 (not sure about SysVR2 on i286).
There are very few examples of the lcall-type syscall, e.g.
http://www.sco.com/developers/devspecs/abi386-4.pdf
(figure 3-26)
(and leaked SysVR4 i386 sources)
William Jolitz used this convention in his amazing articles about
porting BSD4.3 to the i386 (c)1991
http://www.informatica.co.cr/unix-source-code/research/1991/0101.html
(p. "System Call Interface"). See also 386BSD 0.0:
https://github.com/386bsd/386bsd/blob/0.0/arch/i386/i386/locore.s#L361
(Did he run AT&T userspace on his kernel ???)
As you mentioned, most of early *BSD systems on i386 also used lcall.
Linus selected to use "DOS-style" call with INT 0x80.
More recent BSD on i386 also use INT.
https://john-millikin.com/unix-syscalls
http://asm.sourceforge.net/intro/hello.html
Solaris on x86 (ex-SysVR4) also uses lcall. See
https://www.cs.dartmouth.edu/sergey/cs258/solaris-on-x86.pdf
p.4.2.3
and Solaris (later OpenSolaris and later Illumos) sourcecode.
All, I just received this from Stephen Jones at the LCM+L.
----- Forwarded message from Stephen Jones <StephenJo(a)livingcomputers.org> -----
Subject: UNIX-7 boots on sn 129
Hello Folks .. you’ll hear through official channels along with videos
and pictures (hopefully soon) that we just got PDP-7 UNICS to boot on
a real PDP-7 (sn 129) using our newly designed “JK09” disk drive.
The recent posting of source is going to be great .. we’ve been using
the simh image that has been available for a while.
BTW, compiling the B Hello World on a real 7 is much more satisfying
than it is under simh …
More to come, please watch Living Computers for updates.
(PS sorry we’re late to the BTL party).
Stephen M. Jones
----- End forwarded message -----
> 10-36-55.pdf user-mode programs: pool game
This game, written by ken, used the Graphic 2. One of its
earliest tests--random starting positions and velocities on
a frictionless table with no collision detection--produced
a mesmerizing result. This was saved in a program called
"weird1", which was carried across to the PDP11.
Weird1 was a spectacular accidental demonstration of structure
in pseudo-random numbers. After several minutes the dots
representing pool balls would evanescently form short local
alignments. Thereafter from time to time ever-larger alignments
would materialize. Finally in a grand climax all the balls
converged to a single point.
It was stunning to watch perfect order emerge from apparent
chaos. One of my fondest hopes is to see weird1 revived.
Doug
Some time ago, I wrote a piece [1] about the design of the AT&T
assembler syntax. While I'm still not quite sure if everything in there
is correct, this explanation seemed plausible to me; the PDP-11
assembler being adapted for the 8086, then the 80386 and then ELF
targets, giving us today's convoluted syntax.
The one thing in this chain I have never found is an AT&T style
assembler for x86 before ELF was introduced. Supposedly, it would get
away without % as a register prefix, thus being much less obnoxious to
use. Any idea if such an assembler ever existed and if yes where?
I suppose Xenix might have shipped something like that.
The only AT&T syntax assemblers I know today are those from Solaris,
the GNU project, the LLVM project, and possibly whatever macOS ships.
Are there (or where there) any other x86 AT&T assemblers? Who was
the first party to introduce this?
Yours,
Robert Clausecker
[1]: https://stackoverflow.com/a/42250270/417501
--
() ascii ribbon campaign - for an 8-bit clean world
/\ - against html email - against proprietary attachments
Robert Clausecker <fuz(a)fuz.su>wrote:
> The one thing in this chain I have never found is an AT&T style
> assembler for x86 before ELF was introduced.
There were a lot of AT&T codebase ports to the x86 architecture besides Xenix:
Microport, INTERACTIVE, Everex, Wyse, etc., all using AT&T x86 syntax.
I've tried Microport SystemV /386 (SysV R3.2). It uses COFF
as format for executables:
See:
http://www.vcfed.org/forum/showthread.php?67736-History-behind-the-disk-ima…
(Rather interesting kernel ABI/Call convention)
and
https://gunkies.org/wiki/Unix_SYSVr3
There were also SystemV R2 to i286 ports i.e.:
https://gunkies.org/wiki/Microport_System_V
with a.out binary format.
Bother:
Here's a good picture of G R herself, with what I believe to be at
least a second-generation badge.
Forgot to paste in the URL. Here it is:
http://www.peteradamsphoto.com/g-r-emlin/
Mary Ann Horton:
I'm enjoying bwk's book very much, but it has me wondering. There are
two stories I've heard that supposedly occurred at Murray Hill, but he
didn't include them.
====
You can't expect every story to be there. The book would be too
heavy to lift!
Could the `monkey picture on a badge' story be that of G. R. Emlin's
badge? She was a gremlin doll, not a monkey, but it would be
reasonable to mistake the former for the latter.
Here's a good picture of G R herself, with what I believe to be at
least a second-generation badge. The original badge was an old-style
Bell Labs one with a green border; I forget whether that meant
contractor or something else, but a regular MTS badge was blue-bordered
at the time.
Norman Wilson
Toronto ON
> What is the special meaning of using / as directory partition in UNIX? And \ as the escape character.
\ came from Multics. The first day Multics ran at Bell Labs Bob Morris
famously typed backslash-newline at the login prompt and crashed the
system.
Multics had a hierarchical file system, too, but I don't recall how
pathnames were punctuated.
Doug
> From: Charles Anthony
>> I think it was >user_dir_dir>Group>User, wasn't it?
> user_dir_dir>Project>User
Oh, right. Too many years spent on Unix! :-)
> "Names" are aliases, similar to soft links
I feel like they are more similar to hard links; they belong to a segment, and
if the name is given to another segment, and the original segment has only
that name, it goes away. (See the discussion under "add_name" in the MPM
'Commands and Active Functions'). Also, Multics does real soft links (too),
so names can't be soft links! :-)
Noel
My talk has been posted.
https://youtu.be/FTlzaDgzPY8
Thanks to everyone who helped make it better.
Warner
P.s. this may be a duplicate email... I had domain issues when I sent it
before...
> From: Charles Anthony
> /home/CAnthony
I think it was >user_dir_dir>Group>User, wasn't it? I seem to remember my
homedir on MIT-Multics was >udd>CSR>JNChiappa?
And I wonder if the 'dd' directory on PDP-7 Unix owe anything to 'udd'?
Getting back to the original query, I'm wondering if '/' was picked
as it wasn't shifted, unlike '>'?
Noel
On Mon, 21 Oct 2019, Andrew Hume wrote:
> the gt40??? oh my lord! good job i am en route to the bell labs 50th
> anniversary.
> its been a long time since i heard the name “Dave Horsfall”!
Yep :-) Although now retired, I'm still active in Unix projects.
-- Dave
I was about to add a footnote to history about
how the broad interests and collegiality of
Bell Labs staff made Space Travel work, when
I saw that Ken beat me to telling how he got
help from another Turing Award winner.
> while writing "space travel,"
> i could not get the space ship integration
> around a planet to keep from either gaining or
> losing energy due to floating point errors.
> i asked dick hamming if he could help. after
> a couple hours, he came back with a formula.
> i tried it and it worked perfectly. it was some
> weird simple double integration that self
> corrected for fp round off. as near as i can
> ascertain, the formula was never published
> and no one i have asked (including me) has
> been able to recreate it.
If I remember correctly, the cause of Ken's
difficulty was not roundoff error. It
was discretization error in the integration
formula--probably f(t+dt)=f(t)+f'(t)dt.
Dick saw that the formula did not conserve
energy and found an alternative that did.
All, another dozen TUHS subscribers joined the list overnight. Welcome.
A reminder that we're here to discuss Unix Heritage, so I'll nudge you
if the conversation goes a bit off-topic.
So I'll kick off another thread. What was your "ahah" moment when you
first saw that Unix was special, especially compared to the systems you'd
previously used?
Mine was: Oh, I can:
+ write a simple script
+ to edit a file on the fly
+ with no temporary files (a la pipes)
+ AND I can change the file suffix and the system won't stop me!
I was using TOPS-20 beforehand.
Cheers, Warren
DMR explained how PDP-7 UNIX was used in "The Evolution of the Unix
Time-sharing System" but having played with it myself, I stumbled in a
couple of cases and found it a bit awkward to use.
Maybe someone (ken and doug?) can shed some light on "elaborate set of
conventions" that dmr mentioned.
My questions are these:
you cannot execute a program if you're in a directory you can't write into.
I asked Warren about this when I first tried pdp7 unix and he
explained it to me: the shell creates a link to the binary and executes
it. If it can't write into the current directory, it fails to create the
link and hence can't execute the program.
How was this handled in practice? did users have write
permissions on all directories? did you just stay in your directory all
the time?
. and ..
Was this introduced first with PDP-11 unix or did the convention
start on the PDP-7 already? It certainly seems to be the case with .
but how about ..? the dd directory seems to take on the role of a sort
of root directory and the now discovered program pd actually creates a
file .. (haven't tried to understand what it does though yet)
What does dd stand for, dotdot? directory directory?
aap
> As noted in the jargon file, the dd(1) syntax is deliberately reminiscent
> of the DD statement in IBM JCL. This was presumably a joke
That is certainly true and reflects its major early usage to
prepare tapes to carry to other systems.
Though I haven't used it in ages, I recall that the joke was
so fully ingrained that the command was more likely to be
written "dd if=x of=y" than "dd <x >y".
Doug
> From: Abhinav Rajagopalan
> I only now realized that only mknod existed, up until a long time, only
> later on with the GNU coreutils did mkdir as a command come into
> existence.
Huh? See:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V6/usr/man/man1/mkdir.1
(And probably before that, that was the quickest one to find?)
Maybe that was a typo for 'mkdir as a system call'? (I recall having to do a
fork() to execute 'mkdir', back when.) But 4.2 had mkdir().
Noel
Bakul Shah:
Being an OS student I had read "The Unix Timesharing System" paper by
Ritchie and Thompson and had wanted to use Unix years before I actually
had the chance. I don't remember an "Aha!" moment but I took to it like
a duck to water. Most everything felt just so comfortable and right.
It was very much as I had imagined it to be.
=====
That's more or less what it was like to me. Not so much
an aha! moment, more just a feeling of coming home. It
took a while to understand the different way things worked
in UNIX (I had previously used TOPS-10 for several years)
but as it all sank in it felt more and more right.
C felt the same way. It took me a while to grok the pointer
syntax (I had done a lot of MACRO-10 programming so I certainly
understood the concept, just not how it fit into the higher-
level language), but things like the three-clause condition
in for so that all control for a loop could be in one place
were just magically right.
I don't think I read the CACM paper before I touched UNIX,
but I had read both editions of Software Tools, so my brain
was perhaps pre-seeded with some of the ideas.
Norman Wilson
Toronto ON