This book can apparently be "borrowed" from the Internet Archive. I'm not
sure how they do that, haven't tried to, I just see it says you need to
log in to borrow the book for 14 days.
https://archive.org/details/unixprimerplus00wait
--Pat.
OK, something that's not a ping :-)
I'm trying to track down the author of a cartoon that I'd like to use
in my book so that I can try to get permission. Last one that I need!
It's a cartoon that I only have on paper and don't know where it came
from. It has two frames, then and now, the first with a bunch of
cavemen grunting awk, grep, mkdir, yank, the second with a bunch of
people sitting at computers uttering the same.
I recently stumbled upon something that said that these were in a book
called "UNIX Primer PLUS". Anybody have a copy of that? If so, can
you please check to see if that's the original or whether they got it
from somewhere else?
Thanks,
Jon
On 04/10/19 09:59, arnold(a)skeeve.com wrote:
> Nemo <cym224(a)gmail.com> wrote:
>> On 10/04/2019, Warren Toomey <wkt(a)tuhs.org> wrote:
>>> Just checking you are all still out there :-) Cheers, Warren
>> Well, this is not "Forever September"? #6-) I just finished reading a
>> fascinating article on Inferno and was most amused by the comment in
>> Rob Pike's biblio note at the end. N.
> So, please share article link and comment with the list? Thanks, Arnold
Apologies -- I found it here:
https://ieeexplore.ieee.org/document/6772868/ Bell Labs Tech. J., Vol.
2, Iss. 1, 1997 (or here
https://onlinelibrary.wiley.com/doi/pdf/10.1002/bltj.2028 but there must
be an open version available by now).
Pike wrote: "He has never written a program that uses cursor addressing."
N.
So, a while back I mentioned that I'd done tweaked versions of 'cp', 'mv',
'chmod' etc for V6 which retained the original modified date of a file (when
the actual contents were not changed). I had some requests for those versions,
which I have finally got around to checking and uploading (along with 'mvall',
which for some reason V6 didn't have). I've added them to a couple of my V6
pages:
http://www.chiappa.net/~jnc/tech/V6Unix.html#mvall
http://www.chiappa.net/~jnc/tech/ImprovingV6.html#FileWrite
Note (per the page) that the latter group all require the smdate() system
call, which was commented out in 'vanilla' V6 (because using it confused the
backup system); the page gives instructions on how to turn it back on.
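For anyone who wants the same effect on a modern system, no special system call is needed; here is a rough sketch of the idea (not Noel's V6 code, which relies on smdate()): copy the data, then put the source's timestamps back on the copy with utimes().

/* Sketch: copy src to dst, then carry the source's access and
 * modification times over to the copy -- the general idea behind
 * the tweaked V6 cp, done here with modern POSIX calls only. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/time.h>

int copy_preserving_times(const char *src, const char *dst)
{
        char buf[8192];
        size_t n;
        struct stat st;
        struct timeval tv[2];
        FILE *in = fopen(src, "rb");
        FILE *out = fopen(dst, "wb");

        if (in == NULL || out == NULL || stat(src, &st) < 0)
                return -1;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                fwrite(buf, 1, n, out);         /* plain byte-for-byte copy */
        fclose(in);
        fclose(out);
        tv[0].tv_sec = st.st_atime;  tv[0].tv_usec = 0;   /* access time */
        tv[1].tv_sec = st.st_mtime;  tv[1].tv_usec = 0;   /* modification time */
        return utimes(dst, tv);      /* make dst's times match src's */
}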
Noel
> "taperead" in http://github.com/brouhaha/tapeutils can extract files
> from a tape image.
The format is very simple: a 32-bit little-endian record length,
followed by that many bytes, followed by the length again for
integrity checking. A record length of zero is a file mark.
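For anyone who wants to poke at such an image by hand, here is a minimal C sketch of a record walker, assuming exactly the format described above (no padding or special marker values -- real tools like taperead handle more cases):

/* Sketch: list the records in a simple tape image -- a 32-bit
 * little-endian record length, the data, then the length repeated;
 * a zero length is a file mark. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint32_t rdlen(FILE *f)
{
        unsigned char b[4];

        if (fread(b, 1, 4, f) != 4)
                exit(0);                        /* ran off the end of the image */
        return b[0] | b[1] << 8 | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

int main(int argc, char **argv)
{
        FILE *f;
        uint32_t len, trail;
        long rec = 0;

        if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL) {
                fprintf(stderr, "usage: tapels image\n");
                return 1;
        }
        for (;;) {
                len = rdlen(f);
                if (len == 0) {                 /* file mark */
                        printf("-- file mark --\n");
                        rec = 0;
                        continue;
                }
                printf("record %ld: %lu bytes\n", ++rec, (unsigned long)len);
                fseek(f, (long)len, SEEK_CUR);  /* skip over the data */
                trail = rdlen(f);
                if (trail != len)               /* the integrity check */
                        fprintf(stderr, "length mismatch: %lu vs %lu\n",
                                (unsigned long)len, (unsigned long)trail);
        }
}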
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
I noticed that the TUHS archive does not include a 4.1BSD distribution.
Also, while poking around the net, I've found a number of purported
tape images of 4.1BSD dated 7/10/1981 that look to me to be a little sketchy,
since most contain files dated well into 1982.
So it appears to me that 4.1BSD is semi-lost.
While googling all this, I discovered that the School of Computer Science
and Statistics at Trinity College Dublin has an online archive catalog
which lists a couple of 4.1BSD distribution tapes in the "John Gabriel Byrne
Computer Science Collection".
https://scss.tcd.ie/SCSSTreasuresCatalog/
Perhaps someone from TUHS who lives near Dublin could investigate and
see if images can be made of these tapes?
Years ago just before one of the USENIX meetings in Atlanta Dennis made some
joke comment that nobody had ever asked for a plaster cast of his genitals.
A bunch of us thought it would be fun at the conference to hand out genital
casting kits to Dennis and certain others of note. We ran down to a
local art supply store and bought some plaster and portioned it out into
ziplock bags. We added some paper cups to use for molds and wooden sticks
to mix with. We needed a release agent. Vaseline would work, but I
couldn't figure out how we'd get small portions (I couldn't use the ziplock
bag idea practically). Fortunately, there was a little gift shop in the
hotel lobby and they had these travel size jars. Perfect.
Now the interesting thing was that concurrent with USENIX was the Southern
Baptist Conference meetings (this led to some odd events at local
restaurants).
Anyhow, I walk up to the cashier and plop down ten jars of Vaseline.
She looks at me and says, "I guess y'all aren't with the Baptists."
Oddly, most recipients took the gift in good spirit but Redman had a fit for
some reason. Babette suggested perhaps we made the kit too large for him.
According to my (possibly inaccurate) notes:
NetBSD checked in 1993
Message-ID: <alpine.NEB.2.20.1803211456560.25928(a)t1.m.reedmedia.net>
Revision 1.1, Sun Mar 21 09:45:37 1993 UTC (25 years ago) by cgd
http://cvsweb.netbsd.org/bsdweb.cgi/src/sbin/init/init.c?rev=1.1&content-ty…
"Today is commonly considered the birthday of NetBSD. As far as I know,
it is the oldest continuously-maintained complete open source operating
system. (It predates Slackware Linux, FreeBSD, and Debian Linux by some
months.)"
-- Dave
Hello,
I'm looking for old DNS software - servers, clients, libraries. The
oldest one I've found is BIND 4.3 from 4.3BSD (and it works as a
resolver on the 2019 Internet), but there were earlier ones -
http://www.donelan.com/dnstimeline.html says that UCB released the first
BIND in 1985, so something must have been running on earlier servers. Any help would
be appreciated.
Witold Krecicki
We lost computer pioneer John Backus on this day in 2007; amongst other things
he gave us FORTRAN (yuck!) and BNF, which is ironic, really, because FORTRAN
has no syntax to speak of.
-- Dave
>> But sed, awk, perl, python, ... lex and parse once into an AST or
>> bytecode, removing the recurring cost of comments, etc. that impact
>> groff. So I don't think it's an even comparison.
>
> Of course it's a valid comparison. Which sed or awk or shell script is
> distributed in a stripped/compressed form? Do they store their AST
> somewhere, so as to avoid recompilation? They do not. Just as
> with groff, every parse starts anew.
Comments inside of a macro definition get scanned each time it's called.
This justifies the first paragraph above.
In the wild, almost all comments occur outside macro definitions.
This justifies the second.
Thus comments are harmless in practice.
Doug
An amusing Unix-related snippet from a science fiction site:
https://www.tor.com/2019/03/12/more-please-authors-we-wish-would-publish-mo…
> Back when the world was young and a ten megabyte hard drive required a team of six sturdy workers to move, P. J. Plauger quite reliably delivered to the world a story or so per year—memorable tales like “Wet Blanket” and “Child of All Ages,” stories that won him a Campbell for Best New Writer and a Hugo nomination for Best Short Story. Tragedy struck when he was enticed away from science fiction by the seedy world of Unix, which offered its arcane practitioners unnecessary luxuries like indoor living, food, and even health care.
(The rest of the page is not relevant to this list)
Tony.
--
f.anthony.n.finch <dot(a)dotat.at> http://dotat.at
>A bit off topic (sorry) but wondering about that PDF conversion. This
>may be a dumb question but did you ever try the PDF conversion in
>calibre ( https://calibre-ebook.com )?
To answer my own question: yes, that was a dumb question. I completely
overlooked the 'image' in Nelson's post.
>The result is a
>searchable document, with a single page image per PDF page, rather
>than the mixed bitmap scan of 1-up and 2-up pages in the original PDF
>file.
Calibre's PDF to whatever conversion doesn't do anything worthwhile with images.
The posting of the link
http://bitsavers.org/pdf/regnecentralen/RC_4000_Reference_Manual_Jun69.pdf
brought back old memories. When I moved to Aarhus University in
Denmark in December 1973, I found that the Department of Chemistry
where I worked had a Regnecentralen RC-4000 minicomputer with paper
tape input that was used to manage the department's accounting. It
had a dedicated operator who kept it running nicely until at least
after I left in 1977. It had too little physical memory to be
considered practical for the quantum chemistry work that I was then
engaged in, and may not even have had a Fortran compiler; instead, we
used the campus CDC 6400 for our research.
In memory of the RC-4000, and Per Brinch Hansen's (13 Nov
1938 -- 31 Jul 2007) many contributions to the literature of computer
science, programming language design, and parallel computing, I
prepared an enhancement of its manual, available here:
http://www.math.utah.edu/~beebe/RC-4000/
The Web page that comes up gives a directory of the available files,
and documents the steps needed to produce them. The result is a
searchable document, with a single page image per PDF page, rather
than the mixed bitmap scan of 1-up and 2-up pages in the original PDF
file.
Despite my 40+ year engagement in the TeX community, I only recently
learned via the texhax mailing of some PDFTeX internals that allow one
to construct a file that can retypeset n-up files into 1-up format.
I therefore make this posting in the hope that someone else might be
encouraged to tackle similar document improvements in the bitsavers
archives. Although it takes a bit of experimentation, with the
exception of the OCR conversion, the entire operation can be done with
free software on pretty much any modern computing platform, thanks to
the portability of the needed software.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
>Date: Sat, 9 Mar 2019 15:29:11 -0700
>From: "Nelson H. F. Beebe" <beebe(a)math.utah.edu>
>To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: Re: [TUHS] Failing Memory of an Algol Based System from years ago
>Message-ID: <CMM.0.95.0.1552170551.beebe(a)gamma.math.utah.edu>
... snip ...
> http://www.math.utah.edu/~beebe/RC-4000/
>
>The Web page that comes up gives a directory of the available files,
>and documents the steps needed to produce them. The result is a
>searchable document, with a single page image per PDF page, rather
>than the mixed bitmap scan of 1-up and 2-up pages in the original PDF
>file.
>Despite my 40+ year engagement in the TeX community, I only recently
>learned via the texhax mailing of some PDFTeX internals that allow one
>to construct a file that can retypeset n-up files into 1-up format.
>
>I therefore make this posting in the hope that someone else might be
>encouraged to tackle similar document improvements in the bitsavers.
>archives. Although it takes a bit of experimentation, with the
>exception of the OCR conversion, the entire operation can be done with
>free software on pretty much any modern computing platform, thanks to
>the portability of the needed software.
A bit off topic (sorry) but wondering about that PDF conversion. This
may be a dumb question but did you ever try the PDF conversion in
calibre ( https://calibre-ebook.com )?
I like the PDF to htmlz conversion even if mostly the result still
needs (a lot of) extra work.
Cheers,
uncle rubl
Clem,
I think the "Algol machine" you have in mind is the RC-2000 (not quite sure
of the designation--could look it up in the attic if it matters) designed by
Per Brinch Hansen for Regencentralen (again, the name may not be quite right).
The manual used Algol as a hardware description language. The instruction
set was not unusual. It has come up before in TUHS. I have the manual
if you need more info.
Doug
i have worked in tv, developing systems for archive restoration for many years.
if you have valuable sticky tapes i suggest you try and contact a local video archivist, there are other tricks than just baking that can help - old tapes can present complex problems.
<story>
there was a sticky valuable rolling stones 24 track master tape i heard of. it was sent for analysis, and they discovered the stickiness was “vodka and coke” :-).
</story>
-Steve
Today's tape recovery gem. UBC's PDP-11 UNIX tools distribution ca. 1983 which includes UBC BASIC and their RT-11
emulation. It has a couple of bad blocks, but I couldn't find another copy of this anywhere.
http://bitsavers.org/bits/UBC/
If anyone has a complete copy, it would be good to replace it, but most is better than none of it.
All, I've locked the "Women in Computing" topic in the TUHS list
as it's not specifically Unix and liable to be contentious. Feel free
to continue it over on the COFF list.
E-mail me if you'd like to join the COFF list.
Cheers, Warren
> From: Deborah Scherrer
> In the early days of Usenix, I used to keep track of the women.
> Initially, about 30% of the organization was female. That dropped every
> year.
Interesting. Any ideas/thoughts on what was going on, what caused that?
Noel
Unless my leg is being pulled, I sent that for pure amusement.
Gcc has a very open mind on the subject, using both options
in the same sentence.
-----------------------------------------------------------
> Doug wrote:
> > A diagnostic from gcc chimes in:
> > 'mktemp' is deprecated: the use of `mktemp' is dangerous; use `mkstemp'
...
> https://www.gnu.org/prep/standards/standards.html#Quote-Characters
My impression was that Doug was passing on a warning about the continued use
of mktemp(3) rather than the continued use of ASCII.
> From: Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org>
> Seeing as how this is diverging from TUHS, I'd encourage replies to
> the COFF copy that I'm CCing.
Can people _please_ pick either one list _or_ the other to reply to, so those
on both will stop getting two copies of every message? My mailbox is exploding!
Noel
> On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:
> In many ways, it was a classic second system effect because they were
> trying to fix everything they thought was wrong with TCP/IP at the time
I'm not sure this part is accurate: the two efforts were contemporaneous; and
my impression was they were trying to design the next step in networking, based
on _their own_ analysis of what was needed.
> without really, truly knowing the differences between actual problems
> and mere annoyances and how to properly weight the severity of the issue
> in coming up with their solutions.
This is I think true, but then again, TCP/IP fell into some of those holes
too: fragmentation for one (although the issue there was unforeseen problems in
doing it, not so much in it not being a real issue), and all the 'unused' fields
in the IP and TCP headers for things that never really got
used/implemented (Type of Service, Urgent, etc).
Noel
> From: Kevin Bowling
> t just doesn't mesh with what I understand
Ah, sorry, I misunderstood your point.
Anyway, this is getting a little far afield for TUHS, so at some point it
would be better to move to the 'internet-history' list if you want to explore
it in depth. But a few more...
> Is it fair to say most of the non-gov systems were UNIX during the next
> handful of years?
I assume you mean 'systems running TCP/IP'? If so, I really don't know,
because for a while during that approximate period one saw many internets
which weren't connected to the Internet. (Which is why the capitalization is
important, the ill-educated morons at the AP, etc notwithstanding.) I have no
good overall sense of that community, just anecdotal (plural is not 'data').
For the ones which _were_ connected to the Internet, then prior to the advent
of the DNS, inspection of the host table file(s) would give a census. After that,
I'm not sure - I seem to recall someone did some work on a census of Internet
machines, but I forget who/where.
If you meant 'systems in general' or 'systems with networking of some sort', alas
I have even less of an idea! :-)
Noel
> From: Larry McVoy
> TCP/IP was the first wide spread networking stack that you could get
> from a pile of different vendors, Sun, Dec, SGI, IBM's AIX, every kernel
> supported it.
Well, not quite - X.25 was also available on just about everything. TCP/IP's
big advantage over X.25 was that it worked well with LAN's, whereas X.25 was
pretty specific to WAN's.
Although the wide range of TCP/IP implementations available, as well as the
multi-vendor support, and its not being tied to any one vendor, was a big
help. (Remember, I said the "_principal_ reason for TCP/IP's success"
[emphasis added] was the size of the community - other factors, such as these,
did play a role.)
The wide range of implementations was in part a result of DARPA's early
switch-over - every machine out there that was connected to the early Internet
(in the 80s) had to get a TCP/IP, and DARPA paid for a lot of them (e.g. the
BBN one for VAX Unix that Berkeley took on). The TOPS-20 one came from that
source, a whole bunch of others (many now extinct, but...). MIT did one for
MS-DOS as soon as the IBM PC came out (1981), and that spun off to a business
(FTP Software) that was quite successful for a while (Windows 95 was, IIRC,
the first uSloth product with TCP/IP built in). Etc, etc.
Noel
> From: Kevin Bowling
> Seems like a case of winners write the history books.
Hey, I'm just trying to pass on my best understanding as I saw it at the time,
and in retrospect. If you're not interested, I'm happy to stop.
> There were corporate and public access networks long before TCP was set
> in stone as a dominant protocol.
Sure, there were lots of alternatives (BITNET, HEPNET, SPAN, CSNET, along with
commercial systems like TYMNET and TELENET, along with a host of others whose
names now escape me). And that's just the US; Europe had an alphabet soup of its
own.
But _very_ early on (1 Jan 1983), DARPA made all their fundees (which included
all the top CS departments across the US) convert to TCP/IP. (NCP was turned
off on the ARPANET, and everyone was forced to switch over, or get off the
network.) A couple of other things went for TCP/IP too (e.g. NSF's
super-computer network). A Federal ad hoc inter-departmental committee called
the FRICC moved others (e.g. NASA and DoE) in the direction of TCP/IP,
too.
That's what created the large user community that eventually drove all the
others out of business. (Metcalfe's Law.)
Noel
> From: Kevin Bowling
> I think TCP was a success because of BSD/UNIX rather than its own
> merits.
Nope. The principal reason for TCP/IP's success was that it got there first,
and established a user community first. That advantage then fed back, to
increase the lead.
Communication protocols aren't like editors/OS's/yadda-yadda. E.g. I use
Epsilon - but the fact that few others do isn't a problem/issue for me. On the
other hand, if I designed, implemented and personally adopted the world's best
communication protocol... so what? There'd be nobody to talk to.
That's just _one_ of the ways that communication systems are fundamentally
different from other information systems.
Noel
On Mon, Feb 4, 2019 at 8:43 PM Warner Losh <imp(a)bsdimp.com> wrote:
>
>
> On Sun, Feb 3, 2019, 8:03 AM Noel Chiappa <jnc(a)mercury.lcs.mit.edu wrote:
>
>> > From: Warner Losh
>>
>> > a bunch of OSI/ISO network stack posters (thank goodness that didn't
>> > become standard, woof!)
>>
>> Why?
>>
>
> Posters like this :). OSI was massively over specified...
>
oops. Hit the list limit.
Posters like this:
https://people.freebsd.org/~imp/20190203_215836.jpg
which show just how over-specified it was. I also worked at The Wollongong
Group back in the early 90's and it was a total dog on the SysV 386
machines that we were trying to demo it on. A total and unbelievable PITA
to set it up, and crappy performance once we got it going. Almost bad
enough that we didn't show it at the trade show we were going to.... And
that was just the lower layers of the stack plus basic name service. x.400
email addresses were also somewhat overly verbose. In many ways, it was a
classic second system effect because they were trying to fix everything
they thought was wrong with TCP/IP at the time without really, truly
knowing the differences between actual problems and mere annoyances and how
to properly weight the severity of the issue in coming up with their
solutions.
So x.400 vs smtp mail addresses:
"G=Warner;S=Losh;O=WarnerLoshConsulting;PRMD=bsdimp;A=comcast;C=us" vis "
imp(a)bsdimp.com"
(assuming I got all the weird bits of the x.400 address right, it's been a
long time and google had no good examples on the first page I could just
steal...) The x.400 addresses were so unwieldy that a directory service, x.500,
was added on top of them, which was every bit as baroque, IIRC.
TP4 might not have been that bad, but all the stuff above it was kinda
crazy...
Warner
> From: Grant Taylor
> I'm not quite sure what you mean by naming a node vs network interface.
Does one name (in the generic high-level sense of the term 'name'; e.g. an
'address' is a name for a unit of main memory) apply to the node (host) no
matter how many interfaces it has, or where it is/moves in the network? If so,
that name names the node. If not...
> But I do know for a fact that in IPv4, IP addresses belonged to the
> system.
No. Multi-homed hosts in IPv4 had multiple addresses. (There are all sorts
of kludges out there now, e.g. a single IP address shared by a pool of
servers, precisely because the set of entity classes in IPvN - hosts,
interfaces, etc - and namespaces for them were not rich enough for the
things that people actually wanted to do - e.g. have a pool of servers.)
Ignore what term(s) anyone uses, and apply the 'quack/walk' test - how is
it used, and what can it do?
> I don't understand what you mean by using "names" for "path selection".
Names (in the generic sense above) used by the path selection mechanism
(routing protocols do path selection).
> That's probably why I don't understand how routes are allocated by a
> naming authority.
They aren't. But the path selection system can't aggregate information (e.g.
routes) about multiple connected entities into a single item (to make the path
selection scale, in a large network like the Internet) if the names the path
selection system uses for them (i.e. addresses, NSAP's, whatever) are
allocated by several different naming authorities, and thus bear no
relationship to one another.
E.g. if my house's street address is 123 North Street, and the house next door's
address is 456 South Street, and 124 North Street is on the other side of town,
maps (i.e. the data used by a path selection algorithm to decide how to get from
A to B in the road network) aren't going to be very compact.
Noel
> Ken wrote ... ed(before regexp ed)
Actually Ken wrote a regexp qed (for Multics) before he wrote ed.
He wrote about it here, before the birth of Unix:
Programming Techniques: Regular expression search algorithm
Ken Thompson
Communications of the ACM, Volume 11, Issue 6, June 1968
This is the nondeterministic regexp recognizer that's been used
ever since. Amusingly a reviewer for Computing Reviews panned
the article on the grounds that everybody already knew how to
write a deterministic recognizer that runs in linear time.
There's no use for this slower program. What the reviewer failed
to observe is that it may take time exponential in the size of
the regexp (and ditto for space) to make such a recognizer.
In real life for a one-shot recognizer that can easily be the
dominant cost.
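(The textbook illustration of that blowup, for anyone who has not seen it: the pattern (a|b)*a(a|b)...(a|b), with n-1 trailing (a|b) factors, has a nondeterministic recognizer of only about n states, but any deterministic recognizer for it needs on the order of 2^n states, because it must in effect remember the last n characters it has read.)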
The problem of exponential construction time arose in Al Aho's
egrep. I was an early adopter--for the calendar(1) daemon. The
daemon generated a date recognizer that accepted most any
(American style) date. The regular expressions were a couple
of hundred bytes long, full of alternations. Aho was chagrined
to learn that it took about 30 seconds to make a recognizer
that would be used for less than a second. That led Al to the
wonderful invention of a lazily-constructed recognizer that
would only construct the states that were actually visited
during recognition. At last a really linear-time algorithm!
This is one of my favorite examples of the synergy of having
systems builders and theoreticians together in one small
department.
Doug
Sorry to drop in on the thread a bit late, and, strictly speaking, not
(according to headers) connected to the thread; I am well acquainted
with David Tilbrook, who is sadly not doing too well; it is not
surprising that Leah Neukirchen was unable to get a hold of him as he
hasn't been using email for some number of years > 1, and is
definitely not programming.
Hugh Redelmeier and I are looking into trying to do some preservation
of his QEF toolset that included the QED port.
Neither Hugh nor I are ourselves QED users; I'm about 30 years into my
Emacs learning curve, albeit using Remacs (the Rust implementation)
lately, while Hugh maintains JOVE to the extent to which it remains
maintained. http://www.cs.toronto.edu/pub/hugh/jove-dev/
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
Co-inventor of Unix, he was born on this day in 1943. Just think: without
those two, we'd all be running M$ Windoze and thinking that it's wonderful (I
know, it's an exaggeration, but think about it).
-- Dave
On Sun, Feb 3, 2019 at 2:59 PM Cág <ca6c(a)bitmessage.ch> wrote:
> [Hockey Pucks and AIX are alive, Wikipedia says.
> The problem could be that neither support amd64 and/or
Be careful. The history of proprietary commercial UNIX implementations is
that they were developed by HW manufacturers that had proprietary ISAs. So
that fact that UX was Itanium and AIX was Power (or Tru64 in its day was
Alpha) should not be surprising. It was the way the market developed. Each
vendor sold a unique ecosystem and tried very hard to keep you in it.
Portability was designed as an >>import<< idea, and they tried to keep you
from exporting by getting you to use 'value add.'
I remember during the reign of terror that Solaris created. Take as an
example, the standard portable threading library was pThreads. But
Solaris threads were faster and Sun did everything it could to get the ISVs to
write using Solaris Threads. Guess what -- they did. So at DEC we found
ourselves implementing a Solaris Threads package for Tru64, so the ISVs
could run their code (I don't know if IBM or HP did it too, because at the
time, our competition was Sun).
BTW: this attitude was nothing new. I've said it before, the greatest
piece of marketing DEC ever did was convince the world that VMS Fortran was
Fortran-77. It was not close. And when you walked in on most people
writing real production code (in Fortran, of course), you discovered they
had used all of the VMS Fortran extensions. When the UNIX folks arrived
on the scene, the f77 in Seventh Edition was not good enough. You saw first
Masscomp in '85, then a year later Apollo, and 2 years after that Sun
develop really, really good Fortrans -- all of them VMS Fortran
compatible.
> nobody cares about commercial Unix systems anymore.
>
This is a bit of blind and sweeping statement which again, I would take
some care.
There are very large commercial sites that continue to run proprietary UNIX
on those same proprietary ISAs, often with ISV and in-home developed
applications that are quite valuable. For instance, a lot of the financial
and insurance industries live here. The question comes to how to value
and count it. Just because the hackers don't work there, does not mean
there are not a lots firms doing it.
Those sites are extremely large and represent a lot of money. The number
of them did not seem to be growing, the last time I looked at the numbers. In
fact, in some cases, they >>are<< being displaced by Intel*64 systems
running a flavor of Linux. The key driver for this was the moving the
commercial applications such as Oracle and SAP to Linux and in particular,
Linux running on VMs. But a huge issue was code reuse. To reuse, Henry's
great line about BSD, Linux is just like Unix; only different.
Simply has the cost of maintaining your own ISA and complete SW ecosystem
for it continues to rise and in fact is getting more and more expensive as
the market shrinks. At this point, the only ones left are HP, IBM and the
shadow of Sunoracle. They are servicing a market that is fixed.
>
> As far as commercial systems go, even CentOS has a far larger market
> share on the supercomputer territory than RHEL does, according to
> TOP500.
>
Again be careful. In fact this my world that I have lived for about 40+
years. The Top100 system folks really do not want any stinking OS between
their application and the hardware. They never have. Don't kid yourself.
This is why systems like mOS (Rolf Riesen's MultiOS slides
<https://wrome.github.io/slides/rome16_riesen.pdf> and github sources
<https://github.com/intel/mOS/wiki>) are being developed.
Simply put, the HPC folks have always wanted the OS out the way. Unix was
a convenience for them and Linux just replaced UNIX. The RHEL licensing
scheme is per CPU and on a Beowulf style cluster, it does not make a lot of
sense.
I know a lot of the Linux community likes to crow about the supers using
Linux. They really don't. It's what runs on the login node and the job
scheduler. It could be anything, as long as it's cheap, fast, and the
physicists can hack on it. This is a behavior that goes back to the
Manhattan Project and it's unchanged. The 'capability' systems are a
high-end world that is tuned for a very specific job. You can learn a lot
in that area, but be careful about making generalizations.
As I like to say -- Fortran still pays my salary. These folks' codes are
unchanged since my father's time as a 'computer' at Rocketdyne in the
1950s. What has changed is the size of the datasets. But open up those
codes and you'll discover the same math. They tend to be equation
solvers. We just have a lot more variables.
Clem
> From: Warner Losh
> a bunch of OSI/ISO network stack posters (thank goodness that didn't
> become standard, woof!)
Why? The details have faded from my memory, but the lower 2 layers of the
stack (CLNP and TP4) I don't recall as being too bad. (The real block to
adoption was that people didn't want to get snarled up in the ISO standards
process.)
It at least managed (IIRC) to separate the concepts of, and naming for, 'node'
and 'network interface' (which is more than IPv6 managed, apparently on the
grounds that 'IPv4 did it that way', despite lengthy pleading that in light of
increased understanding since IPv4 was done, they were separate concepts and
deserved separate namespaces). Yes, the allocation of the names used by the
path selection (I use that term because to too many people, 'routing' means
'packet forwarding') was a total dog's breakfast (allocation by naming
authority - the very definition of 'brain-damaged') but TCP/IP's was not any
better, really.
Yes, the whole session/presentation/application thing was ponderous and probably
over-complicated, but that could have been ditched and simpler things run
directly on TP4.
(And apologies for the non-Unix content, but at least it's about computers,
unlike all the postings about Jimmy Page's guitar; typical of the really poor
S/N on this list.)
Noel
> without those two we'd all be running M$ Windoze
Apropos of which, I complained to Walter Isaacson about his
writing them out of "The Innovators"--Turing Award, National
Medal of Technology, Japan Prize and all. I suppose I should
not be surprised that he didn't deign to answer.
Doug
[Cross-posted from the 3B2 mailing list]
Hi folks,
I'm in search of source code for AT&T's System V Release 3.2.1, 3.2.2,
and/or 3.2.3 for the 3B2. Does this exist? Has anyone ever seen it?
Note that I'm not looking for the System V Release 3.2 Source Code
Provision for the 3B2 /310 and /400 -- I already have that. It was
absolutely invaluable when I was writing my 3B2/400 emulator.
The reason I'm so keen on getting access is that I have ROM images from
a 3B2/1000, and I'd like to add support for it to my 3B2 emulator. The
system board memory map seems a bit different than the /300, /310, and
/400. These max out at SVR 3.2.
I can't imagine trying to add 3B2/1000 support without the 3.2.x source
code.
I imagine there's some tape image somewhere that's a delta of files that
take you from 3.2 to 3.2.1, 3.2.2 or 3.2.3?
-Seth
--
Seth Morabito
Poulsbo, WA, USA
web(a)loomcom.com
On 1/16/19, Kevin Bowling <kevin.bowling(a)kev009.com> wrote:
> I’ve heard and personally seen a lot of technical arrogance and
> incompetence out of the Masshole area. Was DEC inflicted? In
> “Showstopper” Cutler fled to the west coast to get away from this kind of
> thing.
>
Having worked at DEC from February 1980 until after the Compaq
takeover, I would say that DEC may have exhibited technical arrogance
from time to time, but certainly never technical incompetence. DEC's
downfall was a total lack of skill at marketing. Ken Olsen believed
firmly in a "build it and they will come" philosophy. Contrast this
with AT&T's brilliant "Unix - consider it a standard" ad campaign.
DEC also suffered from organizational paralysis. KO believed in
decisions by consensus. This is fine if you can reach a consensus,
but if you can't it leads to perpetually revisiting decisions and to
obstructionist behavior. There was a saying in DEC engineering that
any decision worth making was worth making 10 times. As opposed to
the "lead, follow, or get out of the way" philosophy at Sun. Or
Intel's concept of disagree and commit. DEC did move towards a
"designated responsible individual" approach where a single person got
to make the ultimate decision, but the old consensus approach never
really died.
Dave Cutler was the epitome of arrogance. On the technical side, he
got away with it because his way (which he considered to be the only
way) was usually at least good enough for Version 1, if not the best
design. Cutler excelled in getting V1 of something out the door. He
never stayed around for V2 of anything. He had a tendency to leave
messes behind him. A Cutler product reminded me of the intro to "The
Peabodys" segment of Rocky & Bullwinkle. A big elaborate procession,
followed by someone cleaning up the mess with a broom.
Cutler believed in a "my way or the highway" approach to software
design. His move to the west coast was to place himself far enough
away that those who wanted to revisit all his decisions would have a
tough time doing so.
On the personal side, he went out of his way to be nasty to people, as
pointed out elsewhere in this thread. Although he was admired
technically, nobody liked him.
-Paul W.
Meant to reply all on this....
Warner
---------- Forwarded message ---------
From: Warner Losh <imp(a)bsdimp.com>
Date: Sat, Feb 2, 2019, 11:37 PM
Subject: Re: [TUHS] Posters
To: Grant Taylor <gtaylor(a)tnetconsulting.net>
I'll take pictures tomorrow. No zeppelin though...
I had hoped that I still had my ultrix version of Phil Figlio's original
usenix artwork. I can find the Usenix one and the Unix one, but not that
one online. Anybody have one they can share?
Warner
On Sat, Feb 2, 2019, 7:32 PM Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org
wrote:
> On 2/2/19 6:35 PM, Warner Losh wrote:
> > Is there any interest from this group in photos of any of these?
>
> I would be interested in pictures of the computer related pictures to
> see if I'd be interested enough to pay for and / or for shipping on any
> of them.
>
>
>
> --
> Grant. . . .
> unix || die
>
>
Noel Chiappa:
{And apologies for the non-Unix content, but at least it's about computers,
unlike all the postings about Jimmy Page's guitar; typical of the really poor
S/N on this list.)
======
Didn't Jimmy Page's guitar use an LSI-11 running Lycklama's Mini-UNIX?
And what was his page size?
Norman Wilson
Toronto ON
I found 3 tubes of posters I'd been hoarding since college (well, since my
first job after college).
There's the usual 18-year-old-boy stuff (Pink Floyd, Led Zeppelin, etc),
but mixed in are a bunch of OSI/ISO network stack posters (thank goodness
that didn't become standard, woof!), a couple of movie posters, and a 10th
Anniversary poster for RT-11.
The ones that will interest this group, maybe, are the Unix Feuds poster
with the wizard among the warring armies, and a 20th Anniversary of Unix poster
by Tenon Intersystems which has a nice picture of Unix through 1990 or so
(with Tenon's Mach^ten 1.0 for Macintosh derived from BSD 4.3 and Mach) on
it. It's in decent shape, but not in collector-ready shape.
Oh, and I have a Eunice poster that mixes the best of VMS and BSD 4.1 into
a seamless environment.
Is there any interest from this group in photos of any of these?
Warner
> I have tried to OCR program listings before, with rather
> poor results.
I OCR'd a sizable manuscript written on a pretty shabby portable typewriter.
I scanned each page twice, making sure to move the paper between scans.
Then I ran both diff (by words, not lines) and spell to smoke out trouble.
The word list for a program listing is quite short and easy to generate.
(Print a list of all the apparent words and visually eliminate the nonsense.)
And a spell check is an easy pipeline of standard utilities.
doug
Hi All.
I had a bad commit message in the qed-archive I mentioned here a few weeks
ago. I fixed it with a 'git push --force' (even though that's not
recommended) since I expect it to be a read-only archive going forward,
and I wanted it to be right.
In short, if you cloned it, please remove your copy and reclone.
Thanks!
Arnold
Time for a new thread :-)
As today is Knuth's birthday (posted over in COFF), I was wondering (in
the cesspool that is my mind) how much of Unix would have been influenced
by Knuth? We have qsort() of course (which Hoare actually wrote, based on
one of Knuth's algorithms), but I'm guessing that Ken and Dennis would
have been familiar with his work?
Or am I spreading fake news again? :-) Look, I love being corrected if I
make a mistake on a technical mailing list, so fire at will if need be...
-- Dave
> Where did you find the BBN TCP/IP stack?
Easiest place to find it is the TUHS Unix Tree page:
https://www.tuhs.org//cgi-bin/utree.pl?file=BBN-Vax-TCP
Several tapes of it survived in the CSRG archives, currently held by the Bancroft Library at Berkeley.
A late version of the tcp/ip routines survived at the Stanford SAIL archives, currently online here:
https://www.saildart.org/[IP,SYS]/
(mixed in with sources for WAITS).
A much evolved version is in the BSD SCCS history:
https://github.com/weiss/original-bsd/tree/master/sys/deprecated/bbnnet
Note that the location ‘deprecated’ is where the code ended up. Back in 1985 it would have been in the normal build path, but SCCS does not preserve that.
Paul
All, I just found out in the past few days that Kirk McKusick has added another
two CDs to his CSRG Archives CD-ROM set. Some details of these CDs
can be found here:
https://www.mckusick.com/csrg/index.html
I've purchased the two additional CDs, and I've put a file listing and
set of checksums for all files on all six CDs here:
https://www.tuhs.org/Archive/Documentation/CSRG_CDs/
Cheers, Warren
I've been on a Data General Aviion restoration binge lately and
re-familiarizing myself with DG/UX. In my case, 5.4R3.1 running on an
MC88100-based AV/300 and an MC88110 dual-core AV/5500. The more I
experience, the more I am impressed. A few things about the system
stand out:
- Despite coming from a System V core, there is a lot of BSD influx -
especially on the networking side. This is a personal taste issue as
other ports have tried to mix the best of both worlds. But after a
prior month-long Sun/Solaris restoration binge of similar era hardware
(Super/Hyper/Ultra SPARC) and software (SunOS 4 through Solaris 9),
DG/UX is a welcome and refreshing change! Especially out of the box.
- It has a system of file security that seems unique for that era - at
least in my experience - of explicit and implicit directory tags with
inheritance. There is even a high security extended version of the OS.
- It has a built-in logical volume manager supporting multiple virtual
to physical disk mappings, striping, mirroring, and even archiving -
something several entire sub-industries were created for in other ports.
I am guessing this contributed to EMC's purchase of Data General for
the Clariion disk storage product lines.
- It leveraged open-source tools early. The default m88k compiler
installed with the system is GNU C 2.xx.
- It was among the earliest of operating systems to support NUMA aware
affinity on MP versions of the MC88110. (IRIX, Solaris, BSD, Linux, and
Windows support all came much later).
- Many others.
It does have its quirks. However, I get the overall impression that the
folks working at DG were on their game and were leaders in the industry
in many areas.
tied to Data General's place in the UNIX world, and b) by the time they
migrated to IA86, enterprise business was more interested in Microsoft
NT & SQL server or Linux than an expensive vendor's UNIX port.
That being said, I don't see DG/UX mentioned much in UNIX history. In
fact, I am researching an exhibit I'm putting together for the Vintage
Computer Festival Southeast 7.0, and DG/UX isn't mentioned on any of the
'UNIX Family Tree' diagrams I can find so far. It doesn't even make
Wikipedia's 'UNIX Variants' page. Its own Wikipedia page is also
rather sparse. Like Jon Snow in season 1, there is a chunk of missing
and plot-impacting history here - centered around the people involved.
To a lesser degree, IRIX is also a red-headed step-child. It's omitted
from half the lists I can find. It just seems the importance of these
ports -- even if it's an importance of being 'first' rather than of the
number of users -- is pretty significant.
Just curious of others' thoughts. And I'm wondering if anyone has
first-hand knowledge of Data General's efforts or knows of others that
can illuminate the shadows of what I'm discovering is a pretty exciting
corner of the UNIX world.
Thanks,
-Alan H.
Tsort was a direct reference to Knuth's recognition and
christening of topological sort as a worthy software component.
This is a typical example of the interweaving of R and D
which characterized the culture of Bell Labs. Builders
and thinkers were acutely aware of each other, and often
were two faces of one person. Grander examples may be
seen in the roles that automata theory and formal languages
played in Unix. (Alas, these are the subjects of projected
Knuthian volumes that are still over the horizon.)
Doug
Norman is right. The Seattle museum has a 5620. Having seen "5620" in
the subject line, I completely overlooked the operative words
"real blit or jerq" in Seth's posting.
Doug
Seth Morabito:
I'd love to see a real Blit or jerq in person some day, but I don't even
know where I'd find one (it looks like even the Computer History Museum
in Mountain View, CA doesn't have a 68K Blit -- it only has a DMD 5620)
Doug McIlroy:
The Living Computer Museum in Seattle has one. And like most things
there, you can play with it.
===
It's a couple of years since I was last in Seattle, but
I remember only a DMD 5620 (aka Jerq); no 68000-based Blit.
Though of course they are always getting new acquisitions,
and have far more in storage than on display. (On one of
my visits I was lucky enough to get a tour of the upper
floor where things are stored and worked on.)
Whether they have a Blit or only a Jerq, it's a wonderful
place, and I expect any member of this list would enjoy a
visit.
I plan to drop in again this July, when I'm in town for
USENIX ATC (in suburban Renton).
Norman Wilson
Toronto ON
> I'd love to see a real Blit or jerq in person some day, but I don't even know where I'd find one
The Living Computer Museum in Seattle has one. And like most things
there, you can play with it.
Doug
We've been recovering a 1980s programming language implemented using a
mix of Pascal and C that ran on 4.1 BSD on VAX.
The Makefile distributed to around 20+ sites included these lines for
the C compiler.
CC= occ
CFLAGS= -g
It seems there was a (common?) practice of keeping around the old C
compiler when updating a BSD system and occ was used to reference it.
Anyone care to comment on this observation? was it specific to
BSD-land? how was the aliasing effected, a side-by-side copy of the
compiler pieces? As at 4.1 BSD the C compiler components were in /lib
(Pascal though was in /usr/lib).
# ls -l /lib
total 467
-rwxr-xr-x 1 root 25600 Jul 9 1981 c2
-rwxr-xr-x 1 root 89088 Jul 9 1981 ccom
-rwxr-xr-x 1 root 19456 Jul 9 1981 cpp
-rwxr-xr-x 1 root 199 Mar 15 1981 crt0.o
-rwxr-xr-x 1 root 40960 Jul 9 1981 f1
-rwxr-xr-x 1 root 62138 Jul 9 1981 libc.a
-rwxr-xr-x 1 root 582 Mar 15 1981 mcrt0.o
I'm still happily experimenting with my combination of a V6 kernel with the 1981 tcp/ip stack from BBN, for example figuring out how one would write something like 'ping' using that API. That brought me to pondering the origins of the 'alarm()' sys call and how some things were done on the Spider network.
These are my observations:
1. First of all: I understand that early Unix version numbers and dates mostly refer to the manual editions, and that core users had more frequent snapshots of a constantly evolving code base.
2. If I read the TUHS archive correctly, alarm() apparently did not exist in the original V6 edition of mid-1975. On the same basis, it was definitely there by the time of the V7 edition of early '79 (with sleep() removed) - so alarm() would seem to have appeared somewhere in the '75-'78 time frame.
3. The network enhanced NCP Unix versions in the TUHS archive have alarm() appear next to sleep(). Although the surviving tapes date from '79, it would seem to suggest that alarm() may have originated in the earlier part of the '75-'78 time frame.
4. The Spider network file store program 'nfs' (source here: http://chiselapp.com/user/pnr/repository/Spider/dir?mtime=0&type=flat&udc=1…) uses idioms like the below to avoid getting hung on a dead server/network:
signal(14,timeout); alarm(30);
if((read(fn,rply,100)) < 0) trouble();
alarm(0);
The 'nfs' program certainly was available in the 5th edition, maybe even in the 4th edition (the surviving 4th edition source code includes a Spider device driver). However, the surviving source for 'nfs' is from 1979 as well, so it may include later additions to the original design. (A fuller, self-contained sketch of this timeout idiom appears after these numbered points.)
5. Replacing sleep() with alarm() and a user space library routine seems to have happened only some time after alarm() appeared, so it would seem that this was an optimization that alarm() enabled, and not its raison d'être.
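As an aside, here is what the timeout idiom from point 4 looks like written out in full with modern headers -- a sketch only, not the original 'nfs' code, which used the raw signal number 14 and the old signal() interface:

#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

static void timeout(int sig)
{
        /* Nothing to do here: the point is that delivery of SIGALRM
         * interrupts the blocked read(), which returns -1 with EINTR. */
        (void)sig;
}

/* Read up to len bytes from fd, but give up after 'secs' seconds. */
ssize_t read_with_timeout(int fd, char *buf, size_t len, unsigned secs)
{
        struct sigaction sa;
        ssize_t n;

        sa.sa_handler = timeout;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;                /* no SA_RESTART: let read() be interrupted */
        sigaction(SIGALRM, &sa, NULL);  /* signal 14 in the original code */

        alarm(secs);                    /* arm the timer */
        n = read(fd, buf, len);         /* on timeout: -1 with errno == EINTR */
        alarm(0);                       /* disarm before returning */
        return n;
}

(sigaction() rather than signal() because most modern signal() implementations restart interrupted system calls, which would defeat the timeout; the V6-era signal() did not.)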
So here are some questions that the old hands may be able to shed some light on:
- When/where did alarm() appear? Was network programming driving its inception?
- Did Spider programs use a precursor to alarm() before that? (similar to V5 including 'snstat' in its libc - a precursor to ioctl).
Paul
> / uses the system sleep call rather than the standard C library
> / sleep (sleep (III)) because there is a critical race in the
> / C library implementation which could result in the process
> / pausing forever. This is a horrible bug in the UNIX process
> / control mechanism.
>
> Quoted without comment from me!
Intriguing comment. I think your v6+ system probably has a lot of
PWB stuff in there. The libc source for sleep() in stock V6 is:
.globl  _sleep
sleep = 35.
_sleep:
        mov     r5,-(sp)
        mov     sp,r5
        mov     4(r5),r0
        sys     sleep
        mov     (sp)+,r5
        rts     pc
The PWB version uses something alarm/pause based, but apparently
gets it wrong:
.globl  _sleep
alarm = 27.
pause = 29.
rti = 2
_sleep:
        mov     r5,-(sp)
        mov     sp,r5
        sys     signal; 14.; 1f
        mov     4(r5),r0
        sys     alarm
        sys     pause
        clr     r0
        sys     alarm
        mov     (sp)+,r5
        rts     pc
1:
        rti
I think the race occurs when an interrupt arrives between the sys alarm
and the sys pause lines, and the handler calls sleep again.
sleep() in the V7 libc is a much more elaborate C routine.
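In C the window is perhaps easier to see. A sketch of the hazard, and of the way later libraries close it -- not the actual PWB or V7 code:

#include <signal.h>
#include <unistd.h>

static void wakeup(int sig) { (void)sig; }      /* exists only to interrupt pause() */

/* The PWB-style sequence. If SIGALRM is delivered in the gap between
 * alarm() returning and pause() being entered -- say the process lost
 * the CPU for longer than 'secs' -- the handler runs first and pause()
 * then waits for a signal that has already come and gone. */
void racy_sleep(unsigned secs)
{
        signal(SIGALRM, wakeup);
        alarm(secs);
        /* <-- the race window is right here */
        pause();                /* can block forever */
        alarm(0);
}

/* One way the window is closed nowadays: keep SIGALRM blocked, then
 * unblock and wait in a single atomic step with sigsuspend(). */
void safer_sleep(unsigned secs)
{
        sigset_t block, old;

        signal(SIGALRM, wakeup);
        sigemptyset(&block);
        sigaddset(&block, SIGALRM);
        sigprocmask(SIG_BLOCK, &block, &old);   /* hold SIGALRM pending */
        alarm(secs);
        sigsuspend(&old);                       /* atomically unblock and wait */
        alarm(0);
        sigprocmask(SIG_SETMASK, &old, NULL);
}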
When I first read the race condition comment, I thought the issue would
be like that of write:
_write:
        mov     r5,-(sp)
        mov     sp,r5
        mov     4(r5),r0
        mov     6(r5),0f
        mov     8(r5),0f+2
        sys     0; 9f
        bec     1f
        jmp     cerror
1:
        mov     (sp)+,r5
        rts     pc
        .data
9:
        sys     write; 0:..; ..
This pattern appears in several V6 sys call routines, and would
not be safe when used in a context with signal based multi-
threading.
> From: Dave Horsfall
> As I dimly recall ... it returns the number of characters in the input
> queue (at that time)
Well, remember, this is the MIT V6 PDP-11 system, which had a tty driver which
had been completely re-written at MIT years before, so you'd really have to
check the MIT V6 sources to see exactly what it did. I suspect they borrowed
the name, and basic semantics, from Berkeley, but everything else - who
knows.
This user telnet is from 1982 (originally), but I was looking at the final
version, which is from 1984; the use of the ioctl was apparently a later
addition. I haven't checked to see what it did originally for reading from the
user's terminal (although the earlier version also used the 'tasking'
package).
Noel
> From: Paul Ruizendaal
> It would not seem terribly complex to add non-blocking i/o capability to
> V6. ... Adding a 'capacity' field to the sgtty interface would not
> have been a big leap either. ...
> Maybe in the 1975-1980 time frame this was not felt to be 'how Unix does
> it'?
This point interested me, so I went and had a look at how the MIT V6+/PWB
TCP/IP did it. I looked at user TELNET, which should be pretty simple (server
would probably be more complicated, due to PTY stuff).
It's totally different - although that's in part because in the MIT system,
TCP is in the user process, along with the application. In the user process,
there's a common non-premptive 'tasking' package which both the TCP and TELNET
code use. When there are no tasks ready to run, the process uses the sleep()
system call to wait for a fixed, limited quantum (interrupts, i.e. signals,
will usually wake it before the quantum runs out); note this comment:
/ uses the system sleep call rather than the standard C library
/ sleep (sleep (III)) because there is a critical race in the
/ C library implementation which could result in the process
/ pausing forever. This is a horrible bug in the UNIX process
/ control mechanism.
Quoted without comment from me!
There are 3 TCP tasks - send, receive and timer. The process receives an
'asynchronous I/O complete' signal when a packet arrives, and that wakes up
the process, and then one of the tasks therein, to do packet processing
(incoming data, acks, etc).
There appears to be a single TELNET task, which reads from the user's
keyboard, and sends data to the network. (It looks like processing of incoming
data is handled in the context of one of the TCP tasks.) Its main loop starts
with this:
ioctl (0, FIONREAD, &nch);
if (nch == 0) {
tk_yield ();
continue;
}
}
if ((c = getchar()) == EOF) {
so that ioctl() must look to see if there is any data waiting in the terminal
input buffer (I'm too lazy to go see what FIONREAD does, right at the moment).
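For reference, the same polling pattern written out with modern headers -- a sketch, not the MIT code itself; FIONREAD reports the number of bytes waiting in the input queue, and the sketch assumes the terminal is in a mode where pending bytes are immediately readable:

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void)
{
        int nch;
        char c;

        for (;;) {
                if (ioctl(0, FIONREAD, &nch) < 0)       /* bytes pending on stdin? */
                        return 1;
                if (nch == 0) {
                        usleep(100000);         /* stand-in for tk_yield() */
                        continue;
                }
                if (read(0, &c, 1) != 1)        /* something is there; fetch it */
                        break;
                putchar(c);
                fflush(stdout);
        }
        return 0;
}

In the real client the 'no input yet' branch hands control to the other cooperative tasks via tk_yield() instead of sleeping.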
Noel
Clem, Ron,
Thanks for the explanations! Some comments below.
>> 1. First of all: I understand that early Unix version numbers and dates
>> mostly refer to the manual editions, and that core users had more
>> frequent snapshots of a constantly evolving code base.
>
> Eh? They primarily refer to the distributions (Research V6, V7, PWB, the
> various BSD tapes).
> I'm not sure what "core users" are referring to. Most of us had many
> versions as we hacked and merged the stock releasesx.
I was too brief. I was referring just to the pre-V7 versions, and I had the implicit assumption that alarm() originated at the labs. My understanding was that the labels 5th, 6th and 7th edition had little meaning inside the labs, as there just was a continuously developing code base. Maybe this is a mis-understanding.
> "alarm was introduced as part of Unix/TS" "PWB [..] had both sleep() and alarm() as system calls"
Thanks for those pointers! I'm not sure I fully grasp the lineage of Unix/TS and PWB, but the TUHS wiki has a page about it: https://wiki.tuhs.org/doku.php?id=misc:snippets:mert1
From that, and from the TUHS Unix Tree web page I get that PWB1.0 from mid 1977 was probably the root source of alarm() for people outside AT&T. As PWB apparently got started much before that, it is possible that alarm() goes back much further as well.
> A bigger networking issue was select() (or the like). It used to be an
> interesting kludge of running two processes in order to do simultaneous
> read/write before that.
Yes: the NCP Unix team (Grossman/Holmgren/Bunch) also mentioned that as the big issue/annoyance that they ran into in 1975.
As discussed in this list before, 3 years elapsed before Jack Haverty came up with await() for V6. I was told that there was a lot of discussion in the 4.1x/4.2 BSD steering group in 1981/2 about whether this functionality should be stateful (like await) or stateless (like select). Looking at the implementations for both, I can see why stateless carried the day.
> Right and select(2) was created by Sam and wnj during the 4.2 development. I've forgotten which sub-version (it was in 4.1c, but it might have been in b or a before that). There was a lot of arguing at the time about it's need; the multiple process solution was considered more 'Unix-like.'
That is an interesting point, and it got me wondering about another related feature that could have been in Unix in the 1975-1980 time frame, being both useful and practical even on an 11/40-class machine, but for some reason wasn't:
It would not seem terribly complex to add non-blocking i/o capability to V6. It could have been implemented as a TTY flag and it is not a big conceptual leap from EINTR to EAGAIN. Adding a 'capacity' field to the sgtty interface would not have been a big leap either. This would have allowed user processes to scan a number of tty lines e.g. once a second in a loop and do processing as needed. In NCP Unix this would not have been hard to extend to network pipes.
The NCP Unix / Arpanet crowd certainly had a need for it; it would have been very useful for Spider/Datakit connections and probably for uucp as well. And from there it is not a million miles to replace the timed user loop with something like select(). Yet non-blocking I/O and select() only appear in 1982.
Maybe in the 1975-1980 time frame this was not felt to be 'how Unix does it'?
Hello folks,
I realized I should mention this here on TUHS, since it is likely of
interest to at least some of you!
I recently wrote a DMD 5620 emulator, currently available on Linux and
Macintosh, with Windows support coming soon. Here's a brief demo of the
Mac version:
https://www.youtube.com/watch?v=tcSWqBmAMeY
I wrote it because DMD 5620s are becoming incredibly rare, and showing
them off in person is quite difficult nowadays.
This emulator is using ROM version 2.0 (8;7;5) dumped from my personal
5620. If anyone out there has a DMD 5620 with an older ROM, I would be
incredibly grateful if you could dump the ROMs. I'd like to find
versions of 1.x (8;7;3 or earlier); so far I've had no luck.
The main reason I'm interested in older ROMs, besides pure preservation
reasons, is that the 'mux' and 'muxterm' system on Research UNIX V8/V9
is hard-coded for the 1.1 ROMs. It doesn't work with the emulator
without significant tweaking of the source. It DOES work perfectly well
with the DMD Core Utilities package for the AT&T 3B2, however.
All the best!
-Seth
--
Seth Morabito
Poulsbo, WA, USA
web(a)loomcom.com
In the June 1966 CACM [1], Wirth and Hoare published "A contribution to the development of ALGOL”, which describes a language very similar to Algol W. In Wirth’s Turing Award lecture (published in the Feb 1985 CACM [2]) "From programming language design to computer construction”, he noted:
“The Working Group assumed the task of proposing a successor and soon split into two camps. On one side were the ambitious who wanted to erect another milestone in language design, and, on the other, those who felt that time was pressing and that an adequately extended ALGOL 60 would be a productive endeavor. I belonged to this second party and submitted a proposal that lost the election. Thereafter, the proposal was improved with contributions from Tony Hoare (a member of the same group) and implemented on Stanford University's first IBM 360. The language later became known as ALGOL W and was used in several universities for teaching purposes.”
In particular, Hoare’s work on “Record Handling” (see [3]) had a strong impact on Algol W and Wirth’s later languages.
[1] http://doi.acm.org/10.1145/365696.3657022
[2] http://doi.acm.org/10.1145/2786.2789
[3] http://www.softwarepreservation.org/projects/ALGOL/standards/.
> On Jan 10, 2019, at 7:06 PM, clemc(a)ccc.com wrote:
>
> From: Clem cole <clemc(a)ccc.com <mailto:clemc@ccc.com>>
> To: Dave Horsfall <dave(a)horsfall.org <mailto:dave@horsfall.org>>
> Cc: The Eunuchs Hysterical Society <tuhs(a)tuhs.org <mailto:tuhs@tuhs.org>>
>
> Dave. The w in Algolw was Wirth. He was at Stanford at the time. It was written in PL/360 btw. The sources are googlable. FWIW Pascal was done a couple of years later with lessons learned from Algolw and reaction to Algol68.
>
> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
>
>> On Jan 10, 2019, at 6:52 PM, Dave Horsfall <dave(a)horsfall.org <mailto:dave@horsfall.org>> wrote:
>>
>> [Not sure whether this is more appropriate for COFF instead, so it's here; advice (apart from STFU) gratefully accepted.)
>>
>> Sir Charles Antony Richard Hoare FRS FREng was born on this day in 1934; a computer pioneer (one of the greats) he gave us things like the quicksort algorithm (which became qsort() in Unix) and ALGOLW (a neat language).
>>
>> -- Dave
>
[Not sure whether this is more appropriate for COFF instead, so it's
here; advice (apart from STFU) gratefully accepted.)
Sir Charles Antony Richard Hoare FRS FREng was born on this day in 1934; a
computer pioneer (one of the greats) he gave us things like the quicksort
algorithm (which became qsort() in Unix) and ALGOLW (a neat language).
-- Dave
So what's the origin of the name 'strategy' for the I/O routine in Unix
that drivers provide? Everything I've found in the early papers just says
that's what the routine is called. Is there a story behind why it was
chosen? My own theory is that it's in the sense of 'coping strategy' when
the driver needs to service more I/O for the upper layers, but that's just
a WAG.
Warner
On Tue, 8 Jan 2019, Warner Losh wrote:
> The name seems obvious because I've seen it for the last 30 years. But
> I've not seen it used elsewhere, nor have I seen it documented except in
> relationship to Unix. It could have been called blkio or bufio or bio or
> even just work or morework and still been as meaningful. VMS uses the
> FDT table to process the IRPs sent down. RT-11 has a series of entry
> points that have boring names. Other systems have a start routine
> (though more often that is a common routine used by both the queue me
> and isr functions). There is a wide diversity here...
I must admit that this is an interesting thread, just as long as it wasn't
called XXoptimize() unless you wanted a backlash from British English
speakers :-)
In hindsight I suppose that XXstrategy() is obvious, but back then, as you
ask? Dunno, but Ken might (if he's reading this thread).
One of my favo[u]rites is sched(); some pronounce it as "shed" and others
as "sked". Another American/British thing, I think...
Wasn't it Mark Twain who said "Two nations divided by a common language"?
I no longer have my Lions books on me, sadly enough (lost in a house move)
but there certainly were some peculiar names in the kernel...
ObGripe: Could anyone replying to the digest version please take the
trouble to update the Subject: line accordingly? I've now put the
original back as a courtesy to others, but I shouldn't have to; it's as
bad as top-posting.
-- Dave
re disk strategy
i understood that this implemented the elevator algorithm, and possibly rotational latency compensation.
re non-gcc compilers
there was a time in the early 2000s when some people tried to release plan9's (ken's) c compiler for use in BSD; sadly (for plan9) this didn't happen. pcc was reanimated instead.
https://www.engadget.com/2019/01/08/national-inventors-hall-of-fame-class-o…
The National Inventors Hall of Fame (NIHF) joined Engadget on stage
today at CES to announce its 2019 class of inductees. [ including ] ...
Dennis Ritchie (Posthumous) and Ken Thompson: UNIX Operating System
-------------------------------------------------------------------
Thompson and Ritchie's creation of the UNIX operating system and the C
programming language were pivotal developments in the progress of computer
science. Today, 50 years after its beginnings, UNIX and UNIX-like systems
continue to run machinery from supercomputers to smartphones. The UNIX
operating system remains the basis of much of the world's computing
infrastructure, and the C language -- written to simplify the development
of UNIX -- is one of the most widely used languages today.
Cheers, Warren
Hi All.
Back in October I started a discussion about QED sources. A number of
people were kind enough to send me tarballs and zip files as well as
pointers to other archives.
I have finally cobbled it all together and made it available on GitHub:
https://github.com/arnoldrobbins/qed-archive
I have tried to acknowledge everyone who contributed.
Thanks again to all who did send me stuff and email me.
Enjoy,
Arnold
Off topic, but looking for help and wisdom.
If you visit https://www.scotnet.co.uk/iain/saratov you will see some photos of work that I have started on the front panel of a Capatob2.
I plan to get the switches and lights running on a blinkenbone board with a PDP8 emulation behind it. (I already have a PDP11/70 front-panel running on the same infrastructure)
I have been struggling for over a year to get much info about this saratov computer (circuit diagrams etc). So I have started the reverse engineering on the panel.
Does anybody know anything about this computer? online or offline it would be much appreciated.
Iain
Hi all, back from a few days holiday. Just a reminder that the TUHS list
is for things related to Unix. I don't mind a bit of topic drift, but if
the topic goes completely away from Unix then the conversation should
migrate to the COFF (Computer Old Farts Forum) list, coff(a)tuhs.org.
I'll mark the "Isaacson" thread as closed on TUHS, but feel free to
continue it over on COFF.
Thanks, Warren
Been wanting to wade into this for a few days but needed to think about how.
I think that we're all aware that RMS has atrocious personal habits. But I
don't think that this mailing list is the place to discuss them unless it's
somehow in the context of UNIX.
Many seem to excuse RMS's revisionist view of the history of technology on
the grounds that RMS claims that his memory isn't very good. I think that
if he knows that he doesn't remember things then he should refrain from
talking about them as if he does.
As others have said, I don't conflate coding prowess with the ability to
design. I've had many an argument with John Gilmore (one of the people
who doesn't mind footing the cleaning and repair bill after allowing RMS
to stay at his place) where he begins with "When I wrote GNU tar..." I've
always responded by saying that writing tar is no big deal; the specification
was the hard part.
One place where I completely disagree with RMS that I think is in context
for this list is his claim that Linux should be called GNU/Linux. I've
written tons of software in my life, and I don't preface the name of each
one with the parts list.
Even if one believed that such an attribution scheme made sense, I would
claim that it should be called internet/Linux. I would argue that Linux
would not have happened without the internet making it possible for folks
around the world to participate. And I think that there's a good chance
that the tools would have been created anyway.
Of course, I acknowledge that the GNU tools have been ported to Linux.
Big deal. I haven't seen RMS arguing for GNU/Windows now that Microsoft
has seen the light.
Like many of you, Linux is not where I first started using GNU tools; I
started using them on my Sun machines after Sun started charging extra
for the compiler and included a licensing system that was broken and often
interfered with getting work done.
Jon
I think the RMS stuff should go away. It's not because I love the guy,
I don't. It's because we have people like Ken and Rob and other heavy
hitters and my hunch is they have little patience for this sort of thing
(they might correct me if I'm wrong).
I'd love to call out RMS on his BS but this isn't the place. This is
the place for people who actually did real work on Unix to share those
stories. Or so I think, it's up to Warren, not me.
I was given a copy of Walter Isaacson's "The Innovators: How a Group of
Hackers, Geniuses, and Geeks Created the Digital Revolution". It devotes
ten pages to Stallman and Gnu, Torvalds and Linux, even Tannebaum and
Minix, but never mentions Thompson and Ritchie. Unix is identified only
as a product from Bell Labs from which the others learned something--he
doesn't say what. I have heard also that Isaacson's "Idea Factory"
(about Bell Labs) barely mentions Unix. Is Isaacson blind, biased,
or merely brainwashed?
In the case of Steve Jobs, Isaacson tells not just that the Alto system
from Xerox inspired him, but also who its star creators were: Lampson,
Thacker and Kay. But then he stomps on them: "Once again, the greatest
innovation would come not from the people who created the breakthroughs,
but from the people who applied them usefully." While he describes
innovation as a continuum from invention through engineering to marketing,
he seems to be more impressed by the later stages.
Or maybe he just likes to tell stories, and didn't pick up all the
good ones about Ken. Isaacson describes spacewar, arguably the first
stage of computer-game innovation, at great length. At the same time,
all he has to say about early-stage operating systems is a single
sentence that credits John McCarthy with leading a time-sharing effort
at MIT. (In my recollection, McCarthy proselytized; Corbato led.) He
tells how ARPANET, which he says was mainly developed by BBN, connected
time-shared computers, but breathes not a word about Berkeley's work,
without which ARPANET would have been an open circuit.
"Innovators" won general critical praise. A couple of reviews predicted
it would become the standard of the field. However, an evidently
knowledgeable review in IEEE Annals of the History of Computing faulted
it for peddling familiar potted legends without really digging for
deeper insight. Regarding Thompson and Ritchie, it looks more like
overt suppression.
Doug
> On Jan 5, 2019,Paul Winalski <paul.winalski(a)gmail.com> wrote:
>
> ... After Lampson
> left Xerox PARC he set up a similar outfit at Digital--the Western
> Research Lab (WRL).
Actually, WRL was started by Forest Baskett, formerly of Stanford University. Butler Lampson joined DEC's Systems Research Center (SRC) shortly after it was formed by former PARC manager Bob Taylor.
> ... I was working in the software tools
> engineering group at the time, and we would have loved to take WRL's
> work and to incorporate it in our products. But we couldn't. Why?
> Because they wrote everything in Modula 3, and we were using BLISS.
SRC used Modula-3, and before that a similar language called Modula-2+. Originally, WRL used Modula-2, and then I think switched to C. Perhaps DEC’s engineering groups should also have switched from Bliss to C.
> Yes, PARC invented the modern windows-based GUI, but, as with so many
> PARC innovations, Xerox did nothing with it. Based on how the PARC
> alumni at WRL behaved at DEC, I would argue that this was the fault of
> PARC as much as of Xerox management.
Xerox built its Star office automation system based on PARC technology and with lots of support from PARC. Star was of course not a big success. PARC also invented laser printers, and Xerox made quite a bit of money from them.
Paul McJones (former Xerox SDD and DEC SRC member — I have been on both sides of the fence)
>>I have heard also that Isaacson's "Idea Factory" (about Bell Labs)
> Did you mean the work of this title by Jon Gertner?
Indeed. I should fact-check myself if I'm going to challenge
someone else's choice of facts.
Thanks for the catch,
Doug
>> From: Doug McIlroy
>> I have heard also that Isaacson's "Idea Factory" (about Bell Labs)
> Did you mean the work of this title by Jon Gertner? (I have yet to pull
> down my copy to see what it says about Unix
I looked, and it too says next to nothing about Unix (which it describes as a
"programming language" - pg. 346). Oh well.
This is really a pretty serious omission, given that the vast majority of
mobile devices now run Android, which is a Unix derivative (Linux). So just
about everyone has a Unix-derived thing in their pocket.
Noel
Long ago when we were running ACSnet
(https://en.wikipedia.org/wiki/MHSnet) we lacked graphical
workstations so we never saw the Bell Labs face mail
(https://en.wikipedia.org/wiki/Vismon) mechanism in action. I think
colleagues who later had Sun workstations might have briefly had
X-face in operation.
I see on Luca Cardelli's homepage that there was an icon for ACSnet
email; of course it is a kangaroo...
http://lucacardelli.name/indexArtifacts.html
scroll down to the Original 48x48 bitmaps for "face mail" at Bell Labs.
From: Doug McIlroy
> I have heard also that Isaacson's "Idea Factory" (about Bell Labs)
I was unable to find a book of this title by Isaacson? Did you mean the work
of this title by Jon Gertner? (I have yet to pull down my copy to see what it
says about Unix - it's in another room, and I'm lazy... :-)
> (In my recollection, McCarthy proselytized; Corbato led.)
I think that's an accurate 1-sentence summary.
> breathes not a word about Berkeley's work, without which ARPANET would
> have been an open circuit.
Can you elaborate on this point a bit - I'm not sure what it is you're
referring to?
> A couple of reviews predicted it would become the standard of the
> field.
Among people who spell 'Internet' with a lower-case 'i', perhaps it will
(sadly).
Noel
From: jkh(a)violet.Berkeley.EDU (Jordan K. Hubbard)
Subject: My Broadcast
Date: 2 April 1987 at 21:45:46 CEST
To: hackers_guild(a)ucbvax.Berkeley.EDU, tcp-ip(a)sri-nic.arpa
By now, many of you have heard of (or seen) the broadcast message I sent to
the net two days ago. I have since received 743 messages and have
replied to every one (either with a form letter, or more personally
when questions were asked). The intention behind this effort was to
show that I wasn't interested in doing what I did maliciously or in
hiding out afterwards and avoiding the repercussions. One of the
people who received my message was Dennis Perry, the Inspector General
of the ARPAnet (in the Pentagon), and he wasn't exactly pleased.
(I hear his Interleaf windows got scribbled on)
So now everyone is asking: "Who is this Jordan Hubbard, and why is he on my
screen??"
I will attempt to explain.
I head a small group here at Berkeley called the "Distributed Unix Group".
What that essentially means is that I come up with Unix distribution software
for workstations on campus. Part of this job entails seeing where some of
the novice administrators we're creating will hang themselves, and hopefully
prevent them from doing so. Yesterday, I finally got around to looking
at the "broadcast" group in /etc/netgroup which was set to "(,,)". It
was obvious that this was set up for rwall to use, so I read the documentation
on "netgroup" and "rwall". A section of the netgroup man page said:
...
Any of three fields can be empty, in which case it signifies
a wild card. Thus
universal (,,)
defines a group to which everyone belongs. Field names that ...
...
Now "everyone" here is pretty ambiguous. Reading a bit further down, one
sees discussion on yellow-pages domains and might be led to believe that
"everyone" was everyone in your domain. I know that rwall uses point-to-point
RPC connections, so I didn't feel that this was what they meant, just that
it seemed to be the implication.
Reading the rwall man page turned up nothing about "broadcasts". It doesn't
even specify the communications method used. One might infer that rwall
did indeed use actual broadcast packets.
Failing to find anything that might suggest that rwall would do anything
nasty beyond the bounds of the current domain (or at least up to the IMP),
I tried it. I knew that rwall takes awhile to do its stuff, so I left
it running and went back to my office. I assumed that anyone who got my
message would let me know.. Boy, was I right about that!
After the first few mail messages arrived from Purdue and Utexas, I began
to understand what was really going on and killed the rwall. I mean, how
often do you expect to run something on your machine and have people
from Wisconsin start getting the results of it on their screens?
All of this has raised some interesting points and problems.
1. Rwall will walk through your entire hosts file and blare at anyone
and everyone if you use the (,,) wildcard group. Whether this is a bug
or a feature, I don't know.
2. Since rwall is an RPC service, and RPC doesn't seem to give a damn
who you are as long as you're root (which is trivial to be, on a work-
station), I have to wonder what other RPC services are open holes. We've
managed to do some interesting, unauthorized, things with the YP service
here at Berkeley, I wonder what the implications of this are.
3. Having a group called "broadcast" in your netgroup file (which is how
it comes from sun) is just begging for some novice admin (or operator
with root) to use it in the mistaken belief that he/she is getting to
all the users. I am really surprised (as are many others) that this has
taken this long to happen.
4. Killing rwall is not going to solve the problem. Any fool can write
rwall, and just about any fool can get root privilege on a Sun workstation.
It seems that the place to fix the problem is on the receiving ends. The
only other alternative would be to tighten up all the IMP gateways to
forward packets only from "trusted" hosts. I don't like that at all,
from a standpoint of reduced convenience and productivity. Also, since
many places are adding hosts at a phenomenal rate (ourselves especially),
it would be hard to keep such a database up to date. Many perfectly well-
behaved people would suffer for the potential sins of a few.
I certainly don't intend to do this again, but I'm very curious as to
what will happen as a result. A lot of people got wall'd, and I would think
that they would be annoyed that their machine would let someone from the
opposite side of the continent do such a thing!
Jordan Hubbard
jkh(a)violet.berkeley.edu
(ucbvax!jkh)
Computer Facilities & Communications.
U.C. Berkeley
On 4 Jan 2019, at 21:46, Clem Cole <clemc(a)ccc.com> wrote:
>From where did that wonderful clip come? It's clearly a sequence from something else. I've never seen it before.
>Thanks,
>Clem
They were from my email archives of Hackers_Guild and the umd cs department staff mailing list. Does anybody else have any h_g archives sitting around?
Here’s some more funny stuff about the NSA! Gotta love how Brian Reid and Rick Adams weigh in. ;)
-Don
From: yee(a)dali.berkeley.edu (Peter E. Yee)
Subject: For those who missed 997@lll-crg, here it is
Date: 19 November 1985 at 15:58:08 CET
To: hackers_guild(a)ucbvax.berkeley.edu
Relay-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site lll-crg.ARpA
Posting-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site lll-crg.ARpA
Path: lll-crg!bandy
From: bandy(a)lll-crg.ARpA (Andrew Scott Beals)
Newsgroups: net.net-people
Subject: oh YES the NSA is on the net!
Message-ID: <997(a)lll-crg.ARpA>
Date: 19 Nov 85 07:11:36 GMT
Date-Received: 19 Nov 85 07:11:36 GMT
References: <324(a)ucdavis.UUCP> <2253(a)umcp-cs.UUCP>
Reply-To: bandy(a)lll-crg.UUCP (Andrew Scott Beals)
Distribution: net
Organization: Computation Research Group, Lawrence Livermore Labs
Lines: 94
Summary: (let's say) unintentional dis-information corrected
In article <2253(a)umcp-cs.UUCP> tlr(a)umcp-cs.UUCP (Terry L. Ridder) writes:
I can almost guarantee that the National Security Agency is
not on USENET or ARPANET. I can further almost guarantee that
very few employees of NSA are even aware that USENET exist.
Signed
Terry L. Ridder
UUCP: seismo!(mimsy.umd.edu|neurad)!bilbo!wiretap!(root|tlr)
^^^^^^^
PHONE: 301-490-2248 (home) 301-859-6642 (work)
Right.
There used to be a host called "TYCHO" (nickname "NSA") at host
zero on imp fifty-seven. (26.0.0.57) (information taken from the old
NIC (Network Information Center for Internet) host tables)
Now there is a machine called "DOCKMASTER" on that same imp port
(TYCHO was an old PDP-11 running version 6 unix (which rumors had
flown for quite some time that someone actually proved was secure)).
Here is what the NIC has to say about DOCKMASTER:
The National Computer Security Center (DOCKMASTER)
820 Elkridge Landing Road
Room A1127, Building FANX-II
Linthicum, MD 21090
NetAddress: 26.0.0.57
Nicknames: NCSC-MULTICS
Host Administrator and Liaison:
Aliff, Stephen W. (SWA1) Aliff.DODCSC@MIT-MULTICS
(301) 850-5888
Multics, if I remember correctly, was just given some level of
certification by the government that it was secure. Interesting, no?
Unfortunately, I'm not nearly as much of a Packrat as some might like
to think so I don't have a Maryland phone book (I do have my silly
putty though), so I can't tell you where this exchange is located
(nor where Terry's work number is located). However, looking up
Linthicum MD (I was born and raised just north of DC) shows that
it's just north of BWI (airport). There is a NASA center right near
there and next to that is an un-marked (of course) NSA center.
All of this points that imp 57 is still NSA's imp.
NIC has this to say about host 1 on imp 57:
National Security Agency (COINS-GATEWAY)
COINS Network Control Center
Fort George G. Meade, MD 20755
NetAddress: 26.1.0.57
Nicknames: COINS
Host Administrator and Liaison:
Smith, Ronald L. (RLS6) COINS@USC-ISI
(301) 688-6375
The NIC generally likes to give a machine the name "-GATEWAY" when
that machine is a gateway into another part of the internet. (the
machine type of COINS is a Plurbus, which is a multiprocessor
gateway machine manufactured by BBN (the folks who do the ARPANET
and MILNET hardware).
In any case, it seems that Mr Ridder is un-(or mis-?)informed.
Side note: at the last (Portland) USENIX, I happened across a
gentlemen (very cleancut) whose badge listed him as working for the
"Department of Defense, Fort Meade Maryland". I said "Oh, you're one
of those NSA guys!" To which he replied "How did you know?!"...
"Everyone else in DOD says /which/ part of DOD they work for..."
andrew scott beals
lawrence livermore national laboratory/university of california
Pooh-bah for LLL-CRG.ARPA
(415) 423-1948 (work) (533-1948 (FTS))
ps. In case anyone is wondering and before you go giving my name to
people that I don't want to talk to (like the Kind Folks at the NSA
(but I'm sure they've heard of me or will before I finish up with my
current round of paperwork with the DOE/OPM/FBI)), I obtained all of
this information through public channels.
--
There once was a thing called a V-2,
To pilot which you did not need to--
You just pushed a button,
And it would leave nuttin'
But stiffs and big holes and debris, too.
andy beals - bandy(a)lll-crg.arpa - {seismo,ihnp4!sun,dual}!lll-crg!bandy
From: jordan(a)ucbarpa.berkeley.edu (Jordan Hayes)
Subject: Re: ``dockmaster''
Date: 19 November 1985 at 16:27:34 CET
To: hackers_guild(a)ucbvax.berkeley.edu
for those so inclined, they should look at what is on port 2 of that
imp ... hmmm ... sorta like putting the CIA on port 4 of imp 78 ...
/jordan
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: Re: ``dockmaster''
Date: 19 November 1985 at 18:27:27 CET
To: hackers_guild(a)ucbvax.berkeley.edu, jordan(a)ucbarpa.berkeley.edu
Maryland lets NSA people use mimsy. The NSA is interested in the
supercomputer designs that they're working on there... (which is
why they have an imp connection)
In any case, I just got a long note from Mr Ridder. I'll forward
it to you when I'm done reading my mail...
andy
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: message from Mr. Ridder
Date: 19 November 1985 at 18:31:44 CET
To: hackers_guild(a)lll-crg.ARPA
From tlr(a)mimsy.umd.edu Tue Nov 19 06:13:44 1985
Date: Tue, 19 Nov 85 09:12:54 EST
From: Terry L. Ridder <tlr(a)mimsy.umd.edu>
Subject: Your posting
Mr. Andrew Scott Beals
I am writing to inform you of at least two facts:
The computer named "wiretap" belongs to my children,
age 9, age 7, age 2. Jennifer, the 7 year old, named
the computer. Sarah, the 9 year old, named the other
computer "bilbo".
Bilbo and wiretap are both private machines. The are
owned by my family and I. They are in no way shape or
form associated with the NSA.
Concerning your posting, I am concerned that you have
no regard for the safety of federal employees. Your
posting is marked for distribution "net", if you would
look at the two previous posting they are marked for
distribution 'usa'. Therefore, you probably have just
told most of the world the location of what you believe
to be an NSA facility. This probably has made the location
a target for any of a number of terrorist groups. What if
you are wrong? You have place in danger the lifes of
innocent people. Just because you may think you know something
does not mean that you tell most of the world.
I would hope that in the future that you would take the
time to think about all the ramifications before making
a posting, similiar in nature to the one in question.
I would hope that you will send out a cancel message on
your posting, before it gets to far.
I sincerely hope that you restrict your speculations about
my family's association with any federal agency. I hope
also that you are mature enough to post an apology for
inferring that my computers were associated with the NSA.
I do not want to think of what the implications are from
that speculation on your part. You may have damaged my
family's reputation and my own reputation.
Please be a little more responsible in the future.
Engage brain before fingers.
Signed
Terry L. Ridder
for the Terry L. Ridder family
---------------------
From: fair(a)ucbarpa.berkeley.edu (Erik E. Fair)
Subject: Re: message from Mr. Ridder
Date: 19 November 1985 at 18:42:34 CET
To: bandy(a)lll-crg.ARPA
Cc: Hackers_Guild(a)ucbvax.berkeley.edu
I wonder if this bozoid has ever read `The Puzzle Palace'?
It identifies several `secret' NSA installations, including
one out in the wilds of Sonoma, just over the border from
Marin County, along the road from Tomales to Petaluma. All
from public sources and Freedom Of Information Act suits.
Erik E. Fair ucbvax!fair fair(a)ucbarpa.berkeley.edu
P.S. Be sure to waive hello in your Email to the folks at the
Maryland Procurement Office...
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: Re: message from Mr. Ridder
Date: 19 November 1985 at 19:11:36 CET
To: fair(a)ucbarpa.berkeley.edu
Cc: Hackers_Guild(a)ucbvax.berkeley.edu
[mimsy.umd.edu]
Login name: tlr In real life: Terry L. Ridder
Office: Laurel MD 20707 Office phone: 859-6642
Home phone: 490-2248 Arpanet Sponsor
Directory: /u/tlr Shell: /bin/csh
Last login Tue Nov 19 09:17 on tty04
Project: To find a new job, raise three children, and have time for the wife.
Plan: To move overseas.
----------------------
Well, this is what it has to say about him. Arpanet sponsor, eh?
andy
From: fair(a)ucbarpa.berkeley.edu (Erik E. Fair)
Subject: Re: message from Mr. Ridder
Date: 19 November 1985 at 19:48:41 CET
To: bandy(a)lll-crg.ARPA
Cc: Hackers_Guild(a)ucbvax.berkeley.edu
Ask Chris Torek what an `Arpanet Sponsor' is...
Erik
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: more follies, dt if uninterested
Date: 20 November 1985 at 02:10:30 CET
To: hackers_guild(a)lll-crg.ARPA
Seems that the gentleman doesn't read his fucking news before going
off at the handle. I sent him an "Excuse me, but if you look at
article ..." note.
LLL General consul? Snicker snicker. Maybe Postmaster or root or
usenet will get a nice note from him telling me what a Bad Boy I've
been... :-)
andy
-----------------------
Date: Tue, 19 Nov 85 18:46:28 EST
From: Terry L. Ridder <tlr(a)mimsy.umd.edu>
Subject: apology is inorder
Mr. Andrew Scott Beals
I again ask that you act in a mature manner and post an
apology concerning your inferring that my private computers
are associated with the NSA.
If you choose not to, would you be kind enough to inform me
what the phone number is for LLL general consul is?
Signed
Terry L. Ridder
From: jordan(a)ucbarpa.berkeley.edu (Jordan Hayes)
Subject: Re: ridder me this ...
Date: 20 November 1985 at 02:59:22 CET
To: hackers_guild(a)ucbvax.berkeley.edu
Methinks either the man is an idiot or he's not really a force to
be reckoned with. If his main mail machine is mimsy, that means he's
on the same imp ... since NSA people have accounts at umd, maybe he's
FROM the NSA ... hmmm ...
/jordan
From: Milo S. Medin (NASA ARC Code ED) <medin(a)orion.ARPA>
Subject: Re: message from Mr. Ridder
Date: 20 November 1985 at 03:03:55 CET
To: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Cc: fair(a)ucbarpa.berkeley.edu, Hackers_Guild(a)ucbvax.berkeley.edu
LLL general counsel? uh oh..... That means lawyers....
Milo
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: ridder me this
Date: 20 November 1985 at 03:14:56 CET
To: hackers_guild(a)lll-crg.ARPA
One of my sources tells me that Mr Ridder is indeed an NSA person. Chris
Torek told me that an "Arpanet Sponsor" in their terminology means that
he's one of the people who helped them get on the network.
andy
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: Re: Ridder me this (qualification)
Date: 20 November 1985 at 03:31:22 CET
To: bandy(a)ll-crg.ARPA, deboor%buddy(a)ucbvax.berkeley.edu
Cc: hackers_guild(a)ucbvax.berkeley.edu
Oh, I already sent him an apology. Here it is:
Relay-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site lll-crg.ARpA
Posting-Version: version B 2.10.3 4.3bsd-beta 6/6/85; site lll-crg.ARpA
Path: lll-crg!bandy
From: bandy(a)lll-crg.ARpA (Andrew Scott Beals)
Newsgroups: net.net-people
Subject: Apology to Terry Ridder
Message-ID: <998(a)lll-crg.ARpA>
Date: 19 Nov 85 17:37:12 GMT
Date-Received: 19 Nov 85 17:37:12 GMT
References: <324(a)ucdavis.UUCP> <2253(a)umcp-cs.UUCP> <997(a)lll-crg.ARpA>
Reply-To: bandy(a)lll-crg.UUCP (Andrew Scott Beals)
Distribution: net
Organization: Computation Research Group, Lawrence Livermore Labs
Lines: 15
I would like to take this opportunity to formally extend my
apologies to Terry L. Ridder (tlr(a)mimsy.umd.edu) and his family for
insinuating that their home machines (bilbo and wiretap) and any
association with any Federal agency (the NSA in this case).
andrew scott beals
uc/llnl
--
There once was a thing called a V-2,
To pilot which you did not need to--
You just pushed a button,
And it would leave nuttin'
But stiffs and big holes and debris, too.
andy beals - bandy(a)lll-crg.arpa - {seismo,ihnp4!sun,dual}!lll-crg!bandy
---------------------
What was interesting was that the file was ~news/net/net-people/666 ...
Tee hee hee.
andy
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: calling LLL {lawyers,diplomats}
Date: 20 November 1985 at 03:38:35 CET
To: hackers_guild(a)lll-crg.ARPA
Of course, they'll tell him that "Anything that our employees say
is their own opinion unless they are a member of the LLNL Public
Information group and are speaking in an official capacity."
"Pin-heads. Pin-heads. Roly-poly pin-heads. Pin-heads. Pin-heads.
Watch them lose. Yow!"
andy
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: teehee
Date: 20 November 1985 at 17:18:25 CET
To: hackers_guild(a)ucbvax.berkeley.edu
From reid@glacier Wed Nov 20 06:59:01 1985
Date: Wed, 20 Nov 85 06:57:35 pst
From: Brian Reid <reid@glacier>
Subject: Re: Apology to Terry Ridder
Newsgroups: net.net-people
Organization: Stanford University, Computer Systems Lab
Terry Ridder is one of the biggest assholes on earth, and I can't fathom
anybody owing him an apology about anything. Oh well.
--
Brian Reid decwrl!glacier!reid
Stanford reid(a)SU-Glacier.ARPA
From: Andrew Scott Beals <bandy(a)lll-crg.ARPA>
Subject: philngai on tlr
Date: 21 November 1985 at 07:21:35 CET
To: hackers_guild(a)lll-crg.ARPA
From amdcad!phil Wed Nov 20 20:41:58 1985
Date: Wed, 20 Nov 85 20:08:04 pst
From: amdcad!phil (Phil Ngai)
Subject: Re: message from Mr. Ridder
what kind of asshole names a computer wiretap and then complains when
others make seemingly reasonable assumptions about it?
who should engage their brain, that's what i want to know.
--
Raise snails for fun and profit! Race them for amusement! Then eat the losers!
Phil Ngai +1 408 749-5720
UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
ARPA: amdcad!phil(a)decwrl.dec.com
From: cuuxb!jab(a)lll-crg.ARPA
Subject: Re: message from Mr. Ridder
Date: 24 November 1985 at 02:25:13 CET
To: lll-crg!sdcsvax.arpa!hutch(a)lll-crg.ARPA
Cc: lll-crg!hackers_guild(a)ucbvax.berkeley.edu
The Ridder guy is a jerk. I would wonder why the ARPANET knows about his
private machines, anyhow: sounds like a misuse of government funding.
Jeff Bowles
Lisle, IL
From: Donnalyn Frey <donnalyn(a)seismo.css.gov>
Subject: Re: private machines on internet
Date: 24 November 1985 at 06:08:34 CET
To: cuuxb!jab(a)lll-crg.ARPA, deboor%buddy(a)ucbvax.berkeley.edu
Cc: hackers_guild(a)ucbvax.berkeley.edu
Ridders machines are NOT on the arpanet. They have uucp links to
Uof Maryland. Ridder himself has an account on mimsy.umd.edu.
Ridders two machines were named by his children. ONe had
just finished reading teh Hobbit (hence bilbo, despite the
2 other known bilbos [not to be confused with certain dildos being discussed])
and the other had finished some spy book, hence wiretap.
He is quite pompous and seems to think the world revolves around
him. We asked him to rename "bilbo" to not conflict. He replied that
the other machines should change because he had already named his
machine.
By the way, we're talking about toys here (maybe somthing as expensive
as an IBM-PC) not the "real" machines you might be led to believe.
He is best ignored.
---rick
(I originally posted this to hacker news, but I’ll repost it here too.)
At the University of Maryland, our network access was through the NSA's "secret" MILNET IMP 57 at Fort Meade. It was pretty obvious that UMD got their network access via NSA, because mimsy.umd.edu had a similar "*.57" IP address as dockmaster, tycho and coins.
https://emaillab.jp/dns/hosts/
HOST : 26.0.0.57 : TYCHO : PDP-11/70 : UNIX : TCP/TELNET,TCP/SMTP,TCP/FTP :
HOST : 26.0.0.57 : DOCKMASTER.NCSC.MIL,DOCKMASTER.DCA.MIL, DOCKMASTER.ARPA : HONEYWELL-DPS-8/70 : MULTICS : TCP/TELNET,TCP/FTP,TCP/SMTP,TCP/ECHO,TCP/DISCARD,ICMP :
HOST : 26.1.0.57 : COINS-GATEWAY,COINS : PLURIBUS : PLI ::
HOST : 26.2.0.57, 128.8.0.8 : MARYLAND,MIMSY,UMD-CSD,UMD8,UMCP-CS : VAX-11/780 : UNIX : TCP/TELNET,TCP/FTP,TCP/SMTP,UDP,TCP/ECHO,TCP/FINGER,ICMP :
https://multicians.org/site-dockmaster.html
Whenever the network went down (which was often), we had to call up a machine room at Fort Meade and ask them to please press the reset button on the box labeled "IMP 57". Sometimes the helpful person who answered the phone had no idea which box I meant, so I had to describe to him which box to reset over the phone. ("Nope, that didn't work. Try the other one!" ;) They were even generous enough to issue us (CS department systems staff and undergrad students) our own MILNET TACACS card.
On mimsy, you could get a list of NSA employees by typing "grep contact /etc/passwd", because each of their courtesy accounts had "network contact" in the gecos field.
Before they rolled out TACACS cards, anyone could dial up an IMP and log in without a password, and connect to any host they wanted to, without even having to murder anyone like on TV:
https://www.youtube.com/watch?v=hVth6T3gMa0
I found this handy how-to tutorial guide for "Talking to the Milnet NOC" and resetting the LH/DH, which was useful for guiding the NSA employee on the other end of the phone through fixing their end of the problem. What it doesn't mention is that the key box with the chase key was extremely easy to pick with a paperclip.
Who would answer the Milnet NOC's 24-hour phone was hit or miss: Some were more helpful and knowledgeable than others, others were quite uptight.
Once I told the guy who answered, "Hi, this is the University of Maryland. Our connection to the NSA IMP seems to be down." He barked back: "You can't say that on the telephone! Are you calling on a blue phone?" (I can't remember the exact color, except that it wasn't red: that I would have remembered). I said, "You can't say NSA??! This is a green phone, but there's a black phone in the other room that I could call you back on, but then I couldn't see the hardware." And he said "No, I mean a voice secure line!" I replied, "You do know that this is a university, don't you? We only have black and green phones.”
Date: Thu, 11 Sep 86 13:53:45 EDT
From: Steve D. Miller <steve(a)brillig.umd.edu>
To: staff(a)mimsy.umd.edu
Subject: Talking to the Milnet NOC
This message is intended to be a brief tutorial/compendium of
information you probably want to know if you need to see about
getting the LH/DH thingy (and us) talking to the world.
First, you need the following numbers:
(1) Our IMP number (57),
(2) Mimsy's milnet host address (26.2.0.57),
(3) The circuit number for our link to the NSA
(DSEP07500-057)
(4) The NOC number itself (692-5726).
Second, you need to know something about the hardware. There
are three pieces of hardware that make up our side of the link:
the LH/DH itself, the ECU, and the modem. The LH/DH and the
ECU are the things in the vax lab by brillig; the ECU is the
thing on top (with the switches), and the LH/DH is the thing
on the bottom. The normal state is to have the four red LEDs
on the ECU on and the Host Master Ready, HRY, Imp Master Ready,
and IRY lights on at the LH/DH. If these lights are not on,
something is wrong. If mimsy is down, then we'll only have some
of the lights on, but that should fix itself when mimsy comes up.
Some interesting buttons or switches on the ECU are:
reset - resets something or another
stop - stops something or another
start - restarts something or another
local loopback -- two switches and two leds; you may need
to throw one or the other of these if the NOC asks
you to. These loopback switches should be distinguished
from those on the modem itself.
remote loopback -- like local loopback, but does something else.
The modem is in the phone room beside the terminal room (rm.
4322, if memory serves). It can be opened with the chase key from
the key box...but if someone official and outside of staff asks
you that, you probably shouldn't admit to it. It has a switch on
it, too; it seems that switch normally rests in the middle, and
there's a "LL" setting to the left which I assume puts the modem in
local loopback mode.
Now that you have some idea of where things are, call the NOC.
Identify yourself as from the University of Maryland, and say that
we're not talking to the outside world. They will probably ask for
our Milnet address or the number of the IMP we're connected to,
and will then poke about and see what's happening. They will ask
you to do various things; ask if you're not sure what they mean,
but the background info above should help in puzzling it out.
Hopefully, this will make it easier to find people to fix
our net problems in the future; it's still hard to do 'cause
we have so little info (no hardware manual, for example),
but this should give us a fighting chance.
-Steve
There were rumored to be "explosive bolts" on the ARPA/MILNET gateways (whether they were metaphorical or not, I don't know).
Here's something interesting that Milo Medin wrote about dual homed sites like NSA and NASA, that were on both the ARPANET and MILNET:
To: fair(a)ucbarpa.berkeley.edu (Erik E. Fair)
Cc: Hackers_Guild(a)ucbvax.berkeley.edu, ucdavis!ccohesh(a)ucbvax.berkeley.edu
Subject: Re: a question of definition
Date: Thu, 29 Jan 87 15:33:35 PST
From: Milo S. Medin (NASA ARC Code ED) <medin(a)orion.arpa>
Right, the core has many gateways on it now, maybe 20-30. All the LSI's will
be stubbed off the core however, and only buttergates will be left after
the mailbridges and EGP peers are all converted. Actually, I think DARPA is
paying for it all...
Ames is *not* getting a mailbridge. You are right of course, that we could
use 2 gateways, not just 1 (actually, there will be a prime and backup anyways),
and then push routing info appropriately. But that's anything but simple.
Firstly, the hosts have to know which gateway to send a packet to a given
network, and thus have to pick between the 2. That's a bad idea.
It also means that I have to pass all EGP learned info around on the
local cable, and if I do that, then I can't have routing info from
the local cable pass out via EGP. At least not without violating
the current EGP spec. Think about it. It'd be really simple to
create a loop that way. Thus, in order to maximize the use of both
PSN's, you really need one gateway wired to both PSN's, and just
have it advertise a default route inside. Or use a reasonble IGP,
of which RIP (aka /etc/routed stuff) is not. I'm hoping to get
an RFC out of BBN at this IETF meeting which may go a long way in
reducing the use of RIP as an IGP.
BTW, NSA is an example of a site on both MILNET and ARPANET but without
a mailbridge...
There is no restriction that a network can only be on ARPANET or MILNET.
That goes against the Internet model of doing things. Our local
NASA gatewayed nets will be advertised on both sides. The restriction
on BARRNet is that the constituent elements of BARRNet do not all
have access to MILNET. NSF has an understanding with DARPA and
DCA that NSFnet'd sites can use ARPANET. That does not extend to
the MILNET. Thus, Davis can use UCB's or Stanford's, our even NASA's
ARPANET gateways, with the approval of the site of course, but
not MILNET, even though NASA has MILNET coverage. Thus we are required
to restrict BARRNet routing through our MILNET PSN. If we were willing
to sponsor UCB's MILNET access, for some requirement which NASA
had to implement, then we would turn that on. But BARRNet itself will
but cutoff to MILNET (and probably ARPANET too) at Ames, but not
cut off to other NASA centers or sites that NASA connects. There is
no technical reason that prevents this, in fact, we have to take
special measures to prevent it. But those are the rules. Anyways,
mailbridge performance should improve after the conversion, so
UCB should be in better shape. And you'll certainly be able to
talk to us via BARRNnet... I have noticed recently that MILNET<->
ARPANET performance has been particularly poor... Sigh.
The DCA folks feel that in case of an emergency they may be
forced to use an unsecure network to pass certain info around. The
DDN brochure mentions SIOP related data for example. Who knows,
if the balloon goes up, the launch order might pass through Evans
Hall on its way out to SAC... :-)
Milo
I dug up an "explosive bolts" reference -- fortunately that brilliant plan didn't get far.
(Milo Medin knows this stuff first hand: https://innovation.defense.gov/Media/Biographies/Bio-Display/Article/139585… )
To: fair(a)ucbarpa.berkeley.edu (Erik E. Fair)
Cc: ucdavis!ccohesh(a)ucbvax.berkeley.edu, Hackers_Guild(a)ucbvax.berkeley.edu
Subject: Re: a question of definition
Date: Thu, 29 Jan 87 12:29:36 PST
From: Milo S. Medin (NASA ARC Code ED) <medin(a)orion.arpa>
Actually its:
SCINET -- Secret Compartmented Information Net (if you don't know what
compartmented means, you don't need to ask)
DODIIS -- DoD Intelligence Information Net
The other stuff I think is right, at least without me looking things
up. I probably shouldn't have brought this subject of the secure part
of the DDN up. People like being low key about such things...
Erik, all the BBN gateways on MILNET and ARPANET currently comprise
the core, not just mailbridges. Some are used as site gateways, others
as EGP neighbors, etc... And just because you are dual homed doesn't mean
you get a mailbridge. And the IETF doesn't deal with low level stuff
like that; DCA does all that. In fact, the reason we are getting an
ARPANET PSN is because when DCA came out to do a site survey, they
liked our site so much they asked if they could put one here! It's
amazing how many sites have tried to get ARPANET PSN's the right
way and have had to wait much longer than us... BTW, since we are
dual homed (probably a gateway with 2 1822 interfaces in it), we
are taking steps to be sure that people on ARPANET or MILNET can't
use our gateway to bypass the mailbridges. The code will be hacked
to drop all packets that aren't going to a locally reachable network.
BARRNet, even though its locally reachable, will be excluded
from this however, since the current procedural limitations call for
not allowing any BARRNet traffic to flow out of BARRNet to MILNET
and the reverse. NASA traffic of course can traffic through BARRNet,
and even use ARPANET that way (though that's not a big deal when
we get our own ARPANET PSN). That's because only NASA is authorized
to directly connect to MILNET, not UCB or Stanford, etc...
DCA must have the ability to partition the ARPANET and MILNET in
case of an "emergency", and having non-DCA controlled paths between
the nets prevents that. There was talk some time ago about putting
explosive bolts in the mailbridges that would be triggered by
destruct packets... That idea didn't get far though...
The DDN only includes MILNET,ARPANET,SCINET,etc... Not the attached
networks. If it did, you'd need to file a TSR to add a PC to your
local cable. A TSR is a monstrous piece of paperwork that needs to
be done anytime anything is changed on the DDN... Rick knows all
about them don't you Rick?
The whole network game is filled with acronyms! I gave up trying
to write documents with full explainations in terms long ago...
I have yet to see a short and concise (and correct) way of describing
DDN X.25 Standard Service for example... That's probably one of the
harder things about getting into networking these days. We won't
even talk about Etherbunnies and Martians and other Millspeak...
Milo '1822' Medin
The issue of a.out magic numbers came up. The a.out header was 16 bytes. The first two bytes were 0407 in the original code. This was followed by 16-bit quantities for the text, data, and bss sizes, then the size of the symbol table. I'm pretty sure the rest of the fields were blank in V6. Later a start address (previously always assumed to be zero) was added.
The number 407 was a neat kludge. It was a (relative) branch instruction on the PDP-11. 0400 was the base op code. 7 referred to jumping ahead 7 words which skipped you over the a.out header (the PC had already been incremented for the branch instruction itself). This allowed you to make a boot block without having to strip off the header. Boot blocks were just one 512 byte block loaded from block zero of the disk into low memory.
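Laid out as a C struct, the 16-byte header described above would look roughly
like this. The first five fields follow the description directly; the last
three (entry point, a spare word, a relocation-stripped flag) are sketched from
the traditional layout and should be treated as an approximation rather than a
quote from the V6 header file. The 0407 arithmetic also checks out: the branch
word itself is 2 bytes and the 7-word offset skips 14 more, i.e. exactly the
16-byte header.

    struct exec {
        short a_magic;   /* 0407; later 0410 (protected text) or 0411 (split I/D) */
        short a_text;    /* size of text segment in bytes */
        short a_data;    /* size of initialized data in bytes */
        short a_bss;     /* size of uninitialized data in bytes */
        short a_syms;    /* size of the symbol table in bytes */
        short a_entry;   /* entry point; originally always assumed to be 0 */
        short a_unused;  /* spare word (approximation) */
        short a_flag;    /* set if relocation info was stripped (approximation) */
    };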
Later executables used 410 for a write protected text segment and 411 for split-I/D executables. Later versions added more codes (413 was used in BSD to indicate that page-aligned segments followed, etc.). Even later systems coded the hardware type into the magic number to distinguish between different architectures.
Note that the fact that 410 and 411 were also PDP-11 branch instructions wasn't ever really used for anything.
According to my notes, the ARPAnet was converted from NCP to TCP on this
day in 1983; except for a temporary dispensation for a few hosts, NCP
support was switched off.
And as every Unix geek knows, today in 1970 is Unix's time epoch.
Trivia: I found a web site that thinks that that's my birthday! Not even
close; try October 1952 instead...
-- Dave
Warner Losh:
But wasn't it tsort that did the heavy lifting to get things in order?
ar c foo.a `tsort *.o`
Ranlib just made it fast by adding an index..
====
There's a little more than that to ranlib.
Without ranlib, ld made a single pass through each library,
loading the modules that resolved unresolved symbols. If
a module itself had unresolved symbols (as many do) and
some of those symbols were defined in a module ld had
already passed, you were out of luck, unless you explicitly
told ld to run a second pass, e.g. cc x.o y.o -la -la.
Hence the importance of explicit ordering when building
the library archive, and the usefulness of tsort.
Ranlib makes a list of all the .o files in this archive and
the global symbols defined or used by each module, and
places the list first in the archive. If ld is (as it
was) changed to recognize the index, it can then make a
list of all the object files needed from this archive, even
if needed only by some file it will load from the same
archive, then collect all required modules in a single pass.
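For anyone who has never looked inside the index: it is essentially a table
mapping defined globals to archive members. A rough sketch of one entry,
modeled on the traditional BSD __.SYMDEF layout (a sketch, not the exact
historical definition):

    /*
     * One entry of a ranlib-style archive index.  The index member holds
     * an array of these plus a string table; ld scans the array once to
     * learn which members define which globals, then pulls in everything
     * it needs in a single pass.
     */
    struct ranlib {
        long ran_strx;   /* offset of the symbol's name in the string table */
        long ran_off;    /* byte offset of the defining member in the archive */
    };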
Ld could all along have just made two passes through the
library, one to assemble the same list ranlib did in advance,
a second to load the files. (Or perhaps a first pass to
load what it knows it needs and assemble the list, and a
second only if necessary.) Presumably it didn't, both to
make ld simpler and because disk I/O was much slower back
then (especially on a heavily-loaded time-sharing system,
something far less common today). I suspect it would work
fine just to do it that way today.
Nowadays ranlib is no longer a separate program: ar
recognizes object files and maintains an index if any are
present. I never especially liked that; ar is in
principle a general tool so why should it have a special
case for one type of file? But in practice I don't know
anyone who uses ar for anything except libraries any more
(everyone uses tar for the general case, since it does a
better job). Were I to wave flags over the matter I'd
rather push to ditch ar entirely save for compatibility
with the past, move to using tar rather than ar for object
libraries, and let ld do two passes when necessary rather
than requiring that libraries be specially prepared. As
I say, I think modern systems are fast enough that that
would work fine.
Norman Wilson
Toronto ON
Happy New Year to you all. It's also the year we will celebrate the
50th anniversary of the Unix timesharing system. Just FYI, I hope to
be in Los Angeles the week before the 2019 Usenix ATC, to go to the
CHM. Then to Seattle before the ATC to visit the LCM+L, then the ATC.
Hope to see some of you along the way.
I'm feeling the need to get a few other people to help out curate and
maintain the Unix Archive. Not that it changes very often, but it might
help to have some fresh eyes (and opinions) look at it and add/improve it.
So if there is any interest from a few of the long-time TUHS members,
please e-mail me. I run Nextcloud on the server, so if you can run a
client at your end and have about 4G spare disk space (or less if
you are only interested in a specific section), that would be great.
I'll be away for about 7 days but I'll try to read/respond to e-mails.
Cheers, Warren
I found this project online recently. For those who love K&R and the 64 bit world.
https://github.com/gnuless/ncc
It outputs to a custom a.out format so it's not immediately usable.
It's a dual clause BSD license too!
Dear Unix Enthusiasts,
We are seriously considering upgrading our PDP 11/40 clone (SIMH), to a PDP 11/45 (preferably another SIMH) for our Unix v6 installation. Our CEO was traveling and met a techie in first class (seriously, first class?) who told him that we needed one. I thought I had better ask some folks who have gone before about it before we jumped on the bandwagon. By way of background, Our install is pretty small with a few rk’s and 256K of ram along with a few standard peripherals, and some stuff our oldtimers refuse to part with (papertape, card punch, etc). It has fairly low utilization - a developer logs in and writes code every few days and the oldtimers hunt the wumpus and play this weird Brit game about cows. It could be considered a casual development and test environment and an occasional gaming console.
Here is what I would like to know that I think y’all might be particularly equipped to answer:
1. Are there any v6 specific concerns about upgrading?
2. Why should we consider taking the leap to the 11/45? Everything seems to work fine now.
3. If we jump in and do the upgrade, how can we immediately recognize what has changed in the environment? I.e what are some things that we can now do that we couldn’t do before?
4. If we just insert our current diskpacks into the new system, will it just boot and work? Or what do we need to before/after booting to prepare/respond to the new system?
5. Is 256K enough memory or what configuration do y’all recommend?
6. Is there anything else we need to know about?
Regards,
Will
Sent from my iPhone
> From: Larry McVoy
> And cron is really 3246 bytes? And 2054 for init? Don't those seem too
> small? Linux's cron is 44472 and that's with shared libs
No, 3246 is the same as mine, and my init (which has a few changes from stock) is
2064.
I'm not surprised the later one is 44KB - that's in part due to the denseness
of PDP-11 binary (and the word-size is only 16 bits), but more broadly, I
expect that it goes to my complaint about later Unixes - they've lost, IMO,
the single most important thing about the PDP-11 Unixes, which is their
bang/buck ratio.
Noel
We lost Rear Admiral "Amazing" Grace Hopper on this day in 1992; amongst
other things she gave us COBOL and ADA, and was allegedly responsible for
the term "debugging" when she removed a moth from a relay on the Harvard
Mk I.
-- Dave
> From: Will Senn
> We are seriously considering upgrading our PDP 11/40 clone (SIMH), to a
> PDP 11/45 (preferably another SIMH)
Heh! When I saw the subject line, I thought you wanted to upgrade a
_physical_ -11/40 to an -11/45. ('Step 1. Sell the -11/40. Step 2. Buy
an -11/45.' :-)
> for our Unix v6 installation.
Why on earth would an organization have such a thing? :-)
> Our CEO was traveling and met a techie in first class (seriously,
> first class?) who told him that we needed one.
Heh. If said techie knows about the two, he's probably pretty senior (i.e.
eligible for Social Security :-), and thus eligible for first class... :-)
> It has fairly low utilization - a developer logs in and writes code
> every few days
Who the *&%^&*(%& is still writing code under V6?!
And how do you all get the bits in and out? (I run mine under Ersatz-11,
which has this nice device which allows it to read files off the host file
system; transfering stuff back and forth is a snap, I do all my editing with
Epsilon on my Windoze box, 'cause I'm too lazy to bring up the V6 Emacs I
have.)
> 1. Are there any v6 specific concerns about upgrading?
Not that I know of.
> 2. Why should we consider taking the leap to the 11/45? Everything
> seems to work fine now.
You're asking _us_?
Some larger applications will only run on an split-I-D machine, is about the
only reason I can think of.
Oh, also, the floating point instructions on the /45 are the only kind
understood by V6; the C compiler doesn't emit the ones the /40 provides. For
any floating point code run on the /40, the instructions are simulated by a
trap handler (by way of the OS, which has to catch the trap and reflect it to
the user process) - i.e. very slow.
> 3. If we jump in and do the upgrade, how can we immediately recognize
> what has changed in the environment? I.e what are some things that we
> can now do that we couldn't do before?
See above.
> 4. If we just insert our current diskpacks into the new system, will it
> just boot and work? Or what do we need to before/after booting to
> prepare/respond to the new system?
Any V6 disk pack can be read/mounted on any V6 machine. Any binaries (the OS,
or user commands) for the -11/40 will run on the -11/45. (Which is why the V6
dist includes binaries for /40 versions of the OS only.)
To make use of the /45, you need a different copy of the OS binary, built from
a slightly different set of modules. (Replace m40.s with m45.s; and you will
need to re-assemble l.s, prepending it with data.s.) Both variants can live
on the same pack, under different filenames; select the right one at boot
time.
> 5. Is 256K enough memory or what configuration do y'all recommend?
256KB is all you can have. Neither SIMH nor Ersatz-11 support the Able
ENABLE:
http://gunkies.org/wiki/Able_ENABLE
which is what you need to have more than 256KB on a UNIBUS -11.
> From: Clem Cole
> You'll probably want to configure a kernel for the 45 class machine.
> Look at the differences in the *.s files in the kernel.
More importantly, look at the 'run' file in /usr/sys, which has commented
out lines to build the OS image for /45-/70 class machines.
> But either way you should configure the system to use the largest drive
> v6 has.
This is actually of limited utility, since a V6 file system is restricted to
65K blocks _max_. So with a disk of 350K blocks (like an RP06), you'll have to
split it into something like 5 partitions to use it all.
> From: Will Senn
> Do you know of some commonly used at the time v6 programs that needed
> that much space?
Heh. Spun up my v6, and did "file * | grep separate" in /bin and /usr/bin,
and then recalled that V6 was distributed in a form suitable for a /40. So,
null set.
Did the same thing on /bin from the MIT V6+ system, and got:
a68: separate I&D executable not stripped
a86: separate I&D executable not stripped
bteco: separate I&D executable not stripped
c86: separate I&D executable not stripped
e: separate I&D executable not stripped
emacs: separate I&D executable not stripped
lisp: separate I&D executable not stripped
mail: separate I&D executable not stripped
ndd: separate I&D executable not stripped
s: separate I&D executable not stripped
send: separate I&D executable not stripped
teco: separate I&D executable not stripped
No idea what the difference is between 'teco' and 'bteco', what 's/send' do,
etc.
> Is there any material difference between doing it at install time vs
> having run on 11/40 for a while and moving the disk over to the 11/45
> later?
No; like I said, you can have two different OS binaries on the disk, and
select which one you boot.
> On a related note, how difficult is it to copy the system from rk to
> hp? I know I can rebuild, but I'm sure there's a quicker/easier method...
Build a system with both, and then copy the files? I'd use 'tar' (I have a V6
tar, but it uses a modified OS with the smdate() call added back in) to do the
moving (which would retain the last-write dates); 'tp' or 'stp' would also
work.
The hack _I_ used on simulated systems was to expand the file that held the
'disk pack', mount it as a different kind of pack (RL or RP), and then go in
and hand-patch the disk size in the root block with 'db', then 'icheck -s' to
re-build the free list. Note: this won't give you more inodes, so you may run
out, but the usual inode allocation is pretty generous.
Noel
PS: Speaking of the last write dates, I have versions of mv/mvall, cp/cpall,
ln, chmod etc which retain them (using smdate()). If there's an actual
community of people using V6, I should upload all the stuff I have. Although
it might be good to establish some central location for exchange of V6 code.
However, I don't and won't (don't even ask) use GitHub or any similar modern
thing.
The setting up document hints at how to build world so to speak in v6.
However, when I log in as bin (most files are owned by bin) and:
chdir /usr/source
sh run
I get a number of failed items along these lines:
cp a.out /etc/cron
Can't create new file.
cp a.out /etc/init
Can't create new file.
A little digging around points to the problem - some files are owned by
daemon, others by root:
-rwsrwsr-- 1 daemon 3246 Oct 10 12:54 cron
-rwxrwxr-- 1 root 2054 May 13 23:50 init
My question is this: is the system recompiled en masse using the run
script in /usr/source or not? It certainly appears to be the method, as
it contains a bunch of 'chdir somedir; time sh run' lines, including one for
the /usr/sys/run file... If it's not, what was the method?
I gather I can force it by logging in as root and running those sections
of the run script pertaining to files owned by root, and the same for
daemon, but that seems inefficient and begs the question of why they didn't
have run scripts for root and daemon separate from the ones
for bin.
Thanks,
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
> From: Will Senn
> I whacked /usr/sys/lib1 and lib2 'accidentally' meaning I logged in as
> bin changed to /usr/sys and typed rm lib1 and rm lib2 :).
Doesn't sound very accidental... :-)
> sh run as bin doesn't do it.
Odd. 'run' in /usr/sys on my V6 machine (not that I use that, mind) says:
chdir ken
cc -c -O *.c
ar r ../lib1
rm *.o
chdir ../dmr
cc -c -O *.c
ar r ../lib2
rm *.o
which should regenerate them - sort of. I suspect you really meant 'doing sh
run creates a lib1 and lib2, but then I get errors from the ld phase with
missing symbols'. Yes?
If so, the thing is that the V6 linker won't pull in an object module from a
library unless a global in it satisfies an already existing (i.e. in the
linking process) undefined global. (I don't know if this is true of later
linkers; never used 'em.) In other words, when loading a multi-module system,
the module with 'main' has to be first, and then the rest in an order such
that each one holds a previously-undefined symbol.
So the order of the object modules you'll get in lib? from the above, if you
precede them with 'rm lib?', is probably not the right order. (The above shell
script assumes they already exist, with the modules in the right order, so the
above just replaces them with the newly compiled versions...)
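To make the single-pass rule concrete, here is a made-up example (the file names and symbols are invented purely for illustration):

    /* main.c - after main.o is loaded, f is undefined */
    extern int f();
    int main() { return f(); }

    /* a.c - defines f, leaves g undefined */
    extern int g();
    int f() { return g() + 1; }

    /* b.c - defines g, references nothing */
    int g() { return 41; }

Archive the members as a.o then b.o and a load of main.o against the library
works: when the scan reaches a.o, f is undefined, so a.o is pulled in and g
becomes undefined; b.o then satisfies g. Archive them as b.o then a.o and b.o
is skipped (nothing it defines is undefined at that point), a.o is pulled in
afterwards, and g is left unresolved.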
> So, what magic incantation is required to rebuild them.
Here's the ordering in lib1:
main.o
alloc.o
iget.o
prf.o
rdwri.o
slp.o
subr.o
text.o
trap.o
sig.o
sysent.o
clock.o
fio.o
malloc.o
nami.o
pipe.o
sys1.o
sys2.o
sys3.o
sys4.o
Other orders would work too (e.g. you could move sys?.o up just after sysent.o
and it should work).
My lib2 is somewhat odd, so I hesitate to list it, but since most modules in
dmr are pulled in from entries in c.c, almost any order will work, I think.
Noel
> From: Will Senn
> Thanks for not dismissing the thread as frivolity.
Hey, anyone wanting to do things with V6 I take seriously! :-)
> I'm sure y'all have seen Mills's winning Best in Show IOCCC entry:
> https://www.ioccc.org/2018/mills/hint.html
Yes, that was pretty awesome.
> Fantastic, I'm prolly gonna try it.
OK; if you want to know what it's doing that's different from the /40 (it's
quite different, and somewhat complicated; somehow I figured you probably
didn't want to just follow the instructions blindly :-), I just wrote this:
http://gunkies.org/wiki/Unix_V6_kernel_memory_layout
to explain it a bit. Currently, one has to read the source to 'sysfix', and
also m45.s, to understand how the /45 version works; that new page is a little
crude still, but it hopefully explains the big picture.
> If the instructions in Setting up are as good for the 45 as they are for
> the 40, I should be able to bring one up relatively painlessly.
I just took a look at "Setting up UNIX - Sixth Edition", and it doesn't really
say much about the /45; it basically just says 'the /45 is weird inside' and
'look at sys/run'. It is certainly true that that does cover all one needs to
bring V6 up on the /45, but... The coverage of what to do if your '45' has
hardware floating point is pretty complete, though.
> What it sounds like is that Unix was transitioning from non-I/D land to
> I/D land and maintaining a measure of backward compatibility
That's pretty accurate. One main advantage of the /45 is that it could have a
lot more disk buffers, but I'm not sure that makes much difference for
emulation. If you have some application that won't fit well in 64KB, that's
big, but that's a user-land difference, not the OS.
> Is there a bootable tape of the MIT system extant?
Not yet, sorry. I do have a complete dump, but it i) includes all the users'
personal files, and ii) is not well organized. It's on my to-do list.
Noel
> But wasn't it tsort that did the heavy lifting to get things in order?
An amusing notion. Having written tsort, I can assure you it couldn't
lift anything heavy--it used the most naive quadratic algorithm. But
it was good enough for libc.
Doug
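For flavor, here is a minimal sketch of that sort of naive, roughly quadratic
approach - not the original tsort source; the names and limits are invented.
It reads pairs "a b" meaning a must precede b (the form lorder produces for
object files) and prints one compatible ordering:

    /* Sketch only, not the original tsort: a naive, roughly quadratic
       topological sort.  Input is whitespace-separated pairs "a b",
       meaning a must precede b; output is one compatible ordering. */
    #include <stdio.h>
    #include <string.h>

    #define MAXPAIR 1024

    struct node {
        char *nm;
        int out;                        /* already printed */
    };
    struct node tab[2*MAXPAIR];
    int ntab;
    struct node *pre[MAXPAIR], *suc[MAXPAIR];
    int npair;

    struct node *
    intern(char *s)
    {
        int i;

        for (i = 0; i < ntab; i++)
            if (strcmp(tab[i].nm, s) == 0)
                return &tab[i];
        tab[ntab].nm = strdup(s);
        tab[ntab].out = 0;
        return &tab[ntab++];
    }

    int
    main(void)
    {
        char a[256], b[256];
        int i, j, nout, progress;

        while (npair < MAXPAIR && scanf("%255s %255s", a, b) == 2) {
            pre[npair] = intern(a);
            suc[npair] = intern(b);
            npair++;
        }
        for (nout = 0; nout < ntab; ) {
            progress = 0;
            for (i = 0; i < ntab; i++) {
                if (tab[i].out)
                    continue;
                /* printable only if every predecessor is already out;
                   self-pairs "a a" (as lorder emits) are ignored */
                for (j = 0; j < npair; j++)
                    if (suc[j] == &tab[i] && pre[j] != &tab[i] && !pre[j]->out)
                        break;
                if (j == npair) {
                    printf("%s\n", tab[i].nm);
                    tab[i].out = 1;
                    nout++;
                    progress = 1;
                }
            }
            if (!progress) {
                fprintf(stderr, "cycle in input\n");
                return 1;
            }
        }
        return 0;
    }

Fed pairs of "referencing module, defining module", it prints an order a
single-pass loader is happy with.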
>Date: Sun, 30 Dec 2018 14:24:55 -0500
>From: Paul Winalski <paul.winalski(a)gmail.com>
>To: Norman Wilson <norman(a)oclsc.org>
>Cc: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Deleted lib1 and lib2 in v6, recoverable?
>
>On 12/30/18, Norman Wilson <norman(a)oclsc.org> wrote:
>
>> <snip>
>>
>> Nowadays ranlib is no longer a separate program: ar
>> recognizes object files and maintains an index if any are
>> present. I never especially liked that; ar is in
>> principle a general tool so why should it have a special
>> case for one type of file? But in practice I don't know
>> anyone who uses ar for anything except libraries any more
>> (everyone uses tar for the general case, since it does a
>> better job).
>
>As you say, nobody these days uses ar for anything except object
>module libraries. And just about anything you do that modifies an ar
>library will require re-running ranlib afterwards. So as a
>convenience and as a way to avoid cockpit errors, it makes sense to
>merge the ranlib function into ar. MacOS still uses an independent
>ranlib, and it's a pain in the butt to have to remember to run ranlib
>after each time you modify an archive.
>
Maybe not on some of the older, more resource-restricted systems,
but nowadays wouldn't modifying an archive normally be part of the
definitions/rules in a makefile, and as such wouldn't the makefile
include running ranlib whenever an archive was modified?
uncle rubl
I wrote (re my approach to sendmail.cf):
> Bill's half right. I didn't invent a language; I used what was there.
Grant Taylor asked:
Can I ask what language you did use? Was it m4 or something else?
====
I think you missed my point. The language I used was plain old
sendmail.cf.
Norman Wilson
Toronto ON
In the help file for v6 (/usr/doc/hel), it says that troff, eqn, etc. are not part of the distro, and even though there are man pages, the utils are not present in my base v6 install. I know this because I copied the hel0-hel5 files and naa over to my mac and used groff to make ps files and ps2pdf to turn those files into pdfs. While they came out ok, there was some overlapping text and the math equations were imperfect. I figured if I could do more preprocessing in v6 before moving the files to the mac, they might come out better, but the utils aren't there, as noted above. Do we have the utils as bits somewhere (or is this an oblique reference to 1BSD)?
Thanks,
Will
Sent from my iPhone
There is a file, intro, in /usr/doc/man/man0, that is a system introduction prepared for a ‘Graphic System phototypesetter... in troff’. I was wondering if there was a way to display the file in v6 on the terminal, similar to displaying man pages:
nroff /usr/doc/man/man0/naa /usr/doc/man/man1/write.1
I couldn't find a troff command, and the output from various nroff incantations was less readable than cat.
Thanks,
Will
> On 12/28/18 7:35 PM, Warren Toomey wrote:
> > I just tried it here. I had to do:
> > chdir ken; ...
> > ar r ../lib1 *.o
> > chdir ../dmr; ...
> > ar r ../lib2 *.o
On Fri, Dec 28, 2018 at 08:02:55PM -0600, Will Senn wrote:
> I wound up doing:
> chdir ken
> cc -c -O *.c
> ar r ../lib1 main.o
> ar r ../lib1 alloc.o
> ar r ../lib1 iget.o
> ar r ../lib1 prf.o
> ar r ../lib1 rdwri.o
> ar r ../lib1 slp.o
> ar r ../lib1 subr.o
> ar r ../lib1 text.o
> ar r ../lib1 trap.o
> ar r ../lib1 sig.o
> ar r ../lib1 sysent.o
> ar r ../lib1 clock.o
> ar r ../lib1 fio.o
> ar r ../lib1 malloc.o
> ar r ../lib1 nami.o
> ar r ../lib1 pipe.o
> ar r ../lib1 sys1.o
> ar r ../lib1 sys2.o
> ar r ../lib1 sys3.o
> ar r ../lib1 sys4.o
>
> rm *.o
>
> chdir ../dmr
> cc -c -O *.c
>
> ar r ../lib2 bio.o
> ar r ../lib2 tty.o
> ar r ../lib2 dc.o
> ar r ../lib2 dn.o
> ar r ../lib2 dp.o
> ar r ../lib2 kl.o
> ar r ../lib2 mem.o
> ar r ../lib2 pc.o
> ar r ../lib2 rf.o
> ar r ../lib2 rk.o
> ar r ../lib2 tc.o
> ar r ../lib2 tm.o
> ar r ../lib2 partab.o
> ar r ../lib2 rp.o
> ar r ../lib2 lp.o
> ar r ../lib2 dhdm.o
> ar r ../lib2 dh.o
> ar r ../lib2 dhfdm.o
> ar r ../lib2 sys.o
> ar r ../lib2 hp.o
> ar r ../lib2 ht.o
> ar r ../lib2 hs.o
> rm *.o
>
> Then I continued with the system build and it worked and my changes were
> there!
> Will
Yes, order will be important, I forgot. There's no ranlib in v6 :-)
Cheers, Warren
> From: Warren Toomey
> I just tried it here. I had to do:
> ...
> ar r ../lib1 *.o
> ...
> to get them to rebuild. Otherwise, I had empty libraries.
Duhh. I never noticed the missing "*.o"!
I wonder how that one slipped through? Looking at 'run', it really does look
like it was used to prepare the systems on the distribution tape. So probably
the libraries just happened to already hold the latest and greatest, so that
error had no effect.
The thing with needing to order the library contents properly to cause all the
modules to get loaded is, I reckon, the reason why 'ar' has those arguments to
specify where in the archive a given file goes.
Noel
So... I whacked /usr/sys/lib1 and lib2 ‘accidentally’ meaning I logged in as bin changed to /usr/sys and typed rm lib1 and rm lib2 :). Now, I was thinking at the time that I could regenerate them... this seems like a possibility, but I can’t seem to get them back.
sh run as bin doesn’t do it.
So, what magic incantation is required to rebuild them.
What motivated the exploration was a desire to modify main.c and see those changes manifest.
Help.
Thanks,
Will
Sent from my iPhone
We gained John von Neumann on this day in 1903, and if you haven't heard
of him then you are barely human... As computer science goes, he's right
up there with Alan Turing. There is speculation that he knew of Babbage's
work; see
https://cstheory.stackexchange.com/questions/10828/the-relation-between-bab…
.
-- Dave
Do any fellow TUHS subscribers have any experience with NFS,
particularly in combination with Kerberos authentication?
I'm messing with something that is making me think that Kerberos
authentication (sec=krb5{,i,p}) usurps no_root_squash.
Meaning that root can't access files owned by other users with go-rwx.
Almost as if no_root_squash wasn't configured on the export.
Does anyone have a spare bone that they would be willing to throw my way?
--
Grant. . . .
unix || die
I thought I read a different email saying that there will be a track
about the 50th anniversary. But cannot find any details or reference to
it now.
Does anyone have information about Unix 50th celebration(s)?
Is it time for paper submissions? ...
====
If you mean for the 2019 USENIX Annual Technical Conference,
the CFP is
https://www.usenix.org/conference/atc19/call-for-papers
The submission deadline is about three weeks away, on
2019-01-10.
I see nothing explicit about a UNIX 50th celebration, alas.
At least one program-committee member is on this list; perhaps
more information will appear. Or there's a contact address
for the program co-chairs on the web page cited above.
Norman Wilson
Toronto ON
Hi all, I've just heard that the Usenix board of directors do not want
to explicitly celebrate the 50th anniversary of Unix.
It's been suggested that we, the TUHS members, both lobby the board and
also offer our assistance to help organise such a celebration.
Who, on the list, would put their hands up to help organising something
that coincided with the 2019 Usenix ATC in July 2019?
I'd like to get the bare bones of an organising team, then approach the
Usenix board, offer our help and ask them to support us.
What do you think? 11 months to go.
Thanks, Warren
P.S. Nokia Bell Labs are also going to organise something, possibly a month
earlier but I have no solid details yet.
Hi folks,
This is a little sideways from on topic, but not too far. Is there a good open source implementation of a PDP-11 for FPGA in Verilog/VHDL that works well for Unix v6+? Google turns up a number, but I'm hoping some of y'all have actual experience with one that you could recommend over others. I'm financially challenged, so it's a requirement that it run on cheap FPGAs, not some Tesla prototype :)
Regards,
Will
Sent from my iPhone
Hi all, also an off-topic question. I got a private e-mail from a person
who has been trying to collect old academic papers from the CompSci/IT
field. Does anybody know of an existing archive for old CS/IT papers?
Thanks, Warren
Hi
One of the reasons I enjoy emacs is Meta-X dissociated-press, which
turns the most turgid bureaucratic prose into something truly worth
reading.
Has anybody documented or provided a timeline for the emergence of the
Travesty Generator? (I know that text processing was one of the major
focuses of university research, as opposed to the more utilitarian
focuses of the scientific computing or corporate record keeping areas.
One early CompSci book I got from a second-hand booksellers in
Christchurch before the earthquakes, had a nice section on SNOBOL.)
So who wrote the first Travesty Generator/s?
https://www.ebay.com/str/Zees-Fine-Finds
A few old DEC boards/modules.
I don't think there's anything PDP-11 related, but figured someone on
this mailing list might find something interesting.
art k.
> The journey is documented here:
> http://1587660.websites.xs4all.nl/cgi-bin/9995/timeline
>
> The network code is in a different tree, I'll move it over to the above tree over the weekend.
Posted the network bit in the online repo; it's in the v6net directory.
Also fixed the instability - it is quite satisfying to login to v6 from a 'nc' client on modern hardware.
However, I also found that the BBN code from November 1981 is what it says on the can: beta.
I'll move to the October 1982 code when I find some time.
Paul
PS, this is the 'server' that nc connects to:
#define unchar unsigned char
#define netaddr unsigned long
#include "con.h"
#include <stdio.h>
#include <string.h>

/* pack four octets into a 32-bit IP address */
unsigned long
ipaddr(w,x,y,z)
int w,x,y,z;
{
    unsigned long ip;

    ip = w;
    ip = (ip<<8)|x;
    ip = (ip<<8)|y;
    ip = (ip<<8)|z;
    return ip;
}

struct con con;

/* make the connection fd be stdin/stdout/stderr, then exec a shell on it */
void
child(fd)
int fd;
{
    close(0);
    dup(fd);
    close(1);
    dup(fd);
    close(2);
    dup(fd);
    close(fd);
    execl("/bin/sh", "[net-sh]", 0);
}

main()
{
    int sd;

    /* passive TCP open: accept a connection from 192.168.1.114
       (any foreign port) to local port 4000 */
    con.c_mode = CONTCP;
    con.c_fcon = ipaddr(192,168,1,114);
    con.c_lcon = ipaddr(172,16,0,2);
    con.c_fport = 0;
    con.c_lport = 4000;
    sd = open("/net", &con);
    printf("Connected: fd %d\n", sd);
    child(sd);
    close(sd);
}
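Reading the listing, the matching client on the modern side would presumably be run from the 192.168.1.114 host as something like 'nc 172.16.0.2 4000' - that is an inference from the c_lcon/c_fcon/c_lport fields, not something stated in the code.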
Hi all,
A Reddit user is asking about Space Traveller:
>I am an OpenBSD user and am interested in finding the original source code for Ken Thompson's Space Traveller. I have been searching the web for some time now, but have sadly come up empty handed. Does anyone here by chance know where I could find a copy of its source code? I am wanting to port it over to OpenBSD as a thank you to its helpful and welcoming community.
> > the code size is about 25KB for both a minimal V6 kernel and the TCP
> > stack, the rest is data.
>
> That's impressively small; the MIT V6+ with 'demux only in the kernel' was
> 40KB for the combined code (although I can't easily get separate figures for
> the networking part and the rest).
I think my sentence was confusing: it is ~25KB each, so about 50KB combined.
The original V6 kernel was about 29KB (see https://www.tuhs.org//cgi-bin/utree.pl?file=V6). I've simplified the TTY driver, support only one type of disk driver, dropped shared text segments, and dropped FP emulation; what remains is about 25KB. Note that the SLIP is merely via a "super RAW" mode on the TTY driver, so I don't need to include the bulky IMP interface driver. Even at 30KB, the V6 kernel must have offered the best bang/buck ratio in the history of software, imho.
> > The Gurwitz code also has an Ethernet driver (note ARP was not invented
> > yet)
>
> How did it get Ethernet addresses?
:^) See here: https://www.tuhs.org//cgi-bin/utree.pl?file=BBN-Vax-TCP/bbnnet/netconf.c
"Someday this will be generated from the configuration file." I think later it did, but I don't have that code.
> > a project to make V6 run ... on a TI990 clone
>
> Oh, about the basic part of this: did you start with a plain V6 distribution?
> So you've had to do all the machine language stuff from scratch (and modify
> things in C like estabur())?
> What are you using for a C compiler ? Is there one out there, or did you have
> to do your own?
It has been a journey. I started with the 2.11BSD compiler and ported that to the TI990 architecture (more precisely the 9995 chip, which is similar to a T11 chip).
I debugged that to make XINU run, and then moved on to LSX (as recovered by the BK-UNIX project). Then I started with the V6 kernel from the TUHS website and made that work. Dave Pitts made it work on a real TI990 (he has a TI990/10 and a TI990/12 in working order). So, yes, I did bootstrap all the low level stuff from scratch.
After a three year hiatus I resumed work on this, integrating the Gurwitz TCP stack.
The journey is documented here:
http://1587660.websites.xs4all.nl/cgi-bin/9995/timeline
The network code is in a different tree, I'll move it over to the above tree over the weekend.
Paul
> From: Paul Ruizendaal
> a project to make V6 run ... on a TI990 clone
Oh, about the basic part of this: did you start with a plain V6 distribution?
So you've had to do all the machine language stuff from scratch (and modify
things in C like estabur())?
What are you using for a C compiler ? Is there one out there, or did you have
to do your own?
> In my setup, network connectivity is via a SLIP interface.
Yeah, that's probably the way to go, to start with.
Noel
> From: Paul Ruizendaal
> project to make V6 run with the Gurwitz TCP stack on a TI990 clone
> (which is pretty similar to a PDP11).
Neat!
> the code size is about 25KB for both a minimal V6 kernel and the TCP
> stack, the rest is data.
That's impressively small; the MIT V6+ with 'demux only in the kernel' was
40KB for the combined code (although I can't easily get separate figures for
the networking part and the rest).
> The Gurwitz code also has an Ethernet driver (note ARP was not invented
> yet)
How did it get Ethernet addresses?
Noel
> I'm sure it's been attempted before, but would anyone be up to the
> challenge of trying to get that going with networking on an
> 18-bit-address-space pdp11?
By coincidence I’m in the middle of a project to make V6 run with the Gurwitz TCP stack on a TI990 clone (which is pretty similar to a PDP11). It runs without separate I/D as two processes in about 100KB.
The Gurwitz TCP stack was the reference implementation for the VAX that BBN did in 1981. It is in the TUHS archive:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=BBN-Vax-TCP
As documented in IEN168, the actual TCP processing happens in a separate kernel process, much like process 0 (swapper) in Unix itself. It turns out that the network process shares no data (other than the u struct) with the kernel proper and can be run in a separate address space. Just a few ’thunks’ are needed: open/read/write/close from the kernel to the TCP stack and sleep/wakeup in the other direction.
A V6 Unix kernel runs in 48KB with buffers, and the TCP stack with buffers needs about the same; both must remain resident - i.e. it ties up about 100KB of the 256KB core on an 18-bit machine. I suppose when using separate I/D it can run without thunks: the code size is about 25KB for both a minimal V6 kernel and the TCP stack, the rest is data.
In my setup, network connectivity is via a SLIP interface. The Gurwitz code also has an Ethernet driver (note ARP was not invented yet), but I’m not using that. I’m happy to report that this 1981 tcp/ip code can still talk to current OSX and Linux machines.
Just yesterday I got the setup working and I can run minimalist telnet connections etc. Alas it is not quite stable yet, it tends to crash after 5-10 minutes of use.
The BBN reference implementation includes FTP and Telnet servers and clients which I think will still interoperate with current versions. As a final remark note that this BBN code uses an API that is almost unchanged from the API as used on NCP Unix. As compared to sockets this means that a listening connection is not a rendez-vous, but blocks until a connection is made (and then becomes an active connection, i.e. stops listening), and that there is no “select” type functionality.
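To make that last point concrete, here is a sketch of what a server that wants to keep listening has to do under such an API; the con setup and child() follow the listing earlier in this thread, but the loop itself (and the hypothetical setupcon() helper) are a guess at the idiom, not code from the BBN distribution:

    #include "con.h"

    struct con con;

    /* Sketch only: a listening open blocks and then *becomes* the active
       connection, so to accept another caller the server must hand the
       connection to a child and re-open /net. */
    void
    serve()
    {
        int sd;

        for (;;) {
            setupcon(&con);             /* hypothetical: refill c_mode, c_lport, etc. */
            sd = open("/net", &con);    /* blocks until a caller arrives */
            if (sd < 0)
                break;
            if (fork() == 0) {
                child(sd);              /* child() as in the earlier listing */
                exit(0);
            }
            close(sd);                  /* parent goes back to listening */
        }
    }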
PS:
> IIRC, outbound packets are copied into kernel buffers
IDRC; according to the documentation, outbound packets are DMA'd directly from
user memory. I have yet to read the code to verify this.
> we must have added PTY's of some sort
There is indeed a PTY driver; it has comments from BBN'ers who edited it, so
perhaps we got it from BBN.
> I don't remember which one SMTP used.
The 'simple' TCP.
> The whole thing worked _really_ well. Alas, I don't think anyone else
> picked up on it.
So I found a long list of people we sent tapes to. Oh well....
> The kernel code is not that large, it should even run on a /40, without
> overlays (although the number of disk buffers would probably get hit).
Well, maybe... Here is the output of 'size' on the last Unix image for that
machine:
40560+3098+44594
It was a /45, so split I/D (no overlays, though). How much could be trimmed
out of that, I'm not sure.
Noel
Hi,
I have an 11/45 I'm hoping will be running soon.
I'd like to run 2.9BSD on it because it's the most highly functional system
I know of that has "official hopes" to fit on such a restrictive machine.
I've heard that it's really unlikely / tough to get a kernel built that'll
run tcp (I care mostly about ftp and telnet) on such a
small-memory-footprint machine. Is this true?
Would anyone be willing to do a quick mentoring / working session with me
to get me up to speed with the constraints I'm facing here and possibly
give me a jump on making adjustments to build such a kernel if possible?
thx
jake
P.S. There's kind of an implied challenge in the 2.11bsd setup docs,
mentioning that "2.11BSD would probably only require a moderate amount of
squeezing to fit on machines with less memory, but it would also be very
unhappy about the prospect."
I'm sure it's been attempted before, but would anyone be up to the
challenge of trying to get that going with networking on an
18-bit-address-space pdp11?
> From: Clem Cole <clemc(a)ccc.com>
> This is why I suggested that if you really want telnet and ftp to the
> PDP-11, you might be better off moving the networking stack out of the
> kernel
Really, the answer is that I need to get off my @ss and put the MIT V6+ system
up (I have all the files, just need to get a round tuit).
It has TCP/IP, but it is not all crammed into the kernel. And unlike the early
BBN V6, it doesn't do TCP as a single process to which all the other
client/server processes talk via IPC.
Instead, the only thing in the kernel is inbound demuxing, and minimal outbound
processing. (IIRC, outbound packets are copied into kernel buffers; an earlier
version of the networking interface driver actually did do inbound and outbound
DMA directly from buffers in the user's process, but only one process could use
the network interface at a time.)
The TCP code was a library that was built into the user process which did the
server/client applications. (The servers which supported login, like FTP,
needed to run as root, like the ordinary login, setuid'ing to the entered
user-id.) I don't remember if we supported server Telnet, but I think we
did. So we must have added PTY's of some sort, I'll have to check.
Since the TCP was in the user process, we actually had a couple of different
ones, depending on the application. Dave Clark had done a quick-n-dirty TCP on
the Alto (in BCPL) which was only good for things like user Telnet, not for
applications that sent a lot of data. We ported that one for the first TCP; we
later did a 'high-speed bulk data' TCP, used for FTP, etc. I don't remember
which one SMTP used.
The whole thing worked _really_ well. Alas, I don't think anyone else picked
up on it.
The kernel code is not that large, it should even run on a /40, without
overlays (although the number of disk buffers would probably get hit). And
since the TCP is in user processes, it could all get swapped out, so it would
run OK on machines without that much physical memory.
The issue is going to be that it will need a new network interface driver,
since I think the only driver ever done for it was for Pronet. And now we get
back to the 'what interfaces are available' question. Doing a DEC driver would
allow use of DEQNA's and DELQA's on QBUS machines, which would be optimal,
since they are common. And people could bring up Unix with TCP/IP on -11/23's.
But we'd have to add ARP (which I would do as a process, with only the
IP->Ether address mapping table stored in the kernel). I wrote a really nice
ARP for the C Gateway that could easily be used for that.
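Purely to illustrate the shape of that split (this is not the MIT or C Gateway code; the names and sizes are invented), the kernel-resident piece amounts to little more than a small table:

    #define ARPTAB 32

    struct arpent {
        unsigned long at_ip;            /* IP address */
        unsigned char at_eth[6];        /* Ethernet address */
        char at_valid;                  /* slot in use */
    };

    struct arpent arptab[ARPTAB];

    /* look up an IP address; return its slot, or -1 if unknown, in which
       case the user-level ARP process would be asked to resolve it and
       fill the table in */
    int
    arplook(ip)
    unsigned long ip;
    {
        register int i;

        for (i = 0; i < ARPTAB; i++)
            if (arptab[i].at_valid && arptab[i].at_ip == ip)
                return(i);
        return(-1);
    }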
Noel
> From: Warner Losh
> I kinda doubt it has good NCP support: it was released in November of
> 1983.
Wow, that far back? I'd assumed it was later (considerably later).
Looking at the 2.9 networking stuff:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=2.9BSD/usr/net/sys/net
it does indeed have _no_ NCP support.
> I'd get it running in simh, then move to real hardware.
Absolutely; running in an emulator is, I have found, a key step in getting an old
OS running. I've found Ersatz-11 to be really good for PDP-11 emulation.
> It's going to take a lot of elbow grease to make that work, I think.
Indeed; part of the problem, if the goal is going to be 'run it on real
hardware' is 'what network interface to use'.
All the ARPANET interfaces are out. There are drivers there for Proteon,
Ungermann-Bass, Xerox 3MB Ethernet, etc interfaces, but i) where you gonna
find one, and ii) you'll need a router to connect up to most other things.
There's a driver for the Interlan Ethernet interface, but AFAIK, those are
non-existent. (If anyone has one they're willing to part with, please let me
know!)
DEC Ethernet interfaces are available, but i) only the QBUS ones are common
(DEUNAs and DELUAs are almost impossible to find, in my experience), and ii)
it would need a driver.
> Ultrix-11 is of similar vintage, and similar functionality and does boot
> on the 18-bit 11's.
Yes, definitely worth looking at; I know it had TCP/IP (we had it on our
-11/73 at Proteon), but I don't know which interfaces it supported; probably
just the DEC ones (which, given the above, is not necessarily a Bad Thing).
Noel
> From: Grant Taylor
> What protocols did 2.9BSD support? Did it have NCP?
NCP was turned off on 1 January, 1983. What do you think?
> Would it be any easier to use an external NCP to TCP/IP gateway?
Such as?
Noel
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron),
was born on this day in 1815; arguably the world's first computer
programmer and a highly independent woman, she saw the potential in
Charles Babbage's new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer,
he not only had a hand in developing Unix but also gave us the ROFF
series.
Who'd've thought that two computer greats would share the same birthday?
-- Dave
We gained Rear Admiral Grace Hopper on this day in 1906; known as "Amazing
Grace", she was a remarkable woman, both in computers and the Navy. She
coined the term "debugging" when she extracted a moth from a set of relay
contacts from a computer (the Harvard Mk I) and wrote "computer debugged"
in the log, taping the deceased Lepidoptera in there as well. She was
convinced that computers could be programmed in an English-like language
and developed Flow-Matic, which in turn became, err, COBOL... She was
posthumously awarded the Presidential Medal of Freedom in 2016 by Barack
Obama.
-- Dave
Very little in language design is so contentious as comment conventions.
C borrowed the PL/I convention, which had the virtue of being useful
for both in-line and interlinear comments, and did not necessitate
marking every line of a long comment. Nobody in the Unix lab had
had much experience with the convention, despite having worked on
Multics for which PL/I was the implementation language.
And how did PL/I get the convention? It was proposed by Paul
Rogoway at the first NPL (as it was then called) design-committee
meeting that I attended. Apparently the topic had been debated
for some while before and people were tired of the subject. Paul
was more firmly committed to his new idea than others were to
old options, so it carried more or less by default. Besides, there
was a much more interesting topic on the agenda. Between the
previous meeting and that one, George Radin had revamped the
entire NPL proposal from mainly Fortran-like syntax to Algol-like.
That was heady enough stuff to divert people's attention from
comments.
As for inexperience: the comment conventions of previous
languages had not fostered the practice of commenting out
code. So that idea, which is the main impetus for nesting
comments, was not in anybody's mind at the time. Had it
been, nesting might well have carried the day. It probably could
have been changed before 1980, but thereafter there were
too many C compilers. Then standards introduced even more
conservatism. Perhaps Ken can remember whether the notion
was ever seriously considered.
Doug
Another DEC compiler that I forgot was the original C compiler for
Tru64 Unix on the Alpha. This was done at the DECwest facility in
Seattle (which originally had been set up by Dave Cutler). It was a
very strict implementation of the ANSI C89 standard--it had no
extensions such as K&R support. One customer called it the "Rush
Limbaugh of C compilers" because it was extremely conservative and you
couldn't argue with it.
-Paul W.
Hi all,
I haven't seen this on the list yet (apologies if I missed it):
https://unix50.org/
You can get a shell in various historical versions of UNIX.
Enjoy !
> Why can't c language comments be nested?
For comments that really are comments, what would be the point?
For comments that are really removal of code - commenting out - there
is a better mechanism, #if (or #ifdef), which does nest.
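A minimal illustration (the identifiers here are invented):

    #include <stdio.h>

    int
    main()
    {
        int x = 1;
    #if 0
        /* disabled code: unlike a comment, the skipped region may itself
           contain comments and further #if/#endif pairs */
        x = 2;
    #ifdef EXTRA_DEBUG
        x = 3;
    #endif
    #endif
        printf("%d\n", x);              /* prints 1 */
        return 0;
    }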
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> So how was it that so many smart - and somewhat like minded it seems
> people end up there? [At Bell Labs]
1. Bell Labs had a great reputation, though it was not at first known
for computing.
2. Research recruiters were researchers themselves, not HR people.
3. Recruiting was for quality hires, not for particular jobs;
complementary talent was valued.
4. Whom a candidate met on site was determined after s/he gave a seminar;
this promoted good matchups.
5. Researchers decided for themselves what to work on--either self-
generated or an interesting problem from elsewhere in the company.
6. If you needed to know something in most any field, you could usually
find a willing expert to get you on track to an answer.
7. Annual merit review was collegial. No one lost out because of unlucky
draw of a supervisor.
8. Collegiality in fact beat that of any faculty I know. Office doors
were always open; new arrivals needed only to do good work, not to
chase tenure.
This culture grew from the grand original idea of the Labs: R&D for
the whole of AT&T funded by the whole of AT&T, with a long time horizon.
I joined thinking the Labs was good seasoning for academia. The culture
held me for 39 years.
The premise was viable in the days of regulated monopoly. It has been
greatly watered down since.
Doug
Ken's story got me thinking about stuff I would still like to learn
and his comment about "when I got to Bell Labs"... made me wonder
how did Ken, Dennis, Brian, Joe and the rest of the crew make their
way to Bell Labs?
When I was just starting out, Sun was sort of the Bell Labs of the
time (not that Sun was the same as Bell Labs but it was sort of
the center of the Unix universe in my mind). So I wanted to go
there and had to work at it a bit but I got there.
Was Bell Labs in the 60's like that? If you were a geek was that
the place to go? I was born in '62 so I don't have any memory of
how well known the Labs were back then.
So how was it that so many smart - and somewhat like minded it seems -
people end up there?
--lm
> From: Toby Thain
> He made amends by being early to recognise that problem, and propose
> solutions, in his 1977 ACM Turing Award lecture
Actually, I'd consider a far bigger amend to be his work on Algol 60 (he was
one of the main contributors), which Hoare so memorably described as "a
language so far ahead of its time that it was not only an improvement on its
predecessors but also on nearly all its successors".
AFAICT, although Algol 60 itself is no longer used, basically _every_ modern
programming language (other than weird, parallel ones, etc) is heavily
descended from Algol 60 (e.g. C, via CPL).
As for FORTRAN, it's worth recalling that it was originally for the IBM 704,
which was their very _first_ commercial computer with core memory! And not a
lot of it - early 704's came with a massive 4KW of main memory! So the
compiler had to be squeezed into _very_ small space - and it reportedly did an
excellent job of emitting efficient code (at a time when a lot of people
thought that couldn't be done, and so were hostile to the concept of writing
program in HLL). And the compiler had to be written entirely in assembler, to
boot...
Which brings up an interesting query - I wonder when/what the last compiler
written in assembler was? I gather these days compilers for new machines are
always bootstrapped as cross-compilers (an X compiler for the Y machine is
written in X, run through the X compiler for the [existing] Z machine, and
then run though itself, on the Z machine, to produce binary of itself for the
Y machine).
Noel
> On Dec 4, 2018, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
>
> The original Tandem OS (called Guardian at the time) was written in Tandem's TAL (Transaction Application Language, amongst other productions), a vague evolution of HP's SPL that looked more like Algol, starting in about 1974. That is also the earliest I know of an operating system being implemented entirely in a high level language.
Most likely the earliest operating system written in a high-level language was the one for the Burroughs B5000 (early 1960s), written in a dialect of Algol 60. Others: Multics, written in PL/1 (starting in mid 1960s), the operating system for the Berkeley Computer Corporation’s BCC-500, written in BCC SPL (system programming language) (late 1960s), OS6 by Stoy and Strachey, written in BCPL (early 1970s), Xerox Alto OS, written in BCPL (about 1974).
The ARPAnet reached four nodes on this day in 1969 (anyone know their
names?); at least one "history" site reckoned the third node was connected
in 1977 (and I'm still waiting for a reply to my correction). Well, I can
believe that perhaps there were only three left by then...
Hmmm... According to my notes, the nodes were UCSB, UCLA, SRI, and Utah.
-- Dave
We lost Dr. John Lions on this day in 1998; he was one of my Comp Sci
lecturers (yes, I helped him write The Book, and yes, you'll find my name
in the back).
-- Dave
As every computer programmer should know, John Backus was emitted in 1924;
he gave us the BNF syntax (he is the "B"), but he also gave us that
FORTRAN obscenity... Yeah, it was a nice language at the time; the
engineers loved it, but the computer scientists hated it (have you ever
tried to debug a FORTRAN program that somebody else wrote?).
Trivia: there is no way that FORTRAN can be described in any syntax; it is
completely ad-hoc.
-- Dave
I did just that. The National Bureau of Standards picked it up
in NBS Handbook 131, "Using ANS FORTRAN" (1980). It is expressed
in the same formalism that Burroughs used for Algol.
Doug
This is a cross-posting from the groff mailing list, where
it was speculated that without roff there might be no Unix.
Old hands will be familiar with the story.
> Without roff, Unix might well have disappeared.
The patent department and the AT&T president's office are the
only in-house examples I know where Unix was adopted because
of *roff.
The important adoptions, which led Berk Tague to found
a separate Unix Support Group, were mainstream telephone
applications and PWB, a Unix-based IDE.
The first telephone application happened in the field. An
engineer in Charlotte, NC, heard of this cheap easily programmed
system and proposed to use it to automate the scheduling and
dispatch of maintenance on the floor of a wire center. Ken
visited to help get them started.
The first Bell Labs telephone application was automating
the analysis of central-office trouble reports. These had
been voluminous stacks of punched cards that reported every
anomaly detected in huge electromechanical switches. The Unix
application captured the data on line and identfied systematic
failures in real time.
The patent adoption was a direct result of Joe Ossanna's
salesmanship. Other early adopters were self-motivated,
but the generous support lent by Ken, Joe, and others was
certainly a tipping force that helped turn isolated events
into a self-sustaining movement.
Doug
Regardless of standards considerations, if there's any advice
that needs to be hammered into man authors, it's to be concise
and accurate, but not pedantic. As Will Strunk commanded,
"Omit needless words."
The most needless words of all are promotional. No man page
should utter words like "powerful", "extraordinarily versatile",
"user-friendly", or "has a wide range of options".
As another instance of the rule, it would be better to recommend
short subtitles than to help make them long by recommending
quotes. If anything is said about limited-length macros, it
would best be under BUGS.
As editor for v7-v10, I would not offer v7 as a canonical
model. It owed its use of boldface in SYNOPIS to the limited
number of fonts (Typically R,F,I,S) that could be on our
typesetter at the same time. For v9 we were able to follow
Kernighan and adopt a distinct literals font (L, which happened
to be Courier but could have been identified with bold had we
wished). I still think this is the best choice.
As for options, v7 is a very poor model. It has many pages
that describe options in line, just as v1 had done for its
few options (called flags pre-v7). By v10 all options were
displayed in a list format.
For nagging reasons of verbal continuity, the options displays
were prefaced by *needless words* like, "The following options
are recognized". A simple OPTIONS heading would be better.
Unfortunately, an OPTIONS heading would intrude between the
basic description and less important details that follow
the options. (I don't agree that it would come too closely
after DESCRIPTION; a majority of man pages already have even
shorter sections.) OPTIONS could be moved to the end of
DESCRIPTION. However, options may well be the biggest reason
for quick peeks at man pages; they should be easy to spot. It
has reasonably been suggested that OPTIONS should be a .SS
subsection. That might be followed by .SS DETAILS.
Doug
Grant:
Sorry, I mistook the context to be that you wrote something to write the
cf file / language for you.
===
Yep, evidently I didn't write clearly enough. Sorry about that.
(Which links us nicely back to the Subject: line, and
the concise clarity of the original manual-entry style!)
Norman Wilson
Toronto ON
WIlliam Cheswick <ches(a)cheswick.com> wrote:
> As for the configuration: when Norman Wilson moved to Toronto, he
> implemented some form of little language for configuring sendmail,
> treating it somewhat as an assembly language.
Bill's half right. I didn't invent a language; I used what was there.
I decided that the best way to deal with Sendmail's own configuration
language was to treat it as I would the assembly language for a
specialized, irregularly-designed microprocessor:
1. Understand as well as possible what the instructions actually do;
2. Write the simplest possible program that will get the job done;
3. Avoid extra layers of macros and so on that hide the details, because
that also hides the irregularities and makes it harder to understand
and debug;
4. For the same reason, don't just copy someone else's program that
does something complicated; write your own and do things simply.
Sendmail has plenty of design flaws (not just in the language), as
I'm sure Eric will acknowledge; but I think the biggest problem
people have had with it is that most people copied the rather-complicated
sample configuration files shipped with the source rather than just
reading the manual, doing a few experiments to understand the behaviour,
and writing something simple.
On the other hand, I've never quite understood why so many people
treat device drivers as scary and untouchable, copying an existing
one and hacking it until it seems to work rather than understanding
what the device actually does and writing a simple program to control
it. So perhaps my brain just doesn't work normally.
Norman Wilson
Toronto ON
On 2018-11-29 1:04 PM, Ken Thompson wrote:
> its name became astro and it is on the old backup tapes.
> written in c. it has old elements for everything. published
Thanks.
Was it rewritten? Your story has it dating at least back to 1966, which
made me think it might not have been C.
--Toby
> elements are now in a different form and a different time
> base, so it needs updating to bring it into the 21st century.
> if all you want is the earth, moon, and sun, then it might
> be ok. the earth rotation fudge (delta-t) might need to be
> re-estimated to get second accuracy.
>
>
> On Thu, Nov 29, 2018 at 9:54 AM, Toby Thain <toby(a)telegraphics.com.au> wrote:
>> On 2018-11-27 11:48 PM, Ken Thompson via TUHS wrote:
>>> ...
>>> a version of azel was maintained all the time
>>> i was at bell labs.
>>
>> As soon as I read this it's been on my mind to ask: Does this program
>> survive? Presumably it was Fortran? What did it run on?
>>
>> --Toby
>>
>
> Joe sold the (not really existent) UNIX system to the patent department of AT&T,
> which in turn bought the urgently needed PDP11. Without that there would be no
> UNIX. Without Joe there would be no UNIX.
That one's an urban legend. The PDP-11 was indeed a gift from another department,
thanks to a year-end budget surplus. Unix was up and running on that machine when
Joe corralled the patent department.
Nevertheless the story is consistent with Joe's talent for playing (or skirting)
the system to get things done. After Joe, the talent resurfaced in the
person of Fred Grampp. Lots of tales await Grampp's popping up from Dave
Horsford's calendar.
> Runoff was moved to Multics fairly early: here's its entry from the Multics
> glossary: "A Multics BCPL version of runoff was written by Doug McIlroy
> and Bob Morris."
Morris did one port and called it roff. I did the BCPL one, adding registers,
but not macros. Molly Wagner contributed a hyphenation algorithm. Ken
and/or Dennis redid roff in PDP-11 assembler. Joe started afresh for the
grander nroff, including macros. Then Joe bought a phototypesetter ...
> Sun was sort of the Bell Labs of the time ... I wanted to go there and had
> to work at it a bit but I got there. Was Bell Labs in the 60's like that?
Yes, in desirability. But Bell Labs had far more diverse interests. Telephones,
theoretical physics, submarine cables, music, speech, fiber optics, Apollo.
Whatever you wanted to know or work on, you were likely to find kindred
types and willing management.
> was that voice synthesizer a votrax or some other thing?
Yes. Credit Joe again. He had a penchant for hooking up novel equipment.
When the Votrax arrived, its output was made accessible by phone and also
by loudspeaker in the Unix lab. You had to feed it a stream of ASCII-
encoded phonemes. Lee McMahon promptly became adept at writing them
down. After a couple of days' play in the lab, Lee was working in his
office with the Votrax on speakerphone in the background. Giving no
notice, he typed the phonemes for "It sounds better over the telephone".
Everyone in the lab heard it clearly--our own "Watson, come here" moment.
But phonemes are tedious. Believing that it could ease the
task of phonetic transcription, I wrote a phonics program, "speak",
through which you could feed English text for conversion to
phonemes. At speak's inaugural run, Bob Morris typed one word,
"oarlock", and pronounced the program a success. Luckily he didn't
try "coworker", which the program would have rendered as "cow orker".
Max Matthews from acoustics research called it a breakthrough.
The acoustics folks could synthesize much better speech, but it
took minutes of computing to synthesize seconds of sounds. So
the Unix lab heard more synthetic speech in a few days than the
experts had created over all time.
One thing we learned is that people quickly get used
to poor synthetic speech just like they get used to
foreign accents. In fact, non-native speakers opined
that the Votrax was easier to understand than real people,
probably due to the bit of silence that the speak program
inserted between words to help with mental segmentation.
One evening someone in the Unix room playing with the
synthesizer noticed a night janitor listening in from
the corridor. In a questionable abuse of a non-exempt
employee, the Unix person typed, "Stop hanging around
and get back to work." The poor janitor fled.
AT&T installed speak for the public to play with at Epcot.
Worried that folks would enter bad words that everybody
standing around could hear, they asked if I could filter them
out. Sure, I said, just provide me with a list of what to
delete. Duly, I received on letterhead from the VP for
public relations a list of perhaps twenty bad words. (I have
always wondered about the politics of asking a secretary to
type that letter.) It was reported that girls would try the
machine on people's names, while boys would discover that
the machine "didn't know" bad words (though it would happily
pronounce phonetic misspellings). Alas, I mistakenly discarded
the infamous letter in cleaning house to leave Bell Labs.
Doug
We lost J.F. Ossanna on this day in 1977; he had a hand in developing
Unix, and was responsible for "roff" and its descendants. Remember him,
the next time you see "jfo" in Unix documentation.
-- Dave
Hello,
For your information (and to reduce my guilt for posting off topic
sometimes), I have 4.1BSD running with Chaosnet patches from MIT. I'm
adding a Unibus CH11 network interface to SIMH. It's not working fully
yet, but it's close.
Best regards,
Lars Brinkhoff
> From: Larry McVoy
> (*) I know that nroff was "new run off" and it came from somewhere, MIT?
> Some old system ... I've never seen docs for the previous system and I
> kinda think Joe took it to the next level.
Definitely!
The original 'runoff' was on CTSS, written by Jerry Saltzer. It had a
companion program, 'typset', which was an editor for preparing runoff input. A
memo describing them is available here:
http://web.mit.edu/Saltzer/www/publications/ctss/AH.9.01.html
From the look of things, it didn't have any macro capability.
Runoff was moved to Multics fairly early: here's its entry from the Multics
glossary:
A Multics BCPL version of runoff was written by Doug McIlroy and Bob
Morris. A version of runoff in PL/I was written by Dennis Capps in
1974.
...
Multics documentation was transitioned from the Flexowriters to use of
runoff when the system became self-hosting about 1968. runoff was used for
manuals, release bulletins, internal memos and other documentation for most
of the 70s. To support this use, Multics runoff had many features such as
multi-pass execution and variable definition and expansion that went far
beyond the CTSS version. Multics manuals were formatted with complex macros,
included by the document source, that handled tables of contents and
standard formatting, and supported the single sourcing of the commands
manual and the info files for commands.
So the BCPL version would have been before Bell exited the project. I'm not
sure if the 'macros' comment refers to the BCPL version, or the PL/I. Here's
the Multics 'info' segment about runoff:
http://web.mit.edu/multics-history/source/Multics/doc/info_segments/runoff.…
which doesn't mention macros, but lists a few things that might be used for
macros. It refers to "the runoff command in the MPM Commands" volume (a
reference to "Multics Programmer's manual: Commands and Active Functions") for
details; this is available on bitsavers, see page 3-619 in "AG92-03A",
February 1980 edition.
Noel
> From: Lars Brinkhoff
> Emacs is very much divorced from the Unix philosopy. However, it's
> perfectly in synch with how things are done in ITS.
Hmm. It is complicated, but... the vast majority of my keystrokes are typed
into Epsilon (a wonderful, small, fast EMACS-type editor for Windows, etc
which one can customize in C) - especially since I started, very early on (V6)
to run my shell in an EMACS window, so I could edit commands, and thus I was
pretty much always typing to EMACS. So, it makes sense to me to have it be
powerful - albeit potentially a bit complex.
I say 'potentially' because one could after all restrict oneself to the 4
basic motion commands, and 'delete character'; you don't have to learn what
CTRL-ALT-SHIFT-Q does.
> Stallman .. developing GNU Emacs (from Gosling's version)
Err, I'm not sure how much influence Gosling's was. He had, after all, done
the original EMACS on ITS; I got the impression he just set off on his own
path to do GNU Emacs. (Why else would it be implemented in LISP? :-).
Noel
Hello, everyone:
Could someone recommend a few C programs to write on a computer that has no network access, ideally ones that compile and run with GCC? This would help me kill some time. I like playing the snake game written in C, which is really interesting. Thank you very much!
Caipenghui
Nov 17, 2018
> From: Clem Cole
> Actually I blame the VAX and larger address spaces for much of that and
> not enough real teaching of what I refer to as 'good taste.' When you
> had to think about keeping it small and decomposable, you did. ...
> Truth is, it is a tough call, learning when 'good enough' is all you
> need. ... The argument of course is - "well look how well it works and I
> can do this X" -- sorry not good enough.
Exactly; the bloat in the later Unix versions killed what I feel was the
_single best thing_ about early Unix - which was its awesome, un-matched
bang/buck ratio.
_That_ is what made me such a huge fan of Unix, even though as an operating
system person, I was, and remain, a big fan of Multics (maybe the only person
in the world who really likes both :-), which I still think was a better
long-term path for OSes (long discussion of why elided to keep this short).
I mean, as an operating system, I don't find Unix that memorable; it's (until
recently) a monolithic kernel, with all that entails. Doing networking work on
it was a total PITA! When I looked across at what Dave Clark was able to do on
Multics, with its single-level memory, and layered OS, doing TCP/IP, I was
sky-blue pink with envy.
Noel
Sorry about the recent post. It may seem peripherally
connected to tuhs, but it got there due to overtrained
fingers (or overaged mind). It was intended for another list.
Doug
Hi All.
In https://www.youtube.com/watch?v=_2NI6t2r_Hs&feature=youtu.be Rob Pike
mentions that DMR and Norman Wilson ported Unix to the Cray 1 and that
it was not straightforward.
This sounds interesting. Norman: would you be kind enough to elaborate
on this?
Thanks,
Arnold
On Wed, 14 Nov 2018, Warren Toomey wrote:
>> Hell, I wish I still had that "CSU Tape"; it was Edition 6 with as much
>> of Edition 7 (and AUSAM) that I could shoe-horn in, such as XON/XOFF
>> for the TTY driver. I was known as "Mr Unix 6-1/2" at the time...
>
> Definitely look at the UNSW tapes I have:
> https://minnie.tuhs.org/cgi-bin/utree.pl?file=AUSAM
> and https://www.tuhs.org/Archive/Distributions/UNSW/
> in case any of these are what you are looking for.
I think I did before, but I confess I didn't spend much time on it. My
pride and joy was certainly the rewritten ei.c driver (implementing the
200-UT batch protocol), and the clever workaround to an egregious KRONOS
bug where it would get stuck in a POLL/REJECT loop (I merely sent a dummy
command, viz. "Q,I" -- discarding the response -- because KRONOS was
expecting a command rather than the correct REJECT indicating that the
batch emulator had nothing to send).
At the time, Unix got blamed because the smaller non-Unix /40s (running a
standalone program) worked fine for some reason; my guess is that it
implemented the broken protocol somehow.
-- Dave
Rob:
I rewrote cat to use just read and write, as
nature intended. I don't recall if my version is in any of v8 v9 v10 ...
====
It is. It was /bin/cat when I arrived at Murray Hill in 1984.
I remember being delighted with the elegant way to get rid of
a flag I had never really liked either.
I never knew Dennis had dragged his heels at it. It was (to me)
so obviously the right answer that I never asked!
Norman Wilson
Toronto ON
really appreciate videos of talks like this as someone who wasn't lucky
enough to be around to experience this in person but benefits from the
things your generation built for us:
https://www.youtube.com/watch?v=_2NI6t2r_Hs&feature=youtu.be
thanks rob!
-pete
--
Pete Wright
pete(a)nomadlogic.org
@nomadlogicLA
All, for a while now there have been some weird multi-hour long delays
between e-mail arriving at TUHS and being forwarded on. I've just removed
a pile of queued messages which I think were causing mailman to have
palpitations. If you posted something on TUHS in the last few hours,
could you re-send it. Apologies for this.
Thanks, Warren
> SunOS 4 definitely had YP.
SunOS 2.0 had YP.
-- Richard
> My favorite anecdote that I've read regarding Belle was when Ken
> Thompson took it out of the country for a competition. Someone,
> I'm assuming with customs, asked him if Belle could be
> classified as munitions in any way. He replied, "Only if you
> drop it out the window."
That's not the half of it. Ken had been invited by Botvinnik,
a past world champion, to demonstrate Belle in Russia. Customs
spotted it in baggage and impounded it without Ken's knowledge.
When he arrived empty-handed in Moscow, his hosts abandoned
him to his own devices.
Late that fateful Friday afternoon, customs called Bell Labs
security, which in turn called Ken's department head--me. That
evening I called Bill Baker, the Labs' president, at home,
hoping he might use his high-level Washington connections
to spring Belle. No luck. Ken was in the dark about the whole
affair until Joe Condon managed to reach him at his hotel.
Customs kept the machine a month and released it only after the
Labs agreed to pay a modest fine. I believe Ken's remark about
the military potential of Belle was made in reply to a reporter.
Doug
I was wondering, what was the /crp mount point in early UNIX used for?
And what does "crp" mean? Does it mean what I think it does?
It is only mentioned in V3 it seems:
./v4man/manx/unspk.8:unspk lives in /crp/vs (v4/manx means pre-v4)
./v3man/man6/yacc.6:SYNOPSIS /crp/scj/yacc [ <grammar ]
./v3man/man4/rk.4:/dev/rk3 /crp file system
I suppose scj, doug or ken can help out.
aap
Peter Adams, who photographed many Unix folks for his
"Faces of open source" series (http://facesofopensource.com/)
found trinkets from the Unix lab in the Bell Labs archives:
http://www.peteradamsphoto.com/unix-folklore/.
One item is more than a trinket. Belle, built by
Ken Thompson and Joe Condon, won the world computer
chess championship in 1980 and became the first
machine to gain a chess master rating. Physically,
it's about a two-foot cube.
Doug
Spurred by the recent discussion of NIS, NIS+, LDAP et al, I'm curious what
the landscape was like for distributing administrative information in early
Unix networks.
Specifically I'm thinking about things like the Newcastle Connection, etc.
I imagine that PDP-11's running Unix connected to the ARPAnet (e.g.,
RFC 681 style) would have adapted the HOSTS.TXT format somehow. What about
CHAOS? Newcastle? Datakit?
What was the introduction of DNS into the mix like? I can imagine that that
changed all sorts of assumptions about failure modes and the like.
NIS and playing around with Hesiod are probably the earliest such things I
ever saw, but I know there must have been prior art.
Supposedly field 5 from /etc/passwd is the GECOS username for remote job
entry (or printing)? How did that work?
- Dan C.
> I have a vague intuition right now that the hyphenation decisions
> ...
> should be accessible without having to invoke the output driver.
Wouldn't that require some way to detect a hyphenation event?
Offhand, I can't think of a way to do that.
But if you know in advance what word's hyphenation is in
question, you could switch environments, use the .ll 1u
trick in a diversion, and base your decision on the result.
Doug
UNIX was half a billion (500000000) seconds old on Tue Nov 5 00:53:20
1985 GMT (measuring since the time(2) epoch).
-- Andy Tannenbaum
Hmmm... According to my rough calculations, it hit a billion (US) seconds
around 2000.
-- Dave
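For anyone curious about the exact moment, 10^9 seconds after the epoch
fell on Sun Sep 9 01:46:40 2001 UTC; a minimal check in standard C, purely
for illustration:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = 1000000000;                  /* 10^9 seconds after the epoch */
    char buf[64];

    /* Format the moment in UTC. */
    strftime(buf, sizeof buf, "%a %b %e %H:%M:%S %Y UTC", gmtime(&t));
    printf("%s\n", buf);                    /* Sun Sep  9 01:46:40 2001 UTC */
    return 0;
}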
Does anyone have any experience with YP / NIS / NIS+ / LDAP as a central
directory on Unix?
I'm contemplating playing with them for historical reasons.
As such, I'm wondering what the current evolution is for a pure Unix
environment. Read: No Active Directory. Is there a current central
directory service for Unix (or Linux)? If so, what is it?
I'm guessing it's LDAP combined with Kerberos, but I'm not sure.
--
Grant. . . .
unix || die
Interesting. /crp was a regular part of the Research world
in the mid-1980s when I joined up. It was nothing special,
just an extra file system for extra junk, which might or might
not be backed up depending on the system.
I had no idea its roots ran so far back in time.
I always thought it was an abbreviation for `crap,' though,
oddly, the conventional pronunciation seemed to be creep.
Norman Wilson
Toronto ON
A. P. Garcia:
I'd be interested in knowing where a pure unix environment
exists, beyond my imagination and dreams that is.
====
For starters, the computing facility used for teaching
in the Department of Computer Science at the University
of Toronto. Linux workstations throughout our labs; Linux
file servers and other back-ends, except OpenBSD for the
Kerberos KDCs and firewalls.
And yes, we use Kerberos, including Kerberized NFS for
(almost) all exports to lab workstations, which cannot
be made wholly secure against physical breakins by students.
(There's no practical way to prevent that entirely.)
Except we also use traditional UNIX /etc/shadow files
and non-Kerberized NFS for systems that are physically
secure, including the host to which people can ssh from
outside. If you don't type a password when you log in,
you cannot get a Kerberos TGT, so you wouldn't have access
to your home directory were it Kerberized there; and we
aren't willing to (and probably couldn't) forbid use of
.ssh/authorized_keys for users who know how to do that.
Because we need to maintain the password in two places,
and because we create logins automatically in bulk from
course-registration data, we've had to write some of our
own tools. PAM and the ssh GSSAPI support suffice for
logging in, but not for password changes or account
creation and removal.
Someday we will have time to look at LDAP. Meanwhile we
distribute /etc/passwd and /etc/shadow files (the latter
mostly blanked out to most systems) via our configuration-
management system, which we need to have to manage many
other files anyway.
Norman Wilson
Toronto ON
I was just reading this book review:
http://www.pathsensitive.com/2018/10/book-review-philosophy-of-software.html
and came across these paragraphs:
<book quote>
The mechanism for file IO provided by the Unix operating system
and its descendants, such as Linux, is a beautiful example of a
deep interface. There are only five basic system calls for I/O,
with simple signatures:
int open(const char* path, int flags, mode_t permissions);
ssize_t read(int fd, void* buffer, size_t count);
ssize_t write(int fd, const void* buffer, size_t count);
off_t lseek(int fd, off_t offset, int referencePosition);
int close(int fd);
</book quote>
The POSIX file API is a great example, but not of a deep
interface. Rather, it’s a great example of how code with a very
complicated interface may look deceptively simple when reduced to C-style
function signatures. It’s a stateful API with interesting orderings
and interactions between calls. The flags and permissions parameters
of open hide an enormous amount of complexity, with hidden requirements
like “exactly one of these five bits should be specified.” open may
return 20 different error codes, each with their own meaning, and many
with references to specific implementations.
The authors of SibylFS tried to write down an exact description of the
open interface. Their annotated version[1] of the POSIX standard is over
3000 words. Not counting basic machinery, it took them over 200 lines[2]
to write down the properties of open in higher-order logic, and another
70 to give the interactions between open and close.
[1]: https://github.com/sibylfs/sibylfs_src/blob/8a7f53ba58654249b0ec0725ce38878…
[2]: https://github.com/sibylfs/sibylfs_src/blob/8a7f53ba58654249b0ec0725ce38878…
I just thought it was a thought-provoking comment on the apparent elegance
of the Unix file API that actually has some subtle complexity.
Cheers, Warren
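As a small illustration of the kind of hidden rules the review is pointing
at (the flags are standard POSIX, but the comments are only a sketch of the
constraints, nowhere near exhaustive):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Exactly one of the access-mode bits (O_RDONLY, O_WRONLY, O_RDWR)
       may be given; the mode argument matters only when O_CREAT is set. */
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) {
        /* errno may take any of roughly 20 values (EACCES, EEXIST, EINTR,
           EISDIR, EMFILE, ENOSPC, ...), each with its own meaning. */
        perror("open");
        return 1;
    }

    /* lseek only makes sense on seekable objects; on a pipe or a socket it
       fails with ESPIPE, and with O_APPEND the offset is ignored by write. */
    off_t end = lseek(fd, 0, SEEK_END);

    /* write may transfer fewer bytes than asked for; careful callers loop. */
    ssize_t n = write(fd, "hello\n", 6);

    (void)end; (void)n;
    return close(fd);
}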
> From: Lars Brinkhoff
> Let's hope it's OK!
Indeed! It will be fun to see that code.
> I suppose I'll have to add a simulation of the Unibus CH11 Chaosnet
> interface to SIMH.
Why? Once 10M Ethernet hardware was available, people switched pretty rapidly
to using that, instead of the CHAOS hardware. (It was available off the shelf,
and the analog hardware was better designed.) That's part of the reason ARP is
multi-protocol.
Some hard-to-run cables (e.g. under the street from Tech Sq to main campus)
stayed CHAOS hardware because it was easier to just keep using what was there,
but most new machines got Ethernet cards.
Noel
> From: Chris Hanson
> you should virtually never use read(2), only ever something like this:
> ...
> And do this for every classic system call, since virtually no client
> code should ever have to care about EINTR.
"Virtually". Maybe there are places that want to know if their read call was
interrupted; if you don't make this version available to them, how can they
tell? Leaving the user as much choice as possible is the only way to go, IMO;
why force them to do it the way _you_ think is best?
And it makes the OS simpler; any time you can move functionality out of the
OS, to the user, that's a Good Thing, IMO. There's nothing stopping people
from using the EINTR-hiding wrapper. (Does the Standard I/O library do this,
does anyone know?)
Noel
PS: Only system calls that can block can return EINTR; there are quite a few
that don't, not sure what the counts are in modern Unix.
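For readers who haven't seen one, the EINTR-hiding wrapper being discussed
usually looks something like the sketch below (the name read_noeintr is just
illustrative, and this is not the code Chris elided):

#include <errno.h>
#include <unistd.h>

/* Retry read() when it is interrupted by a signal before transferring
   any data; all other results are passed through unchanged. */
ssize_t read_noeintr(int fd, void *buf, size_t count)
{
    ssize_t n;

    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);

    return n;
}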
On Sun, 4 Nov 2018, Chris Hanson wrote:
> Every piece of code that wants to call, say, read(2) needs to handle
> not only real errors but also needs to special-case EINTR and retry
> the read. Thus you should virtually never use read(2), only ever
> something like this:
> ...
> And do this for every classic system call, since virtually no client
> code should ever have to care about EINTR. It was an early
> implementation expediency that became API and that everyone now has
> to just deal with because you can’t expect the system call interface
> you use to do this for you.
>
> This is the sort of wart that should’ve been fixed by System V and/or BSD 4 at latest.
But it *was* fixed in BSD, and it's in POSIX as the SA_RESTART flag to
sigaction (which gives you BSD signal semantics).
POSIX supports both the original V7 and BSD signal semantics, because
by then there were programs which expected system calls to be
interrupted by signals (and to be fair, there are times when that's
the more convenient way of handling an interrupt, as opposed to using
setjmp/longjmp to break out of a restartable system call).
- Ted
P.S. The original implementation of ERESTARTSYS / ERESTARTNOHAND /
ERESTARTNOINTR errno handling in Linux's system call return path was
my fault. :-)
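For reference, the BSD-style restart behaviour Ted describes is requested
per-signal with sigaction(2); a minimal sketch (the handler here is only a
placeholder):

#include <signal.h>
#include <stdio.h>
#include <string.h>

static void on_sigint(int sig)
{
    (void)sig;                      /* placeholder handler */
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;       /* restart slow system calls rather than
                                       having them fail with EINTR */
    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    /* reads interrupted by SIGINT are now transparently restarted */
    return 0;
}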
The last couple of days I worked on re-setting the V3-V6 manuals.
I reconstructed V5 from the scan as best I could, unfortunately some
pages were missing.
You can find everything I used to do this here,
please read the BUGS section:
https://github.com/aap/unixman
The results can be found here, as HTML and PDF:
http://squoze.net/UNIX/v3man/
http://squoze.net/UNIX/v4man/
http://squoze.net/UNIX/v5man/
http://squoze.net/UNIX/v6man/
Reconstructing V1 and V2 n?roff source and converting the tty 37 output
to ps is something I want to do too, but for now this was exhausting
enough.
Now for the questions that arose while I was doing this:
Are there scans of the V4 and V6 manual to check my pdfs against?
Where does the V5 manual come from? As explained in the README,
some pages are missing and some pages seem to be earlier than V4.
Is there another V5 manual that one could check against?
Why is lc (the LIL compiler) not in the TOC but has a page?
And most importantly: is the old troff really lost?
I would love to set the manual on the original systems
at some point (and write a CAT -> ps converter, which should be fun).
Doing all this work made me wish we still had earlier versions
of UNIX and its tools around.
Have fun with this!
aap
> From: Clem Cole
> (probably because Larry Allen implemented both UNIX Chaos and Aegis IIRC).
Maybe there are two Larry Allens - the one who did networking stuff at
MIT-LCS was Larry W. Allen, and I'm pretty sure he didn't do Unix CHAOS code
(he was part of our group at LCS, and we only did TCP/IP stuff; someone over
in EE had a Unix with CHAOS code at the time, so it pre-dated his time with
us).
Noel
Hello,
Which revisions of the "C Reference Manuals" are known to be out there?
I found this:
https://www.bell-labs.com/usr/dmr/www/cman.pdf
Which seems to match the one from V6:
https://github.com/dspinellis/unix-history-repo/tree/Research-V6-Snapshot-D…
"C is also available on the HIS 6070 computer at Murray Hill and and on
the IBM System/370 at Holmdel [3]."
But then there's this:
https://www.princeton.edu/ssp/joseph-henry-project/unix-and-c/bell_labs_136…
"C is also available on the HIS 6070 computer ar Hurray Hill, using a
compiler written bu A. Snyder and currently maintained by S. C. Johnson.
A compiler for the IBM System/360/370 series is under construction."
Due to the description of the IBM compiler, it seems to predate the V6
revision.
Both above revisions use the =+ etc operators.
Finally, this version edited by Snyder:
https://github.com/PDP-10/its/blob/master/doc/c/c.refman
"In addition to the UNIX C compiler, there exist C compilers for the HIS
6000 and the IBM System/370 [2]."
This version documents both += and =+ operators.
Of interest to the old farts here...
At 22:30 (but which timezone?) on this day in 1969 the first packet got as
far as "lo" (for "login") then crashed on the "g".
More details over on http://en.wikipedia.org/wiki/Leonard_Kleinrock#ARPANET
(with thanks to Bill Cheswick for the link).
-- Dave
> From: Steve Johnson
> references that were checked using the pointer type of the structure
> pointer. My code was a nightmare, and some of the old Unix code was at
> least a bad dream.
I had a 'fun' experience with this when I went to do the pipe splice() system
call (after the discussion here). I elected to do it in V6, which I i) had
running, and ii) know like the back of my hand.
Alas! V6 uses 'int *' everywhere for pointers to structures. It also, in the
pipe code, uses constructs like '(p+1)' to provide wait channels. When I wrote
the new code, I naturally declared my pointers as 'struct inode *ip', or
whatever. However, when I went to do 'sleep(ip+1)', the wrong thing happened!
And since V6 C didn't have coercions, I couldn't win that way. IIRC, I finally
resorted to declaring an 'int *xip', and doing an 'xip = ip' before finally
doing my 'sleep(xip+1)'. Gack!
Noel
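In modern C terms, the scaling difference Noel ran into looks like this
(the struct size below is made up purely for illustration):

#include <stdio.h>

struct inode { int i_junk[16]; };       /* size is made up, for illustration */

int main(void)
{
    struct inode node;
    struct inode *ip = &node;
    int *xip = (int *)ip;

    /* ip + 1 advances by sizeof(struct inode) bytes, while xip + 1 advances
       by sizeof(int) -- the address the V6 pipe code, with its 'int *'
       declarations, actually used as a wait channel. */
    printf("ip  + 1 = %p\n", (void *)(ip  + 1));
    printf("xip + 1 = %p\n", (void *)(xip + 1));
    return 0;
}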
> From: Dave Horsfall
> We lost ... on this day
An email from someone on a related topic has reminded me of someone else you
should make sure is on your list (not sure if you already have him):
J. C. R. Licklider; we lost him on June 26, 1990.
He didn't write much code himself, but the work of people he funded (e.g.
Doug Engelbart, the ARPANet guys, Multics, etc, etc, etc) to work on his
vision has led to today's computerized, information-rich world. For people who
only know today's networked world, the change from what came before, and thus
his impact on the world (since his ideas and the work of people he sponsored
led, directly and indirectly, to much of it), is probably hard to truly
fathom.
He is, in my estimation, one of the most important and influential computer
scientists of all. I wonder how many computer science people had more of an
impact; the list is surely pretty short. Babbage; Turing; who else?
Noel
> From: Dave Horsfall
> We lost Jon Postel, regarded as the Father of the Internet
Vint and Bob Kahn might disagree with that... :-)
> (due to his many RFCs)
You need to distinguish between the many for which he was an editor (e.g. IP,
TCP, etc), and the (relatively few, compared to the others) which he actually
wrote himself, e.g. RFC-925, "Multi-LAN address resolution".
Not that he didn't make absolutely huge contributions, but we should be
accurate.
Noel
> Now it could be that v7 troff is perfectly capable of generating the
> manual just like older troff would have.
On taking over editorship for v7, I added some macros to the -man
package. I don't specifically recall making any incompatible
changes. If there were any, they'd most likely show up in
the title and synopsis and should be fixable by a minor tweak
to -man. I'm quite confident that there would be no problems
with troff proper.
Doug
Angelo Papenhoff <aap(a)papnet.eu> writes about the conversion of
printer points to other units:
>> From my experience in the world of prepress 723pts == 10in.
>>
>> Then Adobe unleashed PostScript on us and redefined the point
>> so that 72pt == 1in.
>>
>> I'm unaware of any other definitions of a point.
The most important other one is that used by the TeX typesetting
system: 72.27pt is one inch. TeX calls the Adobe PostScript one a big
point: 72bp == 1in. Here is what Don Knuth, TeX's author, wrote on
page 58 of The TeXbook (Addison-Wesley, 1986, ISBN 0-201-13447-0):
>> ...
>> The units have been defined here so that precise conversion to sp
>> is efficient on a wide variety of machines. In order to achieve
>> this, TeX's ``pt'' has been made slightly larger than the official
>> printer's point, which was defined to equal exactly .013837in by
>> the American Typefounders Association in 1886 [cf. National Bureau
>> of Standards Circular 570 (1956)]. In fact, one classical point is
>> exactly .99999999pt, so the ``error'' is essentially one part in
>> 10^8. This is more than two orders of magnitude less than the
>> amount by which the inch itself changed during 1959, when it
>> shrank to 2.54cm from its former value of (1/0.3937)cm; so there
>> is no point in worrying about the difference. The new definition
>> 72.27pt=1in is not only better for calculation, it is also easier
>> to remember.
>> ...
Here sp is a scaled point: 65536sp = 1pt. The distance 1sp is smaller
than the wavelength of visible light, and is thus not visible to
humans.
TeX represents physical dimensions as integer numbers of scaled
points, or equivalently, fixed-point numbers in points, with a 16-bit
fraction. With a 32-bit word size, that leaves 16 bits for the
integer part, of which the high-order bit is a sign, and the adjacent
bit is an overflow indicator. That makes TeX's maximum dimension on
such machines 1sp below 2^14 (= 16,384) points, or about 5.75 meters
or 18.89 feet.
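A quick sanity check of that last figure, using 72.27pt = 1in and
1in = 2.54cm:

    2^14 pt = 16384 pt
    16384 pt / 72.27 pt/in ~= 226.7 in ~= 5.758 m ~= 18.89 ft

which matches the 5.75 meters / 18.89 feet quoted above.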
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Jacob Ritorto
> If this is true, I wonder why the install only offers rl01?
Where does it say this? (I didn't search for that.)
> I'm totally in the market for an Able Enable board too! Out of
> curiosity, is it totally out of the question to just find the prints
> and do a production run?
Rotsa ruck! They're down the same mine as Jimmy Hoffa!! :-)
But seriously, if you could find them, that would be fantastic. I've managed
to collect (thanks Clem!) a tiny bit of info about them:
http://gunkies.org/wiki/Able_ENABLE
and I _think_ I've worked out how they worked, but more is better. We had a
set of the prints at MIT BITD, but we didn't have the PROM/PLA/PAL/etc
programming info, and one would need that too to reproduce them.
> I sure hope there's a pdp11 sdcard / usb disk solution someday like they
> did for the Commodore 64
So Dave Bridgham and I have been working on a QBUS card with an FPGA that uses
an SD card to hold the bits, and emulates an RK11/RP11/etc controller. We have
a wire-wrap prototype working (the RK11's done, the RP11 should be a short
edit of that), and UNIX boots and runs. Now to turn it into PCB's...
We've planned that the next step will be to do a UNIBUS version, which will
also include ENABLE functionality (although it won't be plug compatible with
an ENABLE, the memory will be on-board).
Now to find the time/energy to make it happen... :-(
Noel
> From: Noel Hunt
> In addition to SINE, does anyone know what happened to EINE?
Was replaced by ZWEI fairly early on.
Zwei
Was
Eine
Initially
Dunno if it still exists on an MIT dump-tape somewhere.
Noel
Hi All.
I am starting to collect, if possible, different versions of the QED
editor, with the hope of putting up a git repo.
If you have a tarball of code, please send it to me with as much info
about it as you have. I would like to track down the qedbuf(1) man page
also.
I have contacted Rob Pike and got one tarball from him. I have another
tarball that I got sometime in 1987 and have a promise of code coming
from Donald Mitchell.
Much thanks!
Arnold
Hi,
Was wanting to put together a fully functional (meaning able to load the
whole distro and recompile itself) and "reliable" System III machine made
of real, albeit not terribly sexy parts. I have (4) working rl02 drives
and an 11/34, so I feel like there's a chance it could work. I'll have to
build it on the emulator, of course, then vtserver it over to the real hw
in chunks.
But the blocker is that System III only supports rl01, not rl02, which
kills the 'full distro' prospect.
Would anyone know if it's trivial to modify the source for the rl01 driver
to just add double the blocks, thereby supporting rl02? Or am I wildly
underestimating the task at hand? Has this been done before? Tips?
thx
jake
Does anyone know any history about X11's secondary selection?
What did / still does use it?
I'm fairly familiar with the primary selection and clipboard. But I'm
not aware of anything that uses the secondary selection.
--
Grant. . . .
unix || die
Computer science pioneer Peter Naur was born on this day in 1928; he was
responsible for ALGOL60, BNF syntax (he notably insisted upon calling it
Backus-NORMAL-Form), etc.
-- Dave