Maybe there's a generation / technology gap here. But from history, it
doesn't seem like there was much free software - though most did indeed tend to be open -
I suppose much like the VMS model. At least not until the 80s (or maybe
bash predates that trend).
The C language might've been free, but I wonder if there were any free
compilers until gcc (hell, I remember pirating Borland). Even most copies of
*BSD were mainly sold on CD vs. downloaded until 10 years or so ago (even
though it was technically free - not including BSDi)
On Mar 20, 2017 7:28 PM, "Doug McIlroy" <doug(a)cs.dartmouth.edu> wrote:
> The hippie mentality had a lot of influence on everyone in that
> generation - including the computer nerds/hackers.
I'm not sure what hippie attributes you had in mind, but one
candidate would be "free and open". In software, though, free
and open was the style of the late 50s. By the hippie heyday
I was at Berkeley until July 1981. The oldest SCCS file I have is
4/1/81 (for my dissertation project) and that was clearly my first use
of it. I wasn't using SCCS in 1980 when I wrote uuencode. uuencode got
SCCS-ized later when they put all of 4.xBSD under SCCS.
On 2017-03-20 03:27, schily(a)schily.net wrote:
> Mary Ann Horton <mah(a)mhorton.net> wrote:
>
>> I'm under the impression that shar came later in the 1980s. Google's
>> archive for net.sources only goes back to 1987 (unless I'm doing it
>> wrong) and clearly shar was already well established by then.
>>
>> Can anyone put a date on shar, or at least before/after 6/1/1980?
>
> BTW: do you remember why you did not check uuencode into SCCS?
>
> /*--------------------------------------------------------------------------*/
> ...
> Wed Jul 6 11:06:51 1988 bostic
> * uuencode.c 5.6
> * uudecode.c 5.4
> written by Mark Horton; add Berkeley specific copyrights
>
> Wed Feb 24 20:03:58 1988 rick
> * uuencode.c 5.5
> use library fread instead of rolling your own
>
> Mon Dec 22 14:43:09 1986 bostic
> * uuencode.c 5.4
> bug report 4.1BSD/usr.bin/2 and 4.1BSD/usr.bin/3
>
> Wed Apr 10 15:22:23 1985 ralph
> * uudecode.c 5.3
> more changes from rick adams.
>
> Tue Jan 22 14:13:07 1985 ralph
> * uuencode.c 5.3
> * uudecode.c 5.2
> bug fixes and changes from Rick Adams
>
> Mon Dec 19 15:42:38 1983 ralph
> * uuencode.c 5.2
> use a reasonable mode for encoding data piped in.
>
> Sat Jul 2 17:57:51 1983 sam
> * uuencode.c 5.1
> date and time created 83/07/02 17:57:51 by sam
>
> Sat Jul 2 17:57:49 1983 sam
> * uudecode.c 5.1
> date and time created 83/07/02 17:57:49 by sam
> /*--------------------------------------------------------------------------*/
>
> In particular, do you know why it has been checked in by Samuel Leffler and
> whether it existed before July 1983?
>
> Jörg
I'd like the opinion of this august group.
Should I make a claim to be the inventor of the email attachment? (It
would go on my web site, resume, the Wikipedia page, that sort of thing.)
Here's my understanding of the time line on all of this.
1. Originally, our files were all plain text and we just included them
in the email message body. The ~r command in Kurt Shoen's Mail
program was typical. There was no name for this, we were just
emailing files.
2. In 1980, I wrote uuencode. Its stated purpose was to "encode a
binary file for transmission by email". I didn't use the term
"attachment". It became part of 4.0BSD and later systems, and was
widely used. (A sketch of the encoding follows this list.)
3. In 1985, Lotus created cc:Mail. It eventually included attachments,
using a file store method. When they added an SMTP gateway later,
it used uuencode as the format. I believe cc:Mail first used the
term "attachment".
4. Microsoft did the same thing with MS Mail somewhat later, possibly
in the 1990s. It also used uuencode in the SMTP gateway.
5. In 1992, Nathaniel Borenstein and Ned Freed invented MIME. It had a
different (and IMHO much better) way to send attachments, and it
became an Internet Standard sometime later, possibly in 1996.
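For those who never looked inside uuencode, the core of it is tiny. Here's
a minimal sketch of the classic transformation, written from memory rather
than taken from the 4.0BSD source (it assumes the input buffer is
zero-padded to a multiple of 3 bytes):

    /* take the input 6 bits at a time and add a space (040) to land
     * in the printable range; 3 input bytes become 4 output bytes */
    #include <stdio.h>

    #define ENC(c) (((c) & 077) + ' ')

    /* emit one encoded line: a length byte, then the encoded groups */
    void
    encline(unsigned char *p, int n)
    {
        int i;

        putchar(ENC(n));        /* line length, itself encoded */
        for (i = 0; i < n; i += 3) {
            putchar(ENC(p[i] >> 2));
            putchar(ENC(((p[i] << 4) | (p[i+1] >> 4)) & 077));
            putchar(ENC(((p[i+1] << 2) | (p[i+2] >> 6)) & 077));
            putchar(ENC(p[i+2] & 077));
        }
        putchar('\n');
    }

Decoding is just the mirror image, which is why an encoded file survives
any mail path that preserves printable ASCII.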
What do you all think?
Mary Ann
> From: Warren Toomey
> So, DCD and CTS are being dropped, but getty (or something) isn't
> responding and (presumably) sending a HUP signal to the shell.
> Is there anybody with some modem or getty knowledge that can help?
I know very little of 4.x, but I did write a V6 DZ driver, back in the
Cenozoic or some such time period... :-)
Looking at the 4.3Tahoe (which particular 4.3 version is in question here,
anyway?) DZ driver:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-Tahoe/usr/src/sys/vaxub…
I find it hard (without further digging) to figure out how it gets from where
it should discover carrier has gone away (in dzrint(), from dztimer()) to the
rest of the system; they have added some linesw[] thing I don't know about.
Looking at the 4.2 driver:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=4.2BSD/usr/src/sys/vaxuba/dz.c
it seems (in the same routine) to do the right thing:
gsignal(tp->t_pgrp, SIGHUP);
so in that version, it's sending a SIGHUP to the whole pgroup when the
carrier goes away - which should be the right thing.
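To spell out what that path amounts to - a sketch from memory, in the style
of the 4.2 code (TS_CARR_ON, gsignal() and ttyflush() are real 4.2BSD names;
the surrounding framing, and 'msr'/'CARRIER', are my reconstruction):

    /* on a modem-status change: if carrier just dropped on an open
     * line, take down the carrier flag, HUP the whole process group,
     * and flush the tty queues */
    if ((msr & CARRIER) == 0 && (tp->t_state & TS_CARR_ON)) {
            tp->t_state &= ~TS_CARR_ON;
            if (tp->t_state & TS_ISOPEN) {
                    gsignal(tp->t_pgrp, SIGHUP);
                    ttyflush(tp, FREAD|FWRITE);
            }
    }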
Noel
On Tue, Mar 14, 2017 at 7:35 AM, Tim Bradshaw <tfb(a)tfeb.org> wrote:
> But the people who have spent 9-figure sums on all this
> marginally-functional tin that the Unix vendors foisted on them don't
> look at it that way: they just want something which is not Unix, and
> which runs on cheap tin.
>
Fair enough -- but I think that this is really another way of describing
Prof. Christensen's disruption theory: the "lesser" technology wins
over the "better" technology because it's good enough.
I'm curious: for the banks, in your experience, which were the UNIX vendors
that were pushing 9-figure UNIX boxes? I'll guess IBM was one of them.
Maybe NCR. Were HP, Sun, and DEC in that bundle?
> Linux is not Unix, and runs on cheap tin.
>
I believe that the point you are making is that "white box" PCs running a
UNIX-like system - aka Linux - could come pretty close to doing what the
highly touted AIX, NCR et al. were doing, and were "good enough" to get the
job done. And that's not a statement about UNIX as much as a statement about
the WINTEL ecosystem that Linux sat on top of and did an extremely impressive
job of utilizing.
Hi all, over on the uucp project we are struggling with a problem. If a
user is logged in with telnet, and they disconnect the telnet session,
their shell hangs around. The next person that telnets in gets the shell.
SimH, with the -a -m flags on a simulated DZ line, has these modem flags:
Telnet connected: Modem Bits: DTR RTS DCD CTS DSR
Telnet disconnected: Modem Bits: DTR RTS DSR
So, DCD and CTS are being dropped, but getty (or something) isn't responding
and (presumably) sending a HUP signal to the shell.
Is there anybody with some modem or getty knowledge that can help?
Thanks, Warren
On Sun, 12 Mar 2017, William Pechter wrote:
> Talk about security Remember when Shar files were sent to /bin/sh...
> Often as root.
>
> We forget how safe we felt the environment was.
Yep, which is why "unshar" came to be.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> Should I make a claim to be the inventor of the email attachment?
uuencode was critical to attaching arbitrary files, and I am sure
one can find emails with uuencoded bits in them that read, "please
find attached ...". But they would have said the same thing if
what was being sent was source code. So attachment in that sense
obviously predated uuencode. But to identify that kind of
attachment with what we mean by the word today is like identifying
cat with tar.
Doug
> Many of the gnu tools started life as BSD code that was hacked on and
> rebranded with the GPL.
A small amount of code was likewise adopted from AT&T.
Doug
All,
Seems my SysVR2 simulation instance has at one point or another lost its
/dev/mt/* and /dev/rmt/* device entries.
Is there a script anywhere to regenerate these, or does anyone know the
major/minor off hand for the SIMH TS device?
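(Once the major number is dug out of the kernel's conf.c, I assume the
recreation is just mknod pairs - a sketch with a placeholder, since I don't
have the TS major to hand:)

    mkdir /dev/mt /dev/rmt
    mknod /dev/mt/0m  b MAJ 0    # block interface; MAJ = ts slot in bdevsw
    mknod /dev/rmt/0m c MAJ 0    # raw interface; MAJ = ts slot in cdevsw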
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
> Many of the gnu tools started life as BSD code that was hacked on and
> rebranded with the GPL.
I have seen Gnu code likewise adopted from AT&T.
Doug
I know it's a long shot. Does anybody remember how to use the output
of pathalias in sendmail.cf? Specifically, we have set up 4.3BSD with
uucp-only e-mail, and we have a map file which pathalias digests and
outputs fine.
I can't find any useful documentation on putting this output into
sendmail. There's part of a book in Google books, but two pages
are hidden. I also threw out my old bat book ages ago. I have a
PDF of Sendmail_4th_Edition_Oct_2007.pdf but it doesn't mention
pathalias.
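For reference, the raw pathalias output is just one "host TAB route" pair
per line, with %s standing for the user part (the site names below are
made-up examples):

    decvax      decvax!%s
    ihnp4       decvax!ihnp4!%s
    seismo      decvax!seismo!%s

so whatever goes into sendmail.cf only has to look up the recipient's host
in that table and substitute the user into the %s.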
Thanks in advance! Warren
> From: "Ron Natalie"
>>> I think most people will attribute the desktop metaphor to Xerox.
>> Strictly speaking, to Smalltalk (from PARC)
^^^^
> I beg to differ. The Star not only pioneered the WYSIWYG application
> presentation
PARC _was_ Xerox. The Star was a product based on the Alto, but much of the
Star stuff was pioneered on the Alto.
For instance, WYSIWYG was one of the modes that the Alto's Bravo editor could
be run in; it definitely pre-dates the Star.
> also the concept of the desktop.
Depending on exactly what you mean by 'desktop', that also pre-dated the Star.
I heard the multiple overlapping windows of Smalltalk (an Alto application)
likened to a collection of sheets of paper on a desktop (which is where the
term came from); clicking on one with the mouse brought it to the top, just
like pulling a particular sheet of paper out from the ones on a physical
desktop.
> The whole concept of dropping documents as icons on the desktop appears
> to have originated there.
Yes, as I mentioned:
>> things like Bravo, and the basic user command interface on the Alto
>> [the Exec, my brain finally coughed up the name - can't find my Alto
>> manual at the moment] didn't have any concept of windows/desktop
The concept of having a graphical front end as the main user interface was not
from the Alto, and the Alto didn't have icons either; both came later (I'll
let the Lisa people and Star people argue that one out).
Noel
> From: "Ron Natalie"
> I think most people will attribute the desktop metaphor to Xerox.
Strictly speaking, to Smalltalk (from PARC); things like Bravo, and the basic
user command interface on the Alto (I forget what its name was), didn't have
any concept of windows/desktop (although Bravo did use the bitmap screen).
Noel
"Open" was certainly not a work heard in the Unix lab,
where our lawyers made sure we knew it was a "trade secret".
John Lions was brought into the lab both because we admired
his work and because the lawyers wanted to reel that work
back in-house.
Out in the field, the trade secret was treated in many
different ways. Perhaps the most extreme was MIT, whose
lawyers believed it could not be adequately protected in
academia and forbade its use there. I don't know what eventually broke the logjam.
Doug
William Pechter:
VMS source fiche was very common at sites owned by large corporations.
Their IT staff used it to research bugs... and as sample code for
writing their own drivers etc...
=====
Indeed, I used the VMS source microfiche to learn how to
handle various sorts of errors (machine checks, memory
errors) better in UNIX. Stock VAX systems at the time
just crashed on any error, but it turned out that many of
them admitted recovery: some errors were transient,
others could be ridden over by disabling some piece of the
hardware.
This led to an amusing event on the VAX-11/750 that at the
time handled e-mail as uucp node research!. (Its internal
name on our datakit node was grigg.) People noticed that
the system was running slowly. I checked and discovered
that the CPU itself seemed to be a bit slower. Then I
checked logs and discovered that a week earlier, there had
been a cache error; my new recovery code had turned off
the failing half of the cache, logged the error, and forged
ahead.
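To give the flavor - a minimal sketch of the idea, not the code I actually
wrote; CADR is the 750's cache disable register as I remember it, and the
helper names here are made up:

    /* in the machine-check handler: if the fault implicates one
     * half of the cache, disable that half and keep running
     * instead of crashing; slower, but up */
    if (mchk_is_cache_parity(mcf)) {            /* made-up helper */
            int half = failing_cache_half(mcf); /* made-up helper */
            mtpr(CADR, mfpr(CADR) | (half ? 02 : 01));
            printf("cache half %d disabled after parity error\n", half);
            return;                             /* ride over the error */
    }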
At the next convenient time, we took the system down and ran
DEC's standalone diagnostics. (Contrary to the rude stories
one hears, those diags were in fact pretty thorough.) The
problem didn't show up, so we booted grigg back up again,
secure in the knowledge that if the problem was persistent,
my code would let us know without crashing. (I don't think
it ever showed up again.)
We also learned to pay more attention to console messages!
Norman Wilson
Toronto ON
> From: Clem Cole
> Do you know the time frame of the banishment? Noel any memories of what
> allowed it be used?
Sorry, this is something I know nothing of; it must have happened while I was
still an early undergrad.
The first Unix I knew of at MIT was the one in the DSSR/RTS group in LCS,
which arrived (I think) roughly around the start of my sophomore year (so
early '76 or so) - I have a memory of one of my friends (who was an undergrad
working in that group) telling me about it, and showing it to me. (I remember
being totally blown away with the way you could write a command 'foo',
compile it, and just say 'foo' to run it...)
Actually, it may have shown up well before that - perhaps they had it well
before I first saw it.
Certainly by the time I showed up at LCS (fall of '77) it had already spread
to CSR; they had an 11/40 with Unix on it, cloned from the DSSR system.
Again, I don't know if there was any paperwork that had to happen, or if that
system was already covered under whatever license the DSSR machine was under.
Of course, this was all DARPA-funded work, and there may have been something
there that allowed it. We certainly passed Unix source around with other
DARPA projects (e.g. at BBN) without, AFAICR, worrying much about paperwork.
> we had to sign a document with the university stating that we
> understood it was AT&T's IP
I don't recall anything like that at MIT; maybe in the very early days, there
was something, but certainly not by '77.
If it's important to know what happened, I can ask (e.g. Prof. Ward, head of
DSSR).
Noel
> From: Random832
> I think he means the fact that MIME specifies the type of the main
> message body (not just attachments), so you can have a message with *no*
> text parts.
Right, that I could discern; what I couldn't get with any definitiveness was
_why_ that was particularly a problem.
(Another possibility, other than the one I previously gave, is perhaps that
there simply is no text part, which one can peruse, ignoring the rest?)
Noel
Hello.
Perhaps you haven't been made aware yet of this series of --IMHO--
very interesting articles about Xenix 386, entitled "Xenix 2.2.3c
Restoration", by Michael Casadevall, a.k.a. NCommander at the geek site
https://soylentnews.org (of which he is one of the founders):
Part 1: https://soylentnews.org/article.pl?sid=17/03/03/1620222
Part 2: https://soylentnews.org/article.pl?sid=17/03/07/1632251
Part 3: https://soylentnews.org/article.pl?sid=17/03/11/2014253
I wish Bela Lubkin, ex-kernel engineer at "classic" SCO, would join
the list to comment on those articles and shed some light on
the more obscure points. I sent an email to Bela some time ago telling
him about the TUHS mailing list, but I didn't hear back from him.
--
Josh Good
Hmm yes although perhaps controversially I see this as a bad feature and
one area where Microsoft actually gets it right. Despite the old issues of
"DLL Hell" which have largely been resolved by standardizing all DLLs and
in newer code by using assemblies... you have to admit that they provide a
direct, local API (indeed ABI) to every subsystem you would want to use,
here I am thinking of GDI, but also lots of things that would require
ioctls (CD burning, say) or domain specific languages (such as Postscript)
on Linux. This makes it really easy for Windows developers to use the
feature and the interface is fast and reliable. And where a domain specific
language is actually NEEDED (printing to a Postscript printer on Windows,
or RDP-type desktop remoting etc) it is easy to insert a proxy DLL or
object or device driver that does the necessary scrambling and
unscrambling. It is not so easy to go the other way as it requires
extensive emulation (think of ghostscript driving my Canon non-PS printer).
I wrote about this issue earlier using some examples like an "ESC ["
capable terminal as opposed to a memory mapped local console, or an "AT"
capable external modem as opposed to an internal "WinModem" that just
exposes its D/A and A/D converters with minimal signal processing and needs
the host to do the heavy lifting. Same thing applies to a graphics
terminal. Of course it should be programmed at a high level by specifying
shapes, etc., to be drawn, regions to be blitted, clipping regions and pens,
a font manager, and it should be possible to load bitmaps into
its offscreen memory and/or create offscreen drawing buffers. If these
features are used correctly by applications, then it is of course trivial to
add a remoting proxy driver similar to Microsoft's RDP, or indeed X Windows.
But the difficulty with X Windows is that the remoting layer is always
there, even though it is almost completely redundant today. This hurts
performance but more importantly it requires extensive workarounds as you
described, which add enormous extra complexity and in my view sharply
increase the learning curve and setup costs. Having said that, Xlib does
offer a decent API/ABI so if we just code to that it's not TOO bad, I would
like to see the rest of it deprecated though, and vendors encouraged to
implement Xlib with whatever backend seems appropriate.
The ridiculous thing here is that X setup is so damn convoluted and
incestuously tied in with the window, session and display managers, THAT IT
IS IMPOSSIBLE TO RUN X REMOTELY ANYMORE AND HAVE A FULL FEATURED DESKTOP, I
have tried many times, with various attempts at thin clients and
terminal serving in my home network, and it basically fell over because
environments like Gnome do not support multiple sessions of the same home
directory, not to mention numerous other problems that mean if you login
remotely you basically just get a blank screen with a default X cursor and
maybe a context menu that can run an Xterm. Bleh! In my experience you have
to use a remoter like VNC and guess what that does, tricks X into thinking
it's running locally and then intervenes further up in the display stack to
do the actual remoting.
It's a complete dog's breakfast and frankly could never compete with
Windows in any realistic way. I use it because it is the least bad of the
available options (no way am I having advertising in my start menu and my
computer loaded with bloatware and spyware before I even open the box, and
no way am I putting up with vague messages like "Something went wrong" or
"Windows is making some checks to optimize your experience" or whatnot),
and because my computer is so fast despite being 6yrs old that X only feels
borderline sluggish, i.e. is tolerable. But so much better would be
possible with a redesign. CUPS is also a dog's breakfast and hugely
unreliable, Windows GDI printing just wins hands down for all the same
reasons. End rant.
Nick
On Mar 15, 2017 5:49 AM, "Ron Natalie" <ron(a)ronnatalie.com> wrote:
Nice thing about X was that it would talk to remote displays. I still
remember sitting in the Pentagon demonstrating that the Suntools screen
lock wasn't particularly secure.
Then there was NeWS. This was Gosling's first attempt at a deployable
language. However PostScript (even with Owen Densmore's class
extensions), while a reasonable intermediary language is really sucky to
actually develop. Java was a bit more refined.
Of course, lots of things either implement X under the native window system
or backdoor X with local extensions. We got around doing high frame rate
image work on X via the SharedMemoryExtension and the ability to flip
buffers on the retrace interval (both extensions, but commonly implemented
by many servers).
Sorry, in this context, SunOS means 4.1.4 - not Solaris SVR4
I run Solaris myself, and love it.
On 3/15/2017 11:48 AM, Joerg Schilling wrote:
> Arthur Krewat <krewat(a)kilonet.net> wrote:
>
>> You make a valid point, and re-reading what I wrote, I find that I
>> pushed the example too far :)
>>
>> The subject was originally that SunOS at its end-of-life did not have
>> the features that Linux now does, and comparing their development
>> lengths brings up an interesting question. What would SunOS have become
> So you believe that SunOS-5.11 is no longer alive?
>
> There is an Oracle-based version and an OpenSolaris-based version developed by
> the community.
>
> Jörg
>
Hello all.
I was perusing the list of officially branded UNIX systems, according to
the "UNIX 03" specification and tests done by the Open Group, and I
found there listed something called "Huawei EulerOS 2.0".
https://www.opengroup.org/openbrand/register/xy.htm
Intriguing, ain't it?
So I went to Wikipedia, to see what it has to say about such a beast.
https://en.wikipedia.org/wiki/Single_UNIX_Specification#EulerOS
And I quote: "EulerOS 2.0 for the x86-64 architecture were certified as
UNIX 03 compliant. The UNIX 03 conformance statement shows that the
standard C compiler is from the GNU Compiler Collection (gcc), and that
the system is a Linux distribution of the Red Hat family."
So, Linux (some variety of it, very closely resembling Red Hat) is now an
"officially branded" UNIX.
I think Mr. Stallman can now say: mission accomplished. GNU *is* now
UNIX. (Linux the kernel might not be a FSF project, but it certainly is
under the GNU General Public License.)
--
Josh Good
How are y'all? And greetings from the piney woods of south Georgia.
If anybody wants to help get an Internet innovator into the Internet Hall
of Fame, please drop me a note at jsqmobile(a)gmail.com. No rush; deadline is
tomorrow. He's not a Unix person, but you'll recognize him. Hint: early ISP.
-jsq
On Tue, Mar 14, 2017 at 3:48 PM, Arthur Krewat <krewat(a)kilonet.net> wrote:
> So what I'm hearing is Linux's timeline, which includes things that were
> not developed just for Linux, extends further out than SunOS does.
>
Mumble... the problem of course is that under those rules, SunOS goes back
to research which goes back to V0....
>
>
> ...
> All I'm saying is comparing Linux's timeline to something like SunOS has
> to include everything that went into both because they both relied on
> precursors.
>
Except for any possible legal reasons... why differentiate? Looks like a
duck, quacks like a duck; from a Turing test I mostly cannot tell
the difference.
>
> Side note: I'm a bit of a bitch when it comes to Linux - which doesn't
> mean I don't think Linux is "UNIX" - it just means I think it's the
> Coherent of today's UNIX ;)
>
I guess it doesn't matter to me that much. Some of the changes are
gratuitous and annoying, which brings out my inner curmudgeon as it makes
it tough to type sometimes. But the fact is, UNIX, Linux, and macOS are
pretty much the same thing - much more so than winders. They are way more
pretty much the same thing - much more so than winders. They are way more
similar than different and I can be productive with all three. To me its
like ethnicity in people. It says a little about how you might
look at something, what some of your shared positions/starting points are,
but we are way more alike than different and I would rather learn from the
differences than fight them or try to inflict my wishes. We are better
with diversity.
Clem
> From: Clem Cole
> rms had access to a Masscomp box we gave him fairly early on.
> ...
> I'm sure the MC-500 was not the first 68000 he had access to. I think he
> was using HW in Steve Ward's lab that the Trix guys were developing
> with TI and he might have had access to an Apollo system.
> ...
> Noel do you remember how that went down?
Sorry, no. From the end of '82 to early '84 I was out of the US, waiting for
my permanent residency to come through, so I missed a chunk of events in that
time period. Maybe one of the DSSR/RTS (Steve Ward, or someone) could clarify
what access RMS had to their 68K machines?
Noel
Nice thing about X was that it would talk to remote displays. I still remember sitting in the Pentagon demonstrating that the Suntools screen lock wasn't particularly secure.
Then there was NeWS. This was Gosling's first attempt at a deployable language. However PostScript (even with Owen Densmore's class extensions), while a reasonable intermediary language is really sucky to actually develop. Java was a bit more refined.
Of course, lots of things either implement X under the native window system or backdoor X with local extensions. We got around doing high frame rate image work on X via the SharedMemoryExtension and the ability to flip buffers on the retrace interval (both extensions, but commonly implemented by many servers).
Allowing more or less arbitrary attachments was a real convenience.
But allowing such stuff to serve as the message proper was
dubious at best. Not only did it require recipients to obtain
special software to read some messages; it also posed a
security threat.
I still use mailx precisely because it will only display plain text.
With active text such as HTML, it is all too easy to mistakenly
brush over a phishing link. Outfits like Constant Contact do their
nonprofit clients a disservice by sending stuff that I won't even
peek at. And it's an annoying chore when companies I actually want
to deal with send receipts and the like in (godawful) HTML only.
Doug
I think this was supposed to go public...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
---------- Forwarded message ----------
Date: Mon, 13 Mar 2017 11:39:45 +0000
From: Steve Simon <steve(a)quintile.net>
To: dave(a)horsfall.org
Subject: Re: [TUHS] attachments: MIME and uuencode
I still actively fight office. I wrote docx2troff and xlsx2txt.
The former can extract txt or troff source from a modern (DOCX / OPC) document,
as can the latter, though by their nature Excel tables don't map well to tbl(1).
These are written for plan9 and so the libraries are a bit different,
but they could be ported to unix without too much pain.
Shout if anyone is interested.
-Steve
As I go to bed, I wonder. Which was the earliest system that used uucp to
send mail through multiple systems to a remote user?
I see V7 has uucp/sdmail.c, but the comment says: This is only implemented
for local system mail at this time. Ditto 32V and 3BSD.
4BSD has delivermail. Its uucp has a README which says: The ``mail'' command
has been modified, so that it may only be used to send mail, when it is
invoked under a name beginning with 'r'. 3BSD has the same uucp.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=3BSD/usr/src/cmd/uucp/README
Ah, but 32V's mail.c checks for 'r':
http://minnie.tuhs.org/cgi-bin/utree.pl?file=32V/usr/src/cmd/mail.c
and so does V7:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/mail.c
So I guess I've just answered my question. It also looks like delivermail
from 4.1BSD could compile on V7, so it might be fun to try and bring a
V7 system up on uucp+mail. But will it (delivermail?) do bang paths?!
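(For context, the V7-era handoff, as I understand it, goes roughly like
this:)

    $ mail seismo\!wkt             # mail sees the '!' and hands off,
                                   # roughly equivalent to:
    $ uux - seismo\!rmail wkt      # queue 'rmail wkt' for execution on seismo
    # on seismo, uuxqt runs rmail - i.e. mail invoked under its 'r' name,
    # which is exactly the check in mail.c above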
Cheers, Warren
I just heard from a historian named Piotr Klaban with an interesting
historical sidelight.
Apparently today 3/11/17 is being publicized as the 25th anniversary of
the email attachment, citing Nat Borenstein's MIME. Piotr points out
that uuencode predates MIME, and he's right.
I checked and, while I don't have any email archives from that time
frame at Berkeley, I was able to find the 4BSD archive on minnie that
dates the uuencode.1c man page at 6/1/80. We didn't call them
attachments back then, just sending binary files by email. (Prior to
then it was common to just include the text of the file raw in the
email, which only worked for ASCII files.) It was a few years later
when cc:Mail and Microsoft Mail started calling uuencoded files embedded
in email "attachments".
When MIME came out in 1992 I became a champion of SMTP/MIME as a
standard - it was a big improvement. But uuencode predated MIME by 12 years.
Mary Ann
> From: Doug McIlroy
> Allowing more or less arbitrary attachments was a real convenience. But
> allowing such stuff to serve as the message proper was dubious at
> best.
Sorry, I'm not sure I'm completely clear what you mean there? Do you mean
'non-ASCII-text objects were processed by the mail system without being told
to do so explicitly, by the user'? That, combined with the below, is indeed a
problem.
> it also posed a security threat.
The problem isn't really so much the ability to have attachments, as that
people defined attachment types with open-ended capabilities, up to and
including what I call 'active content' - i.e. content which includes code
which is to be run.
(Yes, yes, I know - even without that, it's possible to feed 'dumb'
applications bad data, and do an intrusion; I seem to recall there was one of
those with JPEG's, so even plain images were not perfectly safe. And someone
just provided an example of one with plain ASCII. But those holes are much
harder to find/use, whereas active content is a security hole the size of a
trans-Atlantic liner.)
Without an _incredibly_ secure OS (something on the order of late-stage
Multics, when the security had been beefed up even over the original design
[the jargon to search for is 'AIM', if you're interested], or better),
bringing in 'active content' from _outside_ the system, and running it, is
daylight madness - it's an invitation to disaster.
This is true no matter _how_ such content comes in: via HTTP, with a Web
browser; via SMTP, with e-mail, whatever.
Dave Moon coined a phrase, based on an old anti-drug movie: 'TECO madness: A
moment of convenience, a lifetime of regret.' These active contents all, to
me, fall into that category. They _seem_ like a good idea, and provide
interesting capabilities - until some cracker uses one to wipe your hard
drive.
> With active text such as HTML, it is all too easy to mistakenly brush
> over a phishing link.
HTML email is another of my pet peeves/hot buttons - it's just another vector
for active content. So, for the 'convenience' of being able to send email in
multiple fonts ('eye candy', I derisively call it), we get to let malefactors
send in viruses that can wipe a hard drive.
To me, this kind of thing is professional malpractice, on a par with building
cars that catch on fire, or buildings that collapse. People need to suffer
incredibly severe penalties for propagating this kind of nonsense; maybe then
software engineers will stop valuing convenience over regret.
Noel
On Tue, Mar 7, 2017 at 10:23 AM, Dave Horsfall <dave(a)horsfall.org> wrote:
> It's been ages since I delved into UUCP; first was the
>
> "original", then HoneyDanBer.
>
Actually this is a great question for this list .. how many
implementations were created?
1.) The original 1978 version that shipped with V7 and 32/V (BSD 4.1 and
4.2)
2.) PC-UUCP for DOS came next -- I never knew how much was ripped off from
the original, because at the time Chesson's G protocol was not well
specified. The authors claimed to have reverse engineered it - I will say
it worked.
3.) Honey-Dan-Ber rewrite - most popular for a long time
4.) Taylor UUCP - the first real clone that I know of that I think was done
without looking at others' source. The G protocol had been publicly
documented by then, and the Trailblazer in fact was shipping with the
protocol embedded in it.
Any others that folks know about, and how well were they used? Did things
like Coherent have a UUCP? Linux and FreeBSD were able to use Taylor
UUCP because it became available by then. Whitesmith's Idris lacked
anything like UUCP IIRC (but was based on V6). Same with Thoth originally
at Waterloo, but by the time they shipped it as the QNX product it was V7
compliant, but I do not remember a UUCP being included in it. Minix
lacked a UUCP as I recall, but I'm hazy on that, as Andy's crew wrote a lot
of the user space. Coherent was a "full" V7 clone and included things like
the dev tools, including yacc/lex, and was released much, much before the
Taylor version came out -- so what did they use for uucp, if anything?
Does anyone remember any other implementations?
Clem
On 10 March 2017 at 03:04, Erik E. Fair <fair-tuhs(a)netbsd.org> wrote:
> See https://en.wikipedia.org/wiki/Multi-Environment_Real-Time
I'd love to get ahold of a copy of PDP-11 MERT (which surely holds no
significant trade secrets by now) to play with, since it seems like a very
historic, and possibly influential, system (given what was published about it
in the BSTJ, and elsewhere), but so far I have not been able to find it.
I had a lead to one of the authors (who's now in a very different line of
work), but so far I have yet to find the time to try and run that one down,
to see if anything came of it.
If anyone knows of such, please let me know!
Noel
> Back in the day plain ASCII wasn't really secure, either.
No need to use the past tense. I had a need to assess how much
damage one could do if allowed to feed arbitrary text into xterm.
I came away sobered.
Do not--ever--use a mail agent which will plumb unfiltered text
through to an xterm. nmh, for one:
http://savannah.nongnu.org/bugs/?36056
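To give the flavor without handing out a working exploit - a hedged
illustration of the classic trick (exact sequences vary with terminal
and configuration):

    # set the window title to a command, then ask xterm to report the
    # title back; the report arrives on stdin as if the user typed it
    printf '\033]2;rm -rf $HOME\007'   # OSC 2: set window title
    printf '\033[21t'                  # CSI 21 t: report title back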
Andy
> From: Dan Cross
> why did you consider it such a step forward? I'm really curious about
> the reasoning from folks involved with such things at the time.
This was N layers up from my zone of responsibility when I was on the IESG
(which was the internetwork layer), and I don't recall any discussion about it
on the IESG (although if you really care, there might be minutes - I don't
recall when IESG minutes started, though, perhaps this was before that). That
lack of any memory may be nothing more than a sign of my fading memory, but it
could mean it wasn't a very contentious topic.
FWIW, here's my current analysis of the issues; I doubt my analysis then
would have been substantially different.
The fundamental thing that email does is send something - originally a
section of text - from party A to party B in a way that requires no previous
setup or interaction: party B can be anyone in the entire universe of
entities which support that service. MIME is an extension of this model to
carry other types of data: images, etc.
There is a very good analogy to the pre-existing real-world mail system: that
too allows one to send things to anyone without prior special arrangement, and
it supports not only transferring text, but also sending more than that -
physical objects. This pre-existing system argues that this model of operation
is i) useful, and ii) issues raised by it have probably mostly been worked
through.
So the extension of email to carry more than just text seems like a very
plausible extension.
For the 'average' user, the ability to include images in email is a huge
improvement over any alternative. Any kind of 'pull' model (in which the
receiver has to do something to retrieve the data later from some sort of
server) requires access to such a server on the part of the sender; use of a
'push' model (in which data is sent in the same way as text, as part of a
single transfer) is clearly better.
Security issues raised by sending binary data through email are a separate
question, but I note that those issues will mostly still exist no matter how
the binary data is transferred. (E.g. the binary might contain a virus no
matter whether it's transferred via SMTP or FTP.) The ability of email to send
to anyone does raise issues in this context, but this margin is not big enough
to fully explore them.
I also do get a little uncomfortable when email is used instead of a file
transfer system, for very large files, etc, etc. The thing is that the email
system was not designed to transfer really huge objects (although the size
allowed has been going up over time). The store-and-forward model of the
email system is not really ideal for huge objects, etc, etc.
But having said all that, the extension of the email model to send content
other than pure text - images, etc - still seems like a good idea to me.
Noel
All, there might be a flurry of e-mails as the uucp/news stuff gets
set up. I think we should move the actual setup messages off-list and
keep TUHS for anecdotes & questions about the old systems. Sound OK?
If so, I can set up another list.
I noticed that seismo is not as well connected (historically) as decvax,
so I've turned seismo into decvax, and I now have three systems on three
physically different boxes:
    munnari ----------- decvax ----------- ihnp4
    at home          simh.tuhs.org    minnie.tuhs.org
   behind NAT          port 5000         port 5000
I'm happy to pass either decvax or ihnp4 on to someone if someone
else really wants one of them.
Cheers, Warren
> On Dec 31, 2016, at 8:58 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)tuhs.org
> Subject: Re: [TUHS] Historic Linux versions not on kernel.org
> Message-ID: <20161231111339.GK576(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I might be colored by the fact that I'm running Linux myself, but I'd
> say that those are almost certainly worth preserving somehow,
> somewhere. Linux and OS X are the Unix-like systems people are most
> likely to come in contact with these days
MacOS X is a certified Unix (tm) OS. Not Unix-Like.
http://www.opengroup.org/openbrand/register/apple.htm
It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the above Open Group page. The Open Group only lists the most recent release however.
The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_for_UNIX_Users_TB_July20…) also notes the compliance.
David
On 2017 Mar 9, 21:26, Josh Good wrote:
>
> And by the way, the two user limit in the "Personal Edition" of UnixWare
> 2.1 seems to be real:
>
> $ telnet 172.27.101.128
> Trying 172.27.101.128...
> Connected to 172.27.101.128.
> Escape character is '^]'.
>
>
> UnixWare 2.1 (gollum1) (pts/2)
>
> login: jgood
> Password:
> UnixWare 2.1
> gollum1
> Copyright 1996 The Santa Cruz Operation, Inc. All Rights
> Reserved.
> Copyright 1984-1995 Novell, Inc. All Rights Reserved.
> Copyright 1987, 1988 Microsoft Corp. All Rights Reserved.
> U.S. Pat. No. 5,349,642
> Last login: Tue Mar 9 20:57:05 1999 on pts000
> telnetd: set_id() failed: Too many users
> .
> Connection closed by foreign host.
>
>
> This thing was released in 1996. Obviously, with this limitation it could
> not hold a candle to the emerging Linux tsunami full of free source code.
On the subject of Linux displacing UnixWare on the PC architecture in the
mid-90's, I've found this most illuminating Usenet thread from 1994, whose
participants include Alan Cox, Ted Ts'o, and some Novell Product Managers:
http://tech-insider.org/linux/research/1994/1025.html
And what came after that, as they say, is history.
--
Josh Good
Hi all, as part of my effort to recreate part of a simulated Usenet,
I'm trying to bring up uucp, then mail, then C-news on 4.2BSD boxes.
I've got a hardwired serial port between them, and I can see a basic
uucp conversation when I do this:
munnari.oz# /usr/lib/uucp/uucico -r1 -sseismo -x7
uucp seismo (3/6-8:04-132) DEBUG (ENABLED)
. . .
uucp seismo (3/6-8:04-132) SUCCEEDED (call to seismo )
imsg >\015\012\020<
Shere\000imsg >\020<
ROK\000msg-ROK
Rmtname seismo, Role MASTER, Ifn - 5, Loginuser - uucp
. . .
I tried e-mail to seismo!wkt and wkt(a)seismo.UUCP but it's been deferred.
I now need some help with the sendmail config. I did play around with
sendmail.cf/mc way back, but it never involved uucp so I'm stuck.
Anybody want to help (and dust out those cobwebs at the same time)?
Thanks, Warren
OK, Geoff Collyer has built the C-News binaries for the 4.2 emulated
systems. They are temporarily at http://minnie.tuhs.org/Y5/Cnews/
Does someone want to try and get them up and running on an emulated system?
Also, I've build a 4.3BSD version of the emulated uucp systems. It's a
separate branch at https://github.com/DoctorWkt/4bsd-uucp. You can get it
by doing:
git clone https://github.com/DoctorWkt/4bsd-uucp.git \
--branch 4.3BSD --single-branch
Once it's solid enough I will make this the default branch, but I'll
leave the 4.2BSD branch there as well.
Thanks Geoff!
Warren
On Fri, Mar 10, 2017 at 8:15 AM, Jason Stevens <
jsteve(a)superglobalmegacorp.com> wrote:
>
> That almost reminds me to ask about the whole "open" Stanford 68000 board
> that became the Cisco AGS, and SUN 100.. and I think SGi 1000
>
Jason -- I'm not sure what you are trying to say. It was a different
time, different culture, different rules. Note: please, I'm not accusing
you of this, but I worry you are getting dangerously close to an error that I
see made by a lot of folks who grew up in the time of the GPL and the "Open
Source Culture." My apologies in advance if you think I'm going a little
too far, but I want to make something clear that seems to have been lost in
time and culture. I do not want to be seen as harassing or "shaming" in
any way. I want to make a point for everyone, since the words we use do
matter (and I realize I screw them up myself often enough).
I am fairly certain that the "SUN board" - aka the Stanford University
Network 68000 board - like UNIX itself, was licensed IP. You are correct
that the schematics (like the UNIX sources) were well known at the time and
"open" in the sense that all of the licensees had them. It was not hard to
find papers with much of the design described. In fact Andy had worked
on a similar set of boards when he was at CMU a few years earlier, for what
we called the "distributed front-end" project (the earlier version was much
weaker and had started with an Intel chip of some type which I have forgotten,
and switched to the 68000 at some point - Phil Karn might remember and even
have a copy; I think my copy has been lost to time).
Anyway, to build and sell a Multibus board based on Andy's design that he
did at Stanford as a grad student, you needed a license from Stanford. You
are correct that a lot of firms - particularly Cisco, later VLSI Technology,
Sun Microsystems, Imagen, and a host of others - took out licenses to build
that board. Thus a lot of companies built "JAWS" (just another workstation -
so-called "3M systems" with a disk), or sometimes diskless terminals as
Andy had imagined in his papers, or purpose-built boxes such as the AGS
router and the Imagen printers.
But I flinch a little when I see people call the "SUN" an "open" design.
It was "well known," but it was not what we might call "Free and Open" today.
I admit you just said "open" in your reply to Charlie and may have
meant something different; but so many people today leave the "free" off
when they say "open." *i.e.* People often incorrectly deny that Unix was
open, as it actually always was from the beginning -- if you had a license;
it just was not "free" to get same. My point is that I believe a license
for the "SUN" from Stanford was not "free" either. Same with the
"MIPS" chip technology of a few years later, also from Stanford.
Anyway, Research Universities, such as MIT, Stanford and frankly my own
CMU, have long been known for charging for licenses (not always mind you).
In fact, I laud my other institution, because I have always said the real
father of "free and open source" is my old thesis advisor, the late Don
Pederson. In the late 1960s, he founded the UCB EE "Industrial Liaison
Program," under whose auspices the original "BSD" tape would be released
years later. When he first released the first version of the
"Simulation Program with Integrated Circuit Emphasis" - aka SPICE - in
the approx '67 time frame, "dop" said:
*"I always have given away our work. It means we get to go in the back
door and talk to the engineers. My colleagues at some of the other places
license their work and they have to go in the front door like any other
salesman."*
When the CS group was added to EE a few years later, there was history,
mechanism, etc. Berkeley had been releasing source code for lots of
different projects. The Berkeley Software Distribution for Unix V6 was
just the drop for UNIX - who knew at the time the life it would spawn
(although I note SPICE is still being used, so even with UNIX's success,
SPICE still holds the record for the longest-used BSD-released code).
Anyway, "dop" used to love to remind the students of that mantra. And he
came up with it 20-25 years before Eric Raymond ever wrote his book and
started equating "open" with "Stallmanism." ;-)
Have a great one, and I hope I did not offend.
Clem
One note for those who've been away from 4.x for a while...
If you're using a console window for editing and you just wonder why the
full screen of the VT100 doesn't show up -- it's because the getty is
set down at 1200 baud for the good old LA120 DECwriter III.
Set /etc/ttys to 18console or 12console so it expects 9600 baud, and
then vi will let you use the full screen to edit.
Been a while since I ran a fake Vax under Unix.
Bill
> From: Jason Stevens
> it also appears that AOS was the router backbone of the NSFNet once
> they started to migrate off of the IMPs
Say what? IMPs were only ever used in the ARPANET (and networks built by BBN
for private clients using that technology).
The first routers used in the NSFNET were things called Fuzzballs - PDP-11's
running software from Dave Mills, driving 56KB lines.
They eventually decided they needed to step up a level, and a consortium
involving IBM won, with IBM RT PC's running AIX driving T1 lines.
Noel
I've refrained from jumping into AIX & RT/PC discussions on TUHS. It seems
more appropriate to summarize AIX history than try to correct or clarify
specifics out of context.
I wrote about 5 pages, got feedback, revised accordingly, and posted at
https://notes.technologists.com/notes/2017/03/08/lets-start-at-the-very-beg….
Charlie
On Thu, Mar 09, 2017 at 01:57:05PM +0100, Lars Brinkhoff wrote:
> Is it ok to do experimental testing with that host? I've never set up
> uucp, so I do not yet know quite what I'm doing.
Neither have I! But yes, feel free. In your SimH .ini file, put (or change)
this line to say:
attach dz line=0,Connect=simh.tuhs.org:5000
which will connect /dev/tty00 to simh.tuhs.org port 5000. Then
set up your L.sys file with a line that says:
seismo Any;9 DIR 9600 tty00 "" "" ogin:--ogin:--ogin: uucp ssword: uucp
so that the uucp site seismo can be contacted via /dev/tty00. Then you
can try doing:
# echo hello there | mail seismo\!root
<wait a few seconds>
# /usr/lib/uucp/uucico -r1 -sseismo -x7
and you should see the debug information with parts of the uucp conversation.
Cheers, Warren
On Thu, Mar 09, 2017 at 04:01:09PM -0700, John Floren wrote:
> Well, I'm trying to set up lanl-a, it's at 199.180.255.235:6666
> (theoretically). I've set it up to point at seismo but uucico hangs
> waiting for the login prompt.
OK, try this: Edit your /etc/remote file to say this for dialer:
dialer:dv=/dev/tty00:br#9600:
Now try:
# tip dialer
which should connect out over /dev/tty00 to seismo via the TCP connection.
Hit Return a few times to see if there is any response. On your host system,
do netstat -a | grep ESTAB and see if there is a TCP connection to
simh.tuhs.org:5000.
I also forgot. To be able to send e-mail, you need to add seismo to the
list of known remote sites in /usr/lib/sendmail.cf:
CWseismo
Cheers, Warren
> From: "Steve Johnson"
> This reminds me of a story from that era. One of the mainframe computers
> had the ability to place phone calls and a program was run every night
> to collect data from far-flung teletypes [which had been pre-loaded with
> data tapes]. ... One day the operators realized that there were two
> phone numbers in Nebraska that were getting called every weekday night,
> and the numbers were very similar. They suspected one was a wrong
> number, so they listened in on the calls to see which one was real. The
> phone rang in Nebraska at 2am and was answered by a sleepy man .. The
> man was heard to say "It's all right, Bertha. It's just that nut with a
> whistle again!"
Interesting: I've heard this same story, but told about TIPs and the ARPANET.
A computer at BBN was set up to regularly dial all the TIP modem lines, to
check that they were working. One line was always down, so they listened in,
and heard some human say "it's just that pervert with the whistle again".
I wonder which one was the original: anyone know for sure?
Noel
> From: Warren Toomey
> attach dz line=0,Connect=simh.tuhs.org:5000
>
> which will connect /dev/tty00
Provided that /dev/tty00 exists, and the major device type is set to the
cdevsw index for the DZ in whatever Unix you are using, and the minor device
is set to the correct value for DZ #0, line #0.... :-)
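Concretely, something like the following - where the '1' is only my guess
at the DZ's cdevsw slot on a stock 4.3BSD VAX, so check conf.c before
trusting it:

    mknod /dev/tty00 c 1 0    # char device; major = dz slot, minor = line 0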
Noel
Warren Toomey:
In ASCII at http://www.redace.org/html/logical_usenet_map_1984.html
===
That's no UUCP map. It's a USENET map: a map of netnews
propagation. No, they're not the same at all: many places
that used UUCP to exchange mail didn't participate in
netnews.
In particular I see a site I used to run with none of its
important mail links like ihnp4, and only a link to a
system I don't remember at all. I had left that site
a few weeks before that map was published, but I stayed
in touch with the folks there; had all the mail links
been torn down I'd have known. Had someone decided it
was worth while dipping a toe into netnews, though
(something I never bothered with) I might not.
In fact I suspect it would be difficult to find
believable maps for UUCP except amongst major forwarders.
At its peak it was an extremely informal network, with
lots of links that weren't published anywhere because
people at site A wanted to keep in touch with those at
sites B and C but didn't want to pay the bills to
forward mail between B and C, let alone between those
sites and places twelve time zones away.
Norman Wilson
Toronto ON
We are going to need some historical uucp maps so that we can construct
our simulated uucp network which bears some resemblance to the past.
There is a 1984 map on pages 7 to 14 of
http://www.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V05.4.pdf
As Dave mentioned, we need some key sites like ihnp4, cbosgd etc.
What other key sites? Any volunteers to run some of them?
Warren
I was trying to look at mini-unix so I mounted the disk image inside
unix v6 via:
/etc/mount /dev/rk4 /usr/mini-unix
and I noticed that if I ran the mount command as a user and not root
that /etc/mtab would not be updated (but it was updated as expected as
root). Of course /etc/mtab is owned by root :)
Then I noticed something else when I did an ls in the /usr directory:
drwxrwxrwx 20 31 368 Sep 3 1976 mini-unix
Normally I would see things like:
drwxrwxr-x 2 bin 48 May 13 1975 adm
What does the 31 mean?
Mark
http://www.thefullwiki.org/UUCP
``UUCP was originally written at AT&T Bell Laboratories, by Mike Lesk, and
early versions of UUCP are sometimes referred to as System V UUCP.''
Err, it was V7, wasn't it? That considerably predates SysV...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Okay - let's make this an easy-to-dredge-through thread so I can easily
search for stuff later.
What means of interconnection are we going to use?
I should be able to provide:
1). Actual dial-in (probably not anything above 1200 baud...if I am
lucky)
2). SIMH "virtual leased line" dial-in
3). Network mail
A map/list of interconnections would be nice. Need a central database
somewhere.
--
Cory Smelosky
b4(a)gewt.net
> [a] case where AT&T attempted to see whether its Unix code had been stolen
> Coherent?
I doubt it. The only access to Coherent that I am aware of was Dennis's
site visit (recounted in Wikipedia, no less). Steve's Yacc adventure
probably concerned another company.
Besides the affairs of Coherent and Yacc, there was a guy in
Massachusetts who sold Unix-tool lookalikes; I don't remember his name.
We were suspicious and checked his binaries against our source--bingo!
At the same time, our patent lawyer happened to be negotiating
cross-licenses with DEC. DEC had engaged the very plagiarist as
an expert to support their claim that AT&T's pile of patents didn't
measure up to theirs. After a day of bargaining, our lawyer somehow
managed to bring casual conversation around to the topic of stolen
code and eventually offered the suspect a peek at a real example.
He readily agreed that the disassembled binary on the one hand must
have been compiled from the source code on the other. In a Perry
Mason moment, the lawyer pounced: "Would it surprise you if I told
you that this is ours and that is yours?"
The discredited expert didn't appear at next day's meeting.
The lawyer returned to Murray Hill aglow about his coup.
The product soon disappeared from the market.
Doug
On Tue, Mar 7, 2017 at 5:04 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Tue, 7 Mar 2017, Dan Cross wrote:
>
> > One or more microcomputer BBS (Bulletin Board System) platforms had UUCP
> > support to bridge their store-and-forward messaging networks to USENET
> > and send email, etc. The implementation I remember off the top of my
> > head was Waffle, written by Tom Dell. [...]
>
> Was this the UUCP that was available for CP/M? I found it on the old
> Walnut Creek CD, moved it over to my CP/M box via SneakerNet (I ran CP/M
> for years, carefully avoiding DOS/WinDoze) and it worked; it was overlaid
> to hell and back hence really slow, but it worked.
>
Maybe? Though I tend to doubt it. It looks like Waffle originally ran on
the Apple II, but was fairly quickly ported to DOS and then Unix/Xenix. I
believe it was written in C, but the source code is not generally
available. More information on it is here:
http://software.bbsdocumentary.com/IBM/DOS/WAFFLE/
As I mentioned before, the BBS thing was kind of interesting. What strikes
me, however, is how closely the timing lines up with developments in the
Unix world. As Jacob mentions earlier, UUCP was "published" in February
1978 and an improved version distributed with 7th Edition in October of
that year. The first BBS was announced via an article in the November 1978
edition of Byte magazine (available online, with some information here:
https://www.wired.com/2010/02/0216cbbs-first-bbs-bulletin-board/)
For those that don't know, the whole idea behind a BBS was that a person
with a computer (usually a microcomputer), a modem, and a POTS phone line
(usually into the person's house) would run software on the machine that
answered the phone when called (assumed the remote caller was using a
modem, of course) and presented the remote user with an interface for
interacting with the local machine: most often, this was menu based. Most
often, the BBS only had one phone line and the functionality was limited:
sending and receiving simple messages, uploading and downloading files
using protocols like x- y- and zmodem (or kermit!) and maybe playing
specially written games. However, some BBSs became quite sophisticated
supporting multiple lines, interactive chat, multiplayer games and so
forth. Early software was mostly homebrew (the Byte article talks about
software *and* hardware), but eventually packaged systems emerged. There
was even a commercial marketplace for BBS software.
Around 1984, they developed a messaging "network" called Fidonet for
routing email and sending files around; the goal was to minimize
long-distance telephone charges by relaying things through nodes in the
network that were geographically "close" to the next calling region and
transmitting things in batch. Think USENET (which predated it by several
years) but much smaller in scope.
The Internet killed it for the most part, of course, but these things
developed quite the following; some are even still running, though most are
now accessible via telnet/ssh. Somewhat confusingly, some of the operators
seem to think they are some kind of alternative to the "Internet" instead
of just another application of the net. It's sort of an odd viewpoint, but
I think it comes from not being altogether all that savvy: it was mostly a
hobbyist thing. But in the BBS heyday, there were something like 100,000 of
them in North America alone.
Sorry for the wall of text, but I think the parallel between the rise of BBSs
and UUCP/USENET is interesting.
- Dan C.
Warren wrote:
> > I might call for participation
> > in a uucp/Usenet reconstruction with people running simulated nodes on
> > the Internet.
On Wed, Mar 08, 2017 at 07:47:30AM +0100, Lars Brinkhoff wrote:
> Are modern systems welcome? I always wanted a bang path address!
I can't see why not, as long as you can simulate a serial connection
with a TCP connection, and can speak uucp.
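For instance, here is a minimal sketch of the idea (my illustration only;
the port number and the uucico path are assumptions, not anything from this
thread): accept a TCP connection and hand it to a uucico-style program as
stdin/stdout, so the remote end looks like a serial line.

    /* Sketch: make a TCP connection look like a serial line to uucico. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s, c;
        struct sockaddr_in sin;

        s = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(5000);        /* arbitrary example port */
        bind(s, (struct sockaddr *)&sin, sizeof sin);
        listen(s, 1);
        c = accept(s, (struct sockaddr *)0, (socklen_t *)0);
        dup2(c, 0);                        /* socket becomes the "line" */
        dup2(c, 1);
        execl("/usr/sbin/uucico", "uucico", (char *)0);  /* path assumed */
        perror("execl");
        return 1;
    }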
Cheers, Warren
This scanned version includes all the cited manuals:
A Research UNIX Reader
Annotated Excerpts from the Programmer's Manual, 1971-1986
M. Douglas McIlroy
https://archive.org/details/a_research_unix_reader
> From: Paul Ruizendaal
> The "Research Unix Reader"
Thanks for mentioning that; I'd never heard of it. Very interesting.
A query: it seems to have been written with access to a set of manuals for the
various early versions of Research Unix. The Unix Tree:
http://minnie.tuhs.org/cgi-bin/utree.pl
has the manual pages for V3 and V4, and V6 and later, but not the other
ones. Do the manuals used for the preparation of that note still exist; and, if
so, is there any chance of getting them scanned?
(I have an auto-page-feed scanner, and volunteer to do said scanning. Someone
else is going to have to do the OCR, and back-conversion to NROFF source,
though... :-)
Noel
I spent a year or so working on this in 1977. I was wondering who wrote it.
Funny story: I once had a compile fail on Motorola's MPL compiler, which was
written in Fortran. It had so many continued comment lines that the 16-bit
column number went negative, and I got a fairly obscure error.
Anyone remember who wrote it?
Mid-year 2019 is the 50th anniversary of the creation of Unix and I've
been quietly agitating for something to be done to celebrate this. Up to
now, there's been little response.
The original Unix user's group, Usenix, will hold its Annual Technical
Conference on the west coast of the US at this time, so it would make sense
to do something in conjunction with this conference. Some suggestions:
- a terminal room with a bunch of period terminals: ASR-33s, -37s, VT100s,
VT102s, VT220s
- these connected to real/emulated Unix systems either locally or via a
terminal server and telnet to remotely emulated systems
- some graphical terminals: Sun pizza boxes, a Blit would be great
- if possible, some actual real PDP-11s, VAXen
- emulated systems: V1 to V7 Unix, 32V, the BSDs etc. In fact there are
plenty of Unix versions that we could run in emulated mode.
- Unix of course was one of the systems used to implement the Arpanet
protocols, so it would be interesting to get some of the real/emulated
systems networked together
- how about an emulated UUCP network with Usenet on top of it, and
some mail/news clients on the emulated systems.
- retro workshops/tutorials: how to edit with ed, using nroff, posting
a Usenet article, dealing with bang paths.
I'm proposing to gather a bunch of people to start the ball rolling on the
technical/demonstration side. We'd need people:
- with terminals, portable PDP-11s and VAXen, Sun boxen
- prepared to set up emulated systems
- who can help bring the networking (UUCP, Usenet, Arpanet) back to life
- willing to write and run workshops that show off this old technology
- to help set up terminal servers and all the RS-232 to telnet stuff
Some of this we can start doing now, e.g. rebuild an emulated Arpanet, UUCP,
Usenet, get emulated systems up, build front-end telnet interfaces.
Is there anybody willing to sign up for this? I think once we have some
momentum, we can tell the Usenix people and get some buy-in from them.
Post back and/or e-mail me if you can help. Thanks, Warren
It's not really Unix history, but Dartmouth's "communication files"
have so often been cited as pipes before Unix, that you may like
to know what this fascinating facility actually was. See
http://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
On 6 Mar 2017, at 12:37 , Warren Toomey wrote:
> On Mon, Mar 06, 2017 at 12:16:48PM +0100, Paul Ruizendaal wrote:
>> Hopefully I will have some time later this year to add 'direct run' emulations to the TUHS site based on this code (assuming Warren agrees). The idea would be that next to Archive and the Tree there would be emulation. A visitor would go to e.g. the V5 page of the Tree and also find a link to run V5 in emulation. From the SIMH and Nankervis sites images for:
>
> Yes please. And an 11/20 for 1st Edition Unix too :-) (my wishlist).
>
> Thanks! Warren
From a quick glance at "u0.s" it would seem that V1 has support for an RK11 disk. Also, I would assume that when the MMU is disabled, an 11/45 would boot up from a disk image with 11/20 code - at least it is worth a try. Do you have an RK11 disk image with V1 installed handy?
> From: Warner Losh
> On Wed, Mar 1, 2017 at 12:49 PM, Random832 <random832(a)fastmail.com> wrote:
>>> My understanding is that System V source of any sort is not legal to
>>> distribute.
>> surely there are big chunks of the opensolaris code that are not *very
>> much* changed from the original System V code they're based on. Under
>> what theory, then, was Sun the copyright holder and therefore able to
>> release it under the CDDL?
> Their paid-up perpetual license that granted them the right to do that?
I wonder, if they do indeed have such a license, if they have the rights to
distribute original SysV source under the CDDL? Or does that license only
apply to SysV code that they have modified? And if so, _how much_ does it have
to be modified, to qualify?
Maybe we can get them to distribute SysV under the CDDL... :-)
Noel
All, I've been running the TUHS list since 1994 and it's always been an
open list. People can say what they want, and I rely on sense and courtesy
to ensure good behaviour. I think only once before I've had to hold and vet
an individual's postings.
However, I've seen undesirable behaviour recently on the list and I've had
a substantial amount of private correspondence about it. Therefore, I've
decided to hold and vet the postings of a few list members (i.e. >1).
I don't take this step lightly; in fact, I've dithered for a while on this.
But the new policy is: if you don't show respect to other members on the
TUHS list, I will hold and vet your postings. If your postings are respectful
then there will be no hold and vet.
I will e-mail the people involved. I feel disappointed to have taken this
step, but that's the way it is.
Cheers, Warren
> From: Wesley Parish
> I think the best thing for all would be the release of the Unix SysV
> source trees under a suitable open source license.
You may think that; I may think that, we _all_ may think that.
But in the legal world, that, and $2 (or whatever the going rate is these
days) will get you a cup of coffee.
Unless someone is prepared to chivvy a rights-holder into actually _doing_
something, any talk is ... just that.
Any volunteers to make something actually happen?
Noel
Hi there,
in case someone is in need of data recovery, we managed
to do some nice work :)
http://hackaday.com/2017/03/03/raiders-of-the-lost-os-reclaiming-a-piece-of…
love all
--
[ ::::::::: 73 de IW9HGS : http://museum.freaknet.org :::::::::::: ]
[ Freaknet Medialab :: Poetry Hacklab : Dyne.Org :: Radio Cybernet ]
[ NON SCRIVERMI USANDO LETTERE ACCENTATE - NON MANDARMI ALLEGATI ]
[ *I DELETE* EMAIL > 100K, ATTACHMENTS, HTML, M$-WORD DOC and SPAM ]
The 'oldest' I have is a set of SCO UNIX 3.2V4.0 and V4.2
Mail me if you're interested
Cheers,
rudi
> Message: 1
> Date: Sat, 25 Feb 2017 16:55:25 -0800
> From: Cory Smelosky <b4(a)gewt.net>
> To: tuhs(a)minnie.tuhs.org
> Subject: [TUHS] SCO OpenDesktop 386 2.0.0
> Message-ID:
> <1488070525.154368.892915216.18B7F7A4(a)webmail.messagingengine.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hey,
>
> Does anyone have any of the floppies for OpenDesktop 2.0.0? Mine got
> damaged in a dehumidifier failure before they got to California. The
> only survivor was of all things...the QIC-24 tape (which I have read
> fine)
>
> sco-tape> tar tf file0 | more
> ./tmp/_lbl/prd=odtps/typ=u386/rel=2.0.0a/vol=1
>
> Anyone know a good starting point for attempting to install it in to a
> VM? ;)
> --
> Cory Smelosky
> b4(a)gewt.net
Yes. And I just want to point out the systems vendor's worst nightmare:
Competition from an earlier version of their own product. History is
littered with examples where something was deliberately left to wither and
die for this reason.
Apple II and IIgs: We all know that the IIgs was deliberately crippled, and
then discontinued in favour of the IIc+, as it presented a viable
alternative to the 68000-based Macs.
680x0 Macs: Apparently some licensees had 68060 Macs and accelerators in
the works, but Apple refused access to the ROMs to add the 68060 support
code, because it would have been a viable alternative to the PowerPC 603.
IBM OS/2: Was heavily DOS based (I believe it used the INT 21h API with
modifications for protected mode), but in fact was eclipsed by later
versions of DOS/Windows that were retrofitted with things like DPMI
support, hacky but effective in providing a viable alternative to OS/2.
BSD and SysIII: For a while it looked like the 32V-derived BSDs were going to
provide a viable alternative to AT&T's official developments of the same,
and it took some heavy-handed legal and political manoeuvring and backroom
deals to make sure that did not happen in the end.
AMD64 and Itanium: Enough said, a very expensive egg on face episode for
Intel. 8086/8088 and iAPX432: Same thing except it was actually Intel's own
product that provided a viable alternative to the "official" new version
rather than a competitor's development of it. Of course a similar story can
be told about 8080/Z80/8085/8086, Intel faced stiff competition from an
enhanced version of their own product before wresting back control with the
much improved 8086. A nightmare for them.
That's the real reason vendors won't open source.
Nick
On Mar 4, 2017 12:02 PM, "Henry Bent" <henry.r.bent(a)gmail.com> wrote:
On 3 March 2017 at 18:56, Wesley Parish <wes.parish(a)paradise.net.nz> wrote:
>
> And since the central Unix source trees have been static - I don't think
> Novell was much more than a
> caretaker, correct me if I'm wrong - and the last SysVR4 release of any
> consequence was Solaris - has
> Oracle done anything with it? - I think the best thing for all would be
> the release of the Unix SysV
> source trees under a suitable open source license.
There was an SVR5, even if it was not nearly the popular product that its
predecessors were. While development certainly slowed, it contained some
amount of technological progression. Obviously at this point development
has stopped completely and it probably does make sense to open source that
code base.
> (I've made a similar argument for the IBM/MS OS/2,
> DEC VAX VMS, and MS Windows and WinNT 3.x and 4.x source trees on various
> other Internet forums:
> the horse has bolted, it's a bit pointless welding shut the barn door now.
> Better to get the credit for
> being friendly and open, and clear up some residual bugs while you're at
> it ... )
Equating VMS, old versions of Windows, etc. isn't quite the same. Even old
versions of those products may well include source that contains, or is
believed by its owners to contain, novel ideas or novel implementations of
existing ideas that may have survived relatively unchanged in newer
versions. And because there is at least a reasonably sized user base for
all of the products you mentioned, corporate customers have an interest in
protecting their investment, and the software creators have an interest in
responding to the desires (or perceived desires) of their customers.
Don't get me wrong - I'd love to see a legal release of the VMS 5 source,
or Windows 3 source, or classic Macintosh source. I'm just not holding my
breath. I think the community's time would be better spent advocating for
source releases of products that are truly dead or all but dead.
-Henry
On Wed, Mar 1, 2017 at 9:13 PM, Jason Stevens <
jsteve(a)superglobalmegacorp.com> wrote:
> Slightly off or on topic, but since you seem to know, and I've never seen
> aix 370 in the wild, did it require VM?
>
It could boot on raw HW.
> Did it take advantage of SNA, and allow front ends, along with SNA
> gateways and 3270's?
>
Not sure how to answer this. It was an IBM product and could be used
with a lot of other IBM's products. Generally speaking it was aimed at the
Educational market, although there were some commercial customers, for
instance Intel was reputed to do a lot of the 486 simulation on a TCF
cluster (I don't know that for sure, that was before I worked for Intel).
>
> Or was it more of a hosted TCP/IP accessible system?
>
Clearly, if you had a PS/2 in the cluster, that was your access point. I
think it was all mixed up in the politics of the day at IBM between
Enterprise, Workstations, and Entry systems. TCP/IP and Ethernets were not
something IBM wanted to use naturally. But the Educational market did
use it and certainly some folks at IBM saw the value.
UNIX was needed for the Education market, as was TCP/IP, so that was going to
be the pointed end of the stick.
Hi!
Some of the stories on here reminded me of the fact that there's also likely
a whole boat-load of UNIX ports/variants in the past that were never released
to customers or outside certain companies.
Not talking about UNIX versions that have become obsolete or which have
vanished by now like IRIX or the original Apple A/UX (now *that* was an
interesting oddball though..) and such, but the ones that either died or
failed or got cancelled during the product development process or were never
intended to be released to the outside at all.
Personally I came across one during some UNIX consultancy work at Commodore
during the time that they were working on bringing out an SVR4 release for the
Amiga (which they actually sold for some time)
Side-note.. Interestingly enough, according to my contacts at that time inside
CBM, it was based on the much cheaper-to-license 3B2 SVR4 codebase and not the
M68k codebase, which explained some of the oddities and the lack of M68k ABI
compliance of the Amiga SVR4 release..
However..
It turned out that they had been running an SVRIII port on much older Amiga
2000's with 68020 cards for some of their internal corporate networking and
email, UUCP, etc.; it was called 'AMIX' internally. But as far as I know it
was never released to the public or external customers.
It was a fairly 'plain jane' SVRIII port with few Amiga-specific hardware
bits supported, but otherwise quite complete and pretty stable.
Worked quite well in the 4MB DRAM available on these cards. The later SVR4
didn't fare so well.. Paged itself to death unless you had 8 or even (gasp!)
16MB.
It was known 'outside' that something like this existed, as the boot ROMs on
the 68020 card had an 'AMIX' option, but outside CBM few people really knew
much about it. It may have been used at the University of Lowell, as they
developed a TI34010-based card that may already have had some support in this
release.
Still..
This does make me wonder.. Does anyone else know of these kinds of special
'snowflake' UNIX versions that never got out at various companies/institutes?
(and can talk about it without violating a whole stack of NDAs ;) )
No special reason.. Just idle curiosity :)
Likely all these are gone forever anyway, as prototypes and small-run production
devices and related software tend to get destroyed when companies go bust or
get acquired.
Bye, Arno.
> From: Dave Horsfall
> Another acronym is Esc Meta Alt Ctl Shift...
Good one!
And there was a pretty funny fake Exxx error code - I think it was
"EMACS - Editor too big"?
I was never happy with the size of EMACS, and it had nothing to do with the
amount of memory resources used. That big a binary implies a very large amount
of source, and the more lines of code, the more places for bugs... And it
makes it harder to understand, for someone working on it (to make a
change/improvement).
Noel
>Date: Tue, 28 Feb 2017 14:11:24 -0500
>From: Nemo <cym224(a)gmail.com>
>To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
>Subject: [TUHS] Was 5ESS built on UNIX?
>Message-ID:
><CAJfiPzyDkUP7aAfxQTv51MF61a4CjKSSMxduQaW+Yp0cW00y5w(a)mail.gmail.com>
>Content-Type: text/plain; charset=UTF-8
>
>>I have looked at the papers published in the AT&T Technical J. in 1985
>and found no mention of UNIX.
>
>N.
My Prentice Hall "UNIX(R) System V Release 4, Programmer's Guide:
Streams" lists AT&T copyrights from 1984 - 1990 and UNIX Systems
Laboratories, Inc. 1991-1992.
Rudi
As Corey said, administrative computers in switching centers
ran Unix, but the call-processing machines ran an unrelated
operating system. The Unix lab did influence that operating
system. Bob Morris instigated, and Joe Condon, Lee McMahon,
Ken Thompson and others built TPC (the phone company), a switching
system controlled by a PDP-11. This system actually ran the
phones in CS Research for several years. 5ESS adopted some
of TPC's architecture, though none of its code.
Doug
Dave Horsfall:
And if my failing memory
serves me correctly, [Henry Spencer] wrote C-News in conjunction with Geoff Collier, as
B-News was starting to show its age and limitations.
====
Your failing memory is correct, except that his name is spelt
Collyer, not Collier.
Norman Wilson
Toronto ON
Hi,
On the subject of Troff, this package seems to have disappeared:
flo—A Language for Typesetting Flowcharts
Anthony P. Wolfman and Daniel M. Berry
Computer Science, Technion, Haifa 32000, ISRAEL
1989
The paper about it is available but the code has gone.
Anyone have an archive of it?
-Steve
On 2017-02-27 08:26, Lars Brinkhoff <lars(a)nocrew.org> wrote:
> Tim Bradshaw wrote:
>>> David wrote:
>>> I remember that GNU Emacs launched the first time and then dumped
>>> itself out as a core file. Each subsequent launch would then ‘undump’
>>> itself back into memory. All this because launching emacs the first
>>> time required compiling all that lisp code.
>> It still works like that. Indeed that's the conventional way that
>> Lisp systems tend to work for delivering applications
> Emacs came from ITS, and many Lisps derive from Maclisp which also came
> from ITS. In ITS, it was common for applications to be dumped into a
> loadable core image, even if they were written in assembly language.
Not only in ITS. This is how things work in OS/8, for example. I believe
it is also how things work in TOPS-10, and quite possibly also in TOPS-20.
Not sure about RT-11, but I wouldn't be surprised if that's the way
there too.
Essentially, the linker leaves the image in memory. It does not write it
to a file. The command decoder then has a command for dumping your
memory to disk as a runnable image. There is some information kept
around that the linker sets up, which means you don't normally have to
tell the command decoder which parts of memory to save, or what the
start address is, and so on. But you can also give that information in
your save command.
One of the nice things of this approach is that you can load an image
into memory, and then use the debugger to look around in it, change it,
or run it. And if the program exits, it is still in memory, including
all data, which means you can check the state of everything at exit
time. And of course, if you want to, you can load a program, patch
around in it, in memory, and then run it. And, of course, you can load a
program, run some part of it, and dump it to disk at that stage, so all
initializations have been done.
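In Unix terms the trick amounts to writing the live memory image out and
arranging to reload it. A toy sketch of the first half (my illustration;
it assumes the traditional "end" linker symbol and sbrk(), and omits the
a.out header rewriting that a real undumper, like the one Emacs used, must
do):

    #include <fcntl.h>
    #include <unistd.h>

    extern char end;   /* first address past bss (linker-provided) */

    /* Dump everything between the end of the loaded image and the
       current program break -- i.e. all heap state built up so far --
       into a file, so a later run could reload it instead of
       recomputing it. */
    int dump_heap(const char *path)
    {
        char *brkp = sbrk(0);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
            return -1;
        write(fd, &end, (size_t)(brkp - &end));
        return close(fd);
    }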
Your memory is always around, and is not tied to a process that comes
and goes.
Of course, the back side of that is that you can't really run several
programs at once.
But it's not hard to see why RMS and GNU Emacs (coming from these
systems) wanted the same thing again. It does have its points.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Hmm well I am more interested in the ancient code, I am not averse to
adding improvements but I want to do so in a controlled way. Also I prefer
not to use any Sys3~5 interfaces in my current project which is exclusively
BSD.
Haha, well I have de-algoled /bin/sh twice so far: the first time was for my
uzi to Z180 port about 10 years back, and the second time was for my 4.3BSD
to Linux porting library project last month. In the intervening time I became
quite a sed wizard, and my latest de-algolizer is completely automated and
produces very nice results. It could possibly be improved by astyle's removal
of braces around single statements; I considered this too risky at the time,
but I have since realized I can compare the stripped executables to
convince myself that it does not change the logic. Indeed, I should check
the basic de-algolizer in this way also.
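For anyone who hasn't met "algol" sh: Bourne wrote the shell in C dressed
up with Algol-68-flavoured macros. Recalled from the V7 mac.h (so check the
real file before trusting the exact list), they look roughly like:

    #define IF      if(
    #define THEN    ){
    #define ELSE    } else {
    #define ELIF    } else if (
    #define FI      ;}
    #define BEGIN   {
    #define END     }
    #define LOOP    for(;;){
    #define POOL    }

    /* so   IF x==0 THEN f(); ELSE g(); FI
       expands to   if( x==0 ){ f(); } else { g(); ;}
       and a de-algolizer rewrites the former into plain C. */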
Lately I have been thinking of running all of 4.3BSD through astyle but I
hesitate to do unnecessary changes, one always regrets them when doing any
bisecting or rebasing stuff...
Nick
On Feb 28, 2017 3:43 AM, "Joerg Schilling" <schily(a)schily.net> wrote:
Derek Fawcus <dfawcus+lists-tuhs(a)employees.org> wrote:
> How about applying Geoff Collyer's change to the shell memory management
> routine available here:
>
> http://www.collyer.net/who/geoff/stak.port.c
Depends on what shell you are talking about.
The code named by you only works with a very old Bourne Shell that can be
retrieved from the server of Geoff Collyer.
If you are interested in the recent Bourne Shell (SVr4 + Solaris changes),
you better use my Bourne Shell sources that can be found inside the
schily-tools:
http://sourceforge.net/projects/schilytools/files/
The code from above will not work in a recent Bourne Shell without changes
in both Geoff Collyer's stak.c and the rest of the Bourne Shell.
Jörg
--
EMail: joerg@schily.net (home) Jörg Schilling D-13353 Berlin
joerg.schilling(a)fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL: http://cdrecord.org/private/
http://sourceforge.net/projects/schilytools/files/
Ooo. Fun. We're talking PDP-10s on a Unix list... :-)
On 2017-02-27 16:13, Arthur Krewat <krewat(a)kilonet.net> wrote:
> In TOPS-10, you could detach from your current job, login again, and
> keep going. Then, attach to the previous job, and go back and forth
> endlessly.
Right. But that is a different thing. Each terminal session only has
one job. The fact that you can detach that, and log in as a new session,
is a different concept.
> As for keeping memory around, it was very common on TOPS-10 to put code
> in a "hiseg" that would stick around, and was shareable between "jobs".
Yes. Again, that is a different thing as well. Hisegs are more related
to shared memory.
I assume you know all this, so I'm not going to go into details.
But having the memory around for a program, even if it is not running,
is actually sometimes very useful. If ITS could handle that, while
treating them as separate processes, all associated to one terminal, and
let you select which one you were currently fooling around in, while the
others stayed around, that is something I don't think I've seen elsewhere.
> For something like EMACS, it would be very efficient to have the first
> person run it "compile" all the LISP, leave it in the hiseg, and other
> jobs can then run that code.
That would work, but it would then require that all other users be
suspended until the first user actually completes the initialization,
and after that, all the memory must be readonly.
> Not knowing anything about EMACS, I'm not sure that compiled code was
> actually shareable if it was customized, just thinking out loud.
You can certainly customize and save your own image. But the general
bootstrapping of Emacs consists of starting up the core system, and then
loading a whole bunch of modules and configurations. All that loading
and parsing of those files into data structures in memory is quite cpu
intensive.
Once all that processing is finished, you can start editing.
Each person essentially wants all that work done, no matter what they'd
like to do later. So, Emacs does it once, and then saves the state at
the point where you can start editing.
But it does not mean that the memory is shareable. It's full of various
data structures, and code, and that will change as you go along editing
things as well.
> But even without leveraging the hiseg capability, it was relatively easy
> to save an entire core image back to a .SAV or .LOW or later a .EXE. I
> don't remember how easy it was to do that programmatically, but it was
> easy from the terminal and if it saves a lot of processor time (and
> elapsed time) people would have been happy to do it manually.
Indeed. Like I said, TOPS-10 has the same concept as Emacs does today.
But there it was essentially what you always did.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I ported GNU Emacs to the Celerity product line mostly because most of the programmers there wanted it over vi. Not me, I’m a vi guy.
I remember that GNU Emacs launched the first time and then dumped itself out as a core file. Each subsequent launch would then ‘undump’ itself back into memory. All this because launching emacs the first time required compiling all that lisp code.
Does anyone else remember this?
David
On 26 February 2017 at 12:28, Andy Kosela <andy.kosela(a)gmail.com> wrote:
[...]
> Are you sure it was emacs? Most probably it was pico, which was the default
> editor for pine. We used pine/pico for all email at our university in the
> 90's. It was wildly popular.
Ah well, I am not sure -- that betrayed my emacs bias. I saw ^X^C and
assumed emacs.
N.
> From: Deborah Scherrer
> On 2/25/17 11:25 AM, Cory Smelosky wrote:
>> MtXinu is something I really want.
> I worked there for 10 years (eventually becoming President). I'll try
> to dig up a tape.
Say what you will about RMS, but he really did change the world of software.
Most code (except for very specialized applications) just isn't worth much
anymore (because of competition from open source) - which is part of why all
these old code packages are now freely available.
Although I suppose the development of portability - which really took off with
C and Unix, although it had existed to some degree before that, q.v. the tools
collection in FORTRAN we just mentioned - was also a factor; it made it
possible to amortize code writing over a number of different types of
machines.
There were in theory portable languages beforehand (e.g. PL/1), but I think it
probably over-specified things - e.g. it would be impossible to port Multics
to another architecture without almost completely re-writing it from scratch,
the code is shot through with "fixed bin(18)"'s every other line...
Noel
On 26 February 2017 at 07:46, Michael Kjörling <michael(a)kjorling.se> wrote:
> On 26 Feb 2017 07:39 -0500, from jnc(a)mercury.lcs.mit.edu (Noel Chiappa):
>> I was never happy with the size of EMACS, and it had nothing to do with the
>> amount of memory resources used. That big a binary implies a very large amount
>> of source, and the more lines of code, the more places for bugs...
>
> But remember; without Emacs, we might never have had _The Cuckoo's
> Egg_. Imagine the terror of that loss.
Hhhmmm.... I must dig my copy out of storage because I do not remember
emacs in there.
As for emacs uses, my wife was on (non-CS) staff at a local college
affiliated with U of T. At the time, DOS boxes sat on staff desks and
email was via a telnet connection to an SGI box somewhere on campus.
A BATch file connected and ran pine but shelled out to an external
editor. What was the editor? Well, I saw her composing a message
once and ending the editor session by ^X^C.
N.
Wasn't the default FS type S51K? With limitations like 14-character file
names only, and no symbolic links?
>Date: Sun, 26 Feb 2017 11:13:25 -0500
>From: Arthur Krewat <krewat(a)kilonet.net>
>To: Cory Smelosky <b4(a)gewt.net>, Jason Stevens
> <jsteve(a)superglobalmegacorp.com>, tuhs(a)minnie.tuhs.org
>Subject: Re: [TUHS] SCO OpenDesktop 386 2.0.0
>Message-ID: <f5a1d513-3cc1-6a4d-64a3-669b49d7226f(a)kilonet.net>
>Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>What filesystem type does it use for root/boot/whatever?
>
>Install operating system "X" that supports that filesystem type in the
>virtual guest, create a new disk, newfs/mkfs it, arrange the bits from
>the tape, take the newly-assembled disk and move to another VM and try
>to boot it.
>
>Not remembering anything about how SVR3.2 boots (I think that's what
>Opendesktop is?) that's the end of my help on the subject :)
Hey,
Does anyone have any of the floppies for OpenDesktop 2.0.0? Mine got
damaged in a dehumidifier failure before they got to California. The
only survivor was of all things...the QIC-24 tape (which I have read
fine)
sco-tape> tar tf file0 | more
./tmp/_lbl/prd=odtps/typ=u386/rel=2.0.0a/vol=1
Anyone know a good starting point for attempting to install it in to a
VM? ;)
--
Cory Smelosky
b4(a)gewt.net
> On 26 Feb 2017 07:39 -0500, from jnc(a)mercury.lcs.mit.edu (Noel Chiappa):
>> I was never happy with the size of EMACS, and it had nothing to do with
>> the amount of memory resources used. That big a binary implies a very
>> large amount of source, and the more lines of code, the more places for
>> bugs...
GNU Emacs 26.0.50 (GTK+ Version 3.22.8) of 2017-02-25 (Fedora 25, Kernel 4.9.11):
Virtual: 794.6, Resident: 36.8
> From: Joerg Schilling
> He is a person with a strong ego and this may have helped to spread
> Linux.
Well, I wasn't there, and I don't know much about the early Linux versus
UNIX-derivative contest, but from personal experience in a similar contest
(the TCP/IP versus ISO stack), I doubt such personal attributes had _that_
much weight in deciding the winner.
The maximum might have been that it enabled him to keep the Linux kernel
project unified and heading in one direction. Not inconsiderable, perhaps, if
there's confusion on the other side...
So there is a question here, though, and I'm curious to see what others who
were closer to the action think. Why _did_ Linux succeed, and not a Unix
derivative? (Is there any work which looks at this question? Some Linux
history? If not, there should be.)
It seems to me that the key battleground must have been the IBM PC-compatible
world - Linux is where it is now because of its success there. So why did
Linux succeed there?
Was it that it was open-source, and the competitor(s) all had licensing
issues? (I'm not saying they did, I just don't know.) Was it that Linux worked
better on that platform? (Again, don't know, only asking.) Perhaps there was
an early stage where it was the only good option for that platform, and that's
how it got going? Was it that there were too many Unix-derived alternatives,
so there was no clarity as to what the alternatives were?
Some combination of all of the above (perhaps with different ones playing a key
role at different points in time)?
Noel
All,
I'm dumping as much BSD/OS stuff as I can tonight. This includes: SPARC,
sources, and betas.
Unable to dump any floppies, however.
--
Cory Smelosky
b4(a)gewt.net
> Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home.
Wonderful story. It reminded me of the charming book, "Five Finger Discount"
by Helene Stapinski, whose father brought home truckfall steaks.
Thanks for sharing the tale.
Doug
Is it worth putting a copy of this mailing list into the Unix Archive?
I don't want to dump the mbox in, as it has all our e-mail addresses:
spam etc. I could symlink in the monthly text archives, e.g.
http://minnie.tuhs.org/pipermail/tuhs/2016-December.txt.gz
What do you think? Perhaps in Documentation/TUHS_Mail?
Warren
It’s embarrassing to mention this, but I thought I’d share.
I’ve always wondered what on earth a TAHOE was, as they disappeared just about as quickly as they came out. As we all know that they were instrumental from separating out the VAX code from 4.3BSD in the official CSRG source. I was looking through old usenet stuff when I typed in something wrong, and came across people looking for GCC for the Tahoe running BSD. (http://altavista.superglobalmegacorp.com/usenet/b128/comp/sys/tahoe/79.txt)
In article <2287(a)trantor.harris-atd.com>, bbadger@x102c (Badger BA 64810) writes:
`We have a Harris HCX-9 computer, also known as a Tahoe, and we'd like to
`get gcc and g++ up and running. I haven't seen anything refering to
`the HCX or any Tahoe machines in the gcc distribution. Anyone have it?
`Working on it? Pointers to who might? Know if Berkely cc/ld/asm is PD?
Turns out they were using Harris minis called the HCX-9. That's when I went back to the source and saw this:
#
# GENERIC POWER 6/32 (HCX9)
#
machine tahoe
cpu "TAHOE"
ident GENERIC
So if anyone else is wondering what a Tahoe was, did it exist, were there actual sales, are there pictures of it, etc., the answer is yes: it was a real machine, it was sold, and there are even print ads in Computerworld.
I thought it was interesting though.
Sent from Mail for Windows 10
Since the X86 discussions seem to have focused on BSD & Linux, I thought I
should offer another perspective.
TLDR: I worked on System V based UNIX on PCs from 1982 to 1993. IMO,
excessive royalties & the difficulty of providing support for diverse
hardware doomed (USL) UNIX on x86. It didn't help that SCO was entrenched in
the PC market and slow to adopt new UNIX versions.
Longer Summary:
From 1975-82 at IBM Research and the UT-Austin C.S. dept, I tried to get access
to UNIX but couldn't.
At IBM Austin from '82 to '89, I worked on AIX and was involved with IBM's
BSD for RT/PC.
Starting in '89, I was the executive responsible for Dell UNIX
(https://notes.technologists.com/notes/2008/01/10/a-brief-history-of-dell-un…)
for most of its existence.
The royalties Dell paid for SVR4 plus addons were hard to bear. Those
royalties were at least an order of magnitude greater than what we paid to
Microsoft.
We couldn't support all of the devices Dell supplied to customers, certainly
couldn't afford to support hardware only supplied by other PC vendors.
SCO had dominant marketplace success with Xenix and SVRx products, seemingly
primarily using PCs with multiport serial cards to enable traditional
timesharing applications. Many at Dell preferred that we emphasize SCO over
Dell SVR4.
When I joined my first Internet startup in 1996 and had to decide what OS to
use for hosting, I was pretty cognizant of all the options. I had no hands
on Linux experience but thought Linux the likely choice. A Linux advocate
friend recommended I choose between Debian and Red Hat. I chose Red Hat and
have mostly used Red Hat & Fedora for my *IX needs since then.
Today, Linux device support is comprehensive, but still not as complete as
with Windows. I installed Fedora 24 on some 9 and 15 year old machines last
week. The graphics hardware is nothing fancy, a low end NVIDIA card in the
older one, just what Intel supplied on their OEM circuit boards in the newer
one. Windows (XP/7/10) on those machines gets 1080p without downloading
extra drivers. (Without extra effort??) Fedora 24 won't do more than
1024x768 on one and 1280x1024 with the other.
Charlie
(somewhat long story)
After reading all the stories about how Unix source was protected and hard to access to I’ve got to say that my experience was a little different.
I was at UCSD from 76-80 when UCSD got a VAX and I think it was running 32V at the time. Well being a CS student didn’t get you access to that machine, it was for the grad students and others doing all the real work.
I became friends with the admin of the system (sdcsvax) and he mentioned one day that the thing he wanted more than anything else was more disks. He had a bunch of the removable disk packs and wanted a couple more to swap out to do things like change the OS quickly etc.
My dad worked for CDC at the time, and he was making removable media of the same type that the VAX was using. My luck. I asked him about getting a disk pack, or two. He said that these things cost thousands and he couldn’t just pick them up and bring them home.
Then one day a couple of them ‘fell off a truck’ and my Dad just happened to be there to pick them up and bring them home. You know, so the kids could see what he did for a job.
I took them into the lab and gave them to the admin, who looked at the disks, then at me, and asked what I wanted in exchange. I asked for a seat at the VAX, with full access.
Since then I’ve had a ucsd email account, and been a dyed in the wool Unix guy.
David
All, thanks to the hard effort of Noel Chiappa and Paul Ruizendaal,
we now have a couple of early Unix systems with networking modifications.
They can be found in:
http://www.tuhs.org/Archive/Distributions/Early_Networking/
I'll get them added to the Unix Tree soon.
Cheers, Warren
On Tue, Feb 21, 2017 at 9:25 PM, Steve Nickolas <usotsuki(a)buric.co> wrote:
> I started screwing around with Linux in the late 90s, and it would be many
> years before any sort of real Unix (of the AT&T variety), in any form, was
> readily available to me - that being Solaris when Sun started offering it
> for free download.
See my comment to Dan. I fear you may not have known where to look, or whom
to ask. As I asked Dan, were you not at a university at the time? Or were
you running a Sun or the like -- i.e. working with real UNIX, but working
for someone with a binary license, not sources from AT&T (and UCB)?
I really am curious, because I have heard this comment before and never
really understood it: the sources really were pretty much available
to anyone that asked. Most professionals and almost any/all
university students did have source access if they asked for it. That is
part of why AT&T lost the case. The trade secret was out, by definition.
They were required by the 1956 consent decree to make the trade secrets
available. A couple of my European university folks have answered that their
schools kept the sources really locked down. I believe you; I never saw
that at places like Cambridge, Oxford, Edinburgh, Darmstadt or other places I
visited in those days in Europe. The same was true of CMU, MIT, UCB et al.
where I had been in the USA, so my experience was different.
The key is that, by definition, UNIX was available, and there were already
versions, from AT&T or not, "in the wild." You just needed to know where to
look and whom to ask. The truth is that the UCB/BSDi version of UNIX was
based on the AT&T trade secret, as was Linux, Minix, Coherent and all of
the other "clones" -- aka look-alikes -- and many of those sources were
pretty available too (just as Minix was to Linus, and 386BSD was too, but
he did not know where/whom to ask).
So a few years later, when the judge said these N files might be tainted
by AT&T IP but nothing more could be claimed, the game was over. The problem
was that when the case started, techies (like me, and I'm guessing Larry, Ron
and other ex-BSD hackers that "switched") went to Linux and started
making it better, because we thought we were going to lose BSD.
The fact is, if we had lost BSD, we would legally have lost Linux too; but we
did not know that until after the dust settled. But by that time, many
hackers had said it was good enough, and made it work for everyone.
As you and Dan have pointed out, many non-hackers did not know that UNIX
really was available, so they went with Linux because they thought they had
no other choice, when in fact they did; and that to me was the sad
part of the AT&T case.
A whole generation never knew, and by the time they did have a choice, a
few religions had begun and new wars could be fought.
Anyway - that's my thinking/answer to Noel's original question
of why Linux won over the PC/UNIX strains... I think we all agree that
one of the PC/UNIXes was going to be the winner; the question really is why
did Linux win, and not a BSD flavor?
Tonal languages are real fun. I'm living and working in Bangkok,
Thailand and slightly tone deaf am still struggling.
Which reminds me, regarding binary there are 10 types of people, those
who understand and those who don't :-)
Cheers,
rudi
Noel:
Instead, you have to modify the arguments so that the re-tried call takes up
where it left off - in the example above, tries to read 5 characters, starting
5 bytes into the buffer). The hard part is that the return value (of the
number of characters actually read) has to count the 5 already read! Without
the proper design of the system call interface, this can be hard - how does
the system distinguish between the _first_ attempt at a system call (in which
the 'already done' count is 0), and a _later_ attempt? If the user passes in
the 'already done' count, it's pretty straightforward - otherwise, not so
much!
====
Sometime in the latter days of the Research system (somewhere
between when the 9/e and 10/e manuals were published), I had
an inspiration about that, and changed things as follows:
When a system call like read is interrupted by a signal:
-- If no characters have been copied into the user's
buffer yet, return -1 and set errno to EINTR (as would
always have been done in Heritage UNIX).
-- If some data has already been copied out, return the
number of characters copied.
So no data would be lost. Programs that wanted to keep
reading into the same buffer (presumably until a certain
terminator character is encountered or the buffer is full
or EOF) would have to loop, but a program that didn't loop
in that case was broken anyway: it probably wouldn't work
right were its input coming from a pipe or a network connection.
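A minimal sketch of the loop such a program needs (mine, for illustration;
not Norman's code):

    #include <errno.h>
    #include <unistd.h>

    /* Read exactly n bytes, riding out signals and partial reads.
       Returns the count read; a short count means EOF. */
    ssize_t readn(int fd, char *buf, size_t n)
    {
        size_t done = 0;

        while (done < n) {
            ssize_t r = read(fd, buf + done, n - done);
            if (r < 0) {
                if (errno == EINTR)   /* nothing copied yet: just retry */
                    continue;
                return -1;
            }
            if (r == 0)               /* EOF */
                break;
            done += r;                /* partial read: pick up where we left off */
        }
        return (ssize_t)done;
    }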
I don't remember any programs breaking when I made that change,
but since it's approaching 30 years since I did it, I don't
know whether I can trust my memory. Others on this list may
have clearer memories.
All this was a reaction to the messy (both in semantics and
in implementation) compromise that had come from BSD, to
have separate `restart the system call' and `interrupt the
system call' states. I could see why they did it, but was
never satisfied with the result. If only I'd had my inspiration
some years earlier, when there was a chance of it getting out
into the real world and influencing POSIX and so on. Oh, well.
Norman Wilson
Toronto ON
>On Tue, 21 Feb 2017 19:08:33 -0800 Cory Smelosky wrote:
>
>>On Tue, Feb 21, 2017, at 17:22, Rudi Blom wrote:
>> Probably my (misplaced?) sense of humour, but I can't help it.
>>
>> After reading all comment I feel I have to mention I had a look at
>> freeDOS :-).
>>
>> Cheers,
>> rudi
>
>Do I need to pull out TOPS-10 filesystem code now, too? ;)
In 1967 I was 12 and had probably barely discovered science fiction
novels and computers.
Just quickly downloaded a TOPS-10 OS Commands Manual (from 1988), but
found no mention of the Level-D filesystem.
Probably my (misplaced?) sense of humour, but I can't help it.
After reading all comment I feel I have to mention I had a look at freeDOS :-)
Cheers,
rudi
All, after getting your feedback, I've reorganised the Unix Archive at
http://www.tuhs.org/Archive/
I'm sure there will be some rough edges, let me know if there is anything
glaringly obvious.
I'd certainly like a few helpers to take over responsibility for specific
sections, e.g. UCB, DEC.
Cheers all, Warren
P.S It will take a while for the mirrors to pick this up.
> 2) **Most** Operating systems do not support /dev/* based access to SCSI.
> This includes a POSIX certified system like Mac OS X.
>
> 3) **Most** Operating systems do not even support a file descriptor based
> interface to SCSI commands.
> This includes a POSIX certified system like Mac OS X.
Had Ken thought that way, Unix's universal byte-addressable file format
would never have happened; this mailing list would not exist; and we
all might still be fluent in dialects of JCL. dd was sufficient glue
to bridge the gap between Unix and **Most** Operating Systems.
Meanwhile everyday use of Unix was freed from the majority's folly.
Doug
> From: Larry McVoy
> The DOS file system, while stupid, was very robust in the face of
> crashes
I'm not sure it's so much the file system (in the sense of the on-disk
format), as how the system _used_ it (although I suppose one could consider
that part of the FS too).
The duplicated FAT, and the way file sectors are linked using it, is I suppose
a little more robust than other designs (e.g. the way early Unixes did it,
with indirect blocks and free lists), but I think a lot of it must have been
that DOS wrote stuff out quickly (as opposed to e.g. the delayed writes on
early Unix FS's, etc). That probably appoximated the write-ordering of more
designed-robust FS's.
Noel
> From: Diomidis Spinelli
> Arguably, the same can also be claimed for the networking system calls.
Well, it depends on exactly what you mean by "networking system calls". If
you mean networking a la BSD, perhaps.
However, I can state (from personal experience :-) that the I/O architecture
circa V6/V7 was not very suitable for TCP/IP internetworking (with its
emphasis on an un-reliable network, and smart endpoints). The reason is that
such networking doesn't really fit well into the 'start one I/O operation and
then block the process until it completes' model.
Yes, if you have an application running on top of a reliable stream, you
might be able to coerce that into the 'uni-directional, blocking' I/O model
(if the reliable stream implementation is in, or routed through, the kernel),
but lots of other things don't work so well. (Think, e.g., of an interface
with asynchronous, unpredictable RPC calls in both directions.)
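The primitive such code wants is readiness notification across several
descriptors at once. A minimal sketch of the style, using the much later
4.2BSD select() purely as an illustration of what V6/V7 lacked:

    #include <sys/select.h>
    #include <unistd.h>

    /* Service a network connection and a local pipe as data arrives
       on either, instead of committing the process to one blocking
       read() at a time. */
    void serve(int netfd, int pipefd)
    {
        char buf[512];
        int nfds = (netfd > pipefd ? netfd : pipefd) + 1;
        fd_set rfds;

        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(netfd, &rfds);
            FD_SET(pipefd, &rfds);
            if (select(nfds, &rfds, (fd_set *)0, (fd_set *)0, 0) < 0)
                continue;         /* e.g. interrupted by a signal: retry */
            if (FD_ISSET(netfd, &rfds) && read(netfd, buf, sizeof buf) > 0) {
                /* ... handle an unsolicited inbound call ... */
            }
            if (FD_ISSET(pipefd, &rfds) && read(pipefd, buf, sizeof buf) > 0) {
                /* ... handle a local request ... */
            }
        }
    }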
Noel
> Linus had the qualities of being a good programmer, a good architect,
> and a good manager. I've never seen all 3 in a person before or since.
No comment about Linus, but Vic Vyssotsky is my pick for the title.
He created the first dataflow language (in 1960!). He invented
bit-parallel flow analysis and put it into Fortran 2 years later.
He was one of the technical triumvirs for Multics. Ran several
big development groups at Bell Labs, and was 2 levels up from
the Unix team in Research. I could go on and on. What he
didn't do was publish; he got ahead on pure innate ability
and brilliant insight--a profound influence on almost all
the original Unix crowd.
Doug
I don't know if it's worth even trying to find and mirror pre-1993 (i.e. when cheap CD-ROM mastering became possible) GNU software?
Things like binutils, gas, and GCC can be tremendously useful, along with binaries for long-"dead" platforms.
I know that I've always been super thankful to the GNAT people for having some pre-compiled versions of the Ada translator which would also include GCC. Sometimes having some kind of native toolset is a big positive when you don't have anything, especially with earlier versions that have issues with cross or Canadian cross compiling.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
OMG. I don't know how many times I've consulted the Unix
Tree and blissfully ignored the cross-links that come at
the top of every file--I'm so intent on the content.
Apologies for cluttering the mailing list about a solved topic.
Doug
>Date: Sun, 19 Feb 2017 20:58:59 -0500
>From: Clem Cole <clemc(a)ccc.com>
>To: Nick Downing <downing.nick(a)gmail.com>
>Cc: Jason Stevens <jsteve(a)superglobalmegacorp.com>,
> "tuhs(a)minnie.tuhs.org" <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>Message-ID: <CAC20D2NM_oyDz0tAM2o5_vJ8Ky_3fHoAmPHn8+DOqNwKoMyqfQ(a)mail.gmail.com>
>Content-Type: text/plain; charset="utf-8"
>
>On Sun, Feb 19, 2017 at 7:29 PM, Nick Downing <downing.nick(a)gmail.com> wrote:
...
>Anyway, Tru64 is based on OSF/1 but also has a lot of DEC proprietary
>things (like TruClusters and anything Alpha specific) that goes beyond the
>based OSF license, so you need the HP clearance before any of that can be
>made available [same is true for HP/UX of course]. To my knowledge,
>DEC/Compaq/HP never released the sources to Tru64 (or HP/UX) to the world
>they way Sun did for Solaris, which in the case of Tru64 is sort of shame.
>there is some very good stuff in there like the file systems, the lock
>managers, cluster scaling, messaging, etc - which would be nice to compare
>to today's solutions. Since HP did have a bought out AT&T license, that
>clearly could have done so, but I do not think anyone left there to make
>that decision - sigh.
As far as I know, only the TRU64 Advanced File System (aka AdvFS) has
been released to the open-source community, in 2008. Status now unknown
(to me).
See also
. http://advfs.sourceforge.net
. https://www.cyberciti.biz/tips/download-tru64-unix-advanced-filesystem-advf…
Cheers,
rudi
Wow that'd be incredible!!!
I'd love to see how Mach 2.5/4.3BSD compared to the Mach 3.0/Lites 1.1 that
is as close as I've been able to find... I know about the NeXT stuff, as I
have NS 3.3 installed, although running it on 'white' hardware gets harder
and harder as PCs get newer and the IDE controllers are just too featureful,
and too new, for NS to deal with; beyond that, it can only use 2GB disks
properly. Obviously with no source or any way to get in to write drivers or
update the FFS on NeXTSTEP, it's basically stuck on those P1-era machines, or
emulation. There is even Previous, a 68030/68040 cube emulator for running
all the 'native' versions.
Archive what you can; I can only contribute minor things I stumble upon,
mostly by accident.
> ----------
> From: Atindra Chaturvedi
> Reply To: Atindra Chaturvedi
> Sent: Friday, February 17, 2017 11:47 PM
> To: jsteve(a)superglobalmegacorp.com; tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>
> Amazing - brings back memories. I was a Unix "enterprise IT user" not a
> "kernel developer guru" back in the day working at a pharmaceutical
> company and was responsible for moving the company off IBM 3090 and SNA to
> Unix and TCP/IP.
>
> Used to buy the new Unix-like releases as they were available to stay
> current - including the Mt. Xinu Mach 386 distro. I still have it and will
> happily send it to the archives - if I can be guided a bit.
>
> Ran the Mt. Xinu for many years as my home machine - it is pre-SCSI for
> booting ( needs ESDI disks ) but was very stable. So will need tweaking to
> boot/install.
>
> Happy to have worked in the mid-70 - 80's era when there were huge changes
> in computer hardware and software technology. I have my books and the
> software for all the cool stuff as it came out in those days - some day I
> will compile it and send it to where it can be better used or archived as
> history.
>
> Atindra.
>
>
>
> -----Original Message-----
> From: jsteve(a)superglobalmegacorp.com
> Sent: Feb 17, 2017 6:30 AM
> To: "tuhs(a)minnie.tuhs.org"
> Subject: [TUHS] Mach for i386 / Mt Xinu or other
>
>
>
> While testing a crazy project I wanted to get working I came across
> this ancient link:
>
>
>
>
> http://altavista.superglobalmegacorp.com/usenet/b182/comp/os/mach/542.txt
>
>
>
> --------8<--------8<--------8<--------8<
>
>
>
> Newsgroups: comp.os.mach
>
> Subject: Mach for i386 - want to beta?
>
> Message-ID: <1364(a)mtxinu.UUCP>
>
> Date: 2 Oct 90 17:12:19 GMT
>
> Reply-To: scherrer(a)mtxinu.COM (Deborah Scherrer)
>
> Organization: mt Xinu, Berkeley
>
> Lines: 24
>
>
>
> Mt Xinu is currently finishing up its release of 2.6 MSD for the
> i386.
>
> 2.6 MSD is a CMU-funded standard distribution of the Mach kernel,
>
> release-engineered with the following:
>
> 2.5 Mach kernel, with NFS & BSD-tahoe enhancements
>
> Transarc's AFS
>
> X11R4
>
> most of the 4.3-tahoe BSD release
>
> Andrew Tool Kit
>
> Camelot transaction processing system
>
> Cornell's ISIS distributed programming environment
>
> most of the FSF utilities
>
> a few other nifty things
>
>
>
> --------8<--------8<--------8<--------8<
>
>
>
> Was any of this stuff ever saved? I know on the CSRG CD there is
> some buried source for Mach 2.5 although I haven't seen anything on where
> to even start to compile it, how or even how to boot it... I know Mach is
> certainly not fast, nor all that 'small' but it'd be interesting to see a
> 4.3BSD on a PC!
>
>
> That's what the Unix Tree is for!
Yes, but it doesn't have cross-links as far as I know.
What I have in mind is effectively one more entry in
the root. Call it "union" perhaps. In a leaf of that
tree, say /union/usr/src/cmd/find, will be a page that
links to all the "find" sources in the other systems.
I don't know the range of topologies in the Unix Tree.
For example, some systems may have /src while others
have /usr/src. That could be hidden completely by
simply not revealing the path names. Alternatively
every level in the union tree could record its cousins
in the various systems, as well as its children
in the union system.
Doug
If things are filed by provenance, one useful kind of
cross-linking would be a generic tree whose "leaves"
link to all the versions of the "same" file. All the
better if it could also indicate the degree of
relatedness of the versions--perhaps an inferred
evolutionary tree or a shaded grid, where the
intensity of grid point x,y shows the relatedness
of x and y.
doug
Hi all, I think the current layout of the Unix Archive at
http://www.tuhs.org/Archive/ is starting to show its limitations as we get
more systems and artifacts that are not specifically PDP-11 and Vax.
I'm after some suggestions on how to reorganise the archive. Obviously
there are many ways to do this. Just off the top of my head, top level:
- Applications: things which run at user level
- Systems: things which have a kernel
- Documentation
- Tools: tools which can be used to deal with systems and files
Under Applications, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Systems, a set of directories for specific organisations (e.g. Research,
USL, BSD, Sun, DEC etc.). In each of these, directories for each system.
Under Documentation, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Tools, subdirectories for Disk, Tape, Emulators etc., then subdirs
for the specific tools.
Does this sound OK? Any refinements or alternate suggestions?
Cheers, Warren
Hi all, to quickly answer one recent question. If you want to upload
something Unix-related for me to archive, you can anonymous ftp upload to
ftp://minnie.tuhs.org/incoming/
Nobody can list the directory contents, so it's good for sensitive files.
If you upload something called xyz, can you also add xyz_Readme which might
describe e.g. what the thing is, where it came from, file format (e.g.
floppy images), how to install it, any other useful information.
If you think it can be added to the public Unix Archive at
http://www.tuhs.org/Archive/, or if the file definitely can't be added
and I should move it to the hidden archive, also say so. Also feel free
not to disclose your identity.
Cheers, Warren
P.S Work has become busy this year. I might call for people to help
out with the curation. Any volunteers? Discretion is a pre-requisite.
> From: Atindra Chaturvedi
> including the Mt. Xinu Mach 386 distro. I still have it and will happily
> send it to the archives
Oh, that's fantastic. It's so important that everyone who has these chunk of
computing history make sure they make it into repositories!
> I have my books and the software for all the cool stuff as it came out
> in those days - some day I will compile it and send it to where it can
> be better used or archived as history.
Please do! And everyone else, please emulate! (I'm already doing my bit! :-)
Noel
> OK, we're starting to get through all the clearances needed to release
> the non-MIT Unix systems
We have now completed (as best we can) the OK's for the 'BBN TCP/IP V6 Unix',
and I finally bestirred myself to add in the documentation I found for it,
and crank out a tarball, available here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tmp/bbn.tar
It includes all the documentation files I found for the Rand and BBN code (in
the ./doc directory); included are the original NROFF source to the two Rand
publications about ports, and several BBN reports.
This is an early TCP/IP Unix system written at BBN. It was not the first
TCP/IP Unix; that was one done at BBN in MACRO-11, based on a TCP done in
MACRO-11 by Jim Mathis at SRI for the TIU (Terminal Interface Unit).
This networking code is divided into three main groups. First there is
code for the kernel, which includes IPC enhancements to Unix, including
Rand ports, as well as further extensions to that done at BBN for the
earlier TCP - the capac() and await() calls. It also includes an IMP
interface driver (the code only interfaced to the ARPANET at this point in
time). Next, TCP is implemented as a daemon which ran as a single process
which handled all the connections. Finally, other programs implement
applications; TELNET is the only one provided at this point in time.
The original port code was written by Steven Zucker at Rand; the extensions
done at BBN were by Jack Haverty. The TCP was mostly written by Mike
Wingfield, apparently with some assistance by Jon Dreyer. Dan Franklin
apparently wrote the TELNET.
Next, I'll be working on the MIT-CSR machine. That's going to take quite a
while - it's a whole system, with a lot of applications. It does include FTP,
SMTP, etc, though, so it will be a good system for anyone who wants to run V6
with TCP on a /23. We'll have to write device drivers for whatever networking
cards are out there, though.
Noel
> From: Larry McVoy
> Are you sure? Someone else said moshi was hi and mushi was bug. Does
> mushi have two meanings?
Yes:
http://www.nihongodict.com/?s=mushi
Actually, more than two! Japanese is chock-a-block with homonyms. Any
given Japanese word will probably have more than one meaning.
There's some story I don't quite recall about a recent Prime Minister who
made a mistake of this sort - although now that I think about it, it was
probably the _other_ kind of replication, which is that a given set of kanji
(ideograms) usually has more than one pronunciation. (I won't go into why,
see here:
http://mercury.lcs.mit.edu/~jnc/prints/glossary.html#Reading
for more.) So he was reading a speech, and gave the wrong reading for a word.
There is apparently a book (or more) in Japanese, for the Japanese, that lists
the common ones that cause confusion.
A very complicated language! The written form is equally complicated; there
are two syllabaries ('hiragana' and 'katakana'), and for the kanji, there are
several completely different written forms!
Noel
Follow-up to Larry's "Mushi! Mushi!" story
(http://minnie.tuhs.org/pipermail/tuhs/2017-February/008149.html)
I showed this to a Japanese acquaintance, who found it hilarious for a
different reason. He told me that a s/w bug is "bagu" -- a
semi-transliteration -- and "mushi" is "I ignore you". So corporate
called, asked for status, and the technical guy said "I am going to
ignore you!" and then hung up.
N.
I have found a video by Sandy Fraser from 1994 which discusses the Spider network (but not the related Unix software). The first 30 min or so are about Spider and the ideas behind it, then it moves on to Datakit and ATM:
https://www.youtube.com/watch?v=ojRtJ1U6Qzw
Although the thinking behind them is very different, the "switch" on the Spider network seems to have been somewhat similar to an Arpanet IMP.
Paul
==
On page 3 of the Research Unix reader (http://www.cs.dartmouth.edu/~doug/reader.pdf)
"Sandy (A. G.) Fraser devised the Spider local-area ring (v6) and the Datakit switch (v7) that have served in the lab for over a decade. Special services on Spider included a central network file store, nfs, and a communication package, ufs."
I do not recall ever seeing any SPIDER related code in the public V6 source tree. Was it ever released outside Bell Labs?
From a bit of Googling I understand that SPIDER was an ATDM ring network with a supervisor allocating virtual circuits. Apparently there was only ever one SPIDER loop with 11 hosts connected, although Fraser reportedly intended to create multiple connected loops as part of his research.
The papers that Fraser wrote are hard to find: lots of citations, but no copies, not even behind pay walls. The base report seems to be:
A. G. Fraser, "SPIDER - a data communication experiment", Tech Report 23, Bell Labs, 1974.
Is that tech report available online somewhere?
Thanks!
Paul
> From: Random832
> You could return the address of the last character read, and let the
> user code do the math.
Yes, but that's still 'design the system call to work with interrupted and
re-started system calls'.
> If the terminal is in raw/cbreak mode, the user code must handle a
> "partial" read anyway, so returning five bytes is fine.
As in, if a software interrupt happens after 5 characters are read in, just
terminate the read() call and have it return 5? Yeah, I suppose that would
work.
> If it's in canonical mode, the system call does not copy characters into
> the user buffer until they have pressed enter.
I didn't remember that; that TTY code makes my head hurt! I've had to read it
(to add 8-bit input and output), but I can't remember all the complicated
details unless I'm looking at it!
> Maybe there's some other case other than reading from a terminal that it
> makes sense for, but I couldn't think of any while writing this post.
As the Bawden paper points out, probably a better example is _output_ to a
slow device, such as a console. If the thing has already printed 5 characters,
you can't ask for them back! :-)
So one can neither i) roll the system call back to make it look like it hasn't
started yet (as one could do, with input, by stuffing the characters back into
the input buffer with kernel ungetc()), nor ii) wait for it to complete (since
that will delay delivery of the software interrupt). One can only interrupt
the call (and show that it didn't complete, i.e. an error), or have
re-startability (i.e. argument modification).
Noel
> From: Paul Ruizendaal
> There's an odd comment in V6, in tty.c, just above ttread():
> ...
> That comment is strange, because it does not describe what the code
> does.
I can't actually find anyplace where the PC is backed up (except on a
segmentation fault, when extending the stack)?
So I suspect that the comment is a tombstone; it refers to what the code did
at one point, but no longer does.
> The comment isn't there in V5 or V7.
Which is consistent with it documenting a temporary state of affairs...
> I wonder if there is a link to the famous Gabriel paper
I suspect so. Perhaps they tried backing up the PC (in the case where a system
call is interrupted by a software interrupt in the user's process), and
decided it was too much work to do it 'right' in all instances, and punted.
The whole question of how to handle software interrupts while a process is
waiting on some event, while in the kernel, is non-trivial, especially in
systems which use the now-universal approach of i) writing in a higher-level
stack oriented language, and ii) 'suspending' with a sub-routine call chain on
the kernel stack.
Unix (at least, in V6 - I'm not familiar with the others) just trashes the
whole call stack (via the qsav thing), and uses the intflg mechanism to notify
the user that a system call was aborted. But on systems with e.g. locks, it
can get pretty complicated (try Googling Multics crawl-out). Many PhD theses
have looked at these issues...
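For concreteness, here is a toy sketch in modern C of that 'trash the whole
call stack' pattern; setjmp/longjmp play roughly the role of V6's
savu()/aretu() and u.u_qsav (all names below are stand-ins, not the real
kernel code):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf qsav;               /* stand-in for u.u_qsav */

    static void sleep_on_event(void)
    {
        /* would normally block here; pretend a signal arrives instead */
        longjmp(qsav, 1);              /* abandon the whole call chain */
    }

    static int do_read_call(void)
    {
        if (setjmp(qsav))
            return -1;                 /* intflg-style: call was aborted */
        sleep_on_event();
        return 0;                      /* normal completion path */
    }

    int main(void)
    {
        printf("read returned %d\n", do_read_call());
        return 0;
    }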
> Actually, research Unix does save the complete state of a process and
> could back up the PC. The reason that it doesn't work is in the syscall
> API design, using registers to pass values etc. If all values were
> passed on the stack it would work.
Sorry, I don't follow this?
The problem with 'backing up the PC' is that you 'sort of' have to restore the
arguments to the state they were in at the time the system call was first
made. This is actually easier if the arguments are in registers.
I said 'sort of' because the hard issue is that there are system calls (like
terminal I/O) where the system call is potentially already partially executed
(e.g. a read asking for 10 characters from the user's console may have
already gotten 5, and stored them in the user's buffer), so you can't just
simply completely 'back out' the call (i.e. restore the arguments to what they
were, and expect the system call to execute 'correctly' if retried - in the
example, those 5 characters would be lost).
Instead, you have to modify the arguments so that the re-tried call takes up
where it left off (in the example above, tries to read 5 characters, starting
5 bytes into the buffer). The hard part is that the return value (of the
number of characters actually read) has to count the 5 already read! Without
the proper design of the system call interface, this can be hard - how does
the system distinguish between the _first_ attempt at a system call (in which
the 'already done' count is 0), and a _later_ attempt? If the user passes in
the 'already done' count, it's pretty straightforward - otherwise, not so
much!
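To make the user-side half of this concrete, here is a sketch of the caller
keeping the 'already done' count itself and re-trying an interrupted read for
just the remainder (modern POSIX semantics assumed, not the V6 interface):

    #include <errno.h>
    #include <unistd.h>

    /* Read exactly 'want' bytes unless EOF or a hard error intervenes. */
    ssize_t read_all(int fd, char *buf, size_t want)
    {
        size_t done = 0;                     /* the 'already done' count */
        while (done < want) {
            ssize_t n = read(fd, buf + done, want - done);
            if (n < 0) {
                if (errno == EINTR)          /* interrupted: retry the rest */
                    continue;
                return -1;
            }
            if (n == 0)                      /* EOF */
                break;
            done += n;
        }
        return (ssize_t)done;
    }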
Alan Bawden wrote a good paper about PCLSR'ing which explores some of these
issues.
Noel
There's an odd comment in V6, in tty.c, just above ttread():
/*
* Called from device's read routine after it has
* calculated the tty-structure given as argument.
* The pc is backed up for the duration of this call.
* In case of a caught interrupt, an RTI will re-execute.
*/
That comment is strange, because it does not describe what the code does. The comment isn't there in V5 or V7.
I wonder if there is a link to the famous Gabriel paper about "worse is better" (http://dreamsongs.com/RiseOfWorseIsBetter.html) In arguing its points, the paper includes this story:
---
Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where loser is the affectionate name for user at MIT.
The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix -- namely, implementation simplicity was more important than interface simplicity.
---
Actually, research Unix does save the complete state of a process and could back up the PC. The reason that it doesn't work is in the syscall API design, using registers to pass values etc. If all values were passed on the stack it would work. As to whether it is the right thing to be stuck in a read() call waiting for terminal input after a signal was received...
I always thought that this story was entirely fictional, but now I wonder. The Unix guru referred to could be Ken Thompson (note how he is first referred to as "from Berkeley but working on Unix" and then as "the New Jersey guy").
Who can tell me more about this? Any of the old hands?
Paul
> From: Lars Brinkhoff
> Nick Downing <downing.nick(a)gmail.com> writes:
>> By contrast the MIT guy probably was working with a much smaller/more
>> economical system that didn't maintain a kernel stack per process.
I'm not sure I'd call ITS 'smaller'... :-)
> PCLSRing is a feature of MIT's ITS operating system, and it does have a
> separate stack for the kernel.
I wasn't sure if there was a separate kernel stack for each process; I checked
the ITS source, and there is indeed a separate stack per process. There are
also three other stacks in the kernel that are used from time to time (look
for 'MOVE P,' for places where the SP is loaded).
Oddly enough, it doesn't seem to ever _save_ the SP - there are no 'MOVEM P,'
instructions that I could find!
Noel
On page 3 of the Research Unix reader (http://www.cs.dartmouth.edu/~doug/reader.pdf)
"Sandy (A. G.) Fraser devised the Spider local-area ring (v6) and the Datakit switch (v7) that have served in the lab for over a decade. Special services on Spider included a central network file store, nfs, and a communication package, ufs."
I do not recall ever seeing any SPIDER related code in the public V6 source tree. Was it ever released outside Bell Labs?
From a bit of Googling I understand that SPIDER was an ATDM ring network with a supervisor allocating virtual circuits. Apparently there was only ever one SPIDER loop with 11 hosts connected, although Fraser reportedly intended to create multiple connected loops as part of his research.
The papers that Fraser wrote are hard to find: lots of citations, but no copies, not even behind pay walls. The base report seems to be:
A. G. Fraser, "SPIDER - a data communication experiment", Tech Report 23, Bell Labs, 1974.
Is that tech report available online somewhere?
Thanks!
Paul
> we just read the second tape, which read without error. ... at this
> point we have access to everything that was on that machine.
OK, we're starting to get through all the clearances needed to release the
non-MIT Unix systems on the machine. (The MIT one is going to take more
work - I have to curate out all the personal files.)
We have now completed the OK's for the 'Network Unix' (the one done at the
University of Illinois for use on the ARPANET, with NCP). A tarball is
available here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tmp/nosc.tar
(It's called 'nosc.tar' because it came through NOSC, and then SRI,
on the way to MIT.)
In addition to all the UIllinois code, it also contains early versions of the
MH mail reader (from Rand) and the MMDF mailer (from UDel).
Enjoy!
Noel
With no offense intended, I can't help noting the irony of the
following paragraph appearing in a message in the company of
others that address Unix "bloat".
>'\cX' A mechanism that allows usage of the non-printable
> (ASCII and compatible) control codes 0 to 31: to cre-
> ate the printable representation of a control code the
> numeric value 64 is added, and the resulting ASCII
> character set code point is then printed, e.g., BEL is
> '7 + 64 = 71 = G'. Whereas historically circumflex
> notation has often been used for visualization pur-
> poses of control codes, e.g., '^G', the reverse
> solidus notation has been standardized: '\cG'. Some
> control codes also have standardized (ISO 10646, ISO
> C) alias representations, as shown above (e.g., '\a',
> '\n', '\t'): whenever such an alias exists S-nail will
> use it for display purposes. The control code NUL
> ('\c@') ends argument processing without producing
> further output.
Except for the ISO citations, this paragraph says the same
thing more succinctly.
'\cX' represents a nonprintable character Y in terms of the
printable character X whose binary code is obtained
by adding 0x40 (decimal 64) to that for Y. (In some
historical contexts, '^' plays the role of '\c'.)
Alternative standard representations for certain
nonprinting characters, e.g. '\a', '\n', '\t' above,
are preferred by S-nail. '\c@' (NUL) serves as a
string terminator regardless of following characters.
And this version, 1/3 the length of the original, tells all
one really needs to know.
'\cX' represents a nonprintable character Y in terms of the
printable character X whose binary code is obtained
by adding 0x40 (decimal 64) to that for Y. '\c@'
(NUL) serves as a string terminator regardless of
following characters.
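For concreteness, the rule rendered as a few lines of C (plain ASCII assumed;
the function name is made up):

    #include <stdio.h>

    /* '\cX': the control code is X's code minus 0x40 (decimal 64),
       so 'G' - 0x40 = 007 (BEL), and '@' - 0x40 = 000 (NUL). */
    int ctrl_of(int x)
    {
        return (x & 0x7f) - 0x40;
    }

    int main(void)
    {
        printf("\\cG -> %03o\n", ctrl_of('G'));   /* prints 007 */
        return 0;
    }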
Doug
On 2017-02-09 20:55, corey(a)lod.com (Corey Lindsly) wrote:
>
>> In spite of that, I'm typing away to you all, I'm 3ms away from 8.8.8.8
>> (Google's dns server). Go wireless. It's pretty remarkable to be here
>> and have decent net connectivity.
>>
>> I do not yearn for the days of SLIP.
>> --
>> ---
>> Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm
> 3ms? Really? I'm impressed, and I'd like to see your traceroute. We peer
> directly with Google and I get 4-5ms. Do share.
Meh. From Uppsala in Sweden I seem to have about 2ms ping time to 8.8.8.8...
Psilocybe:update/bqt> ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=1 ttl=56 time=2.10 ms
64 bytes from 8.8.8.8: icmp_req=2 ttl=56 time=1.93 ms
64 bytes from 8.8.8.8: icmp_req=3 ttl=56 time=2.05 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=56 time=1.89 ms
64 bytes from 8.8.8.8: icmp_req=5 ttl=56 time=2.02 ms
64 bytes from 8.8.8.8: icmp_req=6 ttl=56 time=2.05 ms
64 bytes from 8.8.8.8: icmp_req=7 ttl=56 time=2.00 ms
64 bytes from 8.8.8.8: icmp_req=8 ttl=56 time=1.97 ms
64 bytes from 8.8.8.8: icmp_req=9 ttl=56 time=2.03 ms
64 bytes from 8.8.8.8: icmp_req=10 ttl=56 time=2.10 ms
^C
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 1.894/2.020/2.108/0.067 ms
Psilocybe:update/bqt> traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 r1.n.it.uu.se (130.238.19.254) 1.986 ms 2.324 ms 2.717 ms
2 l-uu-1-b1.uu.se (130.238.6.251) 0.288 ms 0.680 ms 0.646 ms
3 uu-r1.sunet.se (130.242.6.148) 0.686 ms 0.685 ms 0.673 ms
4 uppsala-upa-r1.sunet.se (130.242.4.138) 0.672 ms 0.661 ms 0.657 ms
5 stockholm-fre-r1.sunet.se (130.242.4.26) 3.503 ms 3.468 ms 3.483 ms
6 se-fre.nordu.net (109.105.102.9) 24.456 ms 24.532 ms 24.153 ms
7 se-kst2.nordu.net (109.105.97.27) 1.934 ms 1.902 ms 1.891 ms
8 as15169-te-tc1.sthix.net (192.121.80.47) 2.204 ms 2.189 ms
72.14.196.42 (72.14.196.42) 1.872 ms
9 216.239.40.29 (216.239.40.29) 1.862 ms 1.941 ms 216.239.40.27
(216.239.40.27) 1.995 ms
10 209.85.251.233 (209.85.251.233) 2.398 ms 209.85.245.61
(209.85.245.61) 2.778 ms 72.14.234.85 (72.14.234.85) 2.385 ms
11 google-public-dns-a.google.com (8.8.8.8) 2.372 ms 2.366 ms 2.337 ms
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Lots of commands are now little shells
...
> Linux today is much more like the systems
> Unix displaced than it is like Unix
So depressingly true!
Doug
Thanks a lot for the tip Paul. It's great that others are working in
this area. Although I must say that as a kind of a "historian" I try
to go to primary sources where possible. Although I had already
converted a fair bit of code in the manner you describe, I am actually
re-converting a fair bit of it since I now have a semi-automated
system for doing so, that way I get pretty consistent results that
aren't reliant on ad-hoc decisions made during porting. Well, good
judgement is still needed, but I have a set of mental algorithms for
fixing exactly the kinds of questionable constructs you describe,
which lead to pretty consistent results. Using my scripts I converted
bin, usr.bin and lib of 4.3BSD in a few weeks, although a fair bit of
that time was spent on "bin/as" and "bin/sh" and "bin/csh" and other
pathological cases. When I have time I will proceed to ucb. I did all
subdirectories of bin (things like sed which are multi-module
programs) but not usr.bin yet.
So what I'll probably do when I get to looking at LSX is to re-convert
and then compare against your work, since either of us could quite
well have found questionable constructs missed by the other. Also,
earlier today I was looking at Noel's page about improving V6:
http://mercury.lcs.mit.edu/~jnc/tech/ImprovingV6.html
Anyway, I'm much more of a V7 guy and I think I would find V6 strange
and compromised, so I am thinking I will definitely have to apply some
of these patches, or at least check how much they increase the code
size by. At the very least, lseek() and mdate() have to go in, I'm not
sure about stdio since having a suite of the standard commands that
don't use stdio and hence are smaller/slower might be OK. But probably
my preferred approach is to calculate a patch V6 -> Mini Unix or V6 ->
LSX and then try to apply that on top of V7. Hmm.
As to moving to a V7 kernel and then adding TCP/IP I'm not sure if
this is advisable, as I was saying earlier I think it might be best
to keep that functionality outboard from the kernel. The question in
my mind is (1) does the Mini Unix / LSX system have to be a fully
participating node on the network or can it be a leaf node without any
routing, and (2) does it have to respond to ping or incoming
connections at any time. Since my scenario is a simple SLIP link to my
home server, (1)=No for me. As to (2), I see two scenarios, (a) the
machine is used as a development machine, where I run "ed" and "cc"
and so on, and occasionally "ftp" or "rcp" as a client only, or (b)
the machine is used as a remote node for something like say data
logging or web serving, where it runs the same application all the
time, and I connect to it to retrieve results and/or download software
updates. In case (a) there are only outgoing connections. In case (b)
there are incoming connections, but the machine runs the same
application all the time, so there's no disadvantage to having TCP in
userspace. I don't envisage a more complicated scenario where it runs
inetd in the background and a console in the foreground, due to RAM
limits.
cheers, Nick
On Thu, Feb 9, 2017 at 12:56 AM, Paul Ruizendaal <pnr(a)planet.nl> wrote:
> Nick,
>
> If you want to work with LSX, you might have a look at the LSX port I did for the TI990 mini computer: http://1587660.websites.xs4all.nl/cgi-bin/9995/dir?ci=1c38b1fc8792c80b&name…
> It is a further development from the work that was done for BKUNIX by Leonid Broukhis (https://sourceforge.net/p/bkunix/code/HEAD/tree/)
>
> You get stuff converted to a dialect of C acceptable by modern compilers, and some kludges like using 'char*' for 'unsigned' and 'int a[2]' for 'long a' are cleaned up.
>
> The repository also has a V6 kernel with similar clean up and some V7 compatibility ('lseek' instead of 'seek', etc.). My next step would be to move to a V7 kernel and add TCP/IP capability. This is how I got interested in the history of sockets and TCP/IP. I have found that the BSD stack (as found in e.g. ULTRIX-11) will not fit in 64KB (note: just the network stack). The BBN stack appears to fit in 56KB, with 15KB of buffers available; I think it could be integrated with a V7 kernel as a second kernel process.
>
> Paul
>
> On 8 Feb 2017, at 12:21 , Nick Downing wrote:
>
>> Yes, NetBSD and 386BSD are interesting. They could well form a good
>> basis for a minimal but fully functional OS for a modern platform. One
>> reservation I have though, is as follows: When 386BSD and its
>> derivatives like OpenBSD, NetBSD, FreeBSD came out, Unix was still
>> encumbered and thus they had to be based on 4.4BSD Lite (not even
>> NET/2 was safe). Nobody made an unencumbered version of say 4.3BSD or
>> even NET/2, even though it was theoretically possible, by examining
>> what had to be taken out/added to produce 4.4BSD Lite.
>>
>> Given that Unix is now unencumbered I believe there is no point
>> adopting 4.4, or even 4.3Reno, because the main thing done in those
>> releases as far as I know, is to add partial POSIX compliance. But if
>> you want POSIX compliance you will not achieve minimality. As an
>> example consider the BSD sigvec() routine. POSIX calls this
>> sigaction(), the old SV_ONSTACK flag becomes SA_ONSTACK, the old
>> integer mask becomes a sigset_t and so on... and in any reasonable
>> POSIX compliant BSD the two calls are gonna have to coexist, etc.
>>
>> As to 32V, I appreciate your idea, as I was having some similar
>> thoughts myself. However I personally wouldn't use 32V as a basis for
>> any serious porting work, because I would assume V7 and 4.3 are much
>> more stable and well tested, since they ran in a lot of installations
>> over a long time. That's not to denigrate the huge achievement in
>> porting V7 to the VAX and producing 32V, but it probably has some
>> rough edges that were smoothed out over time to produce the 4BSDs. The
>> situation is a bit different for kernel/toolchain/other userspace.
>>
>> As to the kernel I have been trying to make a detailed comparison
>> between 32V and the BSDs, but have been a bit put off by the fact that
>> all files moved around and may have been renamed, I will figure it all
>> out eventually though. As to the toolchain I have compared it quite
>> carefully with 4.3BSD, and I conclude you should use the later
>> toolchain as there is no disadvantage and some advantages to doing so.
>> As to the rest of the userspace, I BELIEVE that it's stock V7 with any
>> 32-bit issues fixed, but I suspect somewhat hastily and superficially.
>>
>> Producing a 32V-like kernel from 4.3BSD sources would probably be
>> quite difficult, it would be relatively easy to disable added system
>> calls, but harder to disable things like setpgrp() or the associated
>> support in the TTY drivers. More seriously the memory management would
>> have to change, I am planning however to make virtual memory optional
>> in the 4.3BSD kernel, by maybe putting the 32V code back in, protected
>> by #ifndef VM or some such (somewhat like Steven Schultz has done in
>> porting 4.3BSD to PDP-11 to produce 2.11BSD).
>>
>> On the other hand producing a 32V-like userland from the 4.3BSD
>> sources would be pretty easy, I think just delete the sources for any
>> executables that weren't distributed with 32V and possibly, if any of
>> the tools seem particularly bloated, comment out any command line
>> switches or features that weren't in 32V. Going to this level of
>> effort would likely be pointless though. Another option would be
>> taking V7 and re-porting it (except the toolchain) over to a 32-bit
>> environment and kernel. I have developed over the past months, tools
>> that make this relatively straightforward, and in the process would
>> rediscover any 32-bit issues that were fixed in creating 32V. So I
>> wouldn't use 32V.
>>
>> On a slightly different tack, I also have been for some time
>> investigating the idea of an Apple II port of Unix, there are massive
>> technical issues to be solved, but I think I got a bit closer the
>> other night when I decided to collect all information and resources I
>> could find about Mini-Unix and LSX (LSI Unix). Both are
>> self-supporting V6-based environments, the compiler can only compile
>> small programs but it is nonetheless possible for each Unix to
>> recompile itself. LSX I believe could run from floppies (dunno about
>> 140K floppies) in less than 64K.
>>
>> So, you know, true minimality is a relative term. We want something
>> LSX-like for an Apple II, something 2.11BSD-like for an IBM PC/XT or
>> 286 (as Peter Jeremy noted, it's a good fit, and I'd be interested to
>> know more), something 4.3BSD-like for a VAX or 386... something a bit
>> more fully featured for a modern 64-bit multi-gigabyte system... but
>> just not as bloated as what we currently rely on. Hmm well it's hard.
>> What I do know, is that I have a lot of old hardware with <16M RAM and
>> Linux won't run on it anymore, and this annoys me very greatly.
>>
>> In talking about FreeBSD/Linux bloat I forgot to mention the packet
>> filter, iptables (Linux) or pf (FreeBSD). I have a bit of experience
>> with this, since I regularly used to put 2 Ethernet cards in my home
>> server and make it Internet facing through one of them and share the
>> connection using NAT through the other card. But I've come to the
>> conclusion this is stupid, and moreover, that putting a complete
>> mini-language into the kernel just for this purpose is utterly stupid.
>> Programming the thing from userspace is incredibly intricate, and all
>> this complexity serves no purpose.
>>
>> I recently found out about SLIRP (userspace packet rewriting) and I
>> think this would be a good way to implement NAT on servers or routers
>> that actually need to do NAT -- just make a userspace process that
>> runs something SLIRP-like and has a raw socket to the second Ethernet
>> card, and Bob's your uncle.
>>
>> But this got me thinking along pretty productive lines in terms of the
>> tiny Apple II port -- I have been wanting to put the 2.11BSD network
>> stack into an Apple II for a long time, but I've now realized this is
>> not necessary. A much better approach for those Mini-Unix or LSX or
>> even V7 systems, would be to have a userspace library that does SLIP
>> and contains its own TCP, UDP, IP drivers, resolver and so on. Then if
>> you run a userspace program like say, ftp, which is linked to the
>> userspace TCP library, it would basically just open /dev/ttyXX, bring
>> up the SLIP link itself, do any necessary downloads etc, then close
>> the TTY and stop responding to any IP stuff coming over the SLIP link
>> whilst you quit to the prompt, until another program reopens it.
>>
>> cheers, Nick
>>
>> On Wed, Feb 8, 2017 at 2:56 PM, Jason Stevens
>> <jsteve(a)superglobalmegacorp.com> wrote:
>>> What about NetBSD 1.1 or even 386BSD?
>>>
>>> There never was a 4.2 or 4.3 for i386 was there?
>>>
>>> I'd guess the 32v userland could be built on early 4.4BSD Lite/NET2 greatly
>>> reducing its footprint.
>>>
>>>
>>>
>>>
>>> On February 8, 2017 11:47:03 AM GMT+08:00, Nick Downing
>>> <downing.nick(a)gmail.com> wrote:
>>>>
>>>> This is an issue that interests me quite a bit, since I was running
>>>> FreeBSD in an effort to get around Linux bloat problems discussed.
>>>> Well not that I really mind Linux as a user interface / runtime
>>>> environment / main development machine, but I think it probably
>>>> shouldn't be used as a "least common denominator" for development
>>>> since you end up introducing unwanted dependencies on a whole lot of
>>>> stuff.
>>>>
>>>> So I was running FreeBSD as a more minimal *nix. I did quite a lot of
>>>> interesting stuff with FreeBSD such as setting up diskless
>>>> workstations in my home, etc. I spent a lot of time tinkering around
>>>> in the kernel code. I was planning to do some serious development on
>>>> 4.4BSDLite or FreeBSD to create an operating system more to my liking.
>>>> So, I was looking carefully at differences since ancient *nixes.
>>>>
>>>> And, I can say that FreeBSD is pretty bloated. Umm well they've added
>>>> SMP, at the time it was using the Giant Lock although that could be
>>>> fixed by now. They've added VFS and NFS of course. They've added an
>>>> entire subsystem for block devices IIRC that handles partitioning and
>>>> possibly some other sophisticated stuff, which I believe is their own
>>>> design. Umm the kqueues and I believe they have their own
>>>> implementation of kernel threading or lightweight processes including
>>>> some sort of idle daemon. The network stack is heavily upgraded, to
>>>> the extent I looked into it, the added features are things you would
>>>> want (syncookies = DOS protection, etc) but also could not possibly be
>>>> called minimal, and would preclude running it on other than a
>>>> multi-megabyte machine. They have multiple ABIs so the kernel can
>>>> accept Linux or BSD syscalls or whatever else (I used it to run
>>>> Acrobat Reader Linux on my FreeBSD desktop). Umm I am pretty sure they
>>>> have kernel modules ala Linux. Lots and lots and lots of stuff... and
>>>> that's only considering the kernel. If you look in the ports
>>>> collection you see they have incredible amounts of bloat there too...
>>>> for instance GNOME, Libreoffice, LATEX, gcc, python... not that I'm
>>>> denigrating these tools, since they do invaluable work and I use them
>>>> every day, but the point is, you CANNOT call them minimal.
>>>>
>>>> The quest for a clean minimal system goes on ->. FreeBSD is not the
>>>> answer. In fact I believe 4.3BSD-Reno and 4.4 go strongly offtrack.
>>>>
>>>> cheers, Nick
>>>>
>>>> On Wed, Feb 8, 2017 at 1:55 PM, Greg 'groggy' Lehey <grog(a)lemis.com>
>>>> wrote:
>>>>>
>>>>> On Tuesday, 7 February 2017 at 15:38:40 -0800, Steve Johnson wrote:
>>>>>>
>>>>>> Looking back, the social dynamics of the Unix group helped a lot in
>>>>>> keeping the bloat small. The rule was, whoever touches something
>>>>>> last becomes its owner. Of course, we were all free to complain
>>>>>> about things, and did, but the amalgamation of tinkerings that
>>>>>> characterizes most of the Linux commands just didn't happen.
>>>>>
>>>>>
>>>>> Out of interest: where do you (or others) consider that the current
>>>>> BSD projects sit in this comparison?
>>>>>
>>>>> Greg
>>>>> --
>>>>> Sent from my desktop computer.
>>>>> Finger grog(a)lemis.com for PGP public key.
>>>>> See complete headers for address and phone numbers.
>>>>> This message is digitally signed. If your Microsoft mail program
>>>>> reports problems, please read http://lemis.com/broken-MUA
>>>
>>>
>>> --
>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
DEL, sometimes labeled RUBOUT, has a very important feature: it's all ones. When punching a paper tape, if you make a mistake you can mechanically backspace the tape (there's a button on the punch rather than an actual backspace key), and then press RUBOUT, which overpunches the incorrect character. Presumably, whatever system this was designed for disregards those characters when encountered.
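A sketch of what such a reader might do (hypothetical code, not from any
particular system):

    #include <stdio.h>

    /* DEL/RUBOUT is 0177: all seven holes punched, so a character
       overpunched with it reads back as 0177 and can simply be skipped. */
    int next_tape_char(FILE *tape)
    {
        int c;
        while ((c = getc(tape)) == 0177)
            ;                          /* erased character: ignore it */
        return c;                      /* a real character, or EOF */
    }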
Amusingly, we thought the HERE IS key was just to generate leader, because none of our teletypes had the drum programmed. Later I found out that you could break the tabs on the drum and have HERE IS send a short string of characters. ^E (called ENQ, or sometimes WRU... for "who are you") triggers this to be sent in response.
To get back to the UNIX tie-in: I actually had, for years, a Model 37 teletype. This was one of the few terminals that you didn't have to set the nl mode mapping for. It had a large key marked NEWLINE where RETURN usually is, which sent ^J (\n), and it responded to it the way UNIX expected. In addition it handled all the ESC-8, ESC-9, etc. codes that nroff sent by default, without needing a filter. Mine was an ASR, so it had the tape unit. It lacked the "greek box" that the one at JHU had to print greek characters after a ^N (shift out). The thing was amusing in that it didn't turn on the motor until the modem came ready, and when carrier detect was asserted a big green PROCEED light lit on the front.
It was quaint, but when I finally got a higher speed modem, I switched back to using a CRT. The Model 37 was a screaming 150 baud.
I finally "donated" it to RS who dumped the thing behind someone's car somewhere.
> From: Michael Kjorling
> That wouldn't have anything to do with how ^@ is a somewhat common
> representation of 000, would it? .. I've always kind of wondered where
> that notation came from.
Well, CTRL-<*> is usually just the <*> character with the high bits cleared.
So, to have a printing representation of NULL, you have two character choices
- SPACE, and '@'. Printing "^ " is not so hot, so "^@" is better.
Also, if you look at an ASCII table, usually people just take the @-_ column,
and use that, since every character in that column has a printing
representation. The ' '-? column is missing the ' ', and `-<DEL> is missing
the DEL. So if you just take a CTRL character and set the 0100 bit, and print
it as "^<char>", you get something readable.
(Note that CTRL-' ' _is_ usually used when one needs to _input_ a NUL
character.)
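A small C sketch of the convention (plain ASCII assumed):

    #include <stdio.h>

    /* Print a visible form of any byte: control codes come out as
       "^<char>" by setting the 0100 bit (so NUL -> "^@"), DEL as "^?". */
    void visible(int c)
    {
        c &= 0177;
        if (c < 040)
            printf("^%c", c | 0100);   /* 000 -> '@', 007 -> 'G', ... */
        else if (c == 0177)
            printf("^?");              /* DEL, by the same convention */
        else
            putchar(c);
    }

    int main(void)
    {
        visible(0); visible('\a'); visible('x'); putchar('\n');  /* ^@^Gx */
        return 0;
    }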
Noel
Inspired by:
> Stephen Bourne after some time wrote a cron job that checked whether an
> update in a binary also resulted in an updated man page and otherwise
> removed the binary. This is why these programs have man pages.
I want to tell a story about working at Sun. I feel like I've sent this
but I can't find it in my outbox. If it's a repeat chalk it up to old
age.
I wanted to work there, they were the Bell Labs of the day, or as close
as you could get.
I got hired as a contractor through Lachman (anyone remember them?) to do
POSIX conformance in SunOS (the 4.x stuff, not that Solaris crap that I
hate).
As such, I was frequently the last guy to touch any file in the kernel,
my fingerprints were everywhere. So when there was a panic, it was
frequently laid at my doorstep.
So here is how I got a pager and learned about source management.
Sun had two guys, who will remain nameless, but they were known as
"the SCSI twins". These guys decided, based on feedback that "people
can interrupt sun install", to go into the SCSI tape driver and disable
SIGINT, in the driver. The kernel model doesn't allow for drivers messing
with your signal mask so on exit, sometimes, we would get a "panic: psig".
Somehow - I'm sure it was because of the POSIX stuff - I ended up debugging
this panic. It had nothing to do with me, I'm not a driver person (I've
written a few but I pretty much suck at them), but it landed in my lap.
Once I figured it out (which was not easy: you had to hit ^C to trigger it,
and who does that during an install?), I tracked down
the code to the SCSI twins.
No problem, everyone makes mistakes. Oh, wait. Over the next few months
I'm tracking down more problems, that were blamed on me since I'm all over
the kernel, but came from the twins.
Sun's integration machines were argon, radon, and krypton. I wrote
scripts, awk I think, that watched every update to the tree on all
of those machines and if anything came from the SCSI twins the script
paged me.
That way I could go build and test that kernel and get ahead of the bugs.
If I could fix up their bugs before the rest of the team saw it then I
wouldn't get blamed for them.
I wish I could have figured out something like Steve did that would have
made them not screw up so much but this was the next best thing. I actually
got bad reviews because of their crap. My boss at the time, Eli Lamb, just
said "you are in kadb too much".
--lm
> From: Paul Ruizendaal
> The best one seems to have been the 3Com stack, which puts IP in the
> kernel and TCP in a daemon.
I've gotta get the MIT V6 one online.
Incoming demux is in the kernel; all of the TCP, and everything else, is in
processes along with the application - one process per application instance.
It sounds like it might be clunky, but it's not: there are a couple of
different TCP's (a small, low performance one for things like User TELNET,
timer servers, yadda-yadda; a bigger, higher-performance one for things like
FTP), and the application just calls them as subroutine libraries
(effectively). Since there's no IPC overhead, the performance is pretty good.
Unfortunately, a lot of the stuff never migrated from personal directories to
the system folder, so I have to go curate out the personal files (or, more
likely, move them all to a system folder) before I can release it all.
> Perhaps economizing on fragmentation and window management is
> better.
Fragmentation, perhaps - but good window management is a must.
> I wonder if just putting the code for this state in the kernel and
> handling only the state changes and other states in a daemon is perhaps
> the best split on constrained hardware.
I don't think that's a good idea; cutting the TCP in two parts, which have to
communicate over an interface is going to be _really_ ugly: do you have one
copy of the connection state data (in which case one of them has to go through an
interface to get to it), or two (synchronization issues). If you want a small
kernel footprint, take the MIT approach.
Noel
> I'm fairly certain it was originally in BCPL.
>
> You could just drop a note to Bjarne Stroustrup and ask. :-)
On page 44 of _The Design and Evolution of C++_ (Addison-Wesley, 1994), Stroustrup says:
“However, only C, Simula, Algol68, and in one case BCPL left noticeable traces in C++ as released in 1985. Simula gave classes, Algol68 operator overloading, references, and the ability to declare variables anywhere in a block, and BCPL gave // comments.”
He says a bit more about // comments on page 93, including an example of how they introduced an incompatibility with C.
> From: Nick Downing
> I'm much more of a V7 guy and I think I would find V6 strange and
> compromised
De gustibus. I used it for many years, and am quite at home in it. I think
it's a marvel of functionality/size - at the time it came out, not much bigger
than DEC PDP-11 OS's, but with a 'big system' feel to it (which they
_definitely_ did not have) - in fact, _better_ than most big systems of the day.
But I can see it would be rather too simple (and in the kernel inelegant,
code-wise, by today's standards - see below) for many. V7 is not that
different, in terms of user experience, from V6, though.
> I am thinking I will definitely have to apply some of these patches, or
> at least check how much they increase the code size by.
Sorry, that page is kind of a mish-mosh. Most of the stuff that's talked about
there is for user commands, not the kernel.
There are only a few kernel changes (lseek() and mdate(), and param.c so that
the new 'si' command can get things from param.h without having to have it
compiled in), and they are all small.
> But probably my preferred approach is to calculate a patch V6 -> Mini
> Unix or V6 -> LSX and then try to apply that on top of V7.
I'm a little confused as to what your goal is here. Get V6 running on some
other architecture? Upgrade V6 for some goal which I am not aware of? I know
you probably said something in an earlier email, sorry, I don't recall.
Anyway, if you're going to do anything with V6 kernel code, you need to be
aware that it's really idiosyncratic - a lot of its written in a very early
dialect of C, and while things like 'a =+ b' -> 'a += b' and 'int a 1' -> 'int
a = 1' are pretty easy to fix, there are lots of instances of int's being used
as pointers to several different kinds of structures, etc, etc.
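For anyone who hasn't seen the old dialect, a tiny before/after (the 'before'
forms won't get past a modern compiler):

    /* V6 dialect:                  modern C:
     *
     *     int  a 1;                int a = 1;
     *     a =+ 2;                  a += 2;
     *     a =- 1;                  a -= 1;   ("a=-1" was ambiguous,
     *                                         which is why =+ became +=)
     */
    int a = 1;

    int main(void)
    {
        a += 2;                    /* V6 wrote this as "a =+ 2" */
        return 0;
    }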
If you want to move an early, small Unix to something other than a PDP-11, V7
is probably a much better bet.
> As to moving to a V7 kernel and then adding TCP/IP I'm not sure if this
> is adviseable, as I was saying earlier I think it might be best to keep
> that functionality outboard from the kernel.
There are a couple of early TCP/IP's which ran outside the kernel, but I think
the standard Berkeley one might be a handful to move out.
Noel
> From: Charles Anthony
> Sigh. That tops my Multics bug report.
No way! You actually got the fix approved by an MCB! Much cooler! :-)
> BSD4.1 is circa 1890?
Well, it's old, but not _that_ old!! :-)
Noel
> Hi, we just read the second tape, which read without error.
> ...
> So at this point we have access to everything that was on that machine.
So, in the process of transferring this all to a TAR file, we found a bug in
BSD 4.1b. (The length of some directories whose last sector held only one
entry was being incorrectly set to the actual length of the directory, not a
multiple of the sector size.)
Anyone know where I can report a BSD 4.1b bug? :-)
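In case anyone is curious, the shape of the fix is presumably the usual
round-up (a guess on my part, with the classic 512-byte sector assumed):

    #define SECSIZ 512L

    /* Round a directory's byte count up to a whole number of sectors. */
    long dirsize(long rawlen)
    {
        return (rawlen + SECSIZ - 1) / SECSIZ * SECSIZ;
    }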
Noel
PS: Although the Algol-60 isn't there, there is a nice LISP (good enough to
run The Doctor ... :-)
> From: Nick Downing
> is it possible for you to read the other tapes also?
Hi, we just read the second tape, which read without error. It appears to be
mostly the same stuff as the first, except that for some not-now-understood
reason, a lot of the sub-directories in /src/src (the directory that held most
of the sources) weren't there on the _first_ tape, but _are_ there on the
_second_. So at this point we have access to everything that was on that
machine.
It's too long a list to go through, but here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tmp/csr2_edfiles.txt
is an edited list of the files on the machine. Most of /usr/ has been deleted,
since it contains a lot of personal files, the names of which I don't wish to
make public.
Alas, some of the code (e.g. much of the MIT TCP/IP) was in some personal
directories; it will take me a while to curate all that.
Also, this machine did not contain everything that was done at MIT: one
particularly saddening lacuna is that the Algol-60 (written for the 'Intro to
programming for CS majors' course, 6.031 to those for whom that means
anything) isn't there, along with its documentation. With that being _such_ an
incredibly influential language, I'd really wanted to see a PDP-11 version
made available.
There's also an APL, and some missing subdirectories in the BCPL source
directory ('henke', 'richards' and 'tenex'). Etc, etc.
I have reached out to people at MIT, to see if a tape backup from the machine
where all that was can be found; I will keep you all posted if anything shows
up.
> I would be particularly interested in the early 8080 compiler
Yes, that's there ('c8080'), but object-only - it may have been something that was
purchased from an outside vendor. It does seem like there might be an 8080
back end for the BCPL compiler.
Noel
> From: Paul Ruizendaal
> Great! I'd love to take a look at all that.
OK, it'll all be appearing once we have a chance to get organized (it's all
mixed in with personal files).
> That is very interesting. It may be related to the V6 with NCP from
> UoI/DTI.
I think it _is_ the V6 from UoI/DTI. The source has Gary (?) Grossman's and
Steve Holmgren's names on it, and the headers say they date from 1974-75.
> The printout does not have the kernel modifications with it, so it would
> be great if your archive does include that.
The archive does include the complete kernel, but i) the changes aren't listed
in any way (I foresee a lot of 'diffs', unless you just take the entire
kernel), ii) there's a file called 'history' which contains a long list of
general changes/improvements of the kernel not really related to TCP/IP, by a
long list of people, dated from the middle of '78 to the middle of '79. So it
looks like he started with a considerably modified system.
The only client code I see is User Telnet. (The MIT code has User and
Server Telnet and FTP, as well as SMTP, but it uses a wholly different
TCP interface.)
Noel
Side story on Unix related to Xview.
I go to a conference in San Jose for Sun users in the mid 80’s
and am discussing Xview with a few folks (names lost to memory).
A very nice person named Nancy Blackman walks up and joins
the discussion. We get to talking and she has a weird memory
bug and I’m willing to help her look at it. So we go to ‘her place’
which is her lab at Moffett Field. We discuss her bug, find a fix
and I go back home to San Diego.
I mention that I met this very nice lady named Nancy Blackman
at the conference to my wife. Turns out my wife went to school
with Nancy before she moved to San Diego.
So, who else has weird stories of how Unix development or
Unix conferences had the side effect of making the world a
smaller place?
If this is too off topic, drop the conversation here.
David
> On Feb 1, 2017, at 1:52 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> Date: Wed, 1 Feb 2017 13:40:11 -0800
> From: Larry McVoy <lm(a)mcvoy.com>
> To: Noel Chiappa <jnc(a)mercury.lcs.mit.edu>
> Cc: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] shared memory on Unix
> Message-ID: <20170201214011.GG880(a)mcvoy.com>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Feb 01, 2017 at 02:44:34PM -0500, Noel Chiappa wrote:
>>> From: "Steve Johnson"
>>
>>> The meetings went on for over a year, but _I NEVER MET WITH THE SAME
>>> PERSON TWICE!_ It seemed that the only thing the marketing group knew
>>> how to do was reorganize the marketing group...
>>
>> Shades of SI:Electric-Marketing (I _think_ that was its name) on the Symbolics
>> LISP Machine...
>>
>> (For those who never had the joy of seeing this, it randomly drew a bunch of
>> boxes with people in them on the screen in a hierarchy, connected them, and
>> then started randomly moving the boxes around... I wonder if the source
>> still exists - or, better yet, a video of it running? Probably not, alas.)
>
> Sun had reorgtool (orgtool) that had all the high up people down to
> directors I think and you pushed a button and it reshuffled them.
> It was a Xview app, anyone remember that toolkit (I sorta miss it).
>
> --lm
I hope this isn't too far off topic here.
I've been meaning to rename the few systems I administer with names
that reference famous (or at least somewhat well-known in the proper
circles) historical UNIX systems, but I have been unable to find any
lists of such names so have no real place to start. About the closest
I _was_ able to find is the ARPANET map[1] of the late 1970s that is
on Wikipedia and is occasionally circulated, but which gives mostly
architecture, location and links, not any system (host) names.
Short of unimaginative things like calling my home router IMP[2] or
things like that, can anyone either suggest names with a bit of
background (where they were, what hardware, what time period, etc.),
or point me toward online resources where I can find lists of those?
[1]: https://en.wikipedia.org/wiki/File:Arpanet_logical_map,_march_1977.png
[2]: https://en.wikipedia.org/wiki/Interface_Message_Processor
--
Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
“People who think they know everything really annoy
those of us who know we don’t.” (Bjarne Stroustrup)
> From: Mark Hare <markhare(a)buffalo.edu>
> I believe I have found the problem.
Excellent!
> Looking at the DL11-W, I saw that there is an uncovered trace on the
> circuit board just below the pins of the BERG connector. It appears
> that the exposed conductor ends of the cable were making contact with
> this trace, which shorted the entire cable together.
Shorting any set of DL11-W pins together still shouldn't have caused that
symptom, AFAICS.
I'm too lazy to pull out a DL11-W, and prints thereof, to check, but I suspect
that what actually happened is that trace carries some 'interesting' signal,
and it got grounded 'or something' by the exposed ends of the flat cable.
> I'm just glad that there doesn't appear to be any lasting damage.
Indeed! I was worried about that...
Noel
> From: Mark Hare
> For a more permanent solution, I designed a simple adapter board that
> connects to the BERG 40 connector on the DL11-W and converts it to a DB9
> serial port ... I also ordered a 40-pin (non-IDE) ribbon cable to
> connect the DL11-W to my adapter.
> When I connected everything, the 11/34 would start but no lights would
> appear on the front panel. I tried disconnecting the adapter but leaving
> the ribbon cable plugged into the BERG connector, but the problem
> persisted. When I removed the ribbon cable entirely, the unit powered on
> with no problems.
That's extremely odd. There isn't a pin on the DL11-W Berg connector which
should be able to cause anything like that kind of behaviour. The only
_possible_ thing I can think of, looking at the list of signals on the Berg,
is that you are grounding +5 (TT). Either that, or your DL11-W has some
serious issue?
> Since this is a straight-through ribbon cable, I don't see what could be
> causing this problem.
Me either. But clearly it's not just a straight-through ribbon cable....
I myself wouldn't have gone that route; one can obtain 40-pin IDE/DuPont (they
are all .1" spacing pins, and basically interchangeable, module keying)
connector shells, and female pins for same; I would have made a custom cable
to plug into the Berg with that, to a DE-9S or DB-9P connector (depending on
whether one wanted one wired as a DCE or DTE). (I myself make such cables, but
to a DB-25S connector, and then use a commercial DB-25P to DE-9S adapter when
needed.) Oh well...
Does the DL11-W still work, using the jumper cables kludge?
Noel
Hello all,
This is my first time emailing the list, so please let me know if this
doesn't belong here or if I'm breaking any rules.
A few months ago, I rescued a PDP 11/34a with 2 RL01 drives from the scrap
heap. The unit appears to work fine based on my limited front-panel
testing. I haven't gotten the drives running yet since someone cut the
power cords when the cabinet was being removed.
There is a DL11-W serial line unit/realtime clock (M7856) installed in the
11/34 that I want to use for serial input/output. I have configured the
card for 9600 baud, 8N1. Using some jumper wires, I carefully connected the
card to a serial cable and a computer running a terminal and I was able to
send some characters back and forth successfully.
For a more permanent solution, I designed a simple adapter board that
connects to the BERG 40 connector on the DL11-W and converts it to a DB9
serial port (In retrospect, this product was already available at
https://oshpark.com/shared_projects/uTMf3v08 but I didn't know about that
at the time). I also ordered a 40-pin (non-IDE) ribbon cable to connect the
DL11-W to my adapter.
When I connected everything, the 11/34 would start but no lights would
appear on the front panel. I tried disconnecting the adapter but leaving
the ribbon cable plugged into the BERG connector, but the problem
persisted. When I removed the ribbon cable entirely, the unit powered on
with no problems.
Since this is a straight-through ribbon cable, I don't see what could be
causing this problem. I have checked the continuity of each wire in the
cable, and there doesn't appear to be a problem. I'd appreciate any advice
that anyone has to offer.
Yours,
Mark D. Hare
markhare(a)buffalo.edu
University at Buffalo
B. S. Civil Engineering '16
M. S. Structural/Earthquake Engineering Student
> From: Clem Cole
> don't expect a lot of wild and crazy names
Yeah, those arrived when places started to get lots of identical machines,
and needed a theme to name them. So I remember MIT-LCS had VAX 750's called
Tide, Borax, etc (cleaners); MIT-AI had Suns called Grape-Nuts, Wheaties, etc
(cereals).
I know other places had similar name sets, but I can't recall the themes of
any of them - although looking at an old HOSTS.TXT, I see CMU had systems
called Faraday, Gauss, etc, while Purdue had Fermat, Newton, etc; U-Texas had
Disney characters, BBN had fish, U-Washington had South Pacific islands - the
list just goes on and on.
Google for an old Host file; that's a good source if you want to know more.
Noel
Co-inventor of Unix, he was born on this day in 1943. Just think: without
those two, we'd all be running M$ Windoze and thinking that it's
wonderful.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
As a tourist in Christchurch NZ in 1982, I saw a notice of a student piano
recital at the university. Free, why not? The fellow who sat next to me turned
out to be a physicist. On learning that I was a computer scientist, he proudly
described his wonderful new computer and operating system--the first of its
kind in the university, if I remember correctly. I let on that I was familiar
with it, so we both left the recital with a small-world story to tell.
Doug
Slartibartfast brings back fond memories of THHGTTG.
Of course those in IT simply know that with a Guide and a towel
there's no need to panic :-)
Cheers,
rudi
The presence of some sort of shared memory facility in the
BBN V6 Unix kernel got me thinking about the origins of
shared memory on Unix.
I had a vague recollection that primordial versions were present
in either PWB or CB3, but a quick glance at the source indicates
that this is not correct.
What are the origins of shared memory on Unix, i.e. what came
before mmap() and SysV IPC? Was the BBN kernel the first to
implement such a facility on Unix?
Paul
Not so long ago I joked about putting a Cray-1 in a watch. Now that we are
essentially living in the future, what audacious (but realistic)
architectures can we imagine under our desks in 25 years? Perhaps a mesh
of ten-million of today's highest end CPU/GPU pairs bathing in a vast sea
of non-volatile memory? What new abstractions are needed in the OS to
handle that? Obviously many of the current command line tools would need
rethinking (ps -ef for instance.)
Or does the idea of a single OS disintegrate into a fractal cloud of
zero-cost VM's? What would a meta-OS need to manage that? Would we still
recognize it as a Unix?
You might find this interesting reading:
http://www.livinginternet.com/u/ui_netexplodes.htm
In particular ihnp4. I used to have a UUCP map that linked me into this network back in the mid 80s. I was based in the UK doing some work for Henry Spencer at Microport Systems, if any of you recall their iX286 System V port, which was pretty cool.
Anyway, there are some interesting machine names mentioned.
From: smb(a)ulysses.att.com
Subject: Re: IHNP4
Date: Thu, 25 Oct 90 20:48:42 EDT
> Thus, ihnp4 was Indian Hill Network Processor #4
> mh was Murray Hill. ak was the Atlanta Wire Works, sb was Southern
> Bell, cb was Columbus (Mark Horton was mark@cbosgd for a long time)
> plus others.
Yup, Columbus Operating Systems Group D, as I recall.
> Then there were the machines in the lab that had (and have) names like
> bonnie, clyde, ulysses, research, allegra, lento, harpo, chico, etc.
> From: Clem Cole
> my printers have often been named after chainsaws
Yeah, MIT (or was it Proteon, I forget - a long time ago :-) had that theme
going for a while for printers...
> @ DEC we were pretty free to use what we wanted and some were themed,
> most were boring.
Hah! I do have a cosmically great computer naming story from DEC, though.
So DECNet host names were limited to N characters (where N=8, or some
such). So one day they get this complaint from some DEC user in the UK:
"Grumble, grumble, grumble, N-character limit in DECNet host names, we want to
name our host 'Slartibartfast'."
So, this being before a certain radio play had hit the US from the UK, the
people at DEC were like:
"What's a 'Slartibartfast'???"
Instantly, the reply shot back (and perhaps some of you saw this coming):
"Boy, you guys are so unhip it's a wonder your pants down fall down!" :-) :-)
Noel
> From: "Steve Johnson"
> The meetings went on for over a year, but _I NEVER MET WITH THE SAME
> PERSON TWICE!_ It seemed that the only thing the marketing group knew
> how to do was reorganize the marketing group...
Shades of SI:Electric-Marketing (I _think_ that was its name) on the Symbolics
LISP Machine...
(For those who never had the joy of seeing this, it randomly drew a bunch of
boxes with people in them on the screen in a hierarchy, connected them, and
then started randomly moving the boxes around... I wonder if the source
still exists - or, better yet, a video of it running? Probably not, alas.)
Noel
Hi,
My site at the BA Stuttgart was removed. It hosted courseware for my
unix v6 lecture. This includes:
Unix Programmer's Manual (aka man pages)
Documents for use with the Unix Time-Sharing System
prepared as PostScript files.
I provided the man pages as HTML-pages with the references replaced by
links.
The lecture notes contain tips for installing v6 on the simh emulator,
a description of the pdp11 instruction set and hardware as well as
a description of unix v6, including booting, kernel and user land software.
So if anyone is interested let me know.
Greetings
Wolfgang Helbig
Stauferstr. 22
71334 Waiblingen
Germany
> From: Nigel Williams
> Is it a reasonable claim that the PDP-10 made time-sharing "common"
> ... I'm presuming that "common" should be read as ubiquitous and
> accessible
> I'm wondering if it was really the combination of the PDP-11
Good question; I think a case can be made both ways.
> (lower-cost more models)
One observation I will make: the two don't have identical time-lines; the
earliest PDP-10 models predate the PDP-11 by a good chunk, and the PDP-11
out-lasted the PDP-10. So that has a big influence, I think, on the question
above.
The first PDP-10 (the KA - we'll leave aside the even earlier PDP-6) was made
out of small cards with individual transistors (B-series Flip Chips), whereas
the earliest PDP-11 model (the -11/20) used SSI TTL on much larger cards.
Ditto on the other end: the last PDP-10 sold used 29xx bit-slice technology,
whereas the PDP-11 lasted through three generations of microprocessor (the
LSI-11, Fonz, and Jaws).
Noel
Nigel Williams <nw(a)retrocomputingtasmania.com> asks on the TUHS list today:
>> ...
>> Is it a reasonable claim that the PDP-10 made time-sharing "common"
>> (note it says "the machine")? I'm presuming that "common" should be
>> read as ubiquitous and accessible (as in lower-cost than
>> competing/alternative options from other manufacturers or even DEC).
>>
>> I'm wondering if it was really the combination of the PDP-11
>> (lower-cost more models) and Unix ("free" license to universities)
>> that propelled time-sharing, at least at universities.
>> ...
I worked on the IBM ATS (Administrative Terminal System) for text
processing in the early 1970s, and for several years, on the CDC 6400
under both SCOPE and KRONOS operating systems. Those were mainframe
environments, but users scattered around campus accessed them via
glass terminals, so that was certainly time sharing.
Later, for 12 years (1978--1990), I also worked on TOPS-20 on the
PDP-10, and that too was time sharing, with most users having a
terminal on their desks. We also had PDP-11 and LSI-11 systems, but
they ran DEC proprietary operating systems, and were generally
dedicated to particular research hardware.
It was only in the early 1980s that my institution also began to run
Unix systems, initially Wollongong BSD on VAX 750s, and then in 1987,
with our first Sun workstations running SunOS. Thus, for me at least,
Unix time sharing came a dozen years late (though it was still
welcome, and remains so today).
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
This story appears today in The Register:
PDP-10 enthusiasts resurrect ancient MIT operating system
Incompatible Timesharing System now compatible with modern machines
https://www.theregister.co.uk/2017/01/30/pdp10_enthusiasts_resurrect_ancien…
Near the end of the story is a mention of SIMH and of KLH10, both
of which emulate the PDP-10. There is also mention of a PDP-11
emulator running inside ITS.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Lars Brinkhoff
> several debuggers called RUG and CARPET
SYSENG;CARPET > and SYSENG;KLRUG > (and also SYSEN2;URUG >).
CARPET runs in the PDP-10, and talks to the 11's via the Rubin 10-11 interface
on MIT-AI (which let the PDP-10 see into the PDP-11s' memory); it installed a
small toehold in the 11 (e.g. for trap handling). There was also a version
(conditionalized in the source) called "Hali" ("Hali is Carpet over a [serial]
line") - 'hali' is Turkish for 'carpet' (I wonder how someone knew that).
RUG runs in the front-end 11 on the KL (MIT-MC). URUG is a really simple
version of RUG that runs in a GT40, and uses the GT40 display for output.
There's also 11DDT (KLDCP;11DDT >) - not sure why both this and KLRUG exist -
unless RUG was for the front-end 11, and 11DDT was for the I/O-11?
Noel
> From: Paul Ruizendaal
>> the headers say they date from 1974-75.
> Wow, that's great! That means that you have the initial version.
The file write dates are May 1979, so that's the latest it can be. There is
one folder called 'DTI' which contains an email message from someone at DTI to
someone at SRI which is dated "10 Apr 1979" so that seems to indicate that
that's indeed when they are from.
(The message says that the folder contains the source for DTI's IMP-11A
driver, which is different from UIll's, although they both descend from the
same original version.)
> Possibly it is V5 not V6
Nope, definitely V6 here.
> All my leads for the 1975 version of this code base came up dry and I
> feared it lost.
I could have sworn that I'd seen _listings_ of the code in a UIllinois
document about NCP Unix that I had found (and downloaded) on the Internet, but
I can't find them here now. I did look again and found:
"A Network Unix System for the Arpanet", by Karl C. Kelley, Richard Balocca,
and Jody Kravitz
but it doesn't contain any sources.
> it may contain the first version of 'mbufs'
It might - the code is conditionalized for "UCBUFMOD" all over the place.
> Yes, a 'history' file seems to have been common practice at BBN. The
> kernel would have had many modifications:
> - the 'ports' extension from Rand
Yes.
> - the 'await' extension by Jack Haverty
Yup.
> - an 1822-driver
Yes (also by Haverty) - although IMP11-A drivers are all over the place, there
are two different ones in the NCP Unix alone.
> - possibly, an Autodin II network driver
Didn't see one.
> - possibly, shared memory extensions
Yes, there are two modules in 'ken', map_page.c and set_lcba.c (I was unable to
work out what 'LCBA' stood for) which seem to do something with mapping.
> It might even have some NCP code in it
Yes, there's an 'ncpkernel' directory.
> There seem to have been two versions of the BBN modified kernel. One was
> done for systems without separate I/D with stuff heavily trimmed
Yes, there's a 'SMALL' preprocessor flag which conditionally removes some
stuff.
> The other may have extended the V6 kernel to run in separate I and D
> spaces
That capability was present in stock V6.
Noel
> From: Clem Cole
> Steve Ward's guys writing Trix hacked together a compiler, assembler and
> the like.
All of which I have the source for - just looked through it.
> If memory serves me, tjt wrote the assembler
I have the NROFF source for the "A68 Assembler Reference", and it's by James
L. Gula and Thomas J. Teixeira. It says that "A68 is an edit of the MICAL
assembler also written by Mike [Patrick].".
> Jack Test did much of the compiler and again IIRC that was based on PCC.
I dunno, I'm not familiar with PCC, so I can't say. It definitely looks very
different from the Ritchie C compiler.
Noel
> From: Paul Ruizendaal <pnr(a)planet.nl>
>> I have this distinct memory of Dave Clark mentioning the Liza Martin
>> TCP/IP for Unix in one of the meeting reports published as IENs
> It may be mentioned in this report:
> http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf
Yeah, I had run across that in my search for any remnants of the Martin
stuff.
> Would you know if any of its source code survived?
As I had mentioned, I had found some old dump tapes, and had one of them read;
it had some bad spots, but we've just (this morning) succeeded in having a
look as to what's there, and I _think_ all of the source is OK (including the
kernel code, as well as applications like server Telnet and FTP). No SCCS or
anything like that, so it's a bit hit or miss doing history - the file write
dates were preserved, but of course a lot of them would have been edited over
time to fix bugs, add features, etc.
The tape appears to contain a _lot_ of other historic material, and it's
going to take a while to sort it all out; it includes a Version 6 with NCP
from NOSC/SRI, some Unix from BBN; a BCPL compiler; a 'bind' for .rel format
files (produced by MACRO-11 and probably BCPL) written in BCPL; programs to
convert from .rel to a.out and back; an early version of Montgomery EMACS;
another Unix from 'TMI' (whoever that might be); another UNIX that's somehow
associated with TRIX; someone's early kernel overlay stuff; an early 68K C
compiler, and also an early 8080 C compiler - just a ton of stuff (that's just
a few items that grabbed my eye as I scrolled by).
Algol, alas, appears not to be there (we probably didn't add it, because of
space reasons). The copy of LISP on this tape seems to be damaged; I do have 3
other tapes, and between them, I hope we'll be able to retrieve it.
Noel
> From: Nick Downing
> This is a wonderful find
Yes, I was _very_ happy to find those tapes in my basement; up till then, I
was almost sure all those bits were gone forever.
Thanks to Chuck Guzis, whose old data recovery service made this possible - he
actually read the tape.
> is it possible for you to read the other tapes also?
Alas, they're all of the same system. So the most we're going to get is the
files that are missing on this one due to bad spots on the tape.
Noel
> some Unix from BBN
This one is from 1979, it includes Mike Wingfield's TCP. The 'Trix UNIX' is a
port to the 68K, probably started with something V7ish (I see "setjmp.h" in
there). Bits of the Montgomery EMACS appear to date from 1981, but the main
source files seem to be from 1984. I also have the source to 'vsh' (Visual
Shell), whatever that is.
Noel
Just stumbled over another early TCP/IP for Unix:
http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf
It would seem to be a design similar to that of Holmgren's (NCP-based) Network Unix (basic packet processing in the kernel, connection management in a user space daemon). In time and in concept it would sit in between the Wingfield ('79, all user space) and the Gurwitz ('81, all kernel) implementations.
I think it was distributed initially as a mod versus V7 and later as a mod versus 2BSD.
Would anybody here know of surviving source of this implementation?
Thanks,
Paul
The recent discussion of Solaris made me think - what was the first Unix to
have centralized package management as part of the OS? I know that IRIX
had it, I think from the beginning (possibly even for the GL2 releases) but
I imagine there was probably something before that.
-Henry
Guess it is the beginning of the end of Solaris and the SPARC CPU:
'Rumors have been circulating since late last year that Oracle was
planning to kill development of the Solaris operating system, with major
layoffs coming to the operating system's development team. Others
speculated that future versions of the Unix platform Oracle acquired
with Sun Microsystems would be designed for the cloud and built for the
Intel platform only and that the SPARC processor line would meet its
demise. The good news, based on a recently released Oracle roadmap for
the SPARC platform, is that both Solaris and SPARC appear to have a
future.
The bad news is that the next major version of Solaris—Solaris 12— has
apparently been canceled, as it has disappeared from the roadmap.
Instead, it's been replaced with "Solaris 11.next"—and that version is
apparently the only update planned for the operating system through
2021.
With its on-premises software and hardware sales in decline, Oracle has
been undergoing a major reorganization over the past two years as it
attempts to pivot toward the cloud. Those changes led to a major speed
bump in the development cycle for Java Enterprise Edition, a slowdown
significant enough that it spurred something of a Java community revolt.
Oracle later announced a new roadmap for Java EE that recalibrated
expectations, focusing on cloud services features for the next version
of the software platform. '
http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confir…
--
Kay Parker
kayparker(a)mailite.com
--
http://www.fastmail.com - The way an email service should be
Now that we have quite a few ex-Bell Labs staff on the list, and several
other luminaries, and with the Unix 50th anniversary not far off, perhaps
it is time to form a working group to help lobby to get 8th, 9th and 10th
Editions released.
I'm after volunteers to help. People who can actually move this forward.
Let me know if and how you can help out.
Thanks, Warren
I'm a bit puzzled, but then I only ever worked with some version of
Ultrix and an AT&T flavour of UNIX in Philips, SCO 3.2V4.2 (OpenServer
3ish), DEC Digital UNIX, Tru64, HP-UX 11.23/11.31 and only ever used
"mkdir -p".
Some differences in the various versions are easily solved in scripts,
as shown below. Not the best of examples, but easy. Getting it to
work on a Linux flavour wouldn't be too difficult :-)
OS_TYPE=`uname -s`   # uname -s gives the OS name; -n would give the hostname
case "${OS_TYPE}" in
"OSF1")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="THA-7"
    ;;
"HP-UX")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/contrib/bin:/xyz/field/scripts:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="TST-7"
    ;;
*)
    echo "${OS_TYPE} unknown, exit"
    exit 1
    ;;
esac
> From: Doug McIlroy
> Perhaps the real question is why did IBM break so completely to hex for
> the 360?
Probably because the 360 had 8-bit bytes?
Unless there's something like the PDP-11 instruction format which makes octal
optimal, octal is a pain working with 8-bit bytes; anytime you're looking at
the higher bytes in a word, unless you are working through software which
will 'interpret' the bytes for you, it's a PITA.
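To make that concrete, here is a tiny C sketch (an illustration of mine, not
part of Noel's message): print a 16-bit word and its two bytes in octal, and
the byte values simply cannot be read off the word's octal digits.

#include <stdio.h>

int main(void)
{
    unsigned word = 0xABCD;  /* high byte 0xAB, low byte 0xCD */

    printf("word      = %#o\n", word);                /* 0125715 */
    printf("high byte = %#o\n", (word >> 8) & 0377);  /* 0253 */
    printf("low byte  = %#o\n", word & 0377);         /* 0315 */
    return 0;
}

Neither 0253 nor 0315 appears among the digits of 0125715, whereas in hex the
same word is 0xABCD and both bytes are directly visible.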
The 360 instruction coding doesn't really benefit from octal (well,
instructions are in 4 classes, based on the high two bits of the first byte,
but past that, hex works better); opcodes are 8 or 16 bits, and register
numbers are 4 bits.
As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370
Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over
whether to use 6 or 8, and they finally went with 8 because i) statistics
showed that more customer data was numbers, rather than text, and storing
decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit
byte), and ii) they were looking forward to handling text with upper- and
lower-case.
Noel
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is
> a multiple of 3. But PDP-11 is 16b, multiple of 4.
Octal predates the 6-bit byte. Dumps on Whirlwind II, a 16-bit machine,
were issued in octal. And to help with arithmetic, the computer lab
had an octal Friden (IIRC) desk calculator. One important feature of
octal is that, unlike hex, you don't have to learn new numerals and their
addition and multiplication tables, which are 2.5x the size of the decimal ones.
Established early, octal was reinforced by a decade of 6-bit bytes.
Perhaps the real question is why did IBM break so completely to hex
for the 360? (Absent actual knowledge, I'd hazard a guess that it
was eased in on the 7030.)
Doug
> From: Joerg Schilling
> Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz
> channel?
Google is your friend:
https://en.wikipedia.org/wiki/T-carrier
https://en.wikipedia.org/wiki/Digital_Signal_1
> How was the 64 ??? Kbit/s interface to the first IMPs implemented?
> Wasn't it AT&T that provided the lines for the first IMPs?
Yes and no. Some details are given in "The interface message processor for the
ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not
much. More detail of the business arrangement is contained in "A History of
the ARPANET: The First Decade" (BBN Report No. 4799).
Details of the interface, and the IMP side, are given in the BBN proposal,
"Interface Message Processor for the ARPA Computer Network" (BBN Proposal No.
IMP P69-IST-5): in each direction there is a digital data line, and a clock
line. It's synchronous (i.e. a constant stream of SYN characters is sent
across the interface when no 'frame' is being sent).
The 50KB modems were, IIRC, provided by the Bell system; the diagram in the
paper above seems to indicate that they were not considered part of the IMP
system. The modems at MIT were contained in a large rack, the same size as
the IMP, which stood next to it.
I wasn't able to find anything about anything past the IMP/modem interface.
Perhaps some AT&T publications of that period might detail how the modem,
etc, worked.
Noel
On the subject of the PDP-10, I recall seeing people at a DECUS
meeting in the early 1980s wearing T-shirts that proclaimed
I don't care what they say, 36 bits are here to stay!
I also recall a funny advertising video spoof at that meeting that
ended with the line
At DIGITAL, we're building yesterday's tomorrow, today.
That meeting was about the time of the cancellation of the Jupiter
project at DEC that was planned to produce a substantially more
powerful follow-on to the KL-10 processor model of the PDP-10 (we had
two such at the UofUtah), disappointing most of its PDP-10 customers.
Some of the Jupiter technology was transferred to later VAX models,
but DEC never produced anything faster than the KL-10 in the 36-bit
line. However, with microcomputers entering the market, and early
workstations from Apollo, LMI, Sun, and others, the economics of
computing changed dramatically, and departmental mainframes ceased to
be cost effective.
Besides our mainframe DEC-20/60 TOPS-20 system in the College of
Science, we also ran Wollongong BSD Unix on a VAX 750, and DEC VMS on
VAX 780 and 8600 models. In 1987, we bought our first dozen Sun
workstations (and for far less than the cost of a DEC-20/60).
After 12 good years of service (and a forklift upgrade from a 20/40 to
a 20/60), our KL-10 was retired on 31-Oct-1990, and the VAX 8600 in
July 1991. Our productivity increased significantly in the Unix
world.
I wrote about memories and history and impact of the PDP-10 in two
keynote addresses at TUG meetings in articles and slides available at
http://www.math.utah.edu/~beebe/talks/2003/tug2003/
http://www.math.utah.edu/~beebe/talks/2005/pt2005/
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I was well aware of the comment in V6, but had no idea what it
referred to. When Dennis and I were porting what became V7 to the
Interdata 8/32, we spent about 10 frustrating days dealing with savu
and retu. Dennis did his most productive work between 10pm and 4am,
while I kept more normal hours. We would pore over the crash dumps
(in hex, then a new thing for us--PDP-11 was all octal, all the
time). I'd tinker with the compiler, he'd tinker with the code and
we would get it to limp, flap its wings, and then crash. The problem
was that the Interdata had many more registers than the PDP-11, so the
compiler only saved the register variables across a call, where the
PDP-11 saved all the registers. This was just fine inside a process,
but between processes it was deadly. After we had tried everything
we could think of, Dennis concluded that the fundamental architecture
was broken. In a couple of days, he came up with the scheme that
ended up in V7.
It was only several years later when I saw a T-shirt with savu and
retu on it along with the famous comment that I realized what it had
referred to, and enjoyed the irony that we hadn't understood it
either...
Steve
----- Original Message -----
From: "Brantley Coile" <brantleycoile(a)me.com>
To:"Larry McVoy" <lm(a)mcvoy.com>
Cc:<tuhs(a)tuhs.org>
Sent:Mon, 16 Jan 2017 05:11:02 -0500
Subject:Re: [TUHS] Article on 'not meant to understand this'
Tim Bradshaw <tfb(a)tfeb.org> writes on 17 Jan 2017 13:09 +0000
>> I think we've all lived in a wonderful time where it seemed like
>> various exponential processes could continue for ever: they can't.
For an update on the exponential scaling (Moore's Law et al), see
this interesting new paper:
Peter J. Denning and Ted G. Lewis
Exponential laws of computing growth
Comm. ACM 60(1) 54--65 January 2017
https://doi.org/10.1145/2976758
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs.
The short answer is that Bell Labs was not on Arpanet. In the early
80s the interim CSNET gave us a dial-up window into Arpanet, which
primarily served as a conduit for email. When real internet connection
became possible, network code from Berkeley was folded into the
research kernel. (I am tempted to say "engulfed the research kernel",
for this was a huge addition.)
The highest levels of AT&T were happy to carry digital data, but
did not see digital as significant business. Even though digital T1
was the backbone of long-distance transmission, it was IBM, not
AT&T, that offered direct digital interfaces to T1 in the 60s.
When Arpanet came along MCI was far more eager to carry its data
than AT&T was. It was all very well for Sandy Fraser to build
experimental data networks in the lab, but this was seen as a
niche market. AT&T devoted more effort to specialized applications
like hotel PBXs than to digital communication per se.
Doug
I'm having a lot of fun with a virtual 11/94 and 2.11. What a lot of
excellent engineering!
It seems like an obvious project would be to adapt a newer pcc with ANSI
C support of some sort. Has this already been done? I'll take a look
if not.
Thanks,
Andy Valencia
p.s. The "less" in /usr/local doesn't seem to handle stty based TTY
geometry. I re-ported "less2" from comp.sources.unix and added this.
Somebody ping me if the mildly edited sources are of interest.
> From: Larry McVoy
> It is pretty stunning that the company that had the largest network in
> the world (the phone system of course) didn't get packet switching at
> all.
Actually, it's quite logical - and in fact, the lack of 'getting it' about
packets follows directly from the former (their large existing circuit-switched
network).
This dates back to Baran (see his oral history:
https://conservancy.umn.edu/handle/11299/107101
pg. 19 and on), but it was still detectable almost two decades later.
For a variety of all-too-human reasons (of the flavour of 'we're the
networking experts, what do you know'; 'we know all about circuit networks,
this packet stuff is too different'; 'we don't want to obsolete our giant
investment', etc, etc), along with genuine concerns about some real issues of
packet switching (e.g. the congestion stuff, and how well the system handled
load and overload), packet switching just was a bridge too far from what they
already had.
Think IBM and timesharing versus batch and mainframe versus small computers.
Noel
> From: Warren Toomey
> Something I've been meaning to ask for a while: why Unix and octal on
> the PDP-11? Because of the DEC documentation?
Yeah, DEC did it all in octal.
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b
> is a multiple of 3. But PDP-11 is 16b, multiple of 4.
Look at PDP-11 machine code. Two-op instructions look like this (bit-wise):
oooossssssdddddd
where 'ssssss' and 'dddddd' (source and destination) have the same format:
mmmrrr
where 'mmm' is the mode (things like R, @Rn, etc) and 'rrr' is the register
number. All on octal boundaries. So if you see '010011' in a dump (or when
looking at memory through the front console switches :-), you know
immediately that means:
MOV R0, @R1
Much harder in hex... :-)
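A small C sketch (mine, not from the post) of how those fields land exactly on
octal digit boundaries, so each field can be read straight off the octal form
of the instruction:

#include <stdio.h>

int main(void)
{
    unsigned insn = 010011;  /* MOV R0, @R1, written as an octal constant */

    unsigned op    = (insn >> 12) & 017;  /* oooo:   01 = MOV            */
    unsigned smode = (insn >> 9) & 07;    /* source mode: 0 = register   */
    unsigned sreg  = (insn >> 6) & 07;    /* source register: R0         */
    unsigned dmode = (insn >> 3) & 07;    /* dest mode: 1 = deferred @Rn */
    unsigned dreg  = insn & 07;           /* dest register: R1           */

    printf("op=%o src=%o%o dst=%o%o\n", op, smode, sreg, dmode, dreg);
    return 0;
}

This prints "op=1 src=00 dst=11" - exactly the digits of 010011.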
Noel
> From: Angelo Papenhoff
> The problem is that the function which did the savu was not necessarily
> the same as the function that does the retu, so after retu the function
> could have the call stack of a different function. As dmr explained,
> this worked with the PDP-11 compiler but not with the interdata
> compiler.
To put it slightly differently, in PDP-11 C all stack frames look identical,
but this is not true of other machines/compilers. So if routine A called
savu(), and routine B called aretu(), when the call to aretu() returned,
procedure B is still running, but on procedure A's stack frame. So on machines
where A's stack frame looks different from B's, hilarity ensues.
(Note that aretu() was significantly different from retu() - the latter
switched to a different process/stack, whereas aretu() did a 'non-local goto'
[technically, switched to a different stack frame on the current stack] in the
current process.)
> Note that Lions doesn't explain this either, he assumed that the
> difficulty was with u_rsav and u_ssav .. (he probably wasn't that
> wrong though, it really is confusing, but it's just not what the comment
> refers to)
Right. There are actually _three_ sets of saved stack info:
int u_rsav[2]; /* save r5,r6 when exchanging stacks */
int u_qsav[2]; /* label variable for quits and interrupts */
int u_ssav[2]; /* label variable for swapping */
and it was the interaction among the three of them that I found very hard to
understand - hence my (incorrect) memory that the 'you are not' comment
actually referred to that, not the savu/aretu stuff!
Calls to retu(), the primitive to switch stacks/processes, _always_ use
rsav. The others are for 'non-local gotos' inside a process.
Think of qsav as a poor man's exception handler for process software
interrupts. When a process is sleeping on some event, when it is interrupted,
rather than the sleep() call returning, it wakes up returning from the
procedure that did the savu(qsav). (That last is because sleep() - which is
the procedure that's running when the call to aretu(qsav) returns - does a
return immediately after restoring the stack to the frame saved in qsav.)
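The qsav mechanism is essentially what setjmp()/longjmp() later standardized.
A rough modern analogue of the pattern just described (my sketch; the names
are illustrative, this is not the V6 code):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf qsav;  /* plays the role of u_qsav */

static void sleep_on_event(void)
{
    /* ... would normally block here; pretend a signal arrives ... */
    longjmp(qsav, 1);  /* like aretu(qsav): unwind to whoever saved qsav */
}

static void syscall_body(void)
{
    if (setjmp(qsav) == 0) {  /* like savu(qsav) */
        sleep_on_event();
        printf("normal wakeup\n");  /* not reached in this sketch */
    } else {
        printf("interrupted: sleep() never returned normally\n");
    }
}

int main(void)
{
    syscall_body();
    return 0;
}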
And I've forgotten exactly how ssav worked - IIRC it was something to do with
how when a process is swapped out, since that can happen in a number of
ways/places, the stack can contain calls to various things like expand(),
etc; when it's swapped back in, the simplest thing to do is to just throw that
all away and have it go back to where it was just before it was decided to
swap it out.
Noel
> From: Tony Finch
This is getting a bit far afield from Unix, so my apologies to the list for
that. But to avoid dumping it in the Internet-History list abruptly, let me
answer here _briefly_ (believe it or not, the below _is_ brief).
> AIUI there were two major revisions to the IPv4 addressing architecture:
Not quite (see below). First, one needs to understand that there are two
different timelines for changes to addressing: in the hosts, and in the
routers (called 'gateways' originally). To start with, they were tied
together, but as of RFC-1122, they were formally separated: hosts no longer
fully understood the syntax/semantics of addresses, just (mostly) treated them
as opaque 32-bit quantities.
> subnetting (RFC 917, October 1984 ... RFC 950, August 1985), and
> classless routing (RFC 1519, September 1993)
Originally, network numbers were 8 bits, and the 'rest' (local) part was 24.
Mapping from IP addresses to physical network addresses was done with direct
mapping - ARP did not exist - the actual local address (e.g. IMP/Port) was
contained in the 'rest' field - each network had a document which specified
the mapping. (Which is part of the interoperability issue with old
implementations.)
At some point early on, it was realized that 8 bits of network number were not
enough, and the awful A/B/C kludge was added (it was dropped on the community,
not discussed beforehand). Subnetting was indeed the next change. Then the
host/router split happened.
Classless routing (which effectively extended addresses, for path-computation
purposes, to 32+N bits - since you couldn't look at a 32-bit IP address and
immediately tell which was the 'network' part any more, you _had_ to have the
mask as well, to tell you how many bits of any given address were the network
number) was more of a process than a single change - the inter-AS routing
(BGP) had to change, but so did IGP's (OSPF, IS-IS), etc, etc.
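In code terms (an illustration of mine, not from the post; mask_from_prefix is
a made-up helper name): once routing went classless, an address by itself no
longer identifies a network, so the prefix length, or equivalently the mask,
has to be carried along with it.

#include <stdio.h>
#include <stdint.h>

/* Derive a 32-bit network mask from an explicit prefix length (0..32). */
static uint32_t mask_from_prefix(int bits)
{
    return bits == 0 ? 0 : (uint32_t)(0xFFFFFFFFu << (32 - bits));
}

int main(void)
{
    /* The same 32 bits can belong to different networks depending on
     * the prefix distributed with the route. */
    printf("/8  mask = %08x\n", (unsigned)mask_from_prefix(8));   /* ff000000 */
    printf("/16 mask = %08x\n", (unsigned)mask_from_prefix(16));  /* ffff0000 */
    return 0;
}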
> originally called supernetting (RFC 1338, June 1992).
There was this effort called ROAD which produced RFC-1338 and 1519, and IIRC
there was an intermediate, involving blocks of network numbers (1338), and
that slowly evolved into arbitrary blocks (1519).
One should also note that the term "super-netting" comes from a proposal by
Carl-Hubert ("Roki") Rokitansky which did not, alas, make it to RFC. (His
purpose was different, but it used the same mechanism.) Alas, the authors of
1338/1519 failed to properly acknowledge his earlier work.
Noel
> From: Johnny Billquist
>> everyone working on TCP/IP heard about Version 4 shortly after the
>> June, 1978 meeting.
> Over a year before any documents said anything about it.
Incorrect. It's documented in IEN-44, June 1978 (written shortly after the
meeting, in the same month).
> I'm sure people were doing networking protocols and stuff earlier, but
> it wasn't the TCP/IP we know and talk about today
People were working on Unix in 1977, but it's not the same Unix we know and
talk about today. Does that mean it's not Unix they were working on?
>> there were working implementations (as in, they could exchange data with
>> other implementations) of TCP/IPv4 by January 1979 - see IEN 77.
^^
> But not TCP4 then.
I just specified that it was v4 (see above).
> thus, not interoperable with an implementation today
No, with properly-chosen addresses (because of the changes in address
handling), they probably would be.
Noel
> From: Paul Ruizendaal
> I guess by April 1981 (RFC777) we reach a point where things are
> specified to a level where implementations would interoperate with
> today's implementations.
Yes and no. Earlier boxes would interoperate, _if addresses on each end were
chosen properly_. Modern handling of addresses on hosts (for the 'is this
destination on my physical network' step of the packet-sending algorithm) did
not come in until RFC-1122 (October 1989); prior to that, lots of host code
probably tried to figure out if the destination was class A, B or C, etc, etc.
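For flavour, this is roughly what that pre-RFC-1122 classful test looked like
(my reconstruction of the common idiom, not code from any particular host):
the implied network mask was read off the top bits of the address itself.

#include <stdio.h>
#include <stdint.h>

/* Pre-CIDR: the class, and hence the network/rest split, is implied
 * by the leading bits of the address. */
static uint32_t classful_mask(uint32_t addr)
{
    if ((addr >> 31) == 0) return 0xFF000000u;  /* class A: 0...   /8  */
    if ((addr >> 30) == 2) return 0xFFFF0000u;  /* class B: 10...  /16 */
    if ((addr >> 29) == 6) return 0xFFFFFF00u;  /* class C: 110... /24 */
    return 0;                                   /* class D/E: not unicast */
}

int main(void)
{
    uint32_t a = 18u << 24;  /* 18.0.0.0, a class A network */
    printf("implied mask = %08x\n", (unsigned)classful_mask(a));
    return 0;
}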
Also, until RFC-826 (ARP, November 1982) pretty much all the network
interfaces (and thus the code to turn the 'destination IP address' into an
attached physical network address, for the first hop) were things like ARPANet
that no longer exist, so you couldn't _actually_ fire up one of them unless you
do something like the 'ARPANet emulation' that the various PDP-10 simulators
use to allow old OS's running on them to talk to the current Internet.
> only if one accepts IEN54/55 as 'TCP/IP'
What are they, if not TCP/IP?
Not the modern variant, of course, but then again, nothing before the early
90's is truly 'modern TCP/IP'.
> IEN98 mentions a TCP3 stack done for Network Unix ... in 1978 by DTI /
> Gary Grossman.
I read this, BITD, but don't recall much about it. I was not impressed by the
coding style.
> at the same time it also uses old-style assignments ('=+' instead of
> '+='). Could this be "typesetter C"?
I don't know. IIRC, that compiler supported both styles. It had to have been a
later compiler than the one that came with V6, which didn't support longs. But
I don't recall any bug with long support in the typesetter C compiler we had at
MIT.
> From the above I would support the moniker "first TCP/IP in C on Unix"
No. That clearly belongs to the DTI one. (The differences between V3 and V4,
while significant, aren't enough to make the DTI one not 'TCP/IP in C for Unix'.)
If you want to say 'first v4 TCP/IP in C for Unix', maybe; I'd have to look for
dates on the one done at MIT for V6, that may be earlier, but I don't think
so. (Check the minutes in the IEN's, that's probably the best source of data
on progress of the various implementations.)
> One thing that I'm unclear about is why all this Arpanet work was not
> filtering more into the versions of Unix done at Bell Labs.
Here's my _guess_ - ask someone from Bell for a sure answer.
You're using 20/20 hindsight. At that point in time, it was not at all obvious
that TCP/IP was going to take over the world.
There were a couple of alternatives for moving data around that Bell used -
Datakit, and UUCP - and they worked pretty well, and there was no reason to
pick up on this TCP/IP thing.
I suspect that it wasn't until LANs became popular that TCP/IP looked like a
good thing to have - it fits very well with the capabilities most LANs had (in
term of the service provided to things attached to them). Datakit was its own
thing, and for UUCP you'd have to provide a reliable stream, and TCP/IP 'just
did that'.
Noel
On 2017-01-16 03:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > Like I pointed out, RFC760 lacks ICMP.
>
> So? TCP will work without ICMP.
True. However, IP and UDP will have issues.
> > Which also makes one question how anyone would have known about IPv4 in
> > 1978.
>
> Well, I can assure you that _I_ knew about it in 1978! (The decision on the v4
> packet formats was taken in the 5th floor conference room at 545 Tech Sq,
> about 10 doors down from my office!)
>
> But everyone working on TCP/IP heard about Version 4 shortly after the June,
> 1978 meeting.
Over a year before any documents said anything about it. This is where I
have problems. :-)
> > Also, first definition of TCP shows up in RFC 761
>
> If you're speaking of TCPv4 (one needs to be precise - there were also of
> course TCP's 1, 2, 2.5 and 3, going back to 1974), please see IEN-44. (Ignore
> IEN's -40 and -41; those were proposals for v4 that got left by the wayside.)
That is a very good point. I've been talking v4 all the time (both for
IP and TCP). Like I said, I'm sure people were doing networking
protocols and stuff earlier, but it wasn't the TCP/IP we know and talk
about today, and you just reaffirmed this.
And yes, the TCP/IP we know today did not come out of a blue sky. Of
course it is based on earlier work. (Just so you don't have to go on
about that again.)
> > So yes, I still have problems with claims that they had it all running
> > in 1978.
>
> I never said we had it "all" running in 1978 - and I explicitly referenced
> areas (congestion, addressing/routing) we were still working on over 10 years
> later.
>
> But there were working implementations (as in, they could exchange data with
> other implementations) of TCP/IPv4 by January 1979 - see IEN 77.
But not TCP4 then. And thus, not interoperable with an implementation
today, and interoperable in general being a rather floating and moving
target, as you had several incompatible TCP versions, using different
protocol numbers, and several incompatible IP versions.
> (I'll never forget that weekend - we were in at ISI on Saturday, when it was
> normally closed, and IIRC we couldn't figure out how to turn the hallway
> lights on, so people were going from office to office in the gloom...)
Fun times, I bet.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> (Check the minutes in the IEN's, that's probably the best source of data
> on progress of the various implementations.)
Another place to look is Internet Monthly Reports and TCP-IP Digests (oh, I
see you've seen those, I see a reference to one).
I have this distinct memory of Dave Clark mentioning the Liza Martin TCP/IP
for Unix in one of the meeting reports published as IENs, but a quick look
didn't find it.
Noel
On 14 Jan 2017, at 17:41 , Michael Kjörling wrote:
> On 13 Jan 2017 10:13 +0100, from pnr(a)planet.nl (Paul Ruizendaal):
>> over on the internet history mailing list.
>
> Interesting. Care to give me a pointer toward it?
That mailing list is here:
http://mailman.postel.org/mailman/listinfo/internet-history
On 14 Jan 2017, at 23:02 , Johnny Billquist wrote:
> IMPs did not talk IP, just for the record.
Yes, this is true of course. The software of the IMP was resurrected from printouts some time ago:
http://walden-family.com/impcode/
> The problem is that all the RFCs are available, and they are later than this. The ARPAnet existed in 1979, but it was not using TCP/IP. If you look at the early drafts of TCP/IP, from around 1980-1981, you will also see that there are significant differences compared to the TCP/IP we know today. There was no ICMP, for example. Error handling and passing around looked different.
Once again: yes. When exactly was the TCP/IP specification completed? That is an issue where reasonable people can hold different opinions. What software first implemented this on Unix? Here too reasonable people can hold different opinions. Below my take on this, based on my current understanding (and I keep repeating that I'm learning new things about this stuff almost every day and please advise if I'm missing things).
Development of TCP/IP
The specification that became TCP/IP apparently finds its roots in 1974, and it is gradually developed over the next years with several trial implementations. By March 1977 we get to TCP2 and more trials. Next it would seem that there was a flurry of activity from January to August 1978, resulting in specifications for TCP4 (IEN54 and IEN55). Then, up to March 1979, more implementations follow, as documented in IEN98. With those implementations tested, also for interoperability, more changes to the protocol and implementations follow, and I guess by April 1981 (RFC777) we reach a point where things are specified to a level where implementations would interoperate with today's implementations. This is not where it stops; 'modern' congestion control only goes back to the late 80's (see for instance Craig Partridge http://www.netarch2009.net/slides/Netarch09_Craig_Partridge.pdf, it is an interesting read).
Early Unix code bases
(1) The Mathis/Haverty stack
In 1977 Jim Mathis at SRI writes a "TCP2.5" stack in PDP11 assembler, running on the MOS operating system (for the LSI11). In 1978 Haverty is assigned to take this stack and make it run on V6 Unix. He builds a kernel with ports, await and capac to achieve this. It is a mixed success (see http://mailman.postel.org/pipermail/internet-history/2016-October/004073.ht…) but he maintains it through 1978 and 1979 as a skunkworks project and the code eventually supports TCP4 (as defined in IEN54/55). The source has survived in Jack Haverty's basement as a printout, but it is not online. As far as I know, this is the first TCP/IP on Unix (a tie with the Wingfield implementation, and only if one accepts IEN54/55 as 'TCP/IP').
(2) The Grossman (DTI) stack
IEN98 mentions a TCP3 stack done for Network Unix (by then called ENFE/INFE) in 1978 by DTI / Gary Grossman. I don't currently have information about that implementation. As at March 1979 it did not appear to support TCP4.
(3) The Wingfield/Cain stack
In 1978 BBN / Michael Wingfield was commissioned by DCEC / Ed Cain to write a TCP4 stack in C for Unix V6. As it stood in March 1979 this code supported IEN54/55 with the AUTODIN II security extensions that harked back to 1975. It is a partial implementation: it does not support IP fragmentation and it has a simplistic approach to incoming out-of-order packets (drop everything until the right one arrives). However, it worked and kept being maintained by Ed Cain, who by October 1981 had added support for ICMP and GGP (https://www.rfc-editor.org/rfc/museum/tcp-ip-digest/tcp-ip-digest.v1n1.1). He is still supporting it as late as 1985 (https://www.rfc-editor.org/rfc/museum/tcp-ip-implementations.txt.1).
As far as I know, only the March 1979 code survives. I'm currently retyping it, about halfway through. I'm not sure what compiler it was written for: it uses longs, but apparently this is still somewhat broken (with comments in the source amounting to 'this WTF necessary to work around compiler bugs'); at the same time it also uses old-style assignments ('=+' instead of '+='). Could this be "typesetter C"? The code feels like it might be based on earlier work, for instance the BBN BCPL implementation for TENEX a few years earlier, but that is pure speculation. It could also be that Wingfield was new to C and bringing habits from other languages with him. Once all done, I'll ask Michael about it.
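For readers who haven't met it: '=+' is the pre-K&R spelling of the compound
assignment operators, later reversed because "x=-1" parsed as "x =- 1"
(subtract 1 from x) rather than "x = -1". A small sketch of the difference
(mine; modern syntax, with the old spelling shown in comments since today's
compilers reject it):

#include <stdio.h>

int main(void)
{
    int x = 10;

    x += 2;             /* old style: x =+ 2; */
    x -= 1;             /* old style: x =- 1; the famously ambiguous case:
                           in old C, "x=-1" meant this, not "x = -1" */
    printf("%d\n", x);  /* prints 11 */
    return 0;
}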
I'm on thin ice here, but my current guess would be that the 5,000 line code base would need some 500 lines of new code to make it interoperable with today's implementations. From the above I would support the moniker "first TCP/IP in C on Unix" as claimed by UCLA, either for the March 1979 version if one takes the view that '90% modern' is good enough, or for the October 1981 version if support for RFC777 is the benchmark. In the latter view it beats the Gurwitz stack by about a month or two. However, it is not a full implementation of the specifications.
(4) The Gurwitz stack
Last in the list of candidates is the Rob Gurwitz stack for BSD4.1 (see IEN168), started in January 1981. It is a full implementation of the protocols that looks like it was done from scratch (as confirmed by Gurwitz and Haverty) and consolidates the earlier learnings. In my opinion, it is the first implementation where the source code has a distinct 'early unix smell' (please excuse the phrase), arguably more so than the later Joy code. The first iterations of the code don't support (the as yet non-existent) RFC777. By November 1981 there is a beta distribution tape that does, and this code looks to interoperate with modern TCP/IP implementations as-is. If the benchmark is a full implementation interoperating with today's code, the first TCP/IP on Unix would I think be the Gurwitz stack. Possibly it is the first TCP/IP in that definition on any OS.
Note that this is also where TCP/IP and Network Unix join back up [but this view might change as I learn more about the Grossman / DTI version]: the Gurwitz code uses an API closely based on that of UoI's Network Unix and the provided user land programs (Telnet, FTP, MTP, etc.) appear to be ports from that code base (or perhaps from a BBN development of the UoI work).
===
In any case, I think it is fair to say that TCP/IP as we know it today did not drop from the sky in 1981. There was a vast amount of prior work done in the second half of the 70's on a variety of hardware and operating systems, experience that all fed into the well-known RFCs.
One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs. The folks involved were certainly aware of each other and the work that was going on. With universities the cost of 'always on' long distance lines may have been too great, but within Bell Labs that would have been less of an issue and there is a clear link with the core business of the Bell System.
Would anybody have some background on that?
Paul