> From: Johnny Billquist
> Like I pointed out, RFC760 lacks ICMP.
So? TCP will work without ICMP.
> Which also makes one question how anyone would have known about IPv4 in
> 1978.
Well, I can assure you that _I_ knew about it in 1978! (The decision on the v4
packet formats was taken in the 5th floor conference room at 545 Tech Sq,
about 10 doors down from my office!)
But everyone working on TCP/IP heard about Version 4 shortly after the June,
1978 meeting.
> Also, first definition of TCP shows up in RFC 761
If you're speaking of TCPv4 (one needs to be precise - there were also of
course TCP's 1, 2, 2.5 and 3, going back to 1974), please see IEN-44. (Ignore
IEN's -40 and -41; those were proposals for v4 that got left by the wayside.)
> So yes, I still have problems with claims that they had it all running
> in 1978.
I never said we had it "all" running in 1978 - and I explicitly referenced
areas (congestion, addressing/routing) we were still working on over 10 years
later.
But there were working implementations (as in, they could exchange data with
other implementations) of TCP/IPv4 by January 1979 - see IEN 77.
(I'll never forget that weekend - we were in at ISI on Saturday, when it was
normally closed, and IIRC we couldn't figure out how to turn the hallway
lights on, so people were going from office to office in the gloom...)
Noel
> From: Johnny Billquist <bqt(a)softjar.se>
> And RFC791 is dated September 1981.
Yes, but it had pretty much only editorial changes from RFC-760, dated January
1980 (almost two years before), and from a number of IEN's dated even earlier
than that (which I'm too lazy to paw through).
> So I have this problem with people who say that they implemented TCP/IP
> in 1978 for some reason.
If you look at IEN-44, June 1978 (issued shortly after the fateful June 15-16
meeting, where the awful 32-bit address decision was taken), you will see that
the packet format as of that date was pretty much what we have today (the
format of addresses kept changing for many years, but I'll put that aside for
now).
> Especially if they say ... it was working well in heterogeneous
> networks.
TCP/IP didn't work "well" for a long time after 1981 - until we got the
congestion control stuff worked out in the late 80's. And IIRC the routing/
addressing stuff took even longer.
> I don't think it's correct to say that it was TCP/IP, as we know it
> today.
Why not? A box implementing the June '78 spec would probably talk to a current
one (as long as suitable addresses were used on each end).
> It was either some other protocol (like NCP) or some other version of
> IP, which was not even published as an RFC.
Nope. And don't put too much weight on the RFC part - TCP/IP stuff didn't
start getting published as RFC's until it was _done_ (i.e. ready for the
ARPANet to convert from NCP to TCP/IP - which happened January 1, 1983).
All work prior to TCP/IP being declared 'done' is documented in IEN's (very
similar documents to RFC's, distributed by the exact same person - Jon
Postel).
Noel
On 2017-01-13 18:57, Paul Ruizendaal <pnr(a)planet.nl> wrote:
>
> On 12 Jan 2017, at 4:54 , Clem Cole wrote:
>
>> The point is that while I have no memory of capac(), I can confirm that I definitely programmed with the empty() system call and Rand ports on a v6 based kernel in the mid-1970s, and that it was definitely at places besides Rand themselves.
> Thank you for confirming that. If anybody knows of surviving source for these extensions I'd love to hear about it. Although the description in the implementation report is clear enough to recreate it (it would seem to be one file similar to pipe.c and a pseudo device driver similar in size to mem.c), original code is better. It is also possible that the code in pipe.c was modified to drive both pipes and ports -- there would have been a lot of similarity between the two, and kernel space was at a premium.
>
>> [...] confirming something I have been saying for few years and some people have had a hard time believing. The specifications for what would become IP and TCP were kicking around the ARPAnet in the late 1970s.
> My understanding is that all RFC's and IEN's were available to all legit users of the Arpanet. By 1979 there were 90 nodes (IMP's) and about 200 hosts connected. I don't get the impression that stuff was always easy to find, with Postel making a few posts about putting together "protocol information binders". Apparently nobody had the idea to put all RFC's in a directory and give FTP access to it.
They were, and still are. And I suspect Clem is thinking of me, as I
constantly question his memory on this subject.
The problem is that all the RFCs are available, and they are later than
this. The ARPAnet existed in 1979, but it was not using TCP/IP. If you
look at the early drafts of TCP/IP, from around 1980-1981, you will also
see that there are significant differences compared to the TCP/IP we
know today. There was no ICMP, for example. Error handling and the
passing around of errors looked different.
IMPs did not talk IP, just for the record.
RFC760 defines IPv4, and is dated January 1980. It refers to some
previous documents that describe IP, but they are not RFCs. Also, if you
look at RFC760, you will see that errors were supposed to be handled
through options in the packet header, and that IP addresses, while 32
bits, were just split into 8 bits for network number, and 24 bits for
host. There was obviously still some work needed before we got to what
people think of as IPv4 today. An implementation of RFC760 would probably
not work at all with an IPv4 implementation that exists today.
> I am not sure how available this stuff was outside the Arpanet community. I think I should put a question out about this, over on the internet history mailing list.
>
> As an aside: IMHO, conceptually the difference between NCP and TCP wasn't all that big. In my current understanding the big difference was that NCP assumes in-order, reliable delivery of packets (as was the case between IMP's) and that TCP allows for unreliable links. Otherwise, the connection build-up and tear-down and the flow control were similar. See for instance RFC54 and RFC55 from 1970. My point is: yes, these concepts were kicking around for over a decade in academia before BSD.
Not sure if BSD is a good reference point. Much stuff was not actually
done on Unix systems at all, if you start reading machine lists in the
early RFCs. Unix had this UUCP thingy that they liked. ;-) BSD and
networking research came more to the fore, doing all the refinements
over the years.
Anyway, yes, for sure TCP did not come out of the void. It was based on
earlier work. But there are some significant differences between TCP/IP
and NCP, which is why you had the big switch day.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I see; no, I had not realized that code is still in use. I would have
thought it had been replaced by a whole lot of POSIX bloat. Admittedly the
2.11BSD ctime/asctime/localtime/timezone stuff is simplistic and doesn't
address complicated cases, but it's good enough.
However, I have to resist the temptation to improve or update stuff in
2.11BSD; I went down that path many times (with the Makefiles project, for
instance), and because everything is interdependent you always introduce
more problems and get deeper and deeper enmeshed. In order to stay in
control I only fix essentials and apply a rule of minimal change, period.
This applies until I have a baseline that builds exactly the same binary
system image as the native build. Then I might proactively improve parts of
the system, but I will not do it reactively, if you follow.
As I see it the zic behaviour is not a bug, since time_t is 32 bits on
2.11BSD and has no unreasonable values, and localtime() there is BSD-,
not POSIX-, compliant and is not allowed to return NULL.
cheers, Nick
On 14/01/2017 6:53 AM, "Random832" <random832(a)fastmail.com> wrote:
On Fri, Jan 13, 2017, at 12:57, Nick Downing wrote:
> I then ended up doing a fair bit of re-engineering, how this came
> about was that I had to port the timezone compiler (zic) to run on the
> Linux cross compilation host, since the goal is eventually to build a
> SIMH-bootable disk (filesystem) with everything on it. This was a bit
> involved, it crashed initially and it turned out it was doing
> localtime() on really small and large values to try to figure out the
> range of years the system could handle. On the Linux system this
> returns NULL for unreasonable time_t values which POSIX allows it to
> do. Hence the crash. It wasn't too obvious how to port this code. (But
> whatever I did, it had to produce the exact same timezone files as a
> native build).
You know that the timezone file format that it uses is still in use
today, right? There's extra data at the end in modern ones for 64-bit
data, but the format itself is cross-platform, with defined field widths
and big-endian byte order.
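For the curious, the layout described above is easy to check by hand. Here is an illustrative sketch in C (not the actual zic or glibc code) of the magic/version check and big-endian field decoding; the assumption that the first of the six header counts sits at byte offset 20 follows the published TZif layout (4-byte magic, 1 version byte, 15 reserved bytes):

```c
#include <stdint.h>
#include <string.h>

/* Decode one big-endian 32-bit field, the encoding used for every
   count and transition time in a TZif file. */
uint32_t tz_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Return 1 if the buffer starts with a plausible TZif header: the
   "TZif" magic, then a version byte ('\0' for version 1 files, '2' or
   '3' for the later format with the 64-bit section appended). */
int tz_check_magic(const unsigned char *hdr)
{
    return memcmp(hdr, "TZif", 4) == 0 &&
           (hdr[4] == '\0' || hdr[4] == '2' || hdr[4] == '3');
}
```

Because every field has a defined width and byte order, the same decoder works regardless of the host's endianness or word size.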
What do you get when you compare the native built timezone files with
one from your linux host's own zic? It *should* only differ by the
version number in the header [first five bytes "TZif2" vs "TZif"] and
the 64-bit section, if you're giving it the same input files. And I bet
you could take the current version of the code from IANA and, if it
matters to you, remove the parts that output the 64-bit data. If nothing
else, looking at the modern code and the version in 2.11BSD side-by-side
will let you backport bug fixes.
(Note: Technically, the version present in most Linux systems is a fork
maintained with glibc rather than the main version of the code from
IANA)
So I got a fair bit further advanced on my 2.11BSD cross compiler
project, at the moment it can make a respectable unix tree (/bin /usr
etc) with a unix kernel and most software in it. I unpacked the
resulting tarball and chrooted into it on a running SIMH system and it
worked OK, was a bit rough around the edges (missing a few necessary
files in /usr/share and that kind of thing) but did not crash. I
haven't tested the kernel recently but last time I tested it, it
booted, and the checksys output is OK.
I then ended up doing a fair bit of re-engineering, how this came
about was that I had to port the timezone compiler (zic) to run on the
Linux cross compilation host, since the goal is eventually to build a
SIMH-bootable disk (filesystem) with everything on it. This was a bit
involved, it crashed initially and it turned out it was doing
localtime() on really small and large values to try to figure out the
range of years the system could handle. On the Linux system this
returns NULL for unreasonable time_t values which POSIX allows it to
do. Hence the crash. It wasn't too obvious how to port this code. (But
whatever I did, it had to produce the exact same timezone files as a
native build).
So what I ended up doing was to port a tiny subset of 2.11BSD libc to
Linux, including its types. I copied the ctime.c module and prefixed
everything with "cross_" so there was "cross_time_t" and so forth, and
"#include <time.h>" became "#include <cross/time.h>", in turn this
depends on "#include <cross/sys/types.h>" and so on. That way, the
original logic worked unchanged.
I decided to also redo the cross compilation tools (as, cc, ld, nm,
ranlib and so on) using the same approach, since it was conceptually
elegant. This involved making e.g. "cross_off_t" and "struct
cross_exec" available by "#include <cross/a.out.h>", and obviously the
scheme extends to whatever libc functions we want to use. In
particular we use floating point, and I plan to make a "cross_atof()"
for the C compiler's PDP-11-formatted floating-point constant
handling, etc. (This side of things, like the cross tools, was
working, but was not terribly elegant before).
So then I got to thinking, actually this is an incredibly powerful
approach. Instead of just going at it piecemeal, would it not be
easier just to port the entire thing across? To give an example what I
mean, the linker contains code like this:
if (nund==0)
        printf("Undefined:\n");
nund++;
printf("%.*s\n", NNAMESIZE, sp->n_name);
It is printing n_name from a fixed-size char array, so to save the
cost of doing a strncpy they have used that "%.*s" syntax which tells
printf not to go past the end of the char array. But this isn't
supported in Linux. I keep noticing little problems like this
(actually I switched off "-Wformat" which was possibly a bad idea). So
with my latest plan this will actually run the 2.11BSD printf()
instead of the Linux printf(), and the 2.11BSD stdio (fixing various
other breakage that occurred because BUFSIZ isn't 512 on the Linux
system), and so on. What I will do is, provide a low level syscalls
module like cross_open(), cross_read(), cross_write() and so on, which
just redirect the request into the normal Linux system calls, while
adjusting for the fact that size_t is 16 bits and so forth. This will
be really great.
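A minimal sketch of what one such shim might look like; the cross_* names and exact type choices here are my illustration of the idea, not Nick's actual code:

```c
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical 16-bit types matching 2.11BSD's (the names are mine). */
typedef unsigned short cross_size_t;   /* size_t is 16 bits on the PDP-11 */
typedef short cross_ssize_t;

/* Shim: forward a 2.11BSD-style read() to the host's read(), narrowing
   the types.  As on the real 16-bit system, callers never pass counts
   above 32767, so the narrowing cast of the result is safe. */
cross_ssize_t cross_read(int fd, void *buf, cross_size_t nbytes)
{
    ssize_t r = read(fd, buf, (size_t)nbytes);
    return (cross_ssize_t)r;
}
```

The ported 2.11BSD stdio would then sit on top of these shims, seeing exactly the types and return values it expects.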
In case it sounds like this is over-engineering, well bear in mind
that one knotty problem I hadn't yet tackled is the standalone
utilities, for instance the 2.11BSD tape distribution contains a
standalone "restor" program which is essentially a subset of the
kernel, including its device drivers, packaged with the normal
"restor" utility into one executable that can be loaded off the tape.
It was quite important to me that I get this ported over to Linux, so
that I can produce filesystems, etc, at the Linux level, all ready to
boot when I attach them to SIMH. But it was going to be hugely
challenging, since compiling any program that includes more than the
most basic kernel headers would have caused loads of conflicts with
Linux's kernel headers and system calls. So the new approach
completely fixes all that.
I did some experiments the last few days with a program that I created
called "xify". What it does is to read a C file, and to every
identifier it finds, including macro names, C-level identifiers,
include file names, etc, it prepends the sequence "x_". The logic is a
bit convoluted since it has to leave keywords alone and it has to
translate types so that "unsigned int" becomes "x_unsigned_int" which
I can define with a typedef, and so on. Ancient C constructs like
"register i;" were rather problematic, but I have got a satisfactory
prototype system now.
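A toy sketch of the core transformation (my illustration, not the actual xify source; it ignores macros, include names, and the multi-word-type and "register i;" cases described above):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Toy "xify": prefix every identifier in a line of C with "x_",
   leaving keywords and numbers alone.  The real tool also rewrites
   macro names, include file names, and multi-word types such as
   "unsigned int" -> "x_unsigned_int"; none of that is attempted here. */
static const char *keywords[] = { "if", "return", "register", "while", 0 };

static int is_keyword(const char *w)
{
    int i;
    for (i = 0; keywords[i] != 0; i++)
        if (strcmp(w, keywords[i]) == 0)
            return 1;
    return 0;
}

void xify_line(const char *in, char *out)
{
    char word[64];            /* toy limit: identifiers under 64 chars */
    int wi = 0;
    for (;; in++) {
        if (isalnum((unsigned char)*in) || *in == '_') {
            word[wi++] = *in;
        } else {
            if (wi > 0) {     /* flush the accumulated identifier */
                word[wi] = '\0';
                if (!is_keyword(word) && !isdigit((unsigned char)word[0]))
                    out += sprintf(out, "x_");
                out += sprintf(out, "%s", word);
                wi = 0;
            }
            *out++ = *in;     /* copy the non-identifier character */
            if (*in == '\0')
                break;
        }
    }
}
```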
I also decided to focus on 4.3BSD rather than 2.11BSD, since by this
stage I know the internals and the build system extremely intimately,
and I'm aware of quite a lot of inconsistencies which will be a lot of
work to tidy up, basically things that had been hurriedly ported from
4.3BSD while trying not to change the corresponding 2.8~2.10BSD code
too much. Also, in the build system there are quite a few different
ways of implementing "make depend", for example, and this annoys me; I
did have some ambitious projects to tidy it all up, but it's too
difficult. So a fresh start is good, and I am satisfied with the
2.11BSD project up to this moment.
So what will happen next is basically once I have "-lx_c" (the "cross"
version of the 4.3BSD C library) including the "xified" versions of
the kernel headers, then I will try to get the 4.3BSD kernel running
on top of Linux, it will be a bit like User-Mode Linux. It will use
simulated network devices like libpcap, or basically just whatever
SIMH uses, since I can easily drop in the relevant SIMH code and then
connect it up using the 4.3BSD kernel's devtab. The standalone
utilities like "restor" should then "just work". The cross toolchain
should also "just work" apart from the floating point issue, since it
was previously targeting the VAX which is little-endian, and the
wordsize issues and the library issues are taken care of by "xifying".
Very nice.
The "xifying" stuff is in a new repository 43bsd.git at my bitbucket
(user nick_d2).
cheers, Nick
> From: Paul Ruizendaal
>> On 12 Jan 2017, at 4:54 , Clem Cole wrote:
>> The specifications for what would become IP and TCP were kicking around
>> ... in the late 1970s.
The whole works actually started considerably earlier than that; the roots go
back to 1972, with the formation of the International Network Working
Group (INWG) - although that group went defunct before TCP/IP itself was
developed under DARPA's lead.
I don't recall the early history well, in detail - there's a long draft
article by Ronda Hauben which goes into it in detail, and there's also "INWG
and the Conception of the Internet: An Eyewitness Account" by Alexander
McKenzie which covers it too.
By 1977 the DARPA-led effort had produced several working prototype
implementations, and TCP/IP (originally there was only TCP, without a
separate data packet carriage layer) was up to version 3.
> My understanding is that all RFC's and IEN's were available to all legit
> users of the Arpanet.
Yes and no. The earliest distribution mechanism (for the initial NCP/ARPANet
work) was hardcopy (you can't distribute things over the 'net before you have
it working :-), and in fact until a recent effort to put them all online, not
all RFC's were available in machine-readable form. (I think some IEN's still
aren't.) So for many of them, if you wanted a copy, you had to have someone at
ISI make a photocopy (although I think they stocked them early on) and
physically mail it to you!
> Apparently nobody had the idea to put all RFC's in a directory and give
> FTP access to it.
I honestly don't recall when that happened; it does seem obvious in
retrospect! Most of us were creating documents in online text systems, and it
would have been trivial to make them available in machine-readable form. Old
habits die hard, I guess... :-)
> I think I should put a question out about this, over on the internet
> history mailing list.
Yes, good idea.
> As an aside: IMHO, conceptually the difference between NCP and TCP
> wasn't all that big.
Depends. Yes, the service provided to the _clients_ was very similar (which
can be seen in how similar the NCP and TCP versions of thing like TELNET, FTP,
etc were), but internally, they are very different.
> In my current understanding the big difference was that NCP assumes
> in-order, reliable delivery of packets ... and that TCP allows for
> unreliable links.
Yes, that's pretty accurate (but it does mean that there are _a lot_ of
differences internally - re-transmissions, etc). One other important
difference is that there's no flow control in the underlying network
(something that took years to understand and deal with properly).
> yes, these concepts were kicking around for over a decade in academia
> before BSD.
TCP/IP was the product of a large, well-organized, DARPA-funded and -led
effort which involved industry, academic and government players (the first
two, for the most part, DARPA-funded). So I wouldn't really call it an
'academic' project.
Noel
Thanks, Warren, for saving me the trouble of sorting out the confusion. I am sorry I started it in the first place.
On Tue, Jan 10, 2017 at 11:20 AM, Joerg Schilling <schily(a)schily.net <mailto:schily@schily.net>> wrote:
> …. Note that
> this list is very similar to that in the early part of his book on System V
> internals.
Having just removed the dust from my old copy of TMGE, it is interesting that the list I wrote here is very similar to what I wrote back in 1993. Just goes to show, Alzheimer’s hasn’t got me yet ;)
Ok, the story so far. Berny wrote:
Here's the breakdown of SVR4 kernel lineage as I recall it. ...
From SunOS:
vnodes
VFS
Dan Cross wrote:
> VFSSW <=== NO, this is from SunOS-4
Surely Berny meant the file system switch here, which could have come
from early system V, but originated in research Unix (8th edition?).
Joerg Schilling wrote:
It is rather a part of the VFS interface that has first been completed
with SunOS-3.0 in late 1985.
And this is where the confusion starts. Does "It" refer to FSS or VFS?
I've just looked through some sources. The file system switch was in SysVR3:
uts/3b2/sys/mount.h:
/* Flag bits passed to the mount system call */
#define MS_RDONLY 0x1 /* read only bit */
#define MS_FSS 0x2 /* FSS (4-argument) mount */
VFS was in SunOS 4.1.4:
sys/sys/vfs.h:
struct vfssw {
char *vsw_name; /* type name string */
struct vfsops *vsw_ops; /* filesystem operations vector */
};
And VFS is in SysVR4:
uts/i386/sys/vfs.h:
typedef struct vfssw {
char *vsw_name; /* type name string */
int (*vsw_init)(); /* init routine */
struct vfsops *vsw_vfsops; /* filesystem operations vector */
long vsw_flag; /* flags */
} vfssw_t;
Interestingly, the "filesystem operations vector" comment also occurs
in FreeBSD 5.3, NetBSD-5.0.2 and OpenBSD-4.6. Look for vector here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=FreeBSD-5.3/sys/sys/mount.h
http://minnie.tuhs.org/cgi-bin/utree.pl?file=NetBSD-5.0.2/sys/compat/sys/mo…
http://minnie.tuhs.org/cgi-bin/utree.pl?file=OpenBSD-4.6/sys/sys/mount.h
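For readers unfamiliar with the mechanism being traced here: all of these vfssw declarations implement the same pattern, a switch table mapping a filesystem type name to its "filesystem operations vector". A toy sketch of that pattern (mine, not code from any of the systems above):

```c
#include <string.h>

/* A filesystem operations vector: one function pointer per operation. */
struct vfsops {
    int (*vfs_mount)(void);
};

static int ufs_mount(void) { return 0; }   /* stub "ufs" mount routine */

static struct vfsops ufs_vfsops = { ufs_mount };

/* The switch itself: a name-to-ops table, terminated by a null name. */
static struct vfssw {
    char *vsw_name;
    struct vfsops *vsw_vfsops;
} vfssw[] = {
    { "ufs", &ufs_vfsops },
    { 0, 0 },
};

/* Look a filesystem type up by name, as a 4-argument mount would. */
struct vfsops *getfstype(const char *name)
{
    struct vfssw *sw;
    for (sw = vfssw; sw->vsw_name != 0; sw++)
        if (strcmp(sw->vsw_name, name) == 0)
            return sw->vsw_vfsops;
    return 0;
}
```

The kernel then calls through the returned vector without knowing which filesystem it is talking to, which is exactly what lets FSS/VFS support multiple filesystem types at once.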
Larry wrote:
System Vr3 had something called the file system switch which is what
Berny is talking about. SunOS had virtual file system layer (VFS) and
that would be one of things ported to SVr4.
which is consistent with everybody else.
So now that we have consistency, let's move on.
Cheers, Warren
I have been trawling these many threads of late with interest. So I thought I should chip in.
"SVr4 was not based on SunOS, although it incorporated
many of the best features of SunOS 4.x”.
IMHO this statement is almost true (there were many great features from BSD too!).
SunOS 5.0 was ported from SVR4 in early 1991 and released as Solaris 2.0 in 1992 for desktop only.
Back in the late 80s, Sun and AT&T partnered development efforts so it’s no surprise that SunOS morphed into SVR4. Indeed it was Sun and AT&T who were the founding members of Unix International…with an aim to provide direction and unification of SVR4.
I remember when I went to work for Sun (much later in 2003), and found that the code base was remarkably similar to the SVR4 code (if not exact in many areas).
Here’s the breakdown of SVR4 kernel lineage as I recall it. I am pretty sure this is correct. But I am sure many of you will put me right if I am wrong ;)
From BSD:
TCP/IP
C Shell
Sockets
Process groups and job Control
Some signals
FFS in UFS guise
Multi groups/file ownership
Some system calls
COFF
From SunOS:
vnodes
VFS
VM
mmap
LWP and kernel threads
/proc
Dynamic linking extensions
NFS
RPC
XDR
From SVR3:
.so libs
revamped signals and trampoline code
VFSSW
RFS
STREAMS and TLI
IPC (Shared memory, Message queues, semaphores)
Additional features in SVR4 from USL:
new boot process.
ksh
real time extensions
Service access facility
Enhancements to STREAMS
ELF
> I wonder if >pdd ... was in any way any inspiration for /proc?
That may have been a bit too cryptic. "pdd" ('process directory directory')
was a top-level directory in the Multics filesystem which contained a
directory for each process active in the system; each directory contained data
(in segments - roughly, 'files', but Multics didn't have files because it was
a single-level store system) associated with the process, such as its kernel-
and user-mode (effectively - technically, ring-0 and ring-4) stacks, etc.
So if a process was sitting in a system call, you could go into the right
directory in >pdd and look at its kernel stack and see the sequence of
procedure calls (with arguments) that had led it to the point where it
blocked. Etc, etc.
Noel
>Date: Wed, 11 Jan 2017 02:16:26 +0000
>From: Steve Simon <steve(a)quintile.net>
>To: tuhs(a)tuhs.org
>Subject: [TUHS] Rje / sna networking
>Message-ID: <05DECAED-065A-4520-805A-D86CF1596A01(a)quintile.net>
>Content-Type: text/plain; charset=us-ascii
>Casual interest,
>Anyone ever used RJE from SYS-III - IBM mainframe remote job entry
>System? I started on Edition 7 on an interdata so I am (pretty much) too young
>for that era, unless I am fooling myself.
>-Steve
In the 90s DEC in Europe had a number of products on top of SCO UNIX
3.2V4.2 called DECadvantage (from the German part of former Philips
Information Systems). Included were an SNA environment with 3270
display/print, 3770/RJE, and APPC. I've used RJE for downloading daily
reports in one of the banks here in Thailand. Long time ago though.
Still have various sample scripts I put together at that time.
> /proc came from research
Indeed it did.
> /proc was done by Roger at AT&T (maybe USL). I recall him telling me that
> he was not the original author though and that it came from PWB.
Roger Faulkner's /proc article, recently cited in tuhs, begins with
acknowledgment to Tom Killian, who originated /proc in research.
(That was Tom's spectacular debut when he switched from high-energy
physics, at Argonne IIRC, to CS at Bell Labs.)
doug
I asked:
> I wonder where the inspiration for the Unix job control came from? In
> particular, I can't help but notice that Control-Z does something very
> similar in the PDP-10 Incompatible Timesharing System.
Jim Kulp answered:
> The ITS capabilities were certainly part of the inspiration. It was a
> combination of frustrations and gaps in UNIX with some of those
> features found in ITS that resulted in the final package of features.
Casual interest,
Anyone ever used RJE from SYS-III - IBM mainframe remote job entry
System? I started on Edition 7 on an interdata so I am (pretty much) too young
for that era, unless I am fooling myself.
-Steve
Doug McIlroy:
There was some pushback which resulted in the strange compromise
of if-fi, case-esac, do-done. Alas, the details have slipped from
memory. Help, scj?
====
do-od would have required renaming the long-tenured od(1).
I remember a tale--possibly chat in the UNIX Room at one point in
the latter 1980s--that Steve tried and tried and tried to convince
Ken to rename od, in the name of symmetry and elegance. Ken simply
said no, as many times as it took. I don't remember who I heard this
from; anyone still in touch with Ken who can ask him?
Norman Wilson
Toronto ON
Reputed origins of SVR4:
> From SunOS:
> ...
> NFS
And, sadly, NFS is still with us, having somehow upstaged Peter
Weinberger's RFS (R for remote) that appeared at the same time.
NFS allows one to add computers to a file system, but not to
combine the file systems of multiple computers, as RFS did
by mapping uids: NFS:RFS::LAN:WAN.
Doug
> From: Chet Ramey
> /proc was done by Roger at AT&T (maybe USL). I recall him telling me
> that he was not the original author though and that it came from PWB.
> The original implementation was done by Tom Killian for 8th Edition.
I wonder if >pdd (which dates to somewhere in the mid-60's; I'm too lazy to
look up the exact date) was in any way any inspiration for /proc?
Noel
Wasn't ksh SVR4... It was in the Xelos sources @Concurrent Computer which was an SVR2 port. Xelos didn't do paging but the source in 87 or 88 or so had ksh in it.
I built it for SVR4 on my Xelos 3230 back in the day.
Bill
Sent from my android device.
> On 10 Jan 2017, at 16:16, pechter(a)gmail.com wrote:
>
> Wasn't msg SVR4... It was in the Xelos sources @Concurrent Computer which was an SVR2 port. Xelos didn't do paging but the source in 87 or 88 or so had ksh in it.
>
> I. built it for SVR4 on my Xelos 3230 back in the day.
msgs goes back as far as SVR2.
>
> Bill
>
> Sent from my android device.
>
> -----Original Message-----
> From: Berny Goodheart <berny(a)berwynlodge.com>
> To: tuhs(a)minnie.tuhs.org
> Sent: Tue, 10 Jan 2017 10:12
> Subject: [TUHS] the guy who brought up SVr4 on Sun machines
> From: Berny Goodheart
> From BSD:
> Process groups and job Control
The intermediate between V6 and V7 which ran on several MIT machines (I think
it was an early PWB - I should retrieve it and make it available to the Unix
archive, it's an interesting system) had 'process groups', but I don't know if
the concept was the same as BSD process groups.
Noel
> From: Tony Finch
> The other classic of Algol 68 literature
No roundup of classic Algol 68 literature would be complete without Hoare's
"The Emperor's Old Clothes".
I assume everyone here has read it, but on the off-chance there is someone
who hasn't, a copy is here:
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
and I cannot recommend it more highly.
Noel
On Wed, Jan 4, 2017 at 11:17 AM, ron minnich <rminnich(a)gmail.com> wrote:
> Larry, had Sun open sourced SunOS, as you fought so hard to make happen,
> Linux might not have happened as it did. SunOS was really good. Chalk up
> another win for ATT!
>
FWIW: I disagree. For details look at my discussion of rewriting Linux
in RUST
<https://www.quora.com/Would-it-be-possible-advantageous-to-rewrite-the-Linu…>
on quora. But a quick point is this .... Linux originally took off (and was
successful) not because of the GPL, but in spite of it; later the GPL would
help it. But it was not the GPL per se that made Linux vs BSD vs SunOS et
al.
What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a
lot of hackers (myself included) thought the case was about *copyright*.
It was not; it was about *trade secret* and the ideas around UNIX - *i.e.*
that folks like us were "mentally contaminated" with the AT&T Intellectual
Property.
When the case came, folks like me who were running 386BSD (which would
later beget FreeBSD et al) got scared. At that time, *BSD (and SunOS)
were much farther along in development and stability. But .... many of
us thought Linux would insulate us from losing UNIX on cheap HW because
there was no AT&T copyrighted code in it.
AT&T had won the case, *all UNIX-like systems* would have had to be removed
from the market in the USA and EU [NATO-allies for sure].
That said, the fact the *BSD and Linux were in the wild, would have made it
hard to enforce and at a "Free" (as in beer) price it may have been hard to
make it stick. But it was a misunderstanding of a legal thing that
made Linux "valuable" to us, not the implementation.
If SunOS had been available, it would not have been any different. It
would have been thought of as based on the AT&T IP, both trade secret and
original copyright.
Clem
>Date: Mon, 09 Jan 2017 08:45:47 -0700
>From: arnold(a)skeeve.com
>To: rochkind(a)basepath.com
>Cc: tuhs(a)tuhs.org
>Subject: Re: [TUHS] Unix stories, Stephen Bourne and IF-FI in C code
>Message-ID: <201701091545.v09FjlXE027448(a)freefriends.org>
>Content-Type: text/plain; charset=us-ascii
>
>I remember the Bournegol well; I did some hacking on the BSD shell.
>
>In general, it wasn't too unusual for people from Pascal backgrounds to
>do similar things, e.g.
>
> #define repeat do {
> #define until(cond) } while (! (cond))
>
>(I remember for me personally that do...while sure looked weird for
>my first few years of C programming. :-)
>
>(Also, I would not recommend doing that; I'm just noting that
>people often did do stuff like that.)
When the Philips computer division worked on MPX (multi-processor
UNIX) in the late '80s, they had an include file 'syntax.h' which did a
lot of that Pascal-like mapping.
Here's part of it:
/* For a full explanation see the file syntax.help */
#define IF if(
#define THEN ){
#define ELSIF }else if(
#define ELSE }else{
#define ENDIF }
#define NOT !
#define AND &&
#define OR ||
#define CASE switch(
#define OF ){
#define ENDCASE break;}
#define WHEN break;case
#define CWHEN case
#define IMPL :
#define COR :case
#define BREAK break
#define WHENOTHERS break;default
#define CWHENOTHERS default
#define SELECT do{{
#define SWHEN }if(
#define SIMPL ){
#define ENDSELECT }}while(0)
#define SCOPE {
#define ENDSCOPE }
#define BLOCK {
#define ENDBLOCK }
#define FOREVER for(;;
#define FOR for(
#define SKIP
#define COND ;
#define STEP ;
#define LOOP ){
#define ENDLOOP }
#define NULLOOP ){}
#define WHILE while(
#define DO do{
#define UNTIL }while(!(
#define ENDDO ))
#define EXITWHEN(e) if(e)break
#define CONTINUE continue
#define RETURN return
#define GOTO goto
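For a taste of what code written against syntax.h came out like, here is a small compilable sketch using a handful of those definitions (count_even is my invention, not actual MPX code):

```c
/* A few of the syntax.h definitions from the post, enough for the
 * sketch below to compile. */
#define IF      if(
#define THEN    ){
#define ELSE    }else{
#define ENDIF   }
#define FOR     for(
#define COND    ;
#define STEP    ;
#define LOOP    ){
#define ENDLOOP }

/* Count the even values in v[0..n-1].  After preprocessing this is an
 * ordinary for loop wrapped around an if/else. */
int count_even(const int *v, int n)
{
    int i, even = 0;
    FOR i = 0 COND i < n STEP i++ LOOP
        IF v[i] % 2 == 0 THEN
            even++;
        ELSE
            ;               /* odd: nothing to do */
        ENDIF
    ENDLOOP
    return even;
}
```

Note that a stray ENDIF or LOOP produces brace-mismatch errors pages away from the mistake, which is one reason this style never caught on.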
I was in building 5 at Sun when they were switching to SVr4 which became
Solaris 2.0 (I think). Building 5 housed the kernel people at Sun.
John Pope was the poor bastard who got stuck with doing the bring up.
Everyone hated him for doing it, we all wanted it to fail.
I was busting my ass on something in SunOS 4.x and I was there late into
the night, frequently to around midnight or beyond. So was John.
We became close friends. We both moved to San Francisco and ended up
commuting to Mountain View together (and hit the bars together).
John was just at my place, here's a few pictures for those who might
be interested. He's a great guy, got stuck with a shitty job.
http://www.mcvoy.com/lm/2016-pope/
--lm
> On Jan 9, 2017, at 6:00 PM,"Steve Johnson" <scj(a)yaccman.com> wrote:
>
> I can certainly confirm that Steve Bourne not only knew Algol 68, he
> was quite an evangelist for it.
Bourne had led the Algol68C development team at Cambridge until 1975. See http://www.softwarepreservation.org/projects/ALGOL/algol68impl/#Algol68C .
> if-fi and case-esac notation from Algol came to shell [via Steve Bourne]
There was some pushback which resulted in the strange compromise
of if-fi, case-esac, do-done. Alas, the details have slipped from
memory. Help, scj?
doug
All, I'm not sure if you know of Walter Müller's work at implementing
a PDP-11 on FPGAs: https://wfjm.github.io/home/w11/. He sent me this e-mail
with an excellent source code cross-reference of the 2.11BSD kernel:
P.S.: long time ago I wrote a source code viewer for 2.11BSD and OSes with
a similar file and directory layout. I made a few tune-ups lately
and wrote some sort of introduction, see
https://wfjm.github.io/home/ouxr/
Might be helpful for you in case you inspect 2.11BSD source code.
Cheers all, Warren
I was amused this morning to see a post on the tack-devel(a)lists.sourceforge.net
mailing list (TACK = The Amsterdam Compiler Kit) from David Given,
who writes:
>> ...
>> ... I took some time off from thinking about register allocation (ugh)
>> and ported the ABC B compiler to the ACK. It's now integrated into the
>> system and everything.
>>
>> B is Ken Thompson and Dennis Ritchie's untyped programming language
>> which later acquired types and turned into K&R C. Everything's a machine
>> word, and pointers are *word* address, not byte addresses.
>>
>> The port's a bit clunky and doesn't generate good code, but it works and
>> it passes its own tests. It runs on all supported backends. There's not
>> much standard library, though.
>>
>> Example:
>>
>> https://github.com/davidgiven/ack/blob/default/examples/hilo.b
>>
>> (Also, in the process it found lots of bugs in the PowerPC mcg backend,
>> now fixed, as well as several subtle bugs in the PowerPC ncg backend; so
>> that's good. I'm pretty sure that this is the only B compiler for the
>> PowerPC in existence.)
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Date: Fri, 6 Jan 2017 20:09:18 -0700
> From: Warner Losh <imp(a)bsdimp.com>
> To: "Greg 'groggy' Lehey" <grog(a)lemis.com>
> Cc: Clem Cole <clemc(a)ccc.com>, The Eunuchs Hysterical Society
> <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] SunOS vs Linux
>
>> On Friday, 6 January 2017 at 9:27:36 -0500, Clem Cole wrote:
>>
>> I think that if SunOS 4 had been released to the world at the right
>> time, the free BSDs wouldn't have happened in the way they did either;
>> they would have evolved intimately coupled with SunOS.
>
> With the right license (BSD), I'd go so far as to saying there'd be no
> BSD 4.4, or if there was, it would have been rebased from the SunOS
> base... There were discussions between CSRG and Sun about Sun donating
> its reworked VM and VFS to Berkeley to replace the Mach VM that was
> in there... Don't know the scope of these talks, or if they included
> any of the dozens of other areas that Sun improved from its BSD 4.3
> base... The talks fell apart over the value of the code, if the rumors
> I've heard are correct.
>
> Warner
Since I was involved with the negotiations with Sun, I can speak
directly to this discussion. The 4.2BSD VM was based on the
implementation done by Ozalp Babaoglu that was incorporated into
the BSD kernel by Bill Joy. It was very VAX centric and was not
able to handle shared read-write mappings.
Before Bill Joy left Berkeley for Sun, he wrote up the API
specification for the mmap interface but did not finish an
implementation. At Sun, he was very involved in the implementation
though did not write much (if any) of the code for the SunOS VM.
The original plan was to ship 4.2BSD with an mmap implementation,
but with Bill's departure that did not happen. So, it fell to me
to sort out how to get it into 4.3BSD. CSRG did not have the
resources to do it from scratch (there were only three of us).
So, I researched existing implementations and it came down to
the SunOS and MACH implementations. The obvious choice was SunOS,
so I approached Sun about contributing their implementation to
Berkeley. We had had a lot of cooperation about exchanging bug
fixes, so this is not as crazy as it seems.
The Sun engineers were all for it, and convinced their managers
to push my request up the hierarchy. Skipping over lots of drama
it eventually got to Scott McNealy who was dubious, but eventually
bought into the idea and cleared it. At that point it went to the
Sun lawyers to draw up the paperwork. The lawyers came back and
said that "giving away SunOS technology could lead to a stockholder
lawsuit concerning the giving away of stockholder assets." End of
discussion. We had to go with MACH.
Kirk McKusick
This is a history list, and I'm going to try to answer this to give some
historical context and hopefully end what is otherwise a thread that I'm not
sure adds much to the history of UNIX one way or the other. Some people
love the GPL, some do not. I'll gladly take some of this off list. But I
would like to see us not devolve TUHS into a my-favorite-license or favorite
unix discussion.
On Thu, Jan 5, 2017 at 9:09 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> That makes sense to me, the GPL was hated inside of Sun, it was considered
>
> a virus. The idea that you used a tiny bit of GPLed code and then
> everything
>
> else is GPLed was viewed as highway robbery.
>
I'm not a lawyer, nor do I play one. I am speaking for myself, not Intel, here so
take what I have to say with that in mind. Note I do teach the required
"GPL and Copyright Course" for all Intel SW folks, so I have had some
training and I do have some opinions. I have also lived this through 5 start-ups
and a number of large firms, both inside and as a consultant.
Basically, history has shown that both viral and non-viral licenses
have their place. Before I worked at Intel I admit I was pretty much
negative on the GPL "virus" and I >>mostly<< still am. IMHO, it's done
more damage than it has helped, and the CMU/MIT/BSD style "dead fish"
license has done far more positive *for the industry at large* than the GPL
in the long run. But I admit, I'm a capitalist and I see the value in
letting someone make some profit from their work. All I have seen the
virus do in the long run is that firms have lawyers figure out how to
deal with it.
There is a lot of misinformation about the term "open source" .... open
source does not mean "free" as in beer. It means available and "open" to
be read and modified. Unix has >>always<< been open and available - which
is why there are so many versions of Unix (and Linux). The question was
the *price* of the license and who had it. Most hackers actually did have
access, as this list shows -- we had it from our universities for little money
or from our employers for much more. The GPL and the virus it has does not
protect anyone from this diversity. In fact, in some ways it makes it harder.
The diversity comes from the market place. The problem is that in the
computer business, the diversity can be bad and keeping things "my way" is
better for the owner of the gold (be it a firm like IBM, DEC, or Microsoft)
or a technology like Linux.
What the GPL is >>supposed<< to do is ensure that secrets are not locked up and
ensure that all can see and share in the ideas. This is a great idea in
theory; the risk is that if you have IP that you want to somehow protect,
as Larry suggests, the virus can put your IP in danger. To the credit of
firms like Intel, GE, IBM et al, they have learned how to try to firewall
their >>important<< IP with processes and procedures to protect it (which
is exactly what rms did not want to have happen, BTW). [In my experience,
it made the locks even tighter than before], although it has made some
things more available. I know this rankles some folks. There are
positives and negatives to each way of doing things.
IMO, history has shown that it has been the economics of >>Clayton
Christensen-style disruption<<, not a license, that changed things in our
industry. When the price of UNIX in any version (Linux, *BSD, SunOS,
Minix, etc...) fell and the low-cost HW came to be, "enough" hackers did
something. Different legal events pushed one version ahead of others, and
the technology had to be "good enough" -- but it was economics, not
license, that made the difference. License played into the economics for
sure, but in the end, it was free (as in beer) vs $s that made it all work.
Having lived through the completely open, completely closed, GPLed and
dead-fish worlds of the computer industry, I'm not sure if we are really any
farther ahead in practice. We just have to be careful, and more lawyers
make more money - but that's me being a cynic.
Anyway, I hope we can keep from devolving away from real history.
Clem
One significant area of non-compliance with unix conventions is its
case-insensitive filesystem (HFS and variants like HFS+, if I recall). I think
this is partly for historical reasons, to make Classic / MacOS9 emulation
easier during the transition. But I could never understand why they did
this; they could have put case insensitivity in their shell and apps
without breaking the filesystem.
Anyway despite its being unix I can't really see it gaining much traction
with serious unix users (when did you last get a 404 from a major website
with a tagline "Apache running on MacOSX"?), the MacPorts and Fink repos
are a really bad and patchy implementation of something like
apt/ctan/cpan/etc (I think possibly at least one of those repos builds from
source with attendant advantages/problems), it does not support X properly,
the dylibs are non standard, everything is a bit broken compared with Linux
(or FreeBSD) and Apple does not really have the motivation or the manpower
to create a modern, clean system like unix users expect.
Open sourcing Darwin was supposed to open it up to user contributed
enhancements but Apple was never serious about this, it was just a sop to
people who claimed (correctly) that Apple was riding on the back of open
source and giving nothing back to the community. Since Apple refused to
release any important code like drivers or bootstrap routines the Darwin
release was never really any more useable than something like 4.4BSDLite.
People who loved their Macs and loved unix and dreamed of someday running
the Mac UI on top of a proper unix, put significant effort into supplying
the missing pieces but were rebuffed by Apple at every turn, Apple would
constantly make new releases with even more missing pieces and breakage and
eventually stopped making any open source releases at all, leaving a lot of
people crushed and very bitter.
As for me I got on the Apple bandwagon briefly in 2005 or so, at that time
I was experimenting with RedHat but my primary development machines were
Windows 98 and 2000 (occasionally XP). My assessment was RedHat was not
ready for desktop use, since I had trouble with stuff like printers and
scanners that required me to stay with Windows (actually this was probably
surmountable but I did not have the knowledge or really the desire to spend
time debugging it). That's why I selected Apple as a "compromise unix"
which should connect to my devices easily. I got enthusiastic and spent a
good $4k on new hardware. Shortly afterwards Apple announced the Intel
transition so I realized my brand new gear would soon be obsolete and
unsupported. I was still pretty happy though. Two things took the shine off
eventually (a) I spilt champagne on my machine, tore it down to discover my
beautiful and elegant and spare (on the outside) machine was a horrible
hodgepodge of strange piggyback PCBs and third party gear (on the inside),
this apparently happened because options like the backlit keyboard had
become standard equipment at some point but Apple had never redesigned them
into the motherboard, the whole thing was horribly complicated and fragile
and never worked well after the teardown (b) I got seriously into FreeBSD
and Linux and soon discovered the shortcomings of the Mac as a serious
development machine, everything was just slightly incompatible leading to
time waste.
Happily matters have improved a lot. Lately I was setting up some Windows 7
and 10 machines for my wife to use MS Office on for her uni work. Both had
serious driver issues like "The graphics card has crashed and recovered".
And on the Windows 10 machine, despite it being BRAND NEW out of the box
and manufacturer preloaded, the wifi also did not work, constantly crashed
requiring a reboot. Windows Update did not fix these problems. Downloading
and trying various updated drivers from the manufacturer's website seems to
have fixed them for now, except on the Windows 7 machine, where the issue is
noted and listed as "won't fix" because the graphics card is out of date; the
fixed driver won't load on this machine. Given this seems to be the landscape
even for people who are happy to spend the $$ on the official manufacturer
supported Windows based solutions, Linux looks pretty easy to install and
use by comparison. Not problem free, but may have fewer problems and easier
to fix problems.
It appears to me that with the growing complexity of the hardware due to
the millions of compatibility layers and ad hoc protocols built into it,
the job of the manufacturers and official OS or driver writers gets harder
and harder, whereas the crowdsourced principle of open source shows its
value since the gear is better tested in a wider variety of realistic
situations.
cheers, Nick
On Jan 1, 2017 9:46 AM, "David" <david(a)kdbarto.org> wrote:
> On Dec 31, 2016, at 8:58 AM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)tuhs.org
> Subject: Re: [TUHS] Historic Linux versions not on kernel.org
> Message-ID: <20161231111339.GK576(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I might be colored by the fact that I'm running Linux myself, but I'd
> say that those are almost certainly worth preserving somehow,
> somewhere. Linux and OS X are the Unix-like systems people are most
> likely to come in contact with these days
MacOS X is a certified Unix (tm) OS. Not Unix-Like.
http://www.opengroup.org/openbrand/register/apple.htm
It has been so since 10.0. Since 10.5 (Leopard) it has been so noted on the
above Open Group page. The Open Group only lists the most recent release
however.
The Tech Brief for 10.7 (http://images.apple.com/media/us/osx/2012/docs/OSX_
for_UNIX_Users_TB_July2011.pdf) also notes the compliance.
David
I left Digital in 1994, so I don’t know much about the later evolution of the Alphaservers, but 1998 would have been about right for an EV-56 (EV-5 shrink) or EV-6. There’s a Wikipedia article about all the different systems but most of the dates are missing.
The white label parts are all PAL22V10-15s. The 8 square chips are cache SRAMs, and most of the SOIC jellybeans are bus transceivers to connect the CPU to RAM and I/O. The PC-derived stuff is in the back corner. There are 16 DIMM slots to make two ranks of 64-bit RAM out of 8-bit DIMMs. We usually ran with a SCSI card, an ethernet, and an 8514 graphics card plugged into the riser.
-L
> On 2017, Jan 5, at 5:55 PM, ron minnich <rminnich(a)gmail.com> wrote:
>
> What version of this would I have bought ca. 1998? I had 16 of some kind of Alpha nodes in AMD sockets, interconnected with SCI for encoding videos. I ended up writing and releasing what I think were the first open source drivers for SCI -- it took a long time to get Dolphin to let me release them.
>
> The DIPs with white labels -- are those PALs or somethin? Or are the labels just to cover up part names :-)
>
> On Thu, Jan 5, 2017 at 2:39 PM Lawrence Stewart <lstewart2(a)gmail.com <mailto:lstewart2@gmail.com>> wrote:
> Alphas in PC boxes! I dug around in the basement and found my Beta (photo attached).
>
> This was from 1992 or 1993 I think. This is an EV-3 or EV-4 in a low profile PC box using pc peripherals. Dave Conroy designed the hardware, I did the console ROMS (BIOS equivalent) and X server, and Tom Levergood ported OSF-1. A joint project of DEC Semiconductor Engineering and the DEC Cambridge Research Lab. I think about 20 were built, and the idea kickstarted a line of low end Alphaservers.
>
> This was a typical Conroy minimalist design, crunching the off-chip caches, PC junk I/O, ISA bus, and 64 MBytes of RAM into this little space. I think one gate array would replace about half of the chips.
>
> -L
>
>
> <IMG_0939.JPG>
All, following on from the "lost ports" thread, I might remind you all that
I'm keeping a hidden archive of Unix material which cannot be made public
due to copyright and other reasons. The goal is to ensure that these bits
don't > /dev/null, even if we can't (yet) do anything with them.
If you have anything that could be added to the archive, please let me know.
My rules are: I don't divulge what's in the archive, nor who I got stuff
from. There have been very few exceptions. I have sent copies of the archive
to two important historical computer organisations who must abide by the
same rules. I think I've had one or two individuals who were desperate to
get software to run on their old kit, and I've "loaned" some bits to them.
Anyway, that's it. If that seems reasonable to you, and you want an off-site
backup of your bits, I'm happy to look after them for you.
Cheers, Warren
On 4 January 2017 at 13:51, Steve Johnson <scj(a)yaccman.com> wrote (in part):
> These rules provided rich fodder for Lint, when it came along, [...]
All this lint talk caused me to reread your Lint article but no
history there. Was there a specific incident that begat lint?
N.
So there are a few ports I know of that I wonder if they ever made it back
into that great github repo. I don't think they did.
harris
gould
That weird BBN 20-bit machine
(20 bits? true story: 5 4-bit modules fit in a 19" rack. So 20 bits)
Alpha port (Tru64)
Precision Architecture
Unix port to Cray vector machines
others? What's the list of "lost machines" look like? Would companies
consider a donation, do you think?
If that Cray port is of any interest I have a thread I can push on maybe.
but another true story: I visited DEC in 2000 or so, as LANL was about to
spend about $120M on an Alpha system. The question came up about the SRM
firmware for Alpha. As it was described to me, it was written in BLISS and
the only machine left that could build it was an 11/750, "somewhere in the
basement, man, we haven't turned that thing on in years". I suspect there's
a lot of these containing oxide oersteds of interest.
ron
(Yes, a repeat, but this momentous event only happens every few years.)
The International Earth Rotation Service has announced that there will be
a Leap Second inserted at 23:59:59 UTC on the 31st December, due to the
earth slowly slowing down. It's fun to listen to how the time beeps
handle it; will your GPS clock display 23:59:60, or will it go nuts
(because the programmer was an idiot)?
I actually have a recording of the last one, over at
www.horsfall.org/leapsecond.webm (yes, I am a tragic geek),
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
>Date: Wed, 4 Jan 2017 16:41:07 -0500
>From: "Ron Natalie" <ron(a)ronnatalie.com>
>To: "'ron minnich'" <rminnich(a)gmail.com>, <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] lost ports
>Message-ID: <01c001d266d3$42294820$c67bd860$(a)ronnatalie.com>
>Content-Type: text/plain; charset="utf-8"
...
>I did kernel work on the PA for HP also worked on their X server (did a few other X server >over the years).
>The hard part would be finding anybody from these companies who could even remember
>they made computers let alone had UNIX software.
I worked for the computer division in Philips Electronics, DEC,
Compaq, HP, HPE and still remember some of it :-)
I wasn't involved in OS development, but in testing, turnover to
National Sales Organisations, etc. Even now at some customer side I
still have a few aDEC400xP servers from 1992 running SCO UNIX 3.2V4.2
(last update 1999). Also a few AlphaServers with Digital UNIX, Tru64;
finally some Itanium servers with HP-UX 11.23/11.31.
Especially the big/small endian issue gave our customer (and therefore
myself) a few headaches. Imagine getting a chunk of shared memory and
casting pointers assuming the 'system' takes care of alignment. Big
surprise for the customer moving from Tru64 to HP-UX.
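A few lines of C show the byte-order surprise in miniature (read_native and read_be32 are invented names for this sketch, not customer code):

```c
#include <stdint.h>
#include <string.h>

/* The bytes 01 02 03 04 read back as 0x04030201 on a little-endian
 * host (e.g. Alpha/Tru64) and 0x01020304 on a big-endian one (e.g.
 * PA-RISC or Itanium HP-UX) -- the surprise you hit when raw memory
 * contents are shared between hosts of different byte order. */
uint32_t read_native(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);    /* host byte order, whatever it is */
    return v;
}

/* The portable fix: pick the byte order explicitly, byte by byte. */
uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

read_be32 gives the same answer everywhere; read_native is where the porting headaches come from.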
I just went looking at the v6 source to confirm a memory, namely that cpp
was only invoked if a # was the first character in the file. Hence, this:
https://github.com/dspinellis/unix-history-repo/blob/Research-V6-Snapshot-D…
People occasionally forgot this, and hilarity ensued.
Now I'm curious. Anyone know when that convention ended?
ron
Goodness, I go to sleep, wake up 8 hours later and there's 50 messages in
the TUHS mailing list. Some of these do relate to the history of Unix, but
some are getting quite off-topic.
So, can I get you all to just pause before you send in a reply and ask:
is this really relevant to the history of Unix, and does it contribute
in a meaningful way to the conversation.
Looks like we lost Armando, that's a real shame.
Cheers, Warren
Peter Salus writes "The other innovation present in the Third Edition
was the pipe" ("A Quarter Century of Unix", p. 50). Yet, in the
corresponding sys/ken/sysent.c, the pipe system call seems to be a stump.
1, &fpe, /* 40 = fpe */
0, &dup, /* 41 = dup */
0, &nosys, /* 42 = pipe */
1, &times, /* 43 = times */
On the other hand, the Fourth Edition manual documents the pipe system
call, the construction of pipelines through the shell, and the use of wc
as a filter (without an input file, as was required in the Second Edition).
Would it therefore be correct to say that pipes were introduced in the
Fourth rather than the Third Edition?
>
> What made Linux happen was the BSDi/UCB vs AT&T case. At the time, a
> lot of hackers (myself included) thought the case was about *copyright*.
> It was not; it was about *trade secret* and the ideas around UNIX. *i.e.*
> folks like us were "mentally contaminated" with the AT&T Intellectual Property.
>
Wasn’t there a Usenix button with the phrase “Mentally Contaminated” on it?
I’m sure I’ve got it around here somewhere. Or is my memory suffering from
parity errors?
David
> keeping the code I work on portable between Linux and the Mac requires
> more than a bit of ‘ifdef’ hell.
Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable".
Ifdefs that adjust code for different systems are prima facie
evidence of NON-portability. I'll buy "configurable" as a descriptor
for such ifdef'ed code, but not "portable".
And, while I am venting about ifdef:
As a matter of sytle, ifdefs are global constructs. Yet they often
have local effects like an if statement. Why do we almost always write
#ifdef LINUX
linux code
#else
default unix code
#endif
instead of the much cleaner
if(LINUX)
linux code
else
default unix code
In early days the latter would have cluttered precious memory
with unreachable code, but now we have optimizing compilers
that will excise the useless branch just as effectively as cpp.
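In today's C that style might look like this minimal sketch (LINUX here is an ordinary constant a build system would wire up per target, hard-coded purely for illustration; tmp_dir is invented):

```c
/* The platform switch as an ordinary compile-time constant instead of
 * an #ifdef symbol; hard-wired here purely for illustration. */
enum { LINUX = 1 };

/* Both branches are parsed and type-checked on every platform; an
 * optimizing compiler deletes the untaken one, just as cpp would. */
const char *tmp_dir(void)
{
    if (LINUX)
        return "/tmp";          /* linux code */
    else
        return "/usr/tmp";      /* default unix code */
}
```

Unlike the #ifdef form, a syntax error in the non-Linux branch is caught even when building for Linux, which is much of the point.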
Much as the trait of overeating has been ascribed to our
hunter ancestors' need to eat fully when nature provided,
so overuse of ifdef echoes coding practices tuned to
the capabilities of bygone computing systems.
"Ifdef hell" is a fitting image for what has to be one of
Unix's least felicitous contributions to computing. Down
with ifdef!
Doug
From: "Doug McIlroy" <doug(a)cs.dartmouth.edu>
Subject:Re: [TUHS] Mac OS X is Unix
> keeping the code I work on portable between Linux and the Mac requires
> more than a bit of ‘ifdef’ hell.
| Curmudgeonly comment: I bristle at the coupling of "ifdef" and "portable".
| Ifdefs that adjust code for different systems are prima facie
| evidence of NON-portability. I'll buy "configurable" as a descriptor
| for such ifdef'ed code, but not "portable".
| <snip>
| "Ifdef hell" is a fitting image for what has to be one of
| Unix's least felicitous contributions to computing. Down
| with ifdef!
| Doug
Doug makes a very good point about ifdef hell. Though I’d claim that it isn’t even “configurable” at some level.
Several years ago I was working at Megatek, a graphics h/w vendor. We were porting the X11 suite to various new boards at the rate of about 1 a week, it seemed. Needless to say the code became such a mishmash of ifdef’s that you couldn’t figure out what some functions did any longer. You just hoped and prayed that your patch worked properly on the various hardware you were targeting and didn’t break it for anyone else. You ran the unit tests, and if they passed you pushed your change and ran and hid under a rock for a while until you were sure it was safe to come out again.
David
hi all,
For those who have been following my 2.11BSD conversion, I was working
on this in about 2005 and I might have posted about it then, and then
nothing much happened while I did a university degree and so on, but
recently I picked it up again. When I left it, I was partway through
an ambitious conversion of the BSD build system to my own design (a
file called "defs.mk" was included in all Makefiles apparently), and I
threw this out because it was too much work upfront. The important
build tools like "cc" were working, but I have since reviewed all
changes and done things differently. The result is I can now build the
C and kernel libraries and a kernel, and they work OK.
This seems like a pretty good milestone so I'm releasing the code on bitbucket.
See https://bitbucket.org/nick_d2/ for the list of my repositories,
there is another one there called "uzi" which is a related project,
but not documented, so I will write about it later; in the meantime
anyone is welcome to view the source and changelogs of it.
The 2.11BSD repository is at the following link:
https://bitbucket.org/nick_d2/211bsd
There is a detailed readme.txt in the root of repository which
explains exactly how I approached the conversion and gives build
instructions, caveats and so forth. To avoid duplication I won't post
this in the list, but I suggest people read it as a post, since it's
extremely interesting to see all the porting issues laid out together.
See
https://bitbucket.org/nick_d2/211bsd/src/27343e0e0b273c2df1de958db2ef5528cc…
Happy browsing :)
cheers, Nick
> From: Clem Cole
> You might say something like: Pipes were developed in a 3rd edition
> kernel, where there is evidence of a nascent idea (it has a name and
> there are subs for it), but the code to fully support it is lacking in
> the 3rd release. Pipes became a completed feature in the 4th edition.
To add to what others have pointed out (about the assembler and C kernels),
let me add one more data-bit. In the Unix oral histories done by Michael S.
Mahoney, there's this:
McIlroy: .. And one day I came up with a syntax for the shell that went
along with the piping, and Ken said, "I'm going to do it!" He was tired of
hearing all this stuff, and that was - you've read about it several times,
I'm sure - that was absolutely a fabulous day the next day. He said, "I'm
going to do it." He didn't do exactly what I had proposed for the pipe
system call; he invented a slightly better one that finally got changed
once more to what we have today. He did use my clumsy syntax.
He put pipes into Unix, he put this notation [Here McIlroy pointed to the
board, where he had written f > g > c] into shell, all in one night. The next
morning, we had this - people came in, and we had - oh, and he also changed
a lot of - most of the programs up to that time couldn't take standard
input, because there wasn't the real need. So they all had file arguments;
grep had a file argument, and cat had a file argument, and Thompson saw
that that wasn't going to fit with this scheme of things and he went in and
changed all those programs in the same night. I don't know how ... And the
next morning we had this orgy of one-liners.
So I don't think that suggested text, that it was added slowly, is
appropriate. If this account is correct, it was pretty atomic.
It sounds more like the correct answer to the stuff in the source is the one
proposed: that it got added to the assembler version of the system before it
was done in the C version.
Noel
Warren:
Can anybody help explain the "not in assembler" comment?
====
I think it means `as(1) has predefined symbols with the
numbers of many system calls, but not this one.'
Norman Wilson
Toronto ON
I recall reading a long time ago a sentence in a paper Dennis wrote which
went something like "Unix is profligate with processes". The word
profligate sticks in my mind. This is a 30+-year-old memory of a probably
35+-year-old paper, from back in the day when running a shell as a user
level process was very controversial. I've scanned the papers (and BSTJ) I
can find but can't find that quote. Geez, is my memory that bad? Don't
answer that!
Rob Pike did a talk in the early 90s about right and wrong ways to expose
the network stack in a synthetic file system. I'd like to find those
slides, because people keep implementing synthetics for network stacks and
they always look like the "wrong" version from Rob's slides. I've asked him
but he can't find it. I've long since lost the email with the slides,
several jobs back ...
thanks
ron
> In one of his books, Wirth laments about programmers proudly
> showing him terrible code written in Pascal
For your amusement, here's Wirth himself committing that sin:
http://www.cs.dartmouth.edu/~doug/wirth.pdf
> From: Nick Downing
> way overcomplicated and using a very awkward structure of millions of
> interdependent C++ templates and what-have-you.
> ...
> the normal standard use cases that his group have tested and made to
> work by extensive layers of band-aid fixes, leaving the code in an
> incomprehensible state.
Which just goes to provide support for my long-term contention, that language
features can't help a bad programmer, or prevent them from writing garbage.
Sure, you can take away 'goto' and other dangerous things, and add a lot of
things that _can_ be used to write good code (e.g. complete typing and type
checking), but that doesn't mean that a user _will_ write good code.
I once did a lot of work with an OS written in a macro assembler, done by
someone really good. (He'd even created macros to do structure declarations!)
It was a joy to work with (very clean and simple), totally bug-free; and very
easy to change/modify, while retaining those characteristics. (I modified the
I/O system to use upcalls to signal asynchronous I/O completion, instead of
IPC messages, and it was like falling off a log.)
Thinking we can provide programming tools/languages which will make good
programmers is like thinking we can provide sculpting equipment which will
make good sculptors.
I don't, alas, have any suggestions for what we _can_ do to make good
programmers. It may be impossible (like making good sculptors - they are born,
not made).
I do recall talking to Jerry Saltzer about system architects, and he said
something to the effect of 'we can run this stuff past students, and some of
them get it, and some don't, and that's about all we can do'.
Noel
I have an ImageMagic CD from 1994 that I found in my
garage. It has a bunch of versions of Linux that aren't on kernel.org.
The 0.99 series, the 0.98 series and what looks like 1.0 alpha pl14
and pl15.
Is anybody here interested in them?
I have fallen out of contact with the Linux folks, so don't know if
anybody on kernel.org would be interested in these. Does anybody care?
Warner
On 2016-12-29 03:00, Nick Downing <downing.nick(a)gmail.com> wrote:
> I will let you know when I get
> it working :) It's not a current focus, but I will return to it someday.
> In the meantime, I'm putting it on bitbucket, so others will be able to
> pick it up if they wish. However, this also isn't my current focus, it's
> there, but it's not documented.
>
> The IAR compiler on the Z180 supports a
> memory model similar to the old "medium" memory model that we used to
> use with Microsoft or Turbo C on DOS machines, that is, multiple code
> segments with a single data segment. Yes, the Z180 compiled C code is
> larger than the PDP-11 compiled C code, but luckily you can have
> multiple code segments, which you cannot (easily) have on the PDP-11.
>
> Unfortunately code and data segments share the same 64 kbyte logical
> address space, so what I did was to partition the address space into 4
> kbytes (always mapped, used for interrupt handlers, bank switching
> routines, IAR compiler helper routines, etc), 56 kbytes (kernel or
> current process data and stack) and 4 kbytes (currently executing
> function). The currently executing function couldn't be more than 4
> kbytes and couldn't cross a physical 4 kbyte boundary due to the
> hardware mapping granularity, but this was acceptable in practice.
>
> I got
> the Unix V7 clone working OK under this model and then added the
> networking, so although it was a bit of a dog's breakfast, it proves the
> concept works. My memory management left a fair bit to be desired (too
> much work to fix) however I think porting 2.11BSD would solve this
> problem since it works in the PDP-11 under split I/D, which has similar
> constraints except the 4 kbyte code constraint. My understanding is
> 2.11BSD is actually a cut-down 4.3BSD running on the HAL from 2.xxBSD. I
> would like to audit each change from 4.3BSD to make sure I agree with
> it, so essentially my project would be porting 4.3BSD rather than
> 2.11BSD. But I'd take the networking stack and possibly a lot more code
> from 2.11BSD, since it is simplified, for instance the networking stack
> does not use SYN cookies. cheers, Nick
Having written quite some code on the Z180, as well as god knows how
much code on the PDP-11, I'm going to agree with Peter Jeremy in that I
do not believe 2.11BSD can be made to run on a Z180. (Well, of course,
anything is possible, you could just write a 68000-emulator, for
example, but natively... no.)
Unix V7 is miles from 2.11BSD. Unix V7 can run on very modest PDP-11
models. 2.11BSD cannot be made to run on a PDP-11 without split I/D
space, which effectively gives you 128K of address space to play with,
in addition to the overlaying done with the MMU remappings.
The MMU remappings might be possible to emulate enough with the segment
registers of the Z180 for the Unix needs, but the split I/D space just
won't happen.
2.9BSD was the last version (I believe) which ran on non split-I/D machines.
Johnny
>
> On Wed, Dec 28, 2016 at 6:14 PM, Peter Jeremy <peter(a)rulingia.com> wrote:
>> On 2016-Dec-25 17:21:31 -0500, Steve Nickolas <usotsuki(a)buric.co> wrote:
>>> On Mon, 26 Dec 2016, Nick Downing wrote:
>>>> I became frustrated with the limitations of both UZI and NOS and decided to
>>>> port 2.11BSD to the cash register as the next step, my goal was (a) make it
>>>> cross compile from Linux to PDP-11, (b) check it can build an identical
>>>> release tape through cross compilation, (c) port it to Z80 using my
>>>> existing cross compiler.
>>> A Z180 is powerful enough to run 2.11BSD? o.o;
>> I suspect shoe-horning 2.11BSD onto a Z180 would be difficult - 2.11BSD
>> on a PDP-11 requires split I+D and has kernel and userland in separate
>> address spaces. Even with that, keeping the non-overlay part of the
>> kernel in 64KB is difficult. Equivalent Z180 code is going to be much
>> larger than PDP-11 code.
>>
>> I'd be happy to be proved wrong.
>>
>> --
>> Peter Jeremy
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Since Larry started the wish-listing of people we'd like to see on the
list, I'll add mine:
* Doug Gwyn
* Chris Torek
* Guy Harris
Anyone know how to track down any of these folks?
My two cents. :-)
Happy New Year everyone,
Arnold
As Neil Young said when he played with the band, it's been one of
the great joys of my life to be here with you (yeah, I paraphrased).
As a kid who wanted to be at Bell Labs, a student who got the troff
manual and used it for the next 30 years, a student who got an account
on the vax 11/750 that had the BSD source on it and learned so much,
I just want to thank all of the Bell Labs people for being here and
Warren for putting this list together and for Unix teaching me so much.
If I could have one thing for Christmas it would be bwk joining the list.
I did some extensions to awk and asked him about it and he tarred up
~bwk/awk and sent it to me. I've got the awk source and the book source
in english and french (I think). Brian is awesome, it would be cool to
have him on this list.
All that said, I'm super grateful to be here amongst the people who were
there when you got an image from Ken. You guys are lucky.
--lm
[ This whole article is just a flight of fancy; feel free to ignore it or
at least treat it as the whimsy that it is. ]
The PDP-7 Unix system is the first step in the evolution of Unix as we know it.
We have a snapshot of the system around the end of 1970/beginning of 1971 at
http://www.tuhs.org/Archive/PDP-11/Distributions/research/McIlroy_v0/
and a reconstructed and working partial system at
https://github.com/DoctorWkt/pdp7-unix
PDP-7 Unix was a playground for Ken, Dennis and others to try out ideas and
implementations, and it was quickly superseded by 1st Edition PDP-11 Unix.
Details of how it evolved are at https://www.bell-labs.com/usr/dmr/www/hist.html
and https://www.bell-labs.com/usr/dmr/www/chist.html
All fine and good. However, I keep wondering, how far could they have taken
Unix on the PDP-7 platform?
The Kernel
----------
The reconstructed kernel only occupies 3070 words of 4096 words available,
so there is room left for more code. There is already an alternative
reconstruction where the "dd" concept has been replaced with the ".."
concept (see https://github.com/DoctorWkt/pdp7-unix/tree/master/src/alt)
PDP-7 Unix doesn't have the concept of absolute or relative filenames
(e.g. /usr/bin/ls or a/b/c or ../../file). Could the nami kernel function
be modified to do this? It would probably mean changing from two characters
packed into a word to a single character per word (to make searching for
'/' easier), and this would turn it into something more recognisable as Unix.
What about pipes? They should not be too hard to implement. Even sixteen
pipes with a 16-word buffer each would only be 256+ extra data words in
the kernel. And a hundred words of code?
There are only a few special devices in the kernel: ttyin, ttyout, keyboard,
display, pptin, pptout. What about a disk block device? Was there a PDP-7
tape device, and if so, why not a tape driver and block device?
Filesystem
----------
The implementation of the filesystem is, in some places, quite inefficient.
The free block list is implemented as follows. In each block, there are
10 free block numbers then a pointer to the next part of the free list.
However, each block can hold 64 block numbers, so why are only 10 free
block numbers stored in each block? By using the whole of a block to store
free block numbers, there would actually be more free blocks to use!
Each i-node (size 12 words, 7 of which are direct or indirect pointers)
has one word which holds a unique value. This doesn't seem to be used at
all. If it was reused as a block pointer, this would allow files to be
up to 8*64=512 (small) or 8*64*64=32768 (large) words in length, instead of
7*64=448 words (small) or 7*64*64=28672 (large) words.
The system is set up to only use one side of the two-sided disk device.
It looks like the other side was used to backup (snapshot) a working
system in case of catastrophic filesystem corruption: they could simply
copy the blocks from the "snapshot" side back to the working side. We
could double the size of the filesystem quite easily.
Macro Assembler
---------------
The kernel is written using fairly tight assembly code, and there probably
isn't a way to translate it into a high-level language. The PDP-7 has an
arcane instruction set, and the existing assembler syntax is nothing special.
What about a macro assembler that makes it easier to write code, especially
readable code? Here are some ideas based on the existing kernel:
u.rq := 8             ==>      lac 8; dac u.rq

function swap         ==>      swap: 0
{
    return;                    jmp swap i
}

subroutine .fork      ==>      fork:          // i.e. not a function
{
    line 1                     line 1
}

loop                  ==>      1:
{
}                              jmp 1b

if (sad dm1)          ==>      sad dm1
{                              jmp 1f
    code1                      code1
} else {                    1: code2
    code2
}

betwen(o101,o132)     ==>      jms betwen; o101; o132
There are probably a bunch more that could be added. The aim is to
make the control structures easier to read and write. The programmer
still has to grok the PDP-7 instruction set.
B (or other) Language
---------------------
PDP-7 Unix has a B compiler which compiles source down to a virtual
instruction set which is interpreted by the B interpreter. We have the
B interpreter code, and Robert Swierczek managed to rewrite the B compiler,
see https://github.com/DoctorWkt/pdp7-unix/blob/master/src/other/b.b
At first glance, the PDP-7 architecture is not that amenable to high level
languages, but it turns out that it is indeed possible to write a compiler for
a C subset that targets the PDP-7, see https://github.com/DoctorWkt/a-compiler.
So, could the B compiler be modified to actually output PDP-7 assembly code?
If so, we could rewrite the utilities (cp, mv, ls etc.) in a high-level
language and make the system easier to maintain. I would recommend treating
int and char as the same and only storing one char per word.
And then, even though the PDP-7 architecture doesn't support it, how hard
would it be to add int/char types and bring the language one step closer to C?
Conclusion
----------
All of this is pie in the sky. It can certainly be done, but a) who has
the time and b) it would be a "tour de force", nothing really useful. But
imagine if, at the beginning of 1970, Unix had a proper B or C compiler,
utilities written in this high-level language, a kernel written in a
semi-high-level language, and a system with pipes and proper pathnames.
Cheers, Warren
Let me be the first to say that the International Earth Rotation Service
has announced that there will be a Leap Second inserted at 23:59:59 UTC on
the 31st December, due to the earth slowly slowing down. It's fun to
listen to hear how the time beeps handle it; will your GPS clock display
23:59:60, or will it go nuts like my last one did (I had to power cycle
the thing)?
My recording of the last one: horsfall.org/leapsecond.webm .
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> So was "/usr/bin" initially only for user-contributed binaries, or was
it from its inception a place for binaries that were not essential for
system boot and could not fit in the root partition?
The latter is my understanding, but early on the two
interpretations would have been nearly coextensive.
Remember, though, that even Ken wrote some "user-contributed"
code.
Doug
> I don't think I ever heard the appellation "phototypesetter C"
> before.
Interesting data point; thanks for passing that along.
> Certainly C and the phototypesetter developed independently, though in
> the same room. But the explanation that they got linked by appearing in
> the same tape release makes perfect sense.
I have this vague memory of being told, or reading somewhere, that many of the
changes from 'vanilla V6' C to 'phototypesetter' C were added because they were
needed for that project, hence the name. Alas, I have no idea where I might
have gotten that from (I had a quick look at a few likely documentary sources,
but no joy).
It's quite possible that this was a supposition on someone outside Bell's part
(or perhaps inside Bell, but outside the Unix group), because the two came out
in the same tape.
Reading the notes about the upgrades (in particular, "newstuff.nr") makes it
seem like a more likely driver of _some_ of the changes was the Interdata port
(which was also happening at around the same time, if I have the timeline
correct). And of course some might have been driven by general utility (e.g.
the ability to initialize structures).
It would be interesting to see if anyone remembers why these changes were made.
Noel
The document
http://www.tuhs.org/Archive/PDP-11/Distributions/research/1972_stuff/Readme
discusses the uncertainty regarding the epoch used for the file timestamps.
"The biggest problem here is to pin down the epoch for the files. In the
early version of UNIX, timestamps were in 1/60th second units. A 32-bit
counter using these units overflows in 2.5 years, so the epoch had to
be changed periodically, and I believe 1970, 1971, 1972 and 1973 were
all epochs at one stage or another."
"Given that the C compiler passes, and the library, are dated in June
of the epoch year, and that Dennis has said ``1972-73 were the truly
formative years in the development of the C language'', it's therefore
unlikely that the epoch for the s2 tape is 1971: it is more likely to
be 1972. The tape also contains several 1st Edition a.out binaries,
which also makes it unlikely to be 1973."
"Therefore, Warren's decoding of the s2-bits file, in s2-bits.tar.gz,
uses 1972 as the epoch. However, Dennis decoding in s2.tar.gz uses 1973."
"Finally, the date(1) a.out on the tape uses 1971 as its epoch. How
annoying! After a bit of discussion, Dennis and Warren have agreed that
1972 is the most probable epoch for s2-bits."
I thought I could validate the epoch by looking at the distribution of
weekdays for the three alternative years (1971 to 1973). Here are the
results.
wget http://www.tuhs.org/Archive/PDP-11/Distributions/research/1972_stuff/Readme
# Requires GNU date (for -d) and gawk (for strftime).
for guess in 1971 1972 1973 ; do
    echo $guess
    EPOCH=$(date +'%s' -d "$guess/01/01 00:00 UTC")
    awk '/\/core/,/\/etc\/init/ {
        if ($9) print strftime("%a", '$EPOCH' + $9 / 60)}' Readme |
        sort |
        uniq -c |
        sort -n
done
1971
1 Sat
6 Mon
8 Thu
8 Tue
17 Fri
21 Wed
34 Sun
1972
1 Sun
6 Tue
8 Fri
8 Wed
17 Sat
21 Thu
34 Mon
1973
1 Tue
6 Thu
8 Fri
8 Sun
17 Mon
21 Sat
34 Wed
As you can see, unless weekends at the Bell Labs were highly atypical,
1972 has the most probable distribution of work among the days of the week.
> of course some [of the changes to C in this time period] might have been
> driven by general utility (e.g. the ability to initialize structures).
I was thinking about this, with my memory of the changes refreshed by
re-reading those 'changes to C' notes, and it's interesting how many of them
were ones I personally (and most of the people working with me) didn't use.
Here is a list of the changes described in those 3 documents:
'newstuff':
- Longs
- Unsigneds
- Blocks (locals declared inside a non-top-level {} pair)
- Initialization of structures
- Initialization of automatic variables
- Bit fields
- Macro functions
- Macro conditionals (#ifdef)
- Arguments in registers
- Typedefs
- 'Static' scope
'Changes':
- Multi-line macros
- Undefine
- Conditional expressions (#if)
- Unions
- Casts
- Sizeof() on abstractions
- '=' in initializations
- Change binary operators from trailing to leading
- 'extern' does not allocate storage
(This note also includes unsigneds, blocks, and structure initializations,
from the earlier? note.)
'cchanges':
- Structure assignment and argument/return-value
- Enum
Of these, I never really used quite a few: blocks, automatic initializations,
typedefs, unions, structure assignment/etc, or enum. I'm not sure if I ever
used bit fields, either. Some of these are understandable; e.g. automatic
initializations are just syntactic sugar (as are register arguments, but I did
use those).
Typedef is also effectively syntactic sugar; you can always use a macro and
get almost (entirely?) the same result. In fact, I devised an entire system of
types to make the code I was working on (almost entirely networking code, so
lots of packet headers from other machines, etc) more rigorous - and later it
turned out it had made it much more portable; it all used macros, not typedef.
I don't think I ever used typedef...
(The details of that might be of some interest: instead of int, long, etc we
had things of the form {type}{len}, where {type} was 'int', 'uns', 'bit',
etc and {len} was 'b', 's', 'l', or two other interesting ones, 'w' and 'f' -
'w' meant the machine's native word length, and 'f' meant whatever was fastest
on the machine. So 'unsl' meant '32-bit unsigned'. Depending on the machine,
the compiler couldn't always produce them all - e.g. the PDP-11 didn't have
unsl - so sometimes you had to live with non-optimal replacements. There were
also un-typed types, i.e. 'byte', 'swrd', 'lwrd' - 8, 16 and 32 bits - and
'word' - the machine's native length.)
Unions didn't get used much either, in our stuff, although one would think it
would be useful in network code - you get a packet with a pile of bits inside
it, which can be one of N different formats, seems like a perfect application
for a union. The problem is that it tied two different subsystems intimately
together. If you have protocol A and protocol B, if you use a union to define
the header format, the union has to have both A and B in it. Now if you want
to add protocol C, that requires modifying that union definition. It was much
easier to just take a pointer to the outer packet's data area, and assign
(with cast) it to a new pointer variable which was of the correct type for the
header you were trying to process.
Some of the new things were incredibly useful, though - or, in fact, one
couldn't get by without them. Casts were incredibly useful once the compiler
got pickier. Initialization of structures was huge - other than 'bdevsw'-like
hacks, there was just no other way to do that.
Noel
> From: Warren Toomey
> Ritchie, D.M. The UNIX Time Sharing System. MM 71-1273-4.
> which makes me think that the draft version Doug McIlroy found
Not really a response to your question, but I'd looked at that
'UnixEditionZero' and was very taken with this line, early on:
"the most important features of UNIX are its simplicity [and] elegance"
and had been meaning for some time to send in a rant.
The variants of Unix done later by others sure fixed that, didn't they? :-(
On a related note, great as my respect is for Ken and Doug for their work on
early Unix (surely the system with the greatest bang/buck ratio ever), I have
to disagree with them about Multics. In particular, if one is going to have a
system as complex as modern Unices have become, one might as well get the
power of Multics for it. Alas, we have the worst of both worlds - the size,
_without_ the power.
(Of course, Multics made some mistakes - primarly in thinking that the future
of computing lay in large, powerful central machines, but other aspects of
the system - such as the single-level store - clearly were the right
direction. And wouldn't it be nice to have AIM boxes to run our browers and
mail-readers in - so much for malware!)
Noel
The Second Edition manual has a section titled "User Maintained
Programs" listing the following utilities and games: basic, bc, bj, cal,
chash, cre, das, dli, dpt, moo, ptx, tmg, and ttt. In the Introduction,
the reader is asked to consult their authors for more information.
Does anyone remember whether at the time these were installed in the
system-wide /bin directory, or whether they were only available in their
owners' home directories?
All, I've just got back from a few days away to find 14 new subscription
requests to the TUHS mailing list. Welcome aboard to you all.
Normally I only get one request a month, so I have some concerns about
the legitimacy of all these requests; accept my apologies in advance
if there are any off-topic e-mails in the next few days.
P.S. The mail while I was away was very interesting. Noel, you might
also be interested in the B interpreter and Robert Swierczek's B
compiler for PDP-7 Unix. The original B compiler doesn't exist, so
Robert took the 11/20 C compiler and "undid" the code that does types
so that it "became" a B compiler.
https://github.com/DoctorWkt/pdp7-unix/tree/master/src/other
Cheers all
Warren
Tommy Flowers MBE was born on this day in 1905; amongst other things he
designed Colossus, the world's first programmable electronic computer,
which was used to break the German Lorenz cipher (not Enigma, as some
think).
Relevance to Unix? Well, without the world's first usable computer we
would not have the world's first usable OS, and M$ would probably reign
supreme by now...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> More here:
> http://minnie.tuhs.org/pipermail/tuhs/2014-October/005218.html
Which sez:
There is a second document, untitled, no date, which I have not been able to
locate online at all.
..
From the content, it seems to be from shortly after the previous one, so say,
circa 1977.
I poked around in a dump of the CSR Unix which I now have available to me, and
found a copy of it. I just checked the Internet (using the canonical search
engine) for various phrases from it, but I still could not turn it up. So,
here it is in machine-readable form:
http://ana-3.lcs.mit.edu/~jnc/history/unix/CChanges.txt
Hope this is some use to someone... :-)
Noel
Amusing, I don't think I ever heard the appellation "phototypesetter C"
before. Certainly C and the phototypesetter developed independently, though
in the same room. But the explanation that they got linked by appearing
in the same tape release makes perfect sense. Thanks for the tidbit.
Doug
> From: "Ron Natalie"
> At some point .. and the ability to assign/pass structures got
> supported, though I thought that was the compiler that came with V7.
That is my vague recollection too.
> I'm still annoyed they didn't fix arrays when they fixed structs.
Which aspect? The ability to assign/pass/return arrays, or the funky way that
array naming worked (I'm trying to remember the details, I think it was
something to do with 'arrays' passed as arguments - it was actually a pointer
that was passed, but the declaration didn't have to say 'pointer').
Noel
> From: Tony Finch
> when did C get its cast operator?
Well after that piece of code was written! :-)
I don't recall exactly off the top of my head, but I recall 2-3 notes to
users about the evolution of C post 'vanilla' V6; I think a lot of it was
related to work being done on typesetting stuff, hence the moniker
'phototypesetter compiler' which was applied to that 'improved' C.
One of the notes is fairly common, but another I have only in hardcopy
(although I scanned it at one point).
I'll try and turn all of them, along with some notes I made about the
differences between 'vanilla' V6 C and 'phototypesetter C' (which a lot of the
later V6 users started with - I certainly did), into an article on the
Computer History wiki on the early evolution of C.
Noel
PS:
> I recall 2-3 notes to users about the evolution of C post 'vanilla' V6
> ...
> One of the notes is fairly common, but another I have only in hardcopy
> (although I scanned it at one point).
More here:
http://minnie.tuhs.org/pipermail/tuhs/2014-October/005218.html
If anyone knows of any other documentation of C evolution, I'd be interested
in hearing about it for the Computer History wiki page.
Noel
> From: "Steve Johnson"
> The number on the left is a PDP-11 address, probably for some kind of
> control register.
It's the Processor Status Word, which contained the CPU's hardware priority
level, condition codes, etc.
> That's a construction that's left over from B.
I wonder why it was written as "0177776->integ", rather than "*0177776"?
Probably the former allowed the C compiler to work out what size the operand
was. (BTW, 'integ' was declared in a structure declaration as follows:
struct {
int integ;
};
(Did the code being looked at actually say "0177776->int"? The compiler might
have barfed on a reserved keyword being used like that.)
Noel
Has anyone got a setup where they can run something like nm(1) on the
compiled Third Edition Unix C files and send me the output?
(Alternatively, just send me the .o files, and I'll whip up something to
read their symbols.) I tried to compile the source code on a modern
system by hacking old C to something closer to what GCC will accept,
with commands such as the following.
cc -E dp.c |
sed 's/=\([&|+-]\)/\1=/g;s/struct[ \t]*(/struct {/' |
gcc -w -x c -
However, I stumbled on the use of structure fields on things that aren't
structures, e.g. "0177776->int =| 300"
It has often been told how the Bell Labs law department became the
first non-research department to use Unix, displacing a newly acquired
stand-alone word-processing system that fell short of the department's
hopes because it couldn't number the lines on patent applications,
as USPTO required. When Joe Ossanna heard of this, he told them about
roff and promised to give it line-numbering capability the next day.
They tried it and were hooked. Patent secretaries became remote
members of the fellowship of the Unix lab. In due time the law
department got its own machine.
Less well known is how Unix made it into the head office of AT&T. It
seems that the CEO, Charlie Brown, did not like to be seen wearing
glasses when he read speeches. Somehow his PR assistant learned of
the CAT phototypesetter in the Unix lab and asked whether it might be
possible to use it to produce scripts in large type. Of course it was.
As connections to the top never hurt, the CEO's office was welcomed
as another outside user. The cost--occasionally having to develop film
for the final copy of a speech--was not onerous.
Having teethed on speeches, the head office realized that Unix could
also be useful for things that didn't need phototypesetting. Other
documents began to accumulate in their directory. By the time we became
aware of it, the hoard came to include minutes of AT&T board meetings.
It didn't seem like a very good idea for us to be keeping records from
the inner sanctum of the corporation on a computer where most everybody
had super-user privileges. A call to the PR guy convinced him of the
wisdom of keeping such things on their own premises. And so the CEO's
office bought a Unix system.
Just as one hears of cars chosen for their cupholders, so were these
users converted to Unix for trivial reasons: line numbers and vanity.
Doug
All, I'm writing a paper based on my June 2016 talk on PDP-7 Unix. As part
of that I was looking at the BCPL -> B -> NB -> C history. And as part of
that, I was reading Ken's B manual, written in 1972:
https://www.bell-labs.com/usr/dmr/www/kbman.pdf
Then I noticed at the end Ken refers to:
Ritchie, D.M. The UNIX Time Sharing System. MM 71-1273-4.
which makes me think that the draft version Doug McIlroy found
(now at http://www.tuhs.org/Archive/PDP-11/Distributions/research/McIlroy_v0/UnixEd…)
must have made it into a full memorandum.
Given that we have the memorandum number, does anybody know if it
would be possible to find it in the archives from what was Bell Labs?
Cheers, Warren
J.F.Ossanna was given unto us in 1928; a prolific programmer, he not only
had a hand in developing Unix but also gave us the ROFF series.
PS: Ada Lovelace, arguably the world's first computer programmer, was
given unto us in 1815; she saw the potential in Charles Babbage's
new-fangled thing.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On Thu, Dec 8, 2016 at 5:39 AM, Joerg Schilling <schily(a)schily.net> wrote:
> Back to the Bourne Shell:
>
> The original Bourne Shell did not use malloc, but rather had a SIGSEGV
> handler
> that used to extend the "string stack" called "stak" via sbrk() whenever
> the
> code tried to access data beyond the end ot the heap.
>
Right, although the 68K was not the first or the only system to have that
problem - again IIRC the Series/1 and maybe one of the PE systems. You are
correct that SunOS and >>some<< of the 68000-based systems did have the
problem you suggested. And in fact Masscomp (and Apollo), which used
68000's (but used two of them so they could run full VM), could survive that
style of fault (because it never occurred).
BTW: the "conceptual problem" you mentioned, while true, is being a little
harsh. As one of the 68K designers (Les Crudele) said to me once,
when they wrote the microcode, they were not thinking about instruction
restart - so the issue was that Nick did not save enough state to do it.
The model for the 68000 that they were using was the base/limit PDP-11 and
the original PDP-10. Also at the time, the competition was the 6800, the
8080, Z80 and 6502. They were trying to put a PDP-11/70 on a chip (without
getting into trouble like CalData did - which Les, Nick and team were very
aware of, having been DEC and CalData customers before they were at Moto).
While we think of the 68000 family as being 32-bit, the fact is inside it
is a 16-bit system (the barrel shifter and execution functions are 16 bits).
And it was a skunk-works project -- the 6809 was Moto's real processor.
It was an experiment done by a couple of rogue engineers who said - hey, we
can do better. The fact was they made a chip that was good enough to
actually compete with the Vax in the end, because it was fast enough AND
they had the wisdom to define 32 bits of address space and 32 bit
operations (again having been PDP-11 users), but as Les used to say - part
of the thinking was that while DEC was moving to the Vax, they had hoped to
break into the area that they 16 bits minis claimed - which they in-fact
did.
And if you think in terms of a Clay Christensen's disruption theory, the
fact that VM did not work easily (i.e. was a "worse" technology than the
Vax) was ok - a new breed of customer did not care. 68000 was huge
success, despite Moto marketing ;-)
To me the larger issue with the 68010 was that when Nick did add the
restart microcode, the new '10 microcode actually dumped version-dependent
state on the external stack (in Intel terminology, a different "step" of the
'10 put different state on the external stack - or worse, could not restart
an instruction whose state had been saved by a different-step processor).
This screw-up was a huge issue when we replaced the "executor" with a
68010, because it meant that all CPU boards had to be at the same processor
microcode revision. Masscomp was of course the first to make an MP, so it
was the first firm to run into the issue (I remember debugging it - we
could not reproduce the issue because, of course, tjt's and my own machines
by chance had "MPU" boards, as we called them, with the same step -- it was
one of the field service guys who realized that the customer system had
mixed-step boards -- i.e. when they replaced a single MPU in the field, the
system stopped working). IIRC, Moto never fixed the '10, as that
processor was reasonably short-lived in the open market. They did fix the
microcode in the '20 so that the state on the external stack was independent
of stepping.
Clem
*This* is a computer!
http://archive.computerhistory.org/resources/text/EAI/EAI.HYDAC2400.1963.10…
A hybrid analogue/digital box, what could possibly go wrong? And check
out the dudes and dudess supporting it (except she'd better be careful
when moving her chair, in case she takes out a paper tape unit...).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> Dennis Ritchie's home pages have some info on this
Yeah, I'd read that - I was hoping for some actual technical info on the KS11,
though.
(I'm assuming he gave the name there correctly - unless his memory dropped a
bit, a thing which human memories do! :-) - since I've never been able to
find a single mention of it, including in the Spare Module Handbook, which
covers other Special Systems products.)
> I looked for (but did not find) information on what ""the classical
> PDP-10 style "hi-seg" "low-seg" memory mapping unit"" was.
The best description is in the DECSystem-10 Hardware Reference Manual (mine is
EK-10/20-HR-001, but alas that version appears not to be online - I'll scan my
copy and put it online when I get a chance.) This version:
http://pdp10.nocrew.org/docs/ad-h391a-t1.pdf
does appear to cover it: pp. 5-38 through 5-40 (pp. 352-354 of the PDF) for
the KA10, and pp. 5-15 to 5-30 (pp. 329-344) for the KI10.
The KA10 provided one (optionally two) base/bounds register pairs, where the
base register contains the location in real memory. With two pairs (the
second one applied to high user address space), the high part could be
write-protected, for sharing pure code.
The KI10 provided something similar to this, but more complicated; it included
paging, but also something called User 'Concealed', which allowed proprietary
subroutine packages to be used, while hidden from the rest of the user's code.
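For illustration, that two-pair base/bounds scheme can be sketched in C. The bit numbering, register widths and trap behavior here are my assumptions for the sketch, not the exact KA10 hardware format:

```c
#include <stdint.h>

struct seg {
    uint32_t base;    /* relocation: physical word address of segment start */
    uint32_t limit;   /* bounds: number of valid words in the segment */
    int writable;     /* the high (pure-code) segment can be write-protected */
};

/* Translate an 18-bit user word address; return -1 to model a trap. */
long ka10_translate(uint32_t vaddr, const struct seg *lo,
                    const struct seg *hi, int is_write)
{
    const struct seg *s = (vaddr & 0400000) ? hi : lo; /* bit 17 picks hi/lo */
    uint32_t off = vaddr & 0377777;                    /* offset in segment */

    if (off >= s->limit)          return -1;           /* bounds violation */
    if (is_write && !s->writable) return -1;           /* write to pure seg */
    return (long)(s->base + off);                      /* relocated address */
}
```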
> Does anyone have an idea what PDP-10 MMU Dennis may have been referring
> to?
Almost certainly the KA10.
> Here my hypothesis would be that in kernel mode mapping was off, and
> that in user mode there were two segments, each with a base and limit
> into physical memory
Hard to say. Kernel mode might or might not have mapping, and User mode might
have provided one, or two, segments; the KA10 did have an option for
single-segment.
> this setup has an echo in how the later KL-11 MMU was used.
Sorry, what's a KL-11? The only 'KL11' I know of is the serial line board
(M780) which was the predecessor to the DL11.
Noel
Finally took a look at the V1 source.
Referring to http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/u2.s
Toward the end of the sysexec function there is:
cmp core,$405 / br .+14 is first instruction if file is
/ standard a.out format
bne 1f / branch, if not standard format
mov core+2,r5 / put 2nd word of users program in r5; number of
/ bytes in program text
sub $14,r5 / subtract 12
cmp r5,u.count /
bgt 1f / branch if r5 greater than u.count
mov r5,u.count
jsr r0,readi / read in rest of user's program text
add core+10,u.nread / add size of user data area to u.nread
br 2f
1:
jsr r0,readi / read in rest of file
2:
mov u.nread,u.break / set users program break to end of
/ user code
add $core+14,u.break / plus data area
jsr r0,iclose / does nothing
br sysret3 / return to core image at $core
$core is equated to 040000 in another file (u0.s). In V1 apparently the a.out header was 6 words (http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/man/man5/a.out.5) not 8, and hence the magic for a standard executable was 0405. It was already used as magic to distinguish a.out format files from other executables. It also shows that indeed 'exec' jumped to the first word of the header (at location $core).
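The reason 0405 can double as both a magic number and live code: read as a PDP-11 instruction, it is "br .+14" (octal), a branch over the 6-word (12-byte) header, so jumping to the first word of the header simply skips it. A small sketch of that decoding (BR is opcode 000400 plus an 8-bit signed word offset, relative to the updated PC):

```c
/* Decode the target of a PDP-11 BR instruction at address pc. */
int br_target(unsigned short insn, int pc)
{
    int off = (signed char)(insn & 0377);  /* sign-extend the 8-bit offset */
    return pc + 2 + 2 * off;               /* PC has advanced past the insn */
}
```

So br_target(0405, 0) is 12 decimal (14 octal): execution lands just past the 6-word header.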
From this I don't think we will find code for the KS-11 in the V1 source. The file http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/u3.s also seems to support an LSX/MX-like approach where each process switch implies a swap.
I'd say that the lost V2 is where memory protection first appeared in Unix, i.e. 1972.
Paul
PS Sorry for writing 'V0' earlier -- I meant V1 all along.
On 2016-12-08 18:18, Paul Ruizendaal <pnr(a)planet.nl> wrote:
>
>>> DEC's Custom Special Systems (CSS) group .. built a simple base/limit
>>> register device, soon after the 11/20 was released. ... So an early
>>> version of Unix, after the original 11/20 port from the PDP-7, had this
>>> however.....
>> Oh, right, I'd forgotten about that: the KS-11 - I've previously enquired to
>> see if anyone had _any_ documentation for this, but so far, nada.
> I was looking for that a few years back. Dennis Ritchie's home pages have
> some info on this (https://www.bell-labs.com/usr/dmr/www/odd.html)
> At the bottom of that page he writes:
>
> ""Back around 1970-71, Unix on the PDP-11/20 ran on hardware that not only did not support virtual memory, but didn't support any kind of hardware memory mapping or protection, for example against writing over the kernel. This was a pain, because we were using the machine for multiple users. When anyone was working on a program, it was considered a courtesy to yell "A.OUT?" before trying it, to warn others to save whatever they were editing.
> [..snip..]
> We knew the PDP-11/45, which did support memory mapping and protection for the kernel and other processes, was coming, but not instantly; in anticipation, we arranged with Digital Special Systems to buy a PDP-11/20 with KS-11 add-on. This was an extra system unit bolted to the processor that made it distinguish kernel from user mode, and provided a classical PDP-10 style "hi-seg" "low-seg" memory mapping unit. I seem to recall that maybe 6 of these had been made when we ordered it.""
>
> My hypothesis is that the very first versions of Unix were using a memory scheme similar to that used in the later LSX and MX derivatives: the kernel resides in lower memory and the user program in upper memory; each process switch implied a swap. Disclaimer: I have not studied the V0 source to verify this hypothesis.
>
> When this topic had my interest I looked for (but did not find) information on what ""the classical PDP-10 style "hi-seg" "low-seg" memory mapping unit"" was. Here my hypothesis would be that in kernel mode mapping was off, and that in user mode there were two segments, each with a base and limit into physical memory -- and that this setup has an echo in how the later KL-11 MMU was used.
>
> Does anyone have an idea what PDP-10 MMU Dennis may have been referring to?
If you read the Wikipedia page about the PDP-10, you'll find the answer.
Basically, kernel mode uses physical memory. User mode has a low
segment and a high segment; which one is used is decided by the high bit
of the address. For each segment there is a base register and a length
register. Commonly the low segment held read/write data, while the high
segment could be shared between processes and held read-only data/code.
But you could play with it in other ways, if you wanted to.
Essex MUD, as far as I remember, had the game data in a shared hiseg,
which could be written by the program.
So your guess is pretty good. Not sure I'd say this is similar to how
the later PDP-11 MMU works, though. But I can see someone making the
comparison, since the PDP-11 pages can vary in size, within limits.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> DEC's Custom Special Systems (CSS) group .. built a simple base/limit
> register device, soon after the 11/20 was released. ... So an early
> version of Unix, after the original 11/20 port from the PDP-7, had this
> however.....
Oh, right, I'd forgotten about that: the KS-11 - I've previously enquired to
see if anyone had _any_ documentation for this, but so far, nada.
> I would look at Warren's First Edition work to see if there were dregs
> of this in that code base
Alas, I'd already had that idea (to try and re-create at least a programming
spec for the KS11). There do not seem to be any traces there,
perhaps because that code came from a document entitled "Preliminary Release
of Unix Implementation", which argues that it's a very early 'version' of V0
(the early 'versions' weren't very formal; there was a continuous process of
change/improvement going on, apparently).
> It is also noted that the 45 class system (45/55/70/44) had "17th"
> address bit - i.e. split I/D space. I believe that this is when "magic
> numbers" were really introduced so that could be supported.
No, they came in first for 'pure text' (0410):
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys/ken/sys1.c
which I would expect arrived to minimize swapping on machines with small
amounts of real memory.
Support for user split-I/D (0411) didn't arrive until Version 6:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/sys1.c
although IIRC split I/D in the kernel was supported slightly before it was
in user mode - although V5 didn't have it:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/sys/conf/mch.s
so it couldn't have been much earlier than V6.
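The magic-word dispatch being discussed can be sketched as follows (octal values as in the a.out headers; the enum names are mine, not the kernel's):

```c
enum aout_layout { A_NORMAL, A_PURE_TEXT, A_SPLIT_ID, A_UNKNOWN };

/* Classify an a.out image by its first (magic) word. */
enum aout_layout aout_layout(unsigned short magic)
{
    switch (magic) {
    case 0407: return A_NORMAL;    /* text+data contiguous, all writable */
    case 0410: return A_PURE_TEXT; /* shared read-only text, private data */
    case 0411: return A_SPLIT_ID;  /* text in I space, data in D space (V6 on) */
    default:   return A_UNKNOWN;
    }
}
```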
Noel
Which version of Unix first ran on a computer with virtual addressing (address translation), so that a process with non-position-independent code (PIC) could be loaded anywhere in RAM that the kernel decided to put it, and with memory protection such that no process could accidentally or deliberately access RAM not allocated to it by the kernel (or a SIGSEGV would be delivered to it)?
Put another way, when did Unix processes stop playing Core War with each other? (OK, so long as no more than one is resident at a time, they can't play Core War with each other, but there still needs to be a mechanism to protect the kernel from inadvertent (or advertent) pointer use).
Which is to say, when did Unix run on (and properly use) computers with memory management units (MMU)?
My guess from a quick look at the history of the DEC PDP-11 is that the target computer was likely a PDP-11/35 or PDP-11/40 with a KT11-D "memory management" module.
One imagines that many pointer mistakes (bugs) in assembly or C were discovered and squashed in that version, modulo the historical unhappiness resulting from address zero containing a zero if dereferenced ("NULL pointers") in process address space.
What year did that come about?
By the time I got to Unix (2.8BSD on the Cory Hall DEC PDP-11/70), those features (virtual addresses, memory protection from the kernel) had apparently been part of Unix for a long time - certainly earlier than Version 6.
This is distinct from demand-paged virtual memory which so far as I know was developed on the DEC VAX-11.
curious,
Erik <fair(a)netbsd.org>
> From: "Erik E. Fair" <f
> One imagines that many pointer mistakes (bugs) in assembly or C were
> discovered and squashed in that version, modulo the historical
> unhappiness resulting from address zero containing a zero if
> dereferenced ("NULL pointers") in process address space.
PS: PDP-11 Unix didn't, I think, do much (anything?) to solve the null pointer
problem. (This is for early C versions; I don't know about the later BSD
ones.)
Location 0 was a usable address for both read and write for everything except
'pure-text' (0410 magic word) processes; in those it was only legal for
read. Address 0 mostly did not contain a 0, either; for C programs using the
stock run-time, it contained a 'setd' instruction, except in split I+D
processes, in which case data space location 0 probably (I'm too busy to spin
up my V6 emulator to check) contained some of the program's initialized data
(unless special arrangements had been made).
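A toy model of that point, purely illustrative: in a non-split-I/D process image, "address 0" indexed the start of text, so dereferencing a null pointer read the first instruction word (setd, octal 0170011, in the stock C runtime) instead of trapping. The array here stands in for the process image, nothing more:

```c
/* Stand-in for the low words of a 0407-style V6 process image. */
unsigned short image[4] = { 0170011 /* setd */, 0, 0, 0 };

/* Model a word fetch at a (byte) address in the image. */
unsigned short fetch(unsigned short addr) { return image[addr >> 1]; }
```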
Noel
> From: "Erik E. Fair"
> Which version of Unix first ran on a computer with virtual addressing
That would be the first version to run on the PDP-11/45; I'm not sure which one
that was, there's not enough left of Version 2 or Version 3 to see; Version 4
definitely ran on the 11/45:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys/ken/45.s
> My guess from a quick look at the history of the DEC PDP-11 is that the
> target computer was likely a PDP-11/35 or PDP-11/40 with a KT11-D
> "memory management" module.
No, they came after the -11/45 (with the KT11-C MMU).
> What year did that come about?
They got one of the first -11/45's, per a Unix history document I'm too busy
to dig up, so 1972.
Noel
I'm not sure if my other reply got through, so I'll try again...
I found the source to the BBN stack on the CSRG CDs; it's on CD 4, in
/sys/deprecated/bbnnet:
LINT.bbn 08-Aug-2016 06:37 3.5K
NOTES 08-Aug-2016 06:37 4.6K
RELAY.bbn 08-Aug-2016 06:37 1.2K
SCCS/ 08-Aug-2016 06:37 -
fsm.h 08-Aug-2016 06:37 1.2K
fsmdef.h 08-Aug-2016 06:37 9.6K
hmp.c 08-Aug-2016 06:37 12K
hmp.h 08-Aug-2016 06:37 3.2K
hmp_subr.c 08-Aug-2016 06:37 6.5K
hmp_traps.c 08-Aug-2016 06:37 3.5K
hmp_traps.h 08-Aug-2016 06:37 2.7K
hmp_var.h 08-Aug-2016 06:37 1.4K
ic_output.c 08-Aug-2016 06:37 5.7K
icmp.c 08-Aug-2016 06:37 17K
icmp.h 08-Aug-2016 06:37 3.3K
in.c 08-Aug-2016 06:37 12K
in.h 08-Aug-2016 06:37 2.0K
in_pcb.c 08-Aug-2016 06:37 11K
in_pcb.h 08-Aug-2016 06:37 1.9K
in_proto.c 08-Aug-2016 06:37 4.9K
in_var.h 08-Aug-2016 06:37 2.2K
ip.h 08-Aug-2016 06:37 3.3K
ip_input.c 08-Aug-2016 06:37 29K
ip_output.c 08-Aug-2016 06:37 14K
ip_usrreq.c 08-Aug-2016 06:37 3.8K
macros.h 08-Aug-2016 06:37 4.4K
net.h 08-Aug-2016 06:37 2.4K
nopcb.h 08-Aug-2016 06:37 318
raw_input.c 08-Aug-2016 06:37 9.4K
rdp.h 08-Aug-2016 06:37 15K
rdp_cksum.s 08-Aug-2016 06:37 4.4K
rdp_fsm.c 08-Aug-2016 06:37 4.5K
rdp_input.c 08-Aug-2016 06:37 9.6K
rdp_macros.h 08-Aug-2016 06:37 7.9K
rdp_prim.c 08-Aug-2016 06:37 13K
rdp_states.c 08-Aug-2016 06:37 34K
rdp_subr.c 08-Aug-2016 06:37 8.4K
rdp_usrreq.c 08-Aug-2016 06:37 21K
seq.h 08-Aug-2016 06:37 415
sws.h 08-Aug-2016 06:37 326
tcp.h 08-Aug-2016 06:37 8.6K
tcp_input.c 08-Aug-2016 06:37 12K
tcp_prim.c 08-Aug-2016 06:37 9.8K
tcp_procs.c 08-Aug-2016 06:37 28K
tcp_states.c 08-Aug-2016 06:37 20K
tcp_usrreq.c 08-Aug-2016 06:37 22K
udp.c 08-Aug-2016 06:37 5.2K
udp.h 08-Aug-2016 06:37 1.1K
udp_usrreq.c 08-Aug-2016 06:37 7.0K
I've been meaning to try to manually mash stuff together but just haven't
gotten around to it.
> ----------
> From: Paul Ruizendaal
> Sent: Thursday, December 1, 2016 4:30 PM
> To: tuhs(a)minnie.tuhs.org
> Subject: [TUHS] looking for 4.1a BSD full kernel source
>
>
> Hi,
>
> I'm trying to find out exactly what was in the 4.1a BSD distribution, as
> far as the kernel is concerned. The image in the CSRG archive comes from a
> tape that had a hard read error and does not include any kernel sources.
> Some of the kernel files were already covered by SCCS around that time,
> but not everything. My main focus is to understand tcp/ip networking in
> 4.1a and whether the kernel could be built with either the Berkeley or the
> BBN network stack.
>
> Does anybody know where I could find a full set of kernel sources for the
> 4.1a BSD kernel?
>
> Many thanks in advance!
>
> Paul
>
Hi
I am sure there must already have been an email about this; if so, I do
apologize. The link below is from Diomidis Spinellis's work; I remember
seeing an email from him a few months ago requesting some information about
the history of BSD. Looking at it, he has done some amazing work. I look
forward to playing with 386BSD!
https://github.com/dspinellis/unix-history-repo
Hi,
I'm looking for the source code to "Network Unix" as described here:
https://tools.ietf.org/html/rfc681
and/or its later development described here:
https://archive.org/details/networkunixsyste243kell
Actually, I'd be happy with finding the source code to any version of this Network Unix. This version of Unix had fairly wide use in the Arpanet community and was in use at several universities and organizations (e.g.: Rand, BBN, etc.)
Would anybody here know of a surviving copy?
Many thanks in advance,
Paul
Hi Diomidis,
Thanks for that link. This is exactly what I'm trying to ascertain, and I'm finding conflicting evidence.
- The socket API was in a state of flux between October '81 and March '82 (when 4.1a was supposedly cut). By March '82 it was mostly there, but not until later in the year did it fully stabilize.
- The BBN stack did not use the sockets API as late as January '82
- What I currently fathom from the SCCS files is that the socket API implementation was hard coded to use the nascent Berkeley stack.
- But the BBN code was likely in the 4.x BSD source tree, outside of SCCS (Berkeley started out with the BBN code, but it morphed quite quickly and drastically)
- In 1985 the BBN code finally enters SCCS (marked 'deprecated'); this code was integrated with the sockets API, and much developed from its 1982 form
Either the below link is correct (and I think I may have contributed to its view in a private mail to Kirk), or there were two different distributions (4.1a BSD with Berkeley network code and 4BSD with BBN network code). The two may have merged into one in peoples' memories: 35 years is a long time. Finding the actual kernel source for the 4.1a distribution could provide clarity on this point.
Perhaps Bill Joy could shed some light on the issue, but I don't have contact details. Having actual source removes all doubt.
Paul
On 1 Dec 2016, at 10:51 , Diomidis Spinellis wrote:
> The best description I could find is the following:
>
> http://minnie.tuhs.org/pipermail/tuhs/2016-September/007417.html
>
> > The 4.1a distribution had the initial socket interface with a
> > prerelease of the BBN TCP/IP under it. There was wide distribution
> > of 4.1a. The 4.1b distribution had the fast filesystem added and
> > a more mature socket interface (notably the listen/accept model
> > added by Sam Leffler).
>
> Diomidis
>
> On 01/12/2016 10:30, Paul Ruizendaal wrote:
>>
>> Hi,
>>
>> I'm trying to find out exactly what was in the 4.1a BSD distribution, as far as the kernel is concerned. The image in the CSRG archive comes from a tape that had a hard read error and does not include any kernel sources. Some of the kernel files were already covered by SCCS around that time, but not everything. My main focus is to understand tcp/ip networking in 4.1a and whether the kernel could be built with either the Berkeley or the BBN network stack.
>>
>> Does anybody know where I could find a full set of kernel sources for the 4.1a BSD kernel?
>>
>> Many thanks in advance!
>>
>> Paul
>>
>
Hi,
I'm trying to find out exactly what was in the 4.1a BSD distribution, as far as the kernel is concerned. The image in the CSRG archive comes from a tape that had a hard read error and does not include any kernel sources. Some of the kernel files were already covered by SCCS around that time, but not everything. My main focus is to understand tcp/ip networking in 4.1a and whether the kernel could be built with either the Berkeley or the BBN network stack.
Does anybody know where I could find a full set of kernel sources for the 4.1a BSD kernel?
Many thanks in advance!
Paul
Larry McVoy:
I've always morned that he died so early. I would have liked to talk
to him, I love troff to this day.
====
Me too (s/morn/mourn/, of course). I might even have had the
chance to work with him.
The original UNIX crowd were all really neat characters, albeit
sometimes a trifle overly characterful. All nice guys to work
with, too, at least those who were still around when I was at
1127.
Norman Wilson
Toronto ON
We lost J.F.Ossanna in 1977; he had a hand in developing Unix, and was
responsible for "roff" and its descendants. Remember him, the next time
you see "jfo" in documentation.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I stumbled onto this by accident, and yes it really is a port of UNIX v1
from PDP-11 assembly into 8086/80386 assembly.
I fired up Qemu and some of the disk images and it booted up!
the directories contain assembly, text files, pdf's, images, and pictures...
I have no idea how to build it - I didn't see any makefile - but it looks
very interesting!
http://www.singlix.com/runix/index.html
Also you may want to mute the tab, or turn your speakers down, it has an
embedded music player.
I just found a project that has ported Unix V7 to a modern system, and
a search of my TUHS archives finds no previous mention of it:
V7/x86 - x86 port of UNIX V7
http://www.nordier.com/v7x86/index.html
V7/x86 on VirtualBox
http://www.nordier.com/articles/v7x86_vbox.html
The jwhois command identifies the host site as (possibly) located in
Durban, South Africa.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Can’t help about that DEC-ism.
Do have these 2.
News from Xerox Corporation:
FOR IMMEDIATE RELEASE
XEROX ANNOUNCES HYPER-ETHERNET
SAN FRANCISCO, CA, Jan 7, 2010 - Xerox today announced Hyper-Ethernet, their fourth
generation local area network. In addition to its ability to transmit text data and images, Hyper-
Ethernet enables the transmission of people. "People transmission over Hyper-Ethernet," according
to Michael Liddle, V.P. of Office Systems, "will greatly reduce elevator congestion and eliminate the
need for video conferencing." Order taking for Hyper-Ethernet will begin next month. Installation
will start in Los Angeles in the Third Quarter.
In a related announcement, Wang Labs, headquartered in Hoboken, New Jersey, announced Super-
Hyper-Wangnet, their twelfth generation local area network. According to Freddie Wang, President
of Wang Labs, "Super-Hyper-Wangnet will not only transmit people over the Wangband, but will
also transmit furniture and buildings over the interconnect and utility bands. These additional
capabilities of Super-Hyper-Wangnet are vital to the emerging office of the future." Order taking
for Super-Hyper-Wangnet will begin next month. Installation has already occurred worldwide.
IBM Corporation, which has been rumored to be about to announce a local area network since 1980,
was not available for comment.
xxx
Followed by
Digital Responds to Hyper-Ethernet
TEWKSBURY, MA, April 1, 2010 -- Digital Equipment announced today its new DECnet Phase
XVIII Architecture. In response to recent Xerox and Wang improvements to Ethernet that provide
people and facility transportation across inter-node links, DEC's latest DECnet provides these
capabilities as well as providing for the creation of virtual facilities and even countries. These
capabilities are provided by breakthroughs in communications technology that actually use the
Ether as a communications medium. Through the use of a new dedicated NANO-PDP-11/E99
gateway processor system, ETHERGATE, DECnet users can access anywhere in the Ethereal Plane.
This development obsoletes teleconferencing, since meeting groups can create their own common
conference rooms and cafeterias, thus resolving space, travel and dining problems. There may be a
few bugs left, as some of the dissenting DECnet Review Group members have not been seen since
the last meeting held in such a virtual conference facility.
This breakthrough was brought about by a team effort of the Distributed Systems' Software and
Hardware engineering teams in an effort to improve on their Tewksbury, Massachusetts facility. In a
compromise decision, Distributed Systems will maintain an ETHERGATE in TWOOO, but it will
connect directly to their new home somewhere in the Shire of their newly defined Middle Earth
reality. Despite some difficulties, the scenery, windows tax breaks, pool and racquetball courts
made the relocation go quite smoothly. Engineering Network topology will not change, as all
forwarding will be done by the TWOOO Ethereal Plane Router residing in the crater at the former
building site.
Utility packages such as Ethereal Person Transfer (EPT) and Ethereal Facility Transfer (EFT)
provide appropriate capabilities for casual users. Sophisticated users can create (SCREATE), access
(SOPEN), and delete (SNUKE) ethereal entities transparently from high level languages using the
Ethereal Management System (EMS) package and the Ethereal Access Protocol (EAP). An
ETHERTRIEVE utility for easy interactive use will be available shortly.
DECnet Phase XVIII follows on the success of the Phase XVI ability to access everyone's Digital
Professional wrist watch computer system. This led to the current Phase XVII architecture, which
has routing capabilities that allow direct communications with the entire Earth population's Atari
home video games.
Distributed Systems architects are hard at work on the next phase of DECnet that will include
multi-plane existence network management (using the NIECE protocol) and galaxy level routing
using 64K bit addresses.
Digital will continue to support its Gateway products into the Prime Material Plane. These
products include an IBM ANA (Acronym-based Network Architecture) Gateway, the TOLKIEN
product that allows control of all ring based networks, and our Mega-broad-jump-band hardware
which leaps past Wang's products in the hype weary business marketplace.
From our Engineering Net. (You can tell that they're really working on it. Racquetball courts
indeed!)
David
Hoping someone can help me here, as I've grepped the 'net to no avail.
Back when USENET was supreme and dinosaurs strode the earth, there was a
hilarious article involving a DEC box (long forgotten it) that was used to
instrument underground atomic tests. In one scene, the box was atop a
truck which was right over the hole; they both went skywards, but the data
was recovered by taking out the core memory boards and plugging them into
another box.
Does anyone remember it? It does sound rather suss...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Don North
> Track 0 is not used by standard DEC software
I wonder why DEC didn't use track 0. The thing is small enough (256KB in the
original single-density) that even 1% is a good chunk to throw away. Does
anyone know? (I had a look online, but couldn't turn anything up.)
If I had to _guess_, one possibility would be that track 0 is the innermost
track, where the media is moving the slowest, and as a result it's more
error-prone. Another is that IBM used track 0 for something special, and DEC
tried to conform with that. But those are pure guesses, I would love to know
for sure.
Noel
> From: Don North <north(a)alum.mit.edu>
> .. the hardware bootstrap reads track 1 sectors 1, 3, 5, 7
Ah, thanks for that. Starting to look at the code, I had missed the
interleave.
So does DEC do anything with track 0, or is it always just empty?
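The "sectors 1, 3, 5, 7" pattern suggests a 2:1 interleave: read the odd-numbered sectors first, then the even ones, so the next logical sector arrives under the head in time. The 26-sectors-per-track figure is right for these 8" floppies, but this particular logical-to-physical mapping is my guess for illustration; the real RX02 scheme also involves track-to-track skew:

```c
/* Guessed 2:1 interleave: 0-based logical sector -> 1-based physical. */
int phys_sector(int logical)
{
    int half = 13;                  /* 26 sectors per track */
    return (logical < half) ? 2 * logical + 1           /* 1, 3, ..., 25 */
                            : 2 * (logical - half) + 2; /* 2, 4, ..., 26 */
}
```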
Noel
So, I'm winding up to boot Unix V6 from an RX02 floppy. So I need two things:
- Details of how DEC ROM bootstraps boot from RX02's. I vaguely recall seeing
documentation of this somewhere (e.g. which sectors it loads, etc), but now I
can't find it. Don North has dumps of the RX02 ROM's, but I'm too lazy to read
through the code and figure out how they work. Is there some documentation
which covers it? I did a quick Google search, but if there is anything out
there, my Google-fu was inadequate.
- Did anyone ever do an RX02 driver for the V6 disk bootstrap? (Well, I guess
a V7 driver would work, too.) Note: what I need is _not_ either i) the Unix OS
driver for the RX02 (I found one of those already), or ii) a driver for the v7
standalone second-stage bootstrap (which would probably be in C). The thing
I'm looking for would be called rx.s, or something like that. Yes, I could
write it, but again, I'm lazy! :-)
Noel
> From: Jacob Goense <dugo(a)xs4all.nl>
> There is the issue with the non-existing man command.
My page on "Bringing up V6 Unix on the Ersatz-11 PDP-11 Emulator" has a
section on man:
http://mercury.lcs.mit.edu/~jnc/tech/V6Unix.html#man
It's pretty straight-forward.
Noel
So, in a fit of brain-fade, I originally sent this to the Classic Computers
list, when it was probably really more appropriate for this one. (Apologies to
those who are subscribed to both, and are thus seeing this again.)
So, I'm trying to do what VTServer was invented for - load Unix into an actual
PDP-11, over its serial line, when one doesn't have machine-readable Unix on
any mass storage for the machine.
However, all the initial code that VTServer loads ('mkfs', etc) is V7-specific
(V6 has a slightly different file system format) - and I want to install V6.
Has anyone ever tweaked the programs which VTServer loads to do V6-format
filesystems? I did a quick Google, but didn't see anything.
Another option is to do something more V6-like, and copy a small bootable V6
file-system image over the serial line; that may be an easier way to go.
No biggie if not, it won't be much work to adapt things, but I figured I'd try
to avoid re-inventing the wheel...
Note that the installation procedures for V6 and V7 are wholly different,
something which confused a number of people on CCTalk.
The 'Setting up Unix' documents are more like checklists; they don't go into a lot
of detail as to what is actually happening, so I have prepared two pages on the
Computer History wiki:
http://gunkies.org/wiki/Installing_UNIX_Sixth_Edition
http://gunkies.org/wiki/Installing_UNIX_Seventh_Edition
which go into more detail on what is actually happening.
Noel
> I'll start with getting VTServer to run under V6 (my only Unix, don't
> have anything later :-)
So, I just got VTServer running under V6: it successfully loaded a memory
diagnostic from the 'server', into the 'client', using 'vtboot' on the
latter. (Both running on emulated machines, for the moment - I thought I'd
take all the hardware-related variables out of the equation, until I have the
software all running OK.)
It didn't require as much work on VTServer as I thought it might: I had to
convert the C to the V6 dialect (no '+=', etc), and some other small things
(e.g. convert the TTY setup code), but in general, it was pretty smooth and
painless.
Note that it won't run under vanilla V6, which does not provide 8-bit input
and output on serial lines. I had previously added 'LITIN' and 'LITOUT' modes
(8-bit input and output) to my V6; since the mode word in stty/gtty was
already full, I had to extend the device interface to support them. I didn't
add ioctl() or anything later, I did an upward-compatible extension to
stty/gtty. (I'm a real NIH guy. :-)
My only real problem in getting VTServer running was with LITIN; I did it
some while back, but had never actually tested it (I was only using LITOUT,
for my custom program to talk to PDP-11 consoles, which also did downloads,
so needed 8-bit output). So when I went to use it, it didn't work, and it was
a real stumper! But I did eventually figure out what the problem was (after
writing a custom program to reach into the kernel and dump the entire state
of a serial line), and get it working.
(I had taken the shortcut of not fully understanding how the kernel serial
line code worked, just tried to install point fixes. This turned out not to
work, because of a side-effect elsewhere in the code. Moral of the story: you
can't change the operation of a piece of software without complete
understanding of how it works...)
Is there any interest in all this? If so, I can put together a web page with
the V6-version VTServer source, along with the modified V6 serial line stuff
(including a short description of the extended stty/gtty interface), etc.
> so if you turn up whatever you used to boot V6, it would probably still
> be useful.
So I guess my next step, if I don't hear shortly from someone who has
previously used VTServer to install V6, is to start on actually getting
a V6 file system created.
I'm still vacillating over whether it would be better to go V6-style (and
just transfer a complete, small existing V6 filesystem), or V7-style (and
get stand-alone 'mkfs', etc running with V6-format file systems). Anyone
have an opinion?
Noel
> From: Clem Cole
> I miss my AAA
Well, I'm not sure I actually _miss_ mine (my old eyes prefer black on white),
but I definitely have fond memories of it. It was such a huge thing when I
finally managed to snag one to replace my (IIRC) VT52.
You could actually have three useful windows in Emacs!! And the detached
keyboard was definitely more ergonomic.
Noel
Time to post this again; warning: you need to understand the obscure
references... I've actually worked in places like this.
-----
VAXen, my children, just don't belong some places. In my business, I am
frequently called by small sites and startups having VAX problems. So when
a friend of mine in an Extremely Large Financial Institution (ELFI) called
me one day to ask for help, I was intrigued because this outfit is a
really major VAX user - they have several large herds of VAXen - and
plenty of sharp VAXherds to take care of them.
So I went to see what sort of an ELFI mess they had gotten into. It seems
they had shoved a small 750 with two RA60's running a single application,
PC style, into a data center with two IBM 3090's and just about all the
rest of the disk drives in the world. The computer room was so big it had
three street addresses. The operators had only IBM experience and, to
quote my friend, they were having "a little trouble adjusting to the VAX",
were a bit hostile towards it and probably needed some help with system
management. Hmmm, Hostility... Sigh.
Well, I thought it was pretty ridiculous for an outfit with all that VAX
muscle elsewhere to isolate a dinky old 750 in their Big Blue Country, and
said so bluntly. But my friend patiently explained that although small, it
was an "extremely sensitive and confidential application." It seems that
the 750 had originally been properly clustered with the rest of a herd and
in the care of one of their best VAXherds. But the trouble started when
the Chief User went to visit his computer and its VAXherd.
He came away visibly disturbed and immediately complained to the ELFI's
Director of Data Processing that, "There are some very strange people in
there with the computers." Now since this user person was the Comptroller
of this Extremely Large Financial Institution, the 750 had been promptly
hustled over to the IBM data center which the Comptroller said, "was a
more suitable place." The people there wore shirts and ties and didn't
wear head bands or cowboy hats.
So my friend introduced me to the Comptroller, who turned out to be five
feet tall, 85 and a former gnome of Zurich. He had a young apprentice
gnome who was about 65. The two gnomes interviewed me in whispers for
about an hour before they decided my modes of dress and speech were
suitable for managing their system and I got the assignment.
There was some confusion, understandably, when I explained that I would
immediately establish a procedure for nightly backups. The senior gnome
seemed to think I was going to put the computer in reverse, but the
apprentice's son had an IBM PC and he quickly whispered that "backup"
meant making a copy of a program borrowed from a friend and why was I
doing that? Sigh.
I was shortly introduced to the manager of the IBM data center, who
greeted me with joy and anything but hostility. And the operators really
weren't hostile - it just seemed that way. It's like the driver of a Mack
18 wheeler, with a condo behind the cab, who was doing 75 when he ran over
a moped doing its best to get away at 45. He explained sadly, "I really
warn't mad at mopeds but to keep from runnin' over that'n, I'da had to
slow down or change lanes!"
Now the only operation they had figured out how to do on the 750 was
reboot it. This was their universal cure for any and all problems.
After all it works on a PC, why not a VAX? Was there a difference?
Sigh.
But I smiled and said, "No sweat, I'll train you. The first command you
learn is HELP" and proceeded to type it in on the console terminal. So
the data center manager, the shift supervisor and the eight day operators
watched the LA100 buzz out the usual introductory text. When it finished
they turned to me with expectant faces and I said in an avuncular manner,
"This is your most important command!"
The shift supervisor stepped forward and studied the text for about a
minute. He then turned with a very puzzled expression on his face and
asked, "What do you use it for?" Sigh.
Well, I tried everything. I trained and I put the doc set on shelves by
the 750 and I wrote a special 40 page doc set and then a four page doc
set. I designed all kinds of command files to make complex operations into
simple foreign commands and I taped a list of these simplified commands to
the top of the VAX. The most successful move was adding my home phone
number.
The cheat sheets taped on the top of the CPU cabinet needed continual
maintenance, however. It seems the VAX was in the quietest part of the
data center, over behind the scratch tape racks. The operators ate lunch
on the CPU cabinet and the sheets quickly became coated with pizza
drippings, etc.
But still the most used solution to hangups was a reboot and I gradually
got things organized so that during the day when the gnomes were using the
system, the operators didn't have to touch it. This smoothed things out a
lot.
Meanwhile, the data center was getting new TV security cameras, a halon
gas fire extinguisher system and an immortal power source. The data center
manager apologized because the VAX had not been foreseen in the plan and
so could not be connected to immortal power. The VAX and I felt a little
rejected but I made sure that booting on power recovery was working right.
At least it would get going again quickly when power came back.
Anyway, as a consolation prize, the data center manager said he would have
one of the security cameras adjusted to cover the VAX. I thought to
myself, "Great, now we can have 24 hour video tapes of the operators
eating Chinese takeout on the CPU." I resolved to get a piece of plastic
to cover the cheat sheets.
One day, the apprentice gnome called to whisper that the senior was going
to give an extremely important demonstration. Now I must explain that what
the 750 was really doing was holding our National Debt. The Reagan
administration had decided to privatize it and had quietly put it out for
bid. My Extremely Large Financial Institution had won the bid for it and
was, as ELFIs are wont to do, making an absolute bundle on the float.
On Monday the Comptroller was going to demonstrate to the board of
directors how he could move a trillion dollars from Switzerland to the
Bahamas. The apprentice whispered, "Would you please look in on our
computer? I'm sure everything will be fine, sir, but we will feel better
if you are present. I'm sure you understand?" I did.
Monday morning, I got there about five hours before the scheduled demo to
check things over. Everything was cool. I was chatting with the shift
supervisor and about to go upstairs to the Comptroller's office. Suddenly
there was a power failure.
The emergency lighting came on and the immortal power system took over the
load of the IBM 3090s. They continued smoothly, but of course the VAX,
still on city power, died. Everyone smiled and the dead 750 was no big
deal because it was 7 AM and gnomes don't work before 10 AM. I began
worrying about whether I could beg some immortal power from the data
center manager in case this was a long outage.
Immortal power in this system comes from storage batteries for the first
five minutes of an outage. Promptly at one minute into the outage we hear
the gas turbine powered generator in the sub-basement under us
automatically start up getting ready to take the load on the fifth minute.
We all beam at each other.
At two minutes into the outage we hear the whine of the backup gas turbine
generator starting. The 3090s and all those disk drives are doing just
fine. Business as usual. The VAX is dead as a door nail but what the hell.
At precisely five minutes into the outage, just as the gas turbine is
taking the load, city power comes back on and the immortal power source
commits suicide. Actually it was a double murder and suicide because it
took both 3090s with it.
So now the whole data center was dead, sort of. The fire alarm system had
its own battery backup and was still alive. The lead acid storage
batteries of the immortal power system had been discharging at a furious
rate keeping all those big blue boxes running and there was a significant
amount of sulfuric acid vapor. Nothing actually caught fire but the smoke
detectors were convinced it had.
The fire alarm klaxon went off and the siren warning of imminent halon gas
release was screaming. We started to panic but the data center manager
shouted over the din, "Don't worry, the halon system failed its acceptance
test last week. It's disabled and nothing will happen."
He was half right, the primary halon system indeed failed to discharge.
But the secondary halon system observed that the primary had conked and
instantly did its duty, which was to deal with Dire Disasters. It had
twice the capacity and six times the discharge rate.
Now the ear splitting gas discharge under the raised floor was so massive
and fast, it blew about half of the floor tiles up out of their framework.
It came up through the floor into a communications rack and blew the cover
panels off, decking an operator. Looking out across that vast computer
room, we could see the air shimmering as the halon mixed with it.
We stampeded for exits to the dying whine of 175 IBM disks. As I was
escaping I glanced back at the VAX, on city power, and noticed the usual
flickering of the unit select light on its system disk indicating it was
happily rebooting.
Twelve firemen with air tanks and axes invaded. There were frantic phone
calls to the local IBM Field Service office because both the live and
backup 3090s were down. About twenty minutes later, seventeen IBM CEs
arrived with dozens of boxes and, so help me, a barrel. It seems they knew
what to expect when an immortal power source commits murder.
In the midst of absolute pandemonium, I crept off to the gnome office and
logged on. After extensive checking it was clear that everything was just
fine with the VAX and I began to calm down. I called the data center
manager's office to tell him the good news. His secretary answered with,
"He isn't expected to be available for some time. May I take a message?"
I left a slightly smug note to the effect that, unlike some other
computers, the VAX was intact and functioning normally.
Several hours later, the gnome was whispering his way into a demonstration
of how to flick a trillion dollars from country 2 to country 5. He was
just coming to the tricky part, where the money had been withdrawn from
Switzerland but not yet deposited in the Bahamas. He was proceeding very
slowly and the directors were spellbound. I decided I had better check up
on the data center.
Most of the floor tiles were back in place. IBM had resurrected one of the
3090s and was running tests. What looked like a bucket brigade was
working on the other one. The communication rack was still naked and a
fireman was standing guard over the immortal power corpse. Life was
returning to normal, but the Big Blue Country crew was still pretty shaky.
Smiling proudly, I headed back toward the triumphant VAX behind the tape
racks where one of the operators was eating a plump jelly bun on the 750
CPU. He saw me coming, turned pale and screamed to the shift supervisor,
"Oh my God, we forgot about the VAX!" Then, before I could open my mouth,
he rebooted it. It was Monday, 19-Oct-1987. VAXen, my children, just
don't belong some places.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Random832
> Tapes are still bigger than disks, to my understanding.
Makes sense. There's a lot more surface area on a tape than on a disk. Yes,
mechanical considerations mean you can pack the bits on a disk in more
densely, but not enough to offset the much greater surface area.
Noel
Back when it was all UUCP, a friend setup his own system, bang.
He then used his initials as his login, bam.
So when asked for his email address he answered
bang bang bang bam
Bret was a funny guy.
David
The news of Dennis's demise is indeed an echo from five
years ago.
Dennis would have been amused.
Think of it is a day to think about Dennis and what he
gave us, instead of Trump and what he threatens. I
suspect many of us could use that.
Norman Wilson
Toronto ON
Forwarded with permission.
On Sunday, 18 September 2016 at 9:07:45 -0700, Kirk McKusick wrote:
> On Monday, 12 September 2016 at 11:33:48 +0200, Joerg Schilling wrote:
>> norman(a)oclsc.org (Norman Wilson) wrote:
>>
>>> I don't think the BSD kernel when adopted had much, if any,
>>> of sockets, Berkeley's TCP/IP, McKusick's FFS; if it did,
>>> they were excised. Likewise any remaining trace of V7's
>>> mpx(2) multiplexed-file IPC.
>>
>> From looking at the CSRG sources, it seems that the filesystem
>> project has been founded by Bill Joy and Kirk approached the project
>> a year (or more) later and implemented only the parts that are
>> related to the block allocation.
>>
>> Does someone know more?
>>
>> Joerg
>
> The 4.1a distribution had the initial socket interface with a
> prerelease of the BBN TCP/IP under it. There was wide distribution
> of 4.1a. The 4.1b distribution had the fast filesystem added and
> a more mature socket interface (notably the listen/accept model
> added by Sam Leffler). There was very limited distribution of 4.1b.
> The 4.1c distribution had the finishing touches on the socket
> interface and added the rename system call to the filesystem.
> It also added the reliable signal interface. There was very wide
> distribution of 4.1c as there was a 9-month delay in the distribution
> of 4.2BSD while DARPA, BBN, and Berkeley debated whether the prerelease
> of BBN's TCP/IP should be replaced with BBN's finished version. In
> the end the TCP/IP was not replaced as it had had so much field
> testing and improvement by the folks running the BSD releases that
> it was deemed more performant and reliable. There had been a plan
> to release 4.1d that would have the new virtual memory (mmap)
> interface, but the delay in getting out 4.2BSD caused that addition
> to be delayed for the 4.3BSD release.
>
> As far as the filesystem is concerned, Bill Joy had done an initial
> design primarily coming up with the idea of cylinder groups as a
> way to distribute the block and inode bitmaps across the disk in
> manageable size pieces. When he handed the project off to me I
> received a single header file that defined his prototype for the
> superblock and the cylinder group structures. I did all of the
> coding from there with Bill doing design review. My feedback from
> the folks at the labs was that they were not interested in incorporating
> the fast filesystem because it tripled the size of the filesystem
> code (from 400 to 1200 lines) and because it needlessly put
> functionality into the kernel that could be done in userland
> (mkdir/rmdir/rename which together were 400 of the 1200 lines of code).
>
> Kirk McKusick
--
Sent from my desktop computer.
Finger grog(a)FreeBSD.org for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
So the "Odd Comments and Strange Doings in Unix" page:
http://www.bell-labs.com/usr/dmr/www/odd.html
mentions (at the bottom) a "PDP-11/20 with KS-11 add-on". I'm trying
to find info about the KS11, but haven't been able to locate anything.
Does anyone have info on it, or, failing that, is there source for an early
version of Unix that uses it? (I can create something of a User's Manual for
it by reverse engineering from the source.)
Thanks!
Noel
Hi All.
I'm looking for old tarballs of mawk. Specifically all versions of
mawk 1.2 and mawk 1.3 prior to 1.3.2. I found a 1.3.2 tarball on the
net (not sure how authentic it is; if you have a real one, please let
me know) and I have 1.3.3.
Much thanks,
Arnold Robbins
> Interesting, but then nobody did run a modern shell on one of these machines or
> everybody did type slowly, so the character lossage problem did not occur.
I'm afraid I don't get the point, apparently something about the
relative performance of stream- and non-stream tty drivers. How
do shells get into the act? And didn't uucp, which was certainly
not a slow typist, appear like any dial-up connection and thus
use /dev/ttyxx? (I cannot recollect, though, when dial-up uucp
finally ceased.)
Doug
Doug McIlroy:
"Re-port" may be a bit strong. Internet stuff from Berkeley
was folded into the research code (for a huge increase in
kernel size). But I think it was done by pasting Berkeley
code into local source, rather than the other way around.
====
Actually it was more nearly:
-- Adopt 4.1c BSD kernel
-- Graft in Research-specific things it was important to
keep: in particular Dennis's stream subsystem, Tom Killian's
original /proc, Peter Weinberger's early network file system
client code (the server was just a user-mode program) and
simple hackery to speed up the file system without great
fuss (make the block size 4KiB and move the free list to
a bitmap; no cylinder groups or other radical changes).
Also device drivers to support Datakit, at the time our
workhorse network. I think a file-system switch went
in early as well, spurred by having both /proc and
pjw's network file system; it wasn't used to support
multiple disk-file-system layouts, though it could have
been.
-- Outside the kernel, keep pretty much the then-current
Research commands, including Blit/5620 support, the
cleaned up and slightly-extended Bourne shell, and whatnot.
I don't think the BSD kernel when adopted had much, if any,
of sockets, Berkeley's TCP/IP, McKusick's FFS; if it did,
they were excised. Likewise any remaining trace of V7's
mpx(2) multiplexed-file IPC.
I'm going by the state the system was in when I arrived
in August 1984, plus a short note written by Weinberger
that I came across later.
TCP/IP support didn't show up until later, I think summer
1985, though it might have been a year later. The first
cut was done by Robert T. Morris (later famous for a buggy
program that broke the Internet), who did several summers
as an intern; he took the code from (I think) 4.2 BSD,
and constructed some shims to fit it into the stream world.
Paul Glick later cleaned it up a lot, removing the need
for most of the shimmery.
Further evolution followed, of course, including a
complete rewrite of the interface between drivers
(device, file system, and stream) and the rest of the
system, which made configuration much more straightforward.
Also a rampage on my part to identify code that was no
longer useful and kick it out; I took special pleasure
in removing ptrace(2) (even though I had to change adb
and sdb first to use /proc).
But that was all later.
Norman Wilson
Toronto ON
Gerard Holzmann took the true and false commands as
the jumping-off point for "Code Inflation", an
installment of his "Reliable Code" blog and column
in IEEE Software. An informative, but depressing, read:
http://spinroot.com/gerard/pdf/Code_Inflation.pdf
Doug
Norman wrote
The earliest stream-I/O-system-based tty driver I'm aware of was
already in the Research kernel when I interviewed at Bell Labs
in early 1984. I have a vague memory that it was a couple of
years older than that, but I cannot find any citations to
back up either of those memories.
Dennis described streams in the special Unix issue of the
BSTJ, Oct 1984, and noted that "it runs on about 20 machines in
... Research ... Although it is being investigated in other
parts of AT&T Bell Laboratories, it is not generally available."
The manuscript was received October 18, 1983.
Doug
Because of the design bug I mentioned, I searched for UNIX sources from AT&T
that include streams support, but could never find any.
=====
None of the Research systems after 32/V was ever distributed except
to a handful of sites under site-specific letter agreements that
forbade redistribution.
This is a bug, not a feature, but there it is. It was easy to get
approval to write a paper, much harder to get permission to distribute
code, especially when the code in some way overlapped the Official
Product.
Warren and I (and Dennis, when he was still alive) hoped to do
something about it some years back, but it's a lot harder than it used
to be because it is harder to find a corporate entity that is
confident enough to give permission, even for stuff that is so old
that it is unlikely to have a trumppenceworth of commercial value.
Then IBM vs SCO intervened, and now things are even more fragmented.
There may be other efforts under way now and then to negotiate the
legal minefield. I wish them all well, and will help them where I
can.
Norman Wilson
Toronto ON
I'm not sure of the point of this mine-is-bigger-than-yours argument, but:
The earliest stream-I/O-system-based tty driver I'm aware of was
already in the Research kernel when I interviewed at Bell Labs
in early 1984. I have a vague memory that it was a couple of
years older than that, and was first implemented in a post-V7
PDP-11 system; also that I had heard about it first at a USENIX
conference in 1982 or 1983; but I cannot find any citations to
back up either of those memories.
I do know that I'd heard of it while I was still working at Caltech,
because I remember thinking about what a good idea it was and
about possibly trying to do my own version of it, but I never did.
I left Caltech at the end of June 1984, spent the following month
touring nearly the entire Amtrak long-distance network in a single
long reservation (it was possible to do that with surprisingly few
overnight stops off the train in those days), and started at Bell
Labs at the beginning of August.
Norman Wilson
Toronto ON
It sounds like my understanding of the different 4.1x versions is
just mistaken. If 4.1c had FFS and sockets, the kernel on which
V8 was built must have been an earlier 4.1x.
I believe the reason for adopting that kernel, rather than sticking
with 32/V, was exactly to get paging support. There was a competing
32/V descendant with paging, written by John Reiser at Holmdel, which
I gather was thought by many to be much cleaner technically; e.g. he
unified the block-device buffer cache and the memory-page cache, and
implemented copy-on-write paging rather than resorting to the messy
vfork. I have heard that there was considerable argument and
hand-wringing over whether to use Reiser's kernel or the BSD one.
It all happened well before I arrived, and I don't know what the
tradeoffs were, though one was certainly that Reiser's management
didn't support his work and nobody in 1127 was keen to have to take
it over.
Norman Wilson
Toronto ON
Joerg Schilling:
The colon was introduced by AT&T around 1983. It was used for Bourne Shell
scripts. Some of these scripts made it into SVr4 and caused problems with
non-Bourne compatible other shells.
====
Interesting. I never knew of that convention. I remember seeing
shell scripts with a : at the front, but thought that was just to
make sure the first character wasn't # even if the script began with
a comment.
Since some here had never heard of the #-means-csh convention, I
should perhaps explain about :. In pre-Bourne shells that used the
simple external control-flow mechanisms that I think were discussed
here a few months ago, : was used to mark a label: goto(1) would
seek to the beginning of its standard input, then read until it
encountered a line of the form
: label
with the desired label, then exit with the seek pointer at the first
character of the following line.
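The label-scanning loop described above can be sketched in C. This is a hypothetical reconstruction, not the actual goto(1) source: the real command seeked on the shell's shared standard input, while this illustrative find_label() scans an in-memory copy of the script and returns the offset of the first character after the ": label" line, or -1 if the label isn't found.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of goto(1)'s label search: a label line is
 * ':', optional whitespace, then the label itself, alone on the
 * line.  Returns the offset just past that line, or -1. */
long find_label(const char *script, const char *label)
{
    const char *p = script;
    size_t lablen = strlen(label);

    while (*p != '\0') {
        const char *nl = strchr(p, '\n');
        const char *end = nl ? nl : p + strlen(p);

        if (*p == ':') {
            const char *q = p + 1;
            while (q < end && (*q == ' ' || *q == '\t'))
                q++;                    /* skip whitespace after ':' */
            if ((size_t)(end - q) == lablen &&
                memcmp(q, label, lablen) == 0)
                return nl ? (long)(nl + 1 - script)
                          : (long)(end - script);
        }
        if (!nl)
            break;
        p = nl + 1;                     /* next line */
    }
    return -1;                          /* label not found */
}
```

The real goto(1) would then exit with the shell's input seek pointer left at that offset, so the shell resumed reading there.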
: was a no-op command; I forget whether it was implemented within the
shell or externally. Either way, that made it useful as a comment
character, but somewhat clumsy: it was just a command, with no
special parsing rules attached. A comment using : had to begin at
a command boundary, and its arguments were parsed in the normal way:
rm -rf * : you don't want to do this
was probably not what you wanted; instead you had to type
rm -rf * ; : "you don't want to do this"
or the like.
csh used # as a comment character from the beginning. Bourne
adopted it too.
Norman Wilson
Toronto ON
Random832:
The existence of cd as a real command is a bit silly (Ubuntu doesn't
seem to bother with it), but it is technically required by the standard.
===
Just for the record, Fedora 21 supplies /bin/cd, as part
of package bash-4.3.42-1. Interestingly, it is a shell
script:
lu$ cat /bin/cd
#!/bin/sh
builtin cd "$@"
lu$
As has been said here, it's hard to see the functional point.
Others have remarked on the continued life of /bin/true and
/bin/false. There are some who use those as shells in /etc/passwd
for logins that should never actually be allowed to do anything
directly. I have no strong personal feeling about that, I'm just
reporting.
And to be fair (as has also already been displayed here), the
copyright notice inserted in the once-empty /bin/true was hundreds
of bytes long, not thousands. Let us call out silliness, but let
us not make it out as any sillier than it actually is.
Norman Wilson
Toronto ON
UNIX old fart and amateur pedant
I remember reading about #! in the early 1980s, and
having mixed feelings about it, as I still do. The
basic idea is fine, if annoyingly limited; but that
the kernel has to decide, in effect, whether to treat
a header as binary or text bothers me. Were I designing
a new system from scratch today, I'd just make the
header all text; the small extra space and time for
the kernel to parse that for binaries doesn't matter
any more. It certainly did when #! was invented,
though.
I had the impression at the time that it came from
Berkeley, but I think I later heard from the horse's
mouth that it was originally Dennis's idea.
I don't think anyone has yet laid out the complete
story of what came before:
1. Originally, the shell would exec(file), and if
exec returned ENOEXEC, would open the file and treat
it as shell commands.
2. Then came the C shell, and a problem: did file
contain commands for csh or sh? A hack emerged:
if csh encountered a script file, it would read
the first character; if that was '#' it was a
csh script, otherwise it handed off to /bin/sh.
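The two-step mechanism above (try exec, fall back on ENOEXEC, peek at the first byte to choose csh vs. sh) can be sketched as follows. This is an illustrative composite, not any shell's actual source; run_command() and shell_for() are hypothetical names.

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* The csh hack: a first byte of '#' marks a csh script,
 * anything else is handed to the Bourne shell. */
const char *shell_for(int first_byte)
{
    return (first_byte == '#') ? "/bin/csh" : "/bin/sh";
}

/* Try a direct exec; on ENOEXEC, treat the file as a script. */
void run_command(const char *path, char *const argv[])
{
    execv(path, argv);          /* returns only on failure */
    if (errno != ENOEXEC)
        return;                 /* some other error; give up */

    int first = EOF;
    FILE *f = fopen(path, "r");
    if (f) {
        first = fgetc(f);
        fclose(f);
    }
    const char *shell = shell_for(first);
    execl(shell, shell, path, (char *)NULL);
}
```

With #!, the kernel's exec does this choice itself, which is exactly what made the hack unnecessary for programs other than the shell.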
None of this helped when some program other than
the shell called exec on a shell script. That's one
reason execlp and execvp appeared. (The other is that
they observe $PATH if the command pathname has a
single element.)
I don't know offhand whether there was ever an execlp/vp
that implemented the #-means-csh convention. Anyone
else remember?
Norman Wilson
Toronto ON
> 8th edition was essentially a re-port of 4.1c BSD, correct?
"Re-port" may be a bit strong. Internet stuff from Berkeley
was folded into the research code (for a huge increase in
kernel size). But I think it was done by pasting Berkeley
code into local source, rather than the other way around.
But, since much of the rest of the BSD kernel was Bell
Labs code, it's probably right that the result of the
merge had more code in common with BSD than with Research.
If you ask, though, what fraction of Research code
survived the merge, it was probably larger than the
surviving fraction of the total BSD code.
Doug
> IIRC #! originated at Bell Labs but it got out to the world via BSD.
> Perhaps Dr. McIlroy could confirm / deny / expand upon the details (please?)
I recall Dennis discussing the feature at some length before installing it.
So the exact semantics, especially the injected argument, are almost
certainly his. I don't know whether he built on a model from elsewhere.
#! appeared between v7 (1979) and v8 (1985). As v8 was never released,
it clearly made its way into the world via BSD and USG. BSD, being
more nimble, was likely first.
doug
On 9 September 2016 at 17:15, Mary Ann Horton <mah(a)mhorton.net> wrote (in part):
> When I was at Berkeley working on my dissertation, I wrote a tool that would
> let you edit a text file written in any language you could define with a
> grammar, with syntax and semantic error checking while you edited. I had
> grammars for several popular (in 1980) languages.
My curiosity is piqued. What were these languages?
N.
On 10 September 2016 at 05:41, Joerg Schilling <schily(a)schily.net> wrote:
> Michael Kjörling <michael(a)kjorling.se> wrote:
>
>> On 10 Sep 2016 09:45 +0200, from dnied(a)tiscali.it (Dario Niedermann):
>> > Il 15/07/2016 alle 14:27, Norman Wilson ha scritto:
>> >> lu$ cat /bin/cd
>> >> #!/bin/sh
>> >> builtin cd "$@"
>> >> lu$
>> >
>> > But doesn't this change the current dir only in the child shell?
>> > Which then exits right after the second line, parent shell's $PWD
>> > unaffected. I really don't see how this script is useful.
>>
>> It does appear rather useless. Curiously, Debian (checked on Wheezy =
>> bash 4.2+dfsg-0.1+deb7u3 and Jessie = bash 4.3-11+b1) seems to not
>> supply anything like that, so it would appear to be some kind of
>> Fedora-ism rather than a part of anything upstream; that, or the
>> Debian folks are actually paying attention to what they ship onto
>> users' systems.
>
> POSIX requires some commands to be callable via exec().
Solaris 10 has the following amusing implementation (/usr/bin/cd):
#!/bin/ksh -p
#
#ident "@(#)alias.sh 1.2 00/02/15 SMI"
#
# Copyright (c) 1995 by Sun Microsystems, Inc.
#
cmd=`basename $0`
$cmd "$@"
N.
> From: Blake McBride
> After about 30 years of C, there are only three things I would have
> liked to see:
> 1. Computed goto
Can't you make a switch statement do what you need there?
The two things I really missed were:
- BCPL's ValOf/ResultIs, for making more complex macros (although with the
latest modern compilers, which inline small functions, this is less
of an issue)
- The ability to 'break' out of more than one level of nesting; one either
has to stand on one's head (code-wise), or use a goto
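Such a multi-level break is conventionally written with a goto along these lines (a generic sketch; the function and names are invented for illustration):

```c
/* Escaping two loops at once: 'break' would only exit the inner
 * loop, so a forward goto is the usual idiom. */
int find_first_match(int rows, int cols, int grid[rows][cols], int target)
{
    int r, c;
    for (r = 0; r < rows; r++)
        for (c = 0; c < cols; c++)
            if (grid[r][c] == target)
                goto found;        /* leaves both loops at once */
    return -1;                     /* not found */
found:
    return r * cols + c;           /* flattened index of the match */
}
```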
Noel
> After about 30 years of C, there are only three things I would
have liked to see:
> 1. Computed goto
...
> Computed goto's are good for interpreters.
A computed goto, of course, is merely an optimized switch.
Dennis installed this optimization early in the evolution of C. The
main driving force was the performance and size of the PDP-11 Unix
kernel. As functionality grew, resource usage was repeatedly tamped
down by improving C's code generation.
The switch optimizer chose among three strategies: naive, binary
decision tree, and computed goto, depending on the number and density
of switch alternatives. Hybrid strategies may have been used, too,
but my memory is hazy on this point. In particular the optimization
improved system-call dispatch--thus achieving the objective,
"good for interpreters". I assume (I hope not unrealistically)
that this optimization has been in the repertoire of mainline C
compilers ever since.
> (Or perhaps require C to support tail recursion.)
I can imagine making a strong recommendation in the standard for
optimizing switches and (at least direct) tail recursion.
Doug
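The dense-switch case described above can be seen in a toy dispatch loop like this; with small, contiguous case values, a compiler following the strategy Doug describes can emit a single indexed indirect jump (an illustrative sketch, not actual compiler output or any real interpreter):

```c
/* A dense, contiguous switch: the textbook candidate for the
 * jump-table (computed-goto) compilation strategy. The opcodes
 * here are hypothetical. */
enum { OP_HALT, OP_INC, OP_DEC, OP_DOUBLE };

int run(const unsigned char *prog)
{
    int acc = 0;
    for (;;) {
        switch (*prog++) {
        case OP_HALT:
            return acc;
        case OP_INC:
            acc++;
            break;
        case OP_DEC:
            acc--;
            break;
        case OP_DOUBLE:
            acc *= 2;
            break;
        }
    }
}
```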
I sent a similar message some time ago, but I haven't
seen it appear in the mailing list, so here goes again.
Apologies if it ends up as a duplicate.
> After about 30 years of C, there are only three things I would have liked
> to see:
>
> 1. Computed goto
> ...
> Computed goto's are good for interpreters.
A computed goto is an optimized switch, and that optimization
goes back to the original C compiler. Mostly driven by
considerations of size and speed of the Unix kernel, Dennis
quite early on taught the compiler to choose among three
compilation strategies for a switch: a chain of comparisons,
a tree of comparisons, or a computed goto, depending on the
number and density of alternatives.
The compilation of the system-call dispatch table was
a perfect example of "good for interpreters."
I have always assumed that other mainline compilers behave
similarly, but I have no solid knowledge about that.
doug
Seen on another list... And I got quoted by Steve Bellovin :-)
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
---------- Forwarded message ----------
From: Kent Borg
To: cryptography(a)metzdowd.com
Subject: Re: [Cryptography]
"NSA-linked Cisco exploit poses bigger threat than previously thought"
On 08/25/2016 06:06 PM, Steven M. Bellovin wrote:
> I first heard more or less that line from Doug McIlroy himself; he
> called C the best assembler language he'd ever used.
Ancient fun-fact: Years ago there was an article in Byte magazine
describing how a useful subset of C could be directly assembled into 68000
code. Not compiled, assembled.
C is a stunning assembly language. When those wild-eyed nerds at AT&T
decided to write Unix not in assembly but in C (where was management!?),
it was radical. But C was up to (down to?) the task, it was pioneering
then and is still doing useful things decades later: From the fastest
supercomputers to some pretty slim microcontrollers (plus a hell of a lot
of Android devices) multitudes of computers run a Linux kernel compiled
from the *same* C source code, with almost no assembly. Big-endian,
little-endian: no matter. Different word lengths: no matter.
That is one impressive cross-platform assembly language!
Unfortunately, C is also a dangerous language that mortal programmers
cannot reliably wield.
-kb, the Kent who knows he is pressing his luck on a moderated
cryptography mailing list, but C deserves a lot of respect, as it also
deserves to be efficiently sent into a dignified retirement.
_______________________________________________
The cryptography mailing list
cryptography(a)metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
Every time someone starts spouting about how unsafe
C is, and how all the world's problems would be solved
if only people would stop using it, I think of Flon's
Axiom, for 35 years my favourite one-liner about
programming and languages:
There does not now, nor will there ever, exist a
programming language in which it is the least bit
hard to write bad programs.
Flon's Axiom comes from a short note On Research
in Structured Programming, published in SIGPLAN
Notices in October 1975. It's just as true today.
Over the years I've seen people misinterpret the
Axiom as an argument against looking for better
programming languages at all, but that's not what
it means. (Read the original note--it's a page
and a half--for full context; it is, alas, behind
ACM's Digital Library paywall.) There are certainly
languages that make certain sorts of mistakes easier
or harder, or are easier or harder to read, but in
the end most of that really is up to the programmer.
Programming well requires a lot of thought and care
and careful rereading, and often throwing half the
code out and re-doing it better, and until we can
have a programming community the majority of whom
are up to those challenges, we will continue to have
crashes and security vulnerabilities and other
embarrassing bugs aplenty, no matter what language
is used.
Norman Wilson
Toronto ON
The latest issue of the IEEE Annals of Computing was published
electronically today, and it has an article that I expect many
TUHS list readers will enjoy reading:
Notes on the History of Fork and Join
http://dx.doi.org/10.1109/MAHC.2016.34
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
All, sorry this is slightly off-topic. I'm trying to
find out what fstat(2) returns when the file descriptor
is a pipe. The POSIX/Open Group documentation doesn't
really specify what should be returned. Does anybody have
any pointers?
Thanks, Warren
P.S. Why? xv6 has fstat() but returns an error if the
file descriptor isn't associated with an i-node. I'm
trying to work out if/how to fix it.
I remember once, long ago--probably in the early 1980s--writing
a program that expected fstat on a pipe to return the amount of
data buffered in the pipe. It worked on the system on which
I wrote the code. Then I tried it on another, related but
different UNIX, and it didn't work. So if POSIX/SUS don't
prescribe a standard, I don't think one should pretend there
is one, and (as I learned back then) it's unwise to depend
on the result, except I think it's fair not to expect fstat
to fail on any valid file descriptor.
I'm pretty sure that in 7/e and earlier, fstat on a pipe
reported a regular file with zero links. There was a reason
for this: the kernel in fact allocated an i-node from a
designated pipe device (pipedev) file system, usually the
root. So the excuse that `there's no i-node' was just wrong.
In last-generation Research systems, when pipes were streams
(and en passant became full duplex, which caused no trouble
at all but simplified life elsewhere--I think I was the one
who realized that meant we didn't need pseudo-ttys any more),
the system allocated a pair of in-core i-nodes when a pipe
was created. As long as such an i-node cannot be accidentally
confused with one belonging to any disk file system, this
causes no trouble at all, and since it is possible to have
more than one disk file system this is trivially possible
just by reserving a device number. (In fact by then our
in-core i-nodes were marked with a file system type as well,
and pipes just became their own file system.) stat always
returned size 0 for (Research) stream pipes, partly because
nobody cared enough, partly because the implementation of
streams didn't keep an exact count of all the buffered data
all along the stream, just a rough one sufficient for flow
control. Besides, with a full-duplex pipe, which direction's
data should be counted?
Returning to the original question, I'd suggest that:
-- fstat(fd) where fd is a pipe should succeed
-- the file should be reported to have zero links,
since that is the case for a pipe (unless a named pipe,
but if you support those you probably have something
else to stat anyway)
-- the file type should be IFIFO if that type exists
in xv6 (which it wouldn't were it a real emulation of
6/e, but I gather that's not the goal), IFREG otherwise
-- permissions probably don't matter much, but for
sanity's sake should be some reasonable constant.
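Since the standards leave this loose, one pragmatic check is simply to ask the host system; a small POSIX probe like this (the function name is invented for illustration) reports what fstat actually says about a fresh pipe:

```c
/* Probe what fstat(2) reports for a fresh, empty pipe on this system.
 * Returns 1 if the descriptor is reported as a FIFO, 0 if not, -1 on
 * error; the link count and size come back through the pointers. */
#include <sys/stat.h>
#include <unistd.h>

int pipe_fstat_info(long *links, long long *size)
{
    int fd[2];
    struct stat st;

    if (pipe(fd) == -1)
        return -1;
    if (fstat(fd[0], &st) == -1) {
        close(fd[0]);
        close(fd[1]);
        return -1;
    }
    *links = (long)st.st_nlink;
    *size = (long long)st.st_size;
    close(fd[0]);
    close(fd[1]);
    return S_ISFIFO(st.st_mode) ? 1 : 0;
}
```

On modern systems this typically reports a FIFO of size zero; the link count varies, which is exactly the sort of divergence discussed above.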
Norman Wilson
Toronto ON
> From: Warren Toomey
> xv6 is a Unix-like OS written for teaching purposes.
I'm fairly well-aware of Xv6; I too am planning to use it in a project.
But back to the original topic, it sounds like there's a huge amount of
variance in the semantics of doing fstat() on a pipe. V6 doesn't special-case
it in any way, but it sounds as if other systems do.
What V6 does (to complete the list) is grow the temporary file being used to
buffer the pipe contents up to a certain maximum size, whereupon it halts the
writer, and waits for the reader to catch up - at which point it truncates
the file, and adjusts the read and write pointers back to 0. So fstat() on
V6, which doesn't special-case pipes in any way for fstat(), apparently
returns 'waiting_to_be_read' plus 'already_read'.
>>> xv6 has fstat() but returns an error if the file descriptor isn't
>>> associated with an i-node.
>> ?? All pipe file descriptors should have an inode?
To answer my own question, after a quick look at the Xv6 sources (on my
desktop ;-); it turns out that Xv6 handles pipes completely differently;
instead of borrowing an inode, they have special 'pipe' structures. Hence the
error return in fstat() on Xv6. (That difference also limits the amount of
buffered data in a pipe to 512 bytes. So don't expect high throughput from a
pipe on Xv6! :-)
So I guess you get to pick which semantics you want fstat() on a pipe to have
there: V6's, V7's (see below), or something else! :-)
> 7th Ed seems to return the amount of free space in the pipe, if I read
> the code correctly:
I'm not sure of that (see below), but I think it would make more sense to
return the amount of un-read data (which is what I think it does do), as the
closest semantics to fstat() on a file.
It might also make sense to return the amount of free space (to a writer), and
the amount of data available to read (to a reader), since those are the
numbers users will care about. (Although then fstat() on the write side of a
pipe will have semantics which are inconsistent with fstat() on files. And if
the user code knows the maximum amount of buffering in a pipe, it could work
out the available write space from that, and the amount currently un-read.)
> fstat()
> {
> ...
> /* Call stat1() with the current offset in the pipe */
> stat1(fp->f_inode, uap->sb, fp->f_flag&FPIPE? fp->f_un.f_offset: 0);
> }
> stat1()
> {
> ...
> ds.st_size = ip->i_size - pipeadj;
I'm too lazy to go read the code (even though I already have it :-), but V7
seems to usually be very similar to V6. So, what I suspect this code does is
pass the expression:
((fp->f_flag & FPIPE) ? fp->f_un.f_offset : 0)
as 'pipeadj' (to account for the amount that's already been read), and then
returns (ip->i_size - pipeadj), i.e. the amount remaining un-read, as the
size.
Noel
> From: Warren Toomey
> I'm trying to find out what fstat(2) returns when the file descriptor
> is a pipe.
In V6, it returns information about the file (inode) used as a temporary
storage area for data which has been written into the pipe, but not yet read;
i.e. it's an un-named file with a length which varies between 0 and 4KB.
> xv6 has fstat() but returns an error if the file descriptor isn't
> associated with an i-node.
?? All pipe file descriptors should have an inode?
Noel
Hi all, I'm working on a Unix-related project, and I thought I'd ask if
anybody here might help.
There's a pared-down Unix-like system, xv6, which is inspired by 6th Edition
Unix and the Lions Commentary. Its purpose is to teach OS principles.
The website and book are here:
https://pdos.csail.mit.edu/6.828/2014/xv6.html
https://pdos.csail.mit.edu/6.828/2014/xv6/book-rev8.pdf
Unfortunately, while the kernel is nice, they don't provide much of
a run-time environment, so it feels too much of a toy to use. I had the
idea of porting a small set of libraries and commands over to get it to
the point where it feels a bit like 7th Edition.
I've made a start by using the Minix 2.0 libraries and commands, see
https://github.com/DoctorWkt/xv6-minix2 and the NOTES file. I now realise
that bringing up a libc plus associated commands will involve a fair bit of
work.
So, if anybody is interested in helping, let me know.
Thanks in advance, Warren
Dave Horsfall:
Not Henry Spencer, perchance?
=====
Since the Canadian in question had been working in the US since
1964 or so, he must by now be pushing 70 years old.
I haven't seen Henry for some years, but I don't think he has
aged that much.
Norman Wilson
Toronto ON
> Date: Sat, 30 Jul 2016 15:30:36 +0000
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)tuhs.org
> Subject: Re: [TUHS] History repeating itself
> Message-ID: <20160730153036.GI3375(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> On 30 Jul 2016 10:15 -0400, from cowan(a)mercury.ccil.org (John Cowan):
>>> Who needs FedEx?
>>
>> Well, latency counts for something too, as does radius: if I want to
>> send bulk data from New York to London (a very normal thing to do),
>> your station wagon isn't going to count for much.
>
> You could, however, get an economy class flight ticket and load up
> your suitcase with either HDDs or SDXCs (I suspect SDXCs would be
> better per amount of data from the perspective of both volume and
> weight, and would take better to handling). Given FedEx's prices,
> _once you have the infrastructure set up_ (which you'll need whether
> you have someone travel with the media, by air or by stationwagon, or
> FedEx it), that _might_ even compare favorably in terms of bytes
> transferred per second per dollar. (Now that's a measurement of
> throughput I don't think I've seen before; B/s/$.) Of course, you'd
> need someone who can babysit the suitcase, which potentially adds to
> the cost, but the stationwagon traditionally hasn't been self-driving
> either, and most of a transatlantic flight isn't active time on part
> of the person travelling with the suitcase so you could go with an
> overnight flight and allow the person to sleep.
>
> If you want to reduce the risk of the bag getting handled roughly or
> lost in handling, reduce the above to carry-on luggage; it will still
> provide a quite respectable throughput.
>
> ... ...
>
> It might not be the absolute cheapest approach, but it seems rather
> hard to beat in terms of throughput per dollar for bulk data transfer,
> especially if you already have someone who would travel anyway and can
> be convinced to take a company-approved suitcase in return for having
> their ticket paid for.
>
> --
> Michael Kjörling • https://michael.kjorling.se • michael(a)kjorling.se
> “People who think they know everything really annoy
> those of us who know we don’t.” (Bjarne Stroustrup)
>
To set up the infrastructure might be the tricky part. Many years ago
I flew from Montreal to Amsterdam and had two stacks of 5-1/4"
diskettes with me. No papers, confiscated in Amsterdam.
Cheers,
Rudi
Hi folks,
My root partition for Unix v6 is almost full and /dev/rk0 only has 83 blocks.
The trouble is I wanted to compile bc.y and I think it needs around
300 blocks of temporary space. I was wondering if there was a way to
set up Unix v6 so that it could use one of the other drives for tmp
space. I tried to set up a link using ln but it seems I can't link
across filesystems.
The exact error is "26: Intermediate file error".
I managed to rearrange things so that /dev/rk0 had over 300 blocks of
free space and it fixed the problem, but I'm curious if there was
another solution.
Mark
Clem Cole:
Also to be fair, Dennis did symlinks before 4.2. They were part of the V8
I believe.
=======
I'm pretty sure they came from Berkeley nevertheless. I don't know
the exact order of events, but the 8th Edition kernel was essentially
that from one of the later 4.1x BSDs, hacked in 1127 to remove sockets
and FFS (were they even there yet?), then to add Dennis's stream I/O
system, Tom Killian's original /proc, and Peter Weinberger's neta
network-file-system client. Perhaps a few other hooks as well.
Symlinks were already there, and although we made some limited careful
use of them, made nobody very happy because they made such a big
irregular lump in so many things: file system no longer a tree,
difference between stat and lstat, and so on.
One thing 8/e did differently from Berkeley was that ls by default
hid symlinks rather than trotting them out proudly. If f was a
symlink, ls -l f showed the state of the target file, not that of
the link; one had to do ls -lL f to see the symlink itself.
That reflected a general feeling that symlinks should be neither
seen nor heard unless necessary.
Norman Wilson
Toronto ON
William Pechter:
Only thing I can think of is add another drive or partition and mount it
as /tmp.
=====
You say that as if it's a bad thing.
Norman Wilson
Toronto ON
mount >> ln -s
Just to be clear: I don't pine at all for UUCP.
I do still think it's a mistake that e-mail addresses and
domain names run backwards from the way directories and
filenames run. That's what I miss about !norman vs
norman@.
But it's all a Beta-vs-VHS matter these days, like a lot
of unfortunate design decisions that have become standard
over the years. Like git winning out over hg, which is
sort of like the VAX/VMS command language winning out over
the Bourne shell. (To toss another pebble into the pond
to see what the ripples look like, rather in the manner
of Rob and Dave.)
Norman Wilson
Toronto ON
I recently noticed that lpr has a symlink option ("-s") on Solaris but
not on Apple. Is there anything here historically except prudence and
small drives?
N.
> I heard that Bob Morris was asked for his initials, he said “rm”, they insisted on a middle initial, which he didn’t have, so he supplied “h”, hence “rhm”.
True in principle, but when it happened and who "they" were, is lore
beyond my ken. I presume it was before he joined Bell Labs. At the
labs, interoffice communications typically used initials, so the
DMR, JFO, RHM convention was well established. Only the affectation
of lower-case only was new--and that was the fault of unicase Model
33. Who wanted to SHOUT EVERYTHING they wrote, or litter it with escapes?
doug
Google was not the first place Rob and Dave had fun with names.
At one point, Rob had a duplicate entry in /etc/passwd,
with login name r, password empty, normal userid/groupid/home
directory, special shell. The shell program checked whether
it was running on a particular host and a particular hardwired
serial line: if yes, it ran the program that started the Research
version of the window system for our bitmapped terminals;
otherwise it just exited. The idea seemed to be to let him
log in quickly in his office.
I think that by the time I arrived at Bell Labs he'd stopped
using it, because it no longer worked, because we no longer
ran serial lines directly from computers to offices--everyone
was connected via serial-port Datakit instead.
While I was there, senior management bought a Cray X-MP/24 for
the research group. (Thank you for using AT&T.) Since it too
was accessible via Datakit (using a custom hardware interface
built by Alan Kaplan, but that's another story), it had to have
a hostname. It was either Dave or Rob, I forget which, who
suggested 3k, because (a) it was a supercomputer, so `big bang'
seemed to fit; (b) it was Arno Penzias, then VP for Research,
who got us the money, so `big bang' and 3K radiation seemed
even more appropriate; and, most important, (c) it was fun to
see whether a hostname beginning with a digit broke anything.
So far as I recall, nothing broke. Some people who were
involved with TCP/IP networking at the labs were frightened
about it; I don't remember whether that Cray was ever connected
to an IP network so I don't know whether anything went wrong
there. Of course such names are not a problem today, but
in those long-lost days when nobody worried much about buffer
overflows either, such bugs were much more common. Weren't they?
Norman Wilson
Toronto ON
Time to start a new thread :-)
Back when Unix was really Unix and dinosaurs strode the earth, login names
were restricted to just 8 characters, so you had to be inventive when
signing up lots of students every term (ObUS: semester).
A wonderful Japanese girl, Eriko Kinoshita, applied for an account on some
box somewhere. Did I mention that login names defaulted to the first 8
characters of the surname?
Understandably annoyed, Plan B for assigning logins was applied, which was
the first name followed by the first letter of the surname.
Sigh...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
One gets used to login names. Back in the '80s I got 'rubl' and I'm still using it.
Of course in this age of the World Wild Web that may make me easily
trackable. Nothing to hide though :-)
Gr[aeiou]g Lehey:
And I wanted greg@, but it was taken. So I ended up with grog@, and
I've had that for nearly 30 years.
=====
I was !norman for some years, but when I left Bell
Labs for the real world 26 years ago, I was forced
to switch to norman@.
That was part of the price I paid for trading suburban
New Jersey for downtown Toronto. On the whole it was
a more-than-satisfactory trade, and emerging to the
real world broadened my perspectives in many areas,
but being stuck with Hideous Naming was certainly a
minor disadvantage.
Norman Wilson
Toronto ON
research!norman no more
On Jul 14, 2016 7:01 PM, "Peter Jeremy" <peter(a)rulingia.com> wrote:
>
> On 2016-Jul-15 08:36:56 +1000, Dave Horsfall <dave(a)horsfall.org> wrote:
> >On Thu, 14 Jul 2016, Clem Cole wrote:
> >And on the Mac and FreeBSD, they still are (as well as being builtins).
>
> FreeBSD provides a convenient list of what commands are (currently)
builtin
> to the provided shells and available externally:
> https://www.freebsd.org/cgi/man.cgi?builtin
>
The bash man page does as well, along with command -v (and hash, IIRC)
letting you know.
I've always been curious, though: what was the reason behind implementing
/bin/[ ? I don't know of any shell where this isn't a builtin - I always
assumed it's a POSIX compatibility stopgap that older systems needed to
stay compliant with their shipped shell.
I remember hearing that originally the Unix shell had control structures
(e.g. if, while, case) implemented through external commands. However,
I can't see this reflected in the source code. The 7th Edition Bourne
shell has these commands built-in (usr/src/cmd/sh/cmd.c), while the 6th
Edition (usr/source/s2/sh.c) seems to lack them completely.
The only external command I found was glob, which performed wildcard
expansion.
Am I missing something? Was this implemented in a version that was
never released? If so, does anyone know how this implementation worked?
(Nested commands might require holding some sort of globally
accessible stack.)
> As far as I know, it [|] has always been used as 'or' on computers.
I was on the NPL (eventually PL/I) committee when IBM 'generously'
increased the 360 character set from 48 to 60. George Radin grabbed
| for OR, before IBM announced the character set. Previously
the customary use for | in logic was the "Scheffer stroke", which
we now know as NAND. So "always" is ever since it became available.
Was PL/I the first to adopt it?
Doug
Dave Horsfall:
I still remember when the pipe command was "^" (pointy hat).
====
I still remember--barely--when \136 was up-arrow, not caret!
I don't think pipe was ever only ^, but that ^ was a
synonym for | added to make it easier to use on older
upper-case terminals that had no |. Those (remaining
few) who were there at the time can perhaps clarify.
I still habitually quote shell arguments containing ^,
even though I haven't used a shell that required that
since late 1984 (Rob had removed the special meaning
from /bin/sh before I arrived at Bell Labs). On the
other hand, I still cannot be bothered to get used to
quoting arguments containing !; I just disable all
that history and editing bloatware whenever possible.
Norman Wilson
Toronto ON
Ok, I hope this question isn't too off-topic...
I was looking through the X10R3 source tree trying to find the
earliest paint program for X. I wasn't able to see anything that
looked like a paint program.
Xpaint might be the oldest, wikipedia says the first version appeared in 1989.
Searching for xpaint on tuhs returned no matches, but I saw that
4.3BSD-Tahoe had some old X programs but nothing listed there seemed
to be a paint program.
Maybe xgedit? It's listed as a "simple graphic editor for the X window
system", but I don't know if it really qualifies as a paint program.
Mark
On 2016-07-11 04:00, John Cowan <cowan(a)mercury.ccil.org> wrote:
> Johnny Billquist scripsit:
>> > Uh. I'm no language expert, but that seems rather stretched. English
>> > comes from Old English, which have a lot more in common with
>> > Scandinavian languages, and they are all Germanic languages. Which
>> > means they all share a common root.
> Absolutely.
>
>> > What makes you say then that all the others borrowed it from
>> > English?
> Because when words change, they change according to common patterns
> specific to the language. For example, a change between Old English (OE)
> and Modern English (ModE) is that long-a has become long-o. Consequently,
> the descendants of OE bát, tá, ác are ModE boat, toe, oak. In Scots,
> which is also descended from OE, this change did not operate, and long-a
> changed in the Great Vowel Shift along with long-a from other sources,
> giving the Older Scots words bait, tae, eik. However, current Scots
> does not use bait, but rather boat, and we can see that because this
> breaks the pattern it must be a borrowing from English.
So the obvious question then becomes: Are you saying that Old English
also borrowed the word from English?
(See http://www.etymonline.com/index.php?term=boat)
>> > (I assume you know why Port and Starboard are named that way...)
> OE steor 'steering oar, rudder' + bord 'side of a ship'. Parallel
> formations gave us common Scandinavian styrbord from ON stjórnborði,
> similarly Dutch stuurbord, German Steuerbord. Larboard, the other side,
> began life as Middle English ladde 'load' + bord, because it was the side
> you loaded a ship from, and was altered under the influence of starboard.
> Because the two were easily confused, port officially replaced it in the
> 19C, though it had been used in this meaning since the 16C.
Well, in Scandinavian the port side is called "babord", which comes from
bare board, since that was the "clean" side, which you could dock on. No
rudder to break... And it's from way before medieval times... But I'm
pretty sure the term is from even before the Vikings were around.
Johnny
I suspect Yanks being pedantic about `slash' versus `forward slash'
would give an Englishman a stroke.
If that's too oblique for some of you, I can't help.
Norman Wilson
Toronto ON
after the brief but illuminating detour on character sets and the
evolution of human languages, we now return you to the Unix Heritage
mailing list :-)
[ Please! ]
Cheers, Warren
If 1961 is the oldest citation the OED can come up with, "slash"
really is a coinage of the computer age. Yet the character had
been in algebra books for centuries. The oral tradition that underlies
eqn would be the authority for a "solidus" name. I suspect, though,
that regardless of what the algebra books called it, the name
would be "divided by".
This is sheer hypothesis, but I have always thought that \ got
onto printer chains and type balls as a crude drawing aid. Ditto
for |. Once the characters became available people began to find
uses for them.
On 2016-07-10 02:52, John Cowan <cowan(a)mercury.ccil.org> wrote:
> Steffen Nurpmeso scripsit:
>> > "Die Segel streichen" (Taking in the sails),
> "Striking the sails" in technical English. All the nations around the
> North and Baltic Seas exchanged their vocabularies like diseases, and if
> we didn't have records of their earlier histories, we would know they
> were related but we'd never figure out exactly how. For example, it
> can be shown that French bateau, German Boot, common Scandinavian båt,
> Irish bád, Scottish Gaelic bàta, Scots boat, and the equivalents in
> the various Frisian languages are none of them original native words:
> they all were borrowed from English boat.
Uh. I'm no language expert, but that seems rather stretched. English
comes from Old English, which have a lot more in common with
Scandinavian languages, and they are all Germanic languages. Which means
they all share a common root.
What makes you say then that all the others borrowed it from English? I
would guess/suspect that the term is older than English itself, and the
similarity of the word in the different languages comes from the fact
that it's old enough to have been around when all these languages were
closer to the roots and each other. Boats have been around for much
longer than the English language so I would suspect some term for them
have been around for a long time too...
If you ask me, you all got most terms from the Vikings anyway, who were
the first good seafarers... :-)
(I assume you know why Port and Starboard are named that way...)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Steffen Nurpmeso:
...and that actually makes me wonder why the engineers that
created what became POSIX preferred slash instead -- i hope it is
not the pride in high skill at using (maybe light) sabers that
some people of the engineering community seem to foster. But it
could be the sober truth. Or, it could be a bug born of
inattention. And that seems very likely now.
====
It had nothing to do with engineers. `Slash' for / has been
conventional American usage for as long as I can remember,
dating back well before POSIX or UNIX or the movie that made
a meme of light sabers.
It's unclear exactly how far back it dates. The earliest
OED citation for `slash' as `A thin sloping line, thus /'
is dated 1961; but the cite is from Webster's 3rd.
Given the amount of violence prevalent in American metaphor,
it is hardly noteworthy.
Make American Language Violent Again (and I HATE MOSQUITOS*).
Norman Wilson
Toronto ON
* If you don't know what this refers to, you probably don't
want to know.
On Fri, Jul 8, 2016 at 7:09 AM, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> ...and that actually makes me wonder why the engineers that
> created what became POSIX preferred slash instead
>
I cannot speak for anyone else, but at the time of the
/usr/group UNIX standards** mtgs I personally do not believe I had
ever heard the term "solidus." Such a term may have been used in my
first-form Latin classes in the 1960s, but by the 1980s I had long ago
forgotten any/all of my Latin. I certainly did not try to remember it as a
computer professional.
In those days many of us, including me, did (and still do) refer to the
asterisk as "splat" and the exclamation point as "bang," from the sound
they made when printing on yellow oiled paper @ 10 cps from the console
TTY. But slash was what we called the character that is now next to the
shift key on modern keyboards. I do not remember ever using, much less
needing to refer to, the character "back slash" until the unfortunate crap
that the folks in Redmond forced on the industry. Although, interestingly
enough, the vertical bar or UNIX "pipe" symbol was used and discussed
freely in those days. I find it interesting that the Redmond-ism became
the unshifted character, rather than the vertical bar, by the sheer force
of the economics of the PC.
Clem
** For those that do not know (my apologies to those that do), the 1985
/usr/group standards committee was the forerunner of IEEE P1003. What we
published was the first "official UNIX API standard agreed by the community"
(I still have a hardcopy). But neither /usr/group nor USENIX had the
political authority to bring an official standard to FIPS, ANSI, ECMA, ISO
or like, while IEEE did. So a few months before the last meeting, Jim
Issak petitioned IEEE for standards status, and the last meeting of the
/usr/group UNIX standards meeting was very short -- about 10 minutes. We
voted to disband and then everyone in the room officially reformed a few
minutes later all signing in as IEEE P1003, later to be called POSIX. For
further historical note, I was a "founding member" of both groups and the
editor of a number of early drafts (numbers 5-11 IIRC), as well as the
primary author of the Tape Format and Terminal I/O sections of P1003.1.
With Keith Bostic, I would later be part of P1003.2 and pen the
original PAX compromise. After that whole mess I was so disgusted with the
politics of the effort that I stopped going to the POSIX mtgs.
PPS While I did not work for them at the time, you can blame DEC for the
mess with the case/character sets in the POSIX & FIPS standards. A number
of the compromises in the standard documents were forced by VMS, 7-bit
(case insensitivity) being the prime one. We did get into the
rationale section of the document the suggestion/advice that system
implementations and application code be case-insensitive and 8-bit clean
so that other character sets could be supported. However, the DEC folks
were firmly against anything more than 7-bit ASCII or supporting anything
beyond that character set. My memory is that the IBM folks were silent at the
time and just let the DEC guys carry the torch for 1960's 7-bit US English.
Thanks for reminding me about that one, Clem. I think I even have
Darnell's book somewhere.
I haven't decided what to do about batch interpreters for C. They aren't
interactive but there is still some overlap of concerns. I'll probably
post a list of them somewhere. I also have Al Stevens' Quincy,
Przemyslaw Podsiadly's SeeR, and Herb Schildt's from "Building your own
C interpreter."
On Wed, Jul 6, 2016, at 12:22 PM, Clem Cole wrote:
> From the The Unix Historical Society mailing list, I discovered your
> historical interest in C interpreters. It looks like you are missing at
> least one, so I thought I would introduce you all.
>
> Paul/Wendell meet Peter Darnell -- Pete wrote an early C interpreter
> for his C programming book. I'll leave it to you folks to discuss what
> he
> did, its current status et al.
>
> Best Wishes,
>
> Clem Cole (old time UNIX and C guy)
>
>
>
> ---------- Forwarded message ----------
> From: Warren Toomey <wkt(a)tuhs.org>
> Date: Sat, Jul 2, 2016 at 6:01 PM
> Subject: [TUHS] Interactive C Environments
> To: tuhs(a)tuhs.org
>
>
> All, I've been asked by Wendell to forward this query about C
> interpreters to the mailing list for him.
>
> ----- Forwarded message from Wendell P <wendellp(a)operamail.com> -----
>
> I have a project at softwarepreservation.org to collect work done,
> mostly in the 1970s and 80s, on C interpreters.
>
> http://www.softwarepreservation.org/projects/interactive_c
>
> One thing I'm trying to track down is Cin, the C interpreter in UNIX
> v10. I found the man page online and the tutorial in v2 of the Saunders
> book, but that's it. Can anyone help me to find files or docs?
>
> BTW, if you have anything related to the other commercial systems
> listed, I'd like to hear. I've found that in nearly all cases, the
> original developers did not keep the files or papers.
>
> Cheers,
> Wendell
>
> ----- End forwarded message -----
--
http://www.fastmail.com - The professional email service
Clem Cole:
I do not remember ever using, much less
needed to refer to, the character "back slash" until the unfortunate crap
that the folks in Redmond forced on the industry.
=====
Oh, come on. You programmed in C. You probably used
UNIX back when @ was the default kill character (though
I doubt you're odd enough still to use that kill character,
as I do). You surely used troff, LaTeX, or both, and have
doubtless sworn at regular expressions more often than
most of the young Linux crowd have had chocolate bars.
I think you've just forgotten it out of PBSD (post-backlash
stress disorder, nothing to do with Berkeley).
Norman Wilson
Toronto ON
UNIX\(tm old fart who swore at a regexp just yesterday
Greg Lehey:
And why? Yes, the 8088 was a reasonably fast processor, so fast that
they could slow it down a little so that they could use the same
crystal to create the clock both for the CPU and the USART. But the
base system had only 16 kB memory, only a little more than half the
size of the 6th Edition kernel. Even without the issue of disks
(which could potentially have been worked around) it really wasn't big
enough for a multiprogramming OS.
=====
Those who remember the earliest UNIX (even if few of us have
used it) might disagree with that. Neither the PDP-7 nor the
PDP-11/20 on which UNIX was born had memory management: a
context switch was a swap. That would have been pretty slow
on floppies, so perhaps it wouldn't have been saleable, but
it was certainly possible.
In fact Heinz Lycklama revived the idea in the V6 era to
create LSX, a UNIX for the early LSI-11 which had no
memory management and a single ca. 300kiB floppy drive.
It had more memory than the 8088 system, though: 20kiW,
i.e. 40kiB. Even so, Lycklama did quite a bit of work to
squeeze the kernel down, reduce the number of processes
and context switches, and so on.
Here's a link to one of his papers on the system:
https://www.computer.org/csdl/proceedings/afips/1977/5085/00/50850237.pdf
I suspect it would have been possible to make a XENIX
that would have worked on that hardware. Whether it
would have worked well enough to sell is another question.
Norman Wilson
Toronto ON
All, I've been working with Peter Salus (author of A Quarter Century of Unix)
to get the book published as an e-book. However, the current publishers have
been very incommunicative.
Given that the potential readership may be small, Peter has suggested this:
> I think (a) just putting the bits somewhere where they could
> be sucked up would be fine; and (b) let folks make donations
> to TUHS as payment.
However, as with all the Unix stuff, I'm still concerned about copyright
issues. So this is what I'm going to do. You will find a collection of
bits at this URL: http://minnie.tuhs.org/Z3/QCU/qcu.epub
In 24 hours I'll remove the link. After that, you can "do a Lions" on
the bits. I did the scanning, OCR'ing and proofing, so if you spot any
mistakes, let me know.
I'm not really interested in any payment for either the book or TUHS
itself. However, if you do feel generous, my e-mail address is also
my PayPal account.
Cheers, Warren
Thanks, Warren, for the (brief) posting of the ePub file for Peter
Salus' fine book, A Quarter Century of Unix.
I have a printed copy of that book on my shelf, and here is a list of
the errata that I found in it when I read it in 2004 that might also
be present in the ePub version:
p. 23, line 7:
deveoloped -> developed
p. 111, line 5:
Dave Nowitz we'd do -> Dave Nowitz said we'd do
p. 142, line 7:
collaboaration -> collaboration
p. 144, line -4 (i.e., 4 from bottom):
reimplemeted -> reimplemented
p. 160, line 10:
the the only -> the only
p. 196, line 17:
develope JUNET -> develop JUNET
p. 221, running header:
Berkley -> Berkeley
p. 222, line 11:
Mellon Institue -> Mellon Institute
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Since a few people here are Bell Labs veterans, I'd like to ask if someone
can explain a bit about that place. Sometimes I hear about work done
there that I'd like to follow up on, but I have no idea where to start.
For starters, I assume that everybody had to write up periodical reports
on their work. Was that stuff archived and is it still accessible
someplace? What about software that got to the point that it actually
had users beyond the developers? I know that major commercial projects
like UNIX are tied up in licensing limbo, but does that apply to
absolutely everything made there?
There is the AT&T Archives and History Center in Warren, NJ. Is it worth
asking if they have old tech reports?
--
http://www.fastmail.com - Or how I learned to stop worrying and
love email again
Steve Bourne tried hard to interest us in A68, and I personally liked some
features of it (especially the automatic type morphing of arguments into
the expected types). But the documentation was a huge barrier--all the
familiar ideas were given completely new (and unintuitive) names, making
it very difficult to get into.
I may be biased in my view, but I think one fatal mistake that A68 made
was that it had no scheme for porting the language to the plethora of
computers and systems around at that time. (The Bliss language from CMU
had a similar problem, requiring a bigger computer to compile for the
PDP-11). Pascal had P-code, and gave C a real run, especially as a
teaching language. C had PCC.
Nowadays, newer languages like Python just piggyback on C or C++...
On recent visit to the Living Computer Museum in
Seattle I got to play with Unix on a 3B2--something
I never did at Bell Labs. Maybe next time I
go they'll offer a real nostalgia trip on
the PDP-7, thanks to Warren's efforts.
doug
Hello,
I want to complete my local ML archive (I deleted a few emails and I
wasn't subscribed before 2001 or so I think). After downloading the
archives and hitting them a few times to get somewhat importable mboxes,
I ended with 8699 emails in a maildir (in theory that should be a
superset of the 5027 emails in my regular TUHS maildir. I will merge
them next.). Two dozen mails are obviously defective (can be repaired
manually maybe) and some more might be defective (needs deeper
checking). So, does anybody have more? ;)
Regards
hmw
> AFAIK the later ESS switches include a 3B machine but it only handles
> some administrative functions, with most of the the actual call
> processing being performed in dedicated hardware.
That is correct. The 3B2 was an administrative appendage.
Though Unix itself didn't get into switches, Unix people did
have a significant influence on the OS architecture for
ESS 5. Bob Morris, having observed some of the tribulations of
that project, suggested that CS Research build a demonstration
switch. Lee McMahon, Ken Thompson, and Joe Condon spearheaded
the effort and enlisted Gerard Holzmann's help in verification
(ironically, the only application of Gerhard's methods to
software made in his own department). They called the system,
which was very different from Unix, TPC--The Phone Company. It
actually controlled many of our phones for some years. The
cleanliness of McMahon's architecture, which ran on a PDP-11,
caught the attention of Indian Hill and spurred a major
reworking of the ESS design.
Doug
All, I've been asked by Wendell to forward this query about C
interpreters to the mailing list for him.
----- Forwarded message from Wendell P <wendellp(a)operamail.com> -----
I have a project at softwarepreservation.org to collect work done,
mostly in the 1970s and 80s, on C interpreters.
http://www.softwarepreservation.org/projects/interactive_c
One thing I'm trying to track down is Cin, the C interpreter in UNIX
v10. I found the man page online and the tutorial in v2 of the Saunders
book, but that's it. Can anyone help me to find files or docs?
BTW, if you have anything related to the other commercial systems
listed, I'd like to hear. I've found that in nearly all cases, the
original developers did not keep the files or papers.
Cheers,
Wendell
----- End forwarded message -----
All, I was invited to give a talk at a symposium in Paris
on the early years of Unix. Slides and recording at:
http://minnie.tuhs.org/Z3/Hapop3/
Feel free to point out the inaccuracies :-)
For example, I thought Unix was used at some point
as the OS for some of the ESS switches in AT&T, but
now I think I was mistaken.
That's a temp URL, it will move somewhere else
eventually.
Cheers, Warren
On 2016-07-01 15:43, William Cheswick <ches(a)cheswick.com> wrote:
>
> >> ...why didn't they have a more capable kernel than MS-DOS?
> > I don't think they cared, or felt it was needed at the time (I disagreed then and still do).
>
> MS-DOS was a better choice at the time than Unix. It had to fit on floppies, and was very simple.
>
> “Unix is a system administrations nightmare” — dmr
>
> Actually, MS-DOS was a runtime system, not an operating system, despite the last two letters of its name.
> This is a term of art lost to antiquity.
Strangely enough, the definition I have of a runtime system is very
different from yours. Languages had/have runtime systems. Some
environments had runtime systems, but they have a somewhat different
scope than MS-DOS does.
I'd call MS-DOS a program loader and a file system.
> Run time systems offered a minimum of features: a loader, a file system, a crappy, built-in shell,
> I/O for keyboards, tape, screens, crude memory management, etc. No multiuser, no network stacks, no separate processes (mostly). DEC had several (RT11, RSTS, RSX) and the line is perhaps a little fuzzy: they were getting operating-ish.
Uh? RSX and RSTS/E are full-fledged operating systems with multiuser
protection, time sharing, virtual memory, and all the bells and whistles
you could ever ask for... Including networking... DECnet was born on RSX.
And RSTS/E offered several runtime systems, it had an RT-11 runtime
system, an RSX runtime system, you also had a TECO runtime system, and
the BASIC+ runtime system, and you could have others. You could
definitely have had a Unix runtime system in RSTS/E as well, but I don't
know if anyone ever wrote one.
In RSX, compilers/languages have runtime systems, which you link with
your object files for that language in order to get a complete runnable
binary.
Johnny
Ori Idan <ori(a)helicontech.co.il> asks today:
>> Pascal compiler written in Pascal? How can I compile the compiler if I
>> don't yet have a Pascal compiler? :-)
You compile the code by hand into assembly language for the CDC
6400/6600 machines, and bootstrap that way: see
Urs Ammann
On Code Generation in a PASCAL Compiler
http://dx.doi.org/10.1002/spe.4380070311
Niklaus Wirth
The Design of a PASCAL Compiler
http://dx.doi.org/10.1002/spe.4380010403
It has been a long time since I read those articles in the journal
Software --- Practice and Experience, but my recollection is that they
wrote the compiler in a minimal subset of Pascal needed to do the job,
just to ease the hand-translation process.
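[Editor's note: the bootstrap process described above can be sketched in shell. Everything below is a toy: the `pascal*` file names are invented, and `cp` merely stands in for invoking an actual compiler stage; it only shows the shape of the fixed-point check.]

```shell
# Toy illustration of the classic compiler bootstrap (all names invented;
# `cp` stands in for running a real compiler stage).
cd "$(mktemp -d)"
printf 'full compiler, written in the minimal subset\n' > pascal.pas

# Stage 1: "compile" the full compiler with the hand-translated subset
# compiler (the hand translation itself is assumed to already exist).
cp pascal.pas pascal1
# Stage 2: "recompile" the compiler with the stage-1 output.
cp pascal.pas pascal2

# If the stage-1 and stage-2 outputs agree, the bootstrap has reached a
# fixed point and the hand-translated compiler can be retired.
cmp -s pascal1 pascal2 && echo "fixed point reached"
```

Once the fixed point is reached, all later versions of the compiler can be built with the previous compiler binary, and the hand translation is never needed again.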
On 2016-06-30 21:22, Clem Cole <clemc(a)ccc.com> wrote:
>
> but when Moto came out with a memory management chip it had some
>> > severe flaws that made paging and fault recovery impossible, while the
>> > equivalent features available on the 8086 line were tolerable.
> Different issues...
>
> When the 68000 came out there was a base/limit register chip available,
> whose number I forget (Moto offered it to Apple at no additional cost if
> they would use it in the Mac, but sadly they did not). This chip was
> similar to the 11/70 MMU, as that's what Les and Nick were used to using
> (they had been using an 11/70 running Unix V6 as the development box
> since before what would become the 68000 -- another set of great stories
> from Les, Nick and Tom Gunter).
Clem, I think pretty much all you are writing is correct, except that I
don't get your reference to the PDP-11 MMU.
The MMU of the PDP-11 is not some base/limit register thing. It's a
paged memory, with a flat address space. Admittedly, you only have 8
pages, but I think it's just plain incorrect to call it something else.
(Even though no one I know of ever wrote a demand-paged memory system for
a PDP-11, there is no technical reason preventing you from doing it.
It's just that with only 8 pages, and loads more physical memory than
virtual, it didn't give much of any benefit.)
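[Editor's note: for the record, the mapping described here is simple in numbers: the PDP-11 MMU takes the top 3 bits of a 16-bit virtual address as the active page field, selecting one of the 8 page registers, and the low 13 bits as the displacement, i.e. 8 pages of 8 KB. A quick sketch in shell arithmetic; the example address is arbitrary.]

```shell
# Decompose a 16-bit PDP-11 virtual address:
#   bits 15-13: active page field, selects one of 8 PAR/PDR pairs
#   bits 12-0:  byte displacement within the 8 KB page
addr=$(( 0x2FFF ))             # arbitrary example address
page=$(( (addr >> 13) & 7 ))   # active page field, 0..7
off=$((  addr & 0x1FFF ))      # displacement, 0..8191
echo "page=$page offset=$off"  # prints "page=1 offset=4095"
```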
Johnny
> Ronald Natalie <ron(a)ronnatalie.com>
>
>>
>> On the other hand, there was
>> no excuse for a Pascal compiler to be either large, buggy, or slow, even before Turbo Pascal.
>>
> I remember the Pascal compiler on my Apple II used to have to use some of the video memory while it was running.
UCSD Pascal, the Apple Pascal base, would grab the video memory as space for the heap when compiling. When the Terak system was in use at UCSD the video memory would display on the screen, so you could watch the heap grow down the screen while the stack crawled up when compiling. If they ever met in the middle, you had a crash. Exciting times.
Terak systems were 11/03 based, IIRC. (http://www.threedee.com/jcm/terak/)
David
> On Jun 30, 2016, at 10:27 AM, schily(a)schily.net (Joerg Schilling)
> Marc Rochkind <rochkind(a)basepath.com> wrote:
>
>> Bill Cheswick: "What a different world it would be if IBM had selected the
>> M68000 and UCSD Pascal. Both seemed
>> to me to be better choices at the time."
>>
>> Not for those of us trying to write serious software. The IBM PC came out
>> in August, 1981, and I left Bell Labs to write software for it full time
>> about 5 months later. At the time, it seemed to me to represent the future,
>> and that turned out to be a correct guess.
>
> I worked on a "Microengine" in 1979.
>
> The Microengine was a micro PDP-11 with a modified microcode ROM that
> directly supported executing p-code.
>
> The machine was running a UCSD pascal based OS and was really fast and powerful.
>
> Jörg
Very likely one of the Western Digital products. They were the first to take UCSD Pascal and burn the p-code interpreter into ROM. It made for a blindingly fast system. I worked with the folks who did the port and made it all play together. Fun days.
I worked on the OS and various utility programs those days. Nothing to do with the interpreters.
When the 68000 came out SofTech did a port of the system to it. Worked very well; you could take code compiled on the 6502 system write it to a floppy, take the floppy to the 68k system and just execute the binary. It worked amazingly well.
David
> From: scj(a)yaccman.com
> I think one fatal mistake that A68 made
One of many, apparently, given Hoare's incredible classic "The Emperor's Old
Clothes":
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
(which should be required reading for every CS student).
Noel
Steve almost right....mixing a few memories...see below..
On Thu, Jun 30, 2016 at 1:17 PM, <scj(a)yaccman.com> wrote:
> My memory was that the 68000 gave the 8086 a pretty good run for its
> money,
Indeed - most of the UNIX workstations folks picked it because of the
linear addressing.
but when Moto came out with a memory management chip it had some
> severe flaws that made paging and fault recovery impossible, while the
> equivalent features available on the 8086 line were tolerable.
Different issues...
When the 68000 came out there was a base/limit register chip available,
whose number I forget (Moto offered it to Apple at no additional cost if
they would use it in the Mac, but sadly they did not). This chip was
similar to the 11/70 MMU, as that's what Les and Nick were used to using
(they had been using an 11/70 running Unix V6 as the development box
since before what would become the 68000 -- another set of great stories
from Les, Nick and Tom Gunter).
The problem with running a 68000 with VM was not the MMU, it was the
microcode. Nick did not store all of the information needed by the
microcode to recover from a faulted instruction, so if an instruction could
not complete, it could not be restarted without data loss.
> There were
>
> some bizarre attempts to page with the 68000 (I remember one product that
> had two 68000 chips, one of which was solely to sit on the shoulder of the
> other and remember enough information to respond to faults!).
This was referred to as Forest Baskett mode -- he did an early paper that
described it. I just did a quick look but did not see a copy on my shelf
of Moto stuff. At least two commercial systems were built this way -
Apollo and Masscomp.
The two processors are called the "executor" and "fixer." The trick is
that when the MMU detects that a fault will occur, the executor is fed "wait
state" cycles telling it that the required memory location is just taking
longer to read or write. The fixer is then given the faulting address
and handles the fault. When the page is finally filled in, on the
Masscomp system the cache is loaded and the executor is allowed to
complete the memory cycle.
When Nick fixed the microcode for the processor, the updated chip was
rebranded as the 68010. In the case of the Masscomp MC-500 CPU board, we
popped the new chip in as the executor and changed the PAL's so the fault
was allowed to occur (creating the MPU board). This allowed the executor
to go do other work while the fixer was dealing with the fault. We
picked up a small amount of performance, but in fact it was not much. I
still have a system on my home network BTW (although I have not turned it
on in a while -- it was working last time I tried it).
Note the 68010 still needed an external MMU. Apollo and Masscomp built
their own, although fairly soon after they did the '10 Moto created a chip
to replace the base/limit register scheme with one that handled 2-level
pages.
In Masscomp's case, when we did the 5000 series, which was based on the
68020, we used Moto's MMU for the low end (300 series) and our custom MMU
on the larger systems (700 series).
> By the time
>
> Moto fixed it, the 8086 had taken the field...
>
Well sort of. The 68K definitely won the UNIX wars, at least until the
386 and linear addressing would show up in the Intel line. There were
some alternatives like the Z8000, the NS32032, and AT&T's 32100, which was
used in the 3B2/3B5 et al., but the 68K had the lion's share.
Clem
> I'm curious if the name "TPC" was an allusion to the apocryphal telephone
> company of the same name in the 1967 movie, "The President's Analyst"?
Good spotting. Ken T confirms it was from the flick.
doug
> From: Dave Horsfall <dave(a)horsfall.org>
>
> On Wed, 29 Jun 2016, scj(a)yaccman.com wrote:
>
>> Pascal had P-code, and gave C a real run, especially as a teaching
>> language.
>
> Something I picked up at Uni was that Pascal was never designed for
> production use; instead; you debugged your algorithm in it, then ported it
> to your language of choice.
I was an active member of the UCSD Pascal project from 77 to 80, and then was with SofTech MicroSystems for a couple years after that.
An unwritten legacy of the Project was that, according to Professor Ken Bowles, IBM wanted to use UCSD Pascal as the OS for their new x86-based personal computer. The license was never worked out, as the University of California got overly involved in it. As a result IBM went with their second choice, some small Redmond-based company no one had ever heard of. So it was intended for production use and, at least IBM thought, good enough.
I also knew of UCSD Pascal programs written to do things such as dentist office billing and scheduling and other major ‘real world’ tasks. So it wasn’t just an academic project.
I still have UCSD Pascal capable of running in a simulator, though I’ve not run it in a while. And I have all the source for the OS and interpreter for the Version I.5 and II.0 systems. Being a code pig just means that I need a lot of disk space.
David
Hi.
Can anyone give a definitive date for when Bill Joy's csh first got out
of Berkeley? I suspect it's in the 1976 - 1977 time frame, but I don't
know for sure.
Thanks!
Arnold
The requested URL /pub/Wish/wish_internals.pdf was not found on this
server.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Jun 26, 2016, at 5:59 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> I detested the CSH syntax. In order to beat back the CSH proponents at BRL, I added JOB control to the SV (and later SVR2) Bourne Shell. Then they beat on me for not having command-line editing (a la TCSH), so I added that. This shell went out as /bin/sh in the Doug Gwyn SV-on-BSD release, so every once in a while over the years I trip across a “Ron shell” -- usually people who were running Mach-derived things that ran my shell as /bin/sh.
When porting BSD to new hardware at Celerity (later Floating Point, now part of Sun, oops Oracle) I got ahold of the code that Doug was working on and made the jsh (Job control sh) my shell of choice. Now that Bash does all of those things and almost everything emacs can do, Bash is my shell.
As far as customizing, I’ve got a .cshrc that does nothing more than redirect to a launch of bash if available and /bin/sh if nothing else. And my scripts for logging in are so long and convoluted, due to many years of various hardware and software idiosyncratic changes (DG/UX anyone, anyone?), that I’m sure most of it is now useless. And I don’t change it for fear of breaking something.
David
I asked Jeff Korn (David Korn's son), who in turn asked David Korn who
confirmed that 'read -u' comes from ksh and that 'u' stands for 'unit'.
- Dan C.
Yes, indeed. He says:
*I added -u when I added co-processes in the mid '80s. The u stands for
unit. It was common to talk about file descriptor units at that time.*
On Tue, May 31, 2016 at 6:06 AM, Dan Cross <crossd(a)gmail.com> wrote:
> Hey, did your dad do `read -u`?
>
> ---------- Forwarded message ----------
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Date: Tue, May 31, 2016 at 3:27 AM
> Subject: [TUHS] etymology of read -u
> To: tuhs(a)minnie.tuhs.org
>
>
> What's the mnemonic significance, if any, of the u in
> the bash builtin read -u for reading from a specified
> file descriptor? Evidently both f and d had already been
> taken in analogy to usage in some other commands.
>
> The best I can think of is u as in "tape unit", which
> was common usage back in the days of READ INPUT TAPE 5.
> That would make it the work of an old timer, maybe Dave Korn?
>
>
>
What's the mnemonic significance, if any, of the u in
the bash builtin read -u for reading from a specified
file descriptor? Evidently both f and d had already been
taken in analogy to usage in some other commands.
The best I can think of is u as in "tape unit", which
was common usage back in the days of READ INPUT TAPE 5.
That would make it the work of an old timer, maybe Dave Korn?
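[Editor's note: a minimal demo of the builtin in question, in the "unit" spirit described above; works in bash and ksh, and the temp file name is invented.]

```shell
# Demo of "read -u": read from an explicitly opened file descriptor
# (the "unit", in the old READ INPUT TAPE sense).
tmp=/tmp/readu_demo.$$
printf 'first line\nsecond line\n' > "$tmp"
exec 3< "$tmp"        # open descriptor ("unit") 3 for reading
read -u 3 a           # a gets "first line"
read -u 3 b           # b gets "second line"
echo "$a / $b"        # prints "first line / second line"
exec 3<&-             # close the unit
rm -f "$tmp"
```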
> Now we are hoping to get the Living Computer Museum people to bring it up
on their real PDP-7.
Truly a fantastic prospect! The only Unix the museum has running is
on a 3B2--a curious byway perhaps, but of little historic interest.
The PDP-7 version would be a tremendous coup.
doug
On Wed, May 04, 2016 at 12:44:15AM +0300, Diomidis Spinellis wrote:
> This would have found any code from the PDP-7 Unix that appeared in the
> First Edition. (I was hoping that some PDP-7 instruction sequences might be
> the same in PDP-11.)
> Unsurprisingly, nothing came out.
No, the instruction set is completely different. The PDP-11 ISA is a paradise
compared to the spartan PDP-7 ISA.
Cheers, Warren