> Date: Thu, 10 Aug 2023 03:17:25 +0000
> From: segaloco
>
>>>> TCP/IP, not datakit
>
>
> All of the files that have timestamps at the top list 83/07/29, except ip_input.c which has 83/08/16 instead. The V8 version has _device (device driver) and _ld (line discipline) components that the 4.1cBSD code does not have. Many other files have analogs between the two. The byte ordering subroutines have been copied into a file, goo.s, from their home in 4.1cBSD in the C library (/usr/src/lib/libc/net/misc). When this work originated someone else would need to answer, [...]
As far as I can tell, the history of this code line goes back to 1977, when Jack Haverty at BBN wrote a TCP/IP library (porting earlier work written in PDP-11 assembler) for a slightly modified 6th Edition Unix. Fighting 64KB core limits, he found throughput horrific and concluded that a bigger PDP-11 was needed. Mike Wingfield then did a re-implementation in C for a PDP-11/70. This worked in early 1979 and is arguably the first Unix TCP/IP stack that can still interoperate with current IPv4. However, it was still mostly a proof-of-concept user-mode design (it was funded as a test vehicle for the later abandoned Autodin-II fork of TCP).
BBN then got a contract to write a kernel-mode TCP/IP stack for 4BSD ("VAX TCP" in the old BBN docs). This work was performed by Rob Gurwitz under the supervision of Jack Haverty. This stack, although all new code, still showed its heritage: it was designed as a loosely bound kernel process providing the NCP-Unix API. Some sources imply that it was developed first as a user-mode process and, once working in that context, changed into a kernel process/thread. Beta releases were available in 1981. It worked (and interoperates with modern IPv4), but in my experiments a few years back it proved difficult to get the scheduling for this kernel process right at higher system loads.
Bill Joy of CSRG concluded that the BBN stack did not perform according to his expectations. Note that CSRG was focused on usage over (thick) Ethernet links, while BBN was focused on usage over Arpanet and other wide-area networks (with much lower bandwidth, and higher latency and error rates). In 1982 he rewrote the stack to match the CSRG environment, changing the design to use software interrupts instead of a kernel thread and optimising the code (e.g. checksumming and fast code paths). How new the code was has been a matter of debate, with the extremes being that it was written from scratch from the spec versus it being mostly copied. Looking at it from four decades' distance, the truth seems to lie in between: small surviving bits of SCCS history suggest CSRG started with parts of the BBN code, followed by rapid, massive modification; the end result is quite different but retained the 'mbuf' core data structure and a BBN bug (an off-by-one for out-of-band TCP segments).
The shift from the NCP-Unix API to sockets is separate from this and was planned. CSRG had the contract to develop a new API for facilitating distributed systems with Unix, and this gelled into the sockets interface. The first prototypes for this were done in 1981.
Nearly all of the above source is available in the TUHS online Unix Tree (Wingfield, VAX-TCP and two early versions from CSRG - one in 2.9BSD and one in 4.1cBSD).
Good morning folks, just sharing some eBay sales I spotted that are not in the cards for me, both in terms of expense and because I don't have the bandwidth to focus on other UNIX lines right now.
That said, someone is selling a very, very large collection of HP-UX documents:
https://www.ebay.com/itm/285425883705
https://www.ebay.com/itm/285425882004
As mentioned, quite pricey, and bulky too. However, if anyone knows someone with a particular eye for HP-UX history, this may interest them. No idea what is and isn't preserved there; non-Bell-and-UCB material has hardly been on my radar at all, other than acknowledging it exists.
Tangential, but I do appreciate the consistency in their documentation's appearance. I see HP-UX material pop up from time to time, and the cover motif is identical to the documents they published with analytical equipment like gas chromatographs before spinning that unit off into Agilent.
- Matt G.
> Warner Losh imp at bsdimp.com
> Thu Aug 10 12:45:54 AEST 2023
> wrote:
>
> Yea, I thought it was 4.1bsd + later tcp code but with a STREAMS instead of
> Socket interface...
Please see this old TUHS post for some more background in DMR’s own words:
https://www.tuhs.org/pipermail/tuhs/2019-August/018325.html
On the topic of DMR Streams, I'm increasingly intrigued by its design: recently I've been deep-diving into classic USB to better understand this class of devices and how to drive them (https://gitlab.com/pnru/usb_host). It would seem to me that Streams would have been a neat way to organise the USB driver stack in a v8 context. Note that a USB analog did exist in 1982: https://en.wikipedia.org/wiki/Hex-Bus
Sometimes I wonder what combining v8 streams with v8 virtual directories (i.e. like Killian’s /proc) could have looked like. Having the streams network stack (or usb stack) exposed as virtual directories would have been quite powerful.
Sorry if this has been asked before, but:
"Welcome to Eighth Edition Unix. You may be sure that it
is suitably protected by ironclad licences, contractual agreements,
living wills, and trade secret laws of every sort. A trolley car is
certain to grow in your stomach if you violate the conditions
under which you got this tape. Consult your lawyer in case of any doubt.
If doubt persists, consult our lawyers.
Please commit this message to memory. If this is a hardcopy terminal,
tear off the paper and affix it to your machine. Otherwise
take a photo of your screen. Then delete /etc/motd.
Thank you for choosing Eighth Edition Unix. Have a nice day."
Was this one person or a group effort? It's wonderful.
six years later…
A note for the list:
Warren (in, IMHO, a stroke of genius) changed the repo from xv6-minix to xv6-freebsd.
<https://github.com/DoctorWkt/xv6-freebsd>
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
At the risk of releasing more heat than light (so if you feel
compelled to flame by this message, please reply just to me
or to COFF, not to TUHS):
The fussing here over Python reminds me very much of Unix in
the 1980s. Specifically wars over editors, and over BSD vs
System V vs Research systems, and over classic vs ISO C, and
over embracing vendor-specific features (CSRG and USG counting
as vendors as well as Digital, SGI, Sun, et al) vs sticking
to the common core. And more generally whether any of the
fussing is worth it, and whether adding stuff to Unix is
progress or just pointless complication.
Speaking as an old fart who no longer gets excited about this
stuff except where it directly intersects something I want to
do, I have concluded that nobody is entirely right and nobody
is entirely wrong. Fancy new features that are there just to
be fancy are usually a bad idea, especially when they just copy
something from one system to a completely different one, but
sometimes they actually add something. Sometimes something
brand new is a useful addition, especially when its supplier
takes the time and thought to fit cleanly into the existing
ecosystem, but sometimes it turns out to be a dead end.
Personal taste counts, but never as much as those of us
brandishing it like to think.
To take Python as an example: I started using it about fifteen
years ago, mostly out of curiosity. It grew on me, and these
days I use it a lot. It's the nearest thing to an object-
oriented language that I have ever found to be usable (but I
never used the original Smalltalk and suspect I'd have liked
that too). It's not what I'd use to write an OS, nor to do
any sort of performance-limited program, but computers and
storage are so fast these days that that rarely matters to me.
Using white space to denote blocks took a little getting used
to, but only a little; no more than getting used to typing
if ...: instead of if (...). The lack of a C-style for loop
occasionally bothers me, but iterating over lists and sets
handles most of the cases that matter, and is far less cumbersome.
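A minimal sketch of how iteration covers most of the C-style loop cases (the list contents here are just illustrative):

```python
items = ["ed", "qed", "sam"]

# C's "for (i = 0; i < n; i++) use(items[i])" is rarely needed:
# iterate directly over the list...
for item in items:
    print(item)

# ...use enumerate() on the occasions when the index matters...
for i, item in enumerate(items):
    print(i, item)

# ...and range() for the genuinely counted loop.
total = sum(i * i for i in range(4))
print(total)  # 0 + 1 + 4 + 9 = 14
```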
It's a higher-level language than C, which means it gets in the
way of some things but makes a lot of other things easier. It
turns out the latter is more significant than the former for
the things I do with it.
The claim that Python doesn't have printf (at least since ca. 2.5,
when I started using it) is just wrong:
print 'pick %d pecks of %s' % (n, fruit)
is just a different spelling of
printf("pick %d pecks of %s\n", n, fruit)
except that sprintf is no longer a special case (and snprintf
becomes completely needless). I like the modern
print(f'pick {n} pecks of {fruit}')
even better; f strings are what pushed me from Python 2 to
Python 3.
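A quick runnable check that the two spellings above produce the same string (Python 3 syntax, so print is a function; the values are arbitrary):

```python
n, fruit = 4, "quinces"

# %-formatting: the direct analog of C's printf/sprintf
old_style = 'pick %d pecks of %s' % (n, fruit)

# f-string: the same result, with the expressions written inline
new_style = f'pick {n} pecks of {fruit}'

assert old_style == new_style
print(old_style)  # pick 4 pecks of quinces
```

Note that because % and f-strings build a string value, the sprintf/snprintf distinction simply disappears: there is no destination buffer to overflow.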
I really like the way modules work in Python, except the dumbass
ways you're expected to distribute a program that is broken into
modules of its own. As a low-level hacker I came up with my own
way to do that (assembling them all into a single Python source
file in which each module is placed as a string, evaled and
inserted into the module table, and then the starting point
called at the end; all using documented, stable interfaces,
though they changed from 2 to 3; program actually written as
a collection of individual source files, with a tool of my
own--written in Python, of course--to create the single file
which I can then install where I need it).
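The technique described above can be sketched roughly like this; the module name and its source are hypothetical stand-ins, but sys.modules, types.ModuleType and exec are the documented, stable interfaces involved (exec was a statement in Python 2, a function in 3):

```python
import sys
import types

# Source of an embedded "module", stored as a string in the single file.
# (Name and contents are hypothetical illustrations.)
GREET_SRC = "def hello(name):\n    return 'hello, ' + name\n"

def install(name, source):
    """Create a module object from source text and register it."""
    mod = types.ModuleType(name)
    exec(source, mod.__dict__)   # run the module body in its own namespace
    sys.modules[name] = mod      # a later 'import name' now finds it
    return mod

install("greet", GREET_SRC)

import greet                     # resolved from sys.modules, not from disk
print(greet.hello("world"))      # hello, world
```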
I have for some years had my own hand-crafted idiosyncratic
program for reading mail. (As someone I know once said,
everybody writes a mailer; it's simple and easy and makes
them feel important. But in this case I was doing it just
for myself and for the fun of it.) The first edition was
written 20 years ago in C. I rewrote it about a decade ago
in Python. It works fine; can now easily deal with on-disk
or IMAP4 or POP3 mailboxes, thanks both to modules as a
concept and to convenient library modules to do the hard work;
and updating in the several different work and home environments
where I use it no longer requires recompiling (and the source
code need no longer worry about the quirks of different
compilers and libraries); I just copy the single executable
source-code file to the different places it needs to run.
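As an illustration of the heavy lifting those library modules do, here is a minimal sketch of the on-disk case using the standard mailbox module (the path is a throwaway stand-in created for the sketch; the network cases would presumably go through imaplib/poplib similarly):

```python
import mailbox
import os
import tempfile

# Create a throwaway on-disk mbox so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), "demo.mbox")
mb = mailbox.mbox(path)

msg = mailbox.mboxMessage()
msg["From"] = "norman@example.invalid"
msg["Subject"] = "hello"
msg.set_payload("just testing\n")
mb.add(msg)
mb.flush()

# Reading it back: iteration and header access, no parsing code of our own.
for m in mailbox.mbox(path):
    print(m["Subject"])   # hello
```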
For me, Python fits into the space between shell scripts and
awk on one hand, and C on the other, overlapping some of the
space of each.
But personal taste is relevant too. I didn't know whether I'd
like Python until I'd tried it for a few real programs (and
even then it took a couple of years before I felt like I'd
really figured out how to use it). Just as I recall, about
45 years ago, moving from TECO (in which I had been quite
expert) to ed and later the U of T qed and quickly feeling
that ed was better in nearly every way; a year or so later,
trying vi for a week and giving up in disgust because it just
didn't fit my idea of how screen editors should work; falling
in love with jim and later sam (though not exclusively, I still
use ed/qed daily) because they got the screen part just right
even if their command languages weren't quite such a good match
for me.
And I have never cottoned on to Perl for, I suspect, the same
reason I'd be really unhappy to go back to TECO. Tastes
evolve. I bet there's a lot of stuff I did in the 1980s that
I'd do differently could I have another go at it.
The important thing is to try new stuff. I haven't tried Go
or Rust yet, and I should. If you haven't given Python or
Perl a good look, you should. Sometimes new tools are
useless or cumbersome, sometimes they are no better than
what you've got now, but sometimes they make things easier.
You won't know until you try.
Here endeth today's sermon from the messy office, which I ought
to be cleaning up, but preaching is more fun.
Norman Wilson
Toronto ON
I’m re-reading Brian Kernighan’s book on Early Unix (‘Unix: A History & Memoir’)
and he mentions the (on disk) documentation that came with Unix - something that made it stand out, even for some decades.
Doug McIlroy has commented on v2-v3 (1972-73?) being an extremely productive year for Ken & Dennis.
But as well, they wrote papers and man pages, probably more.
I’ve never heard anyone mention keyboard skills with the people of the CSRC; does anyone know?
There’s at least one Internet meme that highly productive coders necessarily have good keyboard skills,
which leads to also producing documentation or, at least, not avoiding it entirely, as often happens commercially.
Underlying this is something I once caught as a random comment:
The commonality of skills between Writing & Coding.
Does anyone have any good refs for this crossover?
Is it a real effect or a biased view?
That great programmers are also “good writers”:
both take time & focus, clarity of vision, deliberate intent and many revisions, chopping away the cruft that isn’t “the thing”, and “polishing”, not rushing it out the door.
Ken is famous for his brevity and succinct statements.
Not sure if that’s a personal preference, a mastered skill or “economy in everything”.
steve j
=========
A Research UNIX Reader: Annotated Excerpts from the Programmer's Manual, 1971-1986
M.D. McIlroy
<https://www.cs.dartmouth.edu/~doug/reader.pdf>
<https://archive.org/details/a_research_unix_reader/page/n13/mode/2up>
pg 10
3.4. Languages
CC (v2 page 52)
V2 saw a burst of languages:
a new TMG,
a B that worked in both core-resident and software-paged versions,
the completion of Fortran IV (Thompson and Ritchie), and
Ritchie's first C, conceived as B with data types.
In that furiously productive year Thompson and Ritchie together
wrote and debugged about
100,000 lines of production code.
=========
Programming's Dirtiest Little Secret
Wednesday, September 10, 2008
<http://steve-yegge.blogspot.com/2008/09/programmings-dirtiest-little-secret…>
It's just simple arithmetic. If you spend more time hammering out code, then in order to keep up, you need to spend less time doing something else.
But when it comes to programming, there are only so many things you can sacrifice!
You can cut down on your documentation.
You can cut down on commenting your code.
You can cut down on email conversations and
participation in online discussions, preferring group discussions and hallway conversations.
And... well, that's about it.
So guess what non-touch-typists sacrifice?
All of it, man.
They sacrifice all of it.
Touch typists can spot an illtyperate programmer from a mile away.
They don't even have to be in the same room.
For starters, non-typists are almost invisible.
They don't leave a footprint in our online community.
=========