On Thu, Apr 20, 2023 at 8:56 AM Paul Ruizendaal <pnr@planet.nl> wrote:

> > Date: Mon, 10 Apr 2023 18:27:51 +0000
> > From: segaloco
> >
> > ... or was there no single guiding principle and each machine came up, at that level at least, in a relative vacuum, with only the machine interface to UNIX being the guiding principle?

> I stumbled into the same question last year, when doing my SysIII to RV64 port. I managed to turn that into a somewhat chaotic discussion, mixing old and new, and history with ideas. From that discussion I got the impression that it was indeed mostly ad hoc. In context, hardware was much easier to boot and drive back then -- it probably was not seen as complex enough to warrant much research into layering and abstraction.

> Also bear in mind that something like a boot ROM only became the norm in the late 70s. Before that, one keyed in a tiny program of a couple dozen words to load the first boot stage.

> That said, there is an implicit layering in v7 and beyond:

> - “low.s” does hardware setup, incl. such stuff as setting up interrupt tables. As this is closely tied to the hardware, it would have been a custom job in each case.

Research V7 used l.s for this; different ports used different names, though many retained l.s. 32V used locore.s, a convention that all the BSDs I know of picked up, as well as many BSD-derived kernels that were later rewritten to support SMP.

Oftentimes the interrupt vectors were in the lowest core addresses, and the first part of this file was just a giant table of places to jump for all the architecturally defined exceptions and/or interrupt vectors (depending on the architecture). Often it contained the glue from an interrupt to an ISR call as well: in many cases you'd share an exception, get the interrupt "level" or "vector" from some other bit of hardware, and this code would do the simple task of offsetting into a table and jumping.
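
As a rough sketch of that glue in C (the historical versions were assembler, and names like get_pending_vector are made up here for illustration):

    typedef void (*isr_t)(void);

    #define NVEC 64
    static isr_t vector_table[NVEC];           /* filled in as drivers attach */

    extern unsigned get_pending_vector(void);  /* ask the interrupt controller */

    void
    shared_exception(void)
    {
        unsigned v = get_pending_vector();     /* which device interrupted? */

        if (v < NVEC && vector_table[v] != 0)
            (*vector_table[v])();              /* "offset into a table and jump" */
    }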

It also had the "start" entry point for the whole kernel. Frequently, small auxiliary routines like 'doadump', the bcopy/fubyte (etc.) primitives, and the context-switching code, all likewise in assembler, ended up there as well. Finally, it was a place to keep various bits of storage that the kernel needed to bootstrap whatever VM was there.
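
Those fetch/store primitives have tiny interfaces; sketched here with BSD-flavored signatures (the bodies have to be assembler, since they must catch faults on the user address, which C alone can't do):

    int  fubyte(char *uaddr);         /* fetch a byte from user space, -1 on fault */
    int  subyte(char *uaddr, int v);  /* store a byte to user space, -1 on fault */
    void bcopy(char *from, char *to, int count);  /* copy count bytes */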

> - “mch.s” (later also mch.c) has the basic routines that are hardware dependent (switching stacks, changing priority levels and modes, etc.). It also has emulation for ‘missing’ instructions, such as floating point ops where this is not available in hardware. Same as above, I think. Maybe h/w-related memory protection operations should live here as well, but the hardware was still quite divergent in this area in the 70s and early 80s.

32V called this machdep.c, which all the BSDs inherited. While machine-dependent, it tended to be slightly more portable, being for stuff that could be written in C... Agreed on the very divergent part, though.
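
A concrete example of what lives there is the processor-priority interface, spelled spl* since the earliest PDP-11 kernels. A sketch of the 4BSD-era shape (V7's was similar; each body is a few machine instructions):

    int  spl0(void);    /* allow all interrupts */
    int  spl6(void);    /* block most device interrupts */
    int  spl7(void);    /* block everything, clock included */
    void splx(int s);   /* restore a previously saved level */

    void
    example(void)
    {
        int s = spl6();  /* enter critical section */

        /* ... touch data shared with an interrupt handler ... */
        splx(s);         /* back to the old priority */
    }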
 
> - Low-level device drivers live in the ‘dmr’ or (later) ‘io’ directory. Here there is some standardisation, as all device drivers must conform to the (char/block) device switch APIs. It seems to me that most of these drivers were written by taking one similar to what needed to be written and starting from there. Maybe this is still how it works in Linux today.

To be fair, V7 was released at a time when there were no common-protocol devices: there was no USB, Bluetooth, SATA, SCSI, etc., with divergent drivers to talk to the hardware but a common transport layer for the protocol. Even MSCP and TMSCP, which were the first inklings of this, were a controller interface, not a transport one. The one SCSI driver from this era I've looked at implemented all of the SCSI protocol itself. Thankfully that controller had a microcontroller for dealing with the physical signalling (unlike a different card for my DEC Rainbow, which did it all in software by bit-banging I/O ports).
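
For reference, the device switch Paul mentions is just a pair of tables of entry points in conf.c; the major device number indexes them, so a driver "conforms to the API" by supplying a row. Roughly (K&R-era C, simplified from memory, so treat the details as approximate):

    struct bdevsw {
        int (*d_open)();
        int (*d_close)();
        int (*d_strategy)();    /* start block I/O on a struct buf */
        struct buf *d_tab;      /* the driver's I/O queue */
    };
    extern struct bdevsw bdevsw[];

    struct cdevsw {
        int (*d_open)();
        int (*d_close)();
        int (*d_read)();
        int (*d_write)();
        int (*d_ioctl)();
        struct tty *d_ttys;     /* non-null for terminal drivers */
    };
    extern struct cdevsw cdevsw[];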
 
> - To the extent that there is such a thing as ‘high-level device drivers’ in early Unix, the structure is less clearly visible. The file system (and there was only one at the time of v7) is placed between the block device switch and the mount table, so to speak. This was structured enough that splicing in other file systems seems to have been fairly easy in the early 80s (the splicing in, not the writing of the file system itself, of course). Starting with 8th edition, the ‘file system switch’ created a clear API for multiple file systems. Arguably, the ‘tty’ subsystem is also a ‘high-level device driver’, but this one lives as custom code together with the serial port device drivers. Also in 8th Edition, ‘streams’ were introduced. One could think of this as a structured approach to high-level device drivers for character mode devices, incl. the ‘tty’ subsystem.

Yes. It took a long time for there even to be common disk partition handling code. For a long time (and certainly in the V7 ports to PC-ish boxes) all of that was in the driver, usually cut and pasted from driver to driver. Only later did better abstractions arise. Network stacks and the like were later inventions.
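
That cut-and-pasted partition code was usually nothing more than a hard-coded table per drive type, in the style of the old BSD disk drivers. A sketch for a hypothetical drive "xx" (the numbers are illustrative only):

    struct size {
        long nblocks;   /* partition length in 512-byte blocks */
        long blkoff;    /* starting block on the drive */
    };

    struct size xx_sizes[8] = {
        { 15884,      0 },      /* a: root */
        { 33440,  15884 },      /* b: swap */
        { 131072,     0 },      /* c: whole drive */
        /* remaining entries zero: unused partitions */
    };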
 
> - I don’t think there was ever anything in early Unix that merged ‘streams’ and the ‘file system switch’ into a single abstraction (but maybe 9P did?).

I think you're right. They barely had a file system switch... And on the BSD side of the Unix world, the network and the file system were two different beasts.
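
For a sense of shape only (this is deliberately not the historical 8th edition definition, just the general idea): a file system switch is one more table of function pointers, indexed by file system type instead of major number:

    struct fsops {
        int (*fs_mount)();
        int (*fs_umount)();
        int (*fs_iget)();   /* map (device, inode number) to an in-core inode */
        int (*fs_read)();
        int (*fs_write)();
    };
    extern struct fsops fsswitch[];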

> > Where I'm trying to put this sort of knowledge into use: I'm starting to spec out kernel bootstraps for the RPi Pico and Pine64 Ox64 boards (ARM32 and RISCV64 respectively) that are not only sufficient to start a V7-ish kernel on each, but are ultimately based on the same design, varying literally only where the hardware strictly necessitates it, yet similar enough that reading the two assembly files side by side yields essentially the same discrete operations.

> I have a similar interest, but to avoid the same chaos as I created before, I’ll respond to this with a PM.

I'd be keen to understand this, but it's mostly a passing fancy... 

Warner