[TUHS] PCS Munix kernel source

Paul Ruizendaal pnr at planet.nl
Sun Aug 14 06:40:26 AEST 2022

> On Aug 11, 2022, at 11:01 AM, Holger Veit <hveit01 at web.de> wrote:
>> My understanding so far (from reading the paper a few years ago) is that Newcastle Connection works at the level of libc, substituting system calls like open() and exec() with library routines that scan the path and, if it is a network path, invoke user mode routines that use remote procedure calls to give the illusion of a networked kernel. I’ve briefly looked at the Git repo, but I do not see that structure in the code. Could you elaborate a bit more on how Newcastle Connection operates in this kernel? Happy to communicate off-list if it goes into too much detail.
> Maybe the original NC did so, but here there are numerous additions to
> the kernel, including a new syscall uipacket() which is the gateway into
> the MUNET/NC implementation. Stuff is in /usr/sys/munet; the low level
> networking is in uipacket.c and uiswtch.c, which process RPC
> open/close/read/write/exec etc. calls, as well as supporting master and
> diskless nodes. The OS code is full of "if (master) {...} else {...}"
> code which then redirects detected access to remote paths to the network
> handler.

I came across a later paper for Unix United / Newcastle Connection. It seems to consider moving parts of the code into the kernel, to combat executable bloat (prior to shared libraries, the relevant libc code would be copied into every executable).


>> Re-reading the Newcastle Connection paper also brought up some citations of Bell Labs work that seems to have been lost. There is a reference to “RIDE”, which appears to be a system similar to Newcastle Connection. The RIDE paper is from 1979 and it mentions that RIDE is a Datakit re-implementation of an earlier system that ran on Spider. Any recollections about these things among the TUHS readership?
>> The other citation is for J. C. Kaufeld and D. L. Russell, "Distributed UNIX System", in Workshop on Fundamental Issues in Distributed Computing, ACM SIGOPS and SIGPLAN (15-17 Dec. 1980). It seems contemporaneous with the Luderer/Marshall/Chu work on S/F-Unix. I could not find this paper so far. Here, too, any recollections about this distributed Unix among the TUHS readership?

> Good to mention this. I am interested in such stuff as well.

I found a summary of that ACM SIGOPS workshop. There is a half-page summary of David Russell’s presentation on “Distributed Unix”:


2.4 David Russell: Distributed UNIX

Distributed UNIX is an experimental system consisting of a collection of machines running a modified version of UNIX.

Communication among processors is performed by a packet switching network built on Datakit hardware. The network has
a capacity of roughly 7.5 megabits per second, shared among several channels. Addressing is done by channel, not by target
processor. Messages are received in the order sent. To the user, communication takes the form of a virtual circuit,
a logical full-duplex connection between processors.

A virtual circuit can be modeled as a box with four ends: DIN and DOUT control data transmission, and CIN and COUT
control transmission of control information. Circuits can be spliced together, or attached to devices. Circuits can
also be joined into groups. A virtual circuit is set up in the following way: a socket is created with a globally
unique name, and processes then request connections to the named socket. Routing information is implicit. Virtual
circuits support location transparency, since sockets can move, but not replication transparency. If a circuit breaks,
it is set up again, although it is up to the user to handle recovery of state information. Machine failure will destroy
a virtual circuit.

A transparent distributed file system was set up. When a remote file is accessed, a socket name is generated, which is
used to establish a connection with a daemon at the file server processor. The daemon carries out the file access on
behalf of the remote user. In conclusion, offloading of tasks was found to work well. The path manager maintains very
little state. The splice interface between virtual circuits was found to be very efficient, although UNIX scheduling
was not appropriate for fast setup of circuits.


The modeling of virtual circuits sounds like a mid-point between Chesson’s multiplexed files and Ritchie’s streams.

The file system actually sounds like it could be the RIDE system. Maybe this Distributed Unix and RIDE are one and the same thing (although the original Newcastle Connection paper suggests they are not: https://inis.iaea.org/collection/NCLCollectionStore/_Public/16/081/16081910.pdf?r=1&r=1).

