[TUHS] Early Linux and BSD

Clem Cole clemc at ccc.com
Wed Jan 22 06:27:04 AEST 2020


1982: the dual-processor MC500/DP, originally with 68000s, upgraded to
68010s shortly after they became available [see below].
1984: the 16-processor MC5000/700 using the '020 [the 500 was renamed the
MC5000/500, and a single-processor MC5000/300 was also introduced.  In the
/700 and /300 designs the fixor was unneeded and the base '020 serviced its
own faults].

FWIW: the Purdue VAX predates the 500/DP, but it was a one-off that George
made.  The Sequent MP box would come about 3 or 4 years later.

Through RTU 2.x, the OS originally ran Purdue-VAX style [*Goble/Marsh, ISCA
'82: Proceedings of the 9th Annual Symposium on Computer Architecture, "A
Dual Processor VAX 11/780," pages 291–298*], in that all interrupts and
system calls went to a 'master' and the second MPU/CPU board ran as a 'slave'
(*i.e.*, running user-mode code).  By RTU 3.0, ~12 months later, full locking
was in place and each processor could service anything.


Note each CPU/MPU board had two processor chips on it, the executor and the
fixor, but the board was really not a multiprocessor - the second chip was
literally just running kernel code to service the page fault.  Thus (not
including the other 68000 processors on the graphics and I/O boards) the
500/DP had either four 68000s, or two 68010s and two 68000s, in it when it
had two CPU or two MPU boards in the backplane.  The idea was originally
proposed for the Z8000 by Forest Baskett at an early Asilomar conference.
The formal citation is: Forest Baskett, "*Pascal and Virtual Memory in a
Z8000 or MC68000 Based Design Station*," COMPCON 80, Digest of Papers,
pp. 456-459, IEEE Computer Society, Feb. 25, 1980.

On Tue, Jan 21, 2020 at 2:14 PM Warner Losh <imp at bsdimp.com> wrote:

>
>
> On Tue, Jan 21, 2020, 11:46 AM Clem Cole <clemc at ccc.com> wrote:
>
>> sorry...    all *MPU* boards had to be the same revision, but we may have
>> done the same with the CPU boards.
>>
>
> When did Masscomp ship their first MP system?
>
> Warner
>
> On Tue, Jan 21, 2020 at 1:43 PM Clem Cole <clemc at ccc.com> wrote:
>>
>>>
>>>
>>> On Tue, Jan 21, 2020 at 12:18 PM Jon Steinhart <jon at fourwinds.com>
>>> wrote:
>>>
>>>> My memory is very very very fuzzy on this.  I seem to recall that
>>>> microcode
>>>> state was pushed onto a stack in certain cases,
>>>
>>> State, not the code.
>>>
>>> In fact, Masscomp, having built the first MP UNIX box, ran into this
>>> problem early on.  Different processor steppings had different internal
>>> microcode state on the stack after an IRQ.  If you resumed with a processor
>>> that was a different revision, the wrong state was restored.
>>>
>>> Will may remember this, but Masscomp issued strict orders to the FEs that
>>> all CPU boards had to be the same revision.  You could not just swap a CPU
>>> board; they had to go as sets.  It was a real bummer.
>>>
>>> Moto fixed that with the 020 and later devices as more people made MP
>>> systems.
>>>
>>>
>>>
>>>
>>>
>>>> ...  just heard grumbles from other folks about it.
>>>>
>>> Probably me ...  it took me, tjt, and Terry Hayes about 3-4 weeks to
>>> figure out that problem.  It was not originally documented, other than
>>> to state that on certain faults X bytes of reserved information were
>>> pushed on the stack.
>>>
>>> BTW: I don't remember, but it may have started with the 68010.
>>> Because before that, the 'executor' was wait-stated and the fixor handled
>>> and fixed the fault, so the 68000 never actually saw a fault in the
>>> original Masscomp CPU board.  The "MPU" board was the same board with a
>>> couple of PALs changed and a 68010 as the executor.  It was allowed to
>>> actually fault and do something else while the fixor corrected the fault.
>>> But the key is that when the fault was repaired, another executor on a
>>> different MPU board could be the processor that 'returned' from the fault.
>>> That ended up being a no-no.
>>>
>>
