[TUHS] 68000 vs. 8086 (was Algol68 vs. C at Bell Labs)

Dan Cross crossd at gmail.com
Fri Jul 1 04:52:36 AEST 2016


On Thu, Jun 30, 2016 at 1:17 PM, <scj at yaccman.com> wrote:

> My memory was that the 68000 gave the 8086 a pretty good run for its
> money, but when Moto came out with a memory management chip it had some
> severe flaws that made paging and fault recovery impossible, while the
> equivalent features available on the 8086 line were tolerable.  There were
> some bizarre attempts to page with the 68000 (I remember one product that
> had two 68000 chips, one of which was solely to sit on the shoulder of the
> other and remember enough information to respond to faults!).  By the time
> Moto fixed it, the 8086 had taken the field...
>

Brantley Coile mentioned this here a couple of years ago: the M68k couldn't
restart auto-increment/decrement style instructions after a fault (there
wasn't enough information in the trap frame to ensure the side effects
came out correct), which made, e.g., demand paging hard. He did a Unix port
in which he simply loaded the process completely before switching to it
(i.e., like 32V and earlier). But if I understand correctly, this was fixed
by the time the 68010 came out (1982). And it wouldn't have mattered for a
non-paging system. So in principle the 68000 could support a 7th-Edition
style Unix and indeed did so. Surely it could also have supported a similar
operating system on a hypothetical IBM PC built around the 68k.
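
To make the restart problem concrete, here's a tiny C model (purely
illustrative, not 68000 code; the register names and the point at which the
side effect commits are assumed): a MOVE.L (A0)+,(A1)+-style instruction
bumps the source pointer before the destination write takes a bus error, and
a trap frame that records only the PC can't undo that, so naively re-running
the instruction silently drops a word.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model of one post-increment move instruction.  Where exactly
     * the side effect happens is assumed for illustration; real 68000
     * microcode differs. */
    struct cpu { uint32_t a0, a1, pc; };

    static int move_post_inc(struct cpu *c, uint32_t *mem, int dest_mapped)
    {
        uint32_t val = mem[c->a0];
        c->a0 += 1;                     /* side effect commits early */
        if (!dest_mapped)
            return -1;                  /* bus error; a0 already bumped */
        mem[c->a1] = val;
        c->a1 += 1;
        c->pc += 1;
        return 0;
    }

    int main(void)
    {
        uint32_t mem[8] = { 10, 20, 30, 0, 0, 0, 0, 0 };
        struct cpu c = { 0, 4, 0 };

        /* First attempt faults on the destination "page"... */
        if (move_post_inc(&c, mem, 0) < 0) {
            /* ...and a handler that knows only the PC just re-runs the
             * instruction: a0 was never rolled back, so the value 10
             * is skipped. */
            move_post_inc(&c, mem, 1);
        }
        move_post_inc(&c, mem, 1);
        move_post_inc(&c, mem, 1);

        printf("%u %u %u\n", mem[4], mem[5], mem[6]);   /* 20 30 0, not 10 20 30 */
        return 0;
    }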

Still, the point that the 68451 MMU was pretty lame is well taken. The
segment table was too small (96 entries?), and it was clearly designed to
support segmented memory rather than paging; it was inadequate to the latter
task. The 68851, available for the 68020, got it right; supposedly it could
be used with the 68010 as well, but I don't know that anyone ever tried
that in a real product.
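
The difference is easy to see in miniature (the structures below are
illustrative, not the actual 68451 or 68851 formats): demand paging wants a
table indexed by the high address bits so the fault handler can map any one
page on its own, whereas a small pool of variable-sized segment descriptors
has to be searched and needs one entry per resident contiguous chunk, which
runs out quickly for a sparse address space.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SHIFT 12
    #define NPAGES     1024                 /* 4 MB of virtual space, 4 KB pages */

    struct seg_desc { uint32_t base, len, phys; };  /* segment-style mapping */
    static uint32_t page_table[NPAGES];             /* 0 = not present */

    /* Page-style lookup: one array index; any page can be valid on its own. */
    static uint32_t page_xlate(uint32_t va)
    {
        uint32_t frame = page_table[va >> PAGE_SHIFT];
        return frame ? frame + (va & ((1u << PAGE_SHIFT) - 1)) : 0;
    }

    /* Segment-style lookup: linear match against a small descriptor table. */
    static uint32_t seg_xlate(const struct seg_desc *t, size_t n, uint32_t va)
    {
        for (size_t i = 0; i < n; i++)
            if (va >= t[i].base && va - t[i].base < t[i].len)
                return t[i].phys + (va - t[i].base);
        return 0;
    }

    int main(void)
    {
        struct seg_desc segs[] = { { 0x0000, 0x2000, 0x10000 } };

        page_table[5] = 0x40000;            /* fault handler maps just page 5 */

        printf("page:    %#x\n", page_xlate((5u << PAGE_SHIFT) + 0x10));
        printf("segment: %#x\n", seg_xlate(segs, 1, 0x1010));
        return 0;
    }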

> I got a lot more respect for the 8086 architecture when working at
> Transmeta.  The instruction set encoding means that programs are small,
> and that means that, for a given icache size, the cache hit rate was much
> better than for our (wide word) machine.  By the time we had upped the
> size of our caches, the increased area and cost made our chip much less
> competitive.
>

I've heard that this is still one of the selling points of the x86
instruction set. As someone put it to me not too long ago, "think of it as
a dense bytecoding over the underlying RISC core." We're back to a P-code.
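
A crude way to see the effect (the numbers below are made up for
illustration, not from Transmeta): for a fixed icache budget, a denser
encoding means more of the hot code is resident at once.

    #include <stdio.h>

    int main(void)
    {
        double icache_kb   = 16.0;  /* fixed icache budget */
        double hot_code_kb = 24.0;  /* hot code under a wide, fixed-width encoding */
        double density     = 0.70;  /* assume the dense encoding is ~30% smaller */

        /* Crude model: fraction of the hot code that fits in the cache. */
        double wide  = icache_kb / hot_code_kb;
        double dense = icache_kb / (hot_code_kb * density);
        if (wide  > 1.0) wide  = 1.0;
        if (dense > 1.0) dense = 1.0;

        printf("resident, wide encoding : %2.0f%%\n", wide  * 100);
        printf("resident, dense encoding: %2.0f%%\n", dense * 100);
        return 0;
    }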

        - Dan C.