[COFF] Terminology query - 'system process'?

Paul Winalski paul.winalski at gmail.com
Thu Dec 21 06:29:17 AEST 2023


On 12/20/23, Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>
> To give an example; the first DEC machine with an IC
> processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
> LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
> CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
> after that used microprocessors.

The VAX-11/780, 11/750, and 11/730 were all implemented using
7400-series discrete, gate-level TTL integrated circuits.  The planned
follow-ons were Venus (ECL gate level; originally planned to be the
11/790 but, after many delays, released as the VAX 8600), Gemini (a
two-board VAX implementation; cancelled), and Scorpio (a chip set
eventually released as the VAX 8200).  Superstar, released as the
VAX-11/785, was a re-implementation of the 11/780 using faster TTL.
The floating point accelerator board for the 11/785 was implemented
using Fairchild FAST TTL.  The first microprocessor implementation of
the VAX architecture was the MicroVAX-I.  It, and all later VAX
processors, implemented only the MicroVAX subset of the VAX
architecture in hardware and firmware.  The instructions left out were
the character string instructions (except for MOVC), decimal
arithmetic, H-floating point, octaword, and a few obscure, little-used
instructions such as EDITPC and CRC.  The missing instructions were
simulated by the OS.  These instructions were originally dropped from
the architecture because there wasn't enough real estate on a chip to
hold the microcode for them.  It's interesting that they continued to
be simulated in macrocode even after several process shrink cycles
made it feasible to move them to microcode.

I wrote a distributed (computation could be done in parallel over a
DECnet LAN or WAN) Mandelbrot Set computation and display program for
VAX/VMS.  It was implemented in PL/I.  The actual display was
incredibly sluggish, and I went in search of the performance problem.
It turned out to be in the Cartesian-to-screen coordinate translation
subroutine.  The program did its computations in VAX D-float double
precision floating point.  The window was 800x800 pixels and this was
divvied up into 32x32-pixel cells for distributed, parallel
computation.  The expression "800/32" occurred in the coordinate
conversion routine.  Under PL/I's data type semantics, this
expression is "fixed decimal(3,0) divided by fixed decimal(2,0)".
This expression was being multiplied by a VAX D-float (float decimal
in PL/I) and this mixture of fixed and float triggered one of the more
bizarre of PL/I's baroque implicit data type conversions.  First, the
D-float values were converted to full-precision (15-digit) packed
decimal by calling one of the Fortran RTL's subroutines.  All the
arithmetic in the routine was done in full-precision packed decimal.
The result was then converted back to D-float (again by a Fortran RTL
call).  There were effectively only two instructions in the whole
subroutine that weren't simulated in macrocode, and one of those was
the RET instruction at the end of the routine!  I changed the
offending expression to "800E0/32E0" and got a 100X
speedup--everything was now being done in (hardware) D-float.
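For anyone curious about the mechanics, here is a minimal sketch of
the two forms in PL/I (the declarations and variable names are
illustrative, not the original code; float decimal(15) mapping to
D-float is the VAX PL/I behavior described above):

    declare scale float decimal(15);  /* full precision; D-float on VAX */
    declare pixel float decimal(15);

    /* Slow form: 800 and 32 are fixed decimal(3,0) and fixed
       decimal(2,0) constants.  Their quotient is full-precision fixed
       decimal, and mixing it with scale forces the whole expression
       through packed-decimal arithmetic, which was simulated in
       macrocode as described above.                                   */
    pixel = scale * (800/32);

    /* Fast form: 800E0 and 32E0 are float decimal constants, so both
       the division and the multiplication stay in hardware D-float.   */
    pixel = scale * (800E0/32E0);

Nothing else needs to change; the E0 suffixes on the constants are
enough to keep the whole expression in floating point.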

-Paul W.

