[TUHS] Questions for TUHS great minds

Steve Johnson scj at yaccman.com
Thu Jan 12 04:34:29 AEST 2017



>>  Or does the idea of a single OS disintegrate into a fractal cloud
>>  of zero-cost VM's?  What would a meta-OS need to manage that?  Would
>>  we still recognize it as a Unix?

This may be off topic, but the aim of studying history is to avoid the
mistakes of the past.  And part of that is being honest about where
we are now...

IMHO, hardware has left software in the dust.  I figured out that if
cars had evolved since 1970 at the same rate as computer memory, we
could now buy 1,000 Tesla Model S's for a penny, and each would have a
top speed of 60,000 MPH.  That's roughly a combined price-performance
factor of a trillion in less than 50 years.
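
For the skeptical, here's the back-of-the-envelope version, assuming
a 1970 car cost about $3,500 and topped out around 120 MPH (my round
numbers, not gospel):

    price:  $3,500 per car --> $0.01 per 1,000 cars
            = a factor of about 3.5 x 10^8
    speed:  120 MPH --> 60,000 MPH = a factor of 5 x 10^2
    both:   3.5 x 10^8 * 5 x 10^2 ~= 2 x 10^11 -- a trillion,
            give or take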

For quite a while, hardware (Moore's law) was giving software
cover--hardware speed and capacity were growing faster than the rate
of software bloat.  For the last decade, that's stopped happening
(the hardware speed, not the software bloat!).  What hardware is now
giving us is many, many more of the same thing rather than the same
thing faster.  To fully exploit that hardware, the old model of
telling a processor "do this, then do this, then do this" simply
doesn't scale.  Things like multicore look to me like a desperate
attempt to hold onto an outmoded model of computing in the face of a
radically different hardware landscape.
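
To put a number on "doesn't scale", the usual Amdahl's law arithmetic
applies (nothing here is specific to any particular chip): if a
fraction p of a program can be spread across n processors, the best
speedup is 1 / ((1-p) + p/n).  Even a 99%-parallel program on 16,000
processors tops out below 100x:

    speedup = 1 / (0.01 + 0.99/16000)
            ~= 1 / 0.01006
            ~= 99.4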

The company I'm working for now, Wave Computing, is building a chip
with 16,000 8-bit processors.  These processors know how to snuggle
up together and do 16-, 32-, and 64-bit arithmetic.  The chip is
intended to be part of systems with as many as a quarter million
processors, with machine learning being one possible target.  There
are no global signals on the chip (e.g., no central clock).  (The
hardware people aren't perfect--it hasn't yet sunk in that making a
billion transistors operate fast while staying in sync is ultimately
an impossible goal as line sizes get smaller.)
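
To make the "snuggle up" idea concrete, here is a toy C sketch of how
narrow processors can compose wider arithmetic by handing a carry to
a neighbor.  This illustrates the general ripple-carry idea only; it
is my caricature, not Wave's actual mechanism:

    #include <stdint.h>

    /* Toy model: four 8-bit "processors" cooperate on a 32-bit add
       by passing a carry to the next lane -- ripple carry in
       software.  Purely illustrative.                              */
    uint32_t add32_from_8bit(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        unsigned carry = 0;
        for (int lane = 0; lane < 4; lane++) {
            unsigned ai = (a >> (8 * lane)) & 0xff; /* this lane's byte of a */
            unsigned bi = (b >> (8 * lane)) & 0xff; /* this lane's byte of b */
            unsigned sum = ai + bi + carry;         /* 8-bit add + carry in  */
            carry = sum >> 8;                       /* hand overflow onward  */
            result |= (uint32_t)(sum & 0xff) << (8 * lane);
        }
        return result;
    }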

Chips like ours are not intended to be general purpose -- they act
more like FPGAs.  They allow tremendous resources to be focused on a
single problem at a time, and the focus can change quickly as
desired.  They aren't good for everything.  But I do think they
represent the current state of hardware pretty well, and the trends
point strongly towards even more of the same.

The closest analogy to programming the chip is microcode -- it's as
if you have a programmable machine with hundreds or thousands of
"instructions" that are far more powerful than traditional
instructions, able to operate on structured data and do many
arithmetic operations.  And you can make new instructions at will.
The programming challenge is to wire these instructions together to
get the desired effect.
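
For the flavor of "wiring instructions together", here is a crude C
caricature of a dataflow graph: each node is an "instruction" with
input wires, and evaluation fires a node once its inputs are ready.
All the names and structure are invented for illustration -- this is
not Wave's toolchain or API:

    #include <stdio.h>

    /* Caricature of the model: a program is a graph of nodes, not
       a sequence of steps.  Illustrative only.                     */
    typedef struct node {
        int (*op)(int, int);    /* the "instruction" itself         */
        struct node *in0, *in1; /* wires from upstream nodes        */
        int value;              /* latched result once fired        */
    } node;

    static int add(int a, int b) { return a + b; }
    static int mul(int a, int b) { return a * b; }

    static int fire(node *n)    /* pull-driven graph evaluation     */
    {
        if (n->in0 == NULL)     /* leaf: value supplied by caller   */
            return n->value;
        return n->value = n->op(fire(n->in0), fire(n->in1));
    }

    int main(void)
    {
        node x = { .value = 3 }, y = { .value = 4 };
        node sum  = { add, &x, &y };   /* wire: x + y               */
        node prod = { mul, &sum, &x }; /* wire: (x + y) * x         */
        printf("%d\n", fire(&prod));   /* prints 21                 */
        return 0;
    }

The point of the caricature: the program *is* the wiring, and "do
this, then do this" never appears.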

This may not be the only path to the future, and it may fail to
survive or be crowded out by other paths.  But the path we have been
on for the last 50 years is a dead end, and the quicker we wise up and
grab the future, the better...

Steve

PS: another way to visualize the hardware progress:  If we punched
out a petabyte of data onto punched cards, the card deck height would
be 6 times the distance to the moon!  Imagine the rubber band....
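
For anyone who wants to check that one, assuming 80 bytes per card
and the classic 0.007-inch card thickness:

    10^15 bytes / 80 bytes per card  =  1.25 x 10^13 cards
    1.25 x 10^13 cards * 0.007 in    ~= 8.75 x 10^10 in
                                     ~= 2.2 million km
    2.2 million km / 384,400 km      ~= 6 moon distances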
