[TUHS] PDP-11 legacy, C, and modern architectures

Perry E. Metzger perry at piermont.com
Sat Jun 30 04:41:00 AEST 2018


On Fri, 29 Jun 2018 08:58:48 -0400 "Theodore Y. Ts'o" <tytso at mit.edu>
wrote:
> All of this is to point out that talking about 2998 threads really
> doesn't mean much.  We shouldn't be talking about threads; we should
> be talking about how many CPU cores we can usefully keep busy at the
> same time.  Most of the time, for desktops and laptops, except for
> brief moments when you are running "make -j32" (and that's only for
> us weird programmer-types; we aren't actually the common case),
> the user-facing CPU is twiddling its fingers.

On the other hand, when people are doing machine learning stuff on
GPUs or TPUs, or are playing a video game on a GPU, or are doing
weather prediction or magnetohydrodynamics simulations, or are
calculating the bonding energies of molecules, they very much _are_
going to use each and every computation unit they can get their hands
on. Yah, a programmer doesn't use very much CPU unless they're
compiling, but the world's big iron isn't there to make programmers
happy these days.

Even if "all" you're doing is handling instant message traffic for a
couple hundred million people, you're going to be doing a lot of stuff
that isn't easily classified as concurrent vs. parallel
vs. distributed any more, and the edges all get funny.

> > 4. The reason most people prefer to use one very high perf.
> >    CPU rather than a bunch of "wimpy" processors is *because*
> >    most of our tooling uses only sequential languages with
> >    very little concurrency.
>
> The problem is that I've been hearing this excuse for two decades.
> And there have been people who have been working on this problem.
> And at this point, there's been a bit of "parallelism winter" that
> is much like the "AI winter" in the 80's.

The parallelism winter already happened, in the late 1980s. Larry may
think I'm a youngster, but I worked over 30 years ago on a machine
called "DADO" built at Columbia with 1023 processors arranged in a
tree, with MIMD and SIMD features depending on how it was
configured. Later I worked at Bellcore on something called the Y
machine, built for massive simulations.

All that parallel stuff died hard because general purpose CPUs kept
getting faster too quickly for it to be worthwhile. Now, however,
things are different.

> Lots of people have been promising wonderful results for a long time;
> Sun bet their company (and lost) on it; and there hasn't been much
> in the way of results.

I'm going to dispute that. Sun bet the company on the notion that
people wanted machines it could sell at ridiculously high margins
with pretty low performance per dollar, when they could go out and
buy Intel boxes running RHEL instead. As has been said elsewhere in
the thread, what failed wasn't Niagara so much as Sun's entire
attempt to sell people what were effectively mainframes, at a point
where customers no longer wanted giant boxes with low throughput per
dollar.

Frankly, Sun jumped the shark long before. I remember some people (Hi
Larry!) valiantly continuing to add minor variants to the SunOS
release number (what did it get to? SunOS 4.1.3_u1b or some such?) at
a point where the firm had committed to Solaris. They stopped shipping
compilers so they could charge you for them, stopped maintaining the
desktop, and stopped updating the userland utilities, which got
ridiculously behind (Solaris still, when I last checked, had a /bin/sh
that wouldn't handle $( ) ). Everything went into selling bigger and
bigger iron, which paid off for a little while, until it didn't any
more.

> Sure, there are specialized cases where this has been useful ---
> making better nuclear bombs with which to kill ourselves, predicting
> the weather, etc.  But for the most part, there hasn't been much
> improvement for anything other than super-specialized use cases.

Your laptop's GPUs beat its CPUs pretty badly on power, and you're
using them every day. Ditto for the GPUs in your cellphone. You're
invoking big parallel back-ends at Amazon, Google, and Apple all the
time over the network, too, to do things like voice recognition and
finding the best route on a large map. They might be "specialized"
uses, but you're invoking them enough times an hour that maybe they're
not that specialized?

> > 6. The conventional wisdom is parallel languages are a failure
> >    and parallel programming is *hard*.  Hoare's CSP and
> >    Dijkstra's "elephants made out of mosquitos" papers are
> >    over 40 years old.
>
> It's a failure because there haven't been *results*.  There are
> parallel languages that have been proposed by academics --- I just
> don't think they are any good, and they certainly haven't proven
> themselves to end-users.

Er, Erlang? Rust? Go has CSP and is in very wide deployment? There's
multicore stuff in a bunch of functional languages, too.
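
To give a flavor of that channels-and-processes style (Go's channels
being the best-known realization), here is a minimal sketch using
Rust's standard library; the worker loop and values are invented for
illustration, not taken from any of those projects. Threads hand
results over a channel instead of sharing mutable state:

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // One channel: workers send results, the main thread receives them.
        let (tx, rx) = mpsc::channel();

        for id in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || {
                // Communicate by message, not by shared mutable state:
                // the CSP idea in miniature.
                tx.send((id, id * id)).unwrap();
            });
        }
        drop(tx); // the receive loop below ends once every sender is gone

        for (id, result) in rx {
            println!("worker {} produced {}", id, result);
        }
    }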

If you're using Firefox right now, a large chunk of the code is
running multicore, using Rust to ensure that the parallelism is
safe. Rust does that using type theory work (on linear types) from the
PL community. It's excellent research that's paying off in the real
world.
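
For a flavor of how the type system enforces that (a toy sketch, not
anything resembling Firefox's actual code): data shared across threads
has to be wrapped in types that are explicitly safe to share, and the
compiler rejects anything else before the program can run.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc gives shared ownership across threads; Mutex guarantees
        // exclusive access. Try to mutate a plain integer from several
        // threads instead and the program simply does not compile:
        // the data race is ruled out at compile time.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..8)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("count = {}", *counter.lock().unwrap());
    }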

(And on the original point, you're also using the GPU to render most
of what you're looking at, invoked from your browser, whether Firefox,
Chrome, or Safari. That's all parallel stuff and it's going on every
time you open a web page.)

Perry
-- 
Perry E. Metzger		perry at piermont.com


