[TUHS] core

Noel Chiappa jnc at mercury.lcs.mit.edu
Tue Jun 19 03:56:38 AEST 2018


    > From: Clem Cole

    > My experience is that more often than not, it's less a failure to see
    > what a successful future might bring, and often one of well '*we don't
    > need to do that now/costs too much/we don't have the time*.'

Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.

By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.

In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.


    > I can imagine that he did not want to spend on anything that he thought
    > was wasteful.

Understandable. But see above...

The art is in finding a path that leaves the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.

A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway... lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates that an I/O device register is being looked
at. Without that, Q18 devices would either i) have had to incur the cost now of
more bus address-line transceivers, or ii) have stopped working when the bus
was upgraded to 22 address lines.

They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).
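
To make the BBS7 trick concrete, here's a rough C sketch of the register-select
logic, written as software purely for illustration (on a real card this is a
handful of gates, and the device base/size below are made up):

    #include <stdio.h>

    /* Illustrative only: why a Q18 device's register decoder doesn't care
     * how many address lines the bus has.  BBS7 is asserted by the bus
     * master whenever the address falls in the 8 KB I/O page, so the
     * device compares just the low-order offset bits against its own
     * register block - it never looks at the high address lines at all.
     */
    #define IO_PAGE_BITS  13                 /* 8 KB I/O page            */
    #define IO_PAGE_MASK  ((1u << IO_PAGE_BITS) - 1)

    #define DEV_BASE  0017560u               /* made-up register block   */
    #define DEV_SIZE  8u                     /* four 16-bit registers    */

    /* 1 = this device should respond to the current bus cycle */
    static int device_selected(unsigned long bus_addr, int bbs7)
    {
        unsigned offset = (unsigned)(bus_addr & IO_PAGE_MASK);

        if (!bbs7)                           /* not an I/O page cycle    */
            return 0;
        return offset >= DEV_BASE && offset < DEV_BASE + DEV_SIZE;
    }

    int main(void)
    {
        /* Same register offset, reached via an 18-bit and a 22-bit I/O page. */
        printf("%d %d\n",
               device_selected(0777560ul, 1),     /* Q18 I/O page address */
               device_selected(017777560ul, 1));  /* Q22 I/O page address */
        return 0;
    }

The same decode selects the same register whether the I/O page sits at the top
of an 18-bit or a 22-bit address space - which is exactly why the non-DMA Q18
boards kept working.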


    > Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
    > thinking Brooks was nuts

Don't think I've heard that one?


    >> the decision to remove the variable-length addresses from IPv3 and
    >> substitute the 32-bit addresses of IPv4.

    > I always wondered about the back story on that one.

My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.

(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)

    > 32-bits seemed infinite in those days and no body expected the network
    > to scale to the size it is today and will grow to in the future

Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
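
For anyone who has forgotten the details of that kludge: the class, and hence
the network/host split, got read out of the leading bits of the 32-bit address.
A throwaway C sketch, just to show the idea (the example addresses are
arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    /* Classful addressing in one function: the top bits of the address
     * select the class, which fixes the network/host boundary.        */
    static char ip_class(uint32_t a)              /* host byte order */
    {
        if ((a & 0x80000000u) == 0u)          return 'A';  /* 0...  net = 8 bits  */
        if ((a & 0xC0000000u) == 0x80000000u) return 'B';  /* 10... net = 16 bits */
        if ((a & 0xE0000000u) == 0xC0000000u) return 'C';  /* 110.. net = 24 bits */
        return '?';                                        /* D/E came later      */
    }

    int main(void)
    {
        printf("%c %c %c\n",
               ip_class(0x0A000001u),   /* 10.0.0.1   -> A */
               ip_class(0x80020304u),   /* 128.2.3.4  -> B */
               ip_class(0xC0210405u));  /* 192.33.4.5 -> C */
        return 0;
    }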

And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
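
The generality shows up right in the packet layout (sketched here from RFC 826,
field names roughly following the RFC's): the hardware and protocol address
spaces, and their lengths, are carried in the packet itself, so any
hardware/protocol combination works.

    #include <stdint.h>

    /* ARP, per RFC 826.  Fields are big-endian on the wire; a real
     * implementation also has to worry about struct packing.          */
    struct arp_packet {
        uint16_t ar_hrd;   /* hardware address space, e.g. 1 = Ethernet    */
        uint16_t ar_pro;   /* protocol address space, e.g. 0x0800 = IP     */
        uint8_t  ar_hln;   /* bytes in a hardware address (6 for Ethernet) */
        uint8_t  ar_pln;   /* bytes in a protocol address (4 for IP)       */
        uint16_t ar_op;    /* 1 = request, 2 = reply                       */
        /* Then, with sizes given by ar_hln/ar_pln rather than fixed here:
         *   sender hardware address, sender protocol address,
         *   target hardware address, target protocol address.             */
    };

Contrast that with baking the hardware address into the low bits of the IP
address, which only works as long as the hardware address is small enough to
fit - which 48-bit Ethernet addresses obviously aren't.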


    >> So, is poor vision common? All too common.

    > But to be fair, you can also end up with being like DEC and often late
    > to the market.

Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...

This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But a communication system is
different - if you convert, you cut yourself off from everyone who hasn't. A
new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.

So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.


    > I think in both cases would have been allowed Alpha to be better
    > accepted if DEC had shipped earlier with a few hacks, but them improved
    > Tru64 as a better version was developed (*i.e.* replace the memory
    > system, the I/O system, the TTY handler, the FS just to name a few that
    > got rewritten from OSF/1 because folks thought they were 'weak').

But you can lose with that strategy too.

Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - a common result when
you have the experience/knowledge of the first pass.

Unfortunately, by that time it had acquired a reputation for being 'horribly
slow and inefficient', and in a lot of ways it never kicked that:

  http://www.multicians.org/myths.html

Sigh, sometimes you can't win!

      Noel


