> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Core memory wiped out competing technologies (Williams tube, mercury
> delay line, etc) almost instantly and ruled for over twenty years.
I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.
In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!
There's simply no question that without core, computers would not have
advanced (in use, societal importance, technical depth, etc) at the speed
they did. It was one of the most consequential steps in the
development of computers to what they are today: up there with transistors,
ICs, DRAM and microprocessors.
> Yet late in his life Forrester told me that the Whirlwind-connected
> invention he was most proud of was marginal testing
Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.
In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.
Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.
> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.
It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:
> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.
If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (to say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
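To put rough numbers on that extrapolation (a back-of-the-envelope sketch
in Python; the one-failure-a-year radio figure is assumed purely for
illustration, not taken from Whirlwind records):

    # MTBF scales inversely with tube count. Assume one tube failure
    # per year in a 5-tube radio (an illustrative figure only).
    HOURS_PER_YEAR = 8766
    tube_life_hours = 5 * HOURS_PER_YEAR    # mean life of one tube

    for tubes in (5, 5000, 50000):
        mtbf = tube_life_hours / tubes      # machine-level MTBF, hours
        print(f"{tubes:6d} tubes -> a failure every {mtbf:7.1f} hours")

    #      5 tubes -> a failure every  8766.0 hours (one a year)
    #   5000 tubes -> a failure every     8.8 hours (one per shift)
    #  50000 tubes -> a failure every     0.9 hours (one per hour)

Without marginal testing flushing weak tubes out during scheduled
maintenance, a failure every shift or every hour is exactly the crap
shoot described above.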
On 2018-06-16 21:00, Clem Cole <clemc(a)ccc.com> wrote:
> below...
>
> On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa wrote:
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>> have this vague memory of a quote from Gordon Bell admitting that was a
>> mistake, but I don't recall exactly where I saw it.)
> I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.
I think the paper you both are referring to is "What have we learned
from the PDP-11?", by Gordon Bell and Bill Strecker, in 1977.
There are some additional comments in
> My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was
very clearly designed by DEC as the system bus for all needs, and was
used just like that until the 11/70, which introduced a separate memory
bus. In all previous PDP-11s, both memory and peripherals were connected
on the Unibus.
Why it only has 18 bits, I don't know. It might reflect the fact that
most things at DEC were either 12 or 18 bits wide at the time, and 12
was obviously not going to cut it. But that is pure speculation on my
part.
But if you read that paper again (the one from Bell), you'll see that he
was pretty much the source for the Unibus as well, and for the whole
idea of having it serve both memory and peripherals. But that does not
tell us anything about why it got 18 bits. It also, incidentally, has 18
data bits, though that is mostly ignored by all systems. I believe the
KS-10 made use of that, and maybe the PDP-15; I suspect the same would
be true for the address bits. But neither system was likely involved
when the Unibus was created; they just made fortuitous use of it when
they were designed.
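For scale, here is the arithmetic behind those widths (my own sketch,
just powers of two, not anything from the paper):

    # Bytes of address space for the bus widths discussed above.
    for bits in (16, 18, 22):
        size = 1 << bits
        print(f"{bits} address bits -> {size:>9,} bytes ({size >> 10:,} KB)")

    # 16 address bits ->    65,536 bytes (64 KB)    - one program's view
    # 18 address bits ->   262,144 bytes (256 KB)   - the Unibus limit
    # 22 address bits -> 4,194,304 bytes (4,096 KB) - the later 22-bit limit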
> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years. Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful. Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like: Someone like me saying: *"Henk, 18 bits
> is not going to cut it."* He might have replied something like: *"Bool
> sheet *[a Dutchman's way of cursing in English], *we already gave you two
> more bits than you can address* (actually he'd then probably stop
> mid-sentence and translate in his head from Dutch to English - which was
> always interesting when you argued with him).
Quite possible. :-)
> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not alone. Only someone like Gordon at
> the time could have overruled it, and I don't think the problems were
> foreseen, as Noel notes.
Bell in retrospect thinks that they should have realized this problem,
but it would appear they really did not consider it at the time. Or
maybe they just didn't believe what they predicted.
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> jay forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.
> As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously unconnected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDs had to agree across the union (as in NFS) or were mapped (as
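As I recall the scheme from the Newcastle papers, a remote name was
written by climbing "up" through the superroot, e.g. /../unix2/usr/doug.
A minimal sketch of that resolution (machine names hypothetical):

    # Newcastle Connection-style naming: ".." at the root climbs into
    # a superdirectory in which each machine appears as an entry.
    def resolve(path, local="unix1"):
        # "/../unix2/usr/doug" names /usr/doug on machine unix2;
        # a plain "/usr/doug" stays on the local machine.
        if path.startswith("/../"):
            machine, _, rest = path[4:].partition("/")
            return machine, "/" + rest
        return local, path

    print(resolve("/../unix2/usr/doug"))   # ('unix2', '/usr/doug')
    print(resolve("/usr/doug"))            # ('unix1', '/usr/doug')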
We lost the founder of IBM, Thomas J. Watson, on this day in 1956 (and I
have no idea whether or not he was buried 9-edge down).
Oh, and I cannot find any hard evidence that he said "Nobody ever got
fired for buying IBM"; can anyone help? I suspect that it was a media
beat-up from his PR department, i.e. "fake news"...
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On 2018-06-18 14:17, Noel Chiappa wrote:
> > The "separate" bus for the semiconductor memory is just a second Unibus
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.
Ah. You and Ron are right. I am confused.
So there were some previous PDP-11 models that did not have their memory
on the Unibus. The 11/45,50,55 accessed memory from the CPU not through
the Unibus but through the fastbus, which was a pure memory bus, as far
as I understand. You (obviously) could also have memory on the Unibus,
but that would then be slower.
Ah, and there is a jumper to tell which addresses are served by the
fastbus, and the rest then go to the Unibus. Thanks, I had missed these
details before. (To be honest, I have never actually worked on any of
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> My experience is that more often than not, it's less a failure to see
> what a successful future might bring, and often one of well '*we don't
> need to do that now/costs too much/we don't have the time*.'
Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people
who are looking at today.
By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.
In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.
> I can imagine that he did not want to spend on anything that he thought
> was wasteful.
Understandable. But see above...
The art is in finding a path that leaves the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.
A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway... lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates an I/O device register is being looked
at. Without that, Q18 devices would have either i) had to incur the cost now
of more bus address line transceivers, or ii) stopped working when the bus was
upgraded to 22 address lines.
They managed to have their cake (fairly minimal costs now) and eat it too.
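A sketch of why BBS7 saves the Q18 devices (my illustration, not a DEC
logic diagram): a device that trusts BBS7 only has to decode the low 13
address bits within the 8 KB I/O page, so the added Q22 lines never
reach it and it needs no transceivers for them.

    # BBS7 asserts when the bus master has already decided the address
    # falls in the 8 KB I/O page; the device then selects its registers
    # from the low-order bits alone and ignores bits 13..21 entirely.
    IO_PAGE_MASK = (1 << 13) - 1            # 8 KB I/O page

    def device_selected(bbs7, addr, csr_base, nwords):
        if not bbs7:
            return False                    # not an I/O-page reference
        offset = addr & IO_PAGE_MASK        # high address bits unused
        return csr_base <= offset < csr_base + 2 * nwords

    # e.g. the console DL11's four registers at I/O-page offset 0o17560:
    print(device_selected(True, 0o17777560, 0o17560, 4))   # True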
> Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
> thinking Brooks was nuts
Don't think I've heard that one?
>> the decision to remove the variable-length addresses from IPv3 and
>> substitute the 32-bit addresses of IPv4.
> I always wondered about the back story on that one.
My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.
(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)
> 32-bits seemed infinite in those days and nobody expected the network
> to scale to the size it is today and will grow to in the future
Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
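For reference, the class kludge carved the fixed 32-bit field into a few
rigid network/host splits (standard A/B/C arithmetic, sketched here, not
anything from the design notes):

    # The A/B/C "kludge": fixed splits of a 32-bit address, standing
    # in for the variable-length addresses that were ripped out.
    for cls, net_bits, host_bits in (("A", 7, 24), ("B", 14, 16), ("C", 21, 8)):
        print(f"class {cls}: {1 << net_bits:>9,} nets x "
              f"{1 << host_bits:>10,} hosts each")

    # class A:       128 nets x 16,777,216 hosts each
    # class B:    16,384 nets x     65,536 hosts each
    # class C: 2,097,152 nets x        256 hosts each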
And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
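That generality is visible right in the packet format (RFC 826): the
hardware and protocol address lengths travel in the packet, so any
combination works. A minimal sketch (the addresses are hypothetical):

    import struct

    def arp_request(sha, spa, tpa, htype=1, ptype=0x0800):
        # htype=1 is Ethernet, ptype=0x0800 is IPv4; hlen and plen are
        # taken from the addresses themselves - which is the point.
        hlen, plen = len(sha), len(spa)
        header = struct.pack("!HHBBH", htype, ptype, hlen, plen, 1)
        return header + sha + spa + b"\x00" * hlen + tpa

    pkt = arp_request(bytes(6),                  # our MAC (zeros here)
                      bytes([192, 168, 0, 1]),   # our IPv4 address
                      bytes([192, 168, 0, 2]))   # address being resolved
    print(len(pkt))   # 28 bytes for the Ethernet/IPv4 combination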
>> So, is poor vision common? All too common.
> But to be fair, you can also end up with being like DEC and often late
> to the market.
Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...
This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before
conversion makes sense.
So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.
> I think in both cases it would have allowed Alpha to be better accepted
> if DEC had shipped earlier with a few hacks, but then improved Tru64 as
> a better version was developed (*i.e.* replaced the memory system, the
> I/O system, the TTY handler, the FS, just to name a few that got
> rewritten from OSF/1 because folks thought they were 'weak').
But you can lose with that strategy too.
Multics had a lot of sub-systems re-written from the ground up over time,
and the new ones were always better (faster, more efficient) - a common
outcome when you have the experience/knowledge of the first pass.
Unfortunately, by that time it had the reputation of being 'horribly slow
and inefficient', and in a lot of ways it never kicked that.
Sigh, sometimes you can't win!
> From: "Theodore Y. Ts'o"
> To be fair, it's really easy to be wise after the fact.
Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".
> failed protocols and designs that collapsed of their own weight because
> architects added too much "maybe it will be useful in the future"
And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less has to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.
> Sometimes having architects succeed in adding their "vision" to a
> product can be the worst thing that ever happened
A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').
It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.
> The problem is it's hard to figure out in advance which is poor vision
> versus brilliant engineering to cut down the design so that it is "as
> simple as possible", but nevertheless, "as complex as necessary".
Absolutely. But it can be done. Let's look (as an example) at that
IPv3->IPv4 decision.
One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) it was going to be a failure (in
which case it probably didn't matter what was done), or ii) it was going to
be a success, in which case that 32-bit field was clearly going to be a
crippling limitation.
With that in hand, there was no excuse for that decision.
I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.