On Wed, Jan 16, 2019 at 09:29:50AM -0800, Larry McVoy wrote:
> I have a different view, having been at Sun when Sun was eating DEC's
> lunch. Sun made stuff that was just as good as what DEC built but they
> were cheaper. DEC couldn't adapt to decent machines that didn't cost
> a big pile.
History repeats itself: Sun couldn't wean itself off $20,000 workstations
when you could get an almost-as-fast PC for a quarter or less of that price.
This is applicable in the Open Source world as well, and there it's a
much clearer example of the "Worse is Better" debate: "New Jersey
Style" vs. "The MIT Approach"/"The Right Thing".
Example: Linux had PCMCIA support before the *BSD variants, and even
once the BSDs had PCMCIA support, the PCMCIA card (most commonly a
WiFi card) had to be plugged in when the system was booted. Whereas
for years before the *BSDs, Linux had hot-plug PCMCIA support --- but if
you ejected the card, there was a 30-50% chance the system would crash.
(Those odds could be lowered if you carefully shut down the network,
waited until all open or pending TCP connections that couldn't be
closed had timed out, etc.) Eventually, the *BSDs pulled ahead of
Linux with rock-solid PC Card support, and it took a lot longer for
Linux's hot-eject support to become fully stable.
For most users, of course, hot-pluggable PCMCIA was far more important
than the stability problems when you ejected the card, especially
since it usually worked (particularly if you were careful). And if
you have more users, you are much more likely to get bug reports, and
more likely to recruit developers (and in the open source world,
developers are more likely to stick around if you are willing to
accept less-than-perfect patches, rather than insisting that they be
picture-perfect before they can be committed). There's a downside to
this approach, of course: it may take a lot longer to get the code
cleaned up after the 1.0 "launch" of the feature.