I chanced upon a brochure describing the Perkin-Elmer Series 3200
(previously Interdata, later Concurrent Computer Corporation) Sort/Merge
II utility [1]. It is instructive to compare its design against that of
the contemporary Unix sort(1) program [2].
- Sort/Merge II appears to be marketed as a separate product (P/N
S90-408), whereas sort(1) was, and is, an integral part of Unix, used
throughout the system.
- Sort/Merge II provides interactive and batch command input modes;
sort(1) relies on the shell to support both usages.
- Sort/Merge II appears to be able to sort binary files as well; sort(1)
can only handle text.
- Sort/Merge II can recover from run-time errors by interactively
prompting for user corrections and additional files. In Unix this is
delegated to shell scripts.
- Sort/Merge II has built-in support for tape handling and blocking;
sort(1) relies on pipes from/to dd(1) for this.
- Sort/Merge II supports user-coded decision subroutines written in
FORTRAN, COBOL, or CAL. Sort(1) doesn't have such support to this day.
One could construct a synthetic key with awk(1) if needed, as sketched
after this list.
- Sort/Merge II can automatically "allocate" its temporary file. For
sort(1) file allocation is handled by the Unix kernel.
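To make the contrast concrete, here is a minimal sketch of how two of
the bullets above (tape blocking via dd, synthetic keys via awk) compose
in the Unix style; the device names, block size, and key choice are
hypothetical, and the tape is assumed to hold newline-terminated text
records:

    # Deblock the tape with dd, prefix each record with a synthetic
    # sort key built by awk (field 3 uppercased, say), sort on that
    # key, strip the key off, and block the result back onto tape.
    dd if=/dev/rmt0 bs=800 |
    awk '{ print toupper($3) "\t" $0 }' |
    sort |
    cut -f2- |
    dd of=/dev/rmt1 bs=800

Each stage is independently replaceable: a different key is one awk
change away, and dropping the dd stages reads and writes ordinary disk
files instead.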
To me this list is a real-life demonstration of the difference between
the then-prevalent approach of thoughtlessly agglomerating features
into a monolith and Unix's careful separation of concerns and
modularization via small tools. The same contrast appears in a more
contrived setting in J. Bentley's CACM Programming Pearls column where
Doug McIlroy critiques a literate program by Don Knuth that counts
unique words [3]. (I slightly suspect that the initial program
specification was a trap set up for Knuth.)
I also think that the design of Perkin-Elmer's Sort/Merge II shows the
influence of salespeople forcing developers to tack on whatever features
were required by important customers. Maybe the clean design of Unix
owes a lot to AT&T's operation under the 1956 consent decree that
prevented it from entering the computer market. This may have shielded
the system's design from unhealthy market pressures during its critical
gestation years.
[1] https://bitsavers.computerhistory.org/pdf/interdata/32bit/brochures/Sort_Me…
[2] https://s3.amazonaws.com/plan9-bell-labs/7thEdMan/v7vol1.pdf#page=166
[3] https://doi.org/10.1145/5948.315654
Diomidis - https://www.spinellis.gr
> To me this list is a real-life demonstration of the difference between
> the then-prevalent approach of thoughtlessly agglomerating features
> into a monolith and Unix's careful separation of concerns and
> modularization via small tools. The same contrast appears in a more
> contrived setting in J. Bentley's CACM Programming Pearls column where
> Doug McIlroy critiques a literate program by Don Knuth that counts
> unique words [3]. (I slightly suspect that the initial program
> specification was a trap set up for Knuth.)
It wasn't a setup. Although Jon's introduction seems to imply that he had
invited both Don and me to participate, I actually was moved to write the
critique when I proofread the 2-author column, as I did for many of Jon's
Programming Pearls. That led to the 3-author arrangement. Knuth and
I are still friends; he even reprinted the critique. It is also memorably
depicted at https://comic.browserling.com/tag/douglas-mcilroy.
Doug
> I often repeat a throwaway sentence that UUCP was Lesk,
> building a bug-fix distribution mechanism.
> Am I completely wrong? I am sure Mike said this to me in the mid-80s.
That was an important motivating factor, but Mike also had an
unerring anticipatory sense of public "need". Thus his programs
spread like wildfire despite their bugs. UUCP itself is the premier
example. Its popularity impelled its inclusion in v7 despite its
woeful disregard for security.
> Does anyone have [Robert Morris's UUCP CSTR]? Doug?
Not I.
Doug
Robert's uucp was in use in the Research world when I arrived
in late summer of 1984. It had an interesting and sensible
structure; in particular uucico was split into two programs,
ci and co.
One of the first things I was asked to do when I got there was
to get Honey Danber working as a replacement. I don't remember
why that was preferred; possibly just because Robert was a
summer student, not a full-fledged member of the lab, and we
didn't want something as important to us as uucp to rely on
orphaned code.
Honey Danber was in place by the time we made the V8 tape,
toward the end of 1984.
Norman Wilson
Toronto ON
The sound situation in the UNIX world has always felt particularly
fragmentary to me, with OSS offering some glimmer of hope but faltering
under the long shadow of ALSA, and a hodgepodge of PCM and other
low-level interfaces littered about other offerings.
Given AT&T's involvement with the development of just about everything
"sound over wires" for decades by the time UNIX came along, one would
suspect AT&T would be quite invested in standardizing interfaces for
computers interacting with audio signals on copper wire. Indeed, much of
the ESS R&D involved taking in analog telephone signals, digitizing
them, and then acting on those digitized results before converting back
to analog to send to the other end.
What has me curious is whether there were any efforts in Bell Labs,
prior to other industry players having their hands on the steering
wheel, to establish an abstract UNIX interface pattern for interacting
with streams of converted audio signal. Of course modern formats didn't
exist, but the general idea of PCM was well established, and concepts
like sampling rates and bit depths could be used in calculations to
interpret and manipulate digitized audio streams.
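For instance, a handful of assumed parameters suffice to interpret a
raw, headerless PCM stream. A back-of-envelope sketch in shell (the
file name is made up, and the figures are the 8 kHz, 8-bit mu-law
telephony standard):

    rate=8000   # samples per second
    bits=8      # bits per sample (mu-law companded)
    chans=1     # mono
    bytes=$(wc -c < call.pcm)
    awk -v b="$bytes" -v r="$rate" -v w="$bits" -v c="$chans" 'BEGIN {
        bps = r * w * c                  # 64000 bit/s, the DS0 rate
        printf "bit rate: %d bit/s\n", bps
        printf "duration: %.2f s\n", b * 8 / bps
    }'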
Any recollections? Was the landscape of signal processing solutions just so
particular that trying to create a centralized interface didn't make sense at
the time? Or was it a simple matter of priorities, with things like language
development and system design taking center stage, leaving a dearth of resources
to direct towards these sorts of matters? Was there ever a chance of seeing,
say, the 5ESS handling of PCM, extended out to non-switching applications, or
was that stuff firmly siloed over in the switching groups, having no influence
on signal processing outside?
- Matt G.
I mentioned a few weeks ago that I was writing this invited paper for an
upcoming 50-year anniversary of the first issue of IEEE Transactions on
Software Engineering.
The paper has now been accepted for publication and here's a preprint
version of it:
https://www.mrochkind.com/mrochkind/docs/SCCSretro2.pdf
Marc
> Was the landscape of signal processing solutions just so
> particular that trying to create a centralized interface didn't make
> sense at the time? Or was it a simple matter of priorities, with
> things like language development and system design taking center
> stage, leaving a dearth of resources to direct towards these sorts of
> matters? Was there ever a chance of seeing, say, the 5ESS handling of
> PCM, extended out to non-switching applications,
In the early days of Unix there were intimate ties between CS Research and
Visual and Acoustic Research. V&A were Bell Labs' pioneer minicomputer
users because they needed interactive access to graphics and audio, which
would have been prohibitively expensive on the Labs' pre-timesharing
mainframes. Also they generally had EE backgrounds, so were comfortable
working hands-on with hardware, whereas CS had been largely spun off from
the math department.
Ed David, who led Bell Labs into Multics, without which Unix might not have
happened, had transferred from V&A to CS. So had Vic Vyssotsky and Elliot
Pinson (Dennis's department head and coauthor with me of the introduction
to the 1978 BSTJ Unix issue). John Kelly, a brilliant transferee who died
all too young pre-Unix, had collaborated with Vic on BLODI, the first
dataflow language, which took digital signal processing off breadboards and
into computers. One central member of the Unix lab, Lee McMahon, never left
V&A.
The PDP-7 of Unix v0 was a hand-me-down from Pinson's time in V&A. And the
PDP-11 of v1 was supported by a year-end fund surplus from there.
People came from V&A to CS because their interests had drifted from signal
processing to computing per se. With hindsight, one can see that CS
recruiting--even when it drew on engineering or physics
talent--concentrated on similarly motivated people. There was dabbling in
acoustics, such as my "speak" text-to-speech program. And there were
workers dedicated to a few specialties, such as Henry Baird in optical
character recognition. But unlike text processing, say, these fields never
reached a critical mass of support that might have stimulated a wider array
of I/O drivers or full toolkits to use them.
Meanwhile, in V&A Research linguists adopted Unix, but most others
continued to roll their own one-off platforms. It's interesting to
speculate whether the lack of audio interfaces in Unix was a cause or a
result of this do-it-yourself impulse.
Doug