In Brailsford's YouTube series on Computerphile (something I came across
through BWK's interview with Brailsford),
Episode 86
<https://www.youtube.com/watch?v=ixJCo0cyAuA&list=PLUTypj9XuPp4YBaHucPvr-zisHwfEGIEq&index=87>
is about "Where Did Bytes Come From?". He claims that if you wanted to do
decimal arithmetic on a binary machine, you'd want 10 digits of accuracy
to capture the 10-digit log tables that were then popular. Ten decimal
digits take around 33 to 36 bits, so words ended up that size, 36 bits,
or half that, 18 bits. (Brailsford's lectures are fabulous, by the way,
and likely to appeal to TUHS types.)
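
A quick back-of-the-envelope check of that bit count (my own sketch, not
something from the video):

    import math

    # Bits needed to carry 10 decimal digits of precision:
    bits = 10 * math.log2(10)             # about 33.22 bits
    print(round(bits, 2))                 # 33.22

    # A 36-bit word holds any 10-digit decimal number with room to spare:
    print(2**36, 10**10, 2**36 > 10**10)  # 68719476736 10000000000 True
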
I like that explanation better than the story I heard that the IBM 709
series had 36-bit words because Arthur Samuel, then at IBM, needed 32
bits to identify the playable squares on a checkerboard, plus a few bits
for color and for being kinged (if that's the proper term for getting
across the board and gaining the ability to move toward either side).
Samuel was famous for writing a checker-playing program that played
champion-quality checkers.
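
For what it's worth, the arithmetic of that story does work out; here is a
rough bitboard illustration (my own, not Samuel's actual layout) showing
that 32 square bits plus a couple of flag bits fit inside a 36-bit word:

    # Hypothetical layout, only to show 32 + a few flag bits fit in 36 bits:
    #   bits 0..31 -> one bit per playable square (a checkerboard has 32)
    #   bit 32     -> a color flag, bit 33 -> a "kinged" flag, and so on
    occupied = 0
    square = 17                      # playable squares numbered 0..31
    occupied |= 1 << square          # mark square 17 occupied
    word = occupied | (1 << 32)      # tuck a color flag above the board bits
    print(word < 2**36)              # True -- it all fits in a 36-bit word
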
On Thu, Sep 8, 2022 at 2:02 PM Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
On Sep 8, 2022, at 9:51 AM, Jon Steinhart <jon(a)fourwinds.com> wrote:
One of those questions for which there is no search engine incantation.
Whatever it is, it's really old. I found it used, not quite in the modern
sense, in "High-Speed Computing Devices", by ERA, 1950. It was used, in the
modern sense, in "Planning a Computer System", Buchholz, 1962.
Noel