> Does anyone know why the computer industry wound up standardising on
8-bit bytes?

I give the credit to the IBM Stretch, aka 7030, and the Harvest attachment
they made for NSA. For autocorrelation on bit streams--a fundamental need
in codebreaking--the hardware was bit-addressable. But that was overkill
for other supercomputing needs, so there was coarse-grained addressability
too. Address conversion among the various operand sizes made powers of two
the natural choice, lest the conversion entail division. The Stretch project also
coined the felicitous word "byte" for the operand size suitable for character 
sets of the era.
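
A toy illustration of that arithmetic--nothing Stretch-specific, just the
shift-versus-divide point: with power-of-two operand sizes, turning a bit
address into a byte or word address is a shift, while a 6-bit character
size would force a real division.

    /* Illustrative only: converting a bit address to an operand index. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bitaddr = 123456;

        uint64_t byteaddr = bitaddr >> 3;  /* 8-bit bytes:  shift by 3  */
        uint64_t wordaddr = bitaddr >> 5;  /* 32-bit words: shift by 5  */
        uint64_t sixbit   = bitaddr / 6;   /* 6-bit chars:  true divide */

        printf("%" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
               byteaddr, wordaddr, sixbit);
        return 0;
    }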

With the 360 series, IBM fully committed to multiple operand sizes. DEC
followed suit, and C naturalized the idea into programmers' working
vocabulary.
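
The widths are only implementation-defined, but here is a quick sketch of
how that vocabulary looks on a typical LP64 Unix today:

    /* Typical LP64 widths; the standard guarantees only minimums. */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("char  %zu bits\n", CHAR_BIT * sizeof(char));   /* byte       */
        printf("short %zu bits\n", CHAR_BIT * sizeof(short));  /* halfword   */
        printf("int   %zu bits\n", CHAR_BIT * sizeof(int));    /* word       */
        printf("long  %zu bits\n", CHAR_BIT * sizeof(long));   /* doubleword */
        return 0;
    }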

The power-of-two word length had the side effect of making 32 bits the
smallest reasonable size for floating point. Someone on the
Apollo project once noted that the 36-bit word on previous IBM
equipment was just adequate for planning moon orbits; they'd
have had to use double precision if the 700-series machines had
been 32-bit. And double precision took 10 times as long. That
observation turned out to be prescient: double has become the
norm. 
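
Rough numbers behind that anecdote (my arithmetic, assuming the 7090-style
float carried a 27-bit fraction and crediting a 32-bit single with a 24-bit
significand): a step in the last place at the roughly 384,400 km Earth-Moon
distance comes to a few metres versus a few tens of metres.

    /* Back-of-envelope resolution at lunar distance; illustrative only. */
    #include <stdio.h>

    int main(void)
    {
        double dist  = 3.844e8;           /* metres, Earth to Moon  */
        double ulp27 = dist / (1 << 27);  /* 36-bit word: ~2.9 m    */
        double ulp24 = dist / (1 << 24);  /* 32-bit word: ~22.9 m   */

        printf("27-bit fraction: %.1f m\n", ulp27);
        printf("24-bit fraction: %.1f m\n", ulp24);
        return 0;
    }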

Doug