On Jan 17, 2020, at 11:25 PM, Rob Pike
<robpike@gmail.com> wrote:
I am convinced that large-scale modern compute centers would be run very differently,
with fewer or at least lesser problems, if they were treated as a single system rather
than as a bunch of single-user computers ssh'ed together.
But history had other ideas.
So I’ve clearly got a dog in this fight (https://athornton.github.io/Tucson-Python-Dec-2019/
et al., also mostly at athornton.github.io), but I feel like Kubernetes is an interesting
compromise in that space.
Admittedly, I come to this from an IBM VM background as well as a Unix/Linux one, but:
1) containerization (a necessary but not sufficient first step),
2) black magic to handle the internal networking and endpoint exposure through fairly
simple configuration on the user’s part (essential),
3) abstractions to describe resources (the current enumerated-objects quota stuff is
clunky but sufficient; the CPU/memory quota stuff is fine), and
4) an automated lifecycle manager,
taken together, give you a really nifty platform for defining complex applications via
composition (a rough sketch follows), which (IMHO) is one of the fundamental wins of Unix,
although in this case it’s really _not_ very analogous to plugging pipeline stages together.
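
To make that concrete, here is a rough sketch of the kind of composition I mean, written
against the official Kubernetes Python client (pip install kubernetes). Everything specific
in it (the hello-web name, the nginx:1.17 image, the replica count, and the CPU/memory
numbers) is made up purely for illustration; the same objects could just as well be plain
YAML manifests.

from kubernetes import client, config

# Assumes a working ~/.kube/config; inside a cluster you'd use load_incluster_config().
config.load_kube_config()

labels = {"app": "hello-web"}

# (1) The containerized workload, with (3) CPU/memory requests and limits attached.
container = client.V1Container(
    name="hello-web",
    image="nginx:1.17",
    ports=[client.V1ContainerPort(container_port=80)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "128Mi"},
        limits={"cpu": "500m", "memory": "256Mi"},
    ),
)

# (4) A Deployment acts as the automated lifecycle manager: it keeps three replicas
# running and replaces them when they die or when the spec changes.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web", labels=labels),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# (2) A Service is the black-magic part: one stable internal endpoint in front of
# whichever replicas happen to exist, from a few lines of configuration.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)

The containerized workload, its resource requests and limits, the stable network endpoint,
and the replica-keeping lifecycle manager are each separate objects; the application is
whatever you get by composing them.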
Note that _running_ Kubernetes is still a pain, so unless running data centers is your
job, just rent capacity from someone else’s managed Kubernetes service.
Adam