[TUHS] evolution of the cli

Luther Johnson via TUHS tuhs at tuhs.org
Sun Nov 2 03:41:40 AEST 2025


Pardon me if this comment rubs anyone the wrong way, but this discussion 
seems a little like trying to retrofit intelligent design on the 
evolution of species. All things evolve, in somewhat arbitrary and not 
necessarily purposeful ways, in the context of their environment and 
immediate stimuli. Later, as a particular evolution proves useful to 
survival in the long run, it's tempting to say "the species grew an 
extra arm in order to compete and survive", but of course the species 
itself, with or without any awareness of it, has no control over its 
mutations. Well, in writing software we do have a little more control,
but we usually do not have as much foresight as we will later attribute 
to those efforts with the benefit of hindsight.

I do agree with one thread in this discussion though. Any kind of energy 
injected into the system will tend to yield more of the same. So the 
emotional and aesthetic motivations of Unix's earliest contributors will 
create responses from people who care about the same sorts of things,
and derive the same kind of joy from their efforts; that's something
that I think we can count on.

On 11/01/2025 10:13 AM, A. P. Garcia via TUHS wrote:
> Clem, Marc, this is incredibly helpful context, thank you.
>
> Clem, your “Linux is still a cathedral, just with different master
> builders” hit me hard, because it immediately reframes this not as
> mythology but governance and economics. Who gets to steer. Who keeps it
> cheap enough to win.
>
> Marc, your point about make as externalized memory lit up the other half of
> my brain. The idea that half of sysadmin life was just not forgetting the
> exact incantation from yesterday. yes. I’ve lived a tiny, modern version of
> that and it still hurts.
>
> Where this all lands for me is: if Unix historically survived because we
> kept capturing practice in repeatable form (make, shell, cron, rc scripts,
> etc.), maybe the next logical step is to expose the machine itself in those
> terms.
>
> What I’ve been sketching out with a friend is a shell where the fundamental
> objects aren’t strings or ad-hoc JSON, but live views of kernel structures.
>
> For example, every process becomes a Task object that conceptually maps to
> task_struct:
>
> t = kernel.tasks.find(pid=1234)
> t.pid -> 1234
> t.comm -> "sshd"
> t.state -> "TASK_RUNNING"
> t.parent.pid -> 1
> t.children -> [ ... ]
>
> t.kill(SIGTERM)
> t.set_prio(80)
>
> Under the hood it’s not magic — it’s just reading /proc/1234/*, assembling
> a stable “Task” view from the pieces the kernel already exports, and giving
> you safe verbs that wrap the normal syscalls (kill(2), cgroup moves, etc.).
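
[A minimal, read-mostly sketch of the Task view Phil describes, for
concreteness. Linux only; the Task class and its fields are illustrative
assumptions based on his example, not an existing API.]

```python
import os
import signal

class Task:
    """Hypothetical live view assembled from /proc/<pid>/* (Linux only)."""

    def __init__(self, pid):
        self.pid = pid

    @property
    def comm(self):
        # /proc/<pid>/comm holds the short command name.
        with open(f"/proc/{self.pid}/comm") as f:
            return f.read().strip()

    def _stat_fields(self):
        # /proc/<pid>/stat is "pid (comm) state ppid ..."; comm may contain
        # spaces or parens, so split on the last ')'.
        with open(f"/proc/{self.pid}/stat") as f:
            return f.read().rsplit(")", 1)[1].split()

    @property
    def state(self):
        return self._stat_fields()[0]   # e.g. 'R', 'S', 'D'

    @property
    def parent(self):
        return Task(int(self._stat_fields()[1]))

    def kill(self, sig=signal.SIGTERM):
        # A "safe verb" that just wraps the normal kill(2) syscall.
        os.kill(self.pid, sig)

t = Task(os.getpid())
print(t.pid, t.comm, t.state, t.parent.pid)
```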
>
> Same idea for network interfaces (struct net_device), mounts / superblocks
> (struct super_block), open sockets (struct sock), etc. You’d get objects
> like NetIf, Mount, Socket, each with fields and sensible methods:
>
> iface = kernel.net.ifaces["eth0"]
> iface.mtu -> 1500
> iface.rx_bytes -> 123456789
> iface.addrs_v4 -> ["192.0.2.10/24"]
>
> iface.up()
> iface.set_mtu(9000)
> iface.add_addr("192.0.2.99/24")
>
> The goal here is not to invent some shiny abstraction layer — it’s almost
> the opposite. It’s to acknowledge, honestly, “this is how the kernel
> already thinks about the world,” and then hand that to the operator as a
> first-class, queryable vocabulary.
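
[The NetIf side can be sketched the same way over sysfs. Again Linux only
and read-only here; verbs like up() or set_mtu() would need privileged
writes to the same files or the netlink API, so they are omitted. The class
and field names follow Phil's example and are assumptions, not a real API.]

```python
import os

class NetIf:
    """Hypothetical read-only view over /sys/class/net/<name>/ (Linux only)."""

    def __init__(self, name):
        self.name = name
        self._base = f"/sys/class/net/{name}"

    def _read(self, rel):
        with open(os.path.join(self._base, rel)) as f:
            return f.read().strip()

    @property
    def mtu(self):
        return int(self._read("mtu"))

    @property
    def rx_bytes(self):
        return int(self._read("statistics/rx_bytes"))

    @property
    def operstate(self):
        return self._read("operstate")   # "up", "down", "unknown", ...

# Enumerate interfaces the way kernel.net.ifaces might.
ifaces = {name: NetIf(name) for name in os.listdir("/sys/class/net")}
lo = ifaces["lo"]
print(lo.name, lo.mtu, lo.rx_bytes, lo.operstate)
```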
>
> Why I think this lines up with both of your notes:
>
> • Clem’s control point — this becomes the control surface. You keep the
> cathedral (Linus, subsystem maintainers, etc.), but you finally give the
> folks in production a coherent, inspectable, scriptable view of that
> cathedral’s state instead of twenty tiny tools with incompatible flags.
>
> • Marc’s memory point — this becomes institutional memory. Instead of “what
> was that five-stage awk pipeline Karl wrote in ’97 to find stuck tasks?”,
> you ask the system:
>
> kernel.tasks
>    .where('$.state == "TASK_UNINTERRUPTIBLE" &&
>            $.waiting_on == "io_schedule"')
>    .group('$.cgroup')
>
> and you get structured results you can act on. That knowledge survives
> handoff.
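
[A rough approximation of that where/group pipeline is already possible by
walking /proc. In this sketch the 'D' state is how /proc spells
TASK_UNINTERRUPTIBLE; the waiting_on clause would need /proc/<pid>/wchan and
is omitted, and the function names are illustrative.]

```python
import os
from collections import defaultdict

def pids():
    # Every numeric directory under /proc is a task.
    return [int(e) for e in os.listdir("/proc") if e.isdigit()]

def task_state(pid):
    try:
        with open(f"/proc/{pid}/stat") as f:
            return f.read().rsplit(")", 1)[1].split()[0]
    except OSError:
        return None   # task exited mid-scan

def task_cgroup(pid):
    try:
        with open(f"/proc/{pid}/cgroup") as f:
            return f.readline().strip().split(":", 2)[-1]
    except OSError:
        return None

# where(state == TASK_UNINTERRUPTIBLE) ... group(cgroup)
stuck = defaultdict(list)
for pid in pids():
    if task_state(pid) == "D":
        stuck[task_cgroup(pid)].append(pid)

for cgroup, members in sorted(stuck.items()):
    print(cgroup, members)
```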
>
> The other (slightly mind-blowing) side effect is that the same interface
> could be pointed at a crash dump or a snapshot, so postmortem triage could
> look exactly like live triage.
>
> I’d love to hear if this resonates with your lived experience, or if you’d
> say “nice dream, kid, but here’s why it falls apart in the real world.”
>
> Because to me it feels like the same thread you both pulled on: we’ve
> always been trying to capture practice so we can hand it to the next person
> without losing our minds.
>
>
> Phil
>
> On Sat, Nov 1, 2025, 12:34 PM Clem Cole <clemc at ccc.com> wrote:
>
>> Marc, I agree. Like you,  I think Phillips' observations resonate, but you
>> nailed it with the drive for higher-level abstractions/being able to do
>> more as better automation of a lower-level idea or facility.
>>
>>
>> On Sat, Nov 1, 2025 at 11:57 AM Marc Donner via TUHS <tuhs at tuhs.org>
>> wrote:
>>
>>> A lot of what you say is appealing and resonates with me.
>>>
>>> Let me offer another dimension to help think about the evolution of CLI
>>> goodies: automation.
>>>
>>> In the early days of the Arpanet we got a wonderful ability - transfer a
>>> file from one computer to another without writing a tape and making a trip
>>> to the post office.  FTP was wonderful.  With administrative coherence we
>>> also got smoother integration with tools like rcp.
>>>
>>> Along with these things came rapid growth in the number of machines in a
>>> domain and the need to manage them coherently.  Now we needed to
>>> transfer a bunch of files to a bunch of different machines.  Enter
>>> rdist.  (We will leave the security challenges to the side.)  Suddenly
>>> we could establish a common system image for a large number of machines.
>>>
>>> We then discovered that not all machines should be absolutely identical,
>>> so we layered all sorts of machinery on top of rdist and its multifarious
>>> descendants so that we could keep various subtrees coherent.
>>>
>>> What we ended up with is a growing set of layered abstractions.  At the
>>> bottom were some fairly simple pieces of machinery that did this or that
>>> on the bare OS.  Next was a collection of abstractions that automated the
>>> orchestration of these underlying bits.  Some of these abstractions turned
>>> out to be seminal innovations in and of themselves and were then used in
>>> developing yet another tier of abstractions and automations on top of the
>>> second tier.
>>>
>>> As time passed we layered more and more abstractions.
>>>
>>> Of course, from time to time we also looked at the chaotic pile of
>>> abstractions and attempted to streamline and simplify them, with varying
>>> levels of success.
>>>
>>> Best,
>>>
>>> Marc
>>> =====
>>> mindthegapdialogs.com <https://www.mindthegapdialogs.com>
>>> north-fork.info <https://www.north-fork.info>
>>>
>>>
>>> On Sat, Nov 1, 2025 at 10:59 AM A. P. Garcia via TUHS <tuhs at tuhs.org>
>>> wrote:
>>>
>>>> i'm a bit reluctant to post this here lest you rip it apart, but i
>>>> guess i'm ok with that if it happens. i'm more interested in learning
>>>> the truth than i am in being right.
>>>>
>>>> The Evolution of the Command Line: From Terseness to Expression
>>>>
>>>> 1. The Classical Unix Model (1970s–80s)
>>>>
>>>> cmd -flags arguments
>>>>
>>>> The early Unix commands embodied the ideal of “do one thing well.” Each
>>>> flag was terse and mnemonic (-l, -r), and each utility was atomic. The
>>>> shell provided composition through pipes. Commands like grep, cut, and
>>>> sort combined to perform a series of operations on the same data stream.
>>>>
>>>> 2. The GNU Era (late 80s–90s)
>>>>
>>>> cmd --long-simple --long-key=value [arguments]
>>>>
>>>> The GNU Project introduced long options to help people remember what the
>>>> terse flags meant. Common options like --help and --version made tools
>>>> self-describing.
>>>>
>>>> Strengths: clarity, accessibility, scriptability
>>>> Weaknesses: creeping featurism
>>>>
>>>> 3. The “Swiss Army Knife” Model (1990s–2000s)
>>>>
>>>> Next was consolidation. Developers shipped a single binary with multiple
>>>> subcommands:
>>>>
>>>> command subcommand [options] [arguments]
>>>>
>>>> Example: openssl x509 -in cert.pem -noout -text
>>>>
>>>> Each subcommand occupied its own domain, effectively creating
>>>> namespaces. This structure defined tools like git, svn, and openssl.
>>>>
>>>> Strengths: unified packaging, logical grouping
>>>> Weaknesses: internal inconsistency; subcommands evolved unevenly.
>>>>
>>>> 4. The Verb-Oriented CLI (2000s–Present)
>>>>
>>>> As CLIs matured, their design grew more linguistic. Tools like Docker,
>>>> Git, and Kubernetes introduced verb-oriented hierarchies:
>>>>
>>>> tool verb [object] [flags]
>>>>
>>>> Example: docker run -it ubuntu bash
>>>>
>>>> This mapped naturally to the mental model of performing actions:
>>>> “Docker, run this container.” “Git, commit this change.” Frameworks
>>>> like Go’s Cobra or Python’s Click standardized the pattern.
>>>>
>>>> Strengths: extensible, discoverable, self-documenting
>>>> Weaknesses: verbosity and conceptual overhead. A CLI became an
>>>> ecosystem.
>>>>
>>>> 5. The Sententious Model
>>>>
>>>> When a domain grows too complex for neat hierarchies, a single command
>>>> becomes a compact expression of a workflow. Consider zfs, an elegant
>>>> example of declarative-imperative blending:
>>>>
>>>> zfs create -o compression=lz4 tank/data
>>>>
>>>> It reads almost like a sentence: “Create a new dataset called tank/data
>>>> with compression enabled using LZ4.” Each option plays a grammatical role:
>>>>
>>>> create — the verb
>>>> -o compression=lz4 — a property or adverbial modifier
>>>> tank/data — the object being acted upon
>>>>
>>>> One fluent expression defines what and how. The syntax is a kind of
>>>> expressive and efficient shell-native DSL.
>>>>
>>>> This phase of CLI design is baroque: not minimalist, not verbose, but
>>>> literary in its compression of meaning.
>>>>
>>>> 6. The Configuration-Driven CLI (Modern Era)
>>>>
>>>> Example: kubectl apply -f deployment.yaml
>>>>
>>>> Today’s tools often speak in declarative terms. Rather than specifying
>>>> every step, you provide a desired state in a file, and the CLI enacts it.
>>>>
>>>> Strengths: scales elegantly in automation, integrates with APIs
>>>> Weaknesses: less immediacy; the human feedback loop grows distant.
>>>>
>>>> Across half a century of design, the command line has evolved from terse
>>>> incantations to expressive languages of intent.
>>>>


